\section{Introduction}
Multi-task learning (MTL) and semi-supervised learning are both successful paradigms for learning in scenarios with limited labelled data and have in recent years been applied to almost all areas of NLP. Applications of MTL in NLP, for example, include partial parsing \cite{Soegaard:Goldberg:16}, text normalisation \cite{Bollman:ea:17}, neural machine translation \cite{Luong:ea:16}, and keyphrase boundary classification \citep{Augenstein2017KBC}.
Contemporary work in MTL for NLP typically focuses on learning representations that are useful across tasks, often through hard parameter sharing of hidden layers of neural networks \cite{Collobert2011,Soegaard:Goldberg:16}. If tasks share optimal hypothesis classes at the level of these representations, MTL leads to improvements \cite{Baxter:00}. However, while sharing hidden layers of neural networks is an effective regulariser \cite{Soegaard:Goldberg:16}, we potentially {\em lose synergies between the classification functions} trained to associate these representations with class labels. This paper sets out to build an architecture in which such synergies are exploited, with an application to pairwise sequence classification tasks. In doing so, we achieve a new state of the art on topic-based sentiment analysis.
For many NLP tasks, disparate label sets are weakly correlated, e.g. part-of-speech tags correlate with dependencies \cite{Hashimoto2017}, sentiment correlates with emotion \cite{Felbo2017,EisnerEmoji}, etc. We thus propose to induce a joint label embedding space (visualised in Figure \ref{fig:label_embeddings}) using a Label Embedding Layer that allows us to model these relationships, which we show helps with learning.
In addition, for tasks where labels are closely related, we should be able to not only model their relationship, but also to directly estimate the corresponding label of the target task based on auxiliary predictions. To this end, we propose to train a Label Transfer Network (LTN) jointly with the model to produce pseudo-labels across tasks.
The LTN can be used to label unlabelled and auxiliary task data by utilising the `dark knowledge' \cite{Hinton2015} contained in auxiliary model predictions. This pseudo-labelled data is then incorporated into the model via semi-supervised learning, leading to a natural combination of multi-task learning and semi-supervised learning. We additionally augment the LTN with data-specific diversity features \cite{ruder2017emnlp} that aid in learning.
\paragraph{Contributions} Our contributions are: a) We model the relationships between labels by inducing a joint label space for multi-task learning. b) We propose a Label Transfer Network that learns to transfer labels between tasks and propose to use semi-supervised learning to leverage them for training. c) We evaluate MTL approaches on a variety of classification tasks and shed new light on settings where multi-task learning works. d) We perform an extensive ablation study of our model. e) We report state-of-the-art performance on topic-based sentiment analysis.
\section{Related work}
\paragraph{Learning task similarities} Existing approaches for learning similarities between tasks enforce a clustering of tasks \cite{Evgeniou2005,Jacob2009}, induce a shared prior \cite{Yu2005,Xue2007,DaumeIII2009}, or learn a grouping \cite{Kang2011,Kumar2012}. These approaches focus on homogeneous tasks and employ linear or Bayesian models. They can thus not be directly applied to our setting with tasks using disparate label sets.
\paragraph{Multi-task learning with neural networks} Recent work in multi-task learning goes beyond hard parameter sharing~\citep{Caruana:93} and considers different sharing structures, e.g. sharing only at lower layers \citep{Soegaard:Goldberg:16} or inducing private and shared subspaces \citep{Liu2017,ruder2017sluice}. These approaches, however, cannot take into account relationships between labels that may aid in learning. Another related direction is to train on disparate annotations of the same task \cite{chen-zhang-liu:2016:EMNLP2016,Peng2017}. In contrast, the different nature of our tasks requires modelling their label spaces.
\paragraph{Semi-supervised learning} There exists a wide range of semi-supervised learning algorithms, e.g., self-training, co-training, tri-training, EM, and combinations thereof, several of which have also been used in NLP. Our approach is probably most closely related to an algorithm called {\em co-forest} \cite{Li:Zhou:07}. In co-forest, like here, each learner is improved with unlabeled instances labeled by the ensemble consisting of all the other learners.
Note also that several researchers have proposed using auxiliary tasks that are unsupervised \cite{Plank2016a,Rei2017}, which also yields a form of semi-supervised learning.
\paragraph{Label transformations} The idea of manually mapping between label sets or learning such a mapping to facilitate transfer is not new. \newcite{Zhang:ea:12} use distributional information to map from a language-specific tagset to a tagset used for other languages, in order to facilitate cross-lingual transfer. More related to this work, \newcite{Kim:ea:15} use canonical correlation analysis to transfer between tasks with disparate label spaces. There has also been work on label transformations in the context of multi-label classification problems \cite{Yeh:ea:17}.
\section{Multi-task learning with disparate label spaces}
\subsection{Problem definition}
In our multi-task learning scenario, we have access to labelled datasets for $T$ tasks $\mathcal{T}_1, \ldots, \mathcal{T}_T$ at training time with a target task $\mathcal{T}_T$ that we particularly care about. The training dataset for task $\mathcal{T}_i$ consists of $N_i$ examples $X_{\mathcal{T}_i} = \{x_1^{\mathcal{T}_i}, \ldots, x_{N_i}^{\mathcal{T}_i}\}$ and their labels $Y_{\mathcal{T}_i} = \{\mathbf{y}_1^{\mathcal{T}_i}, \ldots, \mathbf{y}_{N_i}^{\mathcal{T}_i}\}$.
Our base model is a deep neural network that performs classic hard parameter sharing \cite{Caruana:93}: It shares its parameters across tasks and has task-specific softmax output layers, which output a probability distribution $\mathbf{p}^{\mathcal{T}_i}$ for task $\mathcal{T}_i$ according to the following equation:
\begin{equation}
\mathbf{p}^{\mathcal{T}_i} = \mathrm{softmax}(\mathbf{W}^{\mathcal{T}_i}\mathbf{h} + \mathbf{b}^{\mathcal{T}_i})
\end{equation}
where $\mathrm{softmax}(\mathbf{x}) = e^\mathbf{x} / \sum^{ \|\mathbf{x}\| }_{j=1} e^{\mathbf{x}_j}$, $\mathbf{W}^{\mathcal{T}_i} \in \mathbb{R}^{L_i \times h}$ and $\mathbf{b}^{\mathcal{T}_i} \in \mathbb{R}^{L_i}$ are the weight matrix and bias term of the output layer of task $\mathcal{T}_i$ respectively, $\mathbf{h} \in \mathbb{R}^h$ is the jointly learned hidden representation, $L_i$ is the number of labels for task $\mathcal{T}_i$, and $h$ is the dimensionality of $\mathbf{h}$.
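As a concrete illustration, the hard-sharing output layers can be sketched in a few lines of NumPy. The dimensions, random weights, and three-task setup below are purely illustrative assumptions, not taken from the paper's TensorFlow implementation:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; the result is unchanged.
    e = np.exp(x - x.max())
    return e / e.sum()

h_dim = 4                        # dimensionality of the shared representation h
num_labels = [2, 3, 5]           # L_i for three hypothetical tasks

rng = np.random.default_rng(0)
h = rng.normal(size=h_dim)       # shared hidden representation

# One (W, b) output layer per task; shapes follow W in R^{L_i x h}, b in R^{L_i}.
heads = [(rng.normal(size=(L, h_dim)), np.zeros(L)) for L in num_labels]

# Each head yields a probability distribution over its own label set.
probs = [softmax(W @ h + b) for W, b in heads]
```

Each entry of `probs` sums to one and has the label cardinality of its task; only the parameters producing $\mathbf{h}$ are shared.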
The MTL model is then trained to minimise the sum of the individual task losses:
\begin{equation} \label{eq:mtl_loss}
\mathcal{L} = \lambda_1 \mathcal{L}_1 + \ldots + \lambda_T \mathcal{L}_T
\end{equation}
where $\mathcal{L}_i$ is the negative log-likelihood objective $\mathcal{L}_i = \mathcal{H}(\mathbf{p}^{\mathcal{T}_i},\mathbf{y}^{\mathcal{T}_i}) = - \frac{1}{N} \sum_n \sum_j \mathbf{y}_{n,j}^{\mathcal{T}_i} \log \mathbf{p}_{n,j}^{\mathcal{T}_i} $ and $\lambda_i$ is a parameter that determines the weight of task $\mathcal{T}_i$. In practice, we apply the same weight to all tasks. We show the full set-up in Figure \ref{fig:mtl}.
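A minimal sketch of this weighted loss, assuming toy predictions and one-hot targets for two hypothetical tasks (all values are illustrative):

```python
import numpy as np

def cross_entropy(p, y):
    # Negative log-likelihood of one-hot targets y under predictions p,
    # averaged over the examples of the task.
    return -np.mean(np.sum(y * np.log(p), axis=1))

# Toy predicted distributions and one-hot labels for two tasks.
p1 = np.array([[0.7, 0.3], [0.2, 0.8]]); y1 = np.array([[1, 0], [0, 1]])
p2 = np.array([[0.6, 0.3, 0.1]]);        y2 = np.array([[1, 0, 0]])

lambdas = [1.0, 1.0]             # the paper weights all tasks equally
losses = [cross_entropy(p1, y1), cross_entropy(p2, y2)]
total = sum(w * loss for w, loss in zip(lambdas, losses))
```

With equal weights the MTL objective reduces to the plain sum of the per-task cross-entropies.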
\begin{figure*}[!htb]
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[height=2.2in]{multi-task_learning_cropped.pdf}
\caption{Multi-task learning} \label{fig:mtl}
\end{subfigure}%
\hspace*{0.4cm}
\begin{subfigure}{.21\linewidth}
\centering
\includegraphics[height=2.2in]{label_embedding_layer_cropped.pdf}
\caption{MTL with LEL} \label{fig:lel}
\end{subfigure}
\hspace*{0.4cm}
\begin{subfigure}{.45\linewidth}
\centering
\includegraphics[height=2.2in]{label_transfer_network_cropped.pdf}
\caption{Semi-supervised MTL with LTN} \label{fig:semi-supervised_mtl}
\end{subfigure}
\caption{a) Multi-task learning (MTL) with hard parameter sharing and 3 tasks $\mathcal{T}_{1-3}$ and $L_{1-3}$ labels per task. A shared representation $\mathbf{h}$ is used as input to task-specific softmax layers, which optimise cross-entropy losses $\mathcal{L}_{1-3}$. b) MTL with the Label Embedding Layer (LEL) embeds task labels $\mathbf{l}_{1-L_i}^{\mathcal{T}_{1-3}}$ in a joint embedding space and uses these for prediction with a label compatibility function. c) Semi-supervised MTL with the Label Transfer Network (LTN) in addition optimises an unsupervised loss $\mathcal{L}_{pseudo}$ over pseudo-labels $\mathbf{z}^{\mathcal{T}_T}$ on auxiliary/unlabelled data. The pseudo-labels $\mathbf{z}^{\mathcal{T}_T}$ are produced by the LTN for the main task $\mathcal{T}_T$ using the concatenation of auxiliary task label output embeddings $[\mathbf{o}_{i-1},\mathbf{o}_i, \mathbf{o}_{i+1}]$ as input.}
\label{fig:training-procedures}
\end{figure*}
\subsection{Label Embedding Layer}
In order to learn the relationships between labels, we propose a Label Embedding Layer (LEL) that embeds the labels of all tasks in a joint space. Instead of training separate softmax output layers as above, we introduce a label compatibility function $c(\cdot, \cdot)$ that measures how similar a label with embedding $\mathbf{l}$ is to the hidden representation $\mathbf{h}$:
\begin{equation}
c(\mathbf{l},\mathbf{h}) = \mathbf{l} \cdot \mathbf{h}
\end{equation}
where $\cdot$ is the dot product. This is similar to the Universal Schema Latent Feature Model introduced by \newcite{Riedel2013}. In contrast to other models that use the dot product in the objective function, we do not have to rely on negative sampling and a hinge loss \cite{Collobert2008} as negative instances (labels) are known. For efficiency purposes, we use matrix multiplication instead of a single dot product and softmax instead of sigmoid activations:
\begin{equation}
\mathbf{p} = \mathrm{softmax}(\mathbf{L} \mathbf{h})
\end{equation}
where $\mathbf{L} \in \mathbb{R}^{(\sum_i L_i) \times l}$ is the label embedding matrix for all tasks and $l$ is the dimensionality of the label embeddings. In practice, we set $l$ to the hidden dimensionality $h$. We use padding if $l < h$. We apply a task-specific mask to $\mathbf{L}$ in order to obtain a task-specific probability distribution $\mathbf{p}^{\mathcal{T}_i}$. The LEL is shared across all tasks, which allows us to learn the relationships between the labels in the joint embedding space. We show MTL with the LEL in Figure \ref{fig:lel}.
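The masked label-compatibility computation can be sketched as follows; here the task-specific mask is realised by slicing out the rows of $\mathbf{L}$ belonging to each task, and all dimensions and weights are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

l_dim = 4                        # label embedding size (set to h in the paper)
num_labels = [2, 3]              # L_i for two hypothetical tasks

rng = np.random.default_rng(0)
L = rng.normal(size=(sum(num_labels), l_dim))  # joint label embedding matrix
h = rng.normal(size=l_dim)                     # hidden representation

scores = L @ h                   # compatibility c(l, h) for every label of every task

def task_probs(scores, task):
    # Task-specific "mask": keep only the score rows belonging to this task,
    # then normalise over that task's labels.
    start = sum(num_labels[:task])
    return softmax(scores[start:start + num_labels[task]])

p_task0 = task_probs(scores, 0)
p_task1 = task_probs(scores, 1)
```

Because the same matrix `L` scores every task's labels against `h`, gradients from all tasks shape a single joint label space.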
\subsection{Label Transfer Network}
The LEL allows us to learn the relationships between labels. In order to make use of these relationships, we would like to leverage the predictions of our auxiliary tasks to estimate a label for the target task. To this end, we introduce the Label Transfer Network (LTN). This network takes the auxiliary task outputs as input. In particular, we define the output label embedding $\mathbf{o}_i$ of task $\mathcal{T}_i$ as the sum of the task's label embeddings $\mathbf{l}_j$ weighted with their probability $\mathbf{p}^{\mathcal{T}_i}_j$:
\begin{equation}
\mathbf{o}_i = \sum^{L_i}_{j=1} \mathbf{p}^{\mathcal{T}_i}_j \mathbf{l}_j
\end{equation}
The label embeddings $\mathbf{l}$ encode general relationship between labels, while the model's probability distribution $\mathbf{p}^{\mathcal{T}_i}$ over its predictions encodes fine-grained information useful for learning \cite{Hinton2015}. The LTN is trained on labelled target task data. For each example, the corresponding label output embeddings of the auxiliary tasks are fed into a multi-layer perceptron (MLP), which is trained with a negative log-likelihood objective $\mathcal{L}_\mathrm{LTN}$ to produce a pseudo-label $\mathbf{z}^{\mathcal{T}_T}$ for the target task $\mathcal{T}_{T}$:
\begin{equation}
\mathrm{LTN}_T = \mathrm{MLP}([\mathbf{o}_1, \ldots, \mathbf{o}_{T-1}])
\end{equation}
where $[\cdot, \cdot]$ designates concatenation. The mapping of the tasks in the LTN yields another signal that can be useful for optimisation and act as a regulariser. The LTN can also be seen as a mixture-of-experts layer \cite{Jacobs1991} where the experts are the auxiliary task models. As the label embeddings are learned jointly with the main model, the LTN is more sensitive to the relationships between labels than a separately learned mixture-of-experts model that only relies on the experts' output distributions. As such, the LTN can be directly used to produce predictions on unseen data.
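A sketch of the output label embeddings and the LTN input, with a single random layer standing in for the trained MLP (all sizes and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
l_dim = 4
num_labels = [2, 3]              # label counts of two hypothetical auxiliary tasks

# Label embeddings and predicted distributions for each auxiliary task.
embs  = [rng.normal(size=(L, l_dim)) for L in num_labels]
probs = [np.array([0.9, 0.1]), np.array([0.2, 0.5, 0.3])]

# Output label embedding o_i: probability-weighted sum of the task's label embeddings.
outputs = [p @ E for p, E in zip(probs, embs)]

# The LTN feeds the concatenation [o_1, ..., o_{T-1}] into an MLP; a random
# linear layer plus softmax stands in for the trained network here.
ltn_input = np.concatenate(outputs)
W = rng.normal(size=(3, ltn_input.size))  # 3 = hypothetical target label count
z = np.exp(W @ ltn_input); z /= z.sum()   # pseudo-label distribution for the target task
```

The soft weighting by `probs` is what preserves the `dark knowledge' in the auxiliary distributions, rather than committing to each task's argmax label.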
\subsection{Semi-supervised MTL}
The downside of the LTN is that it requires additional parameters and relies on the predictions of the auxiliary models, which impacts the runtime during testing. Instead of using the LTN for prediction directly, we can use it to provide pseudo-labels for unlabelled or auxiliary task data by utilising auxiliary predictions for semi-supervised learning.
We train the target task model on the pseudo-labelled data to minimise the squared error between the model predictions $\mathbf{p}^{\mathcal{T}_T}$ and the pseudo-labels $\mathbf{z}^{\mathcal{T}_T}$ produced by the LTN:
\begin{equation}
\mathcal{L}_{pseudo} = MSE(\mathbf{p}^{\mathcal{T}_T}, \mathbf{z}^{\mathcal{T}_T}) = ||\mathbf{p}^{\mathcal{T}_T} - \mathbf{z}^{\mathcal{T}_T}|| ^ 2
\end{equation}
We add this loss term to the MTL loss in Equation \ref{eq:mtl_loss}.
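The pseudo-label loss itself is a plain squared error between two distributions; a minimal sketch with illustrative values:

```python
import numpy as np

def pseudo_loss(p, z):
    # Squared error between main-model predictions p and LTN pseudo-labels z.
    return np.sum((p - z) ** 2)

p = np.array([0.7, 0.2, 0.1])    # main-model distribution on an unlabelled example
z = np.array([0.6, 0.3, 0.1])    # pseudo-label distribution from the LTN
loss = pseudo_loss(p, z)
```

Here the loss is approximately $0.02$; this scalar is simply added to the MTL objective alongside the supervised task losses.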
As the LTN is learned together with the MTL model, pseudo-labels produced early during training will likely not be helpful as they are based on unreliable auxiliary predictions. For this reason, we first train the base MTL model until convergence and then augment it with the LTN.
We show the full semi-supervised learning procedure in Figure \ref{fig:semi-supervised_mtl}.
\subsection{Data-specific features}
When there is a domain shift between the datasets of different tasks, as is common for instance when learning NER models with different label sets, the output label embeddings might not contain sufficient information to bridge the domain gap.
To mitigate this discrepancy, we augment the LTN's input with features that have been found useful for transfer learning \cite{ruder2017emnlp}. In particular, we use
the number of word types, type-token ratio, entropy, Simpson's index, and Rényi entropy as diversity features. We calculate each feature for each example.\footnote{For more information regarding the feature calculation, refer to \newcite{ruder2017emnlp}.} The features are then concatenated with the input of the LTN.
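The diversity features can be sketched as follows; the exact definitions follow \newcite{ruder2017emnlp}, and the formulas below (e.g. the choice of Rényi order $\alpha = 2$) are our illustrative assumptions:

```python
import numpy as np
from collections import Counter

def diversity_features(tokens, alpha=2.0):
    # Sketch of per-example diversity features: type count, type-token ratio,
    # Shannon entropy, Simpson's index, and Renyi entropy of order alpha.
    counts = np.array(list(Counter(tokens).values()), dtype=float)
    p = counts / counts.sum()                     # empirical word distribution
    return {
        "num_types": len(counts),
        "type_token_ratio": len(counts) / len(tokens),
        "entropy": -np.sum(p * np.log(p)),
        "simpson": np.sum(p ** 2),
        "renyi": np.log(np.sum(p ** alpha)) / (1 - alpha),
    }

feats = diversity_features("the cat sat on the mat".split())
```

The resulting feature vector is computed once per example and concatenated with the label output embeddings fed into the LTN.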
\subsection{Other multi-task improvements}
Hard parameter sharing can be overly restrictive and provide a regularisation that is too heavy when jointly learning many tasks. For this reason, we propose several additional improvements that seek to alleviate this burden: We use skip-connections, which have been shown to be useful for multi-task learning in recent work \cite{ruder2017sluice}. Furthermore, we add a task-specific layer before the output layer, which is useful for learning task-specific transformations of the shared representations \cite{Soegaard:Goldberg:16,ruder2017sluice}.
\section{Experiments}
For our experiments, we evaluate on a wide range of text classification tasks. In particular, we choose pairwise classification tasks---i.e. those that condition the reading of one sequence on another sequence---as we are interested in understanding whether knowledge can be transferred even for these more complex interactions. To the best of our knowledge, this is the first work on transfer learning between such pairwise sequence classification tasks. We implement all our models in Tensorflow \cite{abadi2016tensorflow} and release the code at \url{https://github.com/coastalcph/mtl-disparate}.
\begin{table}[t]
\centering
\begin{tabular}{l l c c c}
\toprule
Task & Domain & $N$ & $L$ & Metric\\
\midrule
{\tt Topic-2} & Twitter & 4,346 & 2 & $\rho^{PN}$ \\
{\tt Topic-5} & Twitter & 6,000 & 5 & $MAE^M$\\
{\tt Target} & Twitter & 6,248 & 3 & $F_1^M$ \\
{\tt Stance} & Twitter & 2,914 & 3 & $F_1^{FA}$\\
{\tt ABSA-L} & Reviews & 2,909 & 3 & $Acc$\\
{\tt ABSA-R} & Reviews & 2,507 & 3 & $Acc$\\
{\tt FNC-1} & News & 39,741 & 4 & $Acc$\\
{\tt MultiNLI} & Diverse & 392,702 & 3 & $Acc$\\
\bottomrule
\end{tabular}%
\caption{Training set statistics and evaluation metrics of every task. $N$: \# of examples. $L$: \# of labels.}
\label{tab:dataset-stats}
\end{table}
\subsection{Tasks and datasets}\label{sec:datasets}
\setlength{\tabcolsep}{0.3em}
\begin{table}[h]
\fontsize{10}{10}\selectfont
\begin{center}
\begin{tabular}{|L|}
\toprule
\textbf{Topic-based sentiment analysis}: \\
\textit{Tweet}: No power at home, sat in the dark listening to AC/DC in the hope it'll make the electricity come back again \\
\textit{Topic}: AC/DC \\
\textit{Label}: positive \\
\midrule
\textbf{Target-dependent sentiment analysis}: \\
\textit{Text}: how do you like settlers of catan for the wii? \\
\textit{Target}: wii \\
\textit{Label}: neutral \\
\midrule
\textbf{Aspect-based sentiment analysis}: \\
\textit{Text}: For the price, you cannot eat this well in Manhattan\\
\textit{Aspects}: restaurant prices, food quality \\
\textit{Label}: positive\\
\midrule
\textbf{Stance detection}: \\
\textit{Tweet}: Be prepared - if we continue the policies of the liberal left, we will be \#Greece\\
\textit{Target}: Donald Trump\\
\textit{Label}: favor\\
\midrule
\textbf{Fake news detection}: \\
\textit{Document}: Dino Ferrari hooked the whopper wels catfish, (...), which could be the biggest in the world.\\
\textit{Headline}: Fisherman lands 19 STONE catfish which could be the biggest in the world to be hooked\\
\textit{Label}: agree\\
\midrule
\textbf{Natural language inference}: \\
\textit{Premise}: Fun for only children\\
\textit{Hypothesis}: Fun for adults and children\\
\textit{Label}: contradiction\\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tab:dataset-examples} Example instances from the datasets described in Section \ref{sec:datasets}.}
\end{table}
We use the following tasks and datasets for our experiments, show task statistics in Table \ref{tab:dataset-stats}, and summarise examples in Table \ref{tab:dataset-examples}:
\paragraph{Topic-based sentiment analysis} Topic-based sentiment analysis aims to estimate the sentiment of a tweet known to be about a given topic. We use the data from SemEval-2016 Task 4 Subtask B and C \cite{SemEval:2016:task4} for predicting on a two-point scale of positive and negative ({\tt Topic-2}) and five-point scale ranging from highly negative to highly positive ({\tt Topic-5}) respectively. An example from this dataset would be to classify the tweet ``No power at home, sat in the dark listening to AC/DC in the hope it'll make the electricity come back again'' known to be about the topic ``AC/DC'', which is labelled as a positive sentiment. The evaluation metrics for {\tt Topic-2} and {\tt Topic-5} are macro-averaged recall ($\rho^{PN}$) and macro-averaged mean absolute error ($MAE^M$) respectively, which are both averaged across topics.
\paragraph{Target-dependent sentiment analysis} Target-dependent sentiment analysis ({\tt Target}) seeks to classify the sentiment of a text's author towards an entity that occurs in the text as positive, negative, or neutral. We use the data from \citeauthor{Dong2014} \shortcite{Dong2014}. An example instance is the expression ``how do you like settlers of catan for the wii?'' which is labelled as neutral towards the target ``wii''. The evaluation metric is macro-averaged $F_1$ ($F_1^M$).
\paragraph{Aspect-based sentiment analysis} Aspect-based sentiment analysis is the task of identifying whether an aspect, i.e. a particular property of an item, is associated with a positive, negative, or neutral sentiment \cite{Ruder2016a}. We use the data of SemEval-2016 Task 5 Subtask 1 Slot 3 \cite{Pontiki2016Aspect} for the laptops ({\tt ABSA-L}) and restaurants ({\tt ABSA-R}) domains. An example is the sentence ``For the price, you cannot eat this well in Manhattan'', labelled as positive towards both the aspects ``restaurant prices'' and ``food quality''. The evaluation metric for both domains is accuracy ($Acc$).
\paragraph{Stance detection} Stance detection ({\tt Stance}) requires a model, given a text and a target entity, which might not appear in the text, to predict whether the author of the text is in favour of or against the target, or whether neither inference is likely \cite{Augenstein2016stance}. We use the data of SemEval-2016 Task 6 Subtask B \cite{mohammad-EtAl:2016:SemEval}. An example from this dataset would be to predict the stance of the tweet ``Be prepared - if we continue the policies of the liberal left, we will be \#Greece'' towards the topic ``Donald Trump'', labelled as ``favor''. The evaluation metric is the macro-averaged $F_1$ score of the ``favor'' and ``against'' classes ($F_1^{FA}$).
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l c c c c c c c c}
\toprule
& {\tt Stance} & {\tt FNC} & {\tt MultiNLI} & {\tt Topic-2} & {\tt Topic-5}* & {\tt ABSA-L} & {\tt ABSA-R} & {\tt Target}\\
\midrule
\newcite{Augenstein2016stance} & \textbf{49.01} & - & - & - & - & - & - & -\\
\newcite{Riedel2017} & - & \textbf{88.46} & - & - & - & - & - & -\\
\citeauthor{chen2017recurrent} \shortcite{chen2017recurrent} & - & - & \textbf{74.90} & - & - & - & - & -\\
\citeauthor{palogiannidi2016tweester} \shortcite{palogiannidi2016tweester} & - & - & - & \underline{79.90} & - & - & - & -\\
\citeauthor{balikas2016twise} \shortcite{balikas2016twise} & - & - & - & - & \textbf{0.719} & - & - & - \\
\citeauthor{Brun2016} \shortcite{Brun2016} & - & - & - & - & - & - & \textbf{88.13} & -\\
\citeauthor{Kumar2016} \shortcite{Kumar2016} & - & - & - & - & - & \textbf{82.77} & \underline{86.73} & -\\
\citeauthor{Vo2015} \shortcite{Vo2015} & - & - & - & - & - & - &- & \textbf{69.90} \\
\midrule
STL & 41.1 & 72.72 & 49.25 & 63.92 & 0.919 & \underline{76.74} & 67.47 & 64.01 \\
\midrule
MTL + LEL & \underline{46.26} & 72.71 & \underline{49.94} & \textbf{80.52} & 0.814 & 74.94 & 79.90 & \underline{66.42} \\
MTL + LEL + LTN, main model & 43.16 & \underline{72.73} & 48.75 & 73.90 & \underline{0.810} & 75.06 & 83.71 & 66.10 \\
MTL + LEL + LTN + semi, main model & 43.56 & 72.72 & 48.00 & 72.35 & 0.821 & 75.42 & 83.26 & 63.00 \\
\bottomrule
\end{tabular}%
}
\caption{Comparison of our best performing models on the test set against a single task baseline and the state of the art, with task specific metrics. *: lower is better. Bold: best. Underlined: second-best.}
\label{tab:results-sota}
\end{table*}
\paragraph{Fake news detection} The goal of fake news detection in the context of the Fake News Challenge\footnote{\url{http://www.fakenewschallenge.org/}} is to estimate whether the body of a news article agrees with, disagrees with, discusses, or is unrelated to a headline. We use the data from the first stage of the Fake News Challenge ({\tt FNC-1}). An example for this dataset is the document ``Dino Ferrari hooked the whopper wels catfish, (...), which could be the biggest in the world.'' with the headline ``Fisherman lands 19 STONE catfish which could be the biggest in the world to be hooked'' labelled as ``agree''. The evaluation metric is accuracy ($Acc$)\footnote{We use the same metric as \newcite{Riedel2017}.}.
\paragraph{Natural language inference} Natural language inference is the task of predicting whether one sentence entails, contradicts, or is neutral towards another. We use the Multi-Genre NLI corpus ({\tt MultiNLI}) from the RepEval 2017 shared task \cite{Nangia2017}. An example instance would be the sentence pair ``Fun for only children'', ``Fun for adults and children'', which are in a ``contradiction'' relationship. The evaluation metric is accuracy ($Acc$).
\subsection{Base model}
Our base model is the Bidirectional Encoding model \cite{Augenstein2016stance}, a state-of-the-art model for stance detection that conditions a bidirectional LSTM (BiLSTM) encoding of a text on the BiLSTM encoding of the target. Unlike \newcite{Augenstein2016stance}, we do not pre-train word embeddings on a larger set of unlabelled in-domain text for each task as we are mainly interested in exploring the benefit of multi-task learning for generalisation.
\subsection{Training settings}
We use BiLSTMs with one hidden layer of $100$ dimensions, $100$-dimensional randomly initialised word embeddings, and a label embedding size of $100$. We train our models with RMSProp, a learning rate of $0.001$, a batch size of $128$, and early stopping on the validation set of the main task with a patience of $3$.
\section{Results}
Our main results are shown in Table \ref{tab:results-sota}, with a comparison against the state of the art. We present the results of our multi-task learning network with label embeddings (MTL + LEL), multi-task learning with label transfer (MTL + LEL + LTN), and the semi-supervised extension of this model. On 7/8 tasks, at least one of our architectures is better than single-task learning; and in 4/8, all our architectures are much better than single-task learning.
The state-of-the-art systems we compare against are often highly specialised, task-dependent architectures. Our architectures, in contrast, have not been optimised to compare favourably against the state of the art, as our main objective is to develop a novel approach to multi-task learning leveraging synergies between label sets and knowledge of marginal distributions from unlabelled data. For example, we do not use pre-trained word embeddings \cite{Augenstein2016stance,palogiannidi2016tweester,Vo2015}, class weighting to deal with label imbalance \cite{balikas2016twise}, or domain-specific sentiment lexicons \cite{Brun2016,Kumar2016}. Nevertheless, our approach outperforms the state of the art on two-way topic-based sentiment analysis ({\tt Topic-2}).
The poor performance compared to the state of the art on {\tt FNC} and {\tt MultiNLI} is expected; as we alternate among the tasks during training, our model sees only a comparatively small number of examples from these corpora, which are one and two orders of magnitude larger than the other datasets. For this reason, we do not achieve good performance on these tasks as main tasks, but they are still useful as auxiliary tasks, as seen in Table \ref{tab:auxiliary-tasks}.
\section{Analysis}
\subsection{Label Embeddings}
Our results above show that, indeed, modelling the similarity between tasks using label embeddings sometimes leads to much better performance. Figure \ref{fig:label_embeddings} shows why. In Figure~\ref{fig:label_embeddings}, we visualise the label embeddings of an MTL+LEL model trained on all tasks, using PCA. As we can see, similar labels are clustered together across tasks, e.g. there are two positive clusters (middle-right and top-right), two negative clusters (middle-left and bottom-left), and two neutral clusters (middle-top and middle-bottom).
Our visualisation also provides us with a picture of which auxiliary tasks are beneficial, and to what extent we can expect synergies from multi-task learning. For instance, the notion of positive sentiment appears to be very similar across the topic-based and aspect-based tasks, while the conceptions of negative and neutral sentiment differ. In addition, we can see that the model has failed to learn a relationship between {\tt MultiNLI} labels and those of other tasks, possibly accounting for its poor performance on the inference task. We did not evaluate the correlation between label embeddings and task performance, but \newcite{Bjerva2017} recently suggested that mutual information of target and auxiliary task label sets is a good predictor of gains from multi-task learning.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{label_embeddings.png}
\caption{Label embeddings of all tasks. Positive, negative, and neutral labels are clustered together.}
\label{fig:label_embeddings}
\end{figure}
\subsection{Auxiliary Tasks}
\begin{table}[h]
\centering
\begin{tabular}{l l}
\toprule
Main task & Auxiliary tasks\\
\midrule
{\tt Topic-2} & {\tt FNC-1}, {\tt MultiNLI}, {\tt Target} \\
\multirow{2}{*}{{\tt Topic-5}} & {\tt FNC-1}, {\tt MultiNLI}, {\tt ABSA-L}, \\
& {\tt Target}\\
{\tt Target} & {\tt FNC-1}, {\tt MultiNLI}, {\tt Topic-5} \\
{\tt Stance} & {\tt FNC-1}, {\tt MultiNLI}, {\tt Target} \\
{\tt ABSA-L} & {\tt Topic-5} \\
{\tt ABSA-R} & {\tt Topic-5}, {\tt ABSA-L}, {\tt Target}\\
\multirow{2}{*}{{\tt FNC-1}} & {\tt Stance}, {\tt MultiNLI}, {\tt Topic-5},\\
& {\tt ABSA-R}, {\tt Target} \\
{\tt MultiNLI} & {\tt Topic-5} \\
\bottomrule
\end{tabular}%
\caption{Best-performing auxiliary tasks for different main tasks.}
\label{tab:auxiliary-tasks}
\end{table}
For each task, we show the auxiliary tasks that achieved the best performance on the development data in Table \ref{tab:auxiliary-tasks}. In contrast to most existing work, we did not restrict ourselves to performing multi-task learning with only one auxiliary task \cite{Soegaard:Goldberg:16,Bingel:ea:17}. Indeed, we find that a combination of auxiliary tasks most often achieves the best performance. In-domain tasks are used less often than we assumed; only {\tt Target} is consistently used by all Twitter main tasks. In addition, tasks with a higher number of labels, e.g. {\tt Topic-5}, are used more often. Such tasks provide a more fine-grained reward signal, which may help in learning representations that generalise better. Finally, tasks with large amounts of training data such as {\tt FNC-1} and {\tt MultiNLI} are also used more often. Even if not directly related, the larger amount of training data that can be indirectly leveraged via multi-task learning may help the model focus on relevant parts of the representation space \cite{Caruana:93}. These observations go beyond existing studies \cite{Bingel:ea:17} and shed additional light on when multi-task learning may be useful.
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{l c c c c c c c c}
\toprule
& {\tt Stance} & {\tt FNC} & {\tt MultiNLI} & {\tt Topic-2} & {\tt Topic-5}* & {\tt ABSA-L} & {\tt ABSA-R} & {\tt Target}\\
\midrule
MTL & 44.12 & \underline{72.75} & \underline{49.39} & \textbf{80.74} & 0.859 & 74.94 & 82.25 & 65.73 \\
\midrule
MTL + LEL & \textbf{46.26} & 72.71 & \textbf{49.94} & \underline{80.52} & 0.814 & 74.94 & 79.90 & \textbf{66.42} \\
MTL + LTN & 40.95 & 72.72 & 44.14 & 78.31 & 0.851 & 73.98 & 82.37 & 63.71 \\
MTL + LTN, main model & 41.60 & 72.72 & 47.62 & 79.98 & 0.814 & \underline{75.54} & 81.70 & 65.61 \\
MTL + LEL + LTN & \underline{44.48} & \textbf{72.76} & 43.72 & 74.07 & 0.821 & \textbf{75.66} & 81.92 & 65.00 \\
MTL + LEL + LTN, main model & 43.16 & 72.73 & 48.75 & 73.90 & 0.810 & 75.06 & \textbf{83.71} & \underline{66.10} \\
\midrule
MTL + LEL + LTN + main preds feats & 42.78 & 72.72 & 45.41 & 66.30 & 0.835 & 73.86 & 81.81 & 65.08 \\
MTL + LEL + LTN + main preds feats, main model & 42.65 & 72.73 & 48.81 & 67.53 & \textbf{0.803} & 75.18 & 82.59 & 63.95 \\
\midrule
MTL + LEL + LTN + main preds feats -- diversity feats & 42.78 & 72.72 & 43.13 & 66.3 & 0.835 & 73.5 & 81.7 & 63.95 \\
MTL + LEL + LTN + main preds feats -- diversity feats, main model & 42.47 & 72.74 & 47.84 & 67.53 & \underline{0.807} & 74.82 & 82.14 & 65.11 \\
\midrule
MTL + LEL + LTN + semi & 42.65 & \underline{72.75} & 44.28 & 77.81 & 0.841 & 74.10 & 81.36 & 64.45 \\
MTL + LEL + LTN + semi, main model & 43.56 & 72.72 & 48.00 & 72.35 & 0.821 & 75.42 & \underline{83.26} & 63.00 \\
\bottomrule
\end{tabular}%
}
\caption{Ablation results with task-specific evaluation metrics on test set with early stopping on dev set. \textit{LTN} means the output of the relabelling function is shown, which does not use the task predictions, only predictions from other tasks. \textit{LTN + main preds feats} means main model predictions are used as features for the relabelling function. \textit{LTN, main model} means that the main model predictions of the model that trains a relabelling function are used. Note that for {\tt MultiNLI}, we down-sample the training data. *: lower is better. Bold: best. Underlined: second-best.}
\label{tab:ablation}
\end{table*}
\subsection{Ablation analysis}
We now perform a detailed ablation analysis of our model, the results of which are shown in Table \ref{tab:ablation}. We ablate whether to use the LEL (\textit{+ LEL}), whether to use the LTN (\textit{+ LTN}), whether to use the LTN output or the main model output for prediction (main model output is indicated by \textit{, main model}), and whether to use the LTN as a regulariser or for semi-supervised learning (semi-supervised learning is indicated by \textit{+ semi}). We further test whether to use diversity features (\textit{-- diversity feats}) and whether to use main model predictions as features for the LTN (\textit{+ main preds feats}).
Overall, the addition of the Label Embedding Layer improves the performance over regular MTL in almost all cases.
\begin{table}[h]
\centering
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{l c c c c }
\toprule
Task & Main & LTN & Main (Semi) & LTN (Semi) \\
\midrule
{\tt Stance} & 2.12 & 2.62 & 1.94 & 1.28 \\
{\tt FNC} & 4.28 & 2.49 & 6.92 & 4.84 \\
{\tt MultiNLI} & 1.5 & 1.95 & 1.94 & 1.28 \\
{\tt Topic-2} & 6.45 & 4.44 & 5.87 & 5.59 \\
{\tt Topic-5}* & 9.22 & 9.71 & 11.3 & 5.90 \\
{\tt ABSA-L} & 3.79 & 2.52 & 9.06 & 6.63 \\
{\tt ABSA-R} & 10.6 & 6.70 & 9.06 & 6.63 \\
{\tt Target} & 26.3 & 14.6 & 20.1 & 15.7 \\
\bottomrule
\end{tabular}
}
\caption{Error analysis of the LTN with and without semi-supervised learning for all tasks. Metric shown: percentage of correct predictions made only by the relabelling function or only by the main model, respectively, relative to the number of all correct predictions.}
\label{tab:ltn-errorana}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{relabel_plot.png}
\caption{Learning curves with LTN for selected tasks, dev performances shown. The main model is pre-trained for 10 epochs, after which the relabelling function is trained.}
\label{fig:ltn_learning_curve}
\end{figure}
\subsection{Label transfer network}
To understand the performance of the LTN, we analyse learning curves of the relabelling function vs. the main model. Examples for all tasks without semi-supervised learning are shown in Figure \ref{fig:ltn_learning_curve}.
One can observe that the relabelling model does not take long to converge as it has fewer parameters than the main model. Once the relabelling model is learned alongside the main model, the main model performance first stagnates, then starts to increase again. For some of the tasks, the main model ends up with a higher task score than the relabelling model.
We hypothesise that the softmax predictions of other, even highly related, tasks are less helpful for predicting main labels than the output layer of the main task model. At best, learning the relabelling model alongside the main model might act as a regulariser for the main model and thus improve the main model's performance over a baseline MTL model, as is the case for {\tt Topic-5} (see Table \ref{tab:ablation}).
To further analyse the performance of the LTN, we examine to what degree the predictions of the main model and the relabelling model for individual instances are complementary to one another. In other words, we measure the percentage of correct predictions made only by the relabelling model or only by the main model, relative to the number of correct predictions overall. Results for each task are shown in Table \ref{tab:ltn-errorana} for the LTN with and without semi-supervised learning. One can observe that, even though the relabelling function overall contributes to the score to a lesser degree than the main model, a substantial number of correct predictions are made by the relabelling function that are missed by the main model. This is most pronounced for {\tt ABSA-R}, where the proportion is 14.6\%.
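The complementarity metric of Table~\ref{tab:ltn-errorana} can be sketched in a few lines of NumPy. This is a minimal sketch with our own helper name and toy label arrays; we read ``all correct predictions'' as the union of instances correctly classified by either model:

```python
import numpy as np

def complementarity(y_true, y_main, y_ltn):
    """Percentage of correct predictions made only by the main model
    and only by the relabelling function (LTN), relative to the number
    of instances correctly classified by at least one of the two."""
    correct_main = y_main == y_true
    correct_ltn = y_ltn == y_true
    n_correct = np.sum(correct_main | correct_ltn)
    only_main = np.sum(correct_main & ~correct_ltn)
    only_ltn = np.sum(correct_ltn & ~correct_main)
    return 100.0 * only_main / n_correct, 100.0 * only_ltn / n_correct

# Toy example: 6 instances, 3 classes
y_true = np.array([0, 1, 1, 2, 0, 2])
y_main = np.array([0, 1, 0, 2, 1, 2])  # correct on instances 0, 1, 3, 5
y_ltn  = np.array([0, 1, 1, 0, 1, 1])  # correct on instances 0, 1, 2
print(complementarity(y_true, y_main, y_ltn))  # (40.0, 20.0)
```

A high second number indicates that the relabelling function recovers many instances the main model misses, which is what motivates combining the two.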
\section{Conclusion}
We have presented a multi-task learning architecture that (i) leverages potential synergies between classifier functions relating shared representations with disparate label spaces and (ii) enables learning from mixtures of labeled and unlabeled data. We have presented experiments with combinations of eight pairwise sequence classification tasks. Our results show that leveraging synergies between label spaces sometimes leads to big improvements, and we have presented a new state of the art for topic-based sentiment analysis. Our analysis further showed that (a) the learned label embeddings were indicative of gains from multi-task learning, (b) auxiliary tasks were often beneficial across domains, and (c) label embeddings almost always led to better performance. We also investigated the dynamics of the label transfer network we use for exploiting the synergies between disparate label spaces.
\section*{Acknowledgments}
Sebastian Ruder is supported by the Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. Anders Søgaard is supported by the ERC Starting Grant Number 313695. Isabelle Augenstein is supported by Eurostars grant Number E10138. We further gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
\section{Introduction}
Spontaneous breaking of the chiral symmetry is considered
to be the origin of hadron mass.
The chiral symmetry is expected to be (partially) restored
at finite density, and hadron masses are predicted to decrease
even at the normal nuclear density.
Our purpose is to investigate the origin of hadron mass
through the mass modification of hadrons.
The dilepton decay channel of vector mesons produced in nuclear reactions
is a good probe of in-medium mass modification
since it is free from final state interactions.
We take the $p+A \rightarrow \phi + X$ reaction as an example
to explain the expected invariant mass distribution of the
vector meson. A $\phi$ meson produced inside a target nucleus
travels and then decays inside or outside the target nucleus.
When the $\phi$ meson decays outside the target nucleus,
the mass spectrum is the well-known invariant mass spectrum in vacuum,
as in Fig.~\ref{fig:inv}(a).
When the $\phi$ meson decays inside the target nucleus,
the observed mass is the one in medium.
Thus, if in-medium mass modification exists,
the mass distribution is modified to some extent, as in Fig.~\ref{fig:inv}(b).
What we experimentally measure is the sum of these cases, as in Fig.~\ref{fig:inv}(c).
\begin{figure}[hb]
\begin{center}
\includegraphics[width=.8\linewidth]{invariantmass.eps}
\end{center}
\caption{Invariant mass spectra of the $\phi$ meson (a) in vacuum and (b) in medium.
(c) is the sum of (a)+(b). The dashed line indicates the $\phi$ meson mass
in vacuum.}
\label{fig:inv}
\end{figure}
The KEK E325 experiment was performed at the KEK Proton Synchrotron
to search for in-medium mass modification using the method explained above.
They measured the invariant mass spectra of $e^+e^-$ pairs produced
in 12~GeV proton beam induced nuclear reactions.
As the nuclear targets, C and Cu were used.
The mass resolution was about 11~MeV$/c^2$.
Figure~\ref{fig:invphie325} shows the invariant mass spectrum of $\phi$ mesons
produced in the Cu target with $\beta\gamma(=P/M)<1.25$~\cite{muto}. The blue line
represents the expected line shape assuming the vacuum mass,
including experimental effects.
There is an excess on the lower side of the $\phi$ mass peak over the expected line shape.
Figure~\ref{fig:bg-excess-e325} shows the amount of excess versus the $\beta\gamma$ of the
$\phi$ mesons.
This figure supports the picture that slower $\phi$ mesons
in larger nuclear targets have a higher
probability of decaying inside the target nucleus and thus experiencing medium modification.
To quantitatively extract information on the medium effect,
they assumed a linear dependence of the mass on density,
\begin{equation}
\frac{m(\rho)}{m(0)} = 1 - k \frac{\rho}{\rho_0},
\end{equation}
where $m(\rho)$ is the mass at density $\rho$, $\rho_0$ is the normal nuclear density,
and $k$ is the parameter to be determined.
Similarly, for the width they assumed
\begin{equation}
\frac{\Gamma(\rho)}{\Gamma(0)} = 1 + k_2 \frac{\rho}{\rho_0},
\end{equation}
where $\Gamma(\rho)$ is the width at density $\rho$, and $k_2$
is the parameter to be determined.
They obtained $k=0.034^{+0.006}_{-0.007}$ and $k_2=2.6^{+1.8}_{-1.2}$,
which means that the mass of the $\phi$ decreases by 3.4\% and the width
is broadened by a factor of 3.6 at the normal nuclear density.
They also observed modification of the $\rho$ and $\omega$ masses and concluded that
$k=0.092 \pm 0.002$, assuming that the parameter is common for $\rho$ and
$\omega$~\cite{e325-omega}.
Width broadening was not necessary to reproduce the observed invariant mass spectra.
The mass shift parameters $k$ for $\rho/\omega$ and $\phi$ are
at the same level as the calculations based on QCD sum rules~\cite{QCDsum}.
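For a quick numerical check, the linear parameterisations above can be evaluated at the normal nuclear density $\rho=\rho_0$. This is a minimal sketch; the helper names are ours, and the vacuum $\phi$ mass of about 1019.461~MeV/$c^2$ is the PDG value:

```python
def mass_ratio(k, rho_over_rho0=1.0):
    """m(rho)/m(0) = 1 - k * rho/rho0"""
    return 1.0 - k * rho_over_rho0

def width_ratio(k2, rho_over_rho0=1.0):
    """Gamma(rho)/Gamma(0) = 1 + k2 * rho/rho0"""
    return 1.0 + k2 * rho_over_rho0

k, k2 = 0.034, 2.6       # KEK E325 best-fit values for the phi meson
m_phi_vac = 1019.461     # MeV/c^2, vacuum phi mass (PDG)

print(mass_ratio(k))              # ~0.966: a 3.4% mass decrease
print(width_ratio(k2))            # ~3.6: width broadened 3.6 times
print(m_phi_vac * mass_ratio(k))  # ~984.8 MeV/c^2 in-medium phi mass
```

The in-medium $\phi$ mass of about 985~MeV/$c^2$ at $\rho=\rho_0$ is what produces the low-mass excess below the vacuum peak in Fig.~\ref{fig:invphie325}.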
\begin{figure}
\begin{center}
\begin{minipage}{0.40\linewidth}
\vspace{2ex}
\includegraphics[width=.95\linewidth]{e325phiCu_.eps}
\vspace{1ex}
\caption{Invariant mass spectrum of $\phi$ meson obtained with $p+$Cu reactions
by KEK E325 experiment. $\beta\gamma$ of $\phi$ is $<1.25$.
}
\label{fig:invphie325}
\end{minipage}
\begin{minipage}{0.10\linewidth}
~
\end{minipage}
\begin{minipage}{0.40\linewidth}
\includegraphics[width=.95\linewidth]{bg-excess-e325.eps}
\caption{Amount of excess versus $\beta\gamma$ of $\phi$
measured by KEK E325 experiment.}
\label{fig:bg-excess-e325}
\end{minipage}
\end{center}
\end{figure}
The CLAS g7 experiment at Jefferson Laboratory
used $\gamma + A$ reactions, and the light vector mesons
were reconstructed using the $e^+e^-$ decay channel~\cite{g7}.
Results obtained with $^2$H, C, Fe, and Ti targets were presented.
For the $\omega$ and $\phi$ mesons, no mass shift was assumed in the analysis
due to their long lifetimes. The $\omega$ and $\phi$ contributions were subtracted
to extract the invariant mass spectra of the $\rho$ meson.
The mass of the $\rho$ meson in the nuclear medium does not show any shift.
The width is broadened,
consistent with the expectation from collisional broadening.
Dilepton invariant mass spectra were also measured in heavy-ion collisions.
CERES/NA45 reported $e^+e^-$ invariant masses measured in 158~AGeV Pb+Au collisions~\cite{na45}.
An in-medium broadening scenario for the $\rho$ is
favored over a $\rho$ mass dropping scenario.
PHENIX also reported invariant mass spectra of $e^+e^-$ in Au+Au collisions
at $\sqrt{s_{NN}}=200$~GeV~\cite{phenix}.
An enhancement is observed in the low mass region (below the $\phi$ peak).
The enhancement at quite low mass ($m_{ee}<0.3$~GeV$/c^2$) and
high $p_T$ ($1<p_T<5$~GeV/$c$)
is interpreted as the production of virtual direct photons,
which leads to their temperature measurement.
No theoretical model could quantitatively explain the enhancement
in the low mass and low $p_T$ region.
\section{J-PARC E16 Experiment}
Some modification exists in the $e^+e^-$ mass spectra, but
its origin is not yet clear; there are even contradictions in the interpretation.
We propose to pursue this problem using the same
reaction as KEK-E325 but with 100 times more statistics ($10^3~\phi \rightarrow 10^5~\phi$)
and with two times better mass resolution (11~MeV/$c^2$ $\rightarrow$ 5~MeV/$c^2$).
The proposal was approved as stage-1, and the experiment
was named E16~\cite{proposal}.
We use 30~GeV $p+A \rightarrow \rho/\omega/\phi\ X$ reactions
and measure dilepton invariant mass spectra. As the nuclear targets,
CH$_2$, C, Cu, and Pb are used.
The J-PARC E16 experiment has the following advantages and disadvantages compared
to other experiments. It observes the $e^+e^-$ decay channel, so
it is free from final state interactions, in contrast to
experiments using hadronic decay channels.
However, the $e^+e^-$ decay channel has a very tiny branching ratio ($\sim 3\times 10^{-4}$
for the $\phi$ meson).
The E16 experiment uses proton-induced reactions;
therefore, the system is cold and static, and thus simpler than that of heavy-ion collisions.
The E16 experiment is expected to measure the in-medium modification of the $\phi$ meson invariant mass.
Compared to the $\rho$ and $\omega$ mesons, the $\phi$ meson has a well-separated,
non-overlapping peak and a narrower width.
However, the production cross section of the $\phi$ meson is much smaller
than that of the $\rho$ and $\omega$;
thus it is difficult to collect high statistics, and
CLAS-g7 and CERES could not discuss the mass spectra of the $\phi$.
The disadvantages, which mainly come from the fact
that $\phi \rightarrow e^+e^-$ is a rare probe,
are overcome by collecting high-statistics data.
Once this level of statistics is achieved,
the invariant mass distribution of slowly moving $\phi$ mesons with $\beta\gamma$
less than 0.5, obtained with the Pb target, is expected to show a double peak,
as in Fig.~\ref{fig:invmass-phi-e16}.
Note that the modification parameters obtained by KEK-E325 are assumed.
The $\beta\gamma$ and target size dependence of the modification
expected to be obtained is shown in Fig.~\ref{fig:bg-excess-e16},
so a more systematic study is possible.
We are also able to obtain the dispersion relation, as shown by the blue points in Fig.~\ref{fig:dispersion},
which is qualitatively new information.
This new information can give further insight into the in-medium modification.
\begin{figure}
\begin{center}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=.9\linewidth]{invmass-phi.eps}
\caption{Invariant mass distribution of $\phi$ meson with $\beta\gamma<0.5$
expected to be observed by J-PARC E16 experiment using Pb target.}
\label{fig:invmass-phi-e16}
\end{minipage}
\begin{minipage}{0.05\linewidth}
~
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=.9\linewidth]{bg-excess-e16.eps}
\caption{Amount of excess versus $\beta\gamma$ of $\phi$
expected to be obtained by J-PARC E16 experiment.}
\label{fig:bg-excess-e16}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.35\linewidth]{dispersion.eps}
\caption{Dispersion relation.
The red dotted curve shows a theory calculation by S.H.~Lee~\cite{lee}.
Note that the calculation is limited for momentum of less than 1~GeV/$c$
and is extrapolated to 3~GeV/$c$.
The black dotted curves show the uncertainty of the calculation.
The blue points show the statistical uncertainties
expected to be obtained by the J-PARC E16 experiment.
The center values are taken
from the theoretical calculation mentioned above.
The purple point shows the result obtained by the KEK E325 experiment.
}
\label{fig:dispersion}
\end{center}
\end{figure}
\subsection{J-PARC and the high momentum beam line}
To achieve 100 times more statistics, we utilize a 10 times more intense beam
($10^9$ protons per pulse (ppp) $\rightarrow$ $10^{10}$ ppp),
a spectrometer with 5 times larger acceptance,
and a 2 times larger production rate due to the increased beam energy
(12~GeV $\rightarrow$ 30~GeV).
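The factor of 100 is simply the product of these three gains, as the following trivial sketch (variable names are ours) makes explicit:

```python
beam_gain = 1e10 / 1e9    # 10x more protons per pulse
acceptance_gain = 5.0     # 5x larger spectrometer acceptance
production_gain = 2.0     # 2x production rate at 30 GeV vs 12 GeV

total_gain = beam_gain * acceptance_gain * production_gain
print(total_gain)  # 100.0
```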
The J-PARC E16 experiment plans to use the high momentum beam line,
which will be constructed at the J-PARC Hadron Experimental Facility.
J-PARC, the Japan Proton Accelerator Research Complex,
is a high-intensity proton accelerator facility located in Tokai village, Japan.
The Main Ring (MR) of J-PARC can accelerate protons up to 30~GeV.
Figure~\ref{fig:sy-hd} shows the plan view of the switchyard and
the Hadron Experimental Facility.
The protons in the MR are slowly extracted to LINE-A.
The proton beam follows LINE-A through the switchyard
and is delivered to the Hadron Experimental Facility.
The protons collide with the T1 target to provide secondary beams
to the existing beam lines such as K1.8, K1.8BR, and KL.
The beam power was 24~kW as of 2013,
which corresponds to $3\times 10^{13}$ ppp.
To make the primary proton beam available to the E16 experiment,
the high momentum beam line, called LINE-B,
is being constructed.
LINE-B borrows a small fraction of the beam ($\sim 10^{-4}$) in LINE-A
with a Lambertson-type magnet at the switchyard.
The beam is extracted to the south side of the Hadron Experimental Facility,
where the E16 spectrometer is to be built.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{sy-hd.eps}
\caption{Plan view of the switchyard and the Hadron Experimental Facility.}
\label{fig:sy-hd}
\end{center}
\end{figure}
\subsection {E16 spectrometer}
A 3D view of the J-PARC E16 spectrometer is shown on the left side
of Fig.~\ref{fig:spectrometer}.
The E16 detectors are all installed inside a giant dipole magnet
with a field strength of 1.7~T at the center.
A horizontal cut view at the center is presented on the right side of Fig.~\ref{fig:spectrometer}.
The proton beam runs from the bottom to the top of the figure
and hits the target at the center of the spectrometer.
The spectrometer consists of GEM Trackers (GTR)~\cite{GTR},
Hadron Blind Detectors (HBD)~\cite{HBD}, and lead glass (LG) calorimeters.
A module is defined as a set of GTR, HBD, and LG which covers 30 degrees
both horizontally and vertically. The full design of the spectrometer
consists of 26 modules. The GTR is made of three layers of position-sensitive
GEM tracking chambers
with sizes of 100 $\times$ 100 mm$^2$, 200 $\times$ 200 mm$^2$, and 300 $\times$ 300 mm$^2$,
respectively. The HBD is a Cherenkov counter and is used for electron identification
together with the LG.
Particle tracks in the magnetic field
are reconstructed with the GTR so that the momenta are measured.
Electron candidates are selected with the HBD and LG.
A position resolution of 100~$\mu$m for incident angles of up to 30 degrees
is required for the GTR.
Rejection factors of 100 and 25 are required for the HBD and LG, respectively.
\begin{figure}
\begin{center}
\begin{minipage}{0.40\linewidth}
\includegraphics[width=1.0\linewidth]{e16-spectrometer3d.eps}
\end{minipage}
\begin{minipage}{0.55\linewidth}
\includegraphics[width=1\linewidth,clip]{j-parc-phi-detector5c.eps}
\end{minipage}
\caption{3D view (Left) and plan view (Right) of the E16 spectrometer.}
\label{fig:spectrometer}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=0.9\linewidth]{gem.eps}
\caption{Schematic of a GEM tracking chamber.}
\label{fig:gem}
\end{minipage}
\begin{minipage}{0.05\linewidth}
~
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=0.9\linewidth]{gtr.eps}
\vspace{2ex}
\caption{Picture of the production type of the GEM tracking chambers.
The sizes are 100 $\times$ 100 mm$^2$ and 200 $\times$ 200 mm$^2$, respectively.
}
\label{fig:picgem}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=.9\linewidth]{hbd_schematics.eps}
\caption{Schematic of the photocathode of HBD.}
\label{fig:hbd-schematics}
\end{minipage}
\begin{minipage}{0.05\linewidth}
~
\end{minipage}
\begin{minipage}{0.45\linewidth}
\vspace{10ex}
\includegraphics[width=.9\linewidth]{hbdpic.eps}
\vspace{3ex}
\caption{Picture of a prototype of HBD in real size.}
\label{fig:hbdpic}
\end{minipage}
\end{center}
\end{figure}
\subsection{R \& D status of the spectrometer}
A schematic of a GEM tracking chamber is shown in Fig.~\ref{fig:gem}.
Ar + CO$_2$ (70:30) is used as the amplification gas.
Three GEM foils are placed in the chamber, and they amplify the ionization electrons
produced by a traversing charged particle in the gap above the top GEM.
The amplified signal is read out with a two-dimensional strip readout board.
A custom preamplifier board using the APV25 chip~\cite{apv} has been developed.
The mass production type of the GEM tracking chambers in the three sizes,
together with the preamplifier boards,
has been built. Their performance was evaluated with charged particle beams
at J-PARC and ELPH.
The required resolution of 100~$\mu$m was
achieved for incident angles of up to 30 degrees.
A picture of the GEM chambers in two sizes is shown in Fig.~\ref{fig:picgem}.
The first-level trigger signal is read out from the bottom of the GEM foil.
A prototype ASD (Amplifier-Shaper-Discriminator) ASIC for the trigger readout
has also been developed.
The HBD is a type of Cherenkov detector using
CsI-evaporated GEMs as a photocathode.
Our HBD has been developed based on the PHENIX HBD experience~\cite{HBDphenix}.
CF$_4$ serves as both radiator and amplification gas.
With a radiator length of 50~cm, 11 photoelectrons are expected.
A schematic of the photocathode is shown in Fig.~\ref{fig:hbd-schematics}.
An incident electron emits Cherenkov photons. The photons
are converted into photoelectrons by the CsI layer,
which is evaporated on top of the top GEM.
The photoelectrons are then amplified by the GEMs.
A weak reverse bias field is applied in the gap between the mesh and the top GEM,
so that the ionization electrons in the gap are swept into the mesh.
Even with the reverse bias field, photoelectrons produced near the top GEM surface
are still attracted by the GEM's field and are amplified to be detected.
Therefore, the HBD is blind to ionization while being sensitive to Cherenkov photons.
The size of the photocathode of an HBD module is 600 $\times$ 600 mm$^2$, and
four photocathodes with a size of 300 $\times$ 300~mm$^2$ are used to fill the module.
An extensive R \& D effort has been performed to establish the HBD components,
such as efficient and robust CsI GEMs, airtight chambers, and readout boards.
Prototypes of GEMs with and without CsI, chambers, and readout boards in small
and real sizes were produced.
A picture of a real-size prototype of the HBD chamber is shown in Fig.~\ref{fig:hbdpic}.
It corresponds to one module of the spectrometer.
A beam test was performed with a negatively charged particle beam of
1.0~GeV/$c$ at the J-PARC
K1.1BR beam line to evaluate the performance of a small-size prototype HBD.
A pion rejection factor of 100 with an electron efficiency of 80\% was achieved
using a cluster size analysis.
The real-size prototype of the HBD also operates well.
The performance meets the rejection and efficiency requirements of the experiment.
\subsection{Schedule}
The experiment was approved as stage-1 in 2007.
Detector R\&D started in 2008.
The construction budget of the high-momentum beam line was approved in 2013.
The technical design report was submitted, and the mass production
of the detectors started in 2014.
Due to budgetary limitations, we will start with one third of the full design,
which will be ready for the first physics
run anticipated in JFY2016.
\section{Other related experiments at J-PARC}
The J-PARC E26 experiment has been proposed to investigate
the $\omega$ meson in the nuclear medium~\cite{Ozawa1, Ozawa2}.
It plans to use the $\pi^-$ beam at the J-PARC K1.8 beam line
with a momentum of 1.8~GeV/$c$ and an intensity of $1\times 10^7$/pulse.
The reaction $\pi^{-} A \rightarrow \omega n X$ is used.
The invariant mass of the $\omega$ meson is measured
with the $\omega \rightarrow \pi^0\gamma \rightarrow 3\gamma$ decay mode.
When the neutron is detected at zero degrees, recoilless $\omega$ production
is realized.
This condition is suitable for the study of the in-medium effect.
A nuclear $\omega$ bound state can be searched for via
the forward neutron measurement.
The J-PARC E29 experiment has been proposed to investigate the in-medium mass modification
of the $\phi$ meson via a $\phi$ meson bound state in the target nucleus~\cite{Ohnishi1,Ohnishi2}.
It plans to use a $\bar{p}$ beam with a momentum of 1.1~GeV/$c$
and an intensity of $1\times 10^6$/pulse.
When four strange quarks are identified in the final state,
the double $\phi$ production, $\bar{p} + p \rightarrow \phi \phi$, dominates.
The forward-going $\phi$ meson is detected via the $K^+K^-$ decay.
The $\phi$ meson in the nucleus is detected via the $\Lambda K^+$ channel,
which occurs only when the $\phi$ is inside the nucleus ($\phi+p \rightarrow \Lambda K^+$).
The missing mass spectrum is calculated from the beam momentum
and the forward-going $\phi$ momentum.
The backward $\phi$, whose momentum is of the same order as the Fermi momentum,
is thus detected via its $\Lambda K^+$ decay in the nucleus.
When the high-intensity, high-resolution secondary beam line (HIHR),
which was proposed by RCNP, is realized,
an experimental study of the $\phi$ meson in the nuclear medium
using a method similar to that of J-PARC E26 can be performed.
A $10^9$/pulse $\pi^-$ beam with a momentum of $\sim 2$~GeV/$c$ is used
to induce the $\pi^- + p \rightarrow \phi n$ reaction.
If the neutron is identified at a forward angle, ultra-slow $\phi$ mesons are selected.
The forward neutron measurement may lead to the observation of
a nuclear $\phi$ bound state.
About 10 times more $\phi$ mesons with $\beta\gamma<0.5$ are expected to be collected
compared to E16.
\section{Summary}
The origin of hadron mass is studied through the mass modification of vector mesons.
There are many measurements of dilepton invariant masses in hot and cold systems.
Some modification exists, but its origin is not yet clear.
The J-PARC E16 experiment pursues this question by collecting 100 times more statistics
compared to the KEK E325 experiment.
We expect to obtain a double-peak structure in the $\phi$ meson invariant mass spectra,
a wide-ranging system size dependence of the in-medium modification,
and the dispersion relation of the $\phi$ meson in the nuclear medium.
We will start with one third of the design configuration, and the physics
run is anticipated in JFY2016.
More experiments regarding vector meson mass modification are planned;
all together, they will provide further insight into the origin of mass
and chiral symmetry.
\section{Acknowledgments}
We would like to thank the staff of the KEK Fuji test beam line, ELPH at Tohoku
University, LEPS at SPring-8, the J-PARC Hadron Experimental Facility, and
the RIKEN RI Beam Factory for their support during the detector beam tests.
We would also like to thank the KEK electronics system group (e-sys)
for their help in the development and testing of the readout circuits.
This study was partly supported by
Grant-in-Aid for JSPS Fellows 12J01196, RIKEN SPDR program,
and MEXT/JSPS KAKENHI Grant Numbers 19654036, 19340075, 21105004 and 26247048.
\section*{Acknowledgements}
I am grateful to N.~Kivel, D.~M\"uller and M.~Strikman for illuminating
discussions.
I especially enjoyed discussions with A.~Radyushkin,
who independently and from a different perspective
came to the conclusion that the BaBar data favour the flat pion DA \cite{tolya}. I am thankful to him for the exchange of
ideas and his advice.
\section{Introduction}
\label{intro}
Analytical and numerical models of moving heat sources are widely used in simulations of
welding \cite{arora2019thermal}, laser treatment and cutting \cite{gladush2011physics}, frictional systems \cite{laraqi2003exact}, and laser and electron beam additive manufacturing \cite{gusarov2009model,promoppatum2017comprehensive}.
Pioneering theoretical work on quasi-stationary problems with moving heat sources was performed by Wilson \cite{wilson1904} and later by Rosenthal with application to welding \cite{rosenthal1946theory}.
More complex transient solutions and models with different shapes of the heat sources and heat intensity distributions have been developed since then \cite{eagar1983temperature,hou2000general}.
Today, moving heat source models are the basis for simulations of powder bed fusion processes. Such models allow one to evaluate the thermal state of the parts during 3D printing and subsequently to predict the related changes in material microstructure and porosity, phase composition, surface roughness, over-melting effects, and the residual stress state \cite{yan2018review,srivastava2020multi,solyaev2019overmelting}. The most common approach is to use Gaussian-type models for the laser beam together with numerical simulations of the conjugate heat and mass transfer inside the melt pool and in the heat affected zone \cite{yan2018review}. Simplified quasi-stationary and point-source analytical and semi-analytical models are also used to obtain approximate predictions for the melt pool morphology and to select optimal values of the process parameters (laser power, scanning speed, spot size, etc.) \cite{promoppatum2017comprehensive,tian2020melt}. Analytical solutions are also involved in novel hybrid methods for improving the computational efficiency of long-term finite-element simulations \cite{moran2021scan}.
In the present paper we develop a new class of moving heat source models that allow one to take into account the effect of the material microstructure.
The obtained analytical and semi-analytical solutions are developed within a high-order heat transfer theory. This theory can be formally deduced from the
well-known mixture theory of heat conducting two-component systems
(or the two-temperature model) \cite{forest2010some}. The considered theory can also be obtained based on the extended approaches of continuum thermodynamics,
assuming that the free energy density depends not only on the temperature field and its first gradients, but also on its second gradients \cite{forest2010some,nguyen2005non} (this is why we refer to the derived solutions as ``gradient models''). The governing equation of the considered model, i.e. the heat equation, is of fourth order with respect to the spatial coordinates and of second order with respect to time. This model is a generalized variant of Cattaneo's and Guyer-Krumhansl's models (see Ref.~\cite{forest2010some}).
In order to derive closed form analytical solutions we introduce simplified constitutive assumptions about the relations between the gradient parameters of the model. As a result, the developed solutions contain a single additional length scale parameter that can be treated as the material's internal characteristic length scale. Similar parameters always arise in high-order gradient theories, which are well established today in elasticity \cite{dell2009generalized}, in hydrodynamics \cite{fried2006tractions,rosi2013propagation}, in electrodynamics \cite{lazar2020second}, and in different coupled field theories \cite{sciarra2007second,solyaev2021electric}. A second gradient modification of the Newtonian gravity theory has also been discussed recently in Ref.~\cite{lazar2020gradient}.
Identification of the length scale parameters of gradient theories is a specific problem that has been solved previously, for example, within the gradient elasticity theory for single crystals and idealized structures based on molecular dynamics~\cite{lurie2017identification} and first principle calculations \cite{shodja2013ab}. Discrete and numerical models have been used to evaluate these parameters for inhomogeneous materials and metamaterials \cite{yvonnet2020computational,abdoul2018strain,dell2016large}.
Experimentally observed size effects have also been used for the parameter identification of gradient models of nano-composites and nano-fluids \cite{ma2018inclusion,solyaev2020gen}.
Generally, the need for the use of high-order gradient continuum theories can be justified for processes with relatively high gradients of the field variables. Such situations arise when one considers small scale systems \cite{cordero2016second}, microstructured materials and metamaterials \cite{eremeyev2018linear}, and high frequency phenomena \cite{askes2011gradient}. The giant spatial and temporal gradients of temperature in laser treatment processes make them a promising area of application for gradient theories. For simulations of such processes, the two-temperature model and the Guyer-Krumhansl model have been widely established \cite{kovacs2018analytic,naldo2020understanding}. In the present paper, we show that phenomenological gradient models of heat transfer can also be useful for the refined analysis of the melt pool morphology that forms during laser powder bed fusion.
The rest of the paper is organized as follows. In Section 2 we consider the gradient theory of heat transfer and propose its particular variant, which allows an explicit definition of the general solution for moving source problems. In Section 3 we derive closed form solutions for the point, line and Gaussian heat sources. Dimensionless forms of these solutions are given, and a method for the identification of the additional length scale parameter of the model is proposed. In Section 4 we present the results of numerical calculations for the temperature profiles and the melt pool shape based on the obtained solutions for moving source problems.
A comparison of the model with available experimental data and an example of the identification of the model's additional length scale parameter for tungsten powder are presented.
\section{Gradient model of heat transfer}
\label{sec1}
\subsection{Governing equation}
\label{sec1a}
Consider heat conduction in an infinite isotropic and homogeneous medium. The higher-order heat equation within the considered gradient theory of heat transfer can be expressed in the following form \cite{forest2010some}:
\begin{equation}
\label{HE}
(1 - g^2 \Delta)\dot \theta + \tau \ddot \theta = \kappa (1 - l^2 \Delta) \Delta \theta + \frac{1}{c\rho}\hat q
\end{equation}
where $\theta(\textbf x) = T - T_i$ $[K]$ is the rise of temperature $T(\textbf x)$ over the initial level $T_i$ at a point $\textbf x = \{x,y,z\}$; $\kappa$ $[m^2/s]$ is the thermal diffusivity, $c$ $[J/(kg \,K)]$ is the heat capacity at constant pressure, $\rho$ $[kg/m^3]$ is the mass density, $\tau$ $[s]$ is the relaxation time,
$g$ $[m]$ is the so-called dissipation parameter \cite{fulop2018emergence},
$l$ $[m]$ is the additional length scale parameter of the gradient theory, $\hat q$ $[W/m^3]$ is the volumetric power source; $\Delta$ is the three-dimensional Laplace operator and $\Delta\Delta = \Delta^2$ is the biharmonic operator.
In comparison with the standard Guyer-Krumhansl model \cite{fulop2018emergence}, equation \eqref{HE} contains the additional biharmonic term $\Delta^2\theta$.
It was shown previously \cite{forest2010some} that equation \eqref{HE} follows straightforwardly from the theory of two-component heat-conducting mixtures (or the two-temperature model) if one assumes that the averaged temperature can be approximated by the arithmetic mean of the individual temperatures of the components.
We can also note that the governing equation of the form \eqref{HE} is similar to those that arise in the theory of multi-component diffusion \cite{aifantis1979new}. The biharmonic term $\Delta^2\theta$ also arises in heat transfer models with a non-local Fourier law \cite{ramu2015compact}.
From the phenomenological point of view, we can directly apply the governing equation \eqref{HE} to simulations of heat transfer in powder bed fusion processes by using effective properties $\kappa,\, c,\,\rho, \,g,\,\tau, \,l$ of the powder. The relation between these effective properties and the initial parameters of the components in the mixture theory (or in the two-temperature model) can also be derived \cite{forest2010some}.
In general, these material properties depend on the temperature, on phase transition effects, etc. However, in the following analysis we neglect such dependences, as is usually done in simplified analytical approaches \cite{gladush2011physics}. Thus, we use effective material properties averaged over the relevant temperature range.
The appropriate initial and boundary conditions of the considered model can be deduced from an analysis of the balance equations or by using a variational approach \cite{nguyen2005non,lurie2019variational}.
\subsection{Quasi-stationary problem}
\label{sec1b}
Considering moving heat source problems, we assume that the volumetric source $\hat q$ moves along the $Ox$ axis with constant velocity $v$. It is then convenient to define a coordinate system $O\xi yz$ that moves with the heat source, such that:
\begin{equation}
\label{X}
\xi = x - vt
\end{equation}
Rewriting the governing equation \eqref{HE} in the moving frame according to the standard rules ($\dot \theta \rightarrow - v \theta_{,\xi} + \dot\theta$,\, $\ddot\theta \rightarrow v^2 \theta_{,\xi\xi} - 2v\dot\theta_{,\xi}+ \ddot\theta$,\, $\theta_{,x}\rightarrow\theta_{,\xi}$, etc.) reduces it to the following form:
\begin{equation}
\label{HExi}
(1 - g^2 \Delta)\dot \theta
- v(1 - g^2 \Delta) \theta_{,\xi}
- 2 \tau v \dot \theta_{,\xi}
+ v^2 \tau \theta_{,\xi\xi}
+ \tau \ddot \theta
= \kappa (1 - l^2 \Delta) \Delta \theta + \frac{1}{c\rho}\hat q
\end{equation}
For sufficiently long bodies, a quasi-stationary regime is established and the temperature distribution in the moving frame becomes independent of time. The corresponding steady-state form of equation \eqref{HExi} can be represented as follows:
\begin{equation}
\label{HEss}
(1 - l^2 \Delta) \Delta \theta
+ \frac{v}{\kappa}(1 - g^2 \Delta) \theta_{,\xi}
- \frac{v^2 \tau}{\kappa} \theta_{,\xi\xi}
+ \frac{1}{k}\hat q
= 0
\end{equation}
where $k$ is the thermal conductivity coefficient ($\kappa= k /(c\rho)$) and a comma denotes differentiation with respect to the corresponding spatial coordinate.
Note that if we set the length scale parameters and the relaxation time to zero, $l=g=0$, $\tau=0$, then equation \eqref{HEss} reduces to the classical equation initially used by Rosenthal and other researchers for moving source problems \cite{rosenthal1946theory,panas2014moving}.
\subsection{Simplified gradient model and its general solution}
\label{sec1c}
Simulations with equation \eqref{HEss}, or even with its initial transient variant \eqref{HE}, can be performed using numerical methods. However, it is difficult to find a general solution and Green functions for these equations in closed form. Using the Hankel transform, Eq. \eqref{HEss} can be reduced to a fourth-order ordinary differential equation, which can be easily solved; however, the inverse transform cannot be performed analytically (such an approach has been used in Ref. \cite{ramu2015compact}). Direct application of the standard Rosenthal substitution $\theta(\xi,y,z) = \phi(\xi,y,z) e^{-\frac{v}{2\kappa}\xi}$ to the quasi-stationary problem \eqref{HEss} also does not make it easier to find the solution.
Thus, in the present work we propose to use additional constitutive assumptions. This allows us to obtain an approximate variant of the model that contains a single additional length scale parameter and that can be solved in closed form for the considered problems. Namely, we assume the following relations for the dissipation parameter and the relaxation time:
\begin{equation}
\label{as}
g^2 =2l^2, \quad \tau = l^2/\kappa
\end{equation}
Substituting \eqref{as} into \eqref{HEss}, we obtain the following simplified variant of the model:
\begin{equation}
\label{HEsss}
(1 - l^2 \Delta) \Delta \theta
+ \frac{v}{\kappa}(1 - 2l^2 \Delta) \theta_{,\xi}
- l^2\frac{v^2}{\kappa^2} \theta_{,\xi\xi}
+ \frac{1}{k}\hat q
= 0
\end{equation}
It is not obvious why the solution of model \eqref{HEsss} should be simpler than that of \eqref{HEss}. However, it can be checked by direct substitution that equation \eqref{HEsss} can be reformulated in the following form with linear differential operators:
\begin{equation}
\begin{aligned}
\label{HEo}
\mathcal H [\mathcal L [\theta]] + \frac{1}{k}\hat q = 0 \quad
\Longleftrightarrow \quad
(1 - l^2\Delta - l^2\frac{v}{\kappa}\partial_{\xi} )(\Delta\theta + \frac{v}{\kappa} \theta_{,\xi})
+ \frac{1}{k}\hat q = 0
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\label{HEod}
\mathcal L [...] = \Delta + \frac{v}{\kappa} \partial_{\xi},\qquad
\mathcal H [...] = 1 - l^2 \mathcal L [...] = 1 - l^2\Delta - l^2\frac{v}{\kappa} \partial_{\xi}
\end{aligned}
\end{equation}
and $\partial_{\xi}$ denotes differentiation with respect to the coordinate $\xi$.
Then, the solution of equation \eqref{HEsss} (or, equivalently, of \eqref{HEo}) can be represented as the sum of a classical part $\theta_0$ and an additional gradient part $\theta_1$:
\begin{equation}
\label{GS}
\theta = \theta_0 + \theta_1,
\end{equation}
where these parts of the solution obey the corresponding classical and modified gradient second-order differential equations:
\begin{equation}
\label{GSo}
\mathcal L[\theta_0] + \frac{1}{k}\hat q = 0 \qquad \text{and} \qquad
\frac{1}{l^2}\mathcal H[\theta_1] + \frac{1}{k}\hat q = 0
\end{equation}
Substituting solution \eqref{GS} into \eqref{HEo} and taking into account \eqref{GSo} (together with the fact that the constant-coefficient operators $\mathcal H$ and $\mathcal L$ commute), we can check that the governing equation of the model is satisfied identically:
\begin{equation}
\begin{aligned}
\label{GSeq}
\mathcal H [\mathcal L [\theta]] + \frac{1}{k}\hat q
&= \mathcal H [\mathcal L [\theta_0 + \theta_1]] + \frac{1}{k}\hat q \\
&= \mathcal H [\mathcal L [\theta_0]] + \mathcal L [\mathcal H [\theta_1]]
+ \frac{1}{k}\hat q\\
&= -\mathcal H \left[\frac{1}{k}\hat q\right]
- \mathcal L \left[l^2\frac{1}{k}\hat q\right]
+ \frac{1}{k}\hat q\\
&= -\frac{1}{k}\hat q
+ l^2 \mathcal L \left[\frac{1}{k}\hat q\right]
- \mathcal L \left[l^2\frac{1}{k}\hat q\right]
+ \frac{1}{k}\hat q \equiv 0
\end{aligned}
\end{equation}
Note that the solutions of equations \eqref{GSo} can easily be found using the standard approaches of classical moving heat source problems. Thus, the representation \eqref{GS}, \eqref{GSo} gives us a tool for the construction of general and particular solutions of the proposed simplified gradient model \eqref{HEsss}. We can also use it to find the Green functions of the model.
The proposed approach to the development of simplified models was initially used in gradient elasticity \cite{askes2011gradient}, in which similar constitutive assumptions allow one to obtain an operator form of the equilibrium equations as a superposition of the classical elasticity operator and the Helmholtz or generalized Helmholtz operators \cite{askes2011gradient,lurie2006interphase}. A representation of the gradient elasticity solution through gradient and classical parts (similar to \eqref{GS}) has been used, e.g., in micromechanics problems \cite{lurie2006interphase,solyaev2019three}.
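The operator factorization \eqref{HEo} can also be verified symbolically. The following sketch (Python with sympy; all names are ours) applies the composition $\mathcal H[\mathcal L[\cdot]]$ to a generic field and checks that it reproduces the left-hand side of \eqref{HEsss}:

```python
import sympy as sp

# coordinates and material parameters
xi, y, z = sp.symbols('xi y z')
v, kappa, l = sp.symbols('v kappa l', positive=True)
f = sp.Function('f')(xi, y, z)

lap = lambda u: sp.diff(u, xi, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
L = lambda u: lap(u) + v/kappa*sp.diff(u, xi)   # classical operator, Eq. (HEod)
H = lambda u: u - l**2*L(u)                     # modified Helmholtz operator

# left-hand side of the simplified model (HEsss), without the source term
lhs = (lap(f) - l**2*lap(lap(f))
       + v/kappa*sp.diff(f, xi)
       - 2*l**2*v/kappa*sp.diff(lap(f), xi)
       - l**2*v**2/kappa**2*sp.diff(f, xi, 2))

assert sp.simplify(sp.expand(H(L(f)) - lhs)) == 0  # the two operators coincide
```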
\section{Gradient models of moving heat sources}
\label{sec3}
\subsection{Point and line heat sources models}
\label{sec3a}
Point and line heat source models are usually used for the analysis of the so-called conductive and keyhole modes of melt pool formation, respectively \cite{patel2020melting}.
Such sources can be modeled using spatial Dirac delta functions as the right-hand sides of the governing equations \cite{panas2014moving}:
\begin{equation}
\begin{aligned}
\label{RP}
\hat q_{point} &= Q\, \delta(\xi)\delta(y)\delta(z) = Q\, \delta(\textbf r)\\
\hat q_{line} &= Q\, \delta(\xi)\delta(y)\\
\end{aligned}
\end{equation}
where $\delta(...)$ is the Dirac delta function, $Q$ is the power magnitude and $\textbf r$ is the position vector in the moving coordinate system $O\xi yz$.
Thus, we should find the solution of equation \eqref{HEo} for the volumetric power sources \eqref{RP}, and we start with the point heat source problem. Based on the proposed approach, we should solve the following equations to find the classical part $\theta_0$ and the gradient part $\theta_1$ of the solution:
\begin{equation}
\begin{aligned}
\label{AP1}
\mathcal L[\theta_0] = - \frac{Q}{k} \delta(\textbf r)
\quad\Longleftrightarrow\quad
\Delta\theta_0 + \frac{v}{\kappa} (\theta_0)_{,\xi} = - \frac{Q}{k}\delta(\textbf r)
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{AP2}
\frac{1}{l^2}\mathcal H[\theta_1] = - \frac{Q}{k} \delta(\textbf r)
\quad\Longleftrightarrow\quad
\frac{1}{l^2}\theta_1 - \Delta\theta_1 - \frac{v}{\kappa} (\theta_1)_{,\xi} =
- \frac{Q}{k} \delta(\textbf r)
\end{aligned}
\end{equation}
Equation \eqref{AP1} corresponds to the classical model of the moving point heat source and its solution is the widely known Rosenthal formula \cite{rosenthal1946theory,panas2014moving}:
\begin{equation}
\begin{aligned}
\label{AP1sol}
\theta_0 = \frac{Q}{4\pi k R} \text{e}^{-\frac{v}{2\kappa}(\xi + R)}
\end{aligned}
\end{equation}
where $R=|\textbf r| = \sqrt{\xi^2+y^2+z^2}$.
The modified gradient equation \eqref{AP2} is no more complex than the classical one, and it can also be solved following the approach proposed by Rosenthal \cite{rosenthal1946theory,panas2014moving} for the derivation of the classical solution \eqref{AP1sol}. Accordingly, we assume that the gradient part of the solution can be represented as follows:
\begin{equation}
\begin{aligned}
\label{AP2as}
\theta_1(\textbf r) = \phi(\textbf r) \text e^{-\frac{v}{2\kappa} \xi}
\end{aligned}
\end{equation}
Substituting \eqref{AP2as} into \eqref{AP2}, we obtain the following Helmholtz-type equation for the function $\phi(\textbf r)$:
\begin{equation*}
\frac{1}{l^2}\phi
- \Delta\phi + \underline{\frac{v}{\kappa} \phi_{,\xi}} - \frac{v^2}{4\kappa^2}\phi
- \underline{\frac{v}{\kappa} \phi_{,\xi}} + \frac{v^2}{2\kappa^2} \phi =
- \frac{Q}{k} \delta(\textbf r) \text e^{\frac{v}{2\kappa} \xi}
\end{equation*}
\begin{equation}
\label{AP2f}
\Longrightarrow\quad \Delta\phi - \left(\frac{1}{l^2} + \frac{v^2}{4\kappa^2}\right)\phi
= \frac{Q}{k} \delta(\textbf r)
\end{equation}
where the underlined terms cancel, and we take into account that $\delta(\textbf r) \text e^{\frac{v}{2\kappa} \xi} = \delta(\textbf r)$ (see Ref. \cite{eagar1983temperature}).
The main advantage of substitution \eqref{AP2as} is that it eliminates the first-order derivative of the function (the corresponding terms cancel) and reduces the initial differential equation to the standard radially symmetric Helmholtz equation. Note that assumption \eqref{AP2as} is effective only within the presented simplified variant of the gradient theory, with the relations between the additional gradient parameters given by \eqref{as}. In the case of the general model \eqref{HEss}, or for models with other constitutive assumptions instead of \eqref{as}, the substitution \eqref{AP2as} does not work, i.e. the first-order and, moreover, the second-order derivatives of $\phi(\textbf r)$ remain in the final equation, which complicates the analytical solution. Thus, the physical meaning of the constitutive assumptions \eqref{as} is that there exists a similarity between the gradient and classical parts of the solution, such that the effects of heat source movement can be represented by the same decaying factor $\text e^{-\frac{v}{2\kappa} \xi}$, and the remaining part of the problem reduces to the solution of Helmholtz equations.
Based on \eqref{AP2f} and \eqref{AP2as}, we then find the gradient part of the solution as follows:
\begin{equation}
\label{AP2sol}
\begin{aligned}
\phi(\textbf r) &= - \frac{Q}{4\pi k R} \text{e}^{-\sqrt{\frac{1}{l^2}+\frac{v^2}{4\kappa^2}}R}\\[5pt]
\theta_1(\textbf r) &= - \frac{Q}{4\pi k R} \text{e}^{-\frac{v}{2\kappa} \xi-\sqrt{\frac{1}{l^2}+\frac{v^2}{4\kappa^2}}R}
\end{aligned}
\end{equation}
and the total solution for the temperature distribution $\theta=\theta_0+\theta_1$ becomes:
\begin{equation}
\begin{aligned}
\label{GFp}
\theta(\textbf r) &= \frac{Q}{4\pi k} \frac{\text{e}^{-\frac{v}{2\kappa}\xi}}{R}
\left(
\text{e}^{-\frac{v}{2\kappa}R} -
\text{e}^{-\sqrt{\frac{1}{l^2}+\frac{v^2}{4\kappa^2}}R}
\right)
\end{aligned}
\end{equation}
For powder bed fusion processes, the problem of a heat source that moves over a half-space is of interest. The solution of this problem can be approximated by doubling the infinite-space solution, assuming a thermal insulation condition at the free surface (i.e. neglecting convective and radiative heat transfer, as is usually done in similar classical models \cite{panas2014moving}). Then, the dimensionless form of the solution for a point source that moves over the half-space can be defined based on \eqref{GFp} as follows:
\begin{equation}
\begin{aligned}
\label{GFpn}
\bar \theta_p(\bar{\textbf r}) &= n \frac{\text{e}^{-\bar \xi}}{\bar R}
\left(
\text{e}^{-\bar R} -
\text{e}^{-\sqrt{1+\, \text{Pe}_m^{-2}}\,\bar R}
\right)
\end{aligned}
\end{equation}
where $\bar \theta_p(\bar{\textbf r}) = \theta(\bar{\textbf r})/(T_m-T_i)$ is the rise of temperature normalized with respect to some critical value that can be related, e.g., to the material melting point $T_m$;
$n = \frac{Q v }{4\pi \kappa^2 \rho c(T_m-T_i)}$ is the so-called operating parameter \cite{eagar1983temperature}; $\bar \xi = \frac{v \xi}{2\kappa}$, $\bar R = \frac{v R}{2\kappa}$, $\bar{\textbf r} = \frac{v}{2\kappa}\textbf r$ are the non-dimensional coordinates.
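The dimensionless solution \eqref{GFpn} is straightforward to evaluate numerically; a minimal sketch in Python (function and argument names are ours):

```python
import math

def theta_p(xi, y, z, n=1.0, pe_m=0.5):
    """Dimensionless temperature rise of the moving point source, Eq. (GFpn)."""
    R = math.sqrt(xi*xi + y*y + z*z)
    s = math.sqrt(1.0 + pe_m**-2)
    return n * math.exp(-xi) / R * (math.exp(-R) - math.exp(-s*R))
```

Unlike in the classical Rosenthal solution, the bracketed difference vanishes linearly as $\bar R \to 0$, so the evaluated temperature stays finite at the source point.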
The main feature of solution \eqref{GFpn} is the presence of an additional non-dimensional group of parameters, which we define here as the micro-scale Peclet number:
\begin{equation}
\label{pem}
\text{Pe}_m = \frac{v l}{2\kappa}
\end{equation}
In classical models of heat and mass transfer, the Peclet number (Pe) defines the ratio between the rates of convection and diffusion. In moving heat source problems, the Peclet number is usually introduced as the ratio of the heat diffusion characteristic time $L^2/\kappa$ to the transit time of the heat source $2L/v$, i.e. Pe $=vL/(2\kappa)$, where $L$ is some macroscopic characteristic length scale of the problem \cite{hou2000general}.
In the present case, in \eqref{GFpn}, \eqref{pem}, the micro-scale Peclet number is defined through the internal length scale parameter of the material $l$. This additional parameter of the gradient theory controls the intensity of non-classical effects in the gradient solutions and the amount of correction to the classical predictions for the temperature distribution and, e.g., for the melt pool shape and size.
For small values ($l \rightarrow 0$ and $\text{Pe}_m \rightarrow 0$) we recover the classical theory and the classical Rosenthal solution for the moving heat source \cite{rosenthal1946theory,panas2014moving}. For large values of $l$ and $\text{Pe}_m$ the gradient effects become pronounced.
Note that such a non-dimensional parameter $\text{Pe}_m$ is specific to heat transfer problems. For example, in the inclusion problems of gradient elasticity and gradient hydrodynamics, the corresponding non-dimensional group is defined as the ratio of the inclusion size to the length scale parameter \cite{solyaev2019three}. In wave propagation problems, the length scale parameter enters the gradient solutions as a ratio with the wavelength \cite{solyaev2021electric}.
In contrast to the classical Rosenthal solution, the gradient solution \eqref{GFpn} does not contain a singularity, and the maximum value of the temperature rise is given by:
\begin{equation}
\begin{aligned}
\label{GFpmax}
\bar \theta_{p,max} =
\lim\limits_{\textbf r \rightarrow 0}\bar \theta_p = n \left(
\sqrt{1+\, \text{Pe}_m^{-2}} - 1\right)
\end{aligned}
\end{equation}
Regularization of classical singular solutions is typical for gradient theories \cite{lazar2020second,lazar2013fundamentals}. In the present case, formula \eqref{GFpmax} may not be very effective for the description of real fusion processes, since in the area of maximum heating additional effects arise related to melting, partial evaporation, melt pool hydrodynamics, etc. However, the estimate for the maximum temperature \eqref{GFpmax} can be useful when the heat input power is not very high, e.g. in some laser treatment applications. We can also use Eq. \eqref{GFpmax} to identify the value of the material's length scale parameter, in the following way. From experiments one can find the minimum value of the operating parameter $n$ that is needed for the occurrence of melting (see, e.g., Ref. \cite{li2004comparison}). This situation corresponds to the case when the normalized temperature rise equals one, i.e. $n_{min} = n: \,\, \bar \theta_{p,max}=1$.
Then, based on Eq. \eqref{GFpmax}, we can identify the value of the micro-scale Peclet number as follows:
\begin{equation}
\begin{aligned}
\label{Pei}
\text{Pe}_m = \frac{n_{min}}{\sqrt{1+2n_{min}}}
\end{aligned}
\end{equation}
Therefore, the classical case with Pe$_m=0$ corresponds to a zero value of the minimum operating parameter, $n_{min}=0$, i.e. any infinitesimal external heat input applied at a single (moving) point produces some melted area. In contrast, the gradient model implies that the material can sustain some amount of concentrated heat input without melting.
Returning to the dimensional definitions in \eqref{Pei}, after some algebraic simplifications we obtain a closed-form estimate for the experimental identification of the material's length scale parameter:
\begin{equation}
\begin{aligned}
\label{li}
l = \frac{Q_{min}}{2\pi k(T_m-T_i) \sqrt{1+\frac{Q_{min}v}{2\pi k (T_m-T_i) \kappa}}}
\end{aligned}
\end{equation}
where $Q_{min} = \lambda P_{min}$ is the absorbed power magnitude; $\lambda$ is the absorptivity of the material; $P_{min}$ is the minimum laser power needed for the occurrence of melting in a given material with melting point $T_m$ and with properties $k$ and $\kappa$ (averaged over the temperature range $[T_i,T_m]$) at the laser scanning speed $v$.
The stability of the identified values of the parameter $l$ across experiments with different scanning speeds $v$ and corresponding values of $P_{min}$ can be used to validate the presented gradient model. The solution for stationary sources can also be easily obtained from the developed solution by setting $v=0$.
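Equations \eqref{Pei} and \eqref{li} are easy to assemble into a small identification routine. In the sketch below (Python; the function names and sample property values are ours, chosen only for illustration), the dimensional estimate \eqref{li} is cross-checked against the dimensionless relation $\text{Pe}_m = v l/(2\kappa)$:

```python
import math

def pe_m_from_nmin(n_min):
    # Eq. (Pei): micro-scale Peclet number from the minimum operating parameter
    return n_min / math.sqrt(1.0 + 2.0*n_min)

def length_scale(Q_min, v, k, kappa, dT):
    # Eq. (li): length scale parameter from the minimum absorbed power Q_min
    return Q_min / (2.0*math.pi*k*dT*math.sqrt(1.0 + Q_min*v/(2.0*math.pi*k*dT*kappa)))

# illustrative numbers only (not measured data): W, m/s, W/(m K), m^2/s, K
Q_min, v, k, kappa, dT = 50.0, 0.5, 20.0, 5e-6, 3000.0
n_min = Q_min*v/(4.0*math.pi*kappa*k*dT)   # operating parameter at the melting threshold
l = length_scale(Q_min, v, k, kappa, dT)

# both routes must give the same micro-scale Peclet number
assert abs(v*l/(2.0*kappa) - pe_m_from_nmin(n_min)) < 1e-12
```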
Based on solution \eqref{GFpn}, we can also evaluate the cooling rate around the moving heat source. This quantity can be related to the thermal gradient in the direction of motion as follows \cite{cline1977heat}:
\begin{equation}
\begin{aligned}
\label{GFpmaxcr}
\frac{\partial \bar \theta_p}{\partial t} = - v \frac{\partial \bar \theta_p}{\partial \bar\xi}
= v \left( \left(1+\frac{\bar\xi}{\bar R^2}+\frac{\bar\xi}{\bar R}\right) \bar \theta_p(\bar{\textbf r})
+ \frac{\bar\xi}{\bar R}
\left(\sqrt{1+\text{Pe}_m^{-2}} - 1\right) \bar \theta_1(\bar{\textbf r})
\right)
\end{aligned}
\end{equation}
where $\bar \theta_1$ is the non-dimensional gradient part of the solution \eqref{AP2sol}.
Considering the problem with the line source $\hat q_{line}$ (see \eqref{RP}), we can reduce it to a two-dimensional statement and solve it in polar coordinates. The solution of this problem is very similar to the previous one, and we give it without derivation:
\begin{equation}
\begin{aligned}
\label{GFl}
\theta_l(\textbf r) &= \frac{Q}{2\pi k} \text{e}^{-\frac{v}{2\kappa}\xi}
\left(
K_0\left(\frac{v r}{2\kappa}\right) -
K_0\left(r\,\sqrt{\frac{1}{l^2}+\frac{v^2}{4\kappa^2}}\right)
\right)
\end{aligned}
\end{equation}
where $K_0$ is the zero-order modified Bessel function of the second kind and $r=\sqrt{\xi^2+y^2}$ is the radial distance from the line source to the given point in polar coordinates.
The dimensionless form of solution \eqref{GFl} is given by:
\begin{equation}
\begin{aligned}
\label{GFln}
\bar \theta_l(\bar{\textbf r}) &= n \, \text{e}^{-\bar \xi}
\left(
K_0\left(\bar r\right) -
K_0\left(\bar r\,\sqrt{1+\text{Pe}_m^{-2}}\right)
\right)
\end{aligned}
\end{equation}
As before, reduction to the classical solution $\bar \theta_{l,class}(\bar{\textbf r}) = n\, \text{e}^{-\bar \xi}K_0\left(\bar r\right)$ \cite{panas2014moving} is realized for a zero value of the Pe$_m$ number. The maximum temperature rise is also finite in the gradient solution \eqref{GFln}; however, the location of this maximum shifts away from the origin of the coordinate system (this will be illustrated in the Results section). An approximate value of the maximum temperature rise for not very high values of Pe$_m$ can be obtained from the series expansion of \eqref{GFln} at the origin of the coordinate system, and it is given by:
\begin{equation}
\begin{aligned}
\label{GFlns}
\bar \theta_{l,max}&= \frac{n}{2} \log{\left(1+\text{Pe}_m^{-2}\right)}
\end{aligned}
\end{equation}
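Solution \eqref{GFln} can likewise be evaluated with standard special-function libraries. A sketch in Python using scipy's modified Bessel function (names are ours), where the near-origin value reproduces the logarithmic estimate above:

```python
import math
from scipy.special import k0  # modified Bessel function K_0

def theta_l(xi, y, n=1.0, pe_m=0.5):
    """Dimensionless line-source solution, Eq. (GFln)."""
    r = math.sqrt(xi*xi + y*y)
    s = math.sqrt(1.0 + pe_m**-2)
    return n * math.exp(-xi) * (k0(r) - k0(s*r))

# near the origin, K0(r) - K0(s r) -> log(s), i.e. (1/2) log(1 + Pe_m^-2),
# so the divergences of the two Bessel terms cancel and the value is finite
```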
\subsection{Gaussian heat source}
The point source model is a rough approximation for the real distribution of heat flux around the laser spot. More realistic are models of circular heat sources with a Gaussian distribution of heat flux \cite{eagar1983temperature, yan2018review}. Solutions for the temperature rise in such models can be found by the Green function method. The appropriate Green function for the considered model can be obtained from the point source solution \eqref{GFp} following the classical approach described, e.g., in Refs. \cite{eagar1983temperature, panas2014moving}. One should consider the problem with two point sources placed symmetrically with respect to the plane $z=0$. Assuming that the distance between these sources tends to zero, one obtains the following expression for the Green function based on \eqref{GFp}:
\begin{equation}
\begin{aligned}
\label{GFpc}
G(\textbf r,\textbf r_0) &= \frac{1}{2\pi k }
\frac{
\text{e}^{-\frac{v}{2\kappa}(\xi-\xi_0)}}{R_0}
\left(
\text{e}^{-\frac{v}{2\kappa}R_0} -
\text{e}^{-\sqrt{\frac{1}{l^2}+\frac{v^2}{4\kappa^2}}\,R_0}
\right)
\end{aligned}
\end{equation}
where $R_0=|\textbf r-\textbf r_0| = \sqrt{(\xi-\xi_0)^2+(y-y_0)^2+z^2}$
is the distance between the actual point under consideration $\textbf r=\{\xi,y,z\}$ and the location of the point source on the surface of the half-space $\textbf r_0=\{\xi_0,y_0,0\}$; the magnitude of the power input is set to $Q=1$.
The classical Green function follows from Eq. \eqref{GFpc} in the limit $l\rightarrow0$.
The energy distribution of the Gaussian laser beam is defined by \cite{eagar1983temperature}:
\begin{equation}
\begin{aligned}
\label{gaus}
\hat q_g(\xi,y) = \frac{Q}{2\pi a^2 }\,
\text e^{-\frac{\xi^2+y^2}{2a^2}}
\end{aligned}
\end{equation}
where $a$ is the characteristic laser beam radius and $Q = \lambda P$ is the absorbed power, with $\lambda$ the absorptivity of the material and $P$ the laser power.
Then, the temperature rise can be evaluated through the convolution of the Green function \eqref{GFpc} with the distribution \eqref{gaus}. The final solution can be presented in the following dimensionless form in the cylindrical coordinate system $\textbf r = \{r, \phi, z\}$:
\begin{equation}
\begin{aligned}
\label{conv}
&\bar\theta_g(\bar{\textbf r})=
\int\limits_{0}^{\bar r_{max}}
\int\limits_{0}^{2\pi}
\hat q_g(\bar\xi_0, \bar y_0) \,G(\bar{\textbf r},\bar{\textbf r}_0) \,\bar r_0\,
d\phi_0d\bar r_0\\
& =
\frac{n}{\pi \text{Pe}^{2}}
\int\limits_{0}^{\bar r_{max}}
\int\limits_{0}^{2\pi}
\frac{
\text{e}^{-(\bar r\cos\phi-\bar r_0\cos\phi_0)
- \bar r_0^2/(2\text{Pe}^{2})}}{\bar R_0}
\left(
\text{e}^{-\bar R_0} -
\text{e}^{-\sqrt{1+\text{Pe}_m^{-2}}\,\bar R_0}
\right)
\bar r_0
d\phi_0d\bar r_0
\end{aligned}
\end{equation}
where
$\bar r_0 = \frac{v}{2\kappa} |\textbf r_0| =\frac{v}{2\kappa} \sqrt{\xi_0^2+y_0^2}$ is the non-dimensional radial coordinate of the point heat source; and Pe $=\frac{va}{2\kappa}$ is the standard definition for the Peclet number of the problem.
In the general case, the upper limit of integration along the radial coordinate in \eqref{conv} should be $\bar r_{max}=\frac{v}{2\kappa}r_{max}=\infty$; however, we can take into account that the energy of a Gaussian laser beam is located mostly inside a circular area of radius $5a$ \cite{li2004comparison}. Then, in numerical calculations we can use the non-dimensional upper limit $\bar r_{max}= 5$Pe.
A peculiarity of this integral is that it does not contain singularities, and it can be evaluated numerically even more easily than the similar one in classical models (see, e.g., Ref. \cite{li2004comparison}). The integral contains the macro-scale and micro-scale dimensionless parameters Pe and Pe$_m$. The relation between these parameters defines the ratio between the beam radius and the length scale parameter: Pe/Pe$_m = a/l$. Notably, the dimensionless form of the solution cannot be simplified so that it contains a single non-dimensional group of parameters.
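Since integral \eqref{conv} is free of singularities, it can be evaluated with a generic quadrature routine. A sketch in Python (scipy; the function names and the cutoff-stability check are ours):

```python
import math
from scipy.integrate import dblquad

def theta_g(xi, y, z, n=1.0, pe=0.5, pe_m=0.5, r_cut=5.0):
    """Dimensionless Gaussian-source solution, Eq. (conv), by direct quadrature."""
    s = math.sqrt(1.0 + pe_m**-2)
    r = math.sqrt(xi*xi + y*y)
    phi = math.atan2(y, xi)

    def integrand(phi0, r0):
        R0 = math.sqrt(r*r + r0*r0 - 2.0*r*r0*math.cos(phi - phi0) + z*z)
        if R0 < 1e-12:   # integrand stays finite; guard against 0/0 at isolated points
            return 0.0
        conv = math.exp(-(r*math.cos(phi) - r0*math.cos(phi0)) - r0*r0/(2.0*pe*pe))
        return conv * (math.exp(-R0) - math.exp(-s*R0)) / R0 * r0

    # outer integral over r0 in [0, r_cut*Pe], inner over phi0 in [0, 2*pi]
    val, _ = dblquad(integrand, 0.0, r_cut*pe,
                     lambda r0: 0.0, lambda r0: 2.0*math.pi)
    return n/(math.pi*pe*pe) * val
```

The cutoff $\bar r_{max}= 5$Pe can be checked directly by increasing it and observing that the result is unchanged to within the quadrature tolerance.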
\section{Results and discussion}
\label{res}
In this section we illustrate the derived solutions and give an example of the identification of the length scale parameter of the model. All calculations are performed with the dimensionless forms of the solutions. Unless otherwise stated, we use the unit value of the operating parameter $n=1$.
\begin{figure}[!t]
\textbf a
\includegraphics[width=0.45\linewidth]{1a}\quad
\textbf b
\includegraphics[width=0.45\linewidth]{1b}\\[10pt]
\textbf c
\includegraphics[width=0.45\linewidth]{1c}\quad
\textbf d
\includegraphics[width=0.45\linewidth]{1d}
\caption{ \textbf a: Distribution of the dimensionless temperature rise in the classical (transparent color) and gradient (blue color) solutions due to the action of a moving point heat source at the half-space surface $\bar z=0$, \textbf b: influence of the micro-scale Peclet number on the temperature profile along the direction of movement, \textbf c: isotherms in the heat affected zone at the surface $\bar z=0$, \textbf d: change of the melt pool shape with increasing micro-scale Peclet number.}
\label{fig1}
\end{figure}
A comparison between the classical and gradient \eqref{GFpn} solutions for the moving point heat source is presented in Fig. \ref{fig1}, where we show the distribution of the temperature rise at the surface $z=0$ (Fig. \ref{fig1}a, b). It is seen that, in comparison with the standard Rosenthal solution, the gradient solution predicts a lower temperature rise around the point of action of the heat source. Far from this point, both solutions have similar asymptotic behavior, and the temperature profiles almost coincide at distances $\bar\xi<-2$ behind the point source (Fig. \ref{fig1}b). In the gradient solution, there is a rapid decrease of the maximum temperature rise $\bar\theta_{p,max}$ with increasing micro-scale Peclet number Pe$_m$ (Fig. \ref{fig1}b, black lines). In accordance with Eq. \eqref{GFpmax}, the value of $\bar\theta_{p,max}$ reduces from infinity (classical solution, red line in Fig. \ref{fig1}b) to unity when the non-dimensional number Pe$_m$ changes from 0 to $\sim$0.5.
The shapes of the heat affected zone and of the melt pool are also significantly affected by the gradient effects. In Figs. \ref{fig1}c, d we present the comparison between the isotherms of the classical and gradient solutions at the surface of the half-space $\bar z =0$ and in the sagittal plane $\bar y =0$, respectively. It is seen that the width and the length of the heat affected zone become smaller in the gradient solution (Fig. \ref{fig1}c). The melt pool shape corresponds to the isotherm $\bar\theta=1$ in Figs. \ref{fig1}c, d. It is seen that the melt pool size reduces with an increase of the micro-scale Peclet number (Fig. \ref{fig1}d). Thus, when the material's internal characteristic length scale $l$ becomes large enough, the solution predicts a considerable decrease of the melt pool dimensions. The melt pool depth $\bar z_{max}$ can be evaluated from the derived solutions as the maximum value of the coordinate $\bar z$ that corresponds to the isotherm $\bar\theta=1$ in the sagittal plane $\bar y =0$, i.e.:
\begin{equation}
\begin{aligned}
\label{zmax}
\bar z_{max} = \{\max |\bar z|: \bar\theta(\bar \xi,0,\bar z) = 1\}
\end{aligned}
\end{equation}
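Problem \eqref{zmax} reduces to one-dimensional root finding along the depth coordinate. A sketch in Python (scipy; the scanning grid, bracket, and parameter values are ours):

```python
import math
from scipy.optimize import brentq

def theta_p(xi, z, n, pe_m):
    # dimensionless point-source solution (GFpn) in the sagittal plane y = 0
    R = math.sqrt(xi*xi + z*z)
    s = math.sqrt(1.0 + pe_m**-2)
    return n * math.exp(-xi) / R * (math.exp(-R) - math.exp(-s*R))

def melt_depth(n, pe_m, z_far=50.0):
    """Eq. (zmax): deepest point of the theta = 1 isotherm at y = 0."""
    z_max = 0.0
    for i in range(401):
        xi = -4.0 + 8.0*i/400.0          # scan along the motion axis
        f = lambda zz: theta_p(xi, zz, n, pe_m) - 1.0
        if f(1e-9) > 0.0 > f(z_far):     # isotherm is crossed below this xi
            z_max = max(z_max, brentq(f, 1e-9, z_far))
    return z_max
```

For a fixed operating parameter, larger values of Pe$_m$ give a shallower pool, consistent with the reduction of the melt pool size discussed in this section.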
\begin{figure}
\textbf a
\includegraphics[width=0.3\linewidth]{2a}
\textbf b
\includegraphics[width=0.3\linewidth]{2b}
\textbf c
\includegraphics[width=0.31\linewidth]{2c}
\caption{Dependence of the melt pool depth on the micro-scale Peclet number (\textbf a) and on the operating parameter (\textbf b); and the cooling rate distribution behind the heat source at different depths $\bar z$ (\textbf c). Red and black colors correspond to classical and gradient solutions, respectively, in these plots.}
\label{fig2}
\end{figure}
This problem \eqref{zmax} can be solved numerically. For the classical solution corresponding analytical approximations have been also presented \cite{ramos2019analytical}. In Fig. \ref{fig2} we show the evaluated dependences of the melt pool depth on the micro-scale Peclet number and on the operating parameter $n$. It is seen, that with increase of Pe$_m$ the melt pool depth reduces down to zero, while for small values of Pe$_m$ gradient solution asymptotically approaches some classical values (Fig. \ref{fig2}a). With increase of the operating parameter the depth of melt pool become larger. However, there exist some ranges of the heat input power (operating parameter defines this power in the dimensionless solution) when the melt pool does not arise (Fig. \ref{fig2}b). And in opposite to classical solution for the point heat source, in the gradient solution the dependence $\bar z (n)$ starts from some non-zero value and the material main sustain some amount of the concentrated heat input without melting (Fig. \ref{fig2}b, black curves). Corresponding minimum value of the heat source power for the given value of the micro-scale Peclet number is defined by the equation \eqref{Pei}.
Thus, from experiments with laser melting of powder materials one may measure the dependence of the melt pool depth on the operating parameter and overlay the obtained experimental data with plots similar to those presented in Fig. \ref{fig2}b. In such a way the presented model can be validated and the length scale parameter of the model can be identified. For the welding of solid materials such experiments have been widely performed, and the models of moving heat sources with different distributions of heat flux over some finite-size area (as the generalization of the point source model) were validated \cite{eagar1983temperature}. However, for solid materials the influence of the gradient effects may not be as pronounced. We suppose that the most significant clarifications within the gradient theories can be obtained for the fusion processes of powder materials, in which the microstructural effects play a significant role \cite{barchiesi2021granular}.
Another approach for validation of the presented gradient models can be related to the measurement of the cooling rate during the laser melting of powders. In Fig. \ref{fig2}c it is shown that the gradient solution \eqref{GFpmaxcr} predicts a finite level of the cooling rate at the point of action of the heat source. Note that the surface and subsurface cooling rates can be measured experimentally \cite{pauly2018experimental}. Thus, the simplified gradient model, even for the point source, can be used for the processing of the corresponding experimental data.
\begin{figure}
\textbf a
\includegraphics[width=0.4\linewidth]{3a}\,\,
\textbf b
\includegraphics[width=0.5\linewidth]{3b}
\caption{Illustrations for the two-dimensional solution of the moving line heat source problem, \textbf a: temperature rise along the movement direction, \textbf b: isotherms of the solution}
\label{fig3}
\end{figure}
The gradient solution for the moving line source is illustrated in Fig. \ref{fig3}. Here it is seen that, in contrast to the classical solution, the maximum temperature rise arises not at the centre of the coordinate system, but with some shift opposite to the direction of movement (Fig. \ref{fig3}, black lines). This is a typical situation that arises even in classical solutions for problems with Gaussian heat flux distribution (see below) and other types of more general heat source models \cite{hou2000general}. The assessment for the maximum temperature rise in the gradient solution was given for the line heat source by equation \eqref{GFpmax}, which was evaluated at the origin of the coordinate system. From Fig. \ref{fig3}a it is seen that this assessment is rather accurate, and even in the case of strong gradient effects (large Pe$_m$) the maximum temperature rise is close to that at the origin of coordinates.
Another peculiarity of the line heat source solution can be seen in Fig. \ref{fig3}b, where we plot the isotherms in the plane $\bar\xi$-$\bar y$. It is shown that, in contrast to the point source solution (Fig. \ref{fig1}d), the gradient effects do not have such a strong influence on the line source isotherms. All effects are concentrated in the melt pool zone (contour line $\bar\theta=1$), and for smaller levels of heating ($\bar\theta\leq0.8$) the isotherms become close to the classical ones. The line source model is usually used for the simulation of high-power processes with key-hole mode formation of the melt pool. For such processes the gradient clarifications become less important in the area outside the melt pool. Nevertheless, inside the melt pool region the model can be used for refined assessments of its shape, cooling rates, etc.
\begin{figure}[b!]
\centering
\includegraphics[width=0.45\linewidth]{4a}
\caption{Profiles of the temperature rise along the movement direction of the Gaussian heat source for different values of the standard Peclet number Pe and the micro-scale Peclet number Pe$_m$.}
\label{fig4}
\end{figure}
The next example of calculations is given for the Gaussian heat source \eqref{conv} in Figs. \ref{fig4}, \ref{fig5}. In Fig. \ref{fig4} we show the influence of the standard and micro-scale Peclet numbers Pe and Pe$_m$ on the temperature profile along the movement direction. The classical semi-analytical solution with Pe$_m=0$ provides an assessment for the finite level of the temperature rise; however, the decrease of the temperature with the increase of the Pe number is not as strong as in the case of Pe$_m$. For the same values of these numbers, the classical solution for the Gaussian beam (Pe$=0.1$, Pe$_m=0$) predicts a temperature rise six times higher than that of the gradient solution for the point source (Pe $=0$, Pe$_m=0.1$). Thus, the same values of these non-dimensional parameters produce changes of different order in the resulting solutions.
It is interesting to note that the maximum temperature rise in the gradient Gaussian beam solution (Fig. \ref{fig4}, black dashed line) is higher than that of the gradient point source solution (Fig. \ref{fig4}, black solid line). This is the consequence of the more intensive heat flux around the center of the laser spot (the origin of coordinates) that is prescribed by the Gaussian distribution \cite{cline1977heat}. Also, the gradient effects do not change the asymptotic far-field behavior of the classical solutions for either the point or the Gaussian heat sources.
\begin{figure}[b!]
\textbf a
\includegraphics[width=0.5\linewidth]{4b}
\textbf b
\includegraphics[width=0.4\linewidth]{5}
\caption{Change of melt pool shape (\textbf a) and aspect ratio (\textbf b) in the classical (red lines) and gradient (black lines) solutions for the Gaussian heat source}
\label{fig5}
\end{figure}
The influence of the Peclet numbers Pe and Pe$_m$ on the melt pool size and shape under the Gaussian heat source is shown in Fig. \ref{fig5}. Here the main difference between these non-dimensional numbers can be seen. A change of the standard Pe number significantly influences the aspect ratio (AR) of the melt pool: its width becomes larger than its depth (Fig. \ref{fig5}a). In the case of relatively high Pe numbers, aspect ratios up to AR = 6 and higher arise. In the case of small Pe values the Gaussian beam solution tends to the point source model solution with axial symmetry (AR = 2). Numerical evaluation of the melt pool aspect ratio within the developed solutions was performed according to the following definition:
\begin{equation}
\begin{aligned}
\label{AR}
AR = \frac{2\bar y_{max}}{\bar z_{max}}
= \left\{\frac{2\max |\bar y|}{\max |\bar z|}: \bar\theta(\bar \xi,\bar y,\bar z) = 1\right\}
\end{aligned}
\end{equation}
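The aspect-ratio evaluation can be sketched numerically in the same way as the depth search, again using the classical dimensionless point-source field $\bar\theta = n\,e^{-(\bar\xi+\bar R)}/\bar R$ purely for illustration. Taking the full surface width $2\bar y_{max}$ over the depth $\bar z_{max}$ reproduces AR = 2 for this axially symmetric pool, as quoted above for the small-Pe limit.

```python
import numpy as np

# Illustrative classical point-source field (an assumption, not the paper's
# gradient solution): theta = n*exp(-(xi+R))/R with R = sqrt(xi^2+y^2+z^2).
def theta(xi, y, z, n=1.4):
    R = np.sqrt(xi**2 + y**2 + z**2)
    return n * np.exp(-(xi + R)) / R

grid = np.linspace(-3.0, 1.0, 800)   # xi samples along the movement direction
half = np.linspace(0.01, 2.0, 400)   # y (at z = 0) and z (at y = 0) samples

XI, H = np.meshgrid(grid, half)
y_max = half[(theta(XI, H, 0.0) >= 1.0).any(axis=1)].max()  # half-width at surface
z_max = half[(theta(XI, 0.0, H) >= 1.0).any(axis=1)].max()  # depth in sagittal plane
AR = 2.0 * y_max / z_max             # width/depth; equals 2 for axial symmetry
```

Since $y$ and $z$ enter the axially symmetric field identically, the grid search returns $\bar y_{max}=\bar z_{max}$ and hence AR = 2 exactly.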
Non-zero values of Pe$_m$ and the corresponding gradient effects lead to a reduction of all dimensions of the melt pool (Fig. \ref{fig5}a). However, the dependence of the melt pool AR is more complex and non-monotonic in the gradient solution (Fig. \ref{fig5}b). For not very large values of Pe$_m$ there is a slight decrease of the AR value in the gradient solution (Fig. \ref{fig5}b, dotted and dashed lines). However, in the case of large values of Pe$_m$ the inverse effect arises. Moreover, melting will not arise at all if the Pe$_m$ and Pe values are too large (Fig. \ref{fig5}b, solid and dot-dashed black lines). According to the definitions of these numbers, this last case corresponds to the situation when the velocity of the heat source $v$ is large, or the material thermal diffusivity $\kappa$ is small, or the characteristic radius of the Gaussian beam $a$ is large (the beam is defocused and the heat is distributed over a large area), or the material length scale parameter $l$ is large. In these cases the melt pool dimensions become small; however, its AR may become even larger than in the classical solution.
\begin{figure}[b!]
\centering
\includegraphics[width=0.6\linewidth]{6}
\caption{Dependence of the melt pool depth on the mean particles size for the selective laser melting of tungsten powders. Red line -- classical models for the point and Gaussian heat source. Black lines -- gradient model of point heat source. Green lines -- gradient model of Gaussian heat source. Experimental data is taken from Ref. \cite{zhang2019influence}.}
\label{fig6}
\end{figure}
Finally, an example of comparison of the modeling results with experimental data is shown in Fig. \ref{fig6}. Here we present the dependence of the melt pool depth on the mean particle size for the laser melting of tungsten powders. Experimental data (black dots in Fig. \ref{fig6}) are taken from Ref. \cite{zhang2019influence}. It is seen that in the experiments the melt pool depth decreases for powders with larger particles. Note that classical models of heat sources cannot be directly used for the description of such experiments due to the lack of appropriate length-scale parameters. To describe such effects within classical models one should introduce additional relations for the dependence of the material properties on the powder particle size. Namely, in Ref. \cite{zhang2019influence} it was found that the absorptivity of the tungsten powder varies in the range $\lambda=0.5...0.6$ for the used mean sizes of the powder particles $d = 4...50\,\mu$m. However, such a variation of absorptivity cannot explain the observed decrease of the melt pool size by a factor of about 4. In Ref. \cite{zhang2019influence} it was suggested that this effect can be explained by the additional influence of inhomogeneous irradiation and the peculiarities of the Marangoni flow in powder beds with large particles. Other researchers have also noted that a change of the powder particle size in laser melting processes affects a number of competing factors, such as the contact thermal conductivity inside the powder \cite{tolochko2003mechanisms}, the radiation penetration depth and the effective extinction coefficient \cite{tolochko2003mechanisms, rombouts2005light, gusarov2020radiative, zhang2016selective}, the volumetric specific heat \cite{tolochko2003mechanisms}, the balling phenomenon \cite{zhang2016selective}, agglomeration, surface state and the related kinetics of densification \cite{simchi2004role}.
In the present work, we show that the gradient models of moving heat sources can also be used for the continuum-level description of such experimental data by a phenomenological introduction of a relation between the mean particle size and the model's length scale parameter.
The modeling results in Fig. \ref{fig6} were obtained by using the point source model \eqref{GFpn} and the Gaussian source model \eqref{conv}. Firstly, in these solutions we defined the dimensionless coordinate scale ($\bar \xi = v \xi /(2\kappa)$, etc.) and the Peclet numbers Pe$_m = vl/(2\kappa)$ and Pe $= va/(2\kappa)$. According to the experiments, in these definitions we used the scanning velocity $v=0.2$ m/s and the Gaussian laser beam radius $a=35\, \mu$m. The thermal diffusivity of the tungsten powder $\kappa$ is not available. However, it is known that the thermal diffusivity of solid tungsten can be approximated by the constant value $\kappa_W = 50$ mm$^2$/s in a wide temperature range \cite{fukuda2018thermal, tanabe2003temperature}. For the powder material the value of thermal diffusivity can be about 20 times smaller than that of the solid material \cite{ahsan2020experimental}. Thus, for a rough assessment, in the calculations we used the value $\kappa = \kappa_W/2 = 25$ mm$^2$/s. This value approximates the change of the powder properties from a very low level at room temperature up to the relatively high level at the melting point. Based on these assumptions we defined the dimensionless quantities as follows:
$$
\bar \xi = \frac{v \xi}{2\kappa} = \frac{\xi}{L_0},\quad
\bar R = \frac{v R}{2\kappa} = \frac{R}{L_0},\quad
\bar r = \frac{v r}{2\kappa} = \frac{r}{L_0}
$$
$$
\text{Pe} = \frac{v a}{2\kappa} = \frac{a}{L_0} = 0.14, \quad
\text{Pe}_m = \frac{v l}{2\kappa} = \frac{l}{L_0}
$$
where $L_0 = 2\kappa/v = 250\,\mu$m is the absolute distance that corresponds to the unit value of the dimensionless coordinates.
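These definitions can be checked directly; the short sketch below reproduces $L_0 = 250\,\mu$m and Pe $=0.14$ from the experimental values stated above.

```python
# Dimensionless groups for the tungsten-powder example; all input values are
# taken from the text (v = 0.2 m/s, a = 35 um, kappa = kappa_W/2 = 25 mm^2/s).
v = 0.2          # scanning speed, m/s
a = 35e-6        # Gaussian beam radius, m
kappa = 25e-6    # assessed powder thermal diffusivity, m^2/s

L0 = 2 * kappa / v          # coordinate scale -> 250 um
Pe = v * a / (2 * kappa)    # standard Peclet number -> 0.14
```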
Then, we assumed that the length scale parameter $l$ can be related to the mean size of the powder particles $d$, which is known from the experiments \cite{zhang2019influence}. We suppose the linear relation $l=kd$ ($k$ is a proportionality coefficient), such that the micro-scale Peclet number was finally defined as:
$$
\text{Pe}_m = k \frac{d}{L_0}
$$
The last unknown parameter of the models \eqref{GFpn}, \eqref{conv} is the operating parameter $n$. For the explicit definition of this parameter we need additional information about the temperature-dependent density and heat capacity of the powder. Due to the absence of these data, we identified the parameter $n$ by fitting the classical models to the experiments with the smallest mean powder size $d = 5.7\,\mu$m. We found that the point source solution predicts the experimental melt pool depth $z_{max} = 193\,\mu$m when $n=1.4$, while for the classical Gaussian source we found that this parameter should equal $n=0.72$. Then, we used these operating parameters in the corresponding gradient models of moving heat sources to obtain the predictions for the melt pool depth in the case of different mean sizes of powder particles.
Note that the found values of the operating parameters have a typical order of magnitude. To show this, we may find the approximate theoretical value of the operating parameter from its standard definition as follows
\begin{equation}
\label{oper}
n_{theor} = \frac{\lambda P v }{4\pi \kappa^2 \rho c(T_m-T_i)} \approx 2.8
\end{equation}
where we use the powder absorptivity $\lambda=0.55$ (the mean value determined in Ref. \cite{zhang2019influence}), the laser power $P=350$ W (used in the experiments \cite{zhang2019influence}), $\kappa = 25$ mm$^2$/s (defined above), the density $\rho=\rho_W/2 = 9625$ kg/m$^3$ (half that of solid tungsten), the heat capacity $c=2c_W \approx 266$ J/(kg K) (twice that of solid tungsten \cite{zhao2016thermal}), the tungsten melting point $T_m=3422\,^\circ$C and the initial temperature $T_i=20\,^\circ$C.
This assessment \eqref{oper} is given here only for comparison with the identified values of $n$; we did not use the theoretical value $n_{theor}=2.8$ in the calculations because it leads to a significant overestimation of the melt pool depth.
Therefore, based on all the assumptions discussed above, in the calculations we used the sets of parameters listed in Table \ref{tab1}. We used the experimental values of the mean particle size $d = \{5.7, 16.52, 22.47, 37.26,$ $ 47.63\}\, \mu$m from Ref. \cite{zhang2019influence} to define the length scale parameter $l=kd$ with three values of the proportionality coefficient: $k=3$, 3.5 and 4.
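For reference, the micro-scale Peclet numbers corresponding to the experimental particle sizes follow directly from Pe$_m = kd/L_0$; the sketch below tabulates them for the best-fitting coefficient $k=3.5$.

```python
# Micro-scale Peclet numbers Pe_m = k*d/L0 for the experimental mean particle
# sizes of Ref. [zhang2019influence], with the proportionality factor k = 3.5.
L0 = 250.0                                 # coordinate scale, um
d = [5.7, 16.52, 22.47, 37.26, 47.63]      # mean particle sizes, um
k = 3.5
Pe_m = [k * di / L0 for di in d]           # grows from ~0.08 to ~0.67
```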
\begin{table}
\caption{Parameters of the models used in the calculations for the selective laser melting experiments with tungsten powders}
\label{tab1}
\centering
\small
\begin{tabular}{llllllll}
\hline\noalign{\smallskip}
Parameter & \,\quad\,&Dimensions &\quad\,& Value \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Scanning speed, $v$ && m/s && 0.2 \\
Thermal diffusivity, $\kappa$\quad & & m$^2$/s && $25\cdot10^{-6}$ \\
Coordinates scale, $L_0 = 2\kappa/v$\quad & & $\mu$m && 250 \\
Pe number && - && 0.14$^*$ \\
Pe$_m$ number && - && $k d /L_0$ \\
Operating parameter, $n$ && - && 1.4 (0.7$^*$) \\
Critical temperature rise, $T_m-T_i$ && $^\circ$C && 3400\\
\noalign{\smallskip}\hline\\
\end{tabular}\\
$^*${parameters for the Gaussian source model}
\end{table}
The estimated predictions for the melt pool depth are shown in Fig. \ref{fig6} by black lines (gradient point source model) and green lines (gradient Gaussian source model). It is seen that these models allow one to predict the experimentally observed decrease of the melt pool depth with the increase of the mean particle size. The closest predictions are obtained for the proportionality coefficient $k=3.5$. The difference between the Gaussian and the point source models is negligible in the present case (since the Peclet number is not very large). The classical predictions are shown by the horizontal red line in Fig. \ref{fig6}. Classical models do not contain additional length scale parameters and cannot predict this type of size effect without the introduction of additional relations for the dependence of the material properties on the mean particle size.
\section{Conclusion}
\label{con}
In this paper we propose a new simplified phenomenological gradient theory of heat transfer for simulations of laser powder bed fusion processes. We show that this theory allows one to obtain generalized solutions of the classical moving source problems that take into account the effects of the material's internal characteristic length scale. For powder bed fusion we propose to relate this internal length scale to the mean particle size of the powder. Known experimental data allowed us to validate this assumption and to find that the length scale parameter of the model may be of the order of 3.5 mean particle diameters.
All gradient effects are incorporated in the derived solutions through a new kind of non-dimensional group of parameters that we defined here as the micro-scale Peclet number. In the case of small values of this parameter we obtain the classical solutions, while its large values correspond to the case of a strong influence of the gradient effects and a related decrease of the temperature field. Notably, the point and line source solutions, as well as the Green functions of the presented theory, do not contain singularities, which can be useful for practical applications and for the development of more complex numerical/analytical simulation methods.
In general, it seems that there may exist other variants of simplified theories that allow one to obtain a closed-form representation of the general solution similar to \eqref{GS}, \eqref{GSo}. The development of such more general theories is a subject of the authors' future work, as well as further efforts towards the experimental identification of the length scale parameters for different powder materials. \\
\bibliographystyle{elsarticle-num}
\subsection{\large Supplemental Material: Nonlinear Polariton Fluids in a Flatband Reveal Discrete Gap Solitons}
\section{Resonant excitation of the flatband modes}
To determine the optimal excitation scheme for the resonant drive of the flatband, we study the momentum-space resolved photoluminescence (PL) of the flatband modes. This pattern can be obtained by spectrally filtering the emission, measured under weak non-resonant pumping, at the flatband energy. The $(k_x, k_y)$ map of the emission is then reconstructed from spectra measured at different values of $k_y$, such as the one shown in Fig. 1(c) of the main text. The result is presented in Fig.~\ref{figS1}(b): the intensity is zero at the center of the Brillouin zone (BZ) ($(k_x, k_y) = 0$), and is maximal at the BZ edges ($k_x = \pi/a$). This reflects the antisymmetric nature of the flatband eigenmodes (opposite phase on $A,C$ pillars). Thus, to ensure efficient coupling to the polariton states in the flatband, the pumping beam is tilted from normal incidence by $5.0^\circ$ along the $x$ direction, corresponding to the edge of the first BZ.
\begin{figure}[!h]
\includegraphics[width=0.6\linewidth]{./FigS1.pdf}
\caption{\label{figS1} (a) Real- (top) and momentum-space (bottom) images of the laser spot used for resonant excitation. (b) Measured real- and momentum-space photoluminescence at the flatband energy. (c,d) Calculated real- and momentum-space emission pattern of two examples of flatband eigenstates: (c) a single plaquette $\ket{f_n}$; and (d) a linear combination of four plaquettes with alternating sign on neighboring cells $\sum_{j=0}^{3}(-1)^j \ket{f_{n+j}}$. In all panels, dotted lines indicate the edges of the first Brillouin zone.
}
\end{figure}
The real-space PL at the energy of the flatband can also be obtained with a similar method (spectral filtering of the real-space PL) and is shown at the top of Fig.~\ref{figS1}(b). As expected due to geometric frustration, we measure vanishing intensity on $B$ sites.
Let us comment briefly on the consequence of exciting the flatband at the BZ edge. When the wave-vector of the driving field is equal to $k=\pi/a$, it imposes a phase difference of exactly $\pi$ between neighboring unit cells. The opposite sign on $C$ sites in neighboring unit cells leads to destructive interference on $A$ sites. As an illustration, Fig.~\ref{figS1}(c,d) shows the calculated real- and momentum-space emission patterns of two different localized eigenstates: a single plaquette $\ket{f_n}$, and a linear superposition of four plaquettes, of same magnitude but alternating sign on neighboring unit cells, which can be written $\sum_{j=0}^{3}(-1)^j \ket{f_{n+j}}$. To compute these radiation patterns, we use a simplistic description of the eigenfunctions of the chain of pillars: we consider one Gaussian-shaped orbital per pillar (corresponding to the $s$ mode).
To construct a given wave function, we assign the amplitude and phase computed from the tight-binding model to each of these Gaussian shaped orbitals.
The momentum-space radiation pattern is obtained by Fourier transformation of this wave function. The state in Fig.~\ref{figS1}(d), with a phase difference of $\pi$ between neighboring plaquettes, corresponds to a Bloch state $\ket{\psi(\pi/a)}$, but truncated to only 4 unit cells.
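The construction described above can be sketched as follows; the orbital width $\sigma = 0.2a$ is an arbitrary illustrative choice, and the Fourier transform of the Gaussian orbital is taken analytically rather than numerically.

```python
import numpy as np

# Far-field intensity of four plaquette orbitals with alternating signs:
# one Gaussian s-orbital per plaquette centre, tight-binding amplitudes c_j
# attached, Fourier-transformed (FT of a Gaussian is a Gaussian envelope).
a = 1.0                                  # lattice period (arbitrary units)
sigma = 0.2 * a                          # orbital width (illustrative assumption)
c = np.array([1.0, -1.0, 1.0, -1.0])     # alternating-sign plaquette amplitudes
x = a * np.arange(4)                     # plaquette positions

def far_field(k):
    envelope = np.exp(-0.5 * (k * sigma) ** 2)     # Gaussian orbital FT
    structure = np.sum(c * np.exp(-1j * k * x))    # interference of the plaquettes
    return np.abs(envelope * structure) ** 2

I_zone_centre = far_field(0.0)           # destructive interference at k = 0
I_zone_edge = far_field(np.pi / a)       # constructive interference at k = pi/a
```

The alternating signs cancel exactly at the zone centre and add up at $k=\pi/a$, reproducing the qualitative pattern of Fig.~\ref{figS1}(d).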
Importantly, the nonlinear domains measured in our experiment indeed present a spatial pattern similar to the one calculated in Fig.~\ref{figS1}(d). This pattern reflects the phase imposed by the drive at the edge of the BZ, resulting in low intensity on $A$ sites inside the nonlinear domains because of destructive interference. $A$ sites have significant intensity only at the edge of the domains or in regions where disorder overcomes the interaction energy (as in Fig.~2(d) or Fig.~2(e) of the main text) and locally breaks the destructive interference.
\section{Parameters for numerical simulations}
The discrete Gross-Pitaevskii equation introduced to model our experiments (Eq. (2) of the main text) is a set of $3N$ equations, that describe the time evolution of the polariton amplitude on each site. $N$ is the number of unit cells in the lattice. The coupling terms $t_{n,m}$ between the different sites are linked to the coupling constant $t$ and $t'$ of the Lieb tight-binding Hamiltonian (Eq. (1) of the main text) as follows:
\begin{align}
t_{nm} =
\begin{cases}
t &\mathrm{if\ sites\ } n,m \mathrm{\ are\ neighboring\ } B \mathrm{\ and\ } C \mathrm{\ sites,}\\
t' &\mathrm{if\ sites\ } n,m \mathrm{\ are\ neighboring\ } A \mathrm{\ and\ } B \mathrm{\ sites,}\\
0 &\mathrm{otherwise,\ i.e.\ if\ sites\ } n,m \mathrm{\ are\ not\ neighbors.}
\end{cases}
\end{align}
The values of the parameters are deduced from the tight-binding fit to the measured polariton dispersion as shown in Fig. 1(c) of the main text. In the case of the flatband, we take $E_A = E_C = E_0$, $E_B = E_C - 10 \gamma$ and $t = t' = 10 \gamma$, with $\gamma = 30 \mathrm{\mu eV}$ (and the energy offset $E_0 = 0$ for simplicity). For the dispersive band, we use $E_A = E_C + 6\gamma$, all other parameters being unchanged.
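As an illustration of how the coupling terms $t_{n,m}$ can be assembled, the sketch below builds the $3N \times 3N$ coupling matrix for a hypothetical stub-like connectivity ($B_n$ coupled to $A_n$ with $t'$ and to $C_n$, $C_{n+1}$ with $t$); the actual pillar connectivity of the sample may differ, but the assembly pattern is the same.

```python
import numpy as np

# Assemble the symmetric coupling matrix t_{nm} for N unit cells, with sites
# ordered (A, B, C) within each cell. Neighbour lists below are an assumed
# illustrative geometry, not necessarily the sample's exact connectivity.
gamma = 1.0
t = tp = 10 * gamma                     # t = t' = 10*gamma (flatband case)
N = 6                                   # number of unit cells

def site(cell, kind):                   # kind in {'A', 'B', 'C'}
    return 3 * cell + {'A': 0, 'B': 1, 'C': 2}[kind]

T = np.zeros((3 * N, 3 * N))
for n in range(N):
    for i, j, val in [(site(n, 'A'), site(n, 'B'), tp),   # intra-cell A-B link
                      (site(n, 'B'), site(n, 'C'), t)]:   # intra-cell B-C link
        T[i, j] = T[j, i] = val
    if n + 1 < N:                       # inter-cell B_n -- C_{n+1} link
        i, j = site(n, 'B'), site(n + 1, 'C')
        T[i, j] = T[j, i] = t
```

The resulting matrix is symmetric, as required for a Hermitian tight-binding Hamiltonian with real hoppings.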
\section{Numerical simulations: steady-state spatial profiles}
In this section we present the steady-state spatial intensity profiles calculated for the quasi-resonant injection of polaritons in the flatband, that correspond to the total intensity and domain size from Fig.~2(b,i) of the main text. The drive detuning is $\Delta = 3 \gamma$ and the drive wave vector $k_p = \pi/a$. The calculated total intensity versus drive power $F^2$ from Fig.~2(b) of the main text is reproduced in Fig.~\ref{figS2}(a). Fig.~\ref{figS2}(b-d) shows the calculated spatial profile on pillars $C$ for different values of $F^2$. In each case, a nonlinear domain delimited by a sharp drop in occupation at the edges is clearly identified. As $F^2$ is increased above the first abrupt intensity jump, each intensity jump corresponds to an increase of the domain size by exactly 1 unit cell (UC).
As a comparison, we repeat the numerical simulation with same excitation conditions ($\Delta = 3 \gamma$, $k_p = \pi/a$), but with $E_A = 6 \gamma$, such that the middle band is now dispersive and the drive frequency lies within the band. The results are presented in Fig.~\ref{figS2}(e-h). As detailed in the main text, a single jump is observed in the calculated total intensity as the drive power $F^2$ is increased. Moreover, the spatial density profile in the nonlinear regime is smooth and does not evolve significantly as $F^2$ is increased.
\clearpage
\begin{figure}[!h]
\includegraphics[width=0.7\linewidth]{./FigS2.pdf}
\caption{\label{figS2} \textbf{Comparison between the polariton nonlinear dynamics in the flatband and when driving the system within a dispersive band}: (a-d) calculations for the flatband with $E_A = 0$; (e-h) calculations for the dispersive band with $E_A = 6 \gamma$; in both cases $\Delta = 3 \gamma$. (a,e) Total intensity calculated as a function of $F^2$. (b-d, f-h) Steady-state occupation $|c_n|^2$ on sites $C$ for various values of the drive intensity $F^2$. Dashed lines indicate the shape of the pumping spot. For panels (b-d), $s$ is the domain size.
}
\end{figure}
\section{Influence of disorder in a flatband}
\begin{figure}[!h]
\includegraphics[width=0.5\linewidth]{./FigS3.pdf}
\caption{\label{figS3} \textbf{Influence of disorder on the nonlinear regime for the flatband.} (a) Measured size of the nonlinear domains as a function of excitation power (reproduced from Fig. 2(h) of the main text). (b) Measured intensity profile on pillars $A,C$ as a function of resonant drive energy in the linear regime ($P=10 \mathrm{\mu W}$). (d) Calculated size of the nonlinear domain for increasing excitation drive $F^2$, when including a redshift $\delta_{dis}$ on the sites indicated in black in (c). This redshift mimics the effect of local disorder in the chain.
}
\end{figure}
As explained in the main text, we experimentally observe that the size of the nonlinear domain formed in the flatband can jump by several unit cells at a time when the pumping power is increased (see Fig. 2(h) of the main text, reproduced in Fig.~\ref{figS3}(a)). This feature is not reproduced by the simulation (see Fig. 2(i) of the main text), where increases by one unit cell only are obtained (except for the first jump).
In the following we show that disorder in the on-site energies can explain this discrepancy.
Disorder can strongly affect the physics of particles in a flatband, for example leading to the fragmentation of a bosonic condensate into plaquette-sized localized modes \cite{Baboux2016}. Indeed, since kinetic energy is zero in a flatband, any finite amount of disorder will break the flatband picture. In a dissipative context, disorder strength needs to be greater than the linewidth to significantly alter the physics. Experimentally, disorder mainly stems from small fluctuations in the pillar size and shape, caused by etching.
An estimate of the disorder strength can be extracted from resonant spectroscopy of the flatband eigenstates in the linear regime. Figure~\ref{figS3}(b) shows the light intensity transmitted through $A$ and $C$ pillars when scanning the laser energy. When the laser is in resonance with an eigenstate, an intensity maximum is observed. The figure clearly indicates some spatial energy spreading of the eigenstates across the lattice. More precisely for this particular part of the chain, a redshift is observed to the left of UC 0, and to the right of UC 5. The maximal energy difference between the different states is around $80\, \mathrm{\mu eV}$, comparable to the laser detuning $\Delta = 90 \mathrm{\mu eV}$ used in Fig.~\ref{figS3}(a) (and Fig. 2 of the main text). Thus in the experiments disorder strength is comparable to the interaction energy.
Note that imaging of the eigenstates as done in Fig.~\ref{figS3}(b) does not allow extracting precise on-site disorder on each individual pillar.
Nevertheless, to get a flavor of the effect of disorder on the nonlinear domains, we introduce in our simulations a distribution of on-site energies which results in a distribution of eigenstates resembling the measured one. A redshift $\delta_{\mathrm{dis}}$ is introduced for the on-site energy of all sites on 2 UCs to the left of the excitation spot (see Fig.~\ref{figS3}(c)). The corresponding simulation of the nonlinear domain size versus $F^2$ is shown in Fig.~\ref{figS3}(d) for different disorder strengths. As in the experiment, we observe series of jumps of different amplitudes.
For instance for $\delta_{\mathrm{dis}} = 0.5 \gamma$, an abrupt jump from 6 to 9 UCs occurs at $F^2 \approx 250$. It corresponds to a progression of the domain edge through all redshifted sites at once. For stronger disorder amplitude, additional big jumps in the domain size are observed at higher excitation powers. The redshifted sites thus create a barrier for the domain edges, and modify the growth of the domains with power.
In the experiment, the disorder landscape is certainly more complex but this simple simulation provides good understanding of the effect of disorder on the observed nonlinear dynamics. We have verified on several lattices realizing different disorder configurations that the nonlinear behavior reported here is qualitatively the same.
\section{Truncated Bloch Waves in the gap above a dispersive band}
We investigate with numerical simulations the behavior of a nonlinear fluid injected in the gap above a dispersive band. Figure~\ref{figS5}(b-d) presents the steady-state profiles calculated in the nonlinear regime and without disorder, for $E_A = 6 \gamma$ and different values of the drive energy detuning with respect to the bottom of the middle band: $\Delta = 4$, $5$ and $7 \gamma$. Note that for $E_A=6 \gamma$, the width of the middle band is $\sim 4.6 \gamma$, so that when $\Delta > 4.6 \gamma$ the drive lies within the gap. For $\Delta = 4 \gamma$, i.e. for a drive below the band edge, the propagation outside the spot is visible as a spatial exponential decay of the intensity. The propagation length $L$ characteristic of the spatial decay is given by $L = v_g / \gamma$, with $v_g = \hbar^{-1} (\partial E / \partial k)$ the group velocity at energy $\Delta$. Increasing the drive energy to $\Delta = 5 \gamma$, a sharp spatial decrease in the intensity is now observed at UC $\pm 11$. For $\Delta = 7 \gamma$, further into the gap, the domain edge is even sharper. In this excitation configuration, since the drive injects polaritons within the gap, there is no single-particle state at this energy. As a result, the interaction energy provided by the drive cannot be converted into kinetic energy: propagation of particles out of the excitation region is prevented. This localization mechanism arising from the interplay between interactions and the existence of an energy gap is precisely the one at play in the formation of gap solitons, and in particular of Truncated Bloch Waves, as originally discussed in Refs.~\cite{Alexander2006, Alexander2006b, Wang2009}.
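The estimate $L = v_g/\gamma$ can be sketched with a generic cosine tight-binding band $E(k) = E_0 - 2J\cos(ka)$ standing in for the actual middle band (an assumption, with $\hbar = 1$ units):

```python
import numpy as np

# Propagation length L = v_g / gamma for an illustrative cosine band
# E(k) = E0 - 2J*cos(k*a); v_g = dE/dk = 2*J*a*sin(k*a) with hbar = 1.
J, a, gamma = 1.0, 1.0, 0.05            # hopping, period, decay rate (assumed)

def group_velocity(k):
    return 2 * J * a * np.sin(k * a)    # derivative of the cosine band

def propagation_length(k):
    return abs(group_velocity(k)) / gamma

L_mid = propagation_length(np.pi / (2 * a))   # band centre: largest v_g, longest L
L_edge = propagation_length(np.pi / a)        # band edge: v_g -> 0, so L -> 0
```

This reproduces the qualitative behaviour discussed above: the decay length shrinks as the drive approaches the band edge, and vanishes once there is no propagating state at the drive energy.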
\begin{figure}[!h]
\includegraphics[width=0.7\linewidth]{./FigS5.pdf}
\caption{\label{figS5} (a) Band structure calculated by diagonalization of the tight-binding Lieb Hamiltonian for $E_A = 6 \gamma$, $E_B = -10 \gamma$ and $E_C = 0$. The shaded gray region indicates the total spectral width of the middle band. Blue dots indicate the drive energy and wave vector used in panels (b-d). (b-d) Steady-state occupation $|c_n|^2$ on sites $C$ calculated for different values of $\Delta$, in the nonlinear regime (for a value of $F^2$ indicated in each panel).
}
\end{figure}
Thus, when the dispersive band is excited within the gap at high energy, Truncated Bloch Waves are excited in a similar way as for the flatband. The energy injected into the system is larger than the maximum kinetic energy the system can accommodate, so that nonlinear domains with sharp edges are formed. In the flatband, since the kinetic energy is strictly zero, this regime is reached as soon as the driving energy overcomes the other energy scales of the system, namely the spectral linewidth and the disorder.
\section{Multistability of the nonlinear domains}
\begin{figure}[!h]
\includegraphics[width=0.6\linewidth]{./FigSxx.pdf}
\caption{\label{figSxx} (a-d) Total emission intensity measured under resonant excitation of the flatband for different power scans. In each panel, the starting excitation condition is denoted by a black square and arrows indicate the scan direction.
}
\end{figure}
In Fig.~\ref{figSxx} we present several experimental power scans obtained with excitation parameters similar to those used in Fig.~2 of the main text ($\Delta = 90\ \mathrm{\mu eV}$ and $k_p = \pi / a$). For each of these power scans, the starting condition is denoted by a black square. Fig.~3(a) of the main text superimposes all of these measurements.
\section{Influence of disorder within a dispersive band}
\begin{figure}[!h]
\includegraphics[width=0.7\linewidth]{./FigS4.pdf}
\caption{\label{figS4} \textbf{Influence of disorder on the nonlinear regime for the dispersive band.} (a) Calculated total intensity in the lattice versus $F^2$, for an increasing (blue) and decreasing (red) drive intensity. The redshift amplitude is $\delta_{\mathrm{dis}} = 2 \,\gamma$ on the same sites as for the flatband, and the drive detuning $\Delta = 1.5 \gamma$. (b) Total emission intensity measured in the dispersive band as a function of excitation power (reproduced from Fig.~4(a) from the main text).
}
\end{figure}
Disorder also influences the nonlinear regime in the dispersive band. This is because the disorder amplitude in the experiment, on the order of $80\ \mathrm{\mu eV}$, is comparable to the interaction and kinetic energies of the fluid for our choice of laser detuning $\Delta = 60\ \mathrm{\mu eV}$.
In Fig.~\ref{figS4}, we present the results of a numerical simulation that takes disorder into account in the dispersive band: we introduce a redshift $\delta_{\mathrm{dis}} = 2 \gamma$ on the same sites as in Fig.~\ref{figS3}(c), with $\Delta = 1.5 \gamma$ and $E_A = 6 \gamma$. The total population versus $F^2$ in the up and down scans is in excellent agreement with the experimental results from Fig.~4(a) of the main text, reproduced in Fig.~\ref{figS4}(b). Indeed, the presence of disorder explains the first nonlinear increase in the total intensity before the abrupt jump (only one jump was observed in the disorder-free simulations, see Fig.~4(b) of the main text).
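The hysteresis between up and down power scans can be illustrated with a minimal single-mode model (a sketch only, not the full driven-dissipative lattice simulation): for one nonlinear resonator, the steady-state occupation $n = |c|^2$ obeys $n[(gn - \Delta)^2 + (\gamma/2)^2] = F^2$, which is S-shaped for $\Delta > \sqrt{3}\,\gamma/2$. All parameter values below are illustrative.

```python
import numpy as np

# Minimal single-mode sketch (illustrative only; not the full lattice
# simulation): steady state of i dc/dt = (-Delta + g|c|^2 - i gamma/2) c + F,
# whose occupation n = |c|^2 solves n[(g n - Delta)^2 + (gamma/2)^2] = F^2.
gamma, g = 1.0, 1.0
delta = 1.5 * gamma          # Delta = 1.5 gamma, as in the simulation above

def steady_states(F2):
    """Real, non-negative roots n of the cubic steady-state equation."""
    coeffs = [g**2, -2.0 * g * delta, delta**2 + gamma**2 / 4.0, -F2]
    return [r.real for r in np.roots(coeffs)
            if abs(r.imag) < 1e-6 and r.real >= 0.0]

def scan(F2_values):
    """Adiabatic power scan: follow the root closest to the previous one."""
    n, trace = 0.0, []
    for F2 in F2_values:
        n = min(steady_states(F2), key=lambda r: abs(r - n))
        trace.append(n)
    return np.array(trace)

F2 = np.linspace(0.01, 2.5, 400)
up = scan(F2)                # increasing drive intensity
down = scan(F2[::-1])[::-1]  # decreasing drive intensity
print("bistable window present:", bool(np.any(down - up > 1e-3)))
```

Following the root closest to the previous occupation mimics an adiabatic scan; the up and down traces then differ inside the bistable window, reproducing the hysteresis loop qualitatively.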
\end{document}
\section{Introduction}
Joint \emph{Chandra X-ray Observatory} and \emph{Hubble Space
Telescope} (\emph{HST}) observations of globular clusters have revealed
large populations of faint X-ray sources ($L_X \lesssim
10^{33.5}~\mbox{${\rm erg}~{\rm s}^{-1}$}$) which include quiescent low-mass X-ray binaries
(qLMXBs), cataclysmic variables (CVs), chromospherically active
binaries (ABs), and millisecond pulsars (MSPs)
\citep{Verbunt06,Heinke10}. These populations range in size from tens
to hundreds of objects per cluster. The presence of these objects is
closely related to the cluster dynamics, as demonstrated by
\citet{Pooley03}, who found the existence of a strong correlation
between the source population size and the encounter rate in the
cluster core, $\Gamma \propto {\rho_0}^2 {r_c}^3 {v_0}^{-1}$, where
$\rho_0$ is the central density, $r_c$ is the core radius, and $v_0$
is the central velocity dispersion. \citet{Pooley06} subsequently
found evidence that the majority of CVs in dense globular clusters
were produced dynamically. Clusters that undergo core collapse pass
through phases of extremely high central density and are thus expected
to undergo repeated bursts of close binary production in their cores.
It is therefore of great interest to examine the faint X-ray source
populations in core-collapsed clusters.
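As a worked illustration of the scaling $\Gamma \propto {\rho_0}^2 {r_c}^3 {v_0}^{-1}$, the following sketch compares two clusters with hypothetical, normalized parameters (the numbers are illustrative only, not measured cluster values):

```python
# Relative encounter rate Gamma ~ rho0^2 * rc^3 / v0; all parameter
# values below are hypothetical, normalized units for illustration.
def encounter_rate(rho0, rc, v0):
    """Gamma up to a constant factor (arbitrary units)."""
    return rho0**2 * rc**3 / v0

# A collapsed core (high density, small core) vs. a loose cluster:
dense = encounter_rate(rho0=100.0, rc=0.1, v0=1.0)
loose = encounter_rate(rho0=1.0, rc=1.0, v0=1.0)
print(f"Gamma(dense) / Gamma(loose) = {dense / loose:.1f}")
```

The quadratic dependence on central density means that even a modest core-collapse episode can dominate a cluster's integrated binary-production history.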
In a previous study, we carried out a deep \emph{HST}\ ACS/WFC imaging study
of the nearest core-collapsed globular cluster NGC~6397 in the filters
F435W (\mbox{$B_{435}$}), F625W (\mbox{$R_{625}$}), and F658N (\mbox{H$\alpha$}), in which we identified
optical counterparts to 69 of the 79 \emph{Chandra}\ sources that lie within the
half-mass radius \citep{Cohn10}. A striking finding of this study was
that there is a bimodal distribution of CVs, consisting of a brighter
group of six for which the optical emission is dominated by
contributions from the secondary and the accretion disk, and a fainter
group of seven for which the white dwarf dominates the optical
emission. The brighter CVs are much more concentrated towards the
cluster center than the fainter CVs. We speculated that this may
be the result of dynamical evolution in which CVs are formed in
and near the cluster core and subsequently migrate to larger distances
from the cluster center as they age and become fainter. The faintest
CVs that we identified in our study of NGC~6397 have absolute
magnitudes around $M_R \sim 11.2$ that likely put them near the
minimum of the CV period distribution found in the Sloan Digital Sky
Survey \citep{Gaensicke09}. \citet{Cool13} used \emph{HST}\ imaging to
search for optical counterparts to \emph{Chandra}\ sources in the massive
globular cluster $\omega$\,Cen. They reported finding 27 candidate CVs,
of which 14 lie in the magnitude range $M_R \sim 10.4 - 12.6$. Thus,
faint CV populations that likely extend to the minimum of the CV
period distribution are known to exist in two globular clusters.
To extend the search for faint CV populations, we have investigated
NGC~6752, the second closest core-collapsed cluster at a distance of
4.0~kpc \citep{Harris96}. The puzzling dynamical status of NGC~6752
has been addressed in a number of studies. \citet{Djorgovski86}
classified the cluster as core-collapsed based on its ground-based
$B$-band surface-brightness profile, which shows a power-law region
outside of a resolved core. In \citet{Lugger95}, we reexamined the
profile using ground-based $U$-band data, which is less affected by
bright giants, and found that it could be fitted by either a modified
power law (power-law + core) or a standard King model, with the former
providing a somewhat better fit than the latter. We nevertheless
advanced the conservative interpretation that NGC~6752 is not required
to be in a post-collapse state. \citet{Ferraro03a} used a combination
of \emph{HST}\ WFPC2 and ground-based images to construct surface-brightness
and surface-density profiles, finding that these are best fitted by a
double King model, viz.\ one King model to describe the inner profile
and another one to describe the outer profile. They took this as
evidence that NGC~6752 is experiencing a post-core-collapse bounce.
Most recently, \citet{Thomson12} obtained surface-density profiles
from \emph{HST}\ ACS imaging, again finding that the profile is not well
described by a single King model. They present a double-King-model
fit and conclude that the cluster is either in a core-collapsed or
post-core-collapse state. They note that the double King model is
purely phenomenological, without a physical basis. Based on this
previous work, we have undertaken a reexamination of the
surface-density profile of NGC~6752 in the present study.
The X-ray source population of NGC~6752 has been previously studied by
\citet{Pooley02}, \citet{Kaluzny09}, and \citet{Thomson12}.
\citet{Pooley02} identified a total of 19 \emph{Chandra}\ sources within the
115\arcsec\ half-mass radius of the cluster to a limiting X-ray
luminosity of $L_X \sim 10^{30} ~\mbox{${\rm erg}~{\rm s}^{-1}$}$. They proposed 12 optical
identifications based on archival \emph{HST}\ WFPC2 imaging and one radio
identification. They found that 10 of the sources are likely to be
CVs, one to three are likely to be ABs, and one or two are possible
background objects. \citet{Kaluzny09} identified a periodic variable
that they suggested as the optical counterpart to the \citet{Pooley02}
source CX19.\@ \citet{Thomson12} reanalyzed the identifications of the
\citet{Pooley02} sources using multi-wavelength imaging (FUV to
$I$-band) from the \emph{HST}\ STIS, ACS, and WFC3\@. They identified
dwarf nova outbursts from two previously identified CVs, suggested
optical counterparts to CX8 and CX12 (and an alternate optical
counterpart to CX16), and failed to confirm suggested counterparts to
CX11 \citep{Pooley02} and CX19 \citep{Kaluzny09}.
\begin{figure*}
\epsscale{0.8}
\plotone{f1.pdf}
\figcaption{Drizzle-combined \emph{HST}\ ACS/WFC \mbox{H$\alpha$}\ mosaic of NGC~6752
with search regions for \emph{Chandra}\ sources within the half-mass
radius. North is up and east is to the left. The inner green circle
represents the core radius of 4\farcs6 and the outer green circle
represents the half-mass radius of 1\farcm9. Source labels are
omitted near the core for clarity. There are a total of 39 sources
detected within the half-mass radius. The search region radius is
defined as the maximum of 2.5 times the formal error circle radius
and 0\farcs3.
\label{f:mosaic}}
\end{figure*}
\begin{figure*}
\epsscale{0.8}
\plotone{f2.pdf}
\figcaption{The core-radius region of the drizzle-combined \mbox{H$\alpha$}\ mosaic
of NGC~6752 with error circles and search regions for
\emph{Chandra}\ sources. The green circle represents the core radius of
4\farcs6.
\label{f:mosaic_zoom}}
\end{figure*}
The \emph{Chandra}\ study by \citet{Pooley02} is based on 29\,ks of the
total of 67\,ks of ACIS exposure that is available for this cluster.
\citet{Forestell14} have analyzed the complete dataset, detecting 39
sources within the half-mass radius, to a limiting luminosity of $L_X
\approx 5 \times 10^{29} ~\mbox{${\rm erg}~{\rm s}^{-1}$}$. In order to search for counterparts
of this deeper set of \emph{Chandra}\ sources, we have made use of the
ACS/WFC imaging database that is also being used to search for
helium-core white dwarfs in NGC~6752 \citep{Hernandez13}, results of
which will be reported elsewhere.
In the following sections, we describe the \emph{Chandra}\ and \emph{HST}\ datasets
that we use in this study, the method that we use to analyze the
\emph{HST}\ dataset to find optical counterparts to the \emph{Chandra}\ sources,
the set of counterparts that results from this analysis, the optical
variability of the counterparts, and the spatial distribution of the
stars and X-ray sources in NGC~6752.
\section{Data \label{data}}
The \emph{Chandra}\ imaging used in this study is the combination of
Observation IDs 948 (PI: Lewin) and 6612 (PI: Heinke), which were
carried out with the ACIS-S instrument. The exposure times for these
datasets are 29\,ks and 38\,ks respectively. The processing of these
data is described in detail by \citet{Forestell14}. Source detections
were made with the {\tt wavdetect} and {\tt pwdetect} software
utilities. This resulted in a catalog of 39 sources within the
half-mass radius of NGC 6752, extending the previous catalog
numbering by \citet{Pooley02} in order of decreasing source
brightness.\footnote{Note that CX12 from \citet{Pooley02} is divided
into three sources in this new catalog, viz.\ CX20, CX23, and CX24.}
We restrict the present study to the analysis of these 39 sources.
The optical imaging used in this study is the \emph{HST}\ GO-12254 dataset
(PI: Cool), which provides deep, highly dithered ACS/WFC imaging of
the half-mass radius region of NGC~6752 in \mbox{$B_{435}$}, \mbox{$R_{625}$}, and \mbox{H$\alpha$}. The
dither strategy was designed to recover the full angular resolution of
the \emph{HST}, which is advantageous for performing photometry in the very
crowded central regions of NGC~6752. The images were obtained over
six 2-orbit visits, spread over 180\,d, in order to sample the stellar
point-spread function (PSF) at a range of roll angles. When combined
by a drizzling technique, the resulting mosaic images are free of
diffraction spikes and similar PSF artifacts. The dataset consists of
6 short \mbox{$B_{435}$}\ (10\,s), 12 long \mbox{$B_{435}$}\ (380\,s), 6 short \mbox{$R_{625}$}\ (10\,s), 12
long \mbox{$R_{625}$}\ (360\,s), and 24 \mbox{H$\alpha$}\ (12 each of 724\,s and 820\,s)
exposures. The short exposures were designed to provide accurate
photometry for stars above the main-sequence turnoff (MSTO)\@. With
the large number of \mbox{H$\alpha$}\ frames, the PSF sampling is particularly
complete for this band.
\section{Analysis Method \label{analysis}}
\subsection{Photometry \label{photometry}}
The \emph{HST}\ data were analyzed using software based on the program
developed for the ACS Globular Cluster Treasury project, described in
\citet{Anderson08}; we have previously used this software in the
search for counterparts to \emph{Chandra}\ sources in NGC~6397. The
reductions were done using an updated version of this software known
as KS2\@. Since our best coverage was in \mbox{H$\alpha$}, we did the first several
iterations of star finding on those images. In order to capture very
faint stars on both the main sequence and the white dwarf sequence,
we followed this with an iteration of star finding using the \mbox{$R_{625}$}\
exposures alone and then a final iteration using the \mbox{$B_{435}$}\ and \mbox{$R_{625}$}\
exposures combined. A total of 68,439 stars were detected in a
mosaic that covers the half-mass radius (115\arcsec) and somewhat
beyond, reaching to 150\arcsec\ in the corners.
KS2 uses multiple methods for measuring magnitudes. For stars that
are well exposed in individual images, we adopted KS2 photometry
derived from fitting the PSF to stars in the individual images and
averaging the results using sigma clipping to remove outlying values
due to cosmic rays, defective pixels, etc. For faint stars, we
adopted the KS2 photometry derived from a simultaneous fit of the PSF
to all exposures (see \citealt{Anderson08} for details). In the
\mbox{$B_{435}$}\ and \mbox{$R_{625}$}\ bands, photometry was performed separately for the short
and long frames.
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f3.png}
\figcaption{Proper-motion cleaned color-magnitude diagrams for stars
within the half-mass radius of NGC~6752 and CV identifications. The
candidates have been selected based on their blue color and/or
\mbox{H$\alpha$}\ excess. Open symbols indicate less certain CV identifications,
either due to a weak or absent \mbox{H$\alpha$}\ excess and/or uncertain
photometry. Note that in the right panel, the bright CVs mostly lie
to the \mbox{H$\alpha$}-excess side of the MS, while the faint CVs mostly lie to
the \mbox{H$\alpha$}-excess side of the WD clump, which itself lies to the
\mbox{H$\alpha$}-deficit side of the MS. All candidate counterparts are shown,
independent of their proper-motion status. The counterparts to CX3,
CX10, and CX15 have proper motions that are consistent with the
extragalactic frame. The counterpart to CX29 has a proper motion
that is not consistent with either the cluster frame nor the
extragalactic frame. Several other counterparts have undetermined
proper motions (see Table~\ref{t:counterparts}).
\label{f:CMD_CV}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f4.png}
\figcaption{Proper-motion cleaned color-magnitude diagrams for stars
within the half-mass radius of NGC~6752 and AB identifications. The
candidates have been selected based on their red color and generally
small \mbox{H$\alpha$}\ excess. All candidate counterparts are shown, independent
of their proper motion status. The two counterparts to CX8 have an
apparent proper motion that is inconsistent with the cluster mean.
\label{f:CMD_AB}}
\end{figure*}
Photometric calibration to the {\small VEGAMAG} system was performed
by doing aperture photometry on moderately bright, isolated stars
within a 0\farcs1 radius aperture, finding the aperture correction to
an infinite radius aperture from \citet{Sirianni05}, calculating the
median offset between the KS2 photometry and the aperture photometry,
and applying the calibrations from the \emph{HST}\ ACS calibration
website.\footnote{http://www.stsci.edu/hst/acs/analysis/zeropoints} We
produced drizzle-combined mosaic images using the STScI PyRaF routine
{\tt astrodrizzle} from the {\tt drizzlepac} package. The
drizzle-combined images were oversampled by a factor of two, in order
to increase the effective resolution. The resulting supersampled
mosaics have an approximately 12,000$\times$12,000 format and cover an
approximately circular field of diameter 5\arcmin\ with a pixel scale
of 0\farcs025. Figures \ref{f:mosaic} and \ref{f:mosaic_zoom} show
the drizzle-combined mosaic of 24 \mbox{H$\alpha$}\ frames, together with error
circles for the 39 \emph{Chandra}\ sources within the half-mass radius.
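The calibration chain described above (aperture photometry on bright isolated stars, aperture correction to infinite radius, median offset to the PSF-fit scale, zeropoint) can be sketched as follows; the zeropoint and aperture-correction values are placeholders, not the actual ACS calibration numbers:

```python
import numpy as np

ZP_VEGAMAG = 25.731   # hypothetical VEGAMAG zeropoint, not the real ACS value
AP_CORR = -0.095      # hypothetical 0.1"-to-infinity aperture correction

def calibrate(psf_instr, psf_instr_cal, aper_instr_cal):
    """Calibrate PSF-fit instrumental magnitudes to VEGAMAG.

    psf_instr      : PSF-fit instrumental magnitudes of all stars
    psf_instr_cal  : PSF-fit magnitudes of the bright, isolated calibrators
    aper_instr_cal : 0.1"-aperture magnitudes of the same calibrators
    """
    offset = np.median(aper_instr_cal - psf_instr_cal)
    return np.asarray(psf_instr) + offset + AP_CORR + ZP_VEGAMAG

# Synthetic check: the PSF scale sits 0.30 mag brighter than the aperture scale.
rng = np.random.default_rng(0)
aper = rng.uniform(-12.0, -8.0, 50)
psf = aper - 0.30 + rng.normal(0.0, 0.01, 50)
mags = calibrate(np.array([-10.0]), psf, aper)
print(f"{mags[0]:.3f}")
```

Using the median offset rather than the mean makes the tie to the aperture scale robust against a few blended or variable calibration stars.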
Color-magnitude diagrams (CMDs) were constructed
using the \mbox{$R_{625}$}\ magnitudes, and the \mbox{$\B\!-\!\R$}\ and \mbox{$\ha\!-\!\R$}\ color indices.
For faint stars ($\mbox{$R_{625}$} > 21$), we used the photometry derived from
simultaneous fits to all of the long exposures. For
intermediate-brightness stars ($18 < \mbox{$R_{625}$} < 21$), we used photometry
derived from measurements in individual long exposures. For brighter
stars ($\mbox{$R_{625}$} < 18$), most of which are saturated in the long exposures,
we used photometry derived from individual short exposures. The very
brightest stars ($\mbox{$R_{625}$} \lesssim 14.2$) are saturated even in short
exposures, and are thus less well measured.
Figures \ref{f:CMD_CV} and \ref{f:CMD_AB} show the
resulting CMDs, where proper-motion cleaning has been applied to
filter out field stars (see \S\ref{astrometry}). Without
proper-motion cleaning, the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD reaches deepest for the
bluest stars, since the faintest red main-sequence (MS) stars are
below the detection limit in \mbox{$B_{435}$}. The upper part of the white dwarf
(WD) cooling sequence is clearly detected in the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD,
extending to nearly 10 mag below the MSTO in \mbox{$R_{625}$}. There is also a
suggestion of a second WD sequence above the primary
carbon-oxygen-core WD sequence, which \citet{Hernandez13} interpreted as
a helium-core WD sequence.
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f5.png}
\figcaption{Proper-motion component distribution for stars in the
magnitude range $18 \le \mbox{$R_{625}$} < 23$. The zero-point corresponds to the
systemic cluster motion. The red point and error bars indicate the
mean and standard error of the mean of the measured proper motions
of eight elliptical galaxies, six of which are in the mosaic field
and two of which are in a separate outer field of NGC 6752. The nine
objects indicated by blue numbers are initially selected CV
candidate counterparts that fall in this magnitude range. The CX2,
CX4, CX5, CX7, CX21, and CX35 counterparts are clearly consistent
with cluster membership, while the CX3, CX10, and CX15 counterparts
have a discordant proper motion that is consistent with the mean
galaxy proper motion. We thus interpret these sources as being
likely AGNs, as discussed in the text. The CX29 counterpart (not
shown) also has a discordant proper motion; however, this may well be
compromised by the presence of a much brighter neighbor (see
Fig.~\ref{f:finding_charts}).
\label{f:PM}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f6.png}
\figcaption{Proper-motion component distribution for stars in the
magnitude range $21 \le \mbox{$R_{625}$} < 23$ with the proper motions of 10
galaxies overplotted as blue and magenta dots. The zero-point
corresponds to the systemic cluster motion. The red point and error
bars indicate the mean and standard error of the mean of the eight
brightest galaxies (blue dots), six of which are in the mosaic field
and two of which are in a separate outer field of NGC 6752. The two
faintest galaxies (magenta dots) have discordant proper motions
relative to the clump defined by the eight brightest galaxies and
thus were not included in the average. The two numbered dots are the
candidate counterparts to sources CX26 and CX39 (see
Fig.~\ref{f:finding_charts}).
\label{f:galaxy_PM}}
\end{figure*}
\subsection{Astrometry\label{astrometry}}
The drizzle-combined \emph{HST}\ ACS/WFC mosaic for each filter was
rectified to the ICRS using approximately 600 astrometric standards
from the USNO UCAC3 catalog. The RMS residual of the plate solution
was 0\farcs09 in each coordinate. We determined a boresight
correction for the \emph{Chandra}\ source coordinates from
\citet{Forestell14} by computing the mean offsets between the
\emph{HST}\ and \emph{Chandra}\ coordinates for three of the brightest CVs from
\citet{Pooley02}, viz.\ sources CX2, CX3, and CX4. The total shift in
the coordinates between the \emph{Chandra}\ and \emph{HST}\ systems was
approximately 0\farcs1.
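A minimal sketch of the boresight correction described above, assuming offsets small enough that a flat tangent-plane treatment with a $\cos\delta$ factor suffices; the positions in the example call are fabricated, not the actual source coordinates:

```python
import numpy as np

def boresight_shift(hst_radec, cx_radec):
    """Mean (HST - Chandra) offset (dRA*cos(Dec), dDec) in arcsec.

    Both inputs are (N, 2) arrays of matched (RA, Dec) pairs in degrees.
    """
    hst, cx = np.asarray(hst_radec, float), np.asarray(cx_radec, float)
    cosd = np.cos(np.radians(cx[:, 1]))
    d_ra = (hst[:, 0] - cx[:, 0]) * cosd * 3600.0
    d_dec = (hst[:, 1] - cx[:, 1]) * 3600.0
    return d_ra.mean(), d_dec.mean()

def apply_shift(cx_radec, shift):
    """Shift X-ray positions (degrees) onto the optical frame."""
    cx = np.asarray(cx_radec, float).copy()
    cosd = np.cos(np.radians(cx[:, 1]))
    cx[:, 0] += shift[0] / 3600.0 / cosd
    cx[:, 1] += shift[1] / 3600.0
    return cx

# Fabricated single matched pair, degrees:
print(boresight_shift([[287.713, -59.987]], [[287.7129, -59.9871]]))
```

Averaging over several secure, bright matches suppresses the individual centroiding errors of any single pair.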
We searched for optical counterparts to the \emph{Chandra}\ sources by
overlaying the \emph{Chandra}\ error circles on the drizzle-combined
\emph{HST}\ image mosaics, with the boresight correction applied to the
\emph{Chandra}\ source positions from \citet{Forestell14}. Since the
uncertainty in the optical positions $(\lesssim 0\farcs1$) was small
compared with the uncertainty in the X-ray positions ($\lesssim
0\farcs3$), we neglected the contribution of the former to the total
positional uncertainty.
In order to test candidate counterparts for cluster membership, we
computed proper motion components for all of the objects in the region
covered by the image mosaic. The present dataset was used for the
reference epoch. We used the GO-10775 (PI: Sarajedini) dataset for
the second epoch. It was obtained with the ACS/WFC in 2006 May and
covers very nearly all of the half-mass radius region. With a mean
epoch for our dataset of 2011 Aug, the two epochs provide a 5.3-year
time baseline for proper motion determinations.
The distribution of proper motion components depends on the magnitudes
of the stars for which the proper motions are measured, since
measurement uncertainties become dominant for the fainter stars.
Thus, we compute the amplitude of the proper motion for each of the
candidate counterparts and compare it to the RMS proper motion
amplitude for stars of similar magnitude. We judge membership to be
unlikely for stars with proper motion amplitudes that exceed the RMS
amplitude by more than a factor of three. Figure~\ref{f:PM} shows the
proper motion component distribution for stars in the wide magnitude
range $18 \le \mbox{$R_{625}$} < 23$; the nine initially selected CV candidates
in this range with measured proper motions are indicated. Six of the
candidates have proper-motion amplitudes that are clearly consistent
with membership, while three---the counterparts to CX3, CX10, and
CX15---are clearly discordant. It is striking that these three objects
have nearly the same proper motion, suggesting that they might, in
fact, be background active galactic nuclei (AGNs).
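The membership criterion described above (flagging objects whose proper-motion amplitude exceeds three times the RMS amplitude of similar-magnitude stars) can be sketched as follows, with a synthetic reference sample in place of the measured proper motions:

```python
import numpy as np

# Sketch of the membership test described above; the reference sample of
# similar-magnitude stars is synthetic (Gaussian scatter of 0.3 mas/yr per
# component about the cluster mean motion).
def is_pm_member(pm_x, pm_y, ref_pm_x, ref_pm_y, nsigma=3.0):
    """True if the PM amplitude is within nsigma times the reference RMS."""
    amp = np.hypot(pm_x, pm_y)
    rms = np.sqrt(np.mean(np.square(ref_pm_x) + np.square(ref_pm_y)))
    return bool(amp <= nsigma * rms)

rng = np.random.default_rng(1)
ref_x = rng.normal(0.0, 0.3, 5000)
ref_y = rng.normal(0.0, 0.3, 5000)
print(is_pm_member(0.2, -0.1, ref_x, ref_y))   # cluster-like motion
print(is_pm_member(3.5, 2.0, ref_x, ref_y))    # discordant: likely field/AGN
```

Binning the reference stars by magnitude before computing the RMS, as done in the analysis, keeps the threshold from being inflated by the larger measurement errors of the faintest stars.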
To investigate this possibility, we determined the absolute proper
motion of NGC 6752 by locating a sample of galaxies in our image
mosaic and also in an outer field that was imaged by the ACS/WFC in
2004 and 2012. We used the SExtractor software \citep{Bertin96} to
detect objects in both fields and filtered the list of detected
objects on a combination of ``stellarity'' index, ellipticity, PSF
FWHM, and the difference between the maximum surface brightness and
the Kron magnitude\footnote{This difference measures the central
concentration of the object image and provides another means of
star/galaxy discrimination \citep{Annunziatella13}.} to generate a
list of candidate galaxies. This list includes both galaxies and
bright-star PSF artifacts. In order to filter out the latter, we
visually inspected each of the several hundred galaxy candidates and
selected the obvious galaxies. This resulted in a list of 10 galaxies
with measured proper motions, eight in the central region and two in
the outer region. The result of this analysis is shown in
Figure~\ref{f:galaxy_PM}, where the galaxy proper motions are
overplotted on the distribution of proper motions for all objects in
the magnitude range $21 \le \mbox{$R_{625}$} \le 23$. With the exception of two
outliers, the galaxies fall in a compact clump, which defines the
absolute proper-motion zero-point. The two outliers are, in fact, the
faintest two galaxies in the sample and thus likely have the least
reliably determined proper motions. The mean galaxy proper motion is
plotted in both Figs.~\ref{f:PM} and \ref{f:galaxy_PM}. It is clear
from these figures that the proper motions of the proposed
counterparts to CX3, CX10, and CX15 agree with the galaxy mean. Given
this agreement in proper motion, it appears highly likely that CX3,
CX10, and CX15 are actually background AGNs
rather than CVs in the cluster. Further evidence for this
interpretation is given in \S\ref{source_types}.
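The catalog-level star/galaxy screen described above can be sketched as follows, using standard SExtractor output column names; the thresholds are illustrative, not those used in the actual analysis, and the ellipticity cut is omitted for brevity:

```python
import numpy as np

# Sketch of the catalog-level star/galaxy screen; column names follow
# SExtractor conventions, but the thresholds are illustrative and the
# ellipticity cut used in the actual analysis is omitted for brevity.
def galaxy_candidates(cat):
    """Boolean mask of extended, diffuse sources in a SExtractor catalog."""
    concentration = cat["MU_MAX"] - cat["MAG_AUTO"]   # central concentration
    return ((cat["CLASS_STAR"] < 0.2)      # low "stellarity" index
            & (cat["FWHM_IMAGE"] > 4.0)    # broader than the stellar PSF (px)
            & (concentration > 2.0))       # diffuse relative to total light

cat = np.array(
    [(0.98, 2.5, 14.0, 13.0),    # bright star
     (0.05, 7.0, 21.5, 18.6)],   # extended galaxy-like object
    dtype=[("CLASS_STAR", "f4"), ("FWHM_IMAGE", "f4"),
           ("MU_MAX", "f4"), ("MAG_AUTO", "f4")],
)
print(galaxy_candidates(cat))
```

Because such cuts still admit bright-star PSF artifacts, a final visual inspection of the surviving candidates, as performed in the analysis, remains essential.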
\section{Results}
\subsection{\emph{Chandra}\ Source Identification \label{source_ID}}
We used the optical CMDs to detect likely \emph{Chandra}\ source
counterparts and investigate their properties. For each of the 39
\emph{Chandra}\ sources within the half-mass radius, we checked the CMD
locations of all objects that fell within a distance of the maximum of
2.5 times the error circle radius and 0\farcs3. The rationale for
choosing this search region size is that the formal {\tt pwdetect}
error circle radii are quite small for the brightest sources ($\sim
0\farcs02$) and in some of these cases potential candidates were
located near, but not within, the actual error circle. We note that
\citet{Hong05} have observed that wavelet detection algorithms
systematically underestimate positional uncertainty. Their
prescription for determining 95\% positional uncertainty produces an
asymptotic lower-limiting value of about 0\farcs3 for an on-axis
source.
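The search-region rule above amounts to a simple floor on the formal error radius; a one-function sketch:

```python
# The search-region rule described above: 2.5x the formal error-circle
# radius, floored at 0.3 arcsec to guard against the underestimated
# positional uncertainties of wavelet detections for bright sources.
def search_radius(r_err_arcsec):
    """Counterpart search radius in arcsec: max(2.5 * r_err, 0.3)."""
    return max(2.5 * r_err_arcsec, 0.3)

print(search_radius(0.02))   # bright on-axis source: the floor applies
print(search_radius(0.24))   # faint source: 2.5 * r_err dominates
```

For the brightest sources the floor dominates, so their effective search areas are set by the \citet{Hong05} asymptotic limit rather than by the formal {\tt pwdetect} errors.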
\newcommand{\nodata}{\nodata}
\newcommand{\nointerlineskip\raggedright\hangindent 1.5ex \hangafter 1}{\nointerlineskip\raggedright\hangindent 1.5ex \hangafter 1}
\newcommand{\notebox}[1]{\parbox[t]{1.25in}{\nointerlineskip\raggedright\hangindent 1.5ex \hangafter 1 #1\vskip 2pt}}
\newcommand{\tablenotemark{i}}{\tablenotemark{i}}
\tabletypesize{\scriptsize}
\startlongtable
\begin{deluxetable*}{llcccclccccl}
\tablecolumns{13}
\tablewidth{8in}
\tablecaption{\textbf{Optical Counterpart Summary}\label{t:counterparts}}
\tablehead{
\colhead{Source\tablenotemark{a}} &
\colhead{RA, Dec (J2000)\tablenotemark{b}} &
\colhead{$r_{\rm err}~('')$\tablenotemark{c}} &
\colhead{$r~(')$\tablenotemark{d}} &
\colhead{$N_{\rm detect}$\tablenotemark{e}} &
\colhead{Offset\tablenotemark{f}} &
\colhead{Type\tablenotemark{g}} &
\colhead{PM\tablenotemark{h}} &
\colhead{\mbox{$R_{625}$}} &
\colhead{\mbox{$B_{435}$}} &
\colhead{$\mbox{H$\alpha$}$} &
\colhead{Notes}
}
\startdata
CX1 & 19:10:51.138 $-$59:59:11.92 & 0.01 & 0.17 & 2 & 1.8 & CV & \nodata & 19.38 & 19.46 & 19.01 & \\
CX2 & 19:10:56.005 $-$59:59:37.33 & 0.02 & 0.73 & 1 & 1.0 & CV & c & 19.22 & 20.36 & 18.60 & \\
CX3 & 19:10:40.375 $-$59:58:41.47 & 0.02 & 1.52 & 1 & 1.3 & AGN & f & 20.97 & 21.76 & 20.72 & \\
CX4 & 19:10:51.586 $-$59:59:01.73 & 0.02 & 0.08 & 2 & 0.8 & CV & c & 20.10 & 21.09 & 19.35 & \\
CX5 & 19:10:51.414 $-$59:59:05.18 & 0.02 & 0.09 & 1 & 1.6 & CV? & c & 18.57 & 19.65 & 18.33 & \\
CX6 & 19:10:51.505 $-$59:59:27.10 & 0.03 & 0.38 & 1 & 0.8 & CV? & \nodata & 23.87 & 24.07 & 23.78 & \notebox{very blue, small \mbox{H$\alpha$}\ excess} \\
CX7 & 19:10:51.511 $-$59:58:56.85 & 0.02 & 0.15 & 2 & 1.9 & CV & c & 20.94 & 22.01 & 19.91 & \\
CX8 & 19:11:02.969 $-$59:59:41.92 & 0.05 & 1.49 & 3 & 5.3 & AB? & \nodata & 18.34 & 20.91 & 17.54 & \notebox{two red, \mbox{H$\alpha$}-excess objects} \\
CX8 & & & & 3\tablenotemark{i} & 6.4 & AB? & \nodata & 18.48 & 21.06 & 17.67 & \\
CX9 & 19:10:51.766 $-$59:58:59.25 & 0.04 & 0.10 & 2 & 3.0 & CV? & \nodata & 22.42 & 24.40 & 22.01 & \notebox{somewhat blue, \mbox{H$\alpha$}\ excess in color-color diagram} \\
CX10 & 19:10:54.754 $-$59:59:13.86 & 0.06 & 0.37 & 1 & 1.2 & AGN & f & 19.70 & 20.16 & 19.47 & \\
CX11 & 19:10:52.408 $-$59:59:05.64 & 0.08 & 0.04 & 2 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{counterpart suggested by \citet{Pooley02} is not detected; only MS stars present in search area; MSP D} \\
CX13 & 19:10:40.610 $-$60:00:05.91 & 0.14 & 1.76 & 1 & 1.0 & CV & c & 24.32 & 24.60 & 23.38 & \\
CX14 & 19:10:52.063 $-$59:59:09.09 & 0.08 & 0.08 & 2 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{only MS stars present in search area} \\
CX15 & 19:10:55.847 $-$59:57:45.78 & 0.08 & 1.39 & 1 & 1.0 & AGN & f & 22.66 & 23.05 & 22.41 & \\
CX16 & 19:10:42.531 $-$59:58:43.03 & 0.13 & 1.25 & 1 & 1.1 & AB & c & 18.33 & 19.39 & 18.09 & \\
CX17 & 19:11:05.258 $-$59:59:04.42 & 0.16 & 1.65 & 1 & \nodata & GLX & \nodata & \nodata & \nodata & \nodata & \notebox{asymmetric, extended object; photometry not possible} \\
CX18 & 19:10:52.056 $-$59:59:03.71 & 0.14 & 0.02 & 2 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{only MS stars present in search area} \\
CX19 & 19:10:55.600 $-$59:59:17.33 & 0.16 & 0.49 & 3 & 1.8 & AB? & c & 17.58 & 18.35 & 17.37 & \notebox{normal \mbox{$\B\!-\!\R$}, small \mbox{H$\alpha$}\ excess} \\
CX20 & 19:10:52.848 $-$59:59:02.54 & 0.10 & 0.10 & 1 & 2.7 & AB & c & 20.20 & 21.99 & 19.87 & \\
CX21 & 19:10:49.516 $-$59:59:43.16 & 0.11 & 0.72 & 2 & 0.7 & CV? & c & 18.53 & 19.06 & 18.40 & \notebox{blue, small \mbox{H$\alpha$}\ excess in color-color diagram} \\
CX22 & 19:11:02.950 $-$59:57:58.91 & 0.13 & 1.74 & 0 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{empty search region} \\
CX23 & 19:10:52.546 $-$59:59:04.38 & 0.11 & 0.06 & 4 & 0.4 & CV? & c & 19.58 & 20.68 & 19.11 & \notebox{uncertain photometry} \\
CX24 & 19:10:52.670 $-$59:59:03.21 & 0.12 & 0.07 & 4\tablenotemark{i} & 3.4 & CV? & \nodata & 22.66 & 22.97 & 22.61 & \notebox{weakly detected in \mbox{$R_{625}$}\ and \mbox{H$\alpha$}} \\
CX25 & 19:10:51.957 $-$59:58:40.55 & 0.13 & 0.40 & 5\tablenotemark{i} & 3.9 & CV? & \nodata & 25.51 & 25.73 & 24.81 & \notebox{weakly detected in \mbox{$R_{625}$}\ and \mbox{H$\alpha$}} \\
CX26 & 19:10:39.162 $-$59:59:45.15 & 0.23 & 1.75 & 2 & 1.9 & GLX & f & 23.00 & 25.16 & 22.55 & \notebox{extended elliptical image} \\
CX27 & 19:10:52.059 $-$59:59:00.84 & 0.12 & 0.06 & 1\tablenotemark{i} & 3.7 & AB? & c & 20.13 & 21.65 & 19.83 & \notebox{normal \mbox{$\B\!-\!\R$}, small \mbox{H$\alpha$}\ excess; MSP B} \\
CX28 & 19:10:42.509 $-$59:59:44.46 & 0.22 & 1.37 & 0 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{empty search region} \\
CX29 & 19:10:52.293 $-$59:59:01.79 & 0.16 & 0.05 & 4 & 0.4 & CV? & f & 20.83 & 22.49 & 20.50 & \notebox{slightly blue, small \mbox{H$\alpha$}\ excess} \\
CX30 & 19:10:40.678 $-$59:58:39.61 & 0.19 & 1.49 & 0 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{faint object in wings of much brighter star; not detected by KS2} \\
CX31 & 19:10:50.514 $-$59:57:37.11 & 0.17 & 1.47 & 1 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{only red giant present in search area} \\
CX32 & 19:10:54.137 $-$59:59:11.04 & 0.17 & 0.28 & 1 & 1.7 & CV? & \nodata & 20.54 & 22.12 & 20.23 & \notebox{slightly blue, slight \mbox{H$\alpha$}\ excess} \\
CX33 & 19:11:03.287 $-$59:58:01.31 & 0.19 & 1.75 & 0 & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \notebox{empty search region} \\
CX34 & 19:10:45.694 $-$59:58:20.09 & 0.20 & 1.09 & 1 & 1.6 & AB & c & 20.23 & 22.04 & 19.88 & \\
CX35 & 19:10:52.165 $-$59:59:16.73 & 0.19 & 0.20 & 2 & 0.4 & CV? & \nodata & 24.11 & 24.89 & 23.35 & \notebox{uncertain photometry} \\
CX36 & 19:10:49.585 $-$59:58:26.41 & 0.20 & 0.71 & 1 & 2.2 & CV & \nodata & 24.85 & 24.93 & 23.81 & \\
CX37 & 19:10:50.509 $-$59:59:08.77 & 0.21 & 0.21 & 4 & 1.1 & AB? & c & 20.42 & 22.35 & 20.12 & \notebox{somewhat red, no \mbox{H$\alpha$}\ excess in color-color diagram} \\
CX38 & 19:11:02.151 $-$59:58:11.81 & 0.24 & 1.53 & 1 & 1.4 & AB? & c & 19.15 & 20.34 & 18.93 & \notebox{slightly red, slight \mbox{H$\alpha$}\ excess} \\
CX39 & 19:10:46.352 $-$59:57:49.92 & 0.22 & 1.44 & 1 & 1.8 & GLX & f & 22.26 & 24.86 & 21.89 & \notebox{extended elliptical image} \\
CX40 & 19:10:50.357 $-$59:59:13.89 & 0.27 & 0.27 & 7 & 2.2 & AB & c & 19.72 & 21.18 & 19.45 & \\
\enddata
\tablenotetext{a}{From \citet{Forestell14}.}
\tablenotetext{b}{\emph{Chandra}\ source positions from \citet{Forestell14}
have been boresight-corrected to align with the drizzled image
coordinate system, \\ which is tied to the ICRS.}
\tablenotetext{c}{Wavdetect error circle radius in arcsec. The search
area radius is $\max(2.5\, r_{\rm err},0.3'')$.}
\tablenotetext{d}{Projected distance from cluster center in arcmin.}
\tablenotetext{e}{Number of objects detected within $\max(2.5\, r_{\rm err},0.3'')$.}
\tablenotetext{f}{Offset of counterpart from X-ray position in units of $r_{\rm err}$.}
\tablenotetext{g}{%
CV = cataclysmic variable;
CV? = less certain CV identification for reason noted in table;
AB = active binary candidate; \\
AB? = less certain AB identification for reason noted in table;
GLX = galaxy;
AGN = active galactic nucleus.
}
\tablenotetext{h}{Proper-motion membership:
c = consistent with cluster;
f = consistent with field;
\nodata\ = no proper-motion measurement.}
\tablenotetext{i}{Preferred counterpart lies somewhat outside of formal search region.}
\end{deluxetable*}
\begin{figure*}
\epsscale{0.92}
\plotone{f7.pdf}
\figcaption{Finding charts for revised and new source
identifications. All charts are for the \mbox{H$\alpha$}\ band, with the
exception of those for sources CX24 and CX25, which are for the
\mbox{$B_{435}$}\ band, in which the counterpart was more strongly detected. North
is up and east is to the left. With the exception of the chart for
CX17, the regions shown are $1\farcs5 \times 1\farcs5$. For
CX17, the region is $4\farcs5 \times 4\farcs5$. The inner
circles represent the formal error circle and the outer circles
represent the search regions, which have a radius of the maximum of
$2.5\, r_{\rm err}$ and 0\farcs3. The arrows point to the
candidate counterparts. As discussed in \S\S\ref{astrometry} and
\ref{source_types}, the counterparts to CX3, CX10, and CX15, which
were previously classified as CVs, now appear to be AGN, based on
their proper motions and lack of an \mbox{H$\alpha$}\ excess.
\label{f:finding_charts}}
\end{figure*}
Objects that fell on the main sequence, subgiant branch, or giant
branch were considered to be unlikely counterparts, given the
relatively low X-ray to optical flux ratio, $f_X/f_{\rm opt}$, of
such stars, in contrast to the ranges for chromospherically active
binaries and cataclysmic variables. Table~\ref{t:counterparts}
summarizes the result of this counterpart
search. Figure~\ref{f:finding_charts} shows finding charts for
identifications that have changed or are new since the previous
studies by \citet{Pooley02} and \citet{Thomson12}.
\subsection{Source Types \label{source_types}}
Based on the location of the proposed counterparts in the CMDs, we
primarily assigned types of candidate cataclysmic variable (CV) and
candidate chromospherically active binary (AB)\@. Candidate CVs were
generally identified by being significantly to the blue of the MS
and/or having large \mbox{H$\alpha$}\ excesses (either relative to the MS or to the
WD sequence). ABs were defined as lying within $\sim0.75$~mag above
the MS (and thus within $\sim0.2$~mag to the red of the MS) and having
small \mbox{H$\alpha$}\ excesses ($\la 0.1$~mag), based on our previous analysis of
ABs in NGC~6397 \citep{Cohn10}.
Although our proposed counterpart to CX5 lies slightly to the red side
of the MS, its high $L_X = 1.1\times10^{32}~\mbox{${\rm erg}~{\rm s}^{-1}$}$ and moderately high
$f_X/f_{\rm opt} = 1.4$ tend to support its identification as a CV.\@
One possibility is that the object detected at the position of CX5 is
a MS star that is covering up the much fainter true counterpart.
However, examination of the stellar image does not indicate any
obvious asymmetry. Given the weak \mbox{H$\alpha$}~excess of CX5, we include it
among the less certain CV identifications.
In cases where only MS stars were present in the search region
(viz.\ CX11, CX14, CX18, and CX35) and in one case where a red giant
was present in the search area (CX31), we noted this in
Table~\ref{t:counterparts} and did not assign a counterpart. We note
that an AB with a low mass ratio and weak lines could look like a MS
star in both CMDs. In some of these cases, it may be that a bright
star near the \emph{Chandra}\ source location is covering up a much fainter
star.
Three objects, viz.\ sources CX17, CX26, and CX39, were classified as
background galaxies, based on the extended appearance of their images.
The \mbox{$R_{625}$}-band images of the counterparts to sources CX26 and CX39 resemble
elliptical galaxies in appearance, with a smooth elongation. The
image of the source CX17 counterpart has a more complex structure,
possibly suggestive of interacting galaxies. The two apparent
elliptical galaxies have moderate X-ray to optical flux ratios
of $\sim0.3 - 1$, consistent with normal galaxies (see
\S\ref{flux_ratio}). It was not possible to perform PSF-fitting
photometry on the source CX17 counterpart, given its rather amorphous
structure without a central nucleus, and thus we did not compute an
X-ray to optical flux ratio for it. As noted by \citet{Pooley02},
this source coincides with a radio source, which further supports its
identification as a galaxy.
Three objects that have been previously classified as CVs,
viz.\ sources CX3, CX10, and CX15, were ultimately classified as
background AGNs, based on their proper motions, which are discordant
with the cluster distribution but in agreement with the galaxy mean
(which included the proper motion of the counterpart to
CX39).\footnote{The counterpart to CX26 has a proper motion discordant
with those of the eight galaxies used to calculate the galaxy mean. We
attribute this to a poorly determined proper motion for this object.}
The deep imaging in this study indicates that these three source
counterparts are pointlike, without any hint of extension. This is
consistent with an AGN interpretation. As can be seen in
Fig.~\ref{f:fx_fopt}, these three sources have moderate to high X-ray
to optical flux ratios of $\sim0.3 - 20$, consistent with
AGNs. Examination of the (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD indicates that the counterparts
to these three sources do not show an \mbox{$\ha\!-\!\R$}\ excess although all three
counterparts are blue in the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD\@. This is further evidence
for the AGN interpretation, as AGN would not be expected to show a
zero-redshift \mbox{H$\alpha$}\ excess.
Figure~\ref{f:CMD_CV} shows the location of the likely or less certain
CV candidates in the CMDs. As we noted in \citet{Cohn10} for the CV
population in NGC~6397, there is a brighter group of CV candidates
that generally lie blueward of the main sequence in \mbox{$\B\!-\!\R$}\ and generally
show \mbox{H$\alpha$}\ excesses, and a fainter group of CV candidates that are
distributed around the WD cooling sequence in the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD and
generally show \mbox{H$\alpha$}\ excesses relative to the WD sequence. All but two
of the 11 brightest CV candidates lie 0.1 -- 1.3 mag to the blue of
the MS in the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD; the candidate counterpart to source CX2
lies on the MS and the counterpart to CX5 lies slightly to the red of
the MS\@. As we noted in \citet{Cohn10}, the optical emission of the
bright CV systems appears to be dominated by the secondary in the
\mbox{$R_{625}$}\ band, with a larger contribution from the disk in the
\mbox{$B_{435}$}\ band. The fairly high \mbox{$R_{625}$}-band fluxes indicate that the secondaries
are relatively massive, $\sim 0.5-0.7~\mbox{$M_\odot$}$. Five of the nine
brightest CV candidates, viz.\ sources CX1, CX2, CX4, CX7, and CX23,
show \mbox{H$\alpha$}\ excesses of 0.2 -- 0.8 mag relative to the MS\@. Of the
remaining four bright CV candidates, sources CX5, CX29, and CX32 show a
small \mbox{H$\alpha$}\ excess, and source CX21 is slightly to the \mbox{H$\alpha$}\ deficit
side of the MS\@. Though CX21 does not show an \mbox{H$\alpha$}\ excess, its
proper motion verifies that it is a cluster member, and it therefore
cannot be an AGN. Further investigation of the \mbox{H$\alpha$}-status of the
proposed CV counterparts using the color-color diagram is described
below.
About 1.5 mag below the faintest of the bright CV candidates are two
possible transitional objects, the counterparts to sources CX9 and
CX24\@. The candidate counterpart to CX9 has a small \mbox{$\B\!-\!\R$}\ excess and
lies on the MS in the (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD\@. The candidate counterpart to
CX24 lies near the WD regions in both the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) and (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$})
CMDs\@.
About 1.5 mag below the proposed CX9 and CX24 counterparts lie
five faint CV candidates. These fall on or near the WD sequence in
the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD\@. Four of the five, viz.\ the counterparts to
sources CX13, CX25, CX35, and CX36, show \mbox{H$\alpha$}\ excesses relative to the
normal WD sequence in the (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD\@. The remaining faint CV
candidate, source CX6, is found among the main distribution of white
dwarfs in its \mbox{$\ha\!-\!\R$}\ color index. As we noted in \citet{Cohn10}, the
optical fluxes for the faint CV candidates in NGC~6397 are dominated
by the contribution of the WD\@. The \mbox{H$\alpha$}\ excess that four of the
five faint CV candidates show relative to the WD distribution suggests
that the faint CVs have a strong \mbox{H$\alpha$}-emission core (due to an
accretion disk) within the broad absorption lines of the WD continuum.
The five faint CV candidates in NGC~6752 are clustered near $\mbox{$R_{625}$} \sim
24.5$, corresponding to $M_R \sim 11.4$, similar to the magnitude of
the clump of faint CVs observed in NGC~6397. As we
noted, this characteristic magnitude is also similar to that of the
sharp peak in the field CV distribution observed in the Sloan Digital
Sky Survey by \citet{Gaensicke09}. The peak in the field distribution
occurs near the CV period minimum ($P\approx \mbox{80 -- 86}\,{\rm
min}$). Thus, it appears likely that the faint CVs that we are
observing in NGC~6752 also correspond to the period-minimum
population. We can also compare the X-ray luminosity to detailed
simulations of CV evolution in globular clusters. We use the
simulations of \citet{Ivanova06}, which predict the optical and X-ray
flux of CVs in globular clusters throughout their evolution. In the
range $M_V = 11-12$, we find that 80\% of their simulated CVs are
predicted to have periods below the period gap, but not yet increasing
in period, with small numbers of CVs above the period gap,
``period-bouncers'' with degenerate hydrogen-rich companions, and AM
CVn stars. The predicted $L_X$ range for these populations is
$L_X(0.5-8\,{\rm keV}) = 5\times10^{29}-3\times10^{31}\,\mbox{${\rm erg}~{\rm s}^{-1}$}$, and
the median $L_X$ is $\sim 4\times10^{30}\,\mbox{${\rm erg}~{\rm s}^{-1}$}$; these well match the
$L_X$ range of these five candidate faint CVs,
$7\times10^{29}-5\times10^{31}\,\mbox{${\rm erg}~{\rm s}^{-1}$}$, and their median of
$2\times10^{30}\,\mbox{${\rm erg}~{\rm s}^{-1}$}$ (from \citealt{Forestell14}).
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f8.png}
\figcaption{Proper-motion cleaned color-color diagram for stars within
the half-mass radius of NGC~6752 and CV identifications. The
candidates are the same as in Fig.~\ref{f:CMD_CV}. Open symbols
indicate less certain CV identifications. The red line is a linear
regression of \mbox{$\ha\!-\!\R$}\ on \mbox{$\B\!-\!\R$}\ over the range $-0.5 \le \mbox{$\B\!-\!\R$} \le
2.2$. All stars brighter than $\mbox{$R_{625}$} = 15.5$ have been excluded, since
saturation effects set in at the bright end. The blue end of the
color-color relation is populated by stars on the extreme blue
horizontal branch. Note that all of the candidates except the
counterpart to CX24 lie below (i.e.\ to the \mbox{H$\alpha$}-excess side of) the
color-color relation.
\label{f:CCD_CV}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f9.png}
\figcaption{Proper-motion cleaned color-color diagram for stars within
the half-mass radius of NGC~6752 and AB identifications. The
candidates are the same as in Fig.~\ref{f:CMD_AB}. Note that all of
the candidates except the counterpart to CX37 lie below (i.e.\ to
the \mbox{H$\alpha$}-excess side of) the color-color relation.
\label{f:CCD_AB}}
\end{figure*}
Further insight into the \mbox{H$\alpha$}-status of the CV candidates in NGC~6752 is
provided by the color-color diagram, Fig.~\ref{f:CCD_CV}. It can be
seen that there is a nearly linear relation between the two colors, as
indicated by the red line that represents a linear regression of \mbox{$\ha\!-\!\R$}\ on
\mbox{$\B\!-\!\R$}\ over the range $-0.5 \le \mbox{$\B\!-\!\R$} \le 2.2$. All of the proposed
counterparts, with the exception of those to sources CX6, CX21, and
CX24, fall significantly below the relation. Thus, in some cases where
the candidate does not lie to the \mbox{H$\alpha$}-excess side of the MS in the
(\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD, it still shows a significant \mbox{H$\alpha$}-excess relative to
other objects of its \mbox{$\B\!-\!\R$}\ color. The proposed counterparts to CX6 and
CX21 fall very slightly to the \mbox{H$\alpha$}-excess side of the color-color
relation, while the proposed counterpart to CX24 falls a small
distance to the \mbox{H$\alpha$}-deficit side of the relation, though all three
counterparts are consistent with the MS color-color relation, within
the \mbox{$\ha\!-\!\R$}\ scatter at their \mbox{$R_{625}$}\ magnitudes ($\sim0.1$\,mag at CX24's
$\mbox{$R_{625}$}=22.66$).
We can think of several scenarios to explain these systems without
clear \mbox{H$\alpha$}\ excesses or deficits. There might be only small
\mbox{H$\alpha$}\ emission, which is not strong enough to dominate over the
\mbox{H$\alpha$}\ absorption. For example, CX21 might be interpreted as a bright,
nova-like CV, with an accretion flow that is mostly optically thick,
leading to weak \mbox{H$\alpha$}\ emission. Perhaps there is no accretion disk,
which would suggest a radio millisecond pulsar nature. CX24 indeed has
a small X-ray luminosity and soft X-ray color, consistent with the
X-rays from most radio millisecond pulsars
(e.g.\ \citealt{Bogdanov11}). However, CX6 is brighter and has a
harder X-ray spectrum. This is consistent with radio millisecond
pulsars showing strong shocks, the redbacks and black widows
(e.g.\ \citealt{Gentile14,Bogdanov10}), but the optical data show a He
WD companion, apparently ruling out a millisecond pulsar nature for
CX6. CX21's optical counterpart, however, is consistent with a redback
millisecond pulsar, as are its low X-ray luminosity and moderately soft
X-ray hardness. Finally, any or all of these three objects could be AM CVn
stars, where a He WD (that has lost its outer H layer) donates mass to
(typically) a CO WD\@. In such a system, optical flux could come from
the accreting WD and/or the accretion disk, neither of which would
show hydrogen lines in emission or absorption. Such systems have been
predicted to be present in globular clusters (e.g.\ by
\citealt{Ivanova06}) and evidence for the detection of an AM CVn
binary in NGC 1851 has been presented by \citet{Zurek16}.
Figure~\ref{f:CMD_AB} shows the location of the objects identified as
AB stars in the CMDs\@. As in NGC~6397, these objects sometimes lie
close to the edge of the MS in either the right or left panel, but
deviate from the MS by a larger amount in the other panel. Of the
nine AB counterparts listed in Table~\ref{t:counterparts}, eight are
newly identified in our study. This increase in the detection rate
appears to result both from the deeper \emph{Chandra}\ imaging dataset that
we used, compared with that of \citet{Pooley02}, and the increased
photometric precision possible with the ACS/WFC vs.\ the WFPC2, which
allows us to identify objects that deviate by small amounts from the
MS\@. As seen in NGC~6397 and 47~Tuc, the ABs are likely to be a
mixture of BY Draconis stars, W Ursae Majoris stars, and other contact
binaries \citep[see][]{Albrow01,Taylor01,Cohn10}.
Figure~\ref{f:CCD_AB} shows the location of the objects identified as
AB stars in the color-color diagram. As can be seen in the figure, all
of the AB candidates except for the counterpart to CX37 fall below the
line that represents the linear regression to the color-color
relation. As in NGC~6397, the \mbox{H$\alpha$}\ excesses of the AB candidates are
typically much smaller than those of the CV candidates. This is
consistent with the findings of \citet{Kafka06} and \citet{Beccari14}
that chromospherically active binaries do not show \mbox{H$\alpha$}\ equivalent
widths in excess of about 10\,\AA\@. An apparent exception is the
object pair that is the proposed counterpart to CX8, which falls well
below the linear regression, as discussed and explained below.
The faintest ABs that are detected in the X-ray in NGC~6752 have $\mbox{$R_{625}$}
\approx 20.5$, corresponding to $M_R \approx 7.4$. At this magnitude,
we reach the X-ray detection limit of $L_X \approx 5 \times
10^{29}~\mbox{${\rm erg}~{\rm s}^{-1}$}$. In contrast, the AB sequence that we detected in
NGC~6397 reaches a deeper limit of $M_R \approx 9.7$, owing to the
deeper X-ray detection limit of $L_X \approx 9 \times 10^{28}~\mbox{${\rm erg}~{\rm s}^{-1}$}$.
Given deeper X-ray imaging, it should be possible to trace the AB
sequence farther down the MS of NGC~6752. New, deeper
\emph{Chandra}\ observations are now scheduled for 2017.
Our proposed counterpart to source CX8 stands out for its very red
color and large apparent \mbox{H$\alpha$}\ excess. There are actually two
photometrically similar stars located 0\farcs2 apart, within about
0\farcs3 of the X-ray source position (see
Fig.~\ref{f:finding_charts}; the photometry for these stars is plotted
in Fig.~\ref{f:CMD_AB}). We found similarly discordant stars in
NGC~6397 and interpreted them as likely foreground ABs superposed on
the cluster, with distance moduli that put them well above the
fiducial sequences in the CMDs \citep[][see their \S4.2 and
Fig.~4]{Cohn10}. The vertical offset for the possible CX8
counterparts, roughly estimated at $3.6-4.6$ magnitudes (taking into
account the opposite effects of metallicity and reddening on NGC~6752
stars), suggests a distance $5-8$ times closer, or $500-800$\,pc. At
this distance, the two stars are $100-160$\,AU in projection from each
other, making them likely to be bound, but the separation is much too
large to produce chromospheric activity. A plausible explanation for
the activity (which we explore further below) is that the two stars
are each an unresolved binary. (We note that explaining the
activity as due to youth is rather unlikely, since only a tiny
fraction of field M stars are likely to be young enough to have
substantial chromospheric activity, while many M stars are likely to
reside in tight binaries.)
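The distance and separation estimates above follow from simple scaling relations, sketched below. The cluster distance of 4.0~kpc is our assumption here (a round value consistent with standard estimates for NGC~6752, not stated in this section); the CMD offsets of 3.6 and 4.6~mag and the 0\farcs2 pairwise separation are from the text.

```python
import math

def closer_by(delta_mag):
    """Distance ratio implied by a vertical CMD offset of delta_mag:
    a star delta_mag brighter at fixed color is 10**(delta_mag/5) closer."""
    return 10 ** (delta_mag / 5.0)

D_CLUSTER_PC = 4000.0  # assumed distance to NGC 6752 (~4 kpc)

for dm in (3.6, 4.6):
    ratio = closer_by(dm)
    d_pc = D_CLUSTER_PC / ratio
    # 1 arcsec at a distance of d pc subtends d AU, so 0.2" -> 0.2 * d_pc AU
    sep_au = 0.2 * d_pc
    print(f"dm={dm}: {ratio:.1f}x closer, d={d_pc:.0f} pc, sep={sep_au:.0f} AU")
```

With these inputs the offsets map to distances of roughly 480--760~pc and projected separations of roughly 100--150~AU, matching the ranges quoted above.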
We note that even without actual \mbox{H$\alpha$}\ emission, M star spectra have a
peak at the location of the \mbox{H$\alpha$}\ line, owing to the presence of TiO
bands to either side of this wavelength. We have used the {\tt
synphot} utility in {\small IRAF/STSDAS} to estimate the apparent
\mbox{$\ha\!-\!\R$}\ excess for a normal M5V star as a function of metallicity. We
used the \citet{Castelli04} spectra for metallicity values of $-1.5$
(appropriate to NGC~6752), 0.0 (solar), and 0.5 (super solar), with
$T_{\rm eff} = 3500\,{\rm K}$ and $\log(g) = 5.0$. The predicted
values of \mbox{$\ha\!-\!\R$}\ for these three metallicities are $-0.41$, $-0.58$, and
$-0.59$, respectively. We note that our proposed counterparts to CX8
have a \mbox{$\ha\!-\!\R$}\ value of $-0.80$, suggesting that there is some actual
\mbox{H$\alpha$}\ emission, although about 75\% of the apparent \mbox{$\ha\!-\!\R$}\ excess can be
accounted for as a consequence of the spectral features of a normal
mid-M dwarf star that is part of the disk population. We calculated an
equivalent width (EW) corresponding to the residual \mbox{$\ha\!-\!\R$}\ excess of
$-0.2$, following the procedure of \citet{Beccari14}, finding a value
of EW(\mbox{H$\alpha$}) = 13\,\AA. This is similar to the values they found for
early M stars registering an \mbox{H$\alpha$}\ excess in 47~Tuc.
While the two stars do not have a formal proper motion determination
from KS2, it is clear from direct measurement that they have a similar
and quite substantial proper motion, about 27 mas~yr$^{-1}$. For the
distance range of $500-800$\,pc estimated above, this corresponds to a
transverse velocity of $64 - 100\,\kms$. This is a high velocity range
for the thin disk of the Galaxy, but is consistent with the velocity
range for the thick disk \citep[][p.\,656]{Binney98}.
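The transverse velocity quoted above follows from the standard conversion $v_t~[\kms] = 4.74\,\mu~[''\,{\rm yr}^{-1}]\,d~[{\rm pc}]$, which can be checked directly:

```python
def v_transverse_kms(mu_mas_per_yr, d_pc):
    """Transverse velocity: v [km/s] = 4.74 * mu [arcsec/yr] * d [pc]."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * d_pc

# 27 mas/yr at the 500-800 pc distance range estimated for CX8
for d in (500.0, 800.0):
    print(f"d={d:.0f} pc -> v_t = {v_transverse_kms(27.0, d):.0f} km/s")
```

This reproduces the $\sim$64--100\,\kms\ range stated in the text.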
We can gain further information about CX8 from the combined X-ray
spectrum and flux. \citet{Pooley02} noted that it has a very soft
spectrum, and suggested that a millisecond pulsar nature cannot be excluded.
\citet{Heinke03} suggested that the spectrum showed evidence for an
emission line, and that a foreground AB nature was the most likely
explanation for this system. We have performed spectral fits to the
combined Chandra data, extracted as described in \citet{Forestell14},
grouping by 15 counts/bin. We find that fitting power law or hydrogen
neutron star atmospheres \citep[NSATMOS,][]{Heinke06} with the cluster
$N_H$ provides unacceptably poor fits. Thawing the $N_H$ allows
decent fits, though we then find
$N_H=3.3^{+0.8}_{-0.7}\times10^{21}\,{\rm cm^{-2}}$ for the NSATMOS
fit, which is 10 times higher than the cluster value, a discrepancy
that would be difficult to explain for a millisecond pulsar. A single MEKAL
model \citep{Liedahl95} gives an unacceptable fit, but a double MEKAL
model (as typically found for ABs by e.g.\ \citealt{Dempsey97}) with
the cluster $N_H$ gives acceptable fits ($\chi^2=11.91$ for 9 degrees
of freedom). The inferred temperatures of $1.15^{+0.33}_{-0.18}$ and
$0.38^{+0.21}_{-0.12}$\,keV, the relative emission measures of the two
components (the higher temperature component having
1.4$^{+1.6}_{-0.7}$ times the emission measure of the cooler), and the
inferred luminosity ($L_X(0.5-10\,{\rm keV})=2-5\times10^{29}~\mbox{${\rm erg}~{\rm s}^{-1}$}$,
at the distance range estimated above) are all consistent with the
range of BY Dra systems discussed in \citet{Dempsey97}. We also
performed the same fits using unbinned spectra and the C-statistic in
XSPEC, finding results that are consistent within the error bars with
the results above. Thus, the combined evidence from photometry,
proper motion, and the X-ray spectrum strongly indicates that CX8 is a
pair of foreground, chromospherically active, M-dwarf binaries, likely
of solar or greater abundances.
\newpage
\subsection{Chance Coincidences}
In order to estimate the number of chance coincidences of cluster
stars with \emph{Chandra}\ source regions, we computed the number of both MS
stars and blue stars expected to fall in each search region. In both
cases we considered stars with $\mbox{$R_{625}$} > 16$. MS stars were defined by
$\mbox{$\B\!-\!\R$} \ge 0.6$, while blue stars were defined by $\mbox{$\B\!-\!\R$} < 0.6$. The
estimated number of chance coincidences per search area was computed
from the radial surface density for each group of stars times the area
of the search region. No proper-motion cleaning was applied, since in
counting the actual number of objects per search area we did not apply
proper-motion cleaning. The predicted number of MS stars per search
area was typically within about a factor of two of the observed number,
which is consistent given the small-number statistics. On the other
hand, the predicted number of blue stars within each search area was
minuscule, with a median value of about 0.02. The total number of
spurious matches of blue stars with all search areas was predicted to
be 0.9. This indicates that the MS stars that are observed within the
search areas are almost certainly chance superpositions, while the
blue stars are highly likely to be bona fide identifications.
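The chance-coincidence estimate above is a local surface density times the search area, with a Poisson probability of at least one spurious match per region. A minimal sketch, using an illustrative blue-star density chosen to reproduce the median expectation of $\sim$0.02 per region (the actual densities are radially dependent and are not tabulated here):

```python
import math

def expected_coincidences(density_per_sq_arcsec, search_radius_arcsec):
    """Expected chance superpositions in one circular search region."""
    area = math.pi * search_radius_arcsec ** 2
    return density_per_sq_arcsec * area

def prob_at_least_one(mu):
    """Poisson probability of >= 1 chance match given expectation mu."""
    return 1.0 - math.exp(-mu)

# illustrative: a density of 0.07 blue stars per sq. arcsec in a 0.3" region
mu = expected_coincidences(0.07, 0.3)
print(f"mu={mu:.3f}, P(>=1 match)={prob_at_least_one(mu):.3f}")
```

Summing such per-region expectations over all search areas gives the total of $\sim$0.9 spurious blue-star matches quoted above.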
\subsection{Comparison with \citet{Pooley02} and \citet{Thomson12}}
We confirm all of the identifications made by \citet{Pooley02} except
for that of source CX11\@. \citet{Thomson12} similarly were unable to
confirm this identification. In this case, the situation is
complicated by two overlapping diffraction rings around nearby bright
stars. While there is a suggestion of extra flux at the location
indicated by \citet{Pooley02}, it may simply represent the
superposition of the two rings. There are three stars that lie at the
edge of the search area, about 0\farcs3 from the source position. One
of these stars is a subgiant, one is a MSTO star, and the remaining
fainter star has an undetermined \mbox{$R_{625}$}\ magnitude from KS2\@. Based on
aperture photometry, this star appears likely to lie near the MS\@. As
we discuss in \citet{Forestell14}, source CX11 is within the
positional uncertainty of MSP~D, which shows no evidence of binarity
\citep{Corongiu06}. Since an isolated neutron star is not expected to
have a detectable optical counterpart, it is most likely that the
coincidence of the three stars with the source CX11 search area is a
chance superposition. In fact, about $1.7 \pm 1.3$ chance
superpositions are expected within the search area for CX11.
\citet{Pooley02} noted a coincidence between source CX17 and a radio
source detected with the Australia Telescope Compact Array. This is
consistent with our classification of the CX17 optical counterpart as
a galaxy, based on the position of CX17 in the center of a diffuse
optical object that clearly seems to be a galaxy (see
Fig.~\ref{f:finding_charts}), interpreting the radio and X-ray
emission as likely both associated with an active galactic nucleus.
We note that \citet{Thomson12} have suggested a different counterpart
to source CX8 than the pair of red stars that we discuss in
\S\ref{source_types} above. They stated that the $U$ and $B$
photometry of their proposed counterpart indicates that it is a
``faint gap source,'' i.e.\ that it lies between the MS and the
extended blue horizontal branch and thus is a possible CV\@. However,
from the full WFC3 photometry provided by \citet{Thomson12}, it can be
seen that their suggested counterpart to CX8 lies on the main sequence
in the ($U\!-\!B$, $B$) CMD (the only filters in which they detected
it). Our ACS photometry of this star also indicates that it lies on
the MS in both the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD and the (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD\@. Thus, based
on all the available data, we see no evidence for interpreting it as a
CV\@.
\begin{figure*}
\epsscale{0.9}
\plotone{f10.pdf}
\figcaption{X-ray to optical \mbox{$R_{625}$}-band flux ratio vs.\ X-ray flux
(0.5--8 keV) for CVs (blue triangles), ABs (red squares), and
galaxies (inverted magenta triangles). Less certain identifications
are plotted with open symbols. The upper axis gives the equivalent
X-ray luminosity assuming that all objects are at the distance of
the cluster. Note that the CVs mostly populate the upper part of
the diagram, while the ABs mostly populate the lower part.
\label{f:fx_fopt}}
\end{figure*}
We also note that \citet{Thomson12} have pointed out that the
counterpart proposed by \citet{Pooley02} for source CX16, which
shows the photometric characteristics of a BY Draconis star, is
outside of the \citet{Pooley02} error circle. They suggest another
closer star as a more likely counterpart. However, we find that the
counterpart proposed by \citet{Pooley02} is the closest star to the
refined source CX16 position from \citet{Forestell14}, falling within
the search area at $1.4~r_{\rm err} = 0\farcs19$ from the X-ray
source location. Thus, we confirm the \citet{Pooley02} identification
of this source.
\citet{Thomson12} have suggested a possible SX Phoenicis star as a
counterpart to source CX12\@. However, \citet{Forestell14} were able
to resolve CX12 (with the additional \emph{Chandra}\ data) into three
sources (CX20, CX23, and CX24)\@. Consequently, the counterpart
suggested by \citet{Thomson12} lies 0\farcs75 from the nearest
X-ray position (CX24), well outside our search radius. We suggest
counterparts to each of CX20, CX23, and CX24.
\subsection{X-ray to Optical Flux Ratios \label{flux_ratio}}
As in \citet{Cohn10}, we have examined the X-ray to optical flux
ratio, $f_X/f_{\rm opt}$, where we take $f_X (0.5\!-\!8\,{\rm keV})$
from \citet{Forestell14} and set $f_{\rm opt} = f_{R_{625}} =
1.07\times10^{-(0.4R_{625}+6)}$. The latter conversion factor is
computed from the \emph{HST}\ flux calibration constants and includes a
small correction for the total extinction of $A_R = 0.10~{\rm mag}$.
The resulting flux ratio is plotted versus $f_X$ in
Fig.~\ref{f:fx_fopt}. The ratio has been observed to be higher for
accreting sources, such as CVs and LMXBs, than for ABs
\citep[e.g.\ ][]{Edmonds03b,Bassa08}.
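The conversion above can be applied directly; the code below implements the stated $f_{R_{625}}$ formula. The $f_X$ value used in the example is hypothetical (chosen only to illustrate a ratio of order unity, comparable to that quoted for CX5), not a tabulated measurement:

```python
def f_opt(R625):
    """Optical flux (erg s^-1 cm^-2) from the R_625 magnitude, per the
    paper's conversion f_opt = 1.07e-(0.4*R625 + 6), which already
    includes the A_R = 0.10 mag extinction correction."""
    return 1.07 * 10 ** (-(0.4 * R625 + 6.0))

def flux_ratio(f_x, R625):
    """X-ray to optical flux ratio f_X / f_opt."""
    return f_x / f_opt(R625)

# hypothetical example: f_X = 1.5e-14 erg/s/cm^2 for a star at R_625 = 20.0
print(f"f_X/f_opt = {flux_ratio(1.5e-14, 20.0):.2f}")
```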
We have used this observation to support our identification of the
counterpart to the source CX5 as a possible CV, on the basis of its
high $L_X = 1.1\times10^{32}~\mbox{${\rm erg}~{\rm s}^{-1}$}$ and moderately high $f_X/f_{\rm
opt} = 1.4$. These values are inconsistent with the other objects
classified as ABs, with the exception of CX8\@. As noted in
\S\ref{source_types}, our proposed source CX8 counterpart stands out
for its large \mbox{H$\alpha$}\ excess, which presumably indicates a high level of
chromospheric activity. Its high value of $f_X/f_{\rm opt} = 0.5$ is
consistent with this. As discussed in \S\ref{source_types}, CX8 is
likely to be a foreground object, which would significantly reduce its
inferred $L_X$ value relative to the cluster members.
It can be seen in Fig.~\ref{f:fx_fopt} that the objects classified as
likely or less certain CVs mostly populate the upper part of the
diagram and the likely or less certain AB candidates mostly populate
the lower part. The median flux ratio is 60 times larger for the CVs
than for the ABs. The two galaxies for which flux ratios were
calculated fall in between the bulk of the CVs and the bulk of the
ABs. This suggests that they are normal galaxies, rather than active
galaxies, for which the flux ratio should be at least an order of
magnitude larger.
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f11.png}
\figcaption{\mbox{H$\alpha$}\ variability versus \mbox{H$\alpha$}\ magnitude for candidate CVs.
The ordinate is the 3$\sigma$-clipped RMS deviation of the up to
24 \mbox{H$\alpha$}\ measurements for each star. The locations of the CV
candidates are plotted as numbers over the distribution for all
stars. Note that most of the bright CVs show a variability signal,
i.e.\ they lie above the relation defined by the majority of the
stars.
\label{f:variability_CV}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f12.png}
\figcaption{\mbox{H$\alpha$}\ variability versus \mbox{H$\alpha$}\ magnitude for candidate ABs.
Note that the ABs, on average, show lower variability than the CVs
illustrated in Fig.~\ref{f:variability_CV}.
\label{f:variability_AB}}
\end{figure*}
\subsection{Variability} \label{variability}
Since our dataset provides a 24-exposure time sequence of
\mbox{H$\alpha$}\ exposures, with two exposures per orbit, it was possible to
investigate optical variability of the detected objects. The time
sequence samples time scales shorter than about one hour (the
visibility period per \emph{HST}\ orbit) and also time scales of days to
months. As a measure of the variability, we first adopted the
3$\sigma$-clipped RMS deviation, which we used in our study of
NGC~6397 \citep{Cohn10}.\footnote{Following a suggestion of the
referee, we now compute the fractional RMS deviation of the fluxes
and express this in magnitude units.} Since the use of sigma
clipping rejects outliers, this approach results in a measure of
variability that is most sensitive to the orbital variability of
binary systems, rather than large-amplitude fluctuations of CVs.
We plot $\sigma$(\mbox{H$\alpha$}) versus mean \mbox{H$\alpha$}\ magnitude in
Figs.~\ref{f:variability_CV} and \ref{f:variability_AB}. It can be
seen in these figures that most stars fall along a ``fundamental''
sequence of increasing $\sigma$(\mbox{H$\alpha$}) with increasing magnitude. We
interpret stars falling above this sequence as showing evidence of
optical variability, although some degree of photometric scatter is
also clearly present. We have plotted the locations of the
\emph{Chandra}\ source counterparts on this diagram. A number of the
counterparts show apparently significant variability. This group
includes many of the bright CV counterparts, viz.\ sources CX1, CX2,
CX4, CX7, CX29, and CX32 as seen in Fig.~\ref{f:variability_CV}. Three
of the remaining bright CV candidate counterparts, sources CX5, CX21,
and CX23, show marginal evidence of variability, falling just above
the fundamental sequence. Of the faint CV candidate counterparts,
sources CX6 and CX24 show marginal evidence of variability and sources
CX9, CX13, and CX36 fall on the high side of the fundamental
sequence. At the magnitude of these faint CVs, $\mbox{H$\alpha$} \approx 24$, the
large typical photometric uncertainty of about 0.3 mag makes it
difficult to detect moderate variability. The amplitude of the
variability measured by $\sigma$(\mbox{H$\alpha$}) for the bright CVs is about 0.1
-- 0.3 mag, which is typical for orbital variations. The variability
plot for the ABs is shown in Fig.~\ref{f:variability_AB}. It can be
seen from this figure that sources CX8, CX19, and CX27 show evidence
of variability. As discussed in \S\ref{source_types}, source CX8
appears to be a foreground system.
\begin{figure*}
\centering
\includegraphics*[clip,viewport=18 144 592 718,width=5.5in]{f13.png}
\figcaption{\mbox{H$\alpha$}\ range versus \mbox{H$\alpha$}\ magnitude for candidate CVs. The
ordinate is the full range of the up to 12 \mbox{H$\alpha$}\ measurements for
each star, as we took the faintest of each pair of \mbox{H$\alpha$}\ measurements
as representing the best estimate of the \mbox{H$\alpha$}\ magnitude for that
orbit. The CV candidates are plotted as numbers over the
distribution for all stars. Note the similarity between this figure
and Fig.~\ref{f:variability_CV}, other than the greater amplitude of
the variability here.
\label{f:variability_range}}
\end{figure*}
We also investigated the total range of the \mbox{H$\alpha$}\ magnitudes for each
object in order to search for outburst behavior in the CV
counterparts. In order to filter out cosmic-ray events that affect
some of the individual magnitude measurements, we chose the fainter of
the magnitude measurements from each of the pair of \mbox{H$\alpha$}\ frames per
orbit. We discarded orbits for which there was only one successful
\mbox{H$\alpha$}\ magnitude measurement. We plot the total range of the orbital
\mbox{H$\alpha$}\ magnitudes versus \mbox{H$\alpha$}\ magnitude for all the stars, with the CV
counterparts indicated, in Fig.~\ref{f:variability_range}. Five of
the bright CV counterparts have a variation range of $\sim 0.5 -
1.0$~mag. The source CX1 stands out with the largest range of \mbox{H$\alpha$},
with a value of $\sim 1.3$~mag. While some of this variation range
may be due to photometric uncertainty, examination of the individual
images indicates that it is largely due to actual flux variation.
Thus, source CX1 appears to have undergone a dwarf nova outburst
during the approximately 180\,d interval over which the data were
obtained. We note that \citet{Kaluzny09} observed a 1.5 mag amplitude
outburst from the CX4 counterpart, in a study of stellar variability
in NGC~6752 that included the locations of all of the \citet{Pooley02}
\emph{Chandra}\ sources. \citet{Thomson12} observed an outburst of 1.5 mag
amplitude from the CX1 counterpart and an outburst of 6 mag amplitude
from the CX7 counterpart.
\citet{Kaluzny09} detected periodic variability from a suggested
counterpart to CX19\@. They found a period of 0.11306\,d, with an
amplitude of a few hundredths of a mag. They note that if this
modulation is due to ellipsoidal variations, then the orbital period
would be twice this large, viz.\ 0.226\,d. While we were not able to
confirm the periodicity of this counterpart with a period-folding
analysis of the \mbox{H$\alpha$}\ time series, Fig.~\ref{f:variability_AB}
indicates that the object is variable, with an amplitude of about 0.04
mag. We note that \citet{Thomson12} did not find any evidence of the
\citet{Kaluzny09} periodicity in their WFC3 near-UV data.
\citet{Kaluzny09} suggest that the counterpart to CX19 is a close
binary hosting a neutron star or a black hole. However, its location
to the red of the MS in the (\mbox{$\B\!-\!\R$}, \mbox{$R_{625}$}) CMD and its low X-ray to optical
flux ratio of $f_X/f_{\rm opt} = 0.007$ suggest instead that it is
likely to be an AB\@. In this case, the longer orbital period of
0.226\,d is strongly preferred, since even the contact binaries in
47~Tuc in this range of stellar masses have periods of at least
0.20\,d \citep{Albrow01}.
\subsection{Spatial Distribution}
We determined the cluster center by iterative centroiding in a
12\arcsec\ radius aperture using a sample of stars with magnitudes in
the range $\mbox{$R_{625}$} < 20.5$, which extends to 4 mag below the MSTO\@. The
resulting center of $\alpha = 19^{\rm h}~10^{\rm m}~52\fs12,~
\delta = -59\arcdeg~59\arcmin~4\farcs4$ agrees well with the recent center
determinations by \citet{Goldsbury10} and \citet{Thomson12}, differing
from the former by 0\farcs11 and from the latter by 0\farcs15.
Experimentation with the centroiding aperture size and the stellar
sample definition indicates that the center position is uncertain by
about 0\farcs5.
We next determined both the cumulative radial distribution and a
binned radial profile for a MSTO group of stars with magnitudes in
the range $16 \le \mbox{$R_{625}$} < 18.5$, which extends to 2 mag below the
MSTO\@. These are shown in Fig.~\ref{f:radial_profile_1}. In order
to assess the overall behavior of the cluster profile, we fitted the
cumulative radial distribution of the MSTO group with a ``generalized
King model,'' which we have also called a ``cored power law.'' As
discussed by \citet{Cohn10}, this takes the form
\begin{equation}
\label{eqn:Cored_PL}
S(r) = S_0 \left[1 + \left({r \over r_0}\right)^2 \right]^{\alpha/2},
\end{equation}
with the core radius $r_c$ related to the scale parameter $r_0$ by,
\begin{equation}
r_c = \left(2^{-2/\alpha} -1 \right)^{1/2} r_0\,.
\end{equation}
Fig.~\ref{f:radial_profile_1} shows the resulting maximum-likelihood
fit where the data have been fitted to a limiting radius of $r_h =
115\arcsec$. As seen in Fig.~\ref{f:mosaic}, the region within $r_h$
is fully covered by the ACS/WFC imaging. While the model provides a
statistically acceptable fit to the profile over this data range, it
can be seen in the right panel of Fig.~\ref{f:radial_profile_1} that
the fit is systematically low in the central region of the cluster and
high in an intermediate radial range. Moreover, the best-fit slope of
$\alpha = -1.28$ also does not agree with that expected for an
``analytic King model,'' for which $\alpha = -2$. Thus, the radial
profile of NGC~6752 is not well fitted by a single-mass King model,
indicating that it does not have a normal-core structure like clusters
such as 47~Tuc \citep{Howell00}. This suggests that NGC~6752 is
in a post-collapse state of evolution as concluded by several previous
studies noted above \citep{Djorgovski86,Ferraro03a,Thomson12}. In
order to evaluate the parameters of the expected post-collapse
surface-density cusp, we fitted the cored-power-law model
(Eqn.~\ref{eqn:Cored_PL}) to the cumulative radial distribution using
an outer limiting radius of 25\farcs8, corresponding to a radial
scale of 0.5~pc, which is the limiting radius adopted by
\citet{Lugger95} for the cored-power-law fits they presented for 15
candidate core-collapsed clusters. The fits are shown in
Fig.~\ref{f:radial_profile_2}. The best-fit parameter values are $r_c
= 4.6'' \pm 1.5''$ and $\alpha = -0.82 \pm 0.07$. These indicate that
NGC~6752 has a well-resolved core, with a surrounding cusp slope that
agrees well with the mean core-collapse slope of $-0.84 \pm 0.10$
found by \citet{Lugger95}. We note that they similarly found that a
cored power law gave a good fit to the central $U$-band
surface-brightness profile of NGC~6752 with best-fit parameters of
$r_c = 6.7'' \pm 1.9''$ and $\alpha = -0.97 \pm 0.15$. These two sets
of best-fit parameter values are consistent with each other to within
1$\sigma$. In retrospect, we conclude that our ``conservative''
interpretation, in \citet{Lugger95}, that NGC~6752 is not required to
be in a post-collapse state, is too conservative and that the
surface-density profile of NGC~6752 provides good evidence that the
cluster has indeed experienced core collapse.
We next examined the radial profiles of a number of different stellar
groups by comparing the cumulative radial distributions shown in
Fig.~\ref{f:cumulative_radial_distributions}. The particular groups
considered are the MSTO group described above, all of the 39
\emph{Chandra}\ sources within $r_h$, the nine brightest CVs, the five
faintest CVs, the ABs, and a blue straggler (BS) group selected from
the CMD as illustrated in Fig.~\ref{f:CMD_BS}. As can be seen in
Fig.~\ref{f:cumulative_radial_distributions}, the \emph{Chandra}\ sources,
the bright CVs, and the BSs show strong central concentration relative
to the MSTO group. In order to quantify the significance of the
differences in the distributions, we performed Kolmogorov-Smirnov
(K-S) comparisons of each sample with the MSTO group. The results are
given in Table~\ref{t:Cored_PL_fits}, where the probability, $p$, of
the two samples being drawn from the same parent distribution is
listed. The distributions of the \emph{Chandra}\ sources, the bright CVs,
and the BSs differ very significantly ($p<1\%$) from that of the MSTO
group. The faint CV and AB distributions do not differ from the MSTO
group at a significant level ($p=31\%$ and $p=27\%$, respectively). A
direct comparison of the bright and faint CVs indicates that these two
groups differ at the 6\% level. While this falls short of the 5\% cutoff
for statistical significance, it is suggestive of a meaningful difference
between the two distributions.
\begin{figure*}
\plottwo{f14a.pdf}{f14b.pdf}
\figcaption{Radial surface-density profile for the MSTO group. Left
panel: cumulative radial distribution (solid line) with a
cored-power-law fit to 115\arcsec\ (dashed line). Right panel:
binned surface-density profile with the same cored-power-law fit. A
one-sample K-S test indicates that the data are consistent with the
fit with a probability of 14\%. Nevertheless, the curve falls
systematically below the data for $r<5\arcsec$ and above the data
for $5\arcsec < r < 20\arcsec$.
\label{f:radial_profile_1}}
\end{figure*}
\begin{figure*}
\plottwo{f15a.pdf}{f15b.pdf}
\figcaption{Radial surface-density profile for the MSTO group as in
Fig.~\ref{f:radial_profile_1}. Left panel: cumulative radial
distribution (solid line) with a cored-power-law fit to
25\farcs8 = 0.5 pc (dashed line). Right panel: binned
surface-density profile with the same cored-power-law fit. Note that
the fit to the inner profile is much better than that shown in
Fig.~\ref{f:radial_profile_1}.
\label{f:radial_profile_2}}
\end{figure*}
\begin{figure}
\epsscale{1.2}
\plotone{f16.pdf}
\figcaption{Cumulative radial distributions for selected stellar
groups. Note that the \emph{Chandra}\ sources, the bright CVs, and the
BSs show significant central concentration ($p \lesssim 1\%$)
relative to the MSTO group. Fitting information and K-S sample
comparisons for these stellar groups are given in
Table~\ref{t:Cored_PL_fits}.
\label{f:cumulative_radial_distributions}}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{f17.pdf}
\figcaption{Color-magnitude diagram showing the selection of blue
stragglers for the radial distribution analysis. A conservative
selection criterion was used to choose 30 BS candidates within the
half-mass radius.
\label{f:CMD_BS}}
\end{figure}
\begin{deluxetable*}{lrcccccr}
\tabletypesize{\normalsize}
\tablecolumns{8}
\tablewidth{0pt}
\tablecaption{\textbf{Cored-power-law Model Fits to 115\arcsec}
\label{t:Cored_PL_fits}}
\tablehead{
\colhead{Sample} &
\colhead{$N$\tablenotemark{a}} &
\colhead{$q$} &
\colhead{$r_c~(\arcsec)$} &
\colhead{$\alpha$} &
\colhead{$m~(\mbox{$M_\odot$})$} &
\colhead{$\sigma$\tablenotemark{b}} &
\colhead{K-S prob\tablenotemark{c}}
}
\startdata
MSTO &10016 & 1.0 & $12.2 \pm 1.2$ & $-1.27 \pm 0.03$ & $0.80 \pm 0.05$ & \nodata & \nodata \\
\emph{Chandra}\ sources
& 39 & $1.33 \pm 0.12$ & $ 8.5 \pm 0.8$ & $-2.07 \pm 0.28$ & $1.06 \pm 0.10$ & 2.4 & 0.080\% \\
bright CV & 9 & $2.03 \pm 0.35$ & $ 6.0 \pm 0.8$ & $-3.60 \pm 0.79$ & $1.62 \pm 0.28$ & 2.9 & 0.042\% \\
faint CV & 5 & $1.27 \pm 0.24$ & $ 9.0 \pm 2.0$ & $-1.88 \pm 0.54$ & $1.02 \pm 0.19$ & 1.1 & 31\% \\
AB & 9 & $1.31 \pm 0.24$ & $ 8.8 \pm 1.8$ & $-1.97 \pm 0.54$ & $1.05 \pm 0.19$ & 1.3 & 27\% \\
BS & 30 & $1.41 \pm 0.11$ & $ 8.1 \pm 0.7$ & $-2.21 \pm 0.26$ & $1.13 \pm 0.09$ & 3.2 & 0.54\% \\
\enddata
\tablenotetext{a}{Size of sample within 115\arcsec\ of cluster center}
\tablenotetext{b}{Significance of mass excess above MSTO mass}
\tablenotetext{c}{K-S probability of consistency with MSTO group}
\end{deluxetable*}
In order to further investigate the spatial distribution of the
various \emph{Chandra}\ sources and BSs in NGC~6752, we carried out
maximum-likelihood fits of the cored-power-law model to the
surface-density distributions of the groups shown in
Fig.~\ref{f:cumulative_radial_distributions}. As discussed by
\citet{Cohn10}, this procedure provides an estimate for the
characteristic mass of an object in each group relative to the MSTO
mass. As in \citet{Cohn10}, we adopt the approximation that the mass
groups above the MSTO mass are in thermal equilibrium. In this case,
the surface-density profile for a mass group with mass $m$ is given by
Eqn.~\ref{eqn:Cored_PL} with a slope parameter $\alpha$ related to the
turnoff-mass slope $\alpha_{\rm to}$ by
\begin{equation}
\alpha = q\, (\alpha_{\rm to} - 1) + 1
\end{equation}
where $q = m/m_{\rm to}$.
Table~\ref{t:Cored_PL_fits} gives the results of maximum-likelihood
fits of Eqn.~\ref{eqn:Cored_PL} to the turnoff-mass stars,
\emph{Chandra}\ sources, CVs, ABs, and BSs. As can be seen from the table,
the $q$ values for all of the groups exceed unity, indicating that the
characteristic masses exceed the turnoff mass. We assume a MSTO mass
of $0.80 \pm 0.05\,\mbox{$M_\odot$}$, based on the study of \citet{Gruyters14},
who found a MSTO mass of 0.79\,\mbox{$M_\odot$}. For the \emph{Chandra}\ sources, bright
CVs, and BSs, the excesses are significant at the $2.4\sigma-3.2\sigma$
level. We note that the results of this analysis are supported by the
K-S comparison results given in the last column of the table.
The inferred mass range for the bright CVs ($1.6 \pm 0.3\,\mbox{$M_\odot$}$) is
similar to what we found for NGC~6397\@. The median \mbox{$R_{625}$}-band absolute
magnitude for these systems is about $M_R \approx 7$. Assuming that
the \mbox{$R_{625}$}-band flux is dominated by the secondary, this implies a
secondary mass of about 0.6\,\mbox{$M_\odot$}, based on the isochrones of
\citet{Baraffe97}. The corresponding WD mass is $M_{\rm WD} \sim
1.0\,\mbox{$M_\odot$}$, which is consistent with the value of $0.83 \pm
0.23\,\mbox{$M_\odot$}$ found by \citet{Zorotovic11}. The inferred mass range
for the faint CVs ($1.0\pm 0.2\,\mbox{$M_\odot$}$) is consistent with a somewhat
lower mass white dwarf (e.g.\ $M_{\rm WD} \sim 0.8\,\mbox{$M_\odot$}$) and a
secondary mass that has been whittled down to $\lesssim 0.2\,\mbox{$M_\odot$}$.
In such a system, the optical flux would be dominated by the white
dwarf, as observed here.
\newpage
\section{Discussion \label{discussion}}
As in NGC~6397, the bright CVs have a more centrally concentrated
distribution than the faint CVs. As we discussed for that cluster,
this is consistent with the bright CVs representing a recently formed
population that is produced by dynamical interactions near the cluster
center. These interactions may likely include exchange interactions
in which a heavy white dwarf displaces a member of a primordial
binary. As these CVs age, the secondary loses mass and the accretion
rate drops, leading to a reduction in both the optical and X-ray
luminosity \citep{Howell01}. At the same time, two-body interactions
with singles and other binaries scatter the CVs into larger orbits
that put them at increasing distance from the cluster center.
We note that \citet{Hong17} have recently reported Monte-Carlo
simulations of globular cluster dynamical evolution that include CV
formation from primordial binaries and subsequent evolution. They find
that the CVs are more centrally concentrated than the MSTO-mass stars,
with the effect being stronger for the CVs that form from primordial
binaries that have undergone an exchange encounter. This exchange
group includes a population of CVs that are more massive than those
formed from primordial binaries that have not undergone an exchange
encounter. The resulting greater central concentration of the more
massive CVs is qualitatively consistent with our inference that the
brighter CVs in NGC 6752 are more massive than the fainter
ones. \citet{Hong17} note that the radial distribution of the CVs
reflects, ``the remaining memory of the CV formation history and
progenitor masses,'' as well as the present CV mass. Thus, our estimate
of CV masses based on the assumption that they have achieved their
equilibrium radial distribution should be viewed as a first
approximation to a complex process.
\begin{figure*}
\epsscale{1.1}
\plotone{f18.pdf}
\figcaption{Number of X-ray sources versus the stellar encounter rate
$\Gamma$ based on data from \citet{Bahramian13}. The red line shows
a linear regression in log-log space, corresponding to the relation
$N_X \propto \Gamma^{0.58\,\pm\,0.10}$.
\label{f:Gamma_Nx}}
\end{figure*}
\newpage
\subsection{Stellar Encounter Rate}
\citet{Bahramian13} have reexamined the relationship between the
number of X-ray sources in a cluster and the stellar encounter rate
that was originally found by \citet{Pooley03}. Fig.~\ref{f:Gamma_Nx}
shows the \citet{Bahramian13} dataset. The errors in the encounter
rate $\Gamma$ are adopted from their study, while the errors in the
background-corrected source counts include the Poisson errors in both
the total source counts and the background counts. The line in
Fig.~\ref{f:Gamma_Nx} is a linear regression that corresponds to the
relation $N_X \propto \Gamma^{0.58\,\pm\,0.10}$. The slope value is
consistent with the value of $0.74\pm0.36$ obtained by
\citet{Pooley03}. It can be seen from Fig.~\ref{f:Gamma_Nx} that the
core-collapsed clusters NGC~6752 and NGC~7099 both fall below the mean
relation defined by the entire cluster sample. NGC~6752 is
3.0$\sigma$ below the relation, while NGC~7099 is 2.0$\sigma$
below. In contrast, \citet{Pooley03} found that the core-collapsed
cluster NGC~6397 lies above the relation. This deviation of NGC~6397
from the relation is not statistically significant in our
analysis. Our linear regression analysis quantifies the suggestion of
\citet{Bahramian13} that core-collapsed clusters underproduce X-ray
binaries relative to their computed encounter rates. This may indicate
that binary destruction/ejection is a more vigorous process in
core-collapsed clusters than in those that are in a pre-collapse
state.
\citet{Ivanova06} found, from their Monte-Carlo simulations of CV
formation and evolution in globular cluster environments, that
clusters that underwent core collapse in the past 1 -- 2 Gyr will have
a depleted number of CVs, since CVs that have been destroyed during
core collapse will not yet have been replaced by newly formed
CVs. However, the recent Monte-Carlo simulations of CV evolution in
dynamically evolving clusters by \citet{Belloni17} do not show such a
clear effect.
\section{Summary \label{summary}}
We have searched for optical counterparts to the 39 \emph{Chandra}\ sources
that lie within the half-mass radius of NGC~6752, using \emph{HST}\ ACS/WFC
imaging in \mbox{$B_{435}$}, \mbox{$R_{625}$}, and \mbox{H$\alpha$}. Based primarily on CMD classification, we
found plausible counterparts to 31 of the sources. These include 16
likely or less certain CVs, nine likely or less certain ABs, three
galaxies, and three more likely than not AGNs\@. Our CV/AB
discrimination, which is based on CMD location, is generally supported by
the X-ray to optical flux ratios for these identifications. The CVs
have, in most cases, significantly higher values of $f_X/f_{\rm opt}$
than do the ABs, as expected.
In comparison to our results for NGC~6397, where all of the CV
candidates exhibited a strong \mbox{H$\alpha$}\ excess in the (\mbox{$\ha\!-\!\R$}, \mbox{$R_{625}$}) CMD
\citep{Cohn10}, many of the CV candidates reported here do not. We
examined the color-color diagram to further investigate this issue and
found that all but three of our CV candidates do show an \mbox{H$\alpha$}\ excess
relative to other stars of the same \mbox{$\B\!-\!\R$}\ color. However, it is not
clear why the \mbox{H$\alpha$}\ excesses of the CV candidates in NGC~6752 are
generally weaker than those of the CV candidates in NGC~6397.
As expected, most of the bright CV candidates registered significant
time variability. The amplitude of the variability, as measured by the
3$\sigma$-clipped RMS of the \mbox{H$\alpha$}\ time series, is typically $\sim
0.1-0.3$~mag, which is characteristic of orbital variations. The
counterpart to source CX1 showed a $\sim 1.3$~mag total range of
variability, which is consistent with a dwarf nova eruption. The ABs
showed substantially less evidence of variability, with only the
counterparts to CX8, CX19, and CX27 falling significantly above the
$\sigma(\mbox{H$\alpha$})$-\mbox{H$\alpha$}\ relation for all stars. Of these three, the
counterpart to CX8 appears to be a foreground object that is not
associated with the cluster.
Our determination of the cluster center agreed well with previous
recent determinations. We found that while the radial profile of a
MSTO star group can be acceptably fitted with a cored power law out to
the half-mass radius, this model falls systematically below the
profile in the core and above it at intermediate radii. In addition,
the slope for this cored-power-law fit does not agree with that of an
analytic King model. Thus, we conclude that NGC~6752 does not show a
normal King model profile, in agreement with the previous findings of
\citet{Djorgovski86}, \citet{Ferraro03a}, and \citet{Thomson12}. The
profile is better fit, out to a projected radius of 0.5~pc, by a cored
power law with a slope that is consistent with the typical value for a
post-core-collapse profile. This supports the conclusion that
NGC~6752 is in a post-collapse state.
We compared the radial distributions of several different stellar
groups to that of a MSTO sample. We found that the radial distributions of
all of the \emph{Chandra}\ sources, the bright CVs, and the BSs show
a strongly significant central concentration relative to that of the
MSTO group. We performed fits of a cored-power-law model to the
individual groups, in order to estimate the characteristic individual
stellar mass for each group. We found that the \emph{Chandra}\ sources, the
bright CVs, and the BSs all have a characteristic mass that
significantly exceeds the MSTO mass. In the case of the bright CVs,
the characteristic mass of $1.6 \pm 0.3\,\mbox{$M_\odot$}$ is similar to what
we found for NGC~6397 \citep{Cohn10}.
We found that the bright CVs are more centrally concentrated than the
faint CVs, consistent with a picture in which bright CVs represent a
population that has been recently formed by dynamical interactions
near the cluster center. The faint CVs then would represent an evolved
population that has been scattered out of the cluster core over
time. We find that, like the core-collapsed cluster NGC 7099, NGC 6752
is deficient in X-ray sources relative to the mean relation between
X-ray source population size and encounter rate. This supports the
suggestion that core-collapsed clusters underproduce X-ray binaries,
implying that binary destruction/ejection is more vigorous in
core-collapsed clusters.
\acknowledgements{We thank N.~Ivanova for providing unpublished
details on her simulations of CVs in globular clusters. This work is
supported by NASA grant HST-GO-12254.02-A to Indiana
University. Phyllis Lugger and Haldan Cohn acknowledge the
hospitality of the Department of Astronomy and Astrophysics at the
University of California Santa Cruz, where part of this work was
carried out. Craig Heinke is supported by a NSERC Discovery Grant
and a NSERC Discovery Accelerator Supplement Award.}
\onecolumngrid
\vspace*{12pt}
\software{IRAF, PyRaF, SExtractor, wavdetect, pwdetect, XSPEC}
\twocolumngrid
\section{Introduction}
\label{sec:introduction}The Balancing Domain Decomposition by
Constraints (BDDC), proposed independently by Cros~\cite{Cros-2003-PSC},
Dohrmann~\cite{Dohrmann-2003-PSC}, and Fragakis and
Papadrakakis~\cite{Fragakis-2003-MHP}, is one of the most popular methods of
iterative substructuring. The method was developed as a preconditioner for the
solution of systems of linear equations obtained by finite element
discretizations of elliptic problems, and it has been originally derived as a
primal counterpart of the Finite Element Tearing and Interconnecting - Dual,
Primal (FETI-DP) method by Farhat et
al.~\cite{Farhat-2001-FDP,Farhat-2000-SDP}. Over the years the BDDC has been
extended to other types of problems, for example to the nearly incompressible
elasticity by Dohrmann~\cite{Dohrmann-2004-SSP}, the Stokes problem by Li and
Widlund~\cite{Li-2006-BAI}, or advection-diffusion problems by Tu and
Li~\cite{Li-2009-CAB,Tu-2008-BDD}. It is also relatively straightforward to
extend the BDDC\ into multiple levels, as noted by
Dohrmann~\cite{Dohrmann-2003-PSC}. The three-level methods were developed in
two and three dimensions by Tu~\cite{Tu-2007-TBT3D,Tu-2007-TBT}, and Mandel et
al.~\cite{Mandel-2008-MMB} extended the algorithm into a multilevel method
within a more general multispace BDDC\ setting. Another class of problems,
important in the context of this paper, is the flow in porous media based on
\emph{mixed} and \emph{mixed-hybrid} finite element discretizations. Probably
the first domain decomposition methods of this class were proposed by
Glowinski and Wheeler~\cite{Glowinski-1988-DDM}. Their Method~II was
preconditioned using BDD by Cowsar et al.~\cite{Cowsar-1995-BDD}, using BDDC
by Tu~\cite{Tu-2007-BAF}, and \v{S}\'{\i}stek et al.~\cite{Sistek-2015-BMF}
extended this methodology to flow in porous media with combined mesh
dimensions. This approach is regarded as \emph{hybrid} because the method
iterates on a system of \emph{dual} variables (as Lagrange multipliers)
enforcing the continuity of flux variables across the substructure interfaces.
An alternative strategy, retaining the original \emph{primal} variables was
proposed by Tu~\cite{Tu-2005-BAM,Tu-2011-TBA}, who combined the BDDC
preconditioner with an earlier algorithmic framework developed by Ewing and
Wang~\cite{Ewing-1992-ASA}, cf. also Mathew~\cite{Mathew-1993-SAIa}, which
makes it possible to solve the saddle-point problem obtained from \emph{mixed} finite
element discretization by conjugate gradients. The Nested BDDC by
Soused\'{\i}k~\cite{Sousedik-2013-NBS} provided a multilevel extension by
applying the framework from~\cite{Tu-2005-BAM} recursively. Most recently,
Zampini and Tu~\cite{Zampini-2017-MBD} presented another approach to
multilevel BDDC\ including adaptive coarse space construction, which relies on
a special, so-called, deluxe scaling.
There are two main ingredients of a BDDC\ method: a coarse space, which is
defined by \emph{constraints} on the values of degrees of freedom, and a
scaling (averaging) operator, which provides a mapping between the solution
space and the space in which the solves in the preconditioner are performed.
The algorithm for adaptive selection of constraints for both the BDDC
and FETI-DP methods was originally proposed by Mandel and
Soused\'{\i}k~\cite{Mandel-2007-ASF}.\ The algorithm was later generalized in
a joint work with \v{S}\'{\i}stek~\cite{Mandel-2012-ABT} into three spatial
dimensions and implemented for the BDDC using an approach inspired by a
partial subassembly and a change of variables by Li and
Widlund~\cite{Li-2006-FBB}. Finally, we also reformulated the algorithm to
treat the coarse space explicitly~\cite{Sousedik-2013-AMB}. We note that there
are many other approaches to the adaptive construction of the coarse spaces in
BDDC, see~\cite{Pechstein-2017-UFA} and the references therein, as well as for
BDD see, e.g.,~\cite{Spillane-2013-ASC}. There have been several scalings
studied in the literature. In the \emph{multiplicity} scaling, the weights are
chosen proportionally to the number of subdomains sharing a given degree of
freedom, and it is regarded as not robust for coefficient jumps. The $\rho
$-\emph{scaling} leads to robustness, but it relies on knowledge of the
problem coefficients~\cite{Klawonn-2008-AFA}. The \emph{stiffness} scaling is based on the diagonal of
the stiffness matrix, but in some cases with irregular meshes it may lead to
high condition numbers~\cite{Pechstein-2013-FBE,Pechstein-2011-AFM}. All these
scalings involve diagonal matrices. Finally, the \emph{deluxe} scaling
introduced in~\cite{Dohrmann-2013-SRT} uses dense matrices, which are computed
from inverses of localized Schur complements. It has been observed to be quite
robust~\cite{Oh-2018-BAD,Zampini-2017-MBD} but also computationally intensive.
In this paper, we build on the primal strategy. The starting point is the
two-level algorithm from~\cite{Sousedik-2013-NBS}, which we combine with
adaptive selection of constraints following~\cite{Sousedik-2013-AMB} and apply
it to flow in heterogeneous porous media. To this end, we use a reservoir from
the 10th SPE Comparative Solution Project (SPE~10), cf.,
e.g.,~\cite{Aarnes-2007-INF,Christie-2001-SPE10} as the benchmark problem. The
BDDC method from~\cite{Sousedik-2013-NBS} solves for both flux and pressure
variables. The fluxes are resolved in three steps: the coarse solve is
followed by mutually independent subdomain solves, and last we look for a
divergence-free flux correction and pressure using conjugate gradients (CG)
with the BDDC preconditioner. The coarse solve in the first step is exactly
the same as the coarse solve used in the BDDC preconditioner in the step
three. It is assumed that the initial constraints preserve the iterates in a
\emph{balanced} subspace, in which the preconditioned operator is positive
definite. Our goal here is to adapt the method to flow in realistic
reservoirs, characterized by highly heterogeneous permeability coefficients, in
as simple a way as possible. In particular, we translate the ideas used for
elliptic problems {in}~\cite{Sousedik-2013-AMB} to mixed formulations of flow
in porous media discretized by the lowest-order Raviart-Thomas finite elements
(RT0). The main component of the extension is the use of additional
adaptive flux coarse basis functions. The starting point is the condition
number bound formulated as a generalized eigenvalue problem, which is replaced
by a number of local eigenvalue problems formulated for pairs of adjacent
subdomains, and the eigenvectors corresponding to the eigenvalues larger than
a target condition number are used to construct the additional flux coarse
basis functions. We note that from this perspective our method can be viewed
as a way of numerical upscaling via the coarse basis functions known from the
BDDC. Unlike~\cite{Zampini-2017-MBD} we do not use a change of basis and
partial assembly of operators, and we also illustrate that for this problem
the multiplicity scaling in combination with the adaptive algorithm and a
simple diagonal rescaling of the pressure block in the setup of the problem is
sufficient to construct a robust algorithm. Numerical experiments in both 2D
and 3D demonstrate that the first two steps of the method exhibit some
numerical upscaling properties, and the convergence rate of conjugate
gradients in the last step can be estimated a priori in the setup of the
adaptive algorithm.
The paper is organized as follows. In Section~\ref{sec:model} we introduce the
model problem, in Section~\ref{sec:two-level}\ we recall the BDDC\ method and
the preconditioner, in Section~\ref{sec:adaptive}\ we formulate the algorithm
for adaptive selection of the flux constraints, in
Section~\ref{sec:implementation}\ we discuss some details of implementation,
in Section~\ref{sec:numerical}\ we present results of numerical experiments,
and finally in Section~\ref{sec:conclusion} we summarize and conclude our work.
For convenience, we identify finite element functions with the vectors of
their coefficients in the corresponding finite element basis. These
coefficients are also called \emph{variables} or \emph{degrees of freedom}. At
a few places we will also identify linear operators with their matrices, in
bases that will be clear from the context. For a symmetric positive definite
bilinear form $a$, we will denote the energy norm by $\left\Vert u\right\Vert
_{a}=\sqrt{a\left( u,u\right) }$.
\section{Model problem}
\label{sec:model}Let $\Omega$ be a bounded domain in $%
\mathbb{R}
^{n}$, where $n=2$ or $3$. We~would like to find the solution of the following
mixed problem, which combines Darcy's law
relating flux~$\mathbf{u}$ and pressure~$p$, and the
equation of continuity,
\begin{align}
k^{-1}\mathbf{u}+\nabla p & =0\quad\text{in }\Omega,\label{eq:problem-1}\\
\nabla\cdot\mathbf{u} & =f_{\Omega}\quad\text{in }\Omega,
\label{eq:problem-2}\\
p & =p_{N}\quad\text{on }\Gamma_{N},\label{eq:problem-3}\\
\mathbf{u}\cdot\mathbf{n} & =g_{E}\quad\text{on }\Gamma_{E},
\label{eq:problem-4}%
\end{align}
where $\partial\Omega=\overline{\Gamma}_{E}\cup\overline{\Gamma}_{N}$, and
$\mathbf{n}$\ denotes the unit outward normal of $\Omega$. The coefficient
$k=k_{p}/\mu$, where $k_{p}$ is the permeability of the porous medium
and\ $\mu$\ is the viscosity of the fluid. For simplicity, we will set $\mu=1$
and so$~k=k_{p}$. Without loss of generality we will also assume that
$\Gamma_{N}=\emptyset$, which requires a compatibility condition%
\begin{equation}
- \int_{\Omega}f_{\Omega}\,dx+\int_{\partial\Omega}g_{E}\,ds=0,
\label{eq:compatibility}%
\end{equation}
and the pressure $p$ will be uniquely determined up to an additive constant.
We will further assume that $g_{E}=0$. These assumptions motivate the
definition of a space%
\[
\mathbf{H}_{0}(\Omega;\operatorname{div})=\left\{ \mathbf{v}:\mathbf{v}\in
L^{2}(\Omega);\nabla\cdot\mathbf{v}\in L^{2}(\Omega)\quad\text{and}%
\quad\mathbf{v}\cdot\mathbf{n}=0\;\text{on }\partial\Omega\right\} ,
\]
equipped with the norm
\[
\left\Vert \mathbf{v}\right\Vert _{\mathbf{H}_{0}(\Omega;\operatorname{div}%
)}^{2}=\left\Vert \mathbf{v}\right\Vert _{L^{2}(\Omega)}^{2}+H_{\Omega}%
^{2}\left\Vert \nabla\cdot\mathbf{v}\right\Vert _{L^{2}(\Omega)}^{2},
\]
where $H_{\Omega}$ is the characteristic size of $\Omega$, and the definition
of a space%
\[
L_{0}^{2}\left( \Omega\right) =\left\{ q:q\in L^{2}\left( \Omega\right)
\quad\text{and}\quad\int_{\Omega}q\,dx=0\right\} .
\]
The weak form of the problem we wish to solve is
\begin{align}
\int_{\Omega}k^{-1}\mathbf{u}\cdot\mathbf{v}\,dx-\int_{\Omega}p\left(
\nabla\cdot\mathbf{v}\right) \,dx & =0,\quad\forall\mathbf{v}\in
\mathbf{H}_{0}(\Omega;\operatorname{div}),\label{eq:problem-weak-1}\\
-\int_{\Omega}\left( \nabla\cdot\mathbf{u}\right) q\,dx & =-\int_{\Omega
}f_{\Omega}q\,dx,\quad\forall q\in L_{0}^{2}\left( \Omega\right) .
\label{eq:problem-weak-2}%
\end{align}
We refer, e.g., to the monographs~\cite{Brezzi-1991-MHF,Toselli-2005-DDM} for
additional details and discussion.
Next, let $U$ be the lowest-order Raviart-Thomas (RT0)
finite element space with a zero normal component on $\partial\Omega$ and let
$Q$ be a space of piecewise constant finite element basis functions with a
zero mean on~$\Omega$.
These two spaces, defined on the triangulation$~\mathcal{T}_{h}$ of$~\Omega$,
where $h$\ denotes the mesh size, are finite dimensional subspaces of
$\mathbf{H}_{0}(\Omega;\operatorname{div})$ and $L_{0}^{2}(\Omega)$,
respectively, and they satisfy a uniform inf-sup condition,
see~\cite{Brezzi-1991-MHF}. Let us define the bilinear forms and the
right-hand side by%
\begin{align}
a\left( u,v\right) & =\int_{\Omega}k^{-1}\mathbf{u}\cdot\mathbf{v}%
\,dx,\label{eq:a}\\
b\left( u,q\right) & =-\int_{\Omega}\left( \nabla\cdot\mathbf{u}\right)
q\,dx,\label{eq:b}\\
\left\langle f,q\right\rangle & =-\int_{\Omega}f_{\Omega}q\,dx.
\label{eq:rhs}%
\end{align}
In the mixed finite element approximation of problem (\ref{eq:problem-weak-1}%
)--(\ref{eq:problem-weak-2}), we would like to find a pair of fluxes and
pressures $\left( u,p\right) \in\left( U,Q\right) $ such that
\begin{align}
a\left( u,v\right) +b\left( v,p\right) & =0,\qquad\forall v\in
U,\label{eq:variational-1}\\
b\left( u,q\right) & =\left\langle f,q\right\rangle ,\qquad\forall q\in Q.
\label{eq:variational-2}%
\end{align}
We note that $Q$ is a finite-dimensional subspace of $L_{0}^{2}\left(
\Omega\right) $ and therefore the unique solvability of the mixed problem
(\ref{eq:variational-1})--(\ref{eq:variational-2}) is guaranteed.
In the next section, we will describe the components of the two-level Nested
{BDDC}, which allows an efficient iterative solution of
problem~(\ref{eq:variational-1})--(\ref{eq:variational-2}).
\section{The BDDC method}
\label{sec:two-level}Let us consider a decomposition of$~\Omega$ into a set of
nonoverlapping subdomains $\Omega^{i}$, $i=1,\ldots,N,$ also called
substructures, forming a quasi-uniform triangulation of $\Omega$ and denote
the characteristic subdomain size by~$H$.
Each substructure is a union of finite elements with a matching discretization across the substructure
interfaces. Let $\Gamma^{i}=\partial\Omega^{i}\backslash\partial\Omega$\ be
the set of boundary degrees of freedom of a~substructure$~\Omega^{i}$\ shared
with another substructure$~\Omega^{j}$, $j\neq i$, and define the interface by
$\Gamma=\cup_{i=1}^{N}\Gamma^{i}$. Let us define a \emph{face} as an
intersection $\Gamma^{ij}=\Gamma^{i}\cap\Gamma^{j}$, $i\neq j$ and let us
denote by $\mathcal{F}$ the set of all faces between substructures. Note that
with respect to the~RT0 discretization we define only
\emph{faces}, but no \emph{corners} (nor \emph{edges} in 3D) known from other
types of substructuring.
We will solve problems similar to (\ref{eq:variational-1}%
)--(\ref{eq:variational-2}) on each substructure. As we have noted, such
problems determine the pressure uniquely up to a constant,
so we consider the decomposition of the pressure space
\begin{equation}
Q=Q_{0}\oplus Q_{I},\qquad Q_{I}=Q^{1}\times\cdots\times Q^{N},
\label{eq:Q_decomposition}%
\end{equation}
where $Q_{0}$ consists of functions that are constant in each subdomain and
have a zero average over the whole domain$~\Omega$, and the product
space~$Q_{I}$ consists of functions that have a zero weighted average over one
subdomain at a time. That is,
\begin{equation}
\int_{\Omega}q_{0}\,dx=0,\quad\forall q_{0}\in Q_{0}\text{\qquad and\qquad
}\int_{\Omega^{i}}q^{i}\,dx=0,\quad\forall q^{i}\in Q^{i},\;i=1,\dots,N.
\label{eq:Q_decomposition-int}%
\end{equation}
Next, let $W^{i}$ be the space of flux finite element functions on a
substructure $\Omega^{i}$ such that all of their degrees of freedom on
$\partial\Omega^{i}\cap\partial\Omega$ are zero, and let%
\[
W=W^{1}\times\dots\times W^{N}.
\]
Hence $U\subset W$ can be viewed as the subspace of
flux functions from~$W$ such that $u \cdot \mathbf{n}$ is continuous across substructure interfaces.
Define $U_{I}\subset U$ as the subspace of
flux functions such that $u \cdot \mathbf{n}$ is zero on the interface~$\Gamma$,
i.e., the space of \textquotedblleft interior\textquotedblright\ flux
functions, and let us also define a mapping
$P:w\in W\longmapsto u_{I}\in U_{I}$ such that%
\begin{align*}
a\left( u_{I},v_{I}\right) +b\left( v_{I},p_{I}\right) & =a\left(
w,v_{I}\right) ,\quad\forall v_{I}\in U_{I},\\
b\left( u_{I},q_{I}\right) & =b\left( w,q_{I}\right) ,\quad\forall
q_{I}\in Q_{I}.
\end{align*}
Functions from $\left( I-P\right) W$
will be called Stokes harmonic, cf.~\cite[Section~9.4.2]{Toselli-2005-DDM}.
Let $\widehat{W}$ be the space of Stokes harmonic functions that are
continuous across substructure interfaces, and such that
\begin{equation}
U=\widehat{W}\oplus U_{I},\qquad\widehat{W}\perp_{a}U_{I}.
\label{eq:discrete-harm}%
\end{equation}
We note that from the divergence theorem, for all $u_{I}\in U_{I}$ and
$q_{0}\in Q_{0}$, we obtain%
\[
b\left( u_{I},q_{0}\right) =-\int_{\Omega}\left( \nabla\cdot u_{I}\right)
q_{0}\,dx=0.
\]
The BDDC is a two-level method characterized by a selection of certain
\emph{coarse degrees of freedom}. In the present setting these will be flux
averages over faces shared by a pair of substructures at a time and pressure
averages over each substructure. Let us denote by $\widetilde{W}\subset\left(
I-P\right) W$ the subspace of Stokes harmonic functions such that their flux
coarse degrees of freedom on adjacent substructures coincide; for this reason
we will use the terms coarse degrees of freedom and \emph{constraints}
interchangeably. Specifically, we define a zero-net flux constraint for a
face$~\Gamma^{ij}$ as%
\begin{equation}
\int_{\Gamma^{ij}}\left( w^{i}-w^{j}\right) \cdot\mathbf{n}^{i}%
\,ds=0,\qquad w^{i}\in W^{i},\;w^{j}\in W^{j},
\label{eq:starting_flux_constraint}%
\end{equation}
where $\mathbf{n}^{i}$ denotes the unit outward normal of $\Omega^{i}$.
\begin{assumption}
\label{ass:enough-constraints}Initial flux constraints
(\ref{eq:starting_flux_constraint}) are prescribed over all faces.
\end{assumption}
This set of initial constraints will be enriched by the adaptive method
described in Section~\ref{sec:adaptive}. Now, let us define $\widetilde{W}%
_{\Pi}\subset\widetilde{W}$ as the subspace of functions with values given by
the flux coarse degrees of freedom between adjacent substructures, and such
that they are Stokes harmonic, and let us also define $\widetilde{W}_{\Delta
}\subset\widetilde{W}$ as the subspace of all functions such that
their flux coarse degrees of freedom vanish. The functions in$~\widetilde{W}%
_{\Pi}$ are uniquely determined by the values of their coarse degrees of
freedom, and%
\begin{equation}
\widetilde{W}=\widetilde{W}_{\Delta}\oplus\widetilde{W}_{\Pi}.
\label{eq:tilde-dec}%
\end{equation}
The next ingredient is the projection $E:\widetilde{W}\rightarrow\widehat{W}$
defined by taking a weighted average of corresponding degrees of freedom on
substructure interfaces, cf. Remark~\ref{rem:scaling}.
In implementation, we define$~\widetilde{W}$ using a matrix$~C_{U}$, which is
block diagonal with blocks$~C_{U}^{i}$, $i=1,\dots,N$, and it is constructed
exactly as the matrix~$C$ in~\cite[Section~2.3]{Mandel-2007-ASF},
\begin{equation}
\widetilde{W}=\left\{ w\in\left( I-P\right) W:C_{U}\left( I-E\right)
w=0\right\} . \label{eq:tilde-def}%
\end{equation}
The values $C_{U}v$ will be called local flux coarse degrees of freedom, and
the space$~\widetilde{W}$ consists of all functions such that their flux
coarse degrees of freedom on adjacent substructures have zero jumps. The
decomposition of the space $Q_{I}$ given by~(\ref{eq:Q_decomposition}) can
also be managed by constraints. We remark that this is a somewhat non-standard
practice in substructuring, because the constraints are commonly related only
to the degrees of freedom at the interfaces. So, we define a space$~Q^{i}$,
for $i=1,\dots,N$, as
\begin{equation}
Q^{i}=\left\{ q|_{\Omega^{i}}:q\in Q,\;C_{Q}^{i}q=0\right\} ,
\label{eq:Qi-def}%
\end{equation}
where the matrices~$C_{Q}^{i}$ are selected so
that~(\ref{eq:Q_decomposition-int}) is satisfied. In implementation,~$C_{Q}%
^{i}$ is a row vector with entries given by volumes of finite elements in
subdomain~$i$. Now we have all ingredients to recall the two-level BDDC
method~\cite[Algorithm~2]{Sousedik-2013-NBS}.
\begin{algorithm}
[BDDC method]\label{alg:two-level-nested}Find the solution $\left(
u,p\right) \in\left( U,Q\right) $\ of problem (\ref{eq:variational-1}%
)--(\ref{eq:variational-2}) by computing:
\begin{enumerate}
\item the coarse component $u_{0}\in\widehat{W}$: solving $\left(
\widetilde{w}_{0},p_{0}\right) \in\left( \widetilde{W}_{\Pi},Q_{0}\right) $
from
\begin{align}
a\left( \widetilde{w}_{0},\widetilde{v}_{\Pi}\right) +b\left(
\widetilde{v}_{\Pi},p_{0}\right) & =0,\qquad\forall\widetilde{v}_{\Pi}%
\in\widetilde{W}_{\Pi},\label{eq:two-level-nested_coarse-1}\\
b\left( \widetilde{w}_{0},q_{0}\right) & =\left\langle f,q_{0}%
\right\rangle ,\qquad\forall q_{0}\in Q_{0},
\label{eq:two-level-nested_coarse-2}%
\end{align}
dropping $p_{0}$, and applying the projection
\[
u_{0}=E\widetilde{w}_{0}.
\]
\item the substructure components $\left( u_{I},p_{I}\right) \in\left(
U_{I},Q_{I}\right) $ from
\begin{align*}
a\left( u_{I},v_{I}\right) +b\left( v_{I},p_{I}\right) & =-a\left(
u_{0},v_{I}\right) ,\qquad\forall v_{I}\in U_{I},\\
b\left( u_{I},q_{I}\right) & =\left\langle f,q_{I}\right\rangle -b\left(
u_{0},q_{I}\right) ,\qquad\forall q_{I}\in Q_{I},
\end{align*}
dropping $p_{I}$, and adding the solutions as
\begin{equation}
u^{\ast}=u_{0}+u_{I}. \label{eq:two-level_u^star}%
\end{equation}
\item the correction and the pressure $\left( u_{\text{corr}},p\right)
\in\left( U,Q\right) $ from%
\begin{align}
a\left( u_{\text{corr}},v\right) +b\left( v,p\right) & =-a\left(
u^{\ast},v\right) ,\qquad\forall v\in U,\label{eq:two-level_corr-1}\\
b\left( u_{\text{corr}},q\right) & =0,\qquad\forall q\in Q.
\label{eq:two-level_corr-2}%
\end{align}
Specifically, use the CG method with the BDDC\ preconditioner defined in
Algorithm~\ref{alg:two-level}, using the same setup of the coarse problem as
in (\ref{eq:two-level-nested_coarse-1})--(\ref{eq:two-level-nested_coarse-2}).
Finally, the flux variables are obtained as
\[
u=u^{\ast}+u_{\text{corr}}.
\]
\end{enumerate}
\end{algorithm}
\begin{remark}
\label{rem:scaling}The difference between problems (\ref{eq:variational-1}%
)--(\ref{eq:variational-2}) and (\ref{eq:two-level_corr-1}%
)--(\ref{eq:two-level_corr-2}) is that the latter problem has a vanishing
second component, and therefore the correction $u_{\text{corr}}$ is
divergence-free by (\ref{eq:two-level_corr-2}). Also, we note that the initial
flux constraints constructed according to~(\ref{eq:starting_flux_constraint})
do not allow scaling weights in the scaling operator$~E$ to vary along the
interface in order for $u^{\ast}$ to satisfy
\[
b\left( u^{\ast},q\right) =\left\langle f,q\right\rangle ,\qquad\forall q\in
Q.
\]
Therefore, in our numerical experiments, we use the multiplicity scaling
unless the coefficient jumps are aligned with subdomain interfaces, see
also~\cite[Remark~2]{Sousedik-2013-NBS}.
\end{remark}
The application of the BDDC\ preconditioner for the computation of
$u_{\text{corr}}$ using the two- and three-level methods, respectively, was
studied by Tu~\cite{Tu-2005-BAM,Tu-2011-TBA}. In~\cite{Sousedik-2013-NBS}, we applied
Algorithm~\ref{alg:two-level-nested} recursively. Here, we will introduce a
specific construction of the space$~\widetilde{W}_{\Pi}$ but before doing so,
let us discuss Step~3 of Algorithm~\ref{alg:two-level-nested} in more detail.
The first step in substructuring is typically the reduction of the problem to
interfaces. In particular, problem~(\ref{eq:two-level_corr-1}%
)--(\ref{eq:two-level_corr-2}) is reduced to finding $\left( \widehat{w}%
,p_{0}\right) \in\left( \widehat{W},Q_{0}\right) $ such that%
\begin{align}
a\left( \widehat{w},\widehat{v}\right) +b\left( \widehat{v},p_{0}\right)
& =\left\langle f^{\ast},\widehat{v}\right\rangle ,\qquad\forall
\widehat{v}\in\widehat{W},\label{eq:corr-reduced-1}\\
b\left( \widehat{w},q_{0}\right) & =0,\qquad\forall q_{0}\in Q_{0},
\label{eq:corr-reduced-2}%
\end{align}
where $f^{\ast}\in\widehat{W}^{\prime}$ is the reduced right-hand side. In
implementation, the interiors are eliminated by the static condensation,
problem~(\ref{eq:corr-reduced-1})--(\ref{eq:corr-reduced-2}) is solved
iteratively, and the interiors $\left( u_{I},p_{I}\right) \in\left(
U_{I},Q_{I}\right) $ are recovered in the post-correction. The key
observation is, cf.~\cite[Section 9.4.2]{Toselli-2005-DDM}, that if we define
a \emph{balanced} subspace%
\[
\widehat{W}_{B}=\left\{ \widehat{w}\in\widehat{W}:b\left( \widehat{w}%
,q_{0}\right) =0,\quad\forall q_{0}\in Q_{0}\right\} ,
\]
problem~(\ref{eq:corr-reduced-1})--(\ref{eq:corr-reduced-2}) becomes
equivalent to the positive definite problem
\[
\widehat{u}\in\widehat{W}_{B}:\quad a\left( \widehat{u},\widehat{v}\right)
=\left\langle f^{\ast},\widehat{v}\right\rangle ,\quad\forall\widehat{v}\in
\widehat{W}_{B}.
\]
This observation justifies the use of the CG method preconditioned by the BDDC,
provided that the initial guess is balanced, e.g., zero, and the outputs of the
preconditioner are also balanced.
It also implies that the iterates are effectively performed with the flux
unknowns, and the pressure components$~p_{0}$ are resolved in the coarse
correction of the preconditioner. The precise formulation of the two-level
BDDC\ preconditioner for saddle-point problems follows. It is the reduced
variant of~\cite[Algorithm~3]{Sousedik-2013-NBS}.
\begin{algorithm}
[BDDC preconditioner]\label{alg:two-level} Define the preconditioner $\left(
r_{B},0\right) \in\left( \widehat{W}^{\prime},Q_{0}^{\prime}\right)
\longmapsto\left( \widehat{w},p_{0}\right) \in\left( \widehat{W}%
,Q_{0}\right) $\ by computing:
\begin{enumerate}
\item the coarse correction $\left( w_{\Pi},p_{0}\right) \in\left(
\widetilde{W}_{\Pi},Q_{0}\right) $ from
\begin{align}
a\left( w_{\Pi},z_{\Pi}\right) +b\left( z_{\Pi},p_{0}\right) &
=\left\langle r_{B},Ez_{\Pi}\right\rangle ,\qquad\forall z_{\Pi}%
\in\widetilde{W}_{\Pi},\label{eq:two-level_coarse-1}\\
b\left( w_{\Pi},q_{0}\right) & =0,\qquad\forall q_{0}\in Q_{0}.
\label{eq:two-level_coarse-2}%
\end{align}
\item the substructure correction $w_{\Delta}\in\widetilde{W}_{\Delta}$ from%
\begin{align*}
a\left( w_{\Delta},z_{\Delta}\right) +b\left( z_{\Delta},p_{I\Delta
}\right) & =\left\langle r_{B},Ez_{\Delta}\right\rangle ,\qquad\forall
z_{\Delta}\in\widetilde{W}_{\Delta},\\
b\left( w_{\Delta},q_{I}\right) & =0,\qquad\forall q_{I}\in Q_{I}.
\end{align*}
\item the sum and average of the two corrections%
\begin{equation}
\widehat{w}=E\left( w_{\Pi}+w_{\Delta}\right) . \label{eq:two-level_w}%
\end{equation}
\end{enumerate}
\end{algorithm}
In order to state the condition number bound, we also need to introduce a
larger space of \emph{balanced} functions $\widetilde{W}_{B}$, with
$\widehat{W}_{B}\subset\widetilde{W}_{B}$, defined as
\[
\widetilde{W}_{B}=\left\{ w\in\widetilde{W}:b\left( w,q_{0}\right)
=0,\quad\forall q_{0}\in Q_{0}\right\} .
\]
The space$~\widetilde{W}_{\Pi}$ is also balanced, i.e., $\widetilde{W}_{\Pi
}\subset\widetilde{W}_{B}$ by (\ref{eq:two-level_coarse-2}). Then also the
output of the preconditioner~(\ref{eq:two-level_w}) satisfies $\widehat{w}%
\in\widehat{W}_{B}$, and we refer to~\cite[Lemma~3]{Sousedik-2013-NBS} for the proof.
Finally, we formulate the condition number bound. Noting that $E$ is a
projection, the bound is the same as~\cite[Theorem~4]{Sousedik-2013-NBS}
or~\cite[Theorem~6.1]{Tu-2005-BAM}, cf. also~\cite[Theorem$~$3]%
{Mandel-2007-ASF}.
\begin{theorem}
\label{thm:two-level-bound}The condition number~$\kappa$ of the BDDC
preconditioner from Algorithm~\ref{alg:two-level} satisfies
\begin{equation}
\kappa\leq\omega=\max\left\{ {\sup_{w\in\widetilde{W}_{B}}\frac{\left\Vert
\left( I-E\right) w\right\Vert _{a}^{2}}{\left\Vert w\right\Vert _{a}^{2}%
},1}\right\} \leq C\left( 1+\log\frac{H}{h}\right) ^{2}\text{.}
\label{eq:two-level-bound}%
\end{equation}
\end{theorem}
The bound$~\omega$ in~(\ref{eq:two-level-bound}) inspires the adaptive
selection of the flux constraints.
\section{Adaptive selection of the flux constraints}
\label{sec:adaptive}The basic idea is the same as in our previous work on adaptive
BDDC for elliptic
problems~{\cite{Mandel-2007-ASF,Mandel-2012-ABT,Sousedik-2013-AMB}}. The
bound~$\omega$ in~(\ref{eq:two-level-bound}) is equal to the maximal
eigenvalue $\lambda_{\max}$\ of the generalized eigenvalue problem%
\begin{equation}
w\in\widetilde{W}_{B}:\quad a\left( \left( I-E\right) w,\left( I-E\right)
z\right) =\lambda\,a\left( w,z\right) ,\quad\forall z\in\widetilde{W}_{B}.
\label{eq:eigenvalue-problem}%
\end{equation}
From the Courant-Fischer-Weyl minimax principle, cf., e.g.,~\cite[Theorem
5.2]{Demmel-1997-ANL}, the bound$~\omega$ can be decreased by adding
constraints in the definition of the space$~\widetilde{W}_{B}$ as:
\begin{lemma}
[\cite{Mandel-2012-ABT,Sousedik-2013-AMB}]The generalized eigenvalue
problem~(\ref{eq:eigenvalue-problem}) has eigenvalues $\lambda_{1}\geq
\lambda_{2}\geq\dots\geq\lambda_{n}\geq0$. Denote the corresponding eigenvectors
by$~w_{\ell}$. Then, for any $k=1,\dots,n-1$, and any linear functionals
$L_{\ell}$, $\ell=1,\dots,k$,%
\[
\max\left\{ \frac{\left\Vert \left( I-E\right) w\right\Vert _{a}^{2}%
}{\left\Vert w\right\Vert _{a}^{2}}:w\in\widetilde{W}_{B},\;L_{\ell}\left(
w\right) =0\;\;\forall\ell=1,\dots,k\right\} \geq\lambda_{k+1},
\]
with equality if%
\[
L_{\ell}\left( w\right) =a\left( \left( I-E\right) w_{\ell},\left(
I-E\right) w\right) .
\]
\end{lemma}
Because solving the global eigenvalue problem\ (\ref{eq:eigenvalue-problem})
is computationally expensive, we replace it by a collection of much smaller
problems defined for all pairs of adjacent substructures, where a pair of
substructures is adjacent if they share a face. All quantities associated with
a pair of adjacent substructures $\Omega^{i}$ and $\Omega^{j}$ will be denoted
by a superscript$~^{ij}$. In particular, we define $W^{ij}=W^{i}\times W^{j}$,
and the local space~$\widetilde{W}_{B}^{ij}$ of Stokes harmonic functions that
satisfy the initial constraints at the face$~\Gamma^{ij}$ by%
\begin{equation}
\widetilde{W}_{B}^{ij}=\left\{ w\in\left( I-P^{ij}\right) W^{ij}:C_{U}%
^{ij}\left( I-E^{ij}\right) w=0\right\} . \label{eq:def-Wtilde-dual}%
\end{equation}
We note that the space$~\widetilde{W}_{B}^{ij}$ is balanced, which is an
implication of Assumption~\ref{ass:enough-constraints}.
In this setting,~(\ref{eq:eigenvalue-problem}) becomes a \emph{local} problem:
find $w\in\widetilde{W}_{B}^{ij}$ such that
\begin{equation}
a^{ij}\left( \left( I-E^{ij}\right) w,\left( I-E^{ij}\right) z\right)
=\lambda\,a^{ij}\left( w,z\right) ,\quad\forall z\in\widetilde{W}_{B}^{ij}.
\label{eq:eigenvalue-problem-local}%
\end{equation}
The bilinear form $a^{ij}$ is associated on$~\widetilde{W}_{B}^{ij}$ with the
Schur complement$~S^{ij}$ defined with respect to the interfaces$~\Gamma^{i}%
$,$~\Gamma^{j}$, and it is positive definite, cf.~\cite[Lemma$~$3.1]{Tu-2005-BAM}.
Now we can proceed in the same way as in~\cite{Sousedik-2013-AMB}. Let us
denote by$~\mathcal{C}$ the matrix corresponding to $C_{U}^{ij}\left(
I-E^{ij}\right) $. The orthogonal projection onto null$~\mathcal{C}$ is given
by%
\[
\Pi=I-\mathcal{C}^{T}\left( \mathcal{CC}^{T}\right) ^{-1}\mathcal{C},
\]
and we implement the local generalized eigenvalue
problems~(\ref{eq:eigenvalue-problem-local}) as%
\begin{equation}
\Pi\left( I-E^{ij}\right) ^{T}S^{ij}\left( I-E^{ij}\right) \Pi
w=\lambda\,\Pi S^{ij}\Pi w, \label{eq:eigenvalue-problem-local-matrix}%
\end{equation}
which can be either solved using a dense eigenvalue
solver~\cite{Mandel-2007-ASF} or, alternatively, since
\[
\text{null}\left[ \Pi S^{ij}\Pi\right] \subset\text{null}\left[ \Pi\left(
I-E^{ij}\right) ^{T}S^{ij}\left( I-E^{ij}\right) \Pi\right] ,
\]
a subspace iteration such as the LOBPCG method~\cite{Knyazev-2001-TOP}, which
runs effectively in the factor space, could also be used.
From~(\ref{eq:eigenvalue-problem-local-matrix}), we wish the constraints to
satisfy
\[
L_{\ell}\left( w\right) =w_{\ell}^{T}\Pi\left( I-E^{ij}\right) ^{T}%
S^{ij}\left( I-E^{ij}\right) \Pi w=0.
\]
That is, we would add into the matrix $C_{U}^{ij}$ the rows
\begin{equation}
c_{\ell}^{ij}=w_{\ell}^{T}\Pi\left( I-E^{ij}\right) ^{T}S^{ij}\left(
I-E^{ij}\right) \Pi, \label{eq:constraints}%
\end{equation}
but, by~\cite[Proposition 1]{Sousedik-2013-AMB}, each row can be split
as $c_{\ell}^{ij}=\left[
\begin{array}
[c]{cc}%
c_{\ell}^{i} & -c_{\ell}^{i}%
\end{array}
\right] $, and either half of$~c_{\ell}^{ij}$ is used to augment the
matrices$~C_{U}^{i}$ and$~C_{U}^{j}$, see~(\ref{eq:C^i}). We note that, due to
the discretization using RT0 elements, the added rows are readily available in
the form used in substructuring. The adaptive BDDC algorithm follows.
\begin{algorithm}
[Adaptive BDDC]\label{alg:adaptive}Find the smallest $k$ for every two
adjacent substructures to guarantee that $\lambda_{k+1}\leq\tau$, where $\tau$
is a given tolerance threshold (the target condition number), and add the
constraints~(\ref{eq:constraints}) to the definition of $\widetilde{W}$.
\end{algorithm}
After the adaptive constraints are added, we define the \emph{heuristic
condition number indicator} as the largest eigenvalue $\omega^{ij}$ of all
local eigenvalue problems~(\ref{eq:eigenvalue-problem-local}), that is
\begin{equation}
\widetilde{\omega}=\max\left\{ \omega^{ij}:\Omega^{i}\text{ and }\Omega
^{j}\text{ are adjacent}\right\} . \label{eq:indicator}%
\end{equation}
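The local computation of this section can be sketched as follows; this is a minimal self-contained illustration with random data (the Schur complement $S^{ij}$, the averaging operator $E^{ij}$, and the initial constraint are mocked up, not assembled from RT0 elements, and the Stokes harmonic extension is omitted). Instead of the singular pencil, we restrict to an orthonormal basis of $\operatorname{null}\mathcal{C}$, where the right-hand matrix is positive definite, and use a dense symmetric-definite eigensolver:

```python
import numpy as np
from scipy.linalg import eigh, null_space

rng = np.random.default_rng(0)
m = 4                      # interface flux dofs per subdomain
n = 2 * m                  # pair of adjacent subdomains

# Mock Schur complement S^{ij} (symmetric positive definite).
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)

# Averaging operator E (multiplicity scaling on the shared face): a projection.
I = np.eye(m)
E = 0.5 * np.block([[I, I], [I, I]])

# Initial zero-net-flux constraint (one row) and the matrix C = C_U (I - E).
C_U = np.hstack([np.ones(m), -np.ones(m)])[None, :]
C = C_U @ (np.eye(n) - E)

# Restrict the pencil to null(C); there the right-hand matrix Z^T S Z is SPD.
Z = null_space(C)
G = (np.eye(n) - E) @ Z
lam, V = eigh(G.T @ S @ G, Z.T @ S @ Z)   # real eigenvalues, ascending
lam, V = lam[::-1], V[:, ::-1]            # sort descending

tau = 2.0                                 # target condition number (tolerance)
k = int(np.sum(lam > tau))                # number of adaptive constraints
omega_ij = max(lam[0], 1.0)               # contribution to the indicator
print("k =", k, " omega_ij =", omega_ij)

# Adaptive constraint rows; each splits as [c, -c] across the two subdomains.
for ell in range(k):
    c = (Z @ V[:, ell]) @ (np.eye(n) - E).T @ S @ (np.eye(n) - E)
    assert np.allclose(c[:m], -c[m:])     # halves augment C_U^i and C_U^j
```

The restriction to $\operatorname{null}\mathcal{C}$ is mathematically equivalent to the projected formulation, since $\Pi w=w$ there, and it avoids the infinite eigenvalues of the singular pencil.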
\begin{remark}
It has been shown in~\cite[Theorem~4.3]{Zampini-2017-MBD}, see
also~\cite[Theorem~3.10]{Pechstein-2017-UFA} and~\cite[Theorem~3.3]%
{Oh-2018-BAD}, that the condition number$~\kappa$ of the adaptive BDDC
operator satisfies
\[
\kappa\leq\widetilde{\omega}\,N_{F}^{2},
\]
where $N_{F}$ is the maximum number of faces of any subdomain. We note that
this bound is pessimistic due to the factor$~N_{F}^{2}$, and in fact we
observed $\kappa\approx\widetilde{\omega}$ in all experiments.
\end{remark}
\section{Implementation remarks}
\label{sec:implementation}First, we describe a rescaling used to preserve
numerical stability of the method with highly heterogeneous permeability
coefficients. The variational problem~(\ref{eq:variational-1}%
)--(\ref{eq:variational-2}) can be written in the matrix form as%
\begin{equation}
\left[
\begin{array}
[c]{cc}%
A & B^{T}\\
B & 0
\end{array}
\right] \left[
\begin{array}
[c]{c}%
u\\
p
\end{array}
\right] =\left[
\begin{array}
[c]{c}%
0\\
f
\end{array}
\right] . \label{eq:system}%
\end{equation}
Assuming that the mesh size$~h\approx1$, the entries in$~A$ are$~O\left(
k^{-1}\right) $ and the entries in$~B$ are$~O\left( 1\right) $. In
particular, in the case of the SPE~10 data set we get $k^{-1}\approx
10^{6}$--$10^{12}$, and we found that some of the subdomain matrices and the
matrix of the coarse problem may appear numerically singular. Due to the
discontinuous approximation of the pressure, $B$ is a block-diagonal
rectangular matrix. Each block corresponds to a particular subdomain, and it
can be rescaled, e.g., by an average of the diagonal entries of $A$
corresponding to the degrees of freedom in this subdomain. Collecting these
scaling coefficients in a diagonal matrix$~D$, we replace~(\ref{eq:system})
by
\begin{equation}
\left[
\begin{array}
[c]{cc}%
A & B^{T}D\\
DB & 0
\end{array}
\right] \left[
\begin{array}
[c]{c}%
u\\
\overline{p}%
\end{array}
\right] =\left[
\begin{array}
[c]{c}%
0\\
Df
\end{array}
\right] , \label{eq:system-scaled}%
\end{equation}
and the pressure is recovered at the end of computations as $p=D\overline{p}$.
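The rescaling can be verified on a small mock saddle-point system (random data with an arbitrary block structure, not an RT0 discretization): solving the rescaled system and setting $p=D\overline{p}$ reproduces the solution of the original system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock flux block A (SPD, with a large coefficient contrast) and a
# block-diagonal divergence matrix B: two subdomains, three flux dofs each.
A = np.diag(np.array([1.0, 1.0, 1.0, 1e4, 1e4, 1e4]))
B = np.zeros((2, 6))
B[0, :3] = rng.standard_normal(3)
B[1, 3:] = rng.standard_normal(3)
f = rng.standard_normal(2)

# One scaling coefficient per subdomain: average of the corresponding
# diagonal entries of A, collected in the diagonal matrix D.
D = np.diag([A.diagonal()[:3].mean(), A.diagonal()[3:].mean()])

K = np.block([[A, B.T], [B, np.zeros((2, 2))]])
K_scaled = np.block([[A, B.T @ D], [D @ B, np.zeros((2, 2))]])

x = np.linalg.solve(K, np.concatenate([np.zeros(6), f]))
y = np.linalg.solve(K_scaled, np.concatenate([np.zeros(6), D @ f]))

u, p = x[:6], x[6:]
u_s, p_bar = y[:6], y[6:]

assert np.allclose(u, u_s)          # fluxes agree
assert np.allclose(p, D @ p_bar)    # pressure recovered as p = D p_bar
```

Since $D$ is invertible, the substitution $p=D\overline{p}$ maps one system onto the other exactly; the point of the rescaling is only to keep the off-diagonal blocks on a scale comparable to $A$.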
\subsection{Coarse degrees of freedom}
\label{sec:coarse-dofs}The selection of the flux coarse degrees of freedom or,
equivalently, flux constraints entails construction of the matrix~$C_{U}$ in
the definition of the space~$\widetilde{W}$ by~(\ref{eq:tilde-def}).
Similarly, the selection of the pressure constraints, which facilitate the
decomposition~(\ref{eq:Q_decomposition}), entails construction of the
matrices~$C_{Q}^{i}$, $i=1,\dots,N$, in the definition of the spaces$~Q^{i}$
by~(\ref{eq:Qi-def}). Following the standard practice in substructuring, in
implementation we work with global and local degrees of freedom and the
corresponding spaces, and vectors from these spaces are related by a
restriction operator (a~zero-one matrix). Therefore, the matrix$~C_{U}$ is
constructed as a block-diagonal matrix using blocks$~C_{U}^{i}$ that select
local flux coarse degrees of freedom from all degrees of freedom of
substructure~$i$, see~\cite[Section~2.3]{Mandel-2007-ASF} for details. In the
mixed finite element setting, each local coarse-degree-of-freedom selection
matrix is constructed simply by augmenting the matrix$~C_{U}^{i}$ by a
row$~C_{Q}^{i}$ as
\begin{equation}
\left[
\begin{array}
[c]{cc}%
C_{U}^{i} & \\
& C_{Q}^{i}%
\end{array}
\right] ,\qquad i=1,\dots,N, \label{eq:C^i}%
\end{equation}
and the matrices$~C_{U}^{i}$ may be further augmented by the adaptive
algorithm, see~(\ref{eq:constraints}).
\subsection{Solution of the local generalized eigenvalue problems}
The choice of an eigensolver for the eigenvalue
problems~(\ref{eq:eigenvalue-problem-local-matrix}) is a delicate one. In
general, the decision whether to use a dense or a sparse\ eigensolver depends
on the type of the eigenvalue problem, size of the substructures, dimension of
the problem, availability of a preconditioner for a sparse solver, and
conditioning and numerical sensitivity of the underlying problem. All these
factors will clearly affect the overall computational cost and performance of
the method. We note that the
formulation~(\ref{eq:eigenvalue-problem-local-matrix}) allows the use of a
matrix-free iterative method such as the LOBPCG~\cite{Knyazev-2001-TOP} in the
same way as for elliptic problems, including that it can be further
preconditioned by a local version of the BDDC as suggested in~\cite[Section~5]%
{Sousedik-2013-AMB}, see also~\cite{Klawonn-2018-CLL}. However, we found
that dense eigenvalue solvers are more suitable for the SPE~10 dataset due to
their robustness, and we used the \textsc{Matlab} function \texttt{eig} in the
numerical experiments.
\subsection{Computational cost}
Clearly, the two most computationally expensive parts of the method are the
setup of the constraints by solving the set of the local eigenvalue problems,
and the factorization of the coarse problem. There are many eigenvalue
problems to be solved, but they are small and can be solved in parallel---this
feature is similar to the setup of multiscale finite element
methods~\cite{Efendiev-2009-MFE}. Assuming that these can be solved
efficiently, the bottleneck in computations is the factorization of the coarse
problem. Specifically, it is crucial for the application of the method to
appropriately balance the effort in the preconditioner and the global linear
solver through a judicious choice of$~\tau$. This could be, for example,
achieved as follows: one can partition the domain into subdomains balancing
the sizes of subdomains and assuming a certain size of the coarse problem (and
ideally also taking into account the coefficient jumps and minimizing the size
of interfaces), solve the set of local eigenvalue problems, and based on the
eigenvalues determine the number of additional adaptive constraints (and hence
the value of$~\widetilde{\omega}$) that minimizes the combined work: the work
needed to factor the coarse problem plus the work of the preconditioned
conjugate gradients, including the coarse-problem back-substitutions, required
to reduce the error to the desired accuracy according to the well-known error
reduction formula of conjugate gradients, see,
e.g.,~\cite[Theorem~10.2.6]{Golub-1996-MAC}.
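For reference, the error reduction formula invoked here is the standard conjugate gradient bound in the energy norm (with iterates $u_{k}$, exact solution $u$, and condition number $\kappa$ of the preconditioned operator):

```latex
\begin{equation*}
\left\Vert u-u_{k}\right\Vert _{A}\leq
2\left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right) ^{k}
\left\Vert u-u_{0}\right\Vert _{A},
\end{equation*}
```

so lowering$~\tau$ (and hence $\kappa$) reduces the iteration count at the price of a larger coarse problem.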
\section{Numerical experiments}
\label{sec:numerical}We implemented the method in \textsc{Matlab}\ and studied
its convergence for problems with large variations in the permeability
coefficients $k$. In all experiments we used relative residual tolerance
$10^{-6}$ as the convergence criterion for the conjugate gradients. First, we
ran a test with jumps in$~k$ aligned with substructure interfaces, see
Figure~\ref{fig:jumps}. For this problem we used stiffness scaling, which in
the case of the lowest-order Raviart-Thomas (RT0) elements is equivalent to the
$\rho$-scaling. This also implies that the stiffness scaling works well for
irregular meshes (unlike for nodal elements). The conjugate gradients with the
BDDC\ preconditioner converged in~$15$ steps and the approximate condition
number computed from the L\'{a}nczos sequence in conjugate gradients was
$\kappa=4.046$; with $k=1$ the method converged in $14$ steps and
$\kappa=4.050$, see the rightmost column in Table~\ref{tab:layers}. In the
remaining experiments, we focused on problems with highly heterogeneous
coefficients, and we used the multiplicity scaling. Specifically, we simulated
flow in a porous media given by Model~2 of the 10th SPE Comparative Solution
Project~\cite{Christie-2001-SPE10}, which is publicly available on the
Internet\footnote{\url{http://www.sintef.no/Projectweb/GeoScale/Results/MsMFEM/SPE10/}
} and, in particular, we used a \textsc{Matlab} dataset described
in~\cite{Aarnes-2007-INF}. The dimensions of the full model are $1200\times
2200\times170$~(ft), and the distribution of the coefficients$~k$ is given
over a regular Cartesian grid with $60\times220\times85$ grid-blocks. We used
several layers and two 3D cutouts of the model for our numerical experiments.
For the experiments in 2D, we used layers 1, 20, 60 and 85 shown in
Figures~\ref{fig:layers_1_20}--\ref{fig:layers_60_85}. In the top layers~1 and
20 the permeability is relatively smooth, whereas the bottom layers~60 and 85
are fluvial and they are characterized by a spaghetti of narrow high-flow
channels. In all layers the permeabilities range over at least six orders of
magnitude. To drive a flow, we impose an injection (source) and a production
well (sink) in the lower-left and upper-right corners, respectively. The
discretization of each layer by the quadrilateral RT0 finite elements yields
$39,880$ degrees of freedom. The layers were partitioned into subdomains in
four ways: using two geometrically regular partitionings with the coarsening
ratios $H/h=30$ and $H/h=10$, and two irregular partitionings. The details of
the partitionings are summarized in Table~\ref{tab:partitioning}\ and
illustrated by Figures~\ref{fig:layers_1_20}--\ref{fig:layers_60_85}. For the
experiments in~3D, we used two domains consisting of $30\times30\times30$
elements extracted from layers~$1$--$30$ and $56$--$85$ of the SPE~10 problem
shown in Figure~\ref{fig:mixed_RT0_3D}. To drive a flow, we impose an
injection (source) and a production well (sink) in two distant corners of the
domain. The discretization by the hexahedral lowest-order Raviart-Thomas (RT0)
finite elements yields $110,700$ degrees of freedom. The domain was
partitioned into subdomains in two ways: using one geometrically regular
partitioning with the coarsening ratio $H/h=10$, and an irregular
partitioning. The details of the partitionings are summarized in
Table~\ref{tab:partitioning} and illustrated by Figure~\ref{fig:3D-metis}. All
irregular partitionings were obtained using \textsc{METIS~4.0}%
~\cite{Karypis-1998-MSP}, and in order to test the adaptive algorithm we did
not take into account the permeability coefficients.
It is interesting to note that the adaptive flux coarse basis functions
capture to some extent features of the solution on the finite element mesh,
and the quality of this approximation improves as the threshold$~\tau$ in
Algorithm~\ref{alg:adaptive} decreases. We illustrate this fact by relative
errors of solutions~$u_{0}$ and~$u^{\ast}$\ obtained in Steps~1 and~2 of
Algorithm~\ref{alg:two-level-nested} with respect to the exact
solution$~u_{\text{exact}}$ obtained by a direct solve of the full problem.
Specifically, the two relative errors are reported in tables as
\begin{equation}
\epsilon_{0}=\frac{\left\Vert u_{0}-u_{\text{exact}}\right\Vert }{\left\Vert
u_{\text{exact}}\right\Vert },\qquad\epsilon^{\ast}=\frac{\left\Vert u^{\ast
}-u_{\text{exact}}\right\Vert }{\left\Vert u_{\text{exact}}\right\Vert }.
\label{eq:epsilon}%
\end{equation}
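The computation of~(\ref{eq:epsilon}) is straightforward; the following sketch (a hypothetical illustration, not the \textsc{Matlab} code used in the experiments) evaluates the relative error for coefficient vectors in the Euclidean norm.

```python
from math import sqrt

# Sketch: relative error ||u - u_exact|| / ||u_exact|| for
# coefficient vectors, here in the Euclidean norm.
def rel_error(u, u_exact):
    num = sqrt(sum((a - b) ** 2 for a, b in zip(u, u_exact)))
    den = sqrt(sum(b * b for b in u_exact))
    return num / den

u_exact = [1.0, 2.0, 3.0]          # stand-in for the direct solve
u0 = [1.1, 1.8, 3.3]               # stand-in for the coarse solution u_0
print(rel_error(u0, u_exact))      # -> 0.1 (up to rounding)
```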
We also compare the adaptive method with constraints inspired by
\textsl{Multiscale mixed finite element method (MsMFEM)}
cf.~\cite[Algorithm~2.5.2]{Efendiev-2009-MFE}
or~\cite[Section~3.2.1]{Aarnes-2006-HMM}.
In particular, instead of the local eigenvalue problems we solved local
Darcy's flow problems, that is local counterparts of
problem~(\ref{eq:problem-1})--(\ref{eq:problem-2}), with the source term
\[
f\left( x\right) =%
\begin{cases}
\;\;w_{i}, & \text{for }x\in\Omega^{i}\text{,}\\
-w^{j}, & \text{for }x\in\Omega^{j}\text{,}%
\end{cases}
\]
and zero flux boundary condition on $\partial\Omega^{i}\cap\partial\Omega^{j}%
$. The source distribution function is set to $w_{i}\left( x\right)
=1/\left\vert \Omega^{i}\right\vert $ in all subdomains except those
containing a well, in which
\[
w_{i}\left( x\right) =\frac{f\left( x\right) }{\int_{\Omega^{i}}f\left(
\xi\right) \,d\xi},
\]
to ensure a conservative approximation on the fine grid. In the numerical
experiments we then used the set of basic
constraints~(\ref{eq:starting_flux_constraint}) enriched by solving the above
problem and taking the values of flux degrees of freedom on $\partial
\Omega^{i}\cap\partial\Omega^{j}$ as additional constraints. Nevertheless, we
note that there are other more advanced solvers based on multiscale strategies
available in the literature, see, e.g., Yang et al.~\cite{Yang-2018-TGP} or la
Cour Christensen et al.~\cite{laCourChristensen-2017-NMU}, and a thorough
comparison of the methods would be of independent interest.
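A discrete version of this construction can be sketched as follows (an illustrative stand-in, not the code used in the experiments): on the cells of a subdomain without a well the distribution is constant, and on a subdomain containing a well it rescales~$f$ to unit integral.

```python
# Sketch: cellwise source distribution w_i on one subdomain.
# f_cells: source values per cell, vols: cell volumes.
def source_distribution(f_cells, vols):
    integral_f = sum(f * v for f, v in zip(f_cells, vols))
    if integral_f == 0.0:                        # no well: w_i = 1/|Omega_i|
        return [1.0 / sum(vols)] * len(vols)
    return [f / integral_f for f in f_cells]     # well: w_i = f / int(f)

# in both cases the distribution integrates to one over the subdomain
w = source_distribution([0.0, 2.0, 0.0], [0.5, 0.5, 0.5])
print(sum(wi * v for wi, v in zip(w, [0.5, 0.5, 0.5])))  # -> 1.0
```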
The results of numerical experiments in~2D are summarized in
Tables~\ref{tab:layers}--\ref{tab:layer85}. Table~\ref{tab:layers} shows
performance of the non-adaptive method for the homogeneous case with$~k=1$ and
the layers of the SPE~10 problem. It can be seen that for layers$~1$ and $20$
the convergence does not significantly depend on the partitioning and it is
also quite comparable to the homogeneous case with $k=1$. On the other hand,
for layers $60$ and $85$ the variations in coefficients aggravate convergence,
which is also quite sensitive to the partitioning. This holds, in particular,
for layer$~60$ which contains both regions that are highly heterogeneous and
relatively homogeneous. It can be also seen by comparing left and right
columns in Table~\ref{tab:layers} that increasing the number of subdomains
(that is decreasing the coarsening ratio$~H/h$) leads to higher condition
numbers and increase in iteration counts for both regular and irregular
partitionings. This is not the case in the standard theory of domain
decomposition methods, but here we suspect it can be attributed to the jumps
in coefficients and larger interfaces. The performance of the adaptive
algorithm is illustrated by Tables~\ref{tab:layer1}--\ref{tab:layer85}.
Table~\ref{tab:layer1} shows convergence for layer~$1$ with irregular
partitioning~A, and Table~\ref{tab:layer85} shows convergence for layer~$85$
with irregular partitioning~B. It can be seen that in all cases lower values
of the threshold$~\tau$ lead to fewer iterations, and the value of the
condition number indicator$~\widetilde{\omega}<\tau$ is in good agreement
with$~\kappa$, which is the approximate condition number estimate obtained
from the L\'{a}nczos sequence in conjugate gradients. The adaptive constraints
also lead to more significant improvement in convergence than the multiscale
constraints. The problem for layer$~85$ is particularly interesting. From the
right panel in~Figure~\ref{fig:layers_60_85} we see that the coefficient jumps
have very large variations even on the interfaces, which can be seen in the
left panel of Figure~\ref{fig:spe10_sub_1-2}. The right panel displays the
eigenvalues of the corresponding eigenproblem: $\lambda_{1}\approx3769.5$ and
all other eigenvalues are less than$~20$. Figure~\ref{fig:spe10_eig} then
displays $300$ largest eigenvalues of the (global) BDDC\ preconditioned
operator without adaptivity and with adaptive BDDC\ and target condition
number $\tau=100$. We see that without adaptivity there is a single largest
eigenvalue: specifically $\lambda_{1}=59,492$ and $\lambda_{2}=9,258$. For the
adaptive BDDC\ with $\tau=100$ we get $\lambda_{1}=96.3$. Comparing this plot
with Table~\ref{tab:layer85} we see that the adaptive BDDC\ with $\tau=100$
introduces $115$ adaptive constraints, which corresponds to the number of the
largest eigenvalues removed from the spectrum of the BDDC preconditioned
operator. We also note that adding a single adaptive constraint reduces the
iteration count from $392$ to $347$, which corresponds to the large gap in the
spectrum of the operator without adaptivity. Setting $\tau$ to a lower value,
for example, $\tau=3$, roughly doubles the number of constraints and the
number of iterations is reduced to approximately~$10$. Also, a lower value
of$~\tau$ improves the approximation quality of the first two steps of
Algorithm~\ref{alg:two-level-nested} and, for example, with $\tau<3$ we get
the error $\epsilon^{\ast}<20\%$.
The results of numerical experiments in 3D are summarized in
Tables~\ref{tab:3D}--\ref{tab:spe10-3D-top}. It can be seen from
Table~\ref{tab:3D} that the numbers of iterations are significantly higher
than in~2D, and the convergence is slower for the fluvial bottom
layers~$56$--$85$ compared with the relatively smooth top layers~$1$--$30$.
The increase in iterations becomes even more pronounced in the case of the
irregular partitioning also due to larger interfaces. The results of
experiments with the adaptive algorithm are summarized in
Tables~\ref{tab:spe10-3D}--\ref{tab:spe10-3D-top}. As in the~2D case, lower
values of the threshold$~\tau$ lead in all cases to fewer iterations, and the
values of$~\tau$, $\widetilde{\omega}$ and$~\kappa$ are in close agreement.
Again, the multiscale constraints provide only a slight improvement of
convergence. Table~\ref{tab:spe10-3D} shows convergence for layers$~1$--$30$.
It can be seen that despite the higher condition number of the problem
corresponding to the irregular partitioning, the adaptive algorithm allows one
to decrease the iteration counts for lower values of$~\tau$. As in 2D,
the first few adaptive constraints allow one to decrease the number of
iterations by a fairly large amount: here adding $14$ constraints reduces the number of
iterations from the initial value~$1968$ to $1280$. For example,
with$~\tau=10$ the number of iterations decreases to~$18$; however, the number
of constraints grows rather significantly from~$335$ to~$1617$. Finally, the
values of$~\epsilon_{0}$ and$~\epsilon^{\ast}$ are considerably larger than in
the~2D experiments. Table~\ref{tab:spe10-3D-top} shows convergence for
layers$~56$--$85$ and the regular partitioning $H/h=10$, and the trends are
quite similar to those in the previous case. That is, the adaptive algorithm
allows one to control the convergence of conjugate gradients, but the number of
adaptive constraints is relatively high, in particular for lower values of $\tau$. These
trends are in agreement with the qualitative observations made from
Figure~\ref{fig:spe10_eig}.
\begin{figure}[ptbh]
\begin{center}%
\begin{tabular}
[c]{cc}%
\includegraphics[width=5.6cm]{mixed_RT0_test1_nsub64.pdf} &
\raisebox{0.25\height}{\includegraphics[width=6.5cm]{mixed_RT0_test1_nsub64_eig.pdf}}
\end{tabular}
\end{center}
\caption{Substructuring and the base $10$ logarithm of the permeability $k$
for the problem with jumps aligned with the substructure interfaces (left
panel) and the largest $300$ eigenvalues of the BDDC\ preconditioned operator
for this problem (right panel). }%
\label{fig:jumps}%
\end{figure}
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=6.3cm]{mixed_RT0_spe10_layer1_nsub16.pdf}
\hspace{1mm}
\includegraphics[width=6.3cm]{mixed_RT0_spe10_layer20_nsub64.pdf}
\vspace{-10pt}
\end{center}
\caption{Substructuring and the base~$10$ logarithm of the permeability~$k$ in
layer~1 (left panel) and layer~20 (right panel) of the SPE~10 problem. Left
panel also illustrates irregular partitioning~A, and the right panel
illustrates irregular partitioning~B.}%
\label{fig:layers_1_20}%
\end{figure}
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=6.3cm]{mixed_RT0_spe10_layer60_nsub16.pdf}
\hspace{1mm}
\includegraphics[width=6.3cm]{mixed_RT0_spe10_layer85_nsub64.pdf}
\vspace{-10pt}
\end{center}
\caption{Substructuring and the base~$10$ logarithm of the permeability~$k$ in
layer~60 (left panel) and layer~85 (right panel) of the SPE~10 problem. Left
panel illustrates irregular partitioning A, and the right panel illustrates
irregular partitioning B.}%
\label{fig:layers_60_85}%
\end{figure}
\begin{table}[ptbh]
\caption{Substructuring of the 2D and 3D problems: $N$ is the number of
subdomains, $n_{\Gamma}$ is the number of (flux) degrees of freedom on
interfaces, $n_{f}$ is the number of faces, and $n_{c}$ is the number of
(initial) coarse degrees of freedom.}%
\label{tab:partitioning}
\begin{center}%
\begin{tabular}
[c]{|c|r|r|r|r|}\hline
type of partitioning & $N$ & $n_{\Gamma}$ & $n_{f}$ & $n_{c}$\\\hline
\multicolumn{5}{|c|}{2D}\\\hline
regular ($H/h=30$) & 14 & 580 & 19 & 33\\
regular ($H/h=10$) & 132 & 2360 & 236 & 368\\
irregular A & 16 & 756 & 29 & 70\\
irregular B & 64 & 1746 & 152 & 315\\\hline
\multicolumn{5}{|c|}{3D}\\\hline
regular ($H/h=10$) & 27 & 5400 & 54 & 81\\
irregular & 32 & 7267 & 129 & 335\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptbh]
\caption{Convergence of the non-adaptive method for the homogeneous case
($k=1$) and four layers of the SPE~10 problem. Here $\epsilon_{0}$ and
$\epsilon^{\ast}$ are the errors (in $\%$) defined by~(\ref{eq:epsilon}),
$\protect\widetilde{\omega}$ is the condition number indicator
from~(\ref{eq:indicator}), $it$~is the number of iterations for relative
residual tolerance $10^{-6}$, and $\kappa$~is the approximate condition number
computed from the L\'{a}nczos sequence in conjugate gradients.}%
\label{tab:layers}
\begin{center}%
\begin{tabular}
[c]{|c|r|r|r|r|r|r|r|r|}\hline
\multirow{2}{*}{layer} & \multicolumn{2}{|c|}{$H/h=30$} &
\multicolumn{2}{|c|}{$H/h=10$} & \multicolumn{2}{|c|}{irregular~A} &
\multicolumn{2}{|c|}{irregular~B}\\\cline{2-9}
& $it$ & $\kappa$ & $it$ & $\kappa$ & $it$ & $\kappa$ & $it$ & $\kappa
$\\\hline
($k=1$) & 11 & 2.790 & 14 & 3.980 & 12 & 3.151 & 14 & 4.050\\
1 & 15 & 8.879 & 22 & 9.491 & 17 & 6.714 & 19 & 11.197\\
20 & 14 & 5.749 & 19 & 6.926 & 15 & 6.524 & 18 & 6.429\\
60 & 162 & 4564.1 & 513 & 26,359.3 & 244 & 11,272.6 & 292 & 7301.7\\
85 & 183 & 9310.7 & 446 & 24,492.8 & 208 & 7170.4 & 392 & 58,931.7\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptbh]
\caption{Convergence of the adaptive method for layer~$1$ of the SPE~10
problem with the irregular partitioning A. Here $\tau$ is the target condition
number from Algorithm~\ref{alg:adaptive}, $\epsilon_{0}$~and~$\epsilon^{\ast}$
are the errors (in~$\%$) defined by~(\ref{eq:epsilon}),
$\protect\widetilde{\omega}$ is the condition number indicator
from~(\ref{eq:indicator}), $it$~is the number of iterations for relative
residual tolerance $10^{-6}$, and $\kappa$~is the approximate condition number
computed from the L\'{a}nczos sequence in conjugate gradients. With
$\tau=\infty$ no adaptive constraints were used, and (ms) indicates use of the
multiscale constraints.}%
\label{tab:layer1}
\begin{center}%
\begin{tabular}
[c]{|r|r|r|r|r|r|r|}\hline
$\tau$ & $\epsilon_{0} \,\, [\%]$ & $\epsilon^{*} \,\, [\%]$ &
$\widetilde{\omega}$ & $n_{c}$ & $it$ & $\kappa$\\\hline
$\infty$ & 73.21 & 30.55 & 13.586 & 70 & 17 & 6.714\\\hline
(ms) & 72.55 & 27.15 & -na-$\,\,$ & 121 & 15 & 5.998\\\hline
10 & 71.86 & 29.11 & 8.404 & 73 & 16 & 6.231\\
5 & 70.77 & 18.89 & 4.765 & 81 & 13 & 5.517\\
3 & 69.19 & 11.64 & 2.997 & 104 & 11 & 2.842\\
2 & 69.23 & 9.42 & 1.970 & 153 & 8 & 1.915\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptbh]
\caption{Convergence of the adaptive method for layer~$85$ with the irregular
partitioning~B. }%
\label{tab:layer85}
\begin{center}%
\begin{tabular}
[c]{|r|r|r|r|r|r|r|}\hline
$\tau$ & $\epsilon_{0} \,\, [\%]$ & $\epsilon^{*} \,\, [\%]$ &
$\widetilde{\omega}$ & $n_{c}$ & $it$ & $\kappa$\\\hline
$\infty$ & 69.34 & 42.11 & 59,491.702 & 315 & 392 & 58,931.700\\\hline
(ms) & 68.76 & 39.00 & -na-$\quad$ & 494 & 297 & 8931.930\\\hline
10,000 & 69.34 & 42.11 & 9275.614 & 316 & 347 & 9170.830\\
1000 & 68.03 & 40.29 & 898.754 & 360 & 152 & 836.227\\
100 & 67.21 & 38.38 & 98.117 & 430 & 54 & 95.439\\
10 & 66.16 & 35.68 & 9.885 & 489 & 19 & 9.672\\
5 & 63.07 & 31.85 & 4.888 & 536 & 13 & 4.836\\
3 & 56.19 & 18.87 & 2.988 & 614 & 10 & 3.010\\
2 & 53.44 & 14.87 & 1.997 & 743 & 7 & 1.879\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=5.5cm]{spe10_layer85_subs1_2.pdf}
\includegraphics[width=5.5cm]{spe10_layer85_face1_eig.pdf}
\end{center}
\caption{The base~$10$ logarithm of the permeability~$k$ in the subdomains~$1$
and~$2$ from the layer~85 of the SPE~10 problem (Fig.~\ref{fig:layers_60_85},
right panel), and the first $12$ eigenvalues of the corresponding local
generalized eigenvalue problem~(\ref{eq:eigenvalue-problem-local}). Here
$\lambda_{i}=0$ for $i=11$ and $12$.}%
\label{fig:spe10_sub_1-2}%
\end{figure}
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=9cm]{mixed_RT0_spe10_layer85_nsub64_eig.pdf}
\end{center}
\caption{The largest $300$ eigenvalues of the BDDC\ preconditioned operator
for the layer~$85$ of the SPE~10 problem (Fig.~\ref{fig:layers_60_85}, right
panel) without adaptivity ($\tau=\infty$) and for the adaptive BDDC with the
target condition number \thinspace$\tau=100$. }%
\label{fig:spe10_eig}%
\end{figure}
\begin{figure}[ptbh]
\begin{center}
{\small permeability in layers~1--30:} \newline%
\includegraphics[width=6.4cm]{mixed_RT0_3D_layer1-30_XY.pdf}
\includegraphics[width=6.4cm]{mixed_RT0_3D_layer1-30_Z.pdf}
\newline{\small permeability in layers~56--85:} \newline%
\includegraphics[width=6.4cm]{mixed_RT0_3D_layer56-85_XY.pdf}
\includegraphics[width=6.4cm]{mixed_RT0_3D_layer56-85_Z.pdf}
\end{center}
\caption{Base $10$ logarithm of the permeability~$k$ in $x$ and $y$ directions
(left), and in $z$ direction (right) in two cutouts of the SPE~10 problem
consisting of $30\times30\times30$ elements.}%
\label{fig:mixed_RT0_3D}%
\end{figure}
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=6.4cm]{3D-metis_view1.pdf}
\includegraphics[width=6.4cm]{3D-metis_view2.pdf}
\end{center}
\caption{Irregular partitioning of the domain from
Figure~\ref{fig:mixed_RT0_3D} into $32$ subdomains.}%
\label{fig:3D-metis}%
\end{figure}
\begin{table}[ptbh]
\caption{Convergence of the non-adaptive method for the homogeneous case
($k=1$) and the two 3D cutouts of the SPE~10 problem from
Figure~\ref{fig:mixed_RT0_3D}. The headings are same as in
Table~\ref{tab:layers}.}%
\label{tab:3D}
\begin{center}%
\begin{tabular}
[c]{|c|r|r|r|r|}\hline
\multirow{2}{*}{layer} & \multicolumn{2}{|c|}{$H/h=10$} &
\multicolumn{2}{|c|}{irregular part.}\\\cline{2-5}
& $it$ & $\kappa$ & $it$ & $\kappa$\\\hline
$(k=1)$ & 25 & 17.099 & 35 & 22.029\\
1--30 & 779 & 49,075.600 & 1968 & $1.096\times10^{6}$\\
56--85 & 3762 & $2.576\times10^{6}$ & 5277 & $3.676\times10^{6}$\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptbh]
\caption{Convergence of the adaptive method for layers~1--30 of the SPE~10
with irregular partitioning. }%
\label{tab:spe10-3D}
\begin{center}%
\begin{tabular}
[c]{|r|r|r|r|r|r|r|}\hline
$\tau$ & $\epsilon_{0} \,\, [\%]$ & $\epsilon^{*} \,\, [\%]$ &
$\widetilde{\omega}$ & $n_{c}$ & $it$ & $\kappa$\\\hline
$\infty$ & 98.41 & 86.05 & $1.191\times10^{6}$ & 335 & 1968 & $1.096\times
10^{6}$\\\hline
(ms) & 98.29 & 86.05 & -na-$\quad\,$ & 571 & 1943 & $1.079\times10^{6}%
$\\\hline
100,000 & 98.39 & 85.95 & 94,328.862 & 349 & 1280 & 92,307.000\\
10,000 & 98.14 & 84.27 & 9862.559 & 514 & 514 & 10,512.200\\
1000 & 97.41 & 82.73 & 995.230 & 989 & 175 & 1014.150\\
100 & 92.93 & 72.63 & 97.989 & 1331 & 60 & 108.673\\
10 & 87.47 & 66.66 & 9.993 & 1617 & 18 & 11.711\\
5 & 85.82 & 65.46 & 4.985 & 1898 & 13 & 6.069\\
3 & 82.90 & 63.22 & 2.997 & 2331 & 9 & 3.007\\
2 & 81.62 & 62.81 & 2.000 & 2997 & 6 & 1.930\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[ptbh]
\caption{Convergence of the adaptive method for layers~56--85 of the SPE~10
with the regular partitioning. }%
\label{tab:spe10-3D-top}
\begin{center}%
\begin{tabular}
[c]{|r|r|r|r|r|r|r|}\hline
$\tau$ & $\epsilon_{0} \,\, [\%]$ & $\epsilon^{*} \,\, [\%]$ &
$\widetilde{\omega}$ & $n_{c}$ & $it$ & $\kappa$\\\hline
$\infty$ & 99.31 & 66.24 & $3.435\times10^{6}$ & 81 & 3762 & $2.576\times
10^{6}$\\\hline
(ms) & 99.30 & 66.24 & -na-$\quad\,$ & 99 & 3735 & $2.566\times10^{6}$\\\hline
100,000 & 99.33 & 65.78 & 95,129.959 & 122 & 1267 & 93,040.500\\
10,000 & 98.63 & 62.19 & 9834.429 & 188 & 498 & 9487.840\\
1000 & 98.25 & 63.81 & 990.920 & 373 & 183 & 1200.070\\
100 & 97.26 & 64.08 & 99.793 & 766 & 59 & 124.896\\
10 & 91.79 & 49.39 & 9.965 & 1154 & 17 & 9.506\\
5 & 88.88 & 43.77 & 4.991 & 1342 & 13 & 5.545\\
3 & 86.52 & 41.75 & 2.997 & 1610 & 9 & 3.205\\
2 & 85.18 & 41.05 & 2.000 & 1960 & 6 & 1.918\\\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:conclusion}We studied a method for the solution of single-phase flow
in heterogeneous porous media. We have, in particular, shown that the idea of
adaptive BDDC, previously used for elliptic problems, can be also applied in
the context of the BDDC method for mixed finite element discretizations using
the lowest-order Raviart-Thomas finite elements, and that the adaptive method
works well with the usual types of scaling used in substructuring. We
illustrated that the resulting algorithm can be successfully applied for
adaptive selection of the coarse flux degrees of freedom using several
examples corresponding to the SPE~10 benchmark model. The effect of the
adaptive construction of the flux coarse basis functions is twofold. First,
the first two steps of the BDDC method provide some approximation properties
with respect to the exact solution of the full problem, in particular in 2D.
Second, the coarse problem provides a better preconditioner for conjugate
gradients used in the third step. We also compared the adaptive constraints
with constraints inspired by multiscale mixed finite element method, and we
found that the adaptive constraints outperform the multiscale constraints.
Next, we experimented with different partitionings of the domains into
substructures. While the adaptive method is able to overcome the difficulties
caused by an unfavorable partitioning in many cases, it is evident that a
suitable partitioning makes the adaptive method more efficient. We note that
the development of optimal partitioning strategies is an open problem, cf.,
e.g.,~\cite{Aarnes-2008-MMM,Vecharynski-2014-GPU}. However, our experiments
indicate that if it is not possible to find a suitable partitioning, the best
strategy is to simply minimize the size of interfaces, which may be achieved
by a simple geometric partitioning, see also~\cite{Hanek-2017-EII}.
\newpage
\bibliographystyle{siam}
\section{Introduction}
Automatic speech recognition (ASR) is used in a wide variety of applications, ranging from techniques for text input to operating devices such as smart speakers, mobile devices, or car navigation systems. Voice input requires no specialized input device other than a microphone and thus can be operated hands-free. Furthermore, as text input based on speech can be much faster than typing, voice input can also be used to note ideas and quickly enter manuscript drafts.
However, automatic speech recognition is often inaccurate, with recognition errors that need to be corrected using traditional input methods, such as a keyboard and mouse, which eliminates the advantage of hands-free operation. In addition, misrecognition errors can arise when inputting commands or special characters other than regular text. For example, if you want to enter ``?'', you may say ``question mark,'' but it might be entered as the text itself rather than the symbol. Similarly, if you want to start a new paragraph and say ``new paragraph,'' it may be recognized literally and input as the text ``new paragraph''. Similar problems occur when commands such as ``delete word'' and ``new paragraph,'' which are frequently used in text editing, need to be invoked.
Deciding in advance that saying ``new line'' always means the new-line command and not text input is a possible solution; however, when the string ``new line,'' needs to be entered, errors may occur. Moreover, users may find remembering all the commands and text input phrases difficult. The shift key on a regular keyboard may also be used to switch between modalities, but the hands-free advantage of voice input would be lost.
To address these issues, this study proposes a speech interaction method called DualVoice, which uses ``whispering'' for commands and a normal voice for text inputs (Figure~\ref{fig:teaser}).
Whispering is a mode of speech that many users can implement without special training. By switching between whispering and a normal voice, users can give multiple meanings to the same utterance. For example, saying ``new line'' in a normal voice can mean to input that string, whereas whispering ``new line'' can invoke a new-line command. Similarly, to correct a recognition error, users can whisper ``candidates,'' which would result in alternative recognition candidates being displayed; the user can then select one of them by whispering the number (``one,'' ``two,'' etc.) corresponding to that candidate.
The relationship between a normal voice and whisper can be compared to the relationship between text input and command input, where the conventional voice recognition method can be regarded as a keyboard that does not include command or symbol keys.
However, a keyboard used for computer interactions has function keys, command keys, keys for symbols, and other means of entering commands, as well as letter input. Thus, users can invoke those functions without explicitly changing the interaction mode or having to take their hands off the keyboard. Similarly, with the proposed DualVoice method, text and command input can co-exist without special mode conversion operations.
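As a minimal illustration of this separation (a hypothetical sketch, not the system's implementation), the same recognized string can be routed either to text entry or to a command table depending on the detected voice mode:

```python
# Hypothetical sketch: route an utterance to text entry or a command
# depending on whether it was whispered.
COMMANDS = {
    "new line": lambda buf: buf + "\n",
    "delete word": lambda buf: buf.rsplit(" ", 1)[0] if " " in buf else "",
}

def dispatch(buffer, utterance, whispered):
    if whispered:
        action = COMMANDS.get(utterance)
        return action(buffer) if action else buffer  # unknown command: ignore
    return buffer + (" " if buffer else "") + utterance

buf = dispatch("", "new line", whispered=False)  # normal voice: text input
buf = dispatch(buf, "new line", whispered=True)  # whisper: new-line command
print(repr(buf))  # -> 'new line\n'
```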
\add{
Interactions using whispering also have possibilities beyond commands during text entry. Voice interaction is difficult to use in public environments, where preservation of privacy and social acceptability become issues; whispered voice input has the potential to solve these issues. If one can discriminate between a normal voice (for ordinary conversation) and a whispered voice for computer input, effective interaction for wearable, mobile, and virtual-environment computing may be possible.
}
Figure~\ref{fig:whisper} shows spectrograms of utterances with normal and whispered voices. To realize DualVoice interaction, two neural networks were developed for this study: one to distinguish between whispered and normal voices, and the other to recognize whispered utterances.
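The discriminator itself is a neural network; as a rough intuition for why the two modes are separable at all, whispered speech lacks voicing, so even a crude non-learned baseline can threshold the normalized autocorrelation peak in the typical $F_0$ range. The toy sketch below (our illustration, not the system's classifier; the 75--400~Hz range and the 0.3 threshold are assumptions) does exactly that on a synthetic frame:

```python
import math

# Toy baseline (not the paper's neural classifier): whispered speech is
# unvoiced, so the normalized autocorrelation in a typical F0 range
# (75--400 Hz here) peaks much lower than for normally voiced speech.
def autocorr_peak(frame, sr, f0_min=75.0, f0_max=400.0):
    lag_lo, lag_hi = int(sr / f0_max), int(sr / f0_min)
    energy = sum(x * x for x in frame) or 1.0
    peak = 0.0
    for lag in range(lag_lo, min(lag_hi, len(frame) - 1) + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        peak = max(peak, r / energy)
    return peak

def looks_whispered(frame, sr, threshold=0.3):
    return autocorr_peak(frame, sr) < threshold

sr = 8000
voiced = [math.sin(2 * math.pi * 150 * n / sr) for n in range(400)]
print(looks_whispered(voiced, sr))  # -> False (strong voicing peak)
```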
\add{
The contributions are summarized as follows:
\begin{itemize}
\item Speech interfaces that discriminate between whisper and normal voices were constructed,
\item A whisper--speech recognizer based on self-supervised learning was built, and
\item A highly accurate neural network for whisper--normal speech discrimination was constructed.
\end{itemize}
}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/wn_spectro.pdf}
\caption{Spectrograms of normal and whispered voices saying ``A quick brown fox jumps over the lazy black dog.'' }
\label{fig:whisper}
\Description[Spectrograms of normal and whisper voices]{Spectrograms of normal and whispered voices saying ``A quick brown fox jumps over the lazy black dog.''}
\end{figure}
\section{Related Work}
\begin{figure}
\centering
\includegraphics[width=0.43\textwidth]{figs/screen.pdf}
\caption{DualVoice text input examples.}
\label{fig:textinput}
\Description[DualVoice text input]{DualVoice text input examples}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{figs/interactions.pdf}
\caption{Examples of DualVoice interactions: (a) Correction of recognition errors: whispering ``\whisp{MENU}'' invokes a menu for possible recognition candidates. A user may select the required candidate by whispering the corresponding number. (b) Entering symbols by whispering ``\whisp{COMMA}'' and ``\whisp{DOUBLE}\ \whisp{QUOTE}.'' (c) Combining the spelled input by whispering ``\whisp{SPELL}.'' (d) Inputting Emoji by saying ``smile'' in a normal voice, followed by whispering ``\whisp{EMOTION}.''}
\label{fig:sample}
\Description[DualVoice interactions]{Examples of DualVoice interactions}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figs/arch2.pdf}
\caption{Whisper voice recognition and whisper / normal classifier.}
\label{fig:wav2vec2asr}
\Description[Whisper/Normal recognition and classification]{Whisper voice recognition and whisper / normal classifier.}
\end{figure*}
Goto et al. proposed ``speech-shift,'' which specifies the mode of speech input by intentionally controlling the pitch of speech~\cite{Goto2003-lu}; i.e., if the fundamental frequency ($F_0$) of an utterance exceeds a specified threshold, it is judged as a different mode. This method introduced two modes of speech input; however, the user is required to speak at an unnaturally high pitch for stable recognition. In contrast, switching between whispered and normal speech is more natural and can be done more clearly, without requiring a threshold that is unknown to the user.
Goto et al. also developed a method that automatically detects filled (vocalized) pauses, which are hesitation phenomena of utterances in speech commands, and proposes possible candidates that can fill the command~\cite{Goto1999}. For instance, if the user says ``play, Beeee...'' and stops, the system detects ``eeee'' as the filled vocalized pauses of hesitation and proposes filling candidates such as ``Beatles'' or ``Beach Boys.'' This method demonstrates the possibility of indicating non-verbal intentions in speech; however, it can only utilize hesitation and not arbitrary commands as can be utilized by the proposed approach. Moreover, making vocalized pauses is only possible after vowels in speech, not after consonants.
PrivateTalk~\cite{privatetalk} activates a speech command when a hand partially covers the mouth from one side. Although the main objective of that research was to preserve privacy, the technique might also be used to distinguish normal speech (without the hand covering) from commands (with the hand covering). However, explicit hand gestures are required, and thus the method can no longer be considered ``hands-free'' interaction. Furthermore, whereas PrivateTalk needs two microphones (attached to the left and right earphones) to recognize the effect of hand covering, the proposed approach requires only a single standard microphone. With regard to preserving privacy, we also consider it more natural and effective to whisper than to cover the mouth.
DualBreath~\cite{Onishi2021-tj} is a system that uses breathing as a command, distinguishing a command from normal exhalation by discriminating between inhaling and exhaling air through the nose and mouth simultaneously. This system can express triggers equivalent to a simple button press; however, it cannot express commands as rich as that expressed by whispers.
ProxiMic is a sensing technology that detects a user's utterance via a microphone device located close to the mouth~\cite{10.1145/3411764.3445687}. It is intended for use as a wake-up-free speech for smart speakers and similar voice-controlled devices. However, it requires physical motion (moving a microphone close to the mouth), and thus mixing normal and close-to-the-mouth utterances is not easy.
Fukumoto proposed SilentVoice, where a user can input speech using {\it ingressive speech}, an utterance made while inhaling (breathing-in)~\cite{10.1145/3242587.3242603}. Although the method is designed mainly for silent speech, it can also distinguish ingressive speech from normal speech. However, it requires a special microphone placed very close to the mouth, and training is needed for users to speak correctly in the ingressive mode. Moreover, frequently changing between normal and ingressive utterances is difficult.
\add{
There is a body of research on silent speech, in which a user's silent utterances or silent commands are recognized with various sensing configurations, including lip reading, blowing, EMG, and ultrasound~\cite{10.1145/3172944.3172977,10.1145/3290605.3300376,10.1145/3242587.3242599,7310970,10.1145/2971763.2971765,10.1145/2634317.2634322}. Whispered speech has characteristics similar to silent speech, such as preserving social acceptability in public spaces; however, it has the potential to be more widely used given that it can be recognized by ordinary microphones.
}
With some speech input systems, such as Google Cloud Speech-to-Text~\cite{googlespeech}, one can input a period (``.'') by saying ``period.'' However, to distinguish this utterance from the word ``period'' in a sentence, it is necessary to leave a certain amount of pause before and after it, which reduces the overall input speed. \add{It is conceivable to discriminate between text input and text normalization commands with a higher level of contextual understanding, such as using n-grams, but we consider that our proposed method provides a more straightforward and more reliable means of discrimination.}
Research on whisper voice recognition~\cite{Denby:2010:SSI:1746726.1746804,Freitas:2016:ISS:3001610,10.1109/TASLP.2017.2738559,Chang2020-ks,Ghaffarzadegan2016-jf} has previously been conducted; however, to the best of the authors' knowledge, mixing normal and whispered voices has not been considered.
The Alexa smart speaker supports the {\it whisper mode}~\cite{ai-alexa}. When this mode is set, if you talk to Alexa in a whisper, it will respond in a whisper~\cite{Cotescu2019-fr}.
\section{Interacting with DualVoice}
Figure~\ref{fig:sample} shows examples of voice input process using DualVoice.
In regular text creation by speech input, the voice input is converted into text by speech recognition as one speaks. However, when a misrecognition occurs, it must be corrected using a keyboard or other manual means. In addition, symbols such as periods, commas, and quotation marks need to be entered manually. For example, saying ``period'' could mean that you want to input the corresponding symbol, but it could also mean that you want to input the actual word ``period'' in letters. It is also difficult to use voice input for operations that require command key input in normal text editing, such as line breaks and paragraph changes.
In DualVoice, normal voice input is converted into text, as in conventional voice-input text creation, whereas ``\whisp{COMMA},'' ``\whisp{PERIOD},'' and ``\whisp{QUOTE}'' are input as symbols when spoken in a whisper. Whispering ``\whisp{NEW}\ \whisp{LINES}'' is likewise treated as a command and inserts a new line. If a misrecognition occurs, it is possible to delete the last word or change the word of interest by whispering ``\whisp{BACK}'' or ``\whisp{DELETE} \whisp{SENTENCE}.'' Alternative recognition candidates can also be presented by whispering ``\whisp{MENU}.'' Candidates on the menu are labeled 1, 2, 3 and can be selected by whispering ``\whisp{ONE},'' ``\whisp{TWO},'' ``\whisp{THREE},'' etc. (Figure~\ref{fig:sample} (a, b)).
Combining normal voice input with whispered commands is also feasible. For example, saying ``w a v 2 v e c'' and whispering ``\whisp{SPELL}'' yields the word ``wav2vec,'' which is difficult to input with normal ASR (Figure~\ref{fig:sample} (c)). Similarly, {\it Emoji} input is possible by saying ``smile'' followed by whispering ``\whisp{EMOTION}'' (Figure~\ref{fig:sample} (d)).
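The mapping from recognized whisper phrases to editing actions can be viewed as a simple dispatch table. The following is a hypothetical Python sketch (the phrase names follow the examples above; the actual DualVoice command set and behavior are not specified at this level of detail):

```python
# Hypothetical sketch of dispatching recognized whisper phrases as editing
# actions rather than literal text. Phrase names follow the examples in the
# text; the actual DualVoice command set and behavior may differ.
def apply_command(text: str, phrase: str) -> str:
    symbols = {"COMMA": ",", "PERIOD": ".", "QUOTE": "'", "NEW LINE": "\n"}
    if phrase in symbols:                 # whispered symbol input
        return text + symbols[phrase]
    if phrase == "BACK":                  # delete the last word
        return text.rsplit(" ", 1)[0] if " " in text else ""
    return text                           # unknown phrase: ignore

t = apply_command("hello world", "COMMA")
print(t)                         # hello world,
print(apply_command(t, "BACK"))  # hello
```

In this view, normal-voice recognition results always append text, while whispered phrases are looked up in the table first, which is what allows the same vocabulary to serve as both text and commands.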
\add{
The aforementioned commands show one possibility of DualVoice; however, just as function and control keys can be combined to represent many interactions, a wide range of command configurations that exploit whisper discrimination remains to be explored.
}
\section{Recognition Architecture}
This section describes the system architecture and neural network configurations that enable whisper voice recognition and classification. The overall architecture is shown in Figure~\ref{fig:wav2vec2asr}.
\subsection{Whisper Voice Recognition}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figs/wn_separation.pdf}
\caption{Whisper classification examples:
(a) the classification result, (b, c) the filtering results to be sent to the corresponding whisper/normal speech recognizers.}
\label{fig:whisperClassify}
\Description[Whisper classification examples]{Whisper classification examples}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figs/tsne3.pdf}
\caption{t-SNE 2D visualizations of feature vectors: (a) output of the feature extractor, (b) vectors to be classified at the last FC (fully-connected) layer of the whisper classifier. It can be seen that whisper and normal voices are well separated. }
\Description[visualization of feature vectors]{t-SNE 2D visualizations of feature vectors}
\label{fig:tsne}
\end{figure}
For whispered voice recognition, this study utilized wav2vec 2.0~\cite{wav2vec2,Yi2020-re} and HuBERT (Hidden-Unit BERT)~\cite{10.1109/TASLP.2021.3122291}; both are self-supervised neural networks designed for speech processing systems.
Both networks assume a combination of pre-training with self-supervised representation learning on unlabeled speech data and fine-tuning on labeled speech data. These systems are primarily intended for speech recognition applications but have also been applied to speaker, language, and emotion recognition~\cite{Pepino2021-hv,Yi2020-re}.
The overall structure is shown in Figure~\ref{fig:wav2vec2asr}. It can be divided into the feature extractor and transformer layers~\cite{Vaswani2017-bm}.
The pre-training method is similar to that of the masked language model of BERT (Bidirectional Encoder Representations from Transformers)~\cite{Devlin2018-cg} in natural language processing. It masks part of the input and estimates the representation features corresponding to it from the rest of the input. Through this pre-training, the model learns the acoustic properties of the input data and the characteristics of the speech.
In contrast to pre-training, fine-tuning requires text labels for the audio data, but only a small amount of labeled data is needed. As shown in Figure~\ref{fig:wav2vec2asr}, a projection layer and a connectionist temporal classification (CTC) layer~\cite{10.1145/1143844.1143891} were added to the output of wav2vec~2.0 or HuBERT so that text transcriptions can be generated from audio waveforms.
As reported in~\cite{wav2vec2,Yi2020-re}, wav2vec~2.0 can achieve speech recognition accuracy comparable to conventional state-of-the-art ASR systems after fine-tuning on only a small amount of labeled speech. This architecture is therefore expected to be suitable for recognizing whispered voices given the limited whisper speech corpora available.
There are many corpora of normal speech; however, corpora of whispered speech, such as wTIMIT~\cite{wTIMIT}, are limited. Therefore, this study adopted a policy of fine-tuning neural networks pre-trained on normal speech corpora with whispered speech.
The feature extractor converts raw waveforms into latent feature vectors. As in the original wav2vec~2.0 and HuBERT, it consists of seven blocks of temporal convolutions. Each block has 512 channels, with strides (5,2,2,2,2,2,2) and kernel widths (10,3,3,3,3,2,2) (Figure~\ref{fig:wav2vec2asr} (left)).
It is designed to output 512-dimensional latent vectors every 20 ms.
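These hyperparameters fix the frame rate of the extractor: the cumulative stride of the seven blocks is $5 \times 2^6 = 320$ samples, i.e., exactly 20 ms at 16 kHz. A quick sanity check (illustrative code, not part of the system):

```python
# Sanity check (illustrative, not the authors' code) of the frame rate and
# context implied by the extractor hyperparameters quoted above.
strides = [5, 2, 2, 2, 2, 2, 2]   # per-block temporal strides
kernels = [10, 3, 3, 3, 3, 2, 2]  # per-block kernel widths

hop = 1
for s in strides:
    hop *= s                      # cumulative stride in raw samples
print(hop, 1000 * hop / 16_000)   # 320 samples -> 20.0 ms per frame

rf, jump = 1, 1                   # receptive field of the whole stack
for k, s in zip(kernels, strides):
    rf += (k - 1) * jump
    jump *= s
print(rf, 1000 * rf / 16_000)     # 400 samples -> 25.0 ms of context
```

Each 512-dimensional output frame thus summarizes roughly 25 ms of audio and advances by 20 ms, which is what the classifier below relies on.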
\subsection{Whisper Voice Classification}
The whisper voice classification part distinguishes whispers from normal voice input using a fixed-length (e.g., 100 ms) audio signal. The convolutional feature extractor obtained from wav2vec~2.0 converts the acoustic signal into 512-dimensional features every 20 ms (Figure~\ref{fig:wav2vec2asr} (right)). Because the whisper classification part partially shares the neural network with the whisper recognition part, the overall network size is reduced.
These features are then passed through layer normalization and average pooling, followed by two FC (fully connected) layers, to classify whispered versus normal voice.
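The head can be summarized shape-wise as follows. This is a minimal pure-Python sketch with random weights; the hidden width is an assumption, and only the tensor shapes reflect the description above:

```python
import math
import random

# Shape-level sketch of the classifier head described in the text:
# layer normalization -> average pooling over the five 20 ms frames in a
# 100 ms window -> two fully connected (FC) layers. Weights are random and
# the hidden width is an assumption; only the shapes follow the paper.
D = 512        # feature dimension from the shared extractor
FRAMES = 5     # 100 ms window = 5 frames at 20 ms
HIDDEN = 64    # hypothetical width of the first FC layer
CLASSES = 2    # whisper vs. normal

def layer_norm(v, eps=1e-5):
    m = sum(v) / len(v)
    var = sum((x - m) ** 2 for x in v) / len(v)
    return [(x - m) / math.sqrt(var + eps) for x in v]

def linear(v, w, b):
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

random.seed(0)
frames = [[random.gauss(0, 1) for _ in range(D)] for _ in range(FRAMES)]
W1 = [[random.gauss(0, 0.01) for _ in range(D)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 0.01) for _ in range(HIDDEN)] for _ in range(CLASSES)]

normed = [layer_norm(f) for f in frames]
pooled = [sum(f[i] for f in normed) / FRAMES for i in range(D)]     # avg pool
hidden = [max(0.0, h) for h in linear(pooled, W1, [0.0] * HIDDEN)]  # FC+ReLU
logits = linear(hidden, W2, [0.0] * CLASSES)                        # FC
print(len(logits))  # 2
```

Average pooling collapses the five frames of a 100 ms window into a single vector, so the added head stays small regardless of how the shared extractor is configured.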
Sample classification results are shown in Figure~\ref{fig:whisperClassify}. From this information, an audio stream with the whispered segments removed and an audio stream with the normal-voice segments removed are generated. Each stream is then sent to its respective speech recognizer.
Figure~\ref{fig:tsne} shows the 2D plotting based on t-SNE~\cite{JMLR:v9:vandermaaten08a} of feature vectors to be classified as the input to the last FC (fully-connected) layer; the figure indicates that normal speech and whisper voice are well discriminated.
\add{Other whisper voice classification techniques based on acoustic analysis, such as LPC residuals, can also be used~\cite{ITO2005139,zhang07d_interspeech,10.1145/3271553.3271611}; one advantage of our approach is that the feature extractor part (the CNN layers) can be shared with whisper voice recognition (Figure~\ref{fig:wav2vec2asr}). Only a relatively small network layer (with 260K parameters) needs to be added for whispered speech classification, which contributes to the simplicity of the system.}
\subsection{Training Dataset}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figs/process.pdf}
\caption{Whisper speech recognition training process.}
\label{fig:wav2vec2_train}
\Description[Whisper speech recognition training]{Whisper speech recognition training process.}
\end{figure*}
\begin{table}
\centering
{\it
\begin{tabular}{ll}\toprule
one, two, three, four, five & six, seven, eight, nine, zero\\
A, B, C, D, E, F, G & H, I, J, K, L, M, N\\
O, P, Q, R, S, T, U &V, W, X, Y, Z\\
newline, back, delete &delete sentence\\
space, equal, at, number & dollar, ampersand, asterisk\\
left parenthesis, right parenthesis & left bracket, right bracket\\
underline, hyphen, plus, minus & percent, atmark, sharp\\
spell, paragraph & period, comma, dot\\
menu, open, close, yes, no & line, new, repeat, candidates\\
next, page, word & delete line, delete word\\
question mark, exclamation mark & quote, double quote\\\bottomrule
\end{tabular}
}
\caption{Whisper phrases used for per-user training.}
\label{tab:peruser}
\Description[Whisper phrases]{Whisper phrases used for per-user training}
\end{table}
\begin{table*}
\begin{tabular}{l|ll|rr}\toprule
model & train \add{(fine-tuning)} & test & WER (\%) & CER (\%) \\\midrule
\add{Google Cloud Speech-to-Text~\cite{googlespeech}} & & \add{wTIMIT(N)} & \add{11.55} & \add{4.66} \\
& & \add{wTIMIT(W)} & \add{44.70} & \add{28.38} \\\midrule
wav2vec 2.0 large~\cite{wav2vec2} & LS-960 & wTIMIT(N) & 15.85 & 4.42 \\
& LS-960 & wTIMIT(W) & 39.00 & 16.46 \\
& LS-960 + wTIMIT(W) & wTIMIT(W) & 0.84 & 0.45 \\
& LS-960 & per-user & 83.95 & 38.42 \\
& LS-960 + wTIMIT(W) & per-user & 53.33 & 34.64 \\
& LS-960 + per-user & per-user & {\bf 2.45} & {\bf 0.56}\\
& LS-960 + wTIMIT(W) + per-user & per-user & {\bf 0.38} & {\bf 0.18} \\\midrule
HuBERT large~\cite{10.1109/TASLP.2021.3122291} & LS-960 & wTIMIT(N) & 11.03 & 2.76 \\
& LS-960 & wTIMIT(W) & 17.12 & 5.74 \\
& LS-960 + wTIMIT(W) & wTIMIT(W) & 0.78 & 0.43 \\
& LS-960 & per-user & 61.24 & 25.34 \\
& LS-960 + wTIMIT(W) & per-user & 83.25 & 37.32 \\
& LS-960 + per-user & per-user & {\bf 0.34} & {\bf 0.08} \\
& LS-960 + wTIMIT(W) + per-user & per-user & {\bf 0.32} & {\bf 0.06} \\\bottomrule
\end{tabular}
\caption{Whisper voice recognition accuracy for \add{Google Cloud Speech-to-Text}, wav2vec~2.0 large, and HuBERT large: The latter two models are pre-trained on 960 hours of unlabeled LibriSpeech audio (LS-960) and fine-tuned on the same LS-960 with text labels. The ``train'' column indicates the dataset used for fine-tuning (LS-960: LibriSpeech 960 hours, wTIMIT(N): wTIMIT normal voice, wTIMIT(W): wTIMIT whisper voice, and per-user: per-user whisper voice). For the wav2vec~2.0 and HuBERT models, scores fine-tuned with wTIMIT(W) and per-user data were the best; however, models fine-tuned with only per-user data (and not wTIMIT(W)) also showed comparable results.}
\label{tab:wer}
\Description[Whisper voice recognition accuracy]{Whisper voice recognition accuracy}
\end{table*}
The neural networks were pre-trained and fine-tuned with (normal) voice data (960 hours from the LibriSpeech dataset~\cite{librispeech}). Notably, the pre-training did not require the corresponding text, only the voice data (Figure~\ref{fig:wav2vec2_train}). The following two whispered voice datasets were then used for further fine-tuning:
\subsubsection*{wTIMIT}
The wTIMIT ({\it whispered TIMIT}) corpus~\cite{wTIMIT}. Each of its 29 speakers, gender-balanced to some extent, both speaks and whispers 450 phonetically balanced sentences according to the TIMIT prompts. The total number of utterances used is 11,324 (1,011 minutes); the corpus also includes normal-voice recordings, which were used for whisper/normal classification training.
The data is provided in two parts, train and test, with the division kept as is.
\subsubsection*{Per-user dataset}
Whispered voice data recorded by each user according to prompts for the selected phrases (Table~\ref{tab:peruser}). These phrases are mainly the commands assumed to be used during text input. Each user repeats each phrase five times, for a total of 110 phrases \add{(average 2.7 seconds); the total data acquisition time for each user is approximately 15 minutes. } These phrases are then randomly concatenated to form a dataset. After concatenation, the total number of utterances is 936 (approximately 82 minutes).
Fine-tuning is applied in two stages: in the first stage, the networks are trained on wTIMIT (whisper voice), and in the second stage, on the whispered voice command set recorded by the users (per-user dataset).
The wTIMIT normal and whispered voice data are also used to train the whisper/normal classifier. The length of the audio supplied to the classifier is set to 1,600 samples (100 ms at 16 kHz sampling), which is consistent with the length of the speech chunks used by the cloud speech recognition service in the later stage.
\section{Results}
\subsection{Whisper Voice Recognition}
\begin{table}
\begin{tabular}{l|ll|r}\toprule
model & train & test & acc. \\\midrule
feature extractor (Fig.~\ref{fig:wav2vec2asr}) + & wTIMIT & wTIMIT & {\bf 0.967} \\
\,LayerNorm + Pooling + FC + FC & wTIMIT & per-user & 0.905 \\
\midrule
MFCC + & wTIMIT & wTIMIT & 0.949 \\
\,LayerNorm + Pooling + FC + FC & wTIMIT & per-user & 0.857 \\
\bottomrule
\end{tabular}
\caption{Whisper--normal classification accuracy results.}
\label{tab:whisperclass}
\Description[Whisper-Normal classification]{Whisper-Normal classification accuracy results.}
\end{table}
The results of the training are shown in Table~\ref{tab:wer}. The recognition scores are shown as word error rate (WER) and character error rate (CER). The training times (using a single NVIDIA A6000 GPU) for fine-tuning were 4.5 hours for wTIMIT(W) and 20 minutes for the per-user dataset.
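For reference, WER is conventionally computed as the word-level Levenshtein edit distance between the reference and the hypothesis, normalized by the reference length (CER is analogous at the character level). The following is a generic sketch, not the scoring script used here:

```python
# Generic sketch of word error rate (WER): the word-level Levenshtein edit
# distance between reference and hypothesis, divided by the reference
# length. CER is analogous at the character level. Illustrative only.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

print(wer("delete the last word", "delete last word"))  # 0.25
```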
Recognition accuracy was poor when the network was trained (fine-tuned) with LS-960 and wTIMIT(W) and tested on the per-user dataset, indicating that this neural network is not sufficient for speaker-independent whisper recognition. In contrast, the system pre-trained on LS-960 and fine-tuned with LS-960, wTIMIT(W), and the per-user dataset showed good recognition accuracy (e.g., WER 0.38\%).
These results indicate that (1) a model pre-trained on normal speech can deliver good whisper recognition accuracy when fine-tuned with whispered voice, and (2) at this stage, only user-dependent whisper recognition is achievable.
\subsection{Whisper--Normal Classification}
For the whisper--normal classifier training, this study used the normal and whispered voices of wTIMIT. The 16-kHz sampled voice dataset was divided into audio segments of 1,600 samples (100 ms) each. Each segment was labeled either normal or whisper, and the system was trained to discriminate between the two.
Of the 100 ms speech segments, those with an average power of less than $-20$ dB were excluded, and training was performed on the rest. The results are shown in Table~\ref{tab:whisperclass}. As can be seen in the table, normal and whispered speech could be classified with 96.7\% accuracy.
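The segmentation and power gating described above can be sketched as follows (illustrative; the dB reference level, full scale $= 1.0$, is an assumption):

```python
import math

# Sketch of the preprocessing described in the text: split audio into
# 1,600-sample (100 ms at 16 kHz) segments and drop segments whose average
# power is below -20 dB. Illustrative only; the dB reference level
# (full scale = 1.0) is an assumption.
SEG = 1_600  # 100 ms at 16 kHz

def segments_above_threshold(samples, thresh_db=-20.0):
    kept = []
    for i in range(0, len(samples) - SEG + 1, SEG):
        seg = samples[i:i + SEG]
        power = sum(s * s for s in seg) / SEG      # mean power
        db = 10 * math.log10(power + 1e-12)        # relative to full scale
        if db >= thresh_db:
            kept.append(seg)
    return kept

loud = [0.5] * SEG     # about -6 dB: kept
quiet = [0.001] * SEG  # about -60 dB: dropped
print(len(segments_above_threshold(loud + quiet)))  # 1
```

Gating out near-silent segments means the classifier is trained, and later queried, only on frames that plausibly contain speech.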
Models using the feature extractor were also compared with models using MFCC (Mel-frequency cepstral coefficient) features. As shown in the table, the model using the feature extractor slightly outperformed the model using MFCC. In particular, the feature extractor based model performed better when tested on the voice of a user not included in wTIMIT.
\subsection{Normal Voice Recognition}
For normal voice recognition, although the previously described neural network could have been trained, this study opted for an existing cloud-based speech recognition system (Google Cloud Speech-to-Text~\cite{googlespeech}) because of its better recognition accuracy and compatibility with existing voice-based text input applications.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figs/filter.pdf}
\caption{Whisper classification and filtering: As a result of whisper--normal classification, the speech stream with the whisper part removed is sent to the normal speech recognizer, and the speech stream with the normal part removed is sent to the whisper speech recognizer.}
\label{fig:filter}
\Description[Whisper classification and filtering]{Whisper classification and filtering}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figs/zoom.pdf}
\caption{Giving commands to the computer in a whispered voice while using a normal voice for remote meeting conversation. While the user speaks in a whisper, lip images are substituted using Wav2Lip~\cite{wav2lip} so that the mouth appears closed during whispered speech. \add{(Note: this is a partially envisioned example; lip image substitution is realized offline, whereas whisper identification is done in real time.)}}
\label{fig:mouth}
\Description[Giving commands as whisper voice during a meeting]{Giving commands to the computer with whisper voice while using normal voice for remote meeting}
\end{figure*}
\section{Implementation Details}
The neural networks for whisper recognition and whisper discrimination are based on the Hugging Face~\cite{Wolf2019-vq} wav2vec~2.0 and HuBERT classes, with the other networks developed in PyTorch~\cite{paszke2017automatic}. The GUI system for the DualVoice text input example is built using the Tkinter Python GUI platform~\cite{lundh1999introduction}.
The GUI system manages a thread that receives the microphone input and splits it into packets containing 1,600 samples (100 ms) of sound. These packets are sent via TCP/IP to the whisper discriminator, which labels each packet as containing whisper, normal voice, or silence.
Depending on the result of this discrimination, each speech packet is then sent to either the whisper recognizer or the normal speech recognizer.
For the stream sent to the whisper recognizer, packets judged not to be whispered speech are replaced with no-speech packets; for the stream sent to the normal recognizer, packets classified as whispered speech are replaced with no-speech packets (Figure~\ref{fig:filter}).
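This routing step can be sketched as follows (illustrative; packet labels are assumed to be ``whisper,'' ``normal,'' or ``silence''):

```python
# Sketch of the routing described in the text (illustrative; packet labels
# are assumed to be "whisper", "normal", or "silence"). Packets not
# matching a recognizer's mode are replaced with no-speech packets, so
# each recognizer receives a continuous stream containing only its own
# kind of speech.
SEG = 1_600                 # 100 ms packets at 16 kHz
SILENCE = [0.0] * SEG       # no-speech packet

def split_streams(packets, labels):
    to_normal, to_whisper = [], []
    for pkt, lab in zip(packets, labels):
        to_normal.append(pkt if lab == "normal" else SILENCE)
        to_whisper.append(pkt if lab == "whisper" else SILENCE)
    return to_normal, to_whisper

pkts = [[0.1] * SEG, [0.2] * SEG, [0.0] * SEG]
labs = ["normal", "whisper", "silence"]
n_stream, w_stream = split_streams(pkts, labs)
print([p is SILENCE for p in n_stream])  # [False, True, True]
```

Substituting silence rather than dropping packets keeps both streams time-aligned with the microphone input, which matters for streaming recognizers that assume a continuous signal.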
\begin{figure}
\centering{\includegraphics[width=0.45\textwidth]{figs/avatars.pdf}}
\caption{Controlling multiple avatars with normal and whisper voice.}
\label{fig:avatar}
\Description[Controlling multiple avatars]{Controlling multiple avatars with normal and whisper voice}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/mask.pdf}
\caption{Masked system with whisper discrimination function: Normal voice is output externally via speaker, and whisper voice is treated as a command to the computer.}
\label{fig:mask}
\Description[Masked system]{Masked system with whisper discrimination function}
\end{figure}
\section{End-User Experience}
Five participants were asked to use the DualVoice system for speech input. The whisper voice recognition network was first fine-tuned for each user, and the participants were then asked to enter text sentences verbally.
The use of whisper speech as a command was understood and achievable by all users. \add{All participants were observed to be able to easily distinguish between normal and whisper utterances, although a very short momentary pause was usually required to switch between the normal and whispered voice. }
Participant feedback indicated that it was slightly inconvenient to learn which commands could be executed with a whisper voice and suggested that the design guidelines might vary depending on whether the goal is to operate the system entirely by voice alone or to use it in conjunction with a mouse or keyboard. There was also a suggestion that the system could be used as an input method for people with paralyzed fingers (without the need for keyboard or mouse manipulations).
\section{Discussions}
This section briefly describes the potential and current issues of this method.
\subsection{Other Potential Applications}
The whisper voice discrimination enables multiple modes of speech interaction, and the possibilities are not limited to text input. Some possible applications are as follows.
\subsection*{Using whispering commands at a teleconference}
DualVoice can also be used during teleconferences.
Suppose voice commands are used during a teleconference: the commands are given in a whisper so that they do not become part of the conference speech. The whisper discriminator separates whispered from normal voice, the whispered parts are removed from the participant's outgoing audio stream, and they are instead sent to the whisper recognizer.
It is unnatural if the mouth is seen moving while whispered speech is being uttered; hence, the lip region should be replaced so that the participant appears not to be speaking (Figure~\ref{fig:mouth}). For this purpose, Wav2Lip, which generates lip images from vocalizations using deep learning~\cite{wav2lip}, can be used. When the user speaks in a whisper, the face image is replaced by an image of the user not speaking (i.e., with a closed mouth).
Using this mechanism, both the face image and the voice stream can be adjusted so that the part where the command is uttered in a whisper voice would be invisible and inaudible to other participants.
Figure~\ref{fig:mouth} shows the original face image video and the face image when the part of whisper utterance is replaced with no utterance.
\subsection*{Control of multiple Avatars}
Switching the speech of multiple avatars is possible in virtual space (Figure~\ref{fig:avatar}). For example, the first avatar can be made to speak in a normal voice, and the second avatar could be made to talk in a whisper voice. Whisper voice utterances can then be expected to be converted to other voices using voice conversion technology~\cite{arxiv.1808.10687,8835014,arxiv.2004.09347}.
\subsection*{Combination with Silent Speech}
The objectives of silent speech are to ensure that spoken commands do not become noise to the surroundings and to preserve privacy by not disclosing confidential information~\cite{Denby:2010:SSI:1746726.1746804,Freitas:2016:ISS:3001610}. The sound pressure level of a typical conversation is approximately 60 dB, whereas that of a whisper is in the range 30--40 dB. Thus, by using a whispered voice for speech commands, the objectives of silent speech can largely be achieved.
Incorporating whisper functionality into a mask-type wearable interface is also possible (Figure~\ref{fig:mask}). Philips and Dyson, for example, are developing masks with powered ventilation to protect against air pollution and infectious diseases~\cite{pmask,dysonzone}. A microphone can be placed inside such a mask to pick up whispered voice, making it possible to achieve an effect almost equivalent to silent speech. By introducing a whisper discrimination function into such always-worn masks, an always-available voice interface that does not interfere with normal conversation could be constructed.
Furthermore, there is potential to use three modalities: silent, normal, and whispered speech. For example, if lip reading and whisper speech can be recognized, three types of speech modality can be obtained together with normal speech.
\subsection{User-Independent Whisper Recognition}
In the implementation presented in this study, an existing speech recognition system (Google Cloud Speech-to-Text~\cite{googlespeech}) was used for normal voice recognition and our custom neural network for whisper voice recognition. The former can be trained on a large corpus, is speaker-independent, and can be used without special training. In contrast, the whisper speech recognition part can be trained on a smaller corpus. It does require a training phase in which individual users are asked to read out example sentences in a whisper for speaker adaptation.
As whisper-based interactions become more widespread, it is anticipated that obtaining a whisper-speech dataset from the usage history will become possible. Hence, one will be able to use whisper speech recognition in a speaker-independent manner.
\section{Conclusion}
This study proposed DualVoice, a speech input method in which text is entered in a normal voice and non-text commands are entered in a whispered voice. Normal voice input is used for text input, whereas various commands can be entered by whispering them. The proposed method requires no specialized hardware other than a regular microphone and can be used in a wide range of situations where speech recognition is already available.
Furthermore, this study designed two neural networks, one for distinguishing whispered speech from normal speech and the other for recognizing whispered speech, implemented a prototype speech-based text input system using them, and evaluated its usability.
\begin{acks}
The authors wish to thank the anonymous reviewers for their valuable comments on an earlier version of this paper.
This work was supported by JST Moonshot R\&D Grant Number JPMJMS2012, JST CREST Grant Number JPMJCR17A3, and The University of Tokyo Human Augmentation Research Initiative.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\newcommand{\sect}[1]{\section{#1}\setcounter{equation}{0}}
\def\theequation{\thesection.\arabic{equation}}
\newcommand{\OL}[1]{ \hspace{.5pt}\overline{\hspace{-.5pt}#1
\hspace{-.5pt}}\hspace{.5pt} }
\def\gsim{\, \rlap{$>$}{\lower 1.1ex\hbox{$\sim$}}\,}
\def\lsim{\, \rlap{$<$}{\lower 1.1ex\hbox{$\sim$}}\,}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\textwidth = 6.5 in
\textheight = 8.5 in
\oddsidemargin = 0.0 in
\evensidemargin = 0.0 in
\headheight = 0.0 in
\headsep = 0.0 in
\parskip = 0.03in
\arraycolsep 2pt
\renewcommand{\textfraction}{0.00}
\begin{document}
\begin{titlepage}
\bigskip
\bigskip\bigskip\bigskip
\centerline{\Large \bf Dualities of Fields and Strings}
\bigskip\bigskip\bigskip
\bigskip\bigskip\bigskip
\centerline{
{\bf Joseph Polchinski}}
\bigskip
\centerline{\em Department of Physics}
\centerline{\em University of California}
\centerline{\em Santa Barbara, CA 93106 USA}
\bigskip
\centerline{\em Kavli Institute for Theoretical Physics}
\centerline{\em University of California}
\centerline{\em Santa Barbara, CA 93106-4030 USA}
\bigskip
\centerline{\tt joep@kitp.ucsb.edu }
\bigskip\bigskip\bigskip
\begin{abstract}
Duality, the equivalence between seemingly distinct quantum systems, is a curious property that has been known for at least three quarters of a century. In the past two decades it has played a central role in mapping out the structure of theoretical physics. I discuss the unexpected connections that have been revealed among quantum field theories and string theories. Written for a special issue of Studies in History and Philosophy of Modern Physics.
\end{abstract}
\end{titlepage}
\baselineskip = 16pt
\tableofcontents
\baselineskip = 18pt
\setcounter{footnote}{0}
\sect{Introduction}
Perturbation theory is a central part of the education of a physicist. One first learns the basic solvable systems, most notably the simple harmonic oscillator. One then learns how to approach general problems in which the Hamiltonian is of the form
\begin{equation}
H = H_0 + g H_{1} \,, \label{ham}
\end{equation}
where $H_0$ is solvable and the parameter $g$ is small. These approximation schemes give physical quantities as a perturbation series,
\begin{equation}
{\cal A} = {\cal A} _0 + g {\cal A} _1 + g^2 {\cal A} _2 + \ldots\,. \label{perts}
\end{equation}
Here ${\cal A} $ might be an energy level, a scattering amplitude, or any other quantity of interest.\footnote{To avoid confusion, it should be noted that in some cases only even powers of $g$ appear, depending on the structure of the Hamiltonian. Also, the series for a given physical quantity may begin with a term of order $g^m$ with nonzero $m$, rather than $m=0$ as is written here for simplicity.} In particular, a major focus of the standard quantum field theory (QFT) course is the development of the series~(\ref{perts}) in terms of Feynman graphs.
The series~(\ref{perts}) is known not to converge in most systems of interest, in particular quantum field theories~\cite{Dyson:1952tj}. Nevertheless, it is valuable as an asymptotic series, meaning that for $g$ sufficiently small a few terms give accurate results. For example, in quantum electrodynamics, where the effective expansion parameter is $\alpha/2\pi \sim 10^{-3}$, this has allowed the magnetic moment of the electron to be calculated to one part in $10^{12}$.
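The trade-off can be made quantitative with the standard optimal-truncation estimate (added here for illustration; this estimate is not part of the original discussion). For a model series whose coefficients grow factorially, ${\cal A}_n \sim n!$, the terms $n!\,g^n$ shrink until $n \sim 1/g$ and then grow, so truncating at the smallest term leaves an irreducible error

```latex
% Illustrative optimal-truncation estimate for a factorially divergent
% model series; by Stirling's formula the first omitted term, at N ~ 1/g,
% is exponentially small in 1/g.
\begin{equation}
\left| {\cal A} - \sum_{n=0}^{N} g^n {\cal A}_n \right|
\ \sim\ (N+1)!\, g^{N+1} \Big|_{N \sim 1/g}\ \sim\ e^{-1/g} \,.
\end{equation}
```

This is why a few terms suffice at small $g$, as in the QED example above, while no truncation helps once $g$ is of order one.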
However, as $g$ increases, perturbation theory becomes increasingly inaccurate, and it can completely miss important qualitative effects. In the Standard Model, quark confinement is the most notable example of such a nonperturbative effect, but others include the spontaneous breaking of chiral symmetry by condensation of quarks, and the violation of baryon and lepton numbers by instantons and sphalerons in the weak interaction.
There are no general methods for studying QFT's at large $g$. In principle, the nonperturbative definition of QFT by means of the path integral plus the renormalization group, as given by Wilson~\cite{Wilson:1993dy}, implies that any physical quantity can be calculated on a large enough computer. In practice, the theories and observables for which this can be done are limited. Another tool in the study of QFT's is the limit of a large number of fields~\cite{Stanley:1968gx,'tHooft:1973jz}. Here the graphical expansion simplifies and in some cases can be summed, giving a description of physical phenomena that cannot be seen in the individual terms of the series. This is most successful for theories where the many fields organize into a vector $\phi_i$. For matrix fields $\phi_{ij}$, including the important case of Yang-Mills fields, the graphical expansion simplifies enough to allow interesting general conclusions, but usually there are still too many graphs to sum explicitly.
A new tool, which has risen to prominence in the last two decades, is weak/strong duality, also known as $S$-duality. In some cases it is possible to decompose the Hamiltonian in multiple ways,
\begin{eqnarray}
H &=& H_0 + g H_{1} \nonumber\\
&=& H'_0 + g' H'_1 \,. \label{dualh}
\end{eqnarray}
Now one has two perturbative expansions. The original $H_0$ will have a simple expression in terms of some fields $\phi$, while $H'_0$ will have a simple expression in terms of some new set of fields $\phi'$. The $\phi'$ are related to $\phi$ in a complicated and usually nonlocal way; we will see examples in \S2.
Typically the couplings $g$ and $g'$ have a relation of a form something like
\begin{equation}
g' = 1/g\,. \label{ggp}
\end{equation}
If this is so, then as $g$ becomes very large, $g'$ becomes very small, and the perturbation series in $g'$ becomes an accurate description of the system just where the series in $g$ becomes useless. Of course, if $g \approx 1 \approx g'$ then neither expansion gives a good {\it quantitative} description, but having an understanding of the two limits gives a powerful global picture of the physics. Phenomena that are complicated in one description are often simple in the other. In many interesting systems there are multiple coupling constants, and multiple dual representations~(\ref{dualh}). In \S2.5 we will give an example where there are two coupling constants and an {\it infinite} number of dual descriptions.
There is another, perhaps deeper, way to think about the duality~(\ref{dualh}). In quantum field theories, the expansion in $g$ is essentially the same as the expansion in $\hbar$. To see this, consider Yang-Mills theory, whose field strength is
\begin{equation}
\hat F_{\mu\nu} = \partial_\mu \hat A_\nu - \partial_\nu \hat A_\mu + g [ \hat A_\mu , \hat A_\nu] \,.
\end{equation}
The Yang-Mills connection $\hat A_\mu$ is written here as a matrix. It is useful to work with a rescaled field $ A_\mu = g \hat A_\mu$, so that
\begin{equation}
F_{\mu\nu} \equiv \left (\partial_\mu A_\nu - \partial_\nu A_\mu + [ A_\mu , A_\nu] \right) = g \hat F_{\mu\nu} \,. \label{gscale}
\end{equation}
The Yang-Mills action is then
\begin{eqnarray}
S_{\rm YM} &=& -\frac{1}{2} \int d^4x\, {\rm Tr}(\hat F_{\mu\nu} \hat F^{\mu\nu}) \nonumber\\
&=& -\frac{1}{2g^2} \int d^4x\, {\rm Tr}( F_{\mu\nu} F^{\mu\nu} )\,,
\end{eqnarray}
where the matrix trace makes the action gauge-invariant.
In the latter form the coupling $g$ appears only as an overall factor in the action. Quantum amplitudes are obtained from the path integral
\begin{equation}
\int {\cal D} A \, e^{iS_{\rm YM}/\hbar} = \int {\cal D} A \, e^{i \int d^4x\,{\rm Tr}( F_{\mu\nu} F^{\mu\nu})/2 g^2 \hbar} \,. \label{zpath}
\end{equation}
Note that the parameters $g$ and $\hbar$ appear only in the combination $g^2 \hbar$, so that the perturbation series for typical observables is
\begin{equation}
{\cal A} = {\cal A} _0 +( g^2 \hbar) {\cal A} _2 + (g^2 \hbar)^2 {\cal A} _4 + \ldots\,. \label{zpert}
\end{equation}
It follows that the small-$g$ and small-$\hbar$ limits are the same: weak coupling corresponds to the classical field limit. When $g^2 \hbar$ is small, the exponent in the path integral~(\ref{zpath}) is large and so the integral is highly peaked on configurations where $S_{\rm YM}$ is stationary; these are the solutions to the classical equations of motion. When $g^2 \hbar$ is large, the path integral is not very peaked and the quantum fluctuations are large. However, when the duality~(\ref{dualh}) holds, we can change to the primed description, and now the expansion parameter is $g'^2 \hbar = \hbar/g^2$, giving a highly peaked action. Essentially what is happening is that in the original description the fields $\phi$ have wild quantum fluctuations at large $g$, but we can find new fields $\phi'$ which behave classically. This is a bit like a Fourier transform, where a function that is narrow in $x$ is wide in $p$, and vice-versa; we will make this analogy more precise in \S2.3. (Having made this point, we will now revert to the quantum field theorist's conventions $\hbar = c = 1$.)
The interpretation of a duality is then that we have a single quantum system that has two classical limits. Quantum mechanics is sometimes presented as a naive one-to-one correspondence between classical and quantum theories. In this view we quantize the classical theory to go in one direction, and take the classical limit to go in the other. Of course there are exceptions; for example, a classical gauge theory with anomalies cannot be consistently quantized. But with dualities, a single quantum theory may have two or more classical limits. `Quantizing' any of these produces the same quantum theory.
The original wave-particle duality already exemplified this idea: given a QFT, one can take two different classical limits depending on what one holds fixed. One limit gives classical fields, the other classical particles. It is fruitless to argue whether the fundamental entities are particles or fields. The fundamental description (at least to the extent that we now understand) is a \mbox{QFT}. Similarly it is fruitless to argue whether $\phi$ or $\phi'$ provide the fundamental description of the world; rather, it is the full quantum theory.
With the dualities~(\ref{dualh}), the functional forms of $H_0$ and $H_1$ in terms of $\phi$ may be the same as those of $H_0'$ and $H'_1$ in terms of $\phi'$. In this case we would say that the theory is self-dual. Alternatively, the functional forms and even the nature of the fields may be quite different: in this case we have two very different ways to think about the system. The term $S$-duality is applied in either case; in some cases self-duality may be implied by the context.
It is not clear why the structure of theoretical physics is so kind to us, in providing simple descriptions in many limits that would seem to be very complicated. It may be that there are fewer consistent quantum theories than classical ones, so that we necessarily get to the same quantum theory from multiple classical starting points.
In \S2 we discuss dualities where both descriptions are QFT's. We begin with some classic examples, namely the Ising model, bosonization, and free electromagnetism, where the duality can be constructed rather explicitly. We then move on to richer examples, in particular supersymmetric Yang-Mills theories. For these, the duality is not proven but inferred. We discuss the evidence and the logic that supports the existence of these dualities. We also discuss the role of supersymmetry.
In \S3 we discuss dualities between string theories. We begin with $T$-duality, which connects two weakly coupled string theories and can be demonstrated rather explicitly. It illustrates a number of remarkable features of string theory: that space is not fundamental but emergent, and that strings perceive spacetime geometry in a rather different way from pointlike particles and fields. We then discuss weak/strong dualities in string theory, and the significance of branes. A notable conclusion is that there is only a single quantum theory in the end: what appear to be different string theories are different classical limits of a single quantum theory, whose full form is not yet known. The same analysis reveals the existence of new classical limits, which are not string theories at all.
In \S4 we discuss dualities in which one description is a QFT and the other a string theory. The existence of such dualities is remarkable, because QFT's are well-understood conceptually, while string theories include quantum gravity and so present many conceptual puzzles. In fact, field-string duality currently plays a key role in providing a precise definition of the quantized theory of strings and gravity. We describe how two puzzles, the black hole entropy and the black hole information paradox, have been clarified by dualities, although important questions remain open. We also discuss the holographic principle, in which the emergent nature of spacetime is even more radical. We conclude by discussing various open questions.
Apology: this is a rather sweeping subject, and I certainly have not set out to reconstruct the entire history of its development. I have tried to choose references that will be useful to the intended audience.
\sect{QFT/QFT dualities}
\setcounter{equation}{0}
\subsection{The Ising model}
A simple example of duality appears in the two-dimensional Ising model. This model is given by spins $\sigma_i$ living at the sites of a two-dimensional square lattice. The subscript $i$ labels the lattice sites, and each spin can take the values $\pm 1$. The action is
\begin{equation}
S_{\rm Ising} = - \sum_{\rm links} K \sigma_i \sigma_j \,.
\end{equation}
The sum runs over all nearest-neighbor pairs, i.e.\ links, of the lattice. The path integral is\footnote{To be precise, this is the analog of the Lagrangian form~(\ref{zpath}), in a Euclidean spacetime. The Hamiltonian form would have a discrete spatial direction but continuous time, which could either be Euclidean or Lorentzian. The duality exists in these cases as well.}
\begin{equation}
\left(\prod_i \sum_{\sigma_i = \pm 1}\right) e^{-S_{\rm Ising}} \,.
\end{equation}
The parameter $K$ plays the role of $1/g^2 \hbar$. When it is large, the path integral is highly peaked on the configurations that minimize the action, namely all $\sigma_i$ being equal (either +1 or $-1$). When it is small, the path integral receives contributions from many configurations.\footnote{For the Ising model, a strong coupling (small $K$) expansion actually exists: the discreteness of the spacetime and the boundedness of the spins allow an expansion in powers of $K$. Moreover, this system is exactly solvable for all $K$~\cite{Onsager:1943jn}. We will not make use of these special properties; our purpose is to illustrate weak/strong duality, which is more general.}
Kramers and Wannier~\cite{Kramers:1941kn} showed that the path integral could also be written as
\begin{equation}
\left(\prod_i \sum_{\sigma'_i = \pm 1}\right) e^{-S_{\rm Ising}'} \,,
\end{equation}
where
\begin{equation}
S'_{\rm Ising} = - \sum_{\rm links} K' \sigma'_i \sigma'_j \,, \quad
K' = -\frac12 \ln \tanh K \,,\quad K = -\frac12 \ln \tanh K' \,. \label{isingdual}
\end{equation}
The new variables $\sigma'_i$ also live on a square lattice and take the values $\pm 1$. The relationship of $K$ to $K'$ is more complicated than the simple reciprocal~(\ref{ggp}), but shares the property that $K' \to \infty$ as $K \to 0$ and vice versa. The transformation between the variables $\sigma_i$ and $\sigma'_i$ is nonlocal: the operator $\sigma'_i$ flips a whole half-line of $\sigma$ spins, ending at $i$. We will not derive this transformation or give its form explicitly here, but we will derive a similar transformation in \S2.3 below.
When $K$ is large, the action strongly favors all spins being parallel, and the system is ferromagnetic. The duality then implies that for small $K$, the dual variables $\sigma_i'$ behave ferromagnetically; this is a disordered state in terms of the $\sigma_i$. There must be a transition between these phases as $K$ is varied. If there is only a single transition, then the duality implies that it must take place at the self-dual value $K = K' = \frac12 \ln(1+\sqrt 2)$. This is the case for the Ising model, but more general dual systems can have multiple transitions, which must occur in dual pairs if they are not at the self-dual point.
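As a quick numeric check (an illustration, not part of the original derivation), one can verify that the Kramers-Wannier map $K' = -\frac12 \ln \tanh K$ is an involution, and that $K = \frac12 \ln(1+\sqrt 2)$ is its fixed point:

```python
import math

def dual(K):
    """Kramers-Wannier map K -> K' = -(1/2) ln tanh K."""
    return -0.5 * math.log(math.tanh(K))

# The map is an involution: applying it twice returns the original coupling.
for K in (0.1, 0.5, 1.3):
    assert abs(dual(dual(K)) - K) < 1e-9

# The self-dual coupling K* = (1/2) ln(1 + sqrt(2)) is a fixed point.
K_star = 0.5 * math.log(1.0 + math.sqrt(2.0))
print(abs(dual(K_star) - K_star) < 1e-9)  # True
```

The fixed point is exact: $\tanh K_* = \sqrt 2 - 1$, and $-\frac12 \ln(\sqrt 2 - 1) = \frac12 \ln(\sqrt 2 + 1) = K_*$.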
\subsection{Bosonization}
A second example of a surprising equivalence also arises in two spacetime directions. A nice review of this equivalence is given by Coleman~\cite{Coleman:1974bu}. One description is a massive Dirac fermion with a self-interaction,
\begin{equation}
S_{\rm ferm} = \int d^2 x \left( i \bar \psi \gamma^\mu \partial_\mu \psi - m \bar \psi \psi - \frac{g}{2} \bar \psi \gamma^\mu \psi \,\bar \psi \gamma_\mu \psi \right)\,.
\end{equation}
The other description is the sine-Gordon model, a scalar field with a cosine potential,
\begin{equation}
S'_{\rm bos} = \int d^2 x \left( - \frac12 \partial_\mu \phi \partial^\mu \phi + m \cos \beta \phi \right)\,.
\end{equation}
In the bosonic description, $\beta$ plays the role of a coupling constant. One sees this by expanding the potential in powers of $\beta$:
\begin{equation}
-m \cos \beta \phi = -m + \frac{m}{2} \beta^2 \phi^2 - \frac{m}{24} \beta^4 \phi^4 + \ldots \, .
\end{equation}
Aside from the unimportant constant term, the leading term at small $\beta$ is a quadratic mass term, while interactions of the $\phi$'s are suppressed by additional powers of $\beta$.
Remarkably, these theories are equivalent, with the mapping of parameters
\begin{equation}
\frac{\beta^2}{4\pi} = \frac{\pi}{\pi + g} \,.
\end{equation}
As $g$ goes to $\infty$, $\beta$ goes to zero. As $\beta$ goes to $\infty$, $g$ goes to $-\pi$. A negative $g$ means an attractive interaction, and $g = -\pi$ is the most attractive possible coupling, beyond which the theory is unstable.
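The landmarks of this coupling map are a one-line numeric check (an illustration only): the free-fermion point $g = 0$ corresponds to $\beta^2 = 4\pi$, large $g$ drives $\beta \to 0$, and $\beta^2$ diverges as $g \to -\pi$:

```python
import math

def beta_squared(g):
    """Bosonization map beta^2 = 4*pi^2/(pi + g), from beta^2/(4 pi) = pi/(pi + g)."""
    return 4.0 * math.pi * math.pi / (math.pi + g)

assert abs(beta_squared(0.0) - 4.0 * math.pi) < 1e-12  # free fermion <-> free boson
assert beta_squared(1e6) < 1e-4                        # strong fermion coupling -> small beta
# As g -> -pi from above, beta^2 diverges (the most attractive coupling).
print(beta_squared(-math.pi + 1e-6) > 1e6)  # True
```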
This duality can be demonstrated by starting with the free massless theories, $m = g = 0$. One then finds by explicit calculation that with the correspondence
\begin{equation}
\bar \psi \gamma^\mu \psi \leftrightarrow \frac{\beta}{2\pi} \epsilon^{\mu\nu} \partial_\nu \phi \,, \quad
\bar\psi \psi \leftrightarrow \cos\beta\phi \,, \label{dict}
\end{equation}
the amplitudes in the two theories are equal. One can then use the dictionary~(\ref{dict}) to show that deforming the fermionic theory toward nonzero $m$ and $g$ produces corresponding deformation of the bosonic theory. (To do this all properly requires some attention to the proper definition of the renormalized operators in the quantum theory.)
It is not so surprising that one can make a boson out of two fermions. Making a fermion out of bosons is more complicated. The fermion is a {\it kink}, a configuration where the bosonic field $\phi$ takes different values in the two spatial limits $x^1 \to \pm \infty$. Note that the multiple minima of the cosine potential, at $\phi = 2 \pi n /\beta$, make such a configuration stable. As with the disorder operator in the Ising model, the basic quantum of one description is nonlocal in terms of the other. More precisely, it is topological, being associated with nontrivial boundary conditions at infinity.
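For concreteness, the kink can be exhibited explicitly. With a conventional normalization of the sine-Gordon potential, $\frac{m^2}{\beta^2}(1 - \cos\beta\phi)$ (which differs from the form above by a rescaling of parameters), the static solution is
\begin{equation}
\phi(x) = \frac{4}{\beta} \arctan e^{m x} \,,
\end{equation}
which interpolates between the adjacent minima $\phi(-\infty) = 0$ and $\phi(+\infty) = 2\pi/\beta$; it is this winding between minima that carries the topological charge and makes the fermion stable.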
Both the Ising model and bosonization are realized in physical condensed matter systems. For example, bosonization has been a valuable tool in understanding tunneling processes at the 1+1 dimensional edge of the quantum Hall system. More generally, duality is often a powerful tool in condensed matter physics; some characteristic examples are~\cite{Shahar,Senthil}.
\subsection{Maxwell theory}
Our third example is Maxwell theory in four spacetime dimensions. We begin with the sourceless theory, with action
\begin{equation}
S_{\rm Maxwell} = - \frac{1}{4e^2} \int d^4x \,(\partial_\mu A_\nu - \partial_\nu A_\mu)(\partial^\mu A^\nu - \partial^\nu A^\mu) \label{maxact}
\,.
\end{equation}
We use the conventional $e$ to denote the coupling, while $g$ will be used for the magnetic coupling later.
The path integral is over all vector potentials $A_\mu(x)$,\footnote{We might imagine that the fields are defined over some range of times $t_f > t > t_i$, with initial and final boundary conditions, so that the path integral defines a transition amplitude. We will omit such details to focus on the central point.}
\begin{equation}
\int {\cal D}A\, e^{i S_{\rm Maxwell}} \,. \label{zmax}
\end{equation}
Given an antisymmetric tensor $F_{\mu\nu}(x)$, the condition that it can be written as the curl of a vector potential is the Bianchi identity,\footnote{To be precise, this is true for a topologically trivial spacetime, as we assume here for simplicity.}
\begin{equation}
\partial_\mu \tilde F^{\mu\nu} = 0\,,\quad \tilde F^{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu\sigma\rho} F_{\sigma\rho} \,,
\label{bianchi}
\end{equation}
where $\epsilon^{\mu\nu\sigma\rho}$ is the fully antisymmetric Levi-Civita tensor. We can then replace the path integral over vector potentials $A_\mu(x)$ with a path integral over an antisymmetric tensor field $F_{\mu\nu}(x)$ subject to the constraint~(\ref{bianchi}) at each point,
\begin{equation}
\int {\cal D}A \ldots = \int {\cal D}F \prod_x \delta( \partial_\mu \tilde F^{\mu\nu}(x))\ldots \,. \label{atof}
\end{equation}
There may be a Jacobian for this change of variables, but it is an uninteresting overall constant; similar constants are ignored in later equations as well. Now write the functional delta-function in integral form, analogous to $\int_{-\infty}^\infty dx\, e^{i x y} = 2\pi \delta(y)$,
\begin{equation}
\prod_x \delta( \partial_\mu \tilde F^{\mu\nu}(x)) = \int {\cal D} V \exp \left({ \frac{ i}{2\pi} \int d^4x\, V_\nu \partial_\mu \tilde F^{\mu\nu} }\right) \,.
\label{deltint}
\end{equation}
The factor of $1/2\pi$ is arbitrary; it can be absorbed into the normalization of $V_\mu$, but has been inserted to make later equations simpler.
Using the relations~(\ref{atof}) and~(\ref{deltint}), the path integral~(\ref{zmax}) becomes
\begin{equation}
\int {\cal D}F\,{\cal D}V\, \exp \left\{-i \int d^4x \left( \frac{1}{4e^2}F_{\mu\nu} F^{\mu\nu} - \frac{1}{4\pi} (\partial_\mu V_\nu - \partial_\nu V_\mu) \tilde F^{\mu\nu} \right) \right\} \,. \label{zmax2}
\end{equation}
In the second term we have integrated by parts and made use of the antisymmetry of $\tilde F_{\mu\nu}$. Now, integrating over $V_\mu$ just produces the functional delta function and takes us back to our starting point. But integrating over $F_{\mu\nu}$ leads us to a new form. This path integral is gaussian and can be carried out using the functional version of the usual identity
\begin{equation}
\int dx\, e^{- i a x^2/2 + i b x} = \sqrt{\frac{2\pi }{ia}} e^{ i b^2 /2 a} \,. \label{gauss}
\end{equation}
The result, ignoring the normalization constant as before, is\footnote{Some useful identities are $\tilde S_{\mu\nu} \tilde T^{\mu\nu} = - S_{\mu\nu} T^{\mu\nu}$ for any antisymmetric tensors $S$ and $T$, and $\tilde{\tilde {T}\,}\!_{\mu\nu} = - T_{\mu\nu}$ for any antisymmetric tensor $T$. \label{tildefoot}}
\begin{equation}
\int {\cal D}V\, \exp \left\{-{i \frac{ e^2}{16\pi^2}} \int d^4x\, (\partial_\mu V_\nu - \partial_\nu V_\mu)(\partial^\mu V^\nu - \partial^\nu V^\mu) \right\} \,.
\label{zmax3}
\end{equation}
Comparing the forms~(\ref{zmax}) and~(\ref{zmax3}), we have turned a path integral over a vector field $A_\mu(x)$ into a path integral of similar form but over a new vector field $V_\mu(x)$, which entered originally as an auxiliary field in the integral representation of the functional delta function.
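The intermediate step is a completion of the square. Writing $W_{\mu\nu} = \partial_\mu V_\nu - \partial_\nu V_\mu$, and using $W_{\mu\nu} \tilde F^{\mu\nu} = \tilde W_{\mu\nu} F^{\mu\nu}$ together with the identities of footnote~\ref{tildefoot}, the exponent in the path integral~(\ref{zmax2}) can be rearranged as
\begin{equation}
\frac{1}{4e^2} F_{\mu\nu} F^{\mu\nu} - \frac{1}{4\pi} \tilde W_{\mu\nu} F^{\mu\nu}
= \frac{1}{4e^2} \left( F_{\mu\nu} - \frac{e^2}{2\pi} \tilde W_{\mu\nu} \right) \left( F^{\mu\nu} - \frac{e^2}{2\pi} \tilde W^{\mu\nu} \right) + \frac{e^2}{16\pi^2} W_{\mu\nu} W^{\mu\nu} \,.
\end{equation}
Shifting the integration variable $F_{\mu\nu}$ to eliminate the squared term and performing the gaussian integral~(\ref{gauss}) leaves the dual form~(\ref{zmax3}).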
To see the relation between these path integrals, consider the equation of motion from varying $F_{\mu\nu}$ in the action~(\ref{zmax2}),
\begin{equation}
\tilde F_{\mu\nu} = - G_{\mu\nu}\,,\quad G_{\mu\nu} = \frac{e^2}{2\pi} (\partial_\mu V_\nu - \partial_\nu V_\mu) \,. \label{av}
\end{equation}
That is, the electric part of $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the magnetic part of $\partial_\mu V_\nu - \partial_\nu V_\mu$, and vice versa. This is the electric-magnetic duality of free electromagnetism, $\vec{ E} \to \vec { B},\ \vec { B} \to - \vec { E}$ (the factor of $e^2/2\pi $ depends on conventions, and can be removed by rescaling fields).
Note that the integral~(\ref{gauss}) is the Fourier transformation of a gaussian, illustrating the comment in \S1 that a duality is something like a Fourier transform. Also, the relation~(\ref{av}) between $A_\mu$ and $V_\mu$ is nonlocal: it is local in terms of the curls of these potentials, but one must integrate over spacetime to express it in terms of the potentials themselves.
We can see already a very important lesson. The path integral~(\ref{zmax}) has a gauge invariance
\begin{equation}
A_\mu(x) \to A_\mu(x) + \partial_\mu \lambda(x)\,.
\end{equation}
The path integral~(\ref{zmax3}) has a gauge invariance
\begin{equation}
V_\mu(x) \to V_\mu(x) + \partial_\mu \chi(x)\,.
\end{equation}
Nothing transforms under the $\chi$ gauge invariance in the original theory~(\ref{maxact}, \ref{zmax}): this invariance is {\it emergent}.
In the introduction we have noted that dualities call into question our notion of what is fundamental. We see that this includes even gauge invariance, which we might have thought to be one of the fundamental principles. This example might seem rather trivial, but the phenomenon of emergent gauge theories has proven to be much more general, ranging from condensed matter systems to string theory.
In perturbative gauge theories one seems to find larger and larger gauge symmetries as one goes to higher energies: from the $U(1)$ of electromagnetism to the $SU(3) \times SU(2) \times U(1)$ of the Standard Model to the $SU(5)$ or larger of grand unified theories. One might have supposed that the goal was to identify the ultimate gauge invariance from which all else descends. We now recognize that in a variety of contexts gauge invariance can emerge from nothing.
This is consistent with the insight that gauge invariances are not symmetries, but rather redundancies, the same physical configuration being described by more than one field configuration. Dualities relate only physical observables, and so are blind to gauge invariances. Global symmetries, which act nontrivially on physical states, are the same in both dual descriptions.
In going from the original path integral (\ref{zmax}) to the dual~(\ref{zmax3}), $e$ has been replaced by
\begin{equation}
e' = \frac{2\pi}{e}\,.
\end{equation}
This looks like the weak/strong duality that was discussed in \S1, but it is a fake. This theory is free, with a gaussian action in both forms, and the `couplings' can be absorbed into rescalings of the fields $A_\mu$ and $V_\mu$. We have written things the way we have in order to illustrate a more general principle, and we will note in \S2.4 some examples where analogous transformations produce a true weak/strong duality.
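To make the rescaling explicit: under $A_\mu \to e A_\mu$, the action~(\ref{maxact}) becomes
\begin{equation}
S_{\rm Maxwell} = - \frac{1}{4} \int d^4x \,(\partial_\mu A_\nu - \partial_\nu A_\mu)(\partial^\mu A^\nu - \partial^\nu A^\mu) \,,
\end{equation}
with no coupling left at all, and likewise $V_\mu \to (2\pi/e) V_\mu$ removes $e'$ from the dual form~(\ref{zmax3}). The coupling becomes meaningful only when charged matter is included.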
As a final exercise, consider adding a $\theta$-term
\begin{equation}
\frac{\theta}{16\pi^2} F_{\mu\nu} \tilde F^{\mu\nu} \label{thetaterm}
\end{equation}
to the action~(\ref{zmax2}). With a little algebra you can show that the final action in terms of $V$ can be written in terms of couplings $e'$ and $\theta'$. The transformation is simple when written in terms of
\begin{equation}
\tau = \frac{\theta}{2\pi} +i\frac{2\pi}{e^2} \,. \label{tau}
\end{equation}
It is simply
\begin{equation}
\tau' = -\frac1\tau\,.
\end{equation}
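As a numeric sanity check (illustrative only), one can confirm that at $\theta = 0$ the rule $\tau' = -1/\tau$ reproduces the earlier coupling relation $e' = 2\pi/e$:

```python
import math

def tau(theta, e):
    """tau = theta/(2*pi) + i*2*pi/e**2, as in the definition of tau above."""
    return complex(theta / (2.0 * math.pi), 2.0 * math.pi / e**2)

e = 0.7
tp = -1.0 / tau(0.0, e)   # the duality tau' = -1/tau
# At theta = 0, tau is purely imaginary, so tau' is too: theta' = 0.
assert abs(tp.real) < 1e-12
# Read off e' from Im(tau') = 2*pi/e'^2 and compare with e' = 2*pi/e.
e_prime = math.sqrt(2.0 * math.pi / tp.imag)
assert abs(e_prime - 2.0 * math.pi / e) < 1e-9
print(round(e_prime, 6))
```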
\subsection{Generalizations}
The Maxwell theory can be generalized to $p$-form potentials, with $p$ antisymmetric indices, and to any number of spacetime dimensions $d$. The exact same steps --- treat the field strength as an independent field, introduce the Bianchi constraint in integral form, and integrate out the field strength ---
again produce a dual theory, with a $p' = d-p - 2$ index potential (note that the fully antisymmetric $\epsilon$-tensor has $d$ indices). We will refer to this general set of transformations as Hodge dualities. The Maxwell case is $d=4$, $p = p' = 1$. A variety of higher dimensional forms are present in string theory and supergravity. A rather useful case is simply $d=2$, $p = p' =0$. A $0$-form potential is just a scalar field, and its `field strength' is just its gradient. This gives two representations of the two-dimensional massless scalar,
\begin{equation}
\partial_\mu \phi = \epsilon_{\mu\nu} \partial^\nu \phi' \,. \label{ttrans}
\end{equation}
The divergence of the right side is automatically zero, so we get a massless field equation for $\phi$.
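The degree count can be summarized in a line of code (an illustration only): dualizing a $p$-form potential in $d$ dimensions gives a $(d-p-2)$-form, and dualizing twice returns the original degree.

```python
def dual_degree(d, p):
    """Degree of the Hodge-dual potential: a p-form potential in d
    spacetime dimensions dualizes to a (d - p - 2)-form potential."""
    return d - p - 2

# The map is an involution: dualizing twice returns the original degree.
for d in range(2, 12):
    for p in range(0, d - 1):
        assert dual_degree(d, dual_degree(d, p)) == p

# Maxwell theory: d = 4, p = 1 is self-dual in degree.
assert dual_degree(4, 1) == 1
# The two-dimensional scalar: d = 2, p = 0 is also self-dual in degree.
assert dual_degree(2, 0) == 0
print("ok")
```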
The Ising model can be thought of as a scalar field theory in which the field lives on a discrete spacetime and is limited to discrete values $\pm 1$.
The exact same steps can be applied in the discrete case to produce a dual description, and this is actually the derivation of the Ising dual~(\ref{isingdual}). In contrast to the Maxwell case, this is a true weak/strong duality: the discrete values of the variables don't allow for a rescaling.
The Ising model has $p$-form generalizations, where the variables live on $p$-dimensional cells of the lattice. Again, the same steps produce a dual theory~\cite{Savit:1979ny}. For example, the three-dimensional Ising model is dual to a three-dimensional $\mathbb Z_2$ gauge theory, where the potential lives on links and takes values $\pm 1$. Further generalizations, again having duals, allow the potentials to take values in $\mathbb Z_N$, or the integers. These systems have various phases, and the phase diagrams are duality-symmetric.
Consider now coupling the Maxwell theory to electrically and magnetically charged fields at the same time. As shown by Dirac~\cite{Dirac:1931kp}, the quantum theory is consistent only if the product of the electric charge $e$ and magnetic charge $g$ is a multiple of $2\pi$. In particular, let us suppose that the product takes the minimal value, so that $g = 2\pi/e$. Now apply the Maxwell duality above, which takes $e$ to $e' = 2\pi/e$ and switches electric and magnetic charges. The electric charges $e$ become magnetic charges $e = 2\pi/e'$, while the magnetic charges $2\pi/e$ become electric charges $2\pi /e = e'$. What this means is that the theories with couplings $e$ and $e'$ are {\it the same} under the Maxwell change of variables: we have found a self-duality.
This theory is somewhat unsatisfactory (and also somewhat obscure; we will meet a better one in the next section, in which the magnetic charges are solitons rather than independent fields). It has no weakly coupled limit: the theory has both electric and magnetic charges, and in either description one of these will be strongly coupled. Thus, we have found a `strong-strong' duality, which is not so useful. Also, following the discussion in the introduction, this means that the theory has no classical limit --- whichever field is strongly coupled equivalently has large quantum fluctuations. Related to this, it has no local and covariant Lagrangian: the Dirac quantization condition implies that the definition of the charged field depends nonlocally on the configuration of magnetic charges, and vice versa.\footnote{On a personal note, I have a long history with this subject. The first warmup problem given to me as a graduate student by my advisor Stanley Mandelstam was to find a Lagrangian for this system. I will explain his interest in this subject below.}
A nonlocal Lagrangian is given in Ref.~\cite{Brandt:1978wc}. This theory, with some additional supersymmetry, can be obtained as the low energy limit of a strongly coupled theory that does have a local covariant Lagrangian~\cite{Argyres:1995jj}.
\subsection{Montonen-Olive duality}
If one is keeping score to this point, we have described weak/strong dualities in two spacetime dimensions and for various discrete systems in higher dimensions, and a strong/strong duality in $d=4$, but no weak/strong duality for a continuum QFT in $d=4$. Non-Abelian gauge theories are a place to look for this, because they have weakly coupled limits, and they can have magnetic monopoles. These monopoles, found by 't Hooft and Polyakov~\cite{'tHooft:1974qc,Polyakov:1974ek}, arise as solitons, classical solutions to the field equations (like the sine-Gordon kink).
The non-Abelian electrically and magnetically charged particles (which we will refer to as charges and monopoles, respectively) seem to have very different origins: the former arise from the quantization of the fields, while the latter are particle-like solitons even in the classical limit. At weak coupling, the electrically charged objects are light, weakly interacting, and essentially pointlike, while the magnetic monopoles are heavy (the soliton mass contains a factor of $1/g^2$ as compared to the electrically charged quanta), strongly interacting, and have a finite radius set by the scale $v$ of gauge symmetry breaking. Consider now what happens as the coupling is increased, as indicated by the orders of magnitude in the table.
\begin{table}[h]
\begin{center}
\begin{tabular}{rcc}
& electric & magnetic \\
& charges & monopoles \\ \hline
pair potential & $g^2$& $1/g^2$ \\
mass & $g v$ & $v/g$\\
size$\,\times\,$mass ($g \ll 1$) & $\ll 1$ & $1/g$
\end{tabular}
\end{center}
\end{table}
The electrically charged objects interact more strongly, while the magnetically charged objects interact more weakly. The ratio of the monopole to the charge mass goes down, and if the masses can be reliably extrapolated then for $g > O(1)$ the monopoles are actually lighter. The electric quanta become less pointlike, as they split more frequently into virtual quanta. The ratio of the monopole size to its Compton wavelength goes as $1/g$ and so becomes smaller. If this last can be reliably extrapolated to large values of $g$, then the monopoles end up much smaller than their Compton wavelengths. In this case one would think that they can be quantized as fundamental fields.
In 1977, Montonen and Olive conjectured on this basis that non-Abelian gauge theories having magnetic monopole solutions would be invariant under a weak/strong, electric/magnetic duality; the simplest example would be the Georgi-Glashow $O(3)$ weak interaction model. In the following year, Witten and Olive introduced supersymmetry into the story~\cite{Witten:1978mh}, showing that in a supersymmetric extension of this model the calculation of the masses is exact, so that at large $g$ one can reliably say that the monopoles are lighter than the charges. To be precise, one needs ${\cal N}=2$ supersymmetry for this to hold.\footnote{In four spacetime dimensions, the smallest supersymmetry algebra has one Majorana or Weyl supercharge, meaning four real or equivalently two complex supersymmetry charges. Some systems are invariant under extended supersymmetry, with ${\cal N}$ copies of this basic algebra. Larger algebras require particles of higher spin, so that ${\cal N}= 4$ is the maximum for a QFT without gravity, and ${\cal N=8}$ is the maximum for a gravitational theory.} Osborn then showed that for ${\cal N}= 4$ supersymmetry (but not less), the spins of the charges and monopoles match as well.
In retrospect, this is strong circumstantial evidence. In particular, the fact that the monopoles are much lighter than the charges at strong coupling is striking. However, this duality could not be demonstrated by the kind of Lagrangian manipulations employed above (in particular, the non-Abelian action and Bianchi identity contain $A_\mu$ and not just $F_{\mu\nu}$, so the trick of treating $F_{\mu\nu}$ as the independent field goes nowhere fast).
A Lagrangian derivation is still lacking, but in the 1990's the circumstantial evidence for such dualities expanded rapidly. Sen showed that also the dyon spectrum (e.g.\ magnetic charge 2, electric charge 1) is as predicted by duality~\cite{Sen:1994yi}. To understand this, we should note that there is more to the duality group than the weak/strong interchange. Referring back to the $\theta$ term~(\ref{thetaterm}), the quantization of instanton charge implies an invariance $\theta \to \theta + 1$. For the combined parameter $\tau$ (\ref{tau}), we then have the two invariances $\tau \to - 1/\tau$ and $\tau \to \tau+1$. Together these generate the infinite discrete group~\cite{Cardy:1981qy}
\begin{equation}
\tau \to \frac{a \tau + b}{c \tau + d} \,,\quad \left[ \begin{array}{c} \vec E \\ \vec B \end{array} \right] \to \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{c} \vec E \\ \vec B \end{array} \right] \,,
\end{equation}
where $a,b,c,d$ are integers such that $ad-bc = 1$. This symmetry, known as $SL(2,\mathbb Z)$, relates the spectrum of dyons to that of the electric charges. Another check was the demonstration by Vafa and Witten that certain twisted versions of the path integral are dual~\cite{Vafa:1994tf}.
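The group structure can be made concrete with a small numeric sketch (illustrative only, not from the text): the two generators $\tau \to -1/\tau$ and $\tau \to \tau+1$ correspond to the integer matrices $S$ and $T$ below, and since determinants multiply, any word in them has integer entries with $ad-bc=1$, i.e.\ lies in $SL(2,\mathbb Z)$.

```python
# Toy check of the SL(2,Z) generators (names S, T are our own labels):
# S represents tau -> -1/tau, T represents tau -> tau + 1.
import random

S = ((0, -1), (1, 0))   # tau -> -1/tau
T = ((1, 1), (0, 1))    # tau -> tau + 1

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mobius(A, tau):
    # action tau -> (a tau + b) / (c tau + d)
    (a, b), (c, d) = A
    return (a * tau + b) / (c * tau + d)

# S acts on tau as -1/tau:
assert abs(mobius(S, 2j) + 1 / (2j)) < 1e-12

# Random words in S and T stay inside SL(2,Z):
random.seed(0)
for _ in range(100):
    M = ((1, 0), (0, 1))
    for _ in range(random.randint(1, 10)):
        M = matmul(M, random.choice([S, T]))
    assert det(M) == 1
print("all products of S and T have determinant 1")
```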
Beyond this accumulation of evidence for the ${\cal N} = 4$ theory, this subject exploded when Seiberg began to solve the strongly coupled dynamics of ${\cal N}=1 $ supersymmetric theories~\cite{Seiberg:1994bz}, and Seiberg and Witten did the same for ${\cal N}=2$ theories~\cite{Seiberg:1994rs}. These provided many cross-checks on the reliability of these circumstantial arguments, and they showed that ${\cal N} = 4$ is not an isolated example but part of an elaborate and massively self-consistent web.
Not long after, this web was extended to string theory, as we will describe in \S3. Even for a skeptic, accustomed to path integral arguments, it became impossible to resist the weight of evidence.
Finally, we emphasize that as in the simple Maxwell duality, the gauge invariances in the two dual descriptions are unrelated. From the point of view of either theory, the other gauge symmetry is emergent. The dual theories may even have different Lie algebras~\cite{Goddard:1976qe}.
\subsection{Remarks}
The absence of a derivation remains a puzzle. The path integrals on both sides can be given rather precise definitions, so we have a sharp mathematical statement (actually many statements, the matching of all correlators and other observables) and no proof. It should be noted that in quantum field theory proof usually lags far behind what we can understand physically, but one feels that this is a gap that must eventually be closed.
It may be that the Wilsonian path integral construction of field theory is not the correct starting point~\cite{cyberg}. To this day, it remains the only universal tool for proceeding from a Lagrangian or a Hamiltonian to the full quantum theory, but it seems clumsy in theories of high symmetry.
A possible alternative would be to find a construction that applied only with a large amount of symmetry, and then reach theories of less symmetry by some process of deformation. In this context, we note an argument for $S$-duality of ${\cal N}=4$ Yang-Mills theory, beginning with the $d=6$ $(2,0)$ theory~\cite{Witten:1995zh}. When two of the six dimensions are made periodic, the low energy physics is the $d=4$, ${\cal N}=4$ theory. Moreover, the coupling $g$ is the ratio of the periods of the two dimensions. But there is no difference between these two directions, so by switching them we invert the coupling! But what is this $(2,0)$ theory? Well, it does not have a Lagrangian, or a classical limit, or any complete definition. It is inferred to exist from the low energy limit of the physics on certain stringy 5-branes. So clearly much is lacking, but this may point the way to the future, if we could construct this theory. Further, a host of ${\cal N}=2$ dualities follow by compactifying it on other two-dimensional manifolds~\cite{Gaiotto:2009we}.
What is the role of supersymmetry in $S$-duality? Supersymmetry allows the calculation of certain quantities at strong coupling, and so makes it possible to check duality conjectures where there are no other methods. But in many cases, supersymmetry plays a more essential role: it allows the strongly coupled theory to exist at all. To see the issue, consider adding to the QED vacuum an electron-positron pair in some region of size $r$ smaller than the Compton wavelength of the particles. This costs a kinetic energy of order $1/r$, times two. However, there is a negative potential energy of order $-e^2/r$. For $e^2$ sufficiently greater than 1, adding the pair lowers the energy. The would-be vacuum is then unstable at all scales $r$, and there may be no final state at all. But in supersymmetric theories, there is a natural stability built in. Essentially the Hamiltonian is the sum of squares of supersymmetry charges, $H = \sum_i {\cal Q}_i^2$, and so the energy is nonnegative and bounded below.
In general, as the amount of supersymmetry is reduced, the dynamics becomes richer, and dualities act in more complicated ways. Duality symmetries of some form exist in nonsupersymmetric theories, but generally they apply only in the low energy limit. As a familiar such example, the low energy limit of QCD is the theory of pion physics, which is also described by a sigma model in terms of scalar fields.
Finally, we note another interesting application of duality. It was observed by 't Hooft~\cite{'tHooft:1977hy} and Mandelstam \cite{Mandelstam:1978ed} that confinement of quarks (another universally believed but unproven property in quantum field theory) would follow if the QCD vacuum were the electric/magnetic dual of a superconductor. A superconductor excludes magnetic fields via the Meissner effect, so a dual superconductor would exclude (color) electric fields and force color-electric charges into neutral bound states. QCD itself does not have a precise dual, but this mechanism can be seen in various supersymmetric theories that are closely related to QCD~\cite{Seiberg:1994rs}.
\sect{String/string dualities}
\setcounter{equation}{0}
In string theory we need to distinguish two kinds of quantum effects: the wiggles on a single string, and the splitting of the string into two strings. These are controlled by different parameters. The importance of the wiggles depends on $l_{\rm s}/ l$, where $l_{\rm s}$ is the characteristic length scale of string theory, and $l$ is the characteristic length of the system being studied. The splitting depends on the string coupling $g_{\rm s}$. Each quantum effect in turn is associated with a kind of duality, known as $T$- and $S$-duality respectively. The $T$-duality is simpler, since the wiggles of the string are described by a quantum field theory, though we will see that it teaches us rich lessons. The $S$-duality gets into the full and puzzling nature of quantized string theory.
\subsection{$T$-duality}
String theory originally took hold as a solution to the short distance problem of quantum gravity. If we try to build a quantum theory of gravity by the standard procedure of feeding the classical Lagrangian into the path integral, the result is nonrenormalizable: the divergences become worse with each order of perturbation theory. Such problems had been encountered before, and had pointed the way to new physics. The Fermi theory of the weak interaction was highly successful in accounting for observations, but it too was nonrenormalizable, pointing to a breakdown of the theory at short distance. In order to solve this problem, it was necessary to resolve the pointlike interaction of the Fermi theory into the exchange of a $W$ boson. Indeed, this clue, together with some imagination, was enough to lead to the Weinberg-Salam model. This predicted the precise properties of the $W$ and $Z$ bosons, and the Higgs, well before there was any direct evidence that any of these existed.
In the case of gravity, the new physics has to be more than just a few new particles. In string theory, the basic point-like quanta of QFT are expanded into loops and strands. This smooths the short distance behavior of the amplitudes. But going from points to strings took us out of the familiar framework of QFT, where we have had many decades to learn how things work. For string theory, we had rules of calculation, but not the intuition for all that they imply. In this circumstance, it is useful to consider a variety of thought experiments.
A particular thought experiment that has been enormously fruitful is imagining what happens if we put the strings in some compact space, and then contract the space.
For comparison, let us do this first for a quantum field. We will take the mathematically simplest kind of box, where one or more dimensions are periodic with period $2\pi L$. For a field to fit into such a box it must be of the form $e^{iny/L}$ for integer $n$, where $y$ is the coordinate along the periodic dimension. This corresponds to a momentum $n/L$, and therefore an energy
\begin{equation}
E^2 = p_\bot^2 + \frac{n^2}{L^2} + M^2\,, \label{parte}
\end{equation}
where $p_\bot$ is the momentum in the noncompact directions and $M$ is the rest mass. Now, as we take $L \to 0$, this energy diverges. The only states of finite energy are those with $n=0$, those that don't depend on the compact dimension at all. If we start with $d$ spacetime dimensions, and compactify $k$ of them in this way, then only the $d-k$ noncompact momenta can be zero: the fields move only along the noncompact dimensions. In effect, the small dimensions disappear, at least to probes with energy less than $1/L$.
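The decoupling of the $n \neq 0$ modes can be seen numerically. The sketch below (arbitrary units, cutoff chosen for illustration) evaluates the energy~(\ref{parte}) and shows that as $L$ shrinks, every nonzero momentum mode is pushed above any fixed energy cutoff.

```python
# KK spectrum sketch: E^2 = p_perp^2 + n^2/L^2 + M^2.
# As L -> 0, only the n = 0 mode stays below a fixed cutoff.
import math

def E(n, L, p_perp=0.0, M=0.0):
    return math.sqrt(p_perp**2 + (n / L)**2 + M**2)

cutoff = 10.0
for L in (1.0, 0.1, 0.01):
    light = [n for n in range(-50, 51) if E(n, L) < cutoff]
    print(f"L = {L}: modes below cutoff = {light}")
# At L = 1.0 many modes survive; by L = 0.01 only n = 0 remains.
```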
For a closed string (a loop), the energy is
\begin{equation}
E^2 = p_\bot^2 + \frac{n^2}{L^2} + \frac{w^2 L^2}{\alpha'^2} + \frac{\pmb N}{\alpha'} \,. \label{stringe}
\end{equation}
The first and second terms are the same as in the QFT energy~(\ref{parte}). The final term is also the same: the mass-squared of a string is proportional to its excitation level $\pmb N$ (which may include a zero-point constant), and inversely proportional to $\alpha'$. The third term in the string energy~(\ref{stringe}) is special to string theory. It arises because a closed string can wind around the periodic dimension. The integer $w$ counts the number of windings.
This energy comes from the tension of the stretched string, and so is proportional to the distance $2\pi L$ that the string must stretch.
Now consider what happens as $L \to 0$. The states of nonzero $n$ again go to high energy. But now we have a large set of very low energy states at nonzero $w$. This is the opposite of what happens in the $L \to \infty$ limit, where the $n \neq 0$ states go over to a continuum while the $w\neq 0$ states go to high energy. In fact, the energy is completely symmetric between large $L$ and small $L$: if we interchange
\begin{equation}
L \to \frac{\alpha'}{L} \,,\quad (n ,w) \to (w, n)
\end{equation}
it is unchanged~\cite{Kikkawa:1984cp,Sakai:1985cs}. Evidently there is no difference between large $L$ and small $L$! This holds not just for the spectrum~(\ref{stringe}), but for the interactions of the string as well~\cite{Nair:1986zn}. In the effective two-dimensional QFT of the string world-sheet, $T$-duality is just the $d=2$, $p = p' =0$ Hodge duality~(\ref{ttrans}). The coordinates of the string in spacetime are scalar fields in the world-sheet QFT, and the coordinates in the dual theory are related as in Eq.~(\ref{ttrans}).
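The claimed symmetry of the spectrum~(\ref{stringe}) can be verified directly. The sketch below (with an arbitrary value of $\alpha'$) checks that the closed-string energy is unchanged under $L \to \alpha'/L$ with $(n,w) \to (w,n)$, for a range of quantum numbers.

```python
# T-duality check on the closed-string spectrum:
# E^2 = p^2 + n^2/L^2 + w^2 L^2/alpha'^2 + N/alpha'.
alpha = 1.3  # arbitrary value of alpha' for the check

def E2(n, w, L, p=0.0, N=0):
    return p**2 + n**2 / L**2 + w**2 * L**2 / alpha**2 + N / alpha

L = 0.7
for n in range(-3, 4):
    for w in range(-3, 4):
        for N in range(3):
            # interchange L <-> alpha'/L and (n, w) <-> (w, n)
            assert abs(E2(n, w, L, N=N) - E2(w, n, alpha / L, N=N)) < 1e-12
print("spectrum invariant under L -> alpha'/L, (n,w) -> (w,n)")
```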
This comparison of strings~(\ref{stringe}) with point particles~(\ref{parte}) reveals that strings see spacetime differently. In particular we see signs of a minimum length at the self-dual value $L = \sqrt{\alpha'}$. This is also an example of emergent spacetime, to go along with the emergent gauge symmetry noted earlier: the effective large spacetime that appears as $L \to 0$ is not the original one in which the string was embedded, but emerges from the quantum dynamics of the string.
There is a natural question here: if large $L$ and small $L$ are equivalent (e.g. a light-year and $10^{-77}$ cm), why should we prefer the large-$L$ picture?\footnote{I thank Eliezer Rabinovici for raising this question and sharing his thoughts on it.} The point is that there is a locality structure: things far apart in the large-$L$ picture cannot interact directly with each other. This important property is not manifest in the small-$L$ picture.
Beyond the simple $T$-duality of periodic flat dimensions, stringy geometry extends to curved spaces in interesting ways. Mirror symmetry is an extension of $T$-duality relating strings moving on different Calabi-Yau manifolds. It implies surprising connections between different mathematical objects, and has led to extensive interaction between mathematics and string theory~\cite{Hori:2003ic}. These methods have also led to controlled descriptions of transitions between spaces of different topology, going beyond familiar notions of geometry. Further developments are described in the reviews~\cite{Giveon:1994fu,Quevedo:1997jb}.
It is very interesting to extend this thought experiment to open strings~\cite{Dai:1989ua,Horava:1989ga}. Open strings do not have a winding number: their ends are free, so they can simply unwind. Correspondingly, the mass formula
\begin{equation}
E^2 = p_\bot^2 + \frac{n^2}{L^2} + \frac{\pmb N}{4\alpha'} \label{ostringe}
\end{equation}
has no winding number term. As $L \to 0$, no new light states emerge, and the small dimension disappears just as in \mbox{QFT}. But this is a puzzle. Theories with open strings always have closed strings as well. So if we start with open plus closed strings in $d$ spacetime dimensions, and take $k$ dimensions to be very small, the closed strings still feel a $d$-dimensional space, but the open strings move in $d-k$ dimensions. The physical interpretation of this calculation, as can be confirmed by further thought experiments, is that in the dual picture we do not have empty space, but rather a membrane with $d-k-1$ spatial dimensions, and the endpoints of the open strings are stuck to this membrane. These were termed D-branes (D for Dirichlet, referring to the boundary condition for the open string endpoints).
The surprising lesson is that string theory has additional extended objects of various dimensions. One might think of them as being solitons,
built out of strings in some way. More precisely, they are open string solitons. Around the same time, other classes of extended objects built out of closed strings were found --- black branes~\cite{Horowitz:1991cd} and NS5-branes~\cite{Callan:1991dj}. The precise role of all of these took a few years to become clear, and we will return to it.
The search for consistent theories of relativistic one-dimensional objects eventually led to five distinct string theories: types I, IIA, and IIB, and heterotic string theories with gauge groups $SO(32)$ and $E_8 \times E_8$. All of these are supersymmetric, and in all cases consistent quantization requires that the string move in ten dimensions.
These actually involve just two kinds of string. In types I, IIA, and IIB, supersymmetric degrees of freedom move along the string in both directions. In the two heterotic theories, supersymmetric degrees of freedom move in one direction, and gauge degrees of freedom in the other. The differences between the theories in each class are the boundary or periodicity conditions on a piece of string of finite length.
In fact, $T$-dualities connect the theories within each class. A $T$-duality on type IIA produces IIB, and vice versa~\cite{Dine:1989vu,Dai:1989ua}: the interchange of winding with momentum has this effect on the periodicity conditions. A $T$-duality on the type I theory produces a space with a D-brane immersed in a IIA or IIB background, as described above.\footnote{The type I theory itself can be thought of as a type II theory with D9-branes. That is, they fill the nine space dimensions, so are not perceived as dynamical objects. The field equations require that there be 16 D9-branes, as well as a related object, the orientifold 9-plane, which together lead to a gauge group $SO(32)$.} For either heterotic theory, if one takes the $L\to 0$ limit in combination with turning on a background gauge potential, one can get to the other~\cite{Narain:1985jj}.
In all, studying strings in small spaces has been a remarkably productive thought experiment, leading to stringy geometry, D-branes, and connections between seemingly distinct string theories.
\subsection{$S$-dualities}
We now consider dualities with respect to the string coupling $g_{\rm s}$. As with the field theory $S$-dualities, an effective strategy is to look at quantities that can be calculated at strong coupling using supersymmetry, and to compare them with the weak-coupling values. These quantities include the low energy effective theory (the spectrum and interactions of the massless fields), and the spectrum of BPS states. BPS states are massive states that are invariant under some subgroup of the supersymmetry algebra. To see the basic idea~\cite{Witten:1978mh}, note that a typical supercharge $Q$ has the property
\begin{equation}
Q^2 = H - G \,, \label{centralcharge}
\end{equation}
where $H$ is the Hamiltonian and $G$ is some other conserved quantity ($G$ may depend on the couplings, but this dependence is determined by supersymmetry). Then for any state,
\begin{equation}
\langle\psi | H |\psi \rangle - \langle\psi | G |\psi \rangle = \langle\psi | Q^2 |\psi \rangle = \| Q |\psi \rangle \|^2 \geq 0 \,.
\end{equation}
Equality holds if and only if $Q |\psi \rangle =0$ for one or more supercharges, in which case $ |\psi \rangle$ is termed a BPS state. The energy of such a state is thus determined by its charge, for any value of the coupling.
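The bound and its saturation can be illustrated in a toy quantum-mechanical model (our own construction, not from the text): for any hermitian $Q$, the operator $Q^2 = H - G$ is positive semidefinite, and a state annihilated by $Q$ saturates the bound.

```python
# BPS bound sketch: Q^2 = H - G with Q hermitian implies <H> >= <G>.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
Q = (A + A.conj().T) / 2        # toy hermitian supercharge
HminusG = Q @ Q                 # Q^2 = H - G

evals = np.linalg.eigvalsh(HminusG)
assert np.all(evals >= -1e-10)  # energy bounded below by the charge

# A BPS state saturates the bound: build a Q with an explicit zero mode.
Qbps = np.diag([0.0, 1.0, 2.0])
psi = np.array([1.0, 0.0, 0.0])
assert np.linalg.norm(Qbps @ psi) == 0.0    # annihilated by Q
assert psi @ (Qbps @ Qbps) @ psi == 0.0     # <H - G> = 0: BPS
print("bound holds; the zero mode saturates it")
```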
Evidence for string-string $S$ dualities appeared as early as Refs.~\cite{Cremmer:1978km,Duff:1987bx,Font:1990gx,Duff:1994zt}, but the full and intricate pattern only emerged with Refs.~\cite{Hull:1994ys,Townsend:1995kk,Witten:1995ex,Horava:1995qa}. Consider first IIB string theory. This has odd-dimensional D-branes, so there are two one-dimensional BPS objects, the fundamental string whose quantization defines the theory, and the D1-brane or D-string.\footnote{The second object was originally described as a `black brane.' The relation with the D-brane picture will be explained in \S4.}
Their tension is in the ratio
\begin{equation}
\tau_{\rm F1}/\tau_{\rm D1} = g_{\rm s} \,.
\end{equation}
This is invariant under $g_{\rm s} \to 1/g_{\rm s}$ with interchange of the F1 and D1, suggesting self-duality of IIB string theory. This is also consistent with various other evidence, such as the self-duality of the low energy effective action. It is interesting to look at the effect of the duality on D3-branes. A characteristic property of D3-branes is that their massless degrees of freedom are described by an ${\cal N}=4$ gauge theory, whose gauge group is $U(N)$ for $N$ coincident D3-branes~\cite{Witten:1995im}. The IIB duality takes D3-branes into themselves, but it reverses the electric and magnetic fields on them. Thus, IIB self-duality implies Montonen-Olive duality. The reverse is not necessarily true, but this is one simple example of the web of interconnections among all the dualities.
The IIA string has even-dimensional D-branes, so no D-string. One clue as to its dual was the early observation~\cite{Cremmer:1978km} that Kaluza-Klein (KK) compactification of $d=11$ supergravity gives the massless sector of the IIA string theory. A second clue~\cite{Duff:1987bx} was the existence of 2-brane solutions in $d=11$ supergravity. In KK compactification, a 2-brane with one direction wrapped on the KK circle behaves exactly like a IIA string. The IIA D-branes also play essential roles~\cite{Hull:1994ys,Townsend:1995kk,Witten:1995ex}. The D0-brane, a massive particle, corresponds to the charged states in the KK compactification, while the D2-brane is the $d=11$ 2-brane with both directions orthogonal to the IIA circle. The higher-dimensional D-branes each have their $d=11$ interpretation in turn.
These arguments suggest the existence of an eleven-dimensional theory of quantum gravity, whose low energy limit is $d=11$ supergravity. The maximum dimension for a weakly coupled, Lorentz-invariant theory of strings is $d=10$. The perturbative expansion of the IIA amplitudes in powers of $g_{\rm s}$ is an expansion around the zero-radius limit of the KK compactified $d=11$ theory, so the eleventh dimension is not visible at weak coupling. The $d=11$ theory is not a perturbative string theory, but it is one limit of a theory of quantum gravity, which includes string theories as other limits. The full form of this quantum theory is not yet known; it has been given the provisional name of M theory~\cite{Witten:1998uk}. (Sometimes M theory is used specifically for the $d=11$ theory, and the branes in this theory are termed M-branes.)
The type I string does have a D1-brane~\cite{Polchinski:1995df}, which is heterotic: supersymmetric degrees of freedom move along it in one direction, and gauge degrees of freedom move in the other direction. These are the same properties as for the fundamental string of the heterotic theory. Moreover the gauge group of the type I theory is $SO(32)$, the same as for one of the two heterotic theories, and so we identify the type I and $SO(32)$ heterotic theories as weak/strong duals of one another.
The remaining string theory, the $E_8 \times E_8$ heterotic theory, took the longest to sort out~\cite{Horava:1995qa}, but by using its $T$-duality to the $SO(32)$ theory, and following a further chain of dualities, one can show that it is also dual to a compactification of the $d=11$ theory. Normal KK compactification on a circle gives the IIA theory, but it turns out that compactification on a line segment, with a boundary at either end, is also consistent and gives the $E_8 \times E_8$ heterotic theory.
In all, this is a rather nontrivial pattern: two string theories are $S$-dual to each other, one is $S$-dual to itself, and two are $S$-dual to compactifications of a $d=11$ theory. Moreover, by combining this with connections via $T$-duality described earlier, one sees that one can get from any string theory to any other by a series of $T$- and $S$-dualities. The situation is shown schematically in the figure.
\begin{figure}[!ht]
\begin{center}
\vspace {-5pt}
\includegraphics[width=6in]{buckskin.pdf}
\end{center}
\vspace {-10pt}
\caption{Schematic duality diagram. A two dimensional subspace of supersymmetric string vacua, with the limits corresponding to the five string theories and $d=11$ supergravity labeled.
}
\label{fig:radii}
\end{figure}
This depicts a two-dimensional slice through the space parameterized by the string coupling and the radii of compact dimensions, and emphasizes the nature of string theory and the $d=11$ theory as limits of some single quantum theory. Since string theory contains gravity, one might have thought that the strongly coupled limit would involve strong gravity, with the metric fluctuating wildly. The dualities mean that if we try to reach such an extreme, we find instead a dual picture in which things become classical again.
There is an important point to emphasize here. We have been referring to the different points within the gray region as different theories. However, they are distinguished, not by the values of fixed constants, but by the values of background fields. This is clear for the radii of dimensions, which arise from the metric, but it is also true for the string coupling, which is determined by the value of the dilaton field. Thus, the figure depicts a single quantum theory with many backgrounds (vacua), and in certain limits the physics in the given background is well-described by one or another weakly coupled theory.
In QFT a theory is first characterized by the choice of fields and symmetries, with an infinite number of options. In string theory, we only have five choices for the kind of string, and duality shows them all to be equivalent. But secondly, in QFT for each set of fields and symmetries there may be a continuous infinity of possible coupling constants. In string theory, this second infinity comes from the Hilbert space. It is a property of the state rather than the theory. This is not a consequence of duality per se. Rather it reflects the Einstein-Kaluza-Klein principle that physics emerges from geometry.
The space of vacua shown in the figure is a tiny piece of the landscape of string theory. The landscape follows from a rather simple train of thought.
General relativity describes gravity as the curvature of space and time. In a unified theory, we would expect this to be true for the other forces as well. But for this we need `more spacetime,' essentially more dimensions: this is Kaluza-Klein theory. String theory requires extra dimensions for an additional reason, the consistent quantization of the string. In string theory the Kaluza-Klein mechanism can be transmuted into many other forms by dualities, but the central idea remains, that the physics that we see reflects the spacetime geometry. The equations of general relativity in higher dimensions have multiple solutions, corresponding to multiple vacua in the quantum theory. The vacua in the figure are all degenerate, a consequence of the large amount of supersymmetry, but for nonsupersymmetric solutions there is a potential energy that leads to isolated solutions. Rather than the two dimensions shown in the figure, topologically complex solutions are characterized by hundreds of background fields or more, and combinatorics can generate a vast number of solutions.
The physics that we see directly depends on the shape of the compact directions, not just on the underlying equations of string theory. The latter seem to be unique, but the former are far from unique. The uniqueness of the equations is what one would hope for from a complete theory. The nonuniqueness of the solutions is again in keeping with the general pattern in physics, that a few principles can define a theory, but these then give rise to a vast range of different phenomena. The consequence here is that things that we had hoped to predict, basic constants and spectra of low energy physics, depend on which of the many vacua we are in. But there is some evidence that this is the way things are. Weinberg argued in 1987 that such a multi-vacuum theory would explain the absence of a large cosmological constant and predict a small nonzero one~\cite{Weinberg:1987dv}, and after 27 years and the discovery of vacuum energy there is still no significant competing idea.
We should note one important difference between string theory and field theory. In field theory, we have the Hamiltonian~(\ref{ham}), and we have the corresponding perturbative expansion of the amplitudes~(\ref{perts}). In string theory we had for a long time only the analog of~(\ref{perts}), the sum over string world-sheets. The dualities are deduced, not from a full understanding of the exact theory, but from parts of it that are determined by general principles plus supersymmetry. In QFT, the path integral as defined by Wilson tells us what is the exact theory. We might try to copy this in string theory, in the form of an infinite component QFT known as string field theory, but this has not been as fruitful as hoped and another approach seems needed. The string-string dualities do give us some more global picture of the theory, and we will see in the next section that further dualities give a much fuller understanding.
\sect{QFT/string dualities}
Dualities between field theories, and dualities between string theories, are remarkable, taking QFT and string theory far beyond their perturbative descriptions. A duality between a field theory and a string theory might seem to be impossible, on several grounds. String theories require ten dimensions, whereas renormalizable field theories do not seem to exist in dimensions greater than four (though some non-Lagrangian theories exist up to dimension six). String theories seem to contain many more degrees of freedom than QFT's, from the infinite number of internal states of the string. And, string theories contain quantum gravity, with its many conceptual puzzles, while renormalizable QFT's do not. Well, prepare to be amazed.
\subsection{Matrix theory}
The short distance problem of quantum gravity points to a minimum spacetime distance. This is suggestive of some sort of uncertainty principle for position,
\begin{equation}
[ X^i, X^j ] \neq 0 \,, \label{xxcom}
\end{equation}
extending the Heisenberg relation~$[ p^i, x^j ] = -i\hbar\,\delta^{ij}$. Here $i, j$ are spatial indices; we will get to Lorentz invariance below. Now, if we try to copy the Heisenberg relation and put some constant on the right side of~(\ref{xxcom}), we have a problem with rotation invariance: the left-hand side has two antisymmetric spatial indices, so a constant of this form would not be rotationally invariant. So we will try a different approach. We will just assume that the $X^i$ are $N \times N$ matrices in some unspecified space, and figure out what this means later.
We want to have ordinary commutative space at low energies, so we will include in the Hamiltonian a simple term that enforces this. We take
\begin{equation}
H = {\rm Tr}\left( \dot X^i \dot X^i - [X^i, X^j] [X^i, X^j]\right) \,. \label{mham}
\end{equation}
For convenience, all coefficients have been set to 1.
The first term in $H$ is a kinetic energy; expanding out the trace, one gets the sums of the squares of the time derivatives of all of the matrix components (we use summation convention for the repeated spatial indices). The second term vanishes only if all the $X^i$ matrices commute. In this limit, we can diagonalize these matrices, and the diagonal elements just give us $N$ free particles. However, at high energy the full noncommutative structure comes into play. In addition we have to introduce supersymmetry, for the usual reason of stability, so we add in fermionic coordinates $\Theta$ in an appropriate way. This adds one more term to $H$, a Yukawa interaction $\Theta \Theta X$.
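The key property of the commutator-squared potential can be checked numerically. In the sketch below (a toy construction of our own), the potential is evaluated as $-{\rm Tr}\,[X^i,X^j][X^i,X^j]$, which for hermitian matrices is real and nonnegative; it vanishes for commuting (diagonal) configurations and is positive for generic noncommuting ones.

```python
# Matrix-model potential sketch: vanishes iff the X^i commute.
import numpy as np

def potential(Xs):
    # -Tr [X^i, X^j][X^i, X^j], summed over i, j.  Commutators of
    # hermitian matrices are antihermitian, so each trace term is
    # real and nonpositive before the overall sign flip.
    V = 0.0
    for i in range(len(Xs)):
        for j in range(len(Xs)):
            C = Xs[i] @ Xs[j] - Xs[j] @ Xs[i]
            V += -np.trace(C @ C).real
    return V

rng = np.random.default_rng(1)

# Commuting configuration: diagonal matrices = N free particles.
diag = [np.diag(rng.normal(size=4)) for _ in range(3)]
assert abs(potential(diag)) < 1e-12

# Generic hermitian matrices do not commute: positive potential.
def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

generic = [random_hermitian(4) for _ in range(3)]
assert potential(generic) > 0.0
print("potential vanishes only on commuting configurations")
```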
It might seem that we have just made a toy model of noncommutative spacetime, but in fact we are done: the quantum mechanical model we have described is a dual description of M theory! This Hamiltonian first made its appearance in attempts to quantize the M2-brane as a fundamental object, by analogy to the quantization of the string in string theory~\cite{de Wit:1988ig}. The M2-brane, having one more dimension than the string, has more internal degrees of freedom and these give rise to more divergences; the finite value of $N$ corresponds to a regulated version of the M2-brane. Of the eleven dimensions of M theory, nine come from the matrices $X^i$ with $1 \leq i \leq 9$; this is the maximum number allowed by the supersymmetry. Time provides a tenth dimension, and the eleventh is hidden here in Fourier-transformed form: $N$ is the number of units of momentum in this direction. The quantization of the membrane is in light-cone gauge, which treats the dimensions in this asymmetric way.\footnote{There is a covariant form of Matrix theory, based on ten matrices $X^\mu$, which is supposed to describe IIB string theory~\cite{Ishibashi:1996xs}. However, its full interpretation is not clear. One issue is that time, which is one of the ten matrices, must be Euclidean.}
The full significance of this Hamiltonian was identified by Banks, Fischler, Shenker, and Susskind~\cite{Banks:1996vh}: it describes not just the membrane, but the whole of M theory, at a particular location in the landscape of Fig.~1. In particular, the Hamiltonian is precisely the Hamiltonian of D0-branes, which are interpreted as eleven-dimensional gravitons in the IIA-M $S$-duality. The BFSS Matrix theory corresponds to M theory with one of its dimensions compactified in a null direction, and with $N$ units of momentum in this direction. But even better, it allows one to describe the quantum theory of M theory in eleven uncompactified dimensions as well. By taking $N \to \infty$, one is essentially taking the compact dimension to be large, in the rest frame of the matter, and in the limit one gets the noncompact theory. This is the upper point in the duality diagram, but unlike the supergravity description, which captures only low energy physics, the Matrix theory description is complete.
This means that if we want the $S$-matrix in M theory, we need simply solve the quantum mechanics problem~(\ref{mham}) and take the limit $N \to \infty$. Now, this is hard, but it is something that one could program a very large computer to do, and so it provides an algorithmic definition of the theory. In fact, similar calculations are being done~\cite{Hanada:2013rga}. They are near the limit of current technology, but show impressive agreement between matrix quantum mechanics and quantum gravity.
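To make concrete the kind of object such a computation manipulates, here is a minimal classical sketch (my own illustration, not the lattice Monte Carlo of Ref.~\cite{Hanada:2013rga}) of the bosonic commutator-squared potential ${\rm Tr}([X^i,X^j][X^i,X^j])$ of the matrix model: generic Hermitian matrices cost potential energy, while mutually commuting matrices are flat directions of zero energy.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 9  # matrix size and number of spatial matrices X^i

def potential(X):
    """Bosonic matrix-model potential: sum over i<j of -Tr([X^i,X^j]^2).

    For Hermitian X the commutator C is anti-Hermitian, so
    -Tr(C^2) = Tr(C C^dagger) >= 0: the potential is non-negative.
    """
    V = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            C = X[i] @ X[j] - X[j] @ X[i]
            V += -np.trace(C @ C).real
    return V

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

# Generic matrices: strictly positive potential energy.
X_generic = [random_hermitian(N) for _ in range(d)]
print(potential(X_generic) > 0)        # True

# Commuting (diagonal) matrices: a flat direction, V = 0.
X_flat = [np.diag(rng.normal(size=N)) for _ in range(d)]
print(abs(potential(X_flat)) < 1e-12)  # True
```

The flat directions are the classical origin of the spacetime interpretation: simultaneously diagonalizable matrices describe $N$ well-separated D0-branes with definite positions.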
The simple quantum model that we have described thus provided something that was previously lacking, a nonperturbative construction of at least part of the landscape of vacua of string theory~\cite{Banks:1996vh}. One challenge that remains here is that if one compactifies some of the dimensions (to get down to the four noncompact dimensions of our vacuum), Matrix theory becomes more complicated, and if more than three dimensions are compactified its form is not known. Other challenges will be discussed below.
Regarding our earlier puzzle about the mismatch in the number of degrees of freedom, the key is that the full duality emerges in the $N \to \infty$ limit, and one can put as many degrees of freedom as one wants in an infinite dimensional matrix. This seems like a truism, but the miracle is that the simple Hamiltonian~(\ref{mham}) is all that it takes to do the encoding. To be precise, this is a string-QM duality, not a string-QFT duality. But for uniformity with other examples that we will soon see, we should regard it as a 0+1 dimensional field theory, the basic quantum variables $X$, $\Theta$ being functions only of $t$.
\subsection{Black hole quantum mechanics}
We have seen that one thought experiment, the string in a small box, was exceedingly productive. For quantum gravity, black holes turn out to be another valuable laboratory, leading to the entropy puzzle and the information paradox.
\subsubsection{Black hole entropy}
By considering a process of feeding quantum bits into a black hole, Bekenstein argued that black holes have a well-defined information-carrying capacity. By the uncertainty principle, it requires an energy $\hbar c/R$ to fit a quantum into a black hole of radius $R$. Using the mass-radius relation for a black hole, $M = c^2R/2G$, one finds that a black hole of radius $R$ can contain of order $c^3 R^2/\hbar G$ bits of information.
This argument was reinforced by the discovery of Hawking radiation~\cite{Hawking:1974sw}. Black holes radiate like a hot body with a temperature of order $\hbar c/R k_{\rm B}$. By thermodynamic relations this translates into an entropy
\begin{equation}
\frac{S}{k_{\rm B}} = \frac{\pi c^3 R^2}{\hbar G} = \frac{A}{4 l_{\rm P}^2} \,. \label{bhe}
\end{equation}
In the last form, $A$ is the area of the black hole horizon, and $l_{\rm P}$ is the Planck length $1.6 \times 10^{-33}$ cm. In statistical mechanics, the entropy is a count of the effective degrees of freedom of a system, and the number of possible microscopic states is the exponential of this. Since $l_{\rm P}$ is very small, the number of states is very large, for macroscopic $A$.
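To get a feeling for these numbers, one can evaluate Eq.~(\ref{bhe}) for a solar-mass Schwarzschild black hole. This is only an order-of-magnitude sketch using SI values of the constants, with the standard factor-of-two conventions $R = 2GM/c^2$:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m / s
hbar = 1.055e-34   # J s
M_sun = 1.989e30   # kg

R = 2 * G * M_sun / c**2                      # Schwarzschild radius, ~3 km
S_over_k = math.pi * c**3 * R**2 / (hbar * G)

# The same number from the area form A / (4 l_P^2).
l_P = math.sqrt(hbar * G / c**3)              # Planck length, ~1.6e-35 m
A = 4 * math.pi * R**2
S_area = A / (4 * l_P**2)

print(f"R       = {R:.3e} m")                 # ~ 3e3 m
print(f"S/k     = {S_over_k:.3e}")            # ~ 1e77
print(f"ratio   = {S_over_k / S_area:.6f}")   # the two forms agree
```

The count $e^{S/k} \sim e^{10^{77}}$ is what any microscopic theory of the black hole must reproduce.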
This is a puzzle: what is the black hole entropy counting? In general relativity, every black hole of given mass (and charge and angular momentum) looks exactly the same --- black holes have no hair. But in quantum theory, they seem to have a microscopic structure. Heuristically, temperature normally comes from the motion of atoms, so black holes should have some sort of `atomic' structure.
The first concrete description of the degrees of freedom of a black hole came from the work of Strominger and Vafa~\cite{Strominger:1996sh}. They considered a thought experiment in which one starts with a black brane (like a black hole, but extended in some directions) and then reduces the strength of the gravitational interaction. As we have noted, in string theory the coupling constant is the value of a field, the dilaton, and one can imagine reducing this in a gradual way, until the black brane is no longer black and we can see what is inside.
To do this in a controlled way, one wants to take a BPS black brane, so that one can keep track of its mass during the process. For a particular class of black branes, what one finds inside is D-branes. To understand this, recall that the supersymmetry algebra~(\ref{centralcharge}) contains a conserved charge $G$, and BPS objects carry this charge. For D-branes, a key characteristic is that they carry Ramond-Ramond (RR) charge~\cite{Polchinski:1995mt}. This kind of charge is not carried by fundamental strings themselves, but for any D-brane, one can write down a `black' gravitational solution with the same number of extended directions and carrying the same charge~\cite{Horowitz:1991cd}. This solution has a horizon and a singularity, and the Ramond-Ramond field lines emanate from the singularity.
Having two BPS objects with the same charge was a bit puzzling, until Ref.~\cite{Strominger:1996sh} observed that they were the weak and strong coupling descriptions of the same object. Moreover, the weakly coupled D-brane picture allows an explicit count of the states of the system, and it agrees with that deduced by Bekenstein and Hawking.
This subject, the count of black hole states, has continued to give insight into the structure of quantum gravity. It points to a connection between spacetime geometry (the black hole area) and information which remains tantalizing and has been the subject of much recent work. A striking property of the entropy formula~(\ref{bhe}) is that it is proportional to the black hole's area, not its volume. For ordinary systems the number of atoms goes as the volume. Here it is as though they are spread out on the horizon of the black hole, with a density of order one per Planck area. Moreover, in any gravitating system, most states are black holes (if one tries to excite a lot of degrees of freedom, they will gravitationally collapse). This suggests the {\it holographic principle,} that in quantum gravity the fundamental degrees of freedom for any region live on its surface, not in its volume~\cite{'tHooft:1993gx,Susskind:1994vu}.
\subsubsection{The information paradox}
The large number of black hole microstates reflects the fact that the black hole might be formed in many different ways, by collapsing matter from many possible initial quantum states. Once this matter is behind the event horizon, causality prevents it from influencing anything outside the black hole. In particular, the state of the Hawking radiation cannot depend on the state of the infalling matter. But this implies irreversibility: after the black hole evaporates so that only the Hawking radiation remains, the final state of the system is essentially independent of the initial state~\cite{Hawking:1976ra}. In this sense, information is lost.
This is inconsistent with normal quantum mechanical time evolution,
\begin{equation}
i \partial_t \psi = H \psi \,.
\end{equation}
This Schr\"odinger form holds not just in quantum mechanics, but also in quantum field theory and in string perturbation theory. Such a differential equation, with a self-adjoint $H$, can be integrated backwards in time as easily as forward, and so the final state determines the initial state uniquely. Hawking therefore proposed a more general evolution law~\cite{Hawking:1976ra}, the dollar matrix.
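The reversibility being invoked here is easy to illustrate in a finite-dimensional toy model (a random self-adjoint $H$, nothing specific to gravity): the evolution operator $U(t) = e^{-iHt}$ is unitary, so it preserves norms and can be run backwards exactly, recovering the initial state.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A random self-adjoint Hamiltonian and its unitary evolution operator.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)                    # spectral decomposition of H

def U(t):
    """U(t) = exp(-i H t), built from the eigendecomposition."""
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

psi_t = U(5.0) @ psi0                       # evolve forward in time
psi_back = U(-5.0) @ psi_t                  # integrate backwards

print(np.linalg.norm(psi_t))                # norm preserved, ~ 1
print(np.linalg.norm(psi_back - psi0))      # initial state recovered, ~ 0
```

Because the final state determines the initial state uniquely, no such evolution can erase the information about what fell into the black hole; this is exactly the property Hawking's proposal gives up.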
This is a true paradox. It allows a number of alternatives, all of them seemingly unpalatable: (a) information is lost in the way that Hawking argued;
(b) information is carried away by the outgoing Hawking radiation; (c) the black hole does not evaporate completely, but ends in a Planck-sized remnant with an enormous number of internal states. We will not review here all of the arguments in various directions, but will list a major criticism of each. On (a), it is argued that dollar matrix evolution is not generated by a Hamiltonian, so the link between time-translation invariance and energy conservation provided by Noether's theorem is lost, and energy conservation is violated in a radical way~\cite{Banks:1983by}. On (b), information needs to travel faster than the speed of light. This is not just the apparent nonlocality of the EPR effect, as its effect would be seen in actual measurements of the Hawking radiation. On (c), the large number of states is inconsistent with the entropy formula~(\ref{bhe}), which relates the number of states of a black hole to its size, and it may lead to uncontrolled virtual effects.
In fact, Matrix theory already tells us the answer~\cite{Banks:1996vh}. The $S$-matrix defined by Matrix theory includes processes in which a black hole forms in a high energy collision and then evaporates. Matrix theory is an ordinary quantum mechanical system, so Schr\"odinger evolution holds and information is not lost. Moreover, the quantum mechanical spectrum does not have room for the states of a remnant. So the answer must be (b).
This argument is now accepted, at the same level of belief as all the other dualities, but it was not immediately appreciated. It was not initially clear that Matrix theory represents a full-fledged duality. For one thing, the large-$N$ limit involved is somewhat unfamiliar, and the physical interpretation of the finite-$N$ theory is a bit exotic~\cite{Susskind:1997cw}. So work on the information paradox continued, and was extremely fruitful.
\subsection{AdS/CFT duality}
We have noted that we can move back and forth between D-branes and black branes by dialing the coupling up and down. Since D-branes seem to have a normal quantum mechanical description, this suggests that black branes should as well. In order to make this more precise, after the agreement of entropies was found the dynamical properties of D- and black branes were compared, i.e.\ the amplitudes for the branes to absorb and emit radiation.
A series of unexpected coincidences was found, where very different calculations gave the same answer --- an operator matrix element on the D-brane side, and a curved spacetime propagator on the black brane side. Primed by the previous examples, the reader will suspect a duality, and this was crystallized by Maldacena~\cite{Maldacena:1997re}.
Maldacena's insight was that one obtains a duality if one takes the low energy limit of the D/black brane. As we have noted earlier, the low energy limit of the physics on a D-brane is a QFT, supersymmetric Yang-Mills theory. The low energy limit of the black brane isolates the geometry near the horizon, because the gravitational redshift blows up there.\footnote{On both sides there are also massless gravitons moving in the bulk, but these decouple because they are propagating in a larger number of noncompact dimensions.} The case that has received the most attention is 3-branes. On the gauge side, one gets the ${\cal N}=4$ theory discussed earlier. On the black side, the near-horizon geometry is $AdS_5 \times S^5$. The claim is that the $d=4$, ${\cal N}=4$ $SU(N)$ Yang-Mills theory is dual to the IIB string on $AdS_5 \times S^5$. (The remaining $U(1)$ from $U(N)$ decouples.) Here $AdS$ denotes anti-de Sitter spacetime, a solution to Einstein's equation that once was rather exotic, but whose special properties are essential to the duality.
An immediate check is that the symmetries match. The $AdS_5 \times S^5$ spacetime has the geometrical symmetry $SO(4,2) \times SO(6)$. In the QFT, the $SO(4,2)$ emerges from the Poincar\'e symmetry plus conformal symmetry, the latter being a special property of the ${\cal N}=4$ theory, and the $SO(6)$ is a global symmetry of this theory. The duality means that there is a 1-1 correspondence in the spectrum, with equality of observables. The simplest observables in a QFT are expectation values of products of local operators. In the string theory, these are dual to perturbations of the boundary conditions on $AdS_5$~\cite{GKP,W}. In quantum gravitational theories, it is difficult to find simple observables --- because of the coordinate invariance and spacetime dynamics, one must specify where the observable is by including effectively a physical system of clocks and rods. But anti-de Sitter space is rather special, in that it has a boundary where the quantum fluctuations go to zero, and observables there (such as perturbations of the boundary conditions) can be simple.
There are two parameters on each side of the duality. In the Yang-Mills theory there is the coupling $g_{\rm YM}^2$ and the rank $N$ of the gauge theory. In the string theory there is the string coupling $g_{\rm s}$ and the number $N$ of units of RR flux on the $S^5$. The $N$'s are the same, and the couplings are related by $g_{\rm YM}^2 = 4\pi g_{\rm s}$. It is useful to relate these parameters to the three characteristic length scales in the string theory description, the Planck length $l_{\rm P}$, the string length $l_{\rm s}$, and the curvature radius $l_{\rm AdS}$ of the $AdS_5$ and $S^5$:
\begin{equation}
\frac{l_{\rm AdS}}{l_{\rm P}} \sim N^{1/4} \,, \quad \frac{l_{\rm AdS}}{l_{\rm s}} \sim ( g_{\rm s}N)^{1/4} \,.
\end{equation}
In order for a classical gravitational description of the AdS spacetime to be valid, we need both of these ratios to be large, so both the rank $N$ of the gauge group and the combination $\lambda = g_{\rm YM}^2 N \sim g_{\rm s} N$ must be large. It was pointed out by 't Hooft~\cite{'tHooft:1973jz} that $\lambda$ is the parameter that controls the validity of the QFT perturbation theory: the dominant graphs at each loop order have one power of $N$ for each power of $g_{\rm YM}^2$. So for large $N$, if $\lambda \ll 1$ then the weakly coupled QFT is the good description, and if $\lambda \gg 1$ then the string theory in AdS is the good description.
As an aside, the reader might note that we have proposed two duals for strongly coupled $d=4$, ${\cal N}=4$ gauge theory --- here the $AdS_5 \times S^5$ IIB theory, while in section 2.5 the original gauge theory but in magnetic variables. The point is that there are three descriptions, with the magnetic one holding at very large $\lambda$. The original QFT is good if $g_{\rm YM}^2 \ll 1/N$, so the dual QFT is good if $1/g_{\rm YM}^2 \ll 1/N$ meaning that $g_{\rm YM}^2 \gg N$. In between, $1/N \ll g_{\rm YM}^2 \ll N$, the AdS string theory is the good description.
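This bookkeeping of the three descriptions can be summarized in a schematic sketch. The thresholds at $\lambda \sim 1$ are order-of-magnitude crossovers, not sharp boundaries, so the function below is only an illustration of the parameter regimes just described:

```python
def best_description(g2, N):
    """Schematic regime classifier for d=4, N=4 SU(N) Yang-Mills.

    g2 is the Yang-Mills coupling g_YM^2, N is the rank.  Under S-duality
    the dual theory has coupling ~ 1/g2, hence 't Hooft coupling ~ N/g2.
    """
    lam = g2 * N           # 't Hooft coupling of the electric theory
    lam_dual = N / g2      # 't Hooft coupling of the magnetic (S-dual) theory
    if lam < 1:
        return "electric QFT perturbation theory"
    if lam_dual < 1:
        return "magnetic (S-dual) QFT perturbation theory"
    return "IIB string on AdS5 x S5"

for g2, N in [(1e-4, 100), (1e4, 100), (1.0, 100)]:
    print(f"g_YM^2 = {g2:g}, N = {N}: {best_description(g2, N)}")
```

For $N = 100$ this reproduces the three windows in the text: $g_{\rm YM}^2 \ll 1/N$ electric, $g_{\rm YM}^2 \gg N$ magnetic, and the AdS description in between.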
't Hooft also noted that the graphs that dominate at each order have a special structure. They are planar, meaning that they can be drawn on a two-dimensional surface without crossing of lines. This is suggestive of a string world-sheet, and 't Hooft proposed that the large-$N$ theory might be rewritten as a string theory~\cite{'tHooft:1973jz}. This idea was tantalizing; see for example Ref.~\cite{Polyakov:1987ez} for further exploration of this. One surprise about the final form of the duality is that it involves the exact same string theory that describes quantum gravity, but that the dual spacetime $AdS_5 \times S^5$ is very different from that in which the gauge theory lives. The unusual properties of $AdS_5$, in particular its gravitational warping, explain why there is no graviton in the 4-dimensional spectrum. 't Hooft's connection with planar gauge theory is related to the existence of strings in the dual theory.
The QFT also contains the duals of black holes: they are thermal equilibrium states of high energy. There is a phase transition in the QFT~\cite{Witten:1998zw}. At low temperature the entropy is of order $N^0$. This corresponds to a gas of gravitons. At high temperature the entropy is of order $N^2$. Tracing through the parameters, this matches the entropy formula~(\ref{bhe}), generalized to ten dimensions. In particular it reproduces the distinctive factor of $1/\hbar$, which is special to the black hole entropy.
This is consistent with the intuition that black holes should behave like ordinary thermal objects: they are literally dual to a thermal state. Further, AdS/CFT, like Matrix theory, can be used to argue for conservation of information. One can consider a black hole forming and evaporating within an AdS box, and this has a QFT description in terms of ordinary quantum evolution.
\subsection{Gauge/gravity duality}
If we apply the same reasoning to D$p$-branes for $p \neq 3$, one again obtains a duality~\cite{Itzhaki:1998dd}. The gauge theory is no longer conformally invariant, and the gravitational space has less symmetry as well. Thus, the label AdS/CFT used for the D3 and other early examples is too special, and gauge/gravity duality is the more general term (though this still may not be general enough, perhaps QFT/gravity would be more encompassing).
Note the analogy between the commutator-squared potential ${\rm Tr} ( [X^i, X^j] [X^i, X^j] )$ that was the key to Matrix theory and the quartic interaction ${\rm Tr} ( [ A_\mu , A_\nu] [ A_\mu , A_\nu])$ in Yang-Mills theory. This is more than a coincidence. Matrix theory is the quantum mechanics of D0-branes, and the $AdS_5 \times S^5$ duality arises from the low energy limit of D3-branes. These D-branes are related by $T$-duality, so Matrix theory and AdS/CFT are both part of a larger web of gauge/gravity dualities.
Like Matrix theory, AdS/CFT provides a nonperturbative definition of quantum gravity and string theory.
Recalling the discussion of observables, the spacetime of the QFT is identified with the boundary of anti-de Sitter space. Thus, AdS/CFT is a precise realization of the holographic principle: the nonperturbative variables live on the boundary of the space of interest.
It is sometimes suggested that AdS/CFT might only be an approximate duality; see for example~\cite{Gary:2011kk}. At one level it is difficult to see how this could be. The CFT is a complete and well-defined quantum field theory. It contains states that can be identified with gravitons, strings, and black holes in AdS spacetime, and many of the detailed properties agree with those expected for these particles. What quantum theory could be so similar to quantum gravity (or, more precisely, the IIB string) without actually being quantum gravity?
In the D3 duality, the QFT is a 3+1 dimensional gauge theory. These of course are interesting for many other reasons, and gauge/gravity duality is a new tool for understanding them. When the 't Hooft parameter is large, we cannot use perturbation theory, but now we can use the dual gravitational description to calculate.
QCD itself, the theory that we would like to solve, will not have any simple dual. The problem is that the coupling is large only in a narrow range around the confinement scale. At higher energies it is weak because of asymptotic freedom, and at lower energies there are no degrees of freedom. Thus, there is not much room for a duality to operate. By contrast, the coupling does not run in the ${\cal N}=4$ theory, so the coupling can be strong at all scales. One can deform the conformal ${\cal N}=4$ theory to break some of its supersymmetry, and if enough is broken the theory would be expected to confine. Indeed, this is what one finds on the gravitational side. One can even take a limit that should give true QCD, but it is complicated and will involve spacetimes of large curvature.
Surprisingly, even the conformal AdS/CFT duality seems to capture some features of the strongly interacting states that are produced at heavy ion accelerators~\cite{Kovtun:2004de}, and which are difficult to understand from any other point of view. Gauge/gravity duality is also used to model strongly coupled phases in condensed matter systems~\cite{Hartnoll:2009sz}. As with QCD, the theories that have sharp duals are always idealizations, and one must work to distinguish universal properties from artifacts of the model, but these dualities allow the study of a range of phenomena in a novel way, orthogonal to standard methods.
Gauge/gravity duality, starting from the QFT side, is an example of emergent gravity. In \S2 we discussed emergent gauge theory, so emergent gravity may not be such a surprise. However, it took much longer to realize. A celebrated no-go theorem~\cite{Weinberg:1980kq} suggested that it would be impossible. The essence of this theorem is a remark we made earlier, about the paucity of observables in quantum gravity: QFT's without gravity have many more observables, so there was a mismatch. As with other powerful but ultimately falsified no-go theorems such as~\cite{Coleman:1967ad}, it is evaded but only due to rich and unexpected new ideas. For emergent gravity via gauge/gravity duality, it was necessary that new dimensions emerge as well. The mismatch of observables is resolved as we have discussed by identifying the QFT with the boundary of the gravitational space. And along with gravity and spacetime dimensions, strings, or more generally M theory, emerge as well.
\sect{Discussion}
\subsection{Open questions}
Duality has given us a vastly larger and more precise picture of string/M theory. Perturbation theory covered only small neighborhoods of the cusps of the duality diagram. String/string dualities let us see the whole picture. Gauge/gravity dualities then provided exact descriptions of special regions, and joined QFT into the web.
What we are still missing is a global definition of M theory. The dual constructions are limited to theories in special spaces such as AdS, which have visible boundaries. Extending this to cosmological spacetimes such as our own is a large step. One attempt is dS/CFT~\cite{Strominger:2001pn,Witten:2001kn}, in which the future infinity of de Sitter replaces the boundary of AdS, and other frameworks are being explored as well~\cite{Alishahiha:2004md}, but no clear success has emerged. Related to this is the problem of extending the holographic principle in a precise way to situations where there are not special boundaries.
Black hole quantum mechanics has been a fruitful laboratory, and we may have more to learn from it. Gauge/gravity duality tells us that information is carried away by the Hawking radiation, which seems tantamount to traveling faster than light, but the duality does not tell us how this works. The recent black hole firewall paradox~\cite{Almheiri:2012rt} sharpens the issue. It had been widely believed that no single observer would see any violation of the ordinary laws of physics, but this seems to lead to a contradiction: if all is normal for observers outside the black hole, then an infalling observer sees something very different from a smooth event horizon.
Most attempts to avoid this conclusion relax the rules of quantum mechanics --- not in the way that Hawking proposed, which would be visible to an external observer, but in the description of the infalling observer. At this point the subject is much like the original information paradox before gauge/gravity duality. Theorists are trying out different scenarios, in which one or another assumption is relaxed, but a convincing theory has yet to emerge. Gauge/gravity duality itself has been rather impotent here. It gives a precise, and as far as we can tell complete, description of measurements that can be made by an observer at infinity. It had been widely believed that one could infer from the QFT a description of the black hole interior as well, but the firewall argument has challenged this. It seems that the dual QFT does not tell us whether the firewall is there in the gravitational theory, or perhaps we do not yet have the dictionary needed to interpret it.
We have repeatedly discussed emergent space, but what of emergent time? In all gauge-gravity duals (except the poorly understood IKKT model~\cite{Ishibashi:1996xs}), time is already present on the QFT side. Even so, there is some measure of emergent time: due to relativity, there is not a direct identification between time in the QFT and time in the gravitational bulk. But the understanding of this is far from precise. An interesting recent direction has been the study of holographic entanglement entropy~\cite{Nishioka:2009un}, relating spacetime geometry to entanglement in the dual \mbox{QFT}. It is not yet clear what the ultimate lesson will be, but this brings together ideas from several fields, and is being pursued vigorously.
On the list of unsolved problems, one should mention the understanding of string vacua like ours, with broken supersymmetry. There are some constructions of these, but they rest on multiple approximations and no exact theory. For example, the value of the cosmological constant in such a vacuum should be calculable algorithmically to essentially perfect accuracy (even if this is not a practical thing to do). In a complete theory there must be a prescription for this.
The problems of the black hole interior, and of nonsupersymmetric vacua, both point to limits in our current understanding. The issue may be that we do not have an independent nonperturbative construction of string/M theory. Rather, at present we have a nonperturbative construction of the dual QFT, and various approximations on the string/M side. Perhaps, with improved technology, this will be enough, but it seems that we are still missing some big idea. Duality has revealed much of the fascinating mathematical/physics structure known as M theory, but its full form is still for us to discover.
\subsection{Fundamentals}
One should always be a bit agnostic about what is fundamental in the current understanding of the laws of nature. Future technical developments may completely change the landscape, and what looks central today might be a sideshow tomorrow.
{\it Symmetry} is a cautionary example. In the 1960's and 1970's, symmetry reigned, from the global $SU(3)$ of Gell-Mann and Ne'eman to the local $SU(3)\times SU(2) \times U(1)$ of the Standard Model, and beyond to grand unification and supersymmetry. In the context of the Standard Model, global $SU(3)$ is now seen as an accident, which appears in an approximate way because the scale of the quark masses happens to be somewhat less than the scale of quark confinement. Indeed, when this ratio is reversed a different approximate global symmetry appears instead~\cite{Isgur:1989vq}. The structure of the Standard Model is determined entirely by its gauge symmetries. It does have one global symmetry, $B-L$ (baryon number minus lepton number), but this again appears to be an accident: given the gauge symmetries and particle content, there are no renormalizable terms that could violate $B-L$.\footnote{Baryon number and lepton number separately are not symmetries of the Standard Model due to anomalies. Anomaly mediated $B$ and $L$ violation is believed to have been substantial in the early universe.} This point of view would suggest that a small violation, from virtual high energy effects, would eventually be seen, and the neutrino masses may be evidence for this. From more theoretical points of view, string theory appears to allow no exact global symmetries~\cite{Banks:1988yz}, and in any theory of quantum gravity virtual black holes might be expected to violate all global symmetries~\cite{Hawking:1974sw}.
Moreover, as we have already discussed in \S2, local (gauge) symmetries have been demoted as well, with the discovery of many and varied systems in which they emerge essentially from nowhere. It seems that local symmetry is common, not because it is a basic principle, but because when it does emerge it is rather robust: small perturbations generally do not destroy it. Indeed, it has long been realized that local symmetry is `not really a symmetry,' in that it acts trivially on all physical states. The latest nail in this coffin is gauge/gravity duality, in which general coordinate invariance emerges as well.
This leaves us in the rather disturbing position that no symmetry, global or local, should be fundamental (and we might include here even Poincar\'e invariance and supersymmetry). Susskind has made a distinction between the mathematics needed to write down the equations describing nature, and the mathematics needed to solve those equations~\cite{Suss}. Perhaps symmetry belongs only to the latter.\footnote{As an example, the $SO(4,2) \times SO(6)$ symmetry of the prototypical gauge/gravity duality is a symmetry of the $AdS_5 \times S^5$ solution to the bulk field equations, but there are many other solutions of lesser or no symmetry. On the CFT side this $SO(4,2) \times SO(6)$ is a symmetry of the Lagrangian. In QFT, different Lagrangians would be regarded as defining different theories, but here there is a larger structure where the CFT Lagrangian is contingent on the solution. One might think of it loosely as an `effective Lagrangian.'} Or perhaps this is taking things too far.
{\it Stringiness} is another example. After the early successes of string theory, a tempting direction was to seek a nonperturbative formulation of the theory in terms of the quantum mechanics of strings. But work in this direction, such as abstract conformal field theory and string field theory, seems not to have been so fruitful. Rather, strings have been a bridge to the discovery of M theory, in which they are found to emerge in some classical limits but not in others.
What then are we to make of the role of {\it duality} in recent developments? Let me take a pragmatic point of view. When we deal with a large quantum system, we have a lot of knobs to turn: couplings can be weak or strong, spaces can be large or small. When parameters are taken to extreme values, it might be that the physics becomes chaotic or turbulent and has no simple description. But in a large number of physical systems, it is possible to find some simplification that makes such limits tractable. And when the system is constrained by quantum mechanics, and Poincar\'e invariance, and possibly supersymmetry as well, then limits often have a classical interpretation. Moreover, it seems that these principles are so constraining that in many cases it is possible to reconstruct the whole quantum theory from such a limit.
The existence of dualities points to a great unity in the structure of theoretical physics. String-string dualities imply that there is a unique string/M theory. {\it Unique} here means that there are not even adjustable coupling constants, as in quantum field theory; rather, the apparent parameters are properties of the particular state (vacuum) of the system. Gauge/gravity dualities further imply that many QFT's can also be interpreted as vacua of string theory. As we have noted, a necessary condition for a QFT to have an interpretation in terms of a geometrical spacetime is that the number of fields be large.\footnote{There is some evidence that this condition, plus the condition that the QFT be strongly coupled in a certain precise technical sense (large anomalous dimensions), actually constitute sufficient conditions~\cite{Heemskerk:2009pn}.} However, there is no sharp cutoff on the number of fields, so QFT's with small numbers of fields should correspond to string theories in highly curved spaces. In this sense it may be that every QFT can be understood as a vacuum state of string/M theory.
When we identify the different duality frames with different fundamental descriptions, we are essentially saying that what is fundamental is what we see in the classical limit. This is strange, in a world that is intrinsically quantum mechanical (and what of theories that have no classical limit?). But our universal tool for constructing relativistic quantum field theories, the path integral, begins with a sum over classical histories. Somehow we do not have a fully quantum point of view, and this may be connected with our inability to derive the interesting dualities. We want to know what is the theory in the middle of the duality diagram.
Our understanding of what is fundamental has taken many twists and turns, some of which I have tried to describe. I have also tried to describe some of the methods and reasoning that have brought us to our current level of understanding. The immediate questions are, how do we go beyond this, and which of the current threads will be the productive ones?
\section{Acknowledgments}
I am grateful to many colleagues over the years, beginning with my advisor Stanley Mandelstam, for insights into duality. I would like to thank Stanley Deser, Dieter Luest, Fernando Quevedo, Eliezer Rabinovici, Lo\"ic Turban, and especially Bill Zajc for their comments on the manuscript. This work was supported in part by NSF grants PHY11-25915 and PHY13-16748.
\section{#1}}
\newcommand{\appen}{\addtocounter{section}{1} \setcounter{equation}{0}
\section*{Appendix \Alph{section}}}
\renewcommand{\theequation}{\arabic{equation}}
\textwidth 164mm
\textheight 214mm
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\parindent=0.7truecm
\parskip=0.2truecm
\begin{document}
\topmargin 0pt
\oddsidemargin=-0.4truecm
\evensidemargin=-0.4truecm
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newpage
\setcounter{page}{0}
\begin{titlepage}
\vspace{0.8cm}
\begin{center}
{\large RESONANT SPIN-FLAVOR PRECESSION OF NEUTRINOS AS
A POSSIBLE SOLUTION TO THE SOLAR NEUTRINO PROBLEM
\footnote{Talk given at the XII Moriond Workshop ``Massive Neutrinos.
Tests of Fundamental Symmetries'', Les Arcs, France, Jan. 25--Feb. 1,
1992}}\\
\vspace{0.4cm}
\begin{flushright}
SISSA Ref. 49/92/EP
\end{flushright}
\vspace{0.4cm}
{\large Eugueni Kh. Akhmedov
\footnote{on leave from Kurchatov Institute of Atomic
Energy, Moscow 123182, Russia}
\footnote{E-mail: akhmedov@tsmi19.sissa.it, ~akhm@jbivn.kiae.su}} \\
\vspace{0.2cm}
{\em Scuola Internazionale Superiore di Studi Avanzati\\
Strada Costiera 11, I-34014 Trieste, Italy} \\
\end{center}
\vspace{1.2cm}
\begin{abstract}
Recent developments of the resonant neutrino spin-flavor precession
scenario and its applications to the solar neutrino problem are reviewed.
We discuss in particular the possibilities of reconciliation
of strong time variations of the solar neutrino flux observed in the
Homestake ${}^{37}\!$Cl experiment with little or no time variation seen
in the Kamiokande II experiment.
\end{abstract}
\vspace{2cm}
\centerline{March 1992}
\vspace{.5cm}
\end{titlepage}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\newpage
\section{Introduction}
There are two issues in the solar neutrino problem: \\
{}~(1) the deficiency of solar neutrinos observed in the Homestake
\cite{Davis}, Kamiokande II \cite{Kam} and, most recently, SAGE
\cite{SAGE} experiments;\\
{}~(2) the time variation of the solar neutrino flux in anticorrelation with
solar activity (11-yr variations), for which there is a strong indication
in the chlorine experiment of Davis and his collaborators but which is
not seen in the Kamiokande data.
In this talk I will discuss mainly the second issue, with emphasis on
various possibilities of reconciling the strong time variations
in the Homestake experiment with the little or no variation in Kamiokande II.
The most natural explanation of the time variation of the solar neutrino
flux is related to the possible existence of large magnetic or electric
dipole moments of neutrinos, $\mu \sim 10^{-11}\mu_{B}$. As was pointed
out by Vysotsky, Voloshin and Okun (VVO) \cite{VV,VVO}, a strong toroidal
magnetic field $B_{\bot}$ in the convective zone of the sun could then
rotate left-handed electron neutrinos $\nu_{eL}$ into right-handed
$\nu_{eR}$, which escape detection. In periods of quiet sun the solar
magnetic field is much weaker and the neutrino spin precession is less
efficient, which explains the 11-yr variation of the neutrino flux.
Subsequently, it was noted \cite{VVO,BF} that matter effects can
suppress the neutrino spin precession. The reason is that
$\nu_{eL}$ and $\nu_{eR}$ are not degenerate in matter: $\nu_{eL}$
interact with the medium whereas $\nu_{eR}$ are sterile, and their energy
splitting reduces the precession probability. It was also shown
\cite{AKHM1} that, unlike in the case of the MSW effect, adiabaticity
may play a negative role for the VVO effect, resulting in a reflip of the
neutrino spin and thus reducing the probability of the $\nu_{eL}
\rightarrow \nu_{eR}$ transition. In order to break the adiabaticity, the
precession length should be large compared to the characteristic lengths
over which the matter density and magnetic field vary significantly, which
gives an upper bound on $\mu B_{\bot}$. This parameter should also be
bounded from below in order for the precession phase not to be too small.
Therefore one gets a rather narrow range of allowed values of
$\mu B_{\bot}$ \cite{AKHM1}.
Another interesting possibility is the neutrino spin-flavor precession
(SFP) due to the interaction of flavor-off-diagonal (transition) magnetic
or electric dipole moments of neutrinos $\mu_{ij}$ with transverse magnetic
fields \cite{ShVa,VVO}. The SFP is a rotation of the neutrino spin with a
simultaneous change of its flavor. Such a process can occur even for
Majorana neutrinos since the $CPT$ invariance does not preclude the
transition magnetic dipole moments of Majorana particles. Until recently,
the neutrino SFP had not attracted much attention because it was expected
to be suppressed by the energy splitting of neutrinos of different species.
If the ``Zeeman energy'' $\mu_{ij} B_{\bot}$ is small compared to the
kinetic energy difference $\Delta m^{2}_{ij}/2E$, the SFP probability is
heavily suppressed. However, in 1988 it was noted independently by the
present author \cite{AKHM2,AKHM1} and by Lim and Marciano \cite{LM} that
in matter the situation can change drastically. Since $\nu_{eL}$ and
right-handed neutrinos or antineutrinos of another flavor interact with
matter differently, the difference of their potential energies can cancel
their kinetic energy difference resulting in a resonant amplification of
the SFP. Therefore in matter the SFP of neutrinos can be enhanced, unlike
the VVO neutrino spin precession
\footnote{The VVO neutrino spin rotation can also be resonantly enhanced
provided the magnetic field twists along the neutrino trajectory, see
\cite{SM,AKS} and below.}. The resonant spin-flavor precession (RSFP)
of neutrinos also has some further advantages over the VVO mechanism:

$\bullet$ the adiabaticity plays a positive role for the RSFP, increasing
the conversion probability, and therefore $\mu_{ij}B_{\bot}$ should be
bounded only from below; the required magnitude of this parameter is a
factor of $2-3$ smaller than that for the VVO effect;

$\bullet$ some energy dependence of the neutrino conversion seems to be
necessary to reconcile the Homestake and Kamiokande II data (see below).
The RSFP probability has the desired energy dependence, whereas
the VVO neutrino spin precession is energy independent.
Although the above arguments disfavor the VVO effect as a solution of the
solar neutrino problem, they do not rule it out, given the
uncertainties of the experimental data.
\section{General features of RSFP of neutrinos}
The RSFP of neutrinos is analogous to the resonant neutrino oscillations
\cite{MS,W}, but differs
from the latter in a number of important respects. The main features of
this effect have been discussed in detail in my talk at the last Moriond
meeting \cite{AKHM3}, and so I will just briefly mention them here.
The magnetic-field-induced mixing of $\nu_{eL}$ and $\nu_{\mu R}
(\bar{\nu}_{\mu R})$ can be described by the mixing angle $\theta$,
\begin{equation}
\tan 2\theta = \frac{2\mu_{e\mu}B_{\bot}}{\sqrt{2}G_{F}(N_{e}-
\alpha N_{n})-\frac{\Delta m^{2}_{e\mu}}{2E}\cos 2\theta_{0}}
\end{equation}
Here $N_{e}$ and $N_{n}$ are the electron and neutron number densities,
$\alpha=1/2$ for Dirac neutrinos and 1 for Majorana neutrinos, $G_{F}$
is the Fermi constant, and $\theta_{0}$ is the ordinary neutrino mixing
angle in vacuum. The resonant density is defined as the density at
which the mixing angle $\theta$ becomes $\pi/4$:
\begin{equation}
\sqrt{2}G_{F}(N_{e}-\alpha N_{n})|_{r}=\frac{\Delta m^{2}_{e\mu}}{2E}
\cos 2\theta_{0}
\end{equation}
The efficiency of the $\nu_{eL}\rightarrow\nu_{\mu R}
(\bar{\nu}_{\mu R})$ transition is determined by the degree of adiabaticity,
which depends on both the neutrino energy and the magnetic field strength
at the resonance:
\begin{equation}
\lambda\equiv \pi \frac{\Delta r}{l_{r}}=
8\frac{E}{\Delta m^{2}_{e\mu}}(\mu_{e\mu}B_{\bot r})^{2}
L_{\rho}
\end{equation}
Here
\begin{equation}
\Delta r=\frac{8E\mu_{e\mu}B_{\bot r}}{\Delta m^{2}_{e\mu}}L_{\rho}
\end{equation}
is the resonance width, $l_{r}=\pi/\mu_{e\mu}B_{\bot r}$ is the precession
length at the resonance, and $L_{\rho}$ is the characteristic length over
which the matter density varies significantly in the sun.
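For completeness, substituting the resonance width (4) and the resonant precession length $l_{r}=\pi/\mu_{e\mu}B_{\bot r}$ into the definition $\lambda=\pi\,\Delta r/l_{r}$ reproduces eq. (3):

```latex
\[
\lambda = \pi\,\frac{\Delta r}{l_{r}}
= \pi\,\frac{8E\mu_{e\mu}B_{\bot r}}{\Delta m^{2}_{e\mu}}\,L_{\rho}\,
\frac{\mu_{e\mu}B_{\bot r}}{\pi}
= 8\,\frac{E}{\Delta m^{2}_{e\mu}}\,(\mu_{e\mu}B_{\bot r})^{2}L_{\rho}\;.
\]
```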
For the RSFP to be efficient, $\lambda$ should be $> 1$. In a
non-uniform magnetic field the field strength at resonance $B_{\bot r}$
depends on the resonance coordinate and so, through eq. (2), on
neutrino energy. Therefore the energy dependence of the adiabaticity
parameter $\lambda$ in eq. (3) is, in general, more complicated than
just $\lambda\sim E$, and is defined by the magnetic field profile
inside the sun
\footnote{Note that for the MSW effect the adiabaticity parameter
is inversely proportional to $E$ \cite{MS}.}.
The main difficulty in the analyses of the RSFP as a possible solution
of the solar neutrino problem is that this profile is essentially
unknown, so that one is forced to use various more or less plausible
magnetic field configurations.
In the adiabatic regime $(\lambda \gg 1)$, the $\nu_{eL}$ survival
probability is
\begin{equation}
P(\nu_{eL}\rightarrow \nu_{eL})=\frac{1}{2}+\frac{1}{2}\cos 2\theta
_{i}\cos 2\theta_{f}+\frac{1}{2}\sin 2\theta_{i}\sin 2\theta_{f}
\cos \int\nolimits_{t_{i}}^{t_{f}}\Delta E(t)\,dt
\end{equation}
where
\begin{equation}
\Delta E=\sqrt{\left[\sqrt{2}G_{F}(N_{e}-\alpha N_{n})-
\frac{\Delta m^{2}_{e\mu}}{2E}\cos 2\theta_{0}\right]^{2}+(2\mu_{e\mu}
B_{\bot})^{2}}
\end{equation}
Here $\theta_{i}$ and $\theta_{f}$ are the mixing angles (1) at the
neutrino production point and on the surface of the sun respectively.
If the $\nu_{eL}$ are produced at a density which is much higher
than the resonant one, $\theta_{i}\approx 0$ and the survival
probability (5) becomes
\begin{equation}
P(\nu_{eL}\rightarrow \nu_{eL})\approx \cos^{2} \theta_{f}
\end{equation}
Since the magnetic field becomes very weak at the sun's surface, the
mixing angle $\theta_{f}\approx \pi/2$, and so the $\nu_{eL}$ survival
probability is very small in the adiabatic regime. The adiabaticity
parameter $\lambda$ in eq. (3) depends
drastically on the magnetic field strength at resonance, which gives a natural
explanation of time variations of the solar neutrino flux in
anticorrelation with solar activity.
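Explicitly, the step from eq. (5) to eq. (7) follows by setting $\cos 2\theta_{i}\approx 1$ and $\sin 2\theta_{i}\approx 0$, so that the oscillatory term in eq. (5) drops out:

```latex
\[
P(\nu_{eL}\rightarrow \nu_{eL})\approx
\frac{1}{2}+\frac{1}{2}\cos 2\theta_{f}
=\cos^{2}\theta_{f}\;.
\]
```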
The RSFP requires non-vanishing flavor-off-diagonal magnetic dipole
moments of neutrinos and so is only possible if the neutrino
flavor is not conserved. Therefore neutrino oscillations must also
take place, and in general one should consider the SFP and oscillations
of neutrinos jointly. This has been done in a number of papers both
analytically \cite{AKHM4,AKHM5} and numerically
\cite{LM,AKHM4,MN,BHL,AKHM5}. It was shown that a subtle interplay
between the RSFP and the
MSW resonant neutrino oscillations can occur. In particular, although
the resonant neutrino oscillations cannot give rise to the time variations
of the solar neutrino flux, they can assist the RSFP to do so by improving the
adiabaticity of the latter \cite{AKHM5}.
\section{Neutrino spin precession in twisting magnetic fields}
If the magnetic field changes its direction along the neutrino trajectory,
this can result in new interesting phenomena. In particular, new kinds of
resonant neutrino conversions become possible, the energy
dependence of the conversion probability can be significantly distorted
and the lower limit on the value of $\mu B_{\bot}$ required to account
for the solar neutrino problem can be slightly relaxed \cite{SM,AKS}.
Moreover, if the neutrino oscillations are also taken into account,
the transitions $\nu_{e}\rightarrow \bar{\nu}_{e}$ can become resonant, and
the order of the RSFP and MSW resonances can be interchanged \cite{AKMSP}.
Since the main features of the resonant neutrino spin-flip transitions in
twisting magnetic fields are discussed in some detail in the contributions
of Krastev and Toshev in this volume, I will confine myself to a new
development which was not covered in their talks.
A few years ago, Vidal and Wudka \cite{VW} claimed that the field rotation
effects can greatly enhance the neutrino spin-flip probability and reduce the
needed value of $\mu B_{\bot}$ by a few orders of magnitude. In
\cite{SM,AKS} it was shown that this result is incorrect and that typically
the required value of $\mu B_{\bot}$ can only be reduced by a factor of
2--3 (see also \cite{ASh1,ASh2}, in which the process without matter
effects was considered).
However, in these papers it was not proved that there cannot exist a rotating
field configuration giving stronger enhancement of the spin-flip
probability and larger gain in the $\mu B_{\bot}$ parameter. Recently, Moretti
\cite{M} has found a severe constraint on the transition probability which
eliminates even this possibility. The effective
Hamiltonian describing the evolution of the system of left handed $\nu_{eL}$
and right handed neutrino of the same or another flavor $\nu_{R}$ in a
twisting magnetic field is
\begin{equation}
H=\left(\begin{array}{cc}
V(t)/2 & \mu B_{\bot}e^{i\phi (t)}\\
\mu B_{\bot}e^{-i\phi (t)} & -V(t)/2
\end{array}
\right)
\end{equation}
where $V(t)$ is just the denominator of the r.h.s. of eq. (1), and the angle
$\phi (t)$ defines the direction of the
magnetic field in the plane orthogonal to the neutrino momentum. The
transition probability $P(\nu_{eL}\rightarrow \nu_{R})$ turns out to have
the following upper bound \cite{M}:
\begin{equation}
P(\nu_{eL}\rightarrow \nu_{R};t)\leq \mu\int\nolimits_{0}^{t}B_{\bot}
(t^{'})\,dt^{'}
\end{equation}
An analogous result can also be obtained for neutrino oscillations
in matter, as well as for the evolution of any other two-level system.
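The evolution generated by the Hamiltonian (8), and the bound (9), are easy to explore numerically. The sketch below (not part of the original text; the profiles $V(t)$, $B_{\bot}(t)$, $\phi(t)$ and all parameter values are arbitrary toy choices, in units with $\hbar=1$) integrates the two-level Schr\"odinger equation with a fourth-order Runge-Kutta step and accumulates $\mu\int_0^t B_{\bot}\,dt'$ along the way:

```python
import cmath, math

# Toy integration of  i dpsi/dt = H psi  with
#   H = [[ V/2,            mu*B*exp(+i*phi)],
#        [ mu*B*exp(-i*phi), -V/2          ]]
# [eq. (8)], plus a check of the bound (9):
#   P(nu_eL -> nu_R; t) <= mu * int_0^t B dt'.
# All profiles below are invented for illustration only.

MU = 1.0

def profiles(t):
    V = 0.5 * math.cos(0.3 * t)            # toy matter-induced splitting
    B = 0.05 * (1.0 + 0.5 * math.sin(t))   # toy transverse field
    phi = 0.4 * t                          # toy field-twist angle
    return V, B, phi

def rhs(t, psi):
    V, B, phi = profiles(t)
    h12 = MU * B * cmath.exp(1j * phi)
    a, b = psi
    return (-1j * (0.5 * V * a + h12 * b),
            -1j * (h12.conjugate() * a - 0.5 * V * b))

def evolve(T=10.0, n=4000):
    dt = T / n
    psi = (1.0 + 0j, 0.0 + 0j)   # initial state: pure nu_eL
    b_int = 0.0                  # accumulates mu * int B dt
    for k in range(n):
        t = k * dt
        # 4th-order Runge-Kutta step
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, tuple(p + dt / 2 * q for p, q in zip(psi, k1)))
        k3 = rhs(t + dt / 2, tuple(p + dt / 2 * q for p, q in zip(psi, k2)))
        k4 = rhs(t + dt, tuple(p + dt * q for p, q in zip(psi, k3)))
        psi = tuple(p + dt / 6 * (a + 2 * b + 2 * c + d)
                    for p, a, b, c, d in zip(psi, k1, k2, k3, k4))
        b_int += MU * profiles(t)[1] * dt
    return abs(psi[1]) ** 2, b_int
```

For any choice of profiles, the returned transition probability should stay below the accumulated $\mu\int B_{\bot}\,dt'$, illustrating Moretti's constraint.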
\section{RSFP and antineutrinos from the sun}
If both the SFP and oscillations of neutrinos can occur, this will result
in the conversion of a fraction of solar $\nu_{e}$ into $\bar{\nu}_{e}$
\cite{LM,AKHM4,AKHM6,RBL}. For Majorana neutrinos, the direct $\nu_{e}
\rightarrow \bar{\nu}_{e}$ conversions are forbidden since the $CPT$
invariance precludes the diagonal magnetic moment $\mu_{ee}$. However,
this conversion can proceed as a two-step process in either of two ways:
\begin{eqnarray}
\nu_{eL}\stackrel{\rm oscill.}{\longrightarrow}\nu_{\mu L}\stackrel
{\rm SFP}{\longrightarrow}\bar{\nu}_{eR}\\
\nu_{eL}\stackrel{\rm SFP}{\longrightarrow}\bar{\nu}_{\mu R}\stackrel
{\rm oscill.}{\longrightarrow}\bar{\nu}_{eR}
\end{eqnarray}
One can then consider two possibilities:
{}~(1) Both oscillations and SFP take place inside the sun \cite{LM,AKHM4,AKHM6}.
The amplitudes of the processes (10) and (11) have opposite signs
since the matrix of the magnetic moments of Majorana neutrinos is
antisymmetric. Therefore there is a large cancellation between these two
amplitudes (the cancellation is exact in the limit of vanishing neutron
density $N_{n}$), and the probability of the $\nu_{e}\rightarrow
\bar{\nu}_{e}$ conversion inside the sun turns out to be about 3--5\%
even for large mixing angles $\theta_{0}$ \cite{AKHM4,AKHM6}.
{}~(2) Only the RSFP transition $\nu_{eL}\rightarrow\bar{\nu}_{\mu R}$
occurs in the sun with an appreciable probability whereas the oscillations
of neutrinos proceed mainly in vacuum on their way between the sun and the
earth [eq. (11)]. For not too small neutrino mixing
angles the probability of the $\nu_{e}\rightarrow \bar{\nu}_{e}$ conversion
can then be quite sizable \cite{RBL}.
In \cite{BFMM} the background events in the Kamiokande II experiment were
analysed and a stringent bound on the flux of $\bar{\nu}_{e}$ from the sun
was obtained: $\Phi(\bar{\nu}_{e})\leq (0.05-0.07)\Phi(\nu_{e})$.
This poses a limit on the models in which both the RSFP and neutrino
oscillations occur: the mixing angle $\theta_{0}$ should be less than
$6-8^{\circ}$. This rules out the models with large magnetic moments
of pseudo-Dirac neutrinos, including those with only one neutrino
generation, for which $\theta_{0}$ is the mixing between $\nu_{eL}$ and
sterile $\bar{\nu}_{eL}$ \cite{KLN,MN2}. However, the models with a
conserved lepton charge $L_{e}\pm (L_{\mu}-L_{\tau})$ are not
excluded even though the mixing angle is $\pi/4$, since the $\nu_{e}
\rightarrow \bar{\nu}_{e}$ conversion probability vanishes identically
in this case \cite{BFMP}.
The $\bar{\nu}_{e}$ production due to the combined effect of
the RSFP and oscillations of neutrinos can be easily distinguished from
the other mechanisms of $\bar{\nu}_{e}$ generation (like $\nu\rightarrow
\bar{\nu}+{\rm Majoron}$ decay) since (i) the $\bar{\nu}_{e}$ flux should vary
in time in {\em direct} correlation with solar activity, and (ii) the
neutrino energy is not degraded in this case \cite{AKHM4,AKHM6}. The
$\bar{\nu}_{e}$ flux from the sun of the order of a few per cent of
the expected $\nu_{e}$ flux should be detectable in the
forthcoming solar neutrino experiments like BOREXINO, SNO and
Super-Kamiokande \cite{AKHM6,RBL,BL}.
\section{Reconciling the Homestake and Kamiokande II data}
It has been mentioned above that while there is a strong indication
in favor of time variation of the neutrino detection rate in the Homestake
data, the Kamiokande experiment does not see such a time variation,
although it still cannot rule out a small $(\leq 30\%)$ variation.
Therefore a question naturally arises as to whether it is possible to
reconcile large time variations in the Homestake ${}^{37}\!$Cl experiment
with small time variation in the water \v{C}erenkov experiment. There are
two major differences between these two experiments which could in principle
give rise to different time variations of their detection rates:
{}~(1) the Homestake experiment utilizes the $\nu_{e}-{}^{37}\!$Cl
charged-current reaction, while the Kamiokande detector uses $\nu -e$
scattering, which is mediated by both charged and neutral currents;
{}~(2) the energy threshold in the Homestake experiment is 0.814 MeV so that
it is sensitive to high energy ${}^{8}$B, intermediate energy
${}^{7}$Be and partly to low energy $pep$ neutrinos; at the same time the
energy threshold in the Kamiokande II experiment is 7.5 MeV and so it is
only sensitive to the high-energy fraction of the ${}^{8}$B neutrinos.
In \cite{AKHM7,AKHM5} it was noted that if the lower-energy neutrino
contributions to the chlorine detection rate are suppressed more strongly
than that of the high-energy neutrinos, the latter can vary in time with smaller
amplitude and still fit the Homestake data. In that case one can expect
weaker time variations in the Kamiokande II experiment. The desired
suppression of the low-energy neutrino flux can be easily explained in
the framework of the RSFP scenario as a consequence of flavor-changing
spin-flip conversion due to a strong inner magnetic field, the existence
of which seems quite plausible \cite{C}. The alternative possibility is
the suppression of low energy neutrinos by the MSW effect when RSFP and
the resonant neutrino oscillations operate jointly. Another important
point is that due to the RSFP solar $\nu_e$ are converted into
$\bar{\nu}_{\mu R}$ or $\bar{\nu}_{\tau R}$ which are sterile for the
chlorine detector but can be detected (though with a smaller cross
section) by water \v{C}erenkov detectors. This also reduces the
amplitude of the time variation in the Kamiokande II detector. If both
these factors are taken into account, it becomes possible to
reconcile the Homestake and Kamiokande data; one can expect a low
signal in the gallium experiments in this case since they are primarily
sensitive to low energy neutrinos whose flux is supposed to be heavily
suppressed \cite{AKHM7,AKHM5}.
A similar possibility has been recently considered by
Babu, Mohapatra and Rothstein \cite{BMR} and by Ono and Suematsu
\cite{OS}. They pointed out that due to the energy dependence of the
RSFP neutrino conversion probability, lower-energy neutrinos can exhibit
stronger time variations (i.e. stronger magnetic field dependence)
than the higher-energy ones. In fact, this is very
natural in the RSFP scenario: with increasing neutrino energy the width
of the resonance increases [see eq. (4)] and at sufficiently high energies
it can be a significant fraction of the solar radius. The neutrino
production point can then happen to be inside the resonant region, which
reduces the conversion efficiency. The different magnetic field dependence
of the Homestake and Kamiokande II detection rates is illustrated in
Fig.~1, borrowed from ref. \cite{BMR}.
\vspace*{7.5truecm}
\noindent
{\footnotesize
Fig. 1. (a) Expected event rate in chlorine as a function of the convective
zone magnetic field. Here $\Delta m^{2}=7.8\times 10^{-9}$ eV$^{2}$, the
maximal value of the magnetic field in the core is $B_{1}=10^{7}$~G and
$\mu =2\times 10^{-11}\mu_{B}$.
(b) The same as (a) but for the Kamiokande event rate.}\\
It should be noted that the
ordinary VVO neutrino spin precession lacks the energy dependence
required to get a smaller time variation in the Kamiokande II experiment.
Moreover, it converts $\nu_{eL}$ into sterile $\nu_{eR}$ (unless
the neutrinos are Zeldovich-Konopinski-Mahmoud particles) which do not
contribute to the $\nu -e$ cross section. However, for the VVO scenario yet
another possibility of reconciling the Homestake and Kamiokande data
exists. In order to get sizable neutrino magnetic moments, $\mu\approx
10^{-11}\mu_{B}$, one has to go beyond the Standard Model. Most of the
models producing large neutrino magnetic moments are based on various
extensions of the Standard Model containing new charged scalars.
In these models right-handed sterile neutrinos can interact with electrons
via scalar exchange and therefore can contribute
to the $\nu -e$ reaction which increases the signal in the Kamiokande II
detector and reduces the amplitude of its time variation \cite{FY}.
Note that the models giving large transition neutrino magnetic moments
usually also contain new scalars and therefore the same mechanism
can be operative in the case of the RSFP as well.
\section{Conclusion}
We conclude that the resonant neutrino spin-flavor precession mechanism
provides a viable explanation of the solar neutrino problem, which complies
with all the existing experimental data and yields a number of interesting
predictions for the forthcoming experiments.
\section*{Acknowledgement}
The author is deeply indebted to
Scuola Internazionale Superiore di Studi Avanzati where this report was
written for kind hospitality and financial support.
\section{Introduction}
Quantum field theories on non-orientable spacetime have several physical applications, and have been studied from many different perspectives. They are an integral part of string theory in the description of unoriented worldsheets \cite{Polchinski:1998rq,Blumenhagen:2009zz,Fioravanti:1993hf}. Studying theories on non-orientable manifolds also probes the realization of time reversal symmetry \cite{Kapustin:2014tfa,Kapustin:2014gma,Kapustin:2014dxa} (see, {\it e.g.}, \cite{Metlitski:2015yqa} for discussions on a refinement of electric-magnetic duality in abelian gauge theories, and \cite{Guo:2017xex,Wan:2018zql,Wan:2019oyr,Wang:2019obe} on Yang-Mills theory and $\mathbb{CP}^{N-1}$-sigma models), which plays important roles in condensed matter physics. They also make appearances in formal studies of quantum field theory. For example, partition functions of two dimensional CFTs on non-orientable surfaces were studied in \cite{Maloney:2016gsg}, where holographic connections with three dimensional geometries were explored. Their role in supersymmetric quantum field theories was discussed, {\it e.g.}, in \cite{LeFloch:2017lbt, Wang:2020jgh}. Recently, there has been considerable interest in studying CFTs on real projective space -- one of the simplest examples of non-orientable manifolds.\footnote{More precisely, $\mathbb{RP}^d$ is unorientable for $d$ even, and orientable for $d$ odd.} This is partly in light of the modern nonperturbative conformal bootstrap (see \cite{Rychkov:2016iqz,Poland:2018epd} for reviews) \cite{Nakayama:2016cim, Hasegawa:2016piv,Hogervorst:2017kbj, Hasegawa:2018yqg}, where CFTs on real projective space provide attractive playgrounds for developing and testing new techniques.
Moreover, the study of such theories is also fueled by the program of constructing bulk local operators in AdS \cite{Miyaji:2015fia,Nakayama:2015mva, Verlinde:2015qfa, Nakayama:2016xvw,Goto:2016wme, Lewkowycz:2016ukf}, where crosscap states are proposed to be dual to fields inserted at a bulk point.
In this paper, we continue the analytic study of conformal field theories on real projective space, and revisit the problem from multiple angles.
The real projective space $\mathbb{RP}^d$ can be defined by a $\mathbb{Z}_2$ quotient of a sphere $S^d$
\begin{equation}
\mathbf{X}^2=1\;,\quad \mathbf{X}\in \mathbb{R}^{d+1}\;,\quad \text{with} \;\; \mathbf{X}\sim -\mathbf{X}.
\end{equation}
Equivalently, since our focus is on CFT, we can perform a Weyl transformation and map it to the flat space
\begin{equation} \label{MetricWeyl}
x^{\mu} = \frac{X^{\mu}}{1 - X^{d + 1}}, \ \mu=1,\ldots,d \hspace{1cm} ds^2_{\mathbb{R}^d} = \frac{(1 + x^2)^2}{4} ds^2_{S^d}\;.
\end{equation}
The real projective space is then represented as $\mathbb{R}^d$ under the identification
\begin{equation}
\label{inversion}
x^\mu\to -\frac{x^\mu}{x^2}
\end{equation}
where $x^\mu$ are the Cartesian coordinates on $\mathbb{R}^d$. Unless otherwise stated, in this paper we denote by $\mathbb{RP}^d$ the quotient of flat space by the inversion (\ref{inversion}).
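As a quick sanity check of this identification (an illustrative numerical sketch, not part of the original text; the dimension and sample points below are arbitrary), one can verify that the antipodal map $\mathbf{X}\to-\mathbf{X}$ on $S^d$ descends, via the map (\ref{MetricWeyl}), to the inversion (\ref{inversion}) on $\mathbb{R}^d$:

```python
import random, math

# Check that the antipodal map X -> -X on S^d corresponds, under the
# projection x^mu = X^mu / (1 - X^{d+1}), to the inversion
# x -> -x / x^2 on R^d.  Dimension and sample points are arbitrary.

def project(X):
    # X is a point on S^d embedded in R^{d+1}; the last entry is X^{d+1}
    return [Xi / (1.0 - X[-1]) for Xi in X[:-1]]

def inversion(x):
    x2 = sum(xi * xi for xi in x)
    return [-xi / x2 for xi in x]

def random_sphere_point(d):
    X = [random.gauss(0.0, 1.0) for _ in range(d + 1)]
    r = math.sqrt(sum(Xi * Xi for Xi in X))
    return [Xi / r for Xi in X]

def check(d=4, trials=100):
    random.seed(7)
    for _ in range(trials):
        X = random_sphere_point(d)
        lhs = project([-Xi for Xi in X])   # project the antipode
        rhs = inversion(project(X))        # invert the projection
        if any(abs(a - b) > 1e-8 * (1.0 + abs(a))
               for a, b in zip(lhs, rhs)):
            return False
    return True
```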
Putting CFTs on $\mathbb{RP}^d$ partially breaks the conformal symmetry and introduces new observables. Scalar operators can have non-vanishing one-point functions
\begin{equation}
\langle \mathcal{O}_\Delta \rangle=\frac{a_{\mathcal{O}}}{(1+x^2)^{\Delta_{\mathcal{O}}}}\;.
\end{equation}
The coefficients $a_{\mathcal{O}}$ are new data that define the CFT on real projective space, along with the standard operator spectrum and OPE coefficients, which remain the same as on $\mathbb{R}^d$. Moreover, two-point functions are no longer fixed by symmetry, but instead become functions of a cross ratio $\eta$ invariant under the residual conformal symmetry
\begin{equation}
\langle \mathcal{O}_1(x_1)\mathcal{O}_2(x_2)\rangle=\frac{\mathcal{G}(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,
\end{equation}
where
\begin{equation}
\eta=\frac{(x_1-x_2)^2}{(1+x_1^2)(1+x_2^2)}\;.
\end{equation}
The reader might notice the breaking of conformal symmetry and the structure of correlators are very reminiscent of those of boundary CFTs \cite{McAvity:1995zd,Liendo:2012hy}. We will point out more similarities later in the paper. Analogous to four-point functions on $\mathbb{R}^d$, two-point functions of $\mathbb{RP}^d$ CFTs obey a crossing equation because of the identification (\ref{inversion})
\begin{equation}\label{crossingintro}
\mathcal{G}(\eta)=\pm\, \mathcal{G}(1-\eta)
\end{equation}
where $\pm$ corresponds to the two choices $\mathcal{O}_{1,2} \to \pm\, \mathcal{O}_{1,2}$ under the inversion. The operator product expansion in the direct channel ($\eta\to 0$) and in the mirror channel ($\eta\to 1$) allows the two-point functions to be expanded in terms of conformal blocks in the respective channels. The crossing equation together with the conformal block decomposition then imposes nontrivial constraints on the structure of correlators. Two-point functions are therefore the prime target for developing an analytic understanding of CFTs on real projective space.
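To see how the identification (\ref{inversion}) leads to this crossing relation, note the following worked step (implicit in the text): under $x_1\to -x_1/x_1^2$,

```latex
\[
(x_1-x_2)^2 \;\to\; \Big(\frac{x_1}{x_1^2}+x_2\Big)^{2}
=\frac{1+2\,x_1\cdot x_2+x_1^2\,x_2^2}{x_1^2}\;,
\qquad
1+x_1^2 \;\to\; \frac{1+x_1^2}{x_1^2}\;,
\]
so that
\[
\eta \;\to\; \frac{1+2\,x_1\cdot x_2+x_1^2\,x_2^2}{(1+x_1^2)(1+x_2^2)}
=\frac{(1+x_1^2)(1+x_2^2)-(x_1-x_2)^2}{(1+x_1^2)(1+x_2^2)}
=1-\eta\;.
\]
```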
In this paper, we develop an analytic approach to two-point functions, which is universal for $\mathbb{RP}^d$ CFTs. However, to develop this method we will take a holographic detour. We first lift the quotient (\ref{inversion}) into the bulk as a $\mathbb{Z}_2$ quotient of $AdS_{d+1}$, and study a toy model for holography in this setup.
We define the tree-level Witten diagrams in this background, and study their various properties. In particular, we consider in detail the two-point conformal block decompositions of exchange Witten diagrams in the two channels. The structure of the conformal block decompositions suggests a natural basis for the function space of two-point correlators, which consists of special conformal blocks with discrete `double-trace' dimensions in both the direct and the mirror channel. The dual of this basis is a basis of analytic functionals, whose actions on a generic conformal block can be read off from the conformal block decomposition coefficients of the exchange Witten diagrams. Acting on the crossing equation (\ref{crossingintro}) with the functionals allows us to extract the complete set of constraints in the form of sum rules. These sum rules are valid non-perturbatively. But they become especially simple around the mean field theory spectrum, and essentially trivialize the study of perturbations around mean field theory. We demonstrate the use of the analytic functionals on the model of $\phi^4$ theory in $4-\epsilon$ dimensions. By solving the functional sum rules, we obtain the one-point function coefficients to the order $\epsilon^2$. Setting $\epsilon=1$, we find good agreement with the numerical bootstrap estimation for the 3d Ising model \cite{Nakayama:2016cim}.
We also develop perturbative field theory approaches to study CFTs on $\mathbb{RP}^d$, with the $O(N)$ vector model being our main example. We study the two-point function of the fundamental scalar $\phi$ of the $O(N)$ model both in the large $N$ expansion in arbitrary dimension, and in the $\epsilon$ expansion in $d = 4 - \epsilon$ dimensions. In $d = 4- \epsilon$, instead of using the usual loop expansions, we exploit the fact that $\phi$ satisfies an equation of motion. The equation of motion implies a differential equation obeyed by the two-point function, which can be solved in perturbation theory to order $\epsilon^2$. The essential idea of using equations of motion to obtain CFT data was described in \cite{Rychkov:2015naa} and here we extend it to CFTs on $\mathbb{RP}^d$. The field theory calculations provide an independent test of the results obtained from analytic functionals.
We also discuss other interesting features of $\mathbb{RP}^d$ CFT. We point out a two-term `dimensional reduction' formula for conformal blocks, which expresses a conformal block in $d-2$ dimensions as the sum of two $d$ dimensional conformal blocks with shifted conformal dimensions. An analogous five-term relation was found in \cite{Kaviraj:2019tbg} for CFT four-point functions in $\mathbb{R}^d$, and was shown to be a consequence of the Parisi-Sourlas supersymmetry \cite{Parisi:1979ka}. The appearance of the dimensional reduction relation therefore suggests a possible extension of the Parisi-Sourlas supersymmetry to real projective space. Moreover, following \cite{Zhou:2020ptb}, we show that the dimensional reduction formula for conformal blocks can be extended to exchange Witten diagrams as well.
The setup of our toy model for holography is also closely related to the Hamilton-Kabat-Lifschytz-Lowe (HKLL) approach for constructing local bulk operators \cite{Hamilton:2005ju,Hamilton:2006fh,Hamilton:2006az}. The two problems have the same partially broken conformal symmetry. We will make a few comments on how our results are relevant in the bulk reconstruction problem. In particular, we point out that the bulk reconstruction of the bulk-boundary-boundary three-point function can be reformulated as a conformal bootstrap problem, which can be solved using our functionals.
The rest of the paper is organized as follows. In Section \ref{Sec:2}, we review the kinematics of $\mathbb{RP}^d$ CFT using the embedding formalism. We set up the holography toy model on the $\mathbb{Z}_2$ quotient of AdS in Section \ref{Sec:3}, and define various Witten diagrams. In Section \ref{Sec:4} we perform a detailed analysis of the tree-level two-point exchange Witten diagrams: we evaluate them in a closed form and study their conformal block decompositions. In Section \ref{Sec:5} we study the dimensional reduction of conformal blocks and exchange Witten diagrams. We develop a functional method for $\mathbb{RP}^d$ CFTs in Section \ref{Sec:6}, and use the method in a few perturbative applications. We also present a complementary field theory approach using the equation of motion method. The relation to bulk reconstruction is discussed in Section \ref{Sec:7}. We conclude in Section \ref{Sec:8} with a brief discussion of future directions. In Appendix \ref{AppendixFreeEnergy}, we compute the free energy on $\mathbb{RP}^d=S^d/\mathbb{Z}_2$ for $O(N)$ models; the calculation has some connection to the content of Section \ref{Sec:CFTEoM}, but is mostly independent of the main text of the paper and can be read separately.
\section{Kinematics of CFT on $\mathbb{RP}^d$}\label{Sec:2}
\subsection{Embedding space}\label{Sec:2.1}
It is convenient to introduce the embedding space which linearizes the action of the conformal group. Let us first review the case where the space is just $\mathbb{R}^{d-1,1}$. For any point $x^\mu\in\mathbb{R}^{d-1,1}$, we can represent it as a null ray $P^A$ ($A=1,2,\ldots,d+2$) in $\mathbb{R}^{d,2}$
\begin{equation}
P\cdot P=0\;,\quad P\sim \lambda\, P\;.
\end{equation}
Operators are defined on the space of null rays\footnote{For simplicity we focus here on scalar operators.}, with the scaling property
\begin{equation}\label{scaling}
\mathcal{O}_\Delta(\lambda P)=\lambda^{-\Delta} \mathcal{O}_\Delta(P)\;.
\end{equation}
For definiteness, let us choose the signature of $\mathbb{R}^{d,2}$ to be $(-,+,-,+,\ldots,+)$.\footnote{For Euclidean spacetime $\mathbb{R}^d$ the embedding space is $\mathbb{R}^{d+1,1}$, and we choose the signature to be $(-,+,+,+,\ldots,+)$.} We can explicitly parameterize $P^A$ with the $\mathbb{R}^{d-1,1}$ coordinates as
\begin{equation}
P^A=\left(\frac{1+x^2}{2},\frac{1-x^2}{2},x^\mu\right)\;,
\end{equation}
after fixing a gauge for the rescaling freedom
\begin{equation}\label{gaugefixing}
P^1=\frac{1}{2}(1+x^2)\;.
\end{equation}
The conformal group $SO(d,2)$ acts on $P^A$ as linear rotations in $\mathbb{R}^{d,2}$
\begin{equation}
P^A\to \Omega^A_B\, P^B\;.
\end{equation}
The conformal transformation on the $x^\mu$ coordinates is obtained after restoring the gauge fixing condition (\ref{gaugefixing}) by an appropriate rescaling.
The inversion (\ref{inversion}) can be conveniently represented in terms of the embedding coordinates, where it flips the sign of the last $d+1$ components of the embedding vector
\begin{equation}\label{calIembed}
\mathcal{I}: P^1\to P^1\;,\quad P^a\to -P^a\;,\quad a=2,3,\ldots,d+2\;.
\end{equation}
To go back to (\ref{gaugefixing}) we must multiply the null vector by a factor of $x^{-2}$, and one can easily check that this reproduces the transformation (\ref{inversion}). Under inversion, operators are identified according to
\begin{equation}\label{opidentification}
\mathcal{I}: \mathcal{O}^\pm_\Delta(x)\to \pm\, x^{2\Delta}\mathcal{O}^\pm_\Delta (x')\;,\quad x'^\mu=-\frac{x^\mu}{x^2}\;.
\end{equation}
The insertion of a crosscap introduces a fixed {\it time-like} embedding vector
\begin{equation}\label{Nc}
N_c=(1,0,0,\ldots,0)\;.
\end{equation}
The residual conformal symmetry after inserting the crosscap consists of all the $SO(d,1)$ rotations that leave $N_c$ invariant. It is useful to compare the situation with the closely related case of CFTs with a conformal boundary. The presence of a spherical boundary of unit radius centered at $x=0$ is represented in the embedding space by introducing a {\it space-like} constant vector
\begin{equation}\label{NB}
N_B=(0,1,0,0,\ldots,0)\;.
\end{equation}
The conformal boundary breaks the $SO(d,2)$ conformal group down to the subgroup $SO(d-1,2)$, which consists of all the rotations in the embedding space preserving the vector $N_B$.
\subsection{Correlators and conformal blocks}
The embedding space formalism introduced in the last section makes it straightforward to discuss the kinematics of CFT correlators on $\mathbb{RP}^d$. Correlators are constructed using the $SO(d,1)$ invariants of the embedding vectors, and must scale properly according to (\ref{scaling}).
Let us start with one-point functions. The only invariant one can write down is $(-2 N_c\cdot P)$, and scaling requires that the one-point function take the form\footnote{Throughout this paper, we normalize the correlation functions by the $\mathbb{RP}^d$ partition function so that the one-point function of the identity operator is 1. In addition, we also normalize the operators such that in the short distance limit, the two-point function goes as $ \langle \mathcal{O}_{\Delta} (x_1)\mathcal{O}_{\Delta} (x_2) \rangle \sim \frac{1}{(x_1 - x_2)^{2 \Delta}}$. }
\begin{equation}\label{1ptfun}
\langle \mathcal{O}_\Delta(x)\rangle=\frac{a_{\mathcal{O}}}{(-2N_c\cdot P)^\Delta}=\frac{a_{\mathcal{O}}}{(1+x^2)^\Delta}\;.
\end{equation}
Using the Weyl transformation (\ref{MetricWeyl}), this implies that on the $\mathbb{Z}_2$ quotient of the sphere, the one-point functions are constant
\begin{equation}\label{1ptfunSphere}
\langle \mathcal{O}_{\Delta}(x) \rangle_{\mathbb{S}^d/ \mathbb{Z}_2} = \frac{a_{\mathcal{O}}}{2^{\Delta}}.
\end{equation}
Note that under the inversion (\ref{opidentification}), the operator $\mathcal{O}$ must transform with the $+$ sign in order for the one-point function to be nonvanishing. This can be seen clearly from the fact that the one-point function is a constant on the sphere ${\mathbb{S}^d/ \mathbb{Z}_2}$, and antipodal points on the sphere are identified by inversion. The choice of the $-$ sign leads to a vanishing $a_{\mathcal{O}}$. More generally, this follows from the fact that the total $\mathbb{Z}_2$ charge under inversion must be zero in a correlator. Therefore, one-point functions are completely determined by symmetry up to a constant $a_{\mathcal{O}}$. The constant $a_{\mathcal{O}}$ is a new piece of CFT data, and encodes dynamical information of CFTs on real projective space. We should also point out that only scalar operators can acquire nonzero one-point functions. Operators with spin must have vanishing one-point functions, because a nonzero one-point function is inconsistent with the residual symmetry (a completely analogous result holds in BCFT).
We now discuss two-point functions, which are the main focus of this paper. With the embedding vectors $P_1$, $P_2$ and $N_c$, we can construct a cross ratio
\begin{equation}\label{defeta}
\eta=\frac{(-2 P_1\cdot P_2)}{(-2N_c\cdot P_1)(-2N_c\cdot P_2)}=\frac{(x_1-x_2)^2}{(1+x_1^2)(1+x_2^2)}\;,
\end{equation}
which is also invariant under the independent rescaling of each operator. In Euclidean spacetime, the cross ratio takes values in $\eta\in[0,1]$. After extracting a kinematic factor which takes care of the scaling property, we can write the two-point function as a function of the cross ratio
\begin{equation}
\langle \mathcal{O}_{\Delta_1}(x_1)\mathcal{O}_{\Delta_2}(x_2)\rangle=\frac{\mathcal{G}(\eta)}{(-2N_c\cdot P_1)^{\Delta_1}(-2N_c\cdot P_2)^{\Delta_2}}=\frac{\mathcal{G}(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;.
\end{equation}
Here we have suppressed the parity of the operators under $\mathbb{Z}_2$. The correlator is only nonzero when $\mathcal{O}_{\Delta_1}$ and $\mathcal{O}_{\Delta_2}$ have the {\it same} parity, in order for the correlator to have a zero overall $\mathbb{Z}_2$ charge.
Moreover, because of the operator identification (\ref{opidentification}), the correlator $\mathcal{G}(\eta)$ must satisfy the following {\it crossing equation} \cite{Nakayama:2016cim}
\begin{equation}\label{crossingeqn}
\mathcal{G}(\eta)=\pm\, \mathcal{G}(1-\eta)\;,
\end{equation}
where $\pm$ denotes the common parity of $\mathcal{O}_{\Delta_1}$ and $\mathcal{O}_{\Delta_2}$. Here and elsewhere, the upper sign refers to the $+$ parity and the lower sign to the $-$ parity. Also note again that under the Weyl transformation \eqref{MetricWeyl}, the two-point function on ${\mathbb{S}^d/ \mathbb{Z}_2}$ simply becomes $\mathcal{G} (\eta)/2^{\Delta_1 + \Delta_2}$.
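The origin of the crossing equation can be seen directly at the level of the cross ratio. As a short check, replacing $x_2$ by its image point $x_2'^\mu=-x_2^\mu/x_2^2$ in (\ref{defeta}) exchanges $\eta$ and $1-\eta$:

```latex
\eta(x_1,x_2')=\frac{\left(x_1+\frac{x_2}{x_2^2}\right)^2}{(1+x_1^2)\left(1+\frac{1}{x_2^2}\right)}
=\frac{x_1^2x_2^2+2\,x_1\cdot x_2+1}{(1+x_1^2)(1+x_2^2)}=1-\eta(x_1,x_2)\;.
```

Meanwhile, the operator identification (\ref{opidentification}) produces the factor $\pm\,x_2^{2\Delta_2}$, which is precisely absorbed by the kinematic prefactor since $1+x_2'^2=(1+x_2^2)/x_2^2$, leaving the relation $\mathcal{G}(\eta)=\pm\,\mathcal{G}(1-\eta)$.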
There are several points of interest for the two-point function on the $\eta$-plane. The first is the limit $\eta=0$. Physically, it means that the two operators coincide in Euclidean spacetime (or are light-like separated in Lorentzian spacetime). In the limit of coinciding operators, we can use the standard OPE
\begin{equation}\label{bulkOPE}
\mathcal{O}_{\Delta_1}(x_1)\mathcal{O}_{\Delta_2}(x_2)=\frac{\delta_{\Delta_1\Delta_2}}{(x_1-x_2)^{2\Delta_1}}+\sum_kC_{12k} D[x_1-x_2,\partial_{x_2}]\mathcal{O}_{\Delta_k}(x_2)
\end{equation}
where $k$ labels the conformal primaries, and $C_{12k}$ is the OPE coefficient. The differential operator $D[x_1-x_2,\partial_{x_2}]$ is completely determined by the conformal symmetry. The OPE reduces the two-point function to one-point functions which are completely determined by kinematics up to an overall constant. The contribution of each primary operator in the OPE to the two-point function can be resummed into a conformal block \cite{Nakayama:2016xvw}
\begin{equation}\label{defg}
g_\Delta(\eta)=\eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}} {}_2F_1\left(\frac{\Delta+\Delta_1-\Delta_2}{2},\frac{\Delta+\Delta_2-\Delta_1}{2};\Delta-\frac{d}{2}+1;\eta\right)\;.
\end{equation}
The conformal block can also be obtained more conveniently from solving the conformal Casimir equation
\begin{equation}
L^2 \langle \mathcal{O}_{\Delta_1}(P_1)\mathcal{O}_{\Delta_2}(P_2)\rangle=-\Delta(\Delta-d)\langle \mathcal{O}_{\Delta_1}(P_1)\mathcal{O}_{\Delta_2}(P_2)\rangle\;,
\end{equation}
with the boundary condition
\begin{equation}\label{gbc}
g_{\Delta}(\eta)\sim \eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}}\;,\quad \eta\to0\;.
\end{equation}
The Casimir operator
\begin{equation}\label{defL2}
L^2=\frac{1}{2}(L_1^{AB}+L_2^{AB})(L_{1,AB}+L_{2,AB})
\end{equation}
is constructed from the $SO(d,2)$ generators
\begin{equation}
L_{AB}=P_A\frac{\partial}{\partial P^B}-P_B\frac{\partial}{\partial P^A}\;.
\end{equation}
In terms of these conformal blocks, the two-point function can be written as
\begin{equation}
\mathcal{G}(\eta)=\sum_{k} \mu_{12k} g_{\Delta_k}(\eta)
\end{equation}
where
\begin{equation}
\mu_{12k}=a_kC_{12k}\;.
\end{equation}
Similarly, $\eta=1$ is also an interesting point where one operator approaches the image of the other operator (or the lightcone of the image). We can again apply the OPE, now between an operator and the image of the other, which is equivalent to the original OPE thanks to (\ref{opidentification}). We will refer to this channel as the {\it image channel}. The two-point function can be decomposed into the {\it image conformal blocks}
\begin{equation}\label{defgbar}
\bar{g}_\Delta(\eta)=(1-\eta)^{\frac{\Delta-\Delta_1-\Delta_2}{2}} {}_2F_1\left(\frac{\Delta+\Delta_1-\Delta_2}{2},\frac{\Delta+\Delta_2-\Delta_1}{2};\Delta-\frac{d}{2}+1;1-\eta\right)\;.
\end{equation}
These image conformal blocks satisfy the image conformal Casimir equation
\begin{equation}
\bar{L}^2 \langle \mathcal{O}_{\Delta_1}(P_1)\mathcal{O}_{\Delta_2}(P_2)\rangle=-\Delta(\Delta-d)\langle \mathcal{O}_{\Delta_1}(P_1)\mathcal{O}_{\Delta_2}(P_2)\rangle\;,
\end{equation}
with the boundary condition
\begin{equation}\label{gbarbc}
\bar{g}_{\Delta}(\eta)\sim (1-\eta)^{\frac{\Delta-\Delta_1-\Delta_2}{2}}\;,\quad \eta\to1\;.
\end{equation}
Here the image conformal Casimir operator
\begin{equation}\label{defLbar2}
\bar{L}^2=\frac{1}{2}(L_1^{AB}+\bar{L}_2^{AB})(L_{1,AB}+\bar{L}_{2,AB})
\end{equation}
involves the generators at the image position
\begin{equation}
\bar{L}_{AB}=\bar{P}_A\frac{\partial}{\partial \bar{P}^B}-\bar{P}_B\frac{\partial}{\partial \bar{P}^A}
\end{equation}
where $\bar{P}^A$ is the embedding vector for the image point
\begin{equation}
\bar{P}^A=\left(\frac{1+x^{-2}}{2},\frac{1-x^{-2}}{2},-\frac{x^\mu}{x^2}\right)\;.
\end{equation}
In terms of the conformal blocks, the crossing equation (\ref{crossingeqn}) now reads
\begin{equation}
\sum_{k} \mu_{12k} (g_{\Delta_k}(\eta)\mp \bar{g}_{\Delta_k}(\eta))=0\;.
\end{equation}
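A simple solution to this crossing equation is provided by a generalized free scalar $\phi$ of dimension $\Delta_\phi$, whose two-point function on $\mathbb{RP}^d$ follows from the method of images. Stripping off the kinematic prefactor, one finds

```latex
\mathcal{G}_{\rm GFF}(\eta)=\eta^{-\Delta_\phi}\pm(1-\eta)^{-\Delta_\phi}\;,
```

which manifestly satisfies $\mathcal{G}(\eta)=\pm\,\mathcal{G}(1-\eta)$ with the corresponding parity. Expanding the image term around $\eta=0$ produces only non-negative integer powers $\eta^n$, so by the boundary condition (\ref{gbc}) the bulk OPE contains, besides the identity, operators of dimension $\Delta=2\Delta_\phi+2n$.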
Finally, there is another interesting point, $\eta=\infty$, which can only be reached in Lorentzian signature. It turns out, as we will see in Section \ref{Sec:3}, that this limit plays a role similar to the ``Regge limit'' of BCFT two-point functions. In fact, the kinematics of boundary CFTs are intimately related to those of $\mathbb{RP}^d$ CFTs. We now give a detailed comparison with the closely related BCFT case,\footnote{In the table and discussion below, we take the conformal boundary of the BCFT to be a unit sphere. It can be mapped to the infinite plane boundary by a conformal transformation.} and the result is summarized in Table \ref{table1}.
{\begin{table}
\begin{center}
\begin{tabular}{||c|| c | c ||}
\hline\hline
& $\mathbb{RP}^d$ CFT & BCFT$_d$ \\ [0.5ex]
\hline\hline
One-point function & $\langle \mathcal{O}_\Delta(x)\rangle= \frac{a_{\mathcal{O}}}{(1+x^2)^\Delta}$ & $\langle \mathcal{O}_\Delta(x)\rangle_B=\frac{a_{B,\mathcal{O}}}{|1-x^2|^\Delta}$\\ [0.5ex]
\hline
Two-point cross ratio & $\eta=\frac{(x_1-x_2)^2}{(1+x_1^2)(1+x_2^2)}$ & $\xi=\frac{(x_1-x_2)^2}{(1-x_1^2)(1-x_2^2)}$ \\ [0.5ex]
\hline
Two-point function & $G=\frac{1}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\mathcal{G}(\eta)$ & $G_B=\frac{1}{|1-x_1^2|^{\Delta_1}|1-x_2^2|^{\Delta_2}} \mathcal{G}_B(\xi)$\\ [0.5ex]
\hline
OPE limits & $\eta\to 0$ (bulk channel) & $\xi\to 0$ (bulk channel)\\
& $\eta \to 1$ (image channel) & $\xi\to\infty$ (boundary channel)\\ [0.5ex]
\hline
Regge limit& $\eta\to\infty$& $\xi\to-1$\\ [0.5ex]
\hline\hline
\end{tabular}
\caption{A comparison of kinematics for $\mathbb{RP}^d$ CFTs and boundary CFTs.}
\label{table1}
\end{center}
\end{table}}
\subsubsection*{Intermezzo: comparing with boundary CFTs}
As we mentioned in the last section, a boundary CFT preserves the conformal symmetry that leaves $N_B$ invariant. Up to a Wick rotation, the two systems preserve the same symmetry group. The inversion sphere $x^2=1$ now becomes a fixed locus in the BCFT case, and is the location of the conformal boundary. The one-point function of an operator inserted away from the boundary is determined by kinematics\footnote{In order to distinguish from the real projective space case, we use the subscript $B$ to denote objects in boundary CFTs.}
\begin{equation}\label{B1pt}
\langle\mathcal{O}_\Delta(x)\rangle_B=\frac{a_{B,\mathcal{O}}}{|2P\cdot N_B|^\Delta}=\frac{a_{B,\mathcal{O}}}{|1-x^2|^\Delta}\;,
\end{equation}
up to a constant $a_{B,\mathcal{O}}$. The one-point coefficients $a_{B,\mathcal{O}}$ are part of the new data defining a BCFT. For two operators inserted away from the boundary\footnote{We will assume that the operators are inserted on the same side of the boundary.}, we can construct a cross ratio
\begin{equation}\label{defxi}
\xi=\frac{(-2 P_1\cdot P_2)}{(2N_B\cdot P_1)(2N_B\cdot P_2)}=\frac{(x_1-x_2)^2}{(1-x_1^2)(1-x_2^2)}\;,
\end{equation}
which is invariant under the residual conformal symmetry and the independent rescaling of the operators. In Euclidean spacetime, the range of the cross ratio is $\xi\in[0,\infty)$. The two-point function can be written as a function of the cross ratio
\begin{equation}
\langle \mathcal{O}_{\Delta_1}(x_1)\mathcal{O}_{\Delta_2}(x_2)\rangle_B=\frac{\mathcal{G}_B(\xi)}{|2N_B\cdot P_1|^{\Delta_1}|2N_B\cdot P_2|^{\Delta_2}}=\frac{\mathcal{G}_B(\xi)}{|1-x_1^2|^{\Delta_1}|1-x_2^2|^{\Delta_2}}\;.
\end{equation}
There are three interesting points on the $\xi$-plane. The point $\xi=0$ is known as the bulk channel OPE limit, and should be identified with the $\eta=0$ case where operators coincide (or light-like separated in Lorentzian signature). We can apply the OPE (\ref{bulkOPE}), and reduce the two-point function into one-point functions. The two-point function can be written as a sum of bulk channel conformal blocks \cite{McAvity:1995zd,Liendo:2012hy}
\begin{equation}
\mathcal{G}_B(\xi)=\sum_k \mu_{B,12k} g^{bulk}_{B,\Delta_k}(\xi)
\end{equation}
where $\mu_{B,12k}=C_{12k}a_{B,k}$ and
\begin{equation}\label{defgBbulk}
g^{bulk}_{B,\Delta}(\xi)=\xi^{\frac{\Delta-\Delta_1-\Delta_2}{2}} {}_2F_1\left(\frac{\Delta+\Delta_1-\Delta_2}{2},\frac{\Delta+\Delta_2-\Delta_1}{2};\Delta-\frac{d}{2}+1;-\xi\right)\;.
\end{equation}
Note that $g^{bulk}_{B,\Delta}(\xi)$ can be identified with $g_\Delta(\eta)$ under the replacement $\xi\leftrightarrow -\eta$,\footnote{There is some formal connection between boundary CFTs and $\mathbb{RP}^d$ CFTs by analytic continuation. For example, in two dimensions the boundary states are defined by $(L_n-\bar{L}_{-n})|\mathcal{B}\rangle=0$ while the crosscap states are defined by $(L_n-(-1)^n\bar{L}_{-n})|\mathcal{C}\rangle=0$. Here we can relate them formally by making the analytic continuation $x^2\to -x^2$. This not only gives $\xi\to-\eta$, but also fixes the one-point function and the kinematic factors we extracted from the two-point functions.} up to an overall normalization. The limit $\xi=\infty$ is known as the boundary channel limit, where operators inserted in the bulk are taken close to the boundary. A different OPE is involved in this limit
\begin{equation}
\mathcal{O}_\Delta(x)=\frac{a_{B,\mathcal{O}}}{|1-x^2|^\Delta}+\sum_l \rho_l\, C[x]\widehat{O}_{\Delta_l}(x)
\end{equation}
which expresses a bulk operator as a sum of operators $\widehat{O}_{\Delta_l}(x)$ on the boundary. Here $C[x]$ is a differential operator determined by symmetry. The boundary operator spectrum $\Delta_l$ and OPE coefficients $\rho_l$ are new CFT data. Applying the boundary OPE to each operator reduces the two-point function to a sum of two-point functions of operators living on the boundary, and the latter are fully fixed by the residual conformal symmetry. The contribution of exchanging a boundary operator is summarized by a boundary channel conformal block
\begin{equation}
g^{boundary}_{B,\Delta}(\xi)=\xi^{-\Delta}{}_2F_1\left(\Delta,\Delta-\frac{d}{2}+1;2\Delta+2-d;-\frac{1}{\xi}\right)\;,
\end{equation}
and the correlator can be decomposed in the boundary channel as
\begin{equation}
\mathcal{G}_B(\xi)=a_{B,\mathcal{O}}^2\delta_{12}+\sum_l \rho_{1,l}\rho_{2,l}g^{boundary}_{B,\Delta_l}(\xi)\;.
\end{equation}
The boundary channel of BCFT two-point functions has no analogue in the real projective space case, because the identification (\ref{inversion}) does not have any fixed point. The two OPE channels should lead to the same answer, and this gives rise to a ``crossing equation''
\begin{equation}
\sum_k \mu_{B,12k} g^{bulk}_{B,\Delta_k}(\xi)=a_{B,\mathcal{O}}^2\delta_{12}+\sum_l \rho_{1,l}\rho_{2,l}g^{boundary}_{B,\Delta_l}(\xi)\;.
\end{equation}
Finally, the limit $\xi=-1$ is known as the ``Regge limit'' \cite{Mazac:2018biw}.
In this limit, one operator lies on the lightcone created by the image of the other operator with respect to the boundary. The Regge limit can only be reached in Lorentzian signature, and requires analytic continuation from the Euclidean regime. It was proven in \cite{Mazac:2018biw} that for any unitary boundary CFT, the two-point function has a bounded behavior in the Regge limit, which is controlled by the bulk channel exchange of operators with the lowest dimension.
We can think of the $\eta=\infty$ limit of $\mathbb{RP}^d$ CFTs as the analogue of the $\xi=-1$ limit of BCFTs, as both cases require analytic continuation from the Euclidean regime.\footnote{It might also be tempting to identify $\eta=1$ with the BCFT Regge limit, since in both cases one operator approaches the lightcone of (or coincides with) the image of the other operator. However, the crossing equation (\ref{crossingeqn}) tells us that the $\eta=1$ limit is physically no different from the $\eta=0$ limit.} This intuition will also be supported in the next section when we study Witten diagrams, which arise in the weakly coupled duals of $\mathbb{RP}^d$ CFTs.
\section{Holography on quotient AdS and Witten diagrams}\label{Sec:3}
\subsection{$AdS_{d+1}/\mathbb{Z}_2$}
In this section, we study a simple toy model of holography for $\mathbb{RP}^d$ CFTs. We extend the quotient of the boundary spacetime into the bulk to define a $\mathbb{Z}_2$ quotient of AdS space, and consider perturbative physics on this background. This over-simplified setup is effective in nature, and does not correspond to top-down models. However, it captures all the essential kinematics which are relevant to various applications later in the paper. This setup of quotient of AdS appeared previously, {\it e.g.}, in \cite{Verlinde:2015qfa, Maloney:2016gsg, Nakayama:2015mva,Nakayama:2016xvw,Lewkowycz:2016ukf}. Here we give a detailed account using the embedding space formalism introduced in Section \ref{Sec:2.1}.
For the calculations in this section, it will be most convenient to consider the Euclidean AdS space\footnote{We have set the curvature of AdS to 1.}
\begin{equation}
-\left(Z^1\right)^2+\left(Z^2\right)^2+\ldots+\left(Z^{d+2}\right)^2=-1\;,\quad Z^1>0\;,
\end{equation}
and analytically continue the results to Lorentzian signature in the end. In terms of the Poincar\'e coordinates $z=(z_0,\vec{z})$, the embedding space vector $Z^A$ is parameterized as
\begin{equation}
Z^A=\frac{1}{z_0}\left(\frac{1+z_0^2+\vec{z}^2}{2},\frac{1-z_0^2-\vec{z}^2}{2},\vec{z}\right)\;.
\end{equation}
We extend the boundary inversion (\ref{calIembed}) into the bulk by requiring that $\mathcal{I}$ should act in the same way on the AdS embedding vector. This leads to
\begin{equation}\label{AdScalI}
\mathcal{I}:\quad z_0\to \frac{z_0}{z_0^2+\vec{z}^2}\;,\quad \vec{z}\to-\frac{\vec{z}}{z_0^2+\vec{z}^2}\;,
\end{equation}
and defines a quotient space $qAdS_{d+1}\equiv AdS_{d+1}/\mathbb{Z}_2$ by the identification
\begin{equation}\label{AdSquotient}
z_0\leftrightarrow \frac{z_0}{z_0^2+\vec{z}^2}\;,\quad \vec{z}\leftrightarrow-\frac{\vec{z}}{z_0^2+\vec{z}^2}\;.
\end{equation}
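One can check directly that (\ref{AdSquotient}) is an involution, so the identification is consistent. Writing $r^2\equiv z_0^2+\vec{z}^2$ for the image point $(z_0',\vec{z}')$, we have

```latex
z_0'=\frac{z_0}{r^2}\;,\quad \vec{z}'=-\frac{\vec{z}}{r^2}
\quad\Longrightarrow\quad
z_0'^2+\vec{z}'^2=\frac{1}{r^2}\;,\quad
\frac{z_0'}{z_0'^2+\vec{z}'^2}=z_0\;,\quad
-\frac{\vec{z}'}{z_0'^2+\vec{z}'^2}=\vec{z}\;,
```

so applying the map twice returns the original point $(z_0,\vec{z})$.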
Note that the identification (\ref{AdSquotient}) is geometrically represented in Poincar\'e coordinates by an inversion with respect to the hemisphere $\mathcal{H}$ defined by
\begin{equation}\label{defcalH}
z_0^2+\vec{z}^2=1\;,\quad z_0\geq 0\;.
\end{equation}
This is illustrated by Figure \ref{Fig:AdSquotient}. The map (\ref{AdSquotient}) has a fixed point
\begin{equation}
z_0=1\;,\quad \vec{z}=0\;,
\end{equation}
which sits at the north pole of the inversion hemisphere $\mathcal{H}$. In terms of embedding coordinates, the fixed point corresponds to the fixed vector $N_c$ introduced in (\ref{Nc}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{FigAdSquotient}
\caption{Illustration of $AdS_{d+1}/\mathbb{Z}_2$ in Poincar\'e coordinates. A point $Z$ inside the hemisphere $\mathcal{H}$ is identified with its inversion image $\bar{Z}$ outside the hemisphere. The quotient has a fixed point $N_c$, which is the north pole of the hemisphere.}
\label{Fig:AdSquotient}
\end{center}
\end{figure}
Now we consider a scalar field $\varphi_\pm$ living on the quotient space $qAdS_{d+1}$. It is convenient to extend the definition of the scalar field to the full $AdS_{d+1}$ and impose the condition that
\begin{equation}
\varphi_\pm(z)=\pm\, \varphi_\pm(\mathcal{I}\circ z)\;.
\end{equation}
We can define the propagators of $\varphi_\pm$ on $qAdS_{d+1}$ (and extended to $AdS_{d+1}$) as follows. The bulk-to-bulk propagator $H_{BB}^{\Delta,\pm}(Z,W)$ satisfies the following equation of motion
\begin{equation}
\left(\square_Z+\Delta(\Delta-d)\right)H_{BB}^{\Delta,\pm}(Z,W)=\delta^{d+1}(Z,W)\pm\delta^{d+1}(Z,\bar{W})\;,
\end{equation}
where $\bar{W}$ denotes the image point of $W$ under the inversion (\ref{AdScalI}). The propagator $H_{BB}^{\Delta,\pm}(Z,W)$ can be expressed in terms of the usual AdS propagators before taking the quotient, as
\begin{equation}\label{HBB}
H_{BB}^{\Delta,\pm}(Z,W)=G^\Delta_{BB}(Z,W)\pm G^\Delta_{BB}(Z,\bar{W})
\end{equation}
where $G^\Delta_{BB}(Z,W)$ satisfies
\begin{equation}\label{eomGBB}
\left(\square_Z+\Delta(\Delta-d)\right)G_{BB}^{\Delta}(Z,W)=\delta^{d+1}(Z,W)\;,
\end{equation}
and is given explicitly by
\begin{equation}
G^\Delta_{BB}(Z,W)=C^{\Delta,d}_{BB}(-u)^{-\Delta}{}_2F_1\left(\Delta,\Delta-\frac{d}{2}+\frac{1}{2};2\Delta-d+1;u^{-1}\right)\;,
\end{equation}
with
\begin{equation}
u=-\frac{Z\cdot W+1}{2}\;,\quad C^{\Delta,d}_{BB}=\frac{\Gamma(\Delta)\Gamma(\Delta-\frac{d}{2}+\frac{1}{2})}{(4\pi)^{\frac{d+1}{2}}\Gamma(2\Delta-d+1)}\;.
\end{equation}
Noting that
\begin{equation}
Z\cdot W=\bar{Z}\cdot \bar{W}\;,\quad Z\cdot \bar{W}=\bar{Z}\cdot W\;,
\end{equation}
we have the following identities for the bulk-to-bulk propagator
\begin{equation}\label{HBBidentities}
H_{BB}^{\Delta,\pm}(Z,W)=H_{BB}^{\Delta,\pm}(W,Z)=H_{BB}^{\Delta,\pm}(\bar{Z},\bar{W})\;, \quad H_{BB}^{\Delta,\pm}(Z,\bar{W})=\pm H_{BB}^{\Delta,\pm}(Z,W)\;.
\end{equation}
The bulk-to-boundary propagator $H_{B\partial}^{\Delta,\pm}(Z,P)$ can be obtained from the limit of $H_{BB}^{\Delta,\pm}(Z,W)$ as we move the bulk point $W$ close to the boundary. It is easy to prove that
\begin{equation}
Z\cdot \bar{P}=(\vec{x}^2)^{-1}\bar{Z}\cdot P\;,\quad \bar{Z}\cdot \bar{P}=(\vec{x}^2)^{-1}Z\cdot P\;.
\end{equation}
This implies that the usual $AdS_{d+1}$ bulk-to-boundary propagator
\begin{equation}
G^\Delta_{B\partial}(Z,P)=\left(\frac{1}{-2Z\cdot P}\right)^\Delta=\left(\frac{z_0}{z_0^2+(\vec{z}-\vec{x})^2}\right)^\Delta
\end{equation}
transforms as
\begin{equation}\label{GBpartialinversion}
G^\Delta_{B\partial}(Z,\bar{P})=(\vec{x}^2)^\Delta G^\Delta_{B\partial}(\bar{Z},P)\;.
\end{equation}
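The identity (\ref{GBpartialinversion}) follows in one line from the scalar product identities above:

```latex
G^\Delta_{B\partial}(Z,\bar{P})=\left(\frac{1}{-2Z\cdot\bar{P}}\right)^\Delta
=\left(\frac{\vec{x}^2}{-2\bar{Z}\cdot P}\right)^\Delta
=(\vec{x}^2)^\Delta\, G^\Delta_{B\partial}(\bar{Z},P)\;.
```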
On the other hand, we recall from (\ref{opidentification}) that the boundary operator receives an extra factor of $(\vec{x}^2)^{-\Delta}$ under inversion, which cancels the $(\vec{x}^2)^{\Delta}$ in (\ref{GBpartialinversion}). Therefore the $qAdS_{d+1}$ bulk-to-boundary propagator should be defined similarly to (\ref{HBB}), as
\begin{equation}\label{HBpartiala}
H_{B\partial}^{\Delta,\pm}(Z,P)=G^\Delta_{B\partial}(Z,P)\pm G^\Delta_{B\partial}(\bar{Z},P)\;,
\end{equation}
or equivalently,
\begin{equation}\label{HBpartialb}
H_{B\partial}^{\Delta,\pm}(Z,P)=G^\Delta_{B\partial}(Z,P)\pm (\vec{x}^2)^{-\Delta}G^\Delta_{B\partial}(Z,\bar{P})\;.
\end{equation}
\subsection{Tree-level Witten diagrams on quotient AdS space}\label{Sec:3.2}
Having obtained bulk-to-bulk and bulk-to-boundary propagators on $qAdS_{d+1}$, we are now ready to define Witten diagrams.
\vspace{0.5cm}
\noindent{\bf One-point diagram}
\vspace{0.1cm}
\noindent Let us start with the one-point function. It is given by a single bulk-to-boundary propagator which ends on the conformal boundary where the operator is inserted. Note that the Witten diagram must preserve the fixed vector $N_c$ defined in (\ref{Nc}). Therefore, the other end of the propagator must be attached to the inversion fixed point $N_c$ (Figure \ref{Fig:1pt}). It corresponds to a vertex $\varphi(N_c)$ localized at the fixed point. We have
\begin{equation}\label{AdS1pt}
\langle \mathcal{O}_\Delta \rangle=H_{B\partial}^{\Delta,\pm}(N_c,P)=\left\{\begin{array}{l}\frac{2}{(1+\vec{x}^2)^\Delta}\quad \text{for }+ \text{ parity}\\ 0\quad\quad\quad\;\;\, \text{for }- \text{ parity}\end{array}\right.
\end{equation}
which has the correct structure (\ref{1ptfun}) determined by symmetry. Note that the information of the vertex $\varphi(N_c)$ was not contained in the original theory before the quotient, but is put in by hand in this toy model.
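Explicitly, since $\bar{N}_c=N_c$, the quotient propagator (\ref{HBpartiala}) evaluated at the fixed point gives

```latex
H_{B\partial}^{\Delta,\pm}(N_c,P)=(1\pm 1)\,G^\Delta_{B\partial}(N_c,P)
=(1\pm 1)\left(\frac{z_0}{z_0^2+(\vec{z}-\vec{x})^2}\right)^{\Delta}\bigg|_{z_0=1,\,\vec{z}=0}
=\frac{1\pm 1}{(1+\vec{x}^2)^\Delta}\;,
```

reproducing (\ref{AdS1pt}): a factor of $2$ for the $+$ parity, and zero for the $-$ parity.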
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{Fig1pt}
\caption{The one-point function is given by a bulk-to-boundary propagator with no integration.}
\label{Fig:1pt}
\end{center}
\end{figure}
\vspace{0.5cm}
\noindent{\bf Two-point contact Witten diagrams}
\vspace{0.1cm}
\noindent For two-point functions, we can define the following tree-level contact Witten diagram (Figure \ref{Fig:con})
\begin{equation}\label{Vcon0}
V^{con,0}_{\Delta_1,\Delta_2}(P_1,P_2)=\frac{1}{4}H_{B\partial}^{\Delta_1,+}(N_c,P_1)H_{B\partial}^{\Delta_2,+}(N_c,P_2)\;,
\end{equation}
which factorizes into the product of two one-point functions. It comes from the vertex $\frac{1}{4}\varphi_1\varphi_2(N_c)$ which localizes at the fixed point $N_c$. Note that it is important that both operators are parity even since $H_{B\partial}^{\Delta,-}(N_c,P_1)$ vanishes. In terms of the cross ratio, the contact Witten diagram reads
\begin{equation}\label{defV0a}
V^{con,0}_{\Delta_1,\Delta_2}(P_1,P_2)=\frac{\mathcal{V}^{con,0}_{\Delta_1,\Delta_2}(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}
\end{equation}
where
\begin{equation}\label{defV0b}
\mathcal{V}^{con,0}_{\Delta_1,\Delta_2}(\eta)=1\;.
\end{equation}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{Figcon}
\caption{The two-point contact Witten diagram is given by a product of bulk-to-boundary propagators with no integration.}
\label{Fig:con}
\end{center}
\end{figure}
To define a two-point contact diagram for operators with odd parity, we can add two derivatives to the vertex. The vertex now becomes $\frac{1}{4}(\nabla^\mu \varphi_1 \nabla_\mu \varphi_2)(N_c)$, and leads to the following diagram
\begin{equation}
\begin{split}
V^{con,2}_{\Delta_1,\Delta_2}(P_1,P_2)={}&\frac{1}{4}\nabla^\mu H_{B\partial}^{\Delta_1,-}(Z,P_1)\nabla_\mu H_{B\partial}^{\Delta_2,-}(Z,P_2)\big|_{Z=N_c}\;,\\
={}&\nabla^\mu G_{B\partial}^{\Delta_1}(Z,P_1)\nabla_\mu G_{B\partial}^{\Delta_2}(Z,P_2)\big|_{Z=N_c}\;.
\end{split}
\end{equation}
Using the identity
\begin{equation}
\begin{split}
\nabla_\mu G^{\Delta_1}_{B\partial}(Z,P_1) \nabla^\mu G^{\Delta_2}_{B\partial}(Z,P_2)={}&\Delta_1\Delta_2\bigg(G^{\Delta_1}_{B\partial}(Z,P_1)G^{\Delta_2}_{B\partial}(Z,P_2)\\
{}&-2x_{12}^2G^{\Delta_1+1}_{B\partial}(Z,P_1)G^{\Delta_2+1}_{B\partial}(Z,P_2)\bigg)\;,
\end{split}
\end{equation}
we get
\begin{equation}
V^{con,2}_{\Delta_1,\Delta_2}(P_1,P_2)=\frac{\mathcal{V}^{con,2}_{\Delta_1,\Delta_2}(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}
\end{equation}
where
\begin{equation}
\mathcal{V}^{con,2}_{\Delta_1,\Delta_2}(\eta)=\Delta_1\Delta_2(1-2\eta)\;.
\end{equation}
It is clear that $\mathcal{V}^{con,2}_{\Delta_1,\Delta_2}(\eta)$ is antisymmetric under $\eta\leftrightarrow 1-\eta$.
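The origin of the factor $(1-2\eta)$ is easy to trace. Evaluating the identity above at $Z=N_c$, where $G^{\Delta}_{B\partial}(N_c,P_i)=(1+x_i^2)^{-\Delta}$, one finds

```latex
V^{con,2}_{\Delta_1,\Delta_2}(P_1,P_2)
=\frac{\Delta_1\Delta_2}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}
\left(1-\frac{2\,x_{12}^2}{(1+x_1^2)(1+x_2^2)}\right)
=\frac{\Delta_1\Delta_2\,(1-2\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,
```

where the definition (\ref{defeta}) of the cross ratio was used in the last step.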
It is straightforward to generalize these contact Witten diagrams to include more derivatives in the vertex. In general, for a vertex with $2L$ derivatives, the two-point contact Witten diagram $\mathcal{V}^{con,2L}_{\Delta_1,\Delta_2}(\eta)$ is a polynomial in $\eta$ of degree $L$. Moreover, the contact Witten diagrams have the following behavior as $\eta\to\infty$
\begin{equation}\label{VconRegge}
\mathcal{V}^{con,2L}_{\Delta_1,\Delta_2}(\eta)\to \eta^L\;,\quad \eta\to \infty\;.
\end{equation}
This is consistent with the intuition that the $\eta\to\infty$ limit can be thought of as a ``Regge limit'' for the two-point correlator -- increasing the number of derivatives in the vertex leads to a more divergent behavior in the correlator at large cross ratio.
\vspace{0.5cm}
\noindent{\bf Two-point exchange Witten diagrams}
\vspace{0.1cm}
\noindent Let us now define the two-point exchange Witten diagram (Figure \ref{Fig:exch})
\begin{equation}
V^{exchange,\pm}_\Delta(P_1,P_2)=\frac{1}{2}\int_{qAdS_{d+1}}d^{d+1}Z\; H^{\Delta,+}_{BB}(N_c,Z)H_{B\partial}^{\Delta_1,\pm}(Z,P_1)H_{B\partial}^{\Delta_2,\pm}(Z,P_2)
\end{equation}
which requires the presence of the localized vertex $\varphi(N_c)$ for the scalar field with dual conformal dimension $\Delta$, and a bulk cubic vertex $\varphi \varphi_1\varphi_2(Z)$ where $\varphi_{1,2}$ have dual dimensions $\Delta_{1,2}$. Let us use (\ref{HBB}), (\ref{HBpartiala}) and (\ref{HBpartialb}) to express the $qAdS_{d+1}$ propagators in terms of the propagators in the full $AdS_{d+1}$. Note that $N_c$ is its own image, therefore $H^{\Delta,+}_{BB}(N_c,Z)=2G^\Delta_{BB}(Z,N_c)$. On the other hand, $H^{\Delta,+}_{BB}(N_c,Z)=2G^\Delta_{BB}(\bar{Z},N_c)$ thanks to (\ref{HBBidentities}). Using this and (\ref{GBpartialinversion}), we can expand the product and massage the expressions such that the $AdS_{d+1}$ propagators join into connected diagrams. It becomes obvious that the integrals in $V^{exchange,\pm}_\Delta(P_1,P_2)$ can be organized into linear combinations of exchange diagrams defined in the full $AdS_{d+1}$ space (in particular, integrals inside the sphere and their images outside combine, and extend to the full space)
\begin{equation}\label{VasWWbar}
V^{exchange,\pm}_\Delta(P_1,P_2)=W^{exchange}_\Delta(P_1,P_2)\pm (x_2^2)^{-\Delta_2}\bar{W}^{exchange}_\Delta(P_1,P_2)
\end{equation}
where
\begin{equation}\label{defW}
W^{exchange}_\Delta(P_1,P_2)=\int_{AdS_{d+1}}d^{d+1}Z\; G^{\Delta}_{BB}(N_c,Z)G_{B\partial}^{\Delta_1}(Z,P_1)G_{B\partial}^{\Delta_2}(Z,P_2)\;,
\end{equation}
and
\begin{equation}\label{defWbar}
\bar{W}^{exchange}_\Delta(P_1,P_2)=W^{exchange}_\Delta(P_1,\bar{P}_2)\;.
\end{equation}
When written in terms of the cross ratio
\begin{equation}
\begin{split}
{}&W^{exchange}_\Delta(P_1,P_2)=\frac{\mathcal{W}^{exchange}_\Delta(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,\\
{}&(x_2^2)^{-\Delta_2}\bar{W}^{exchange}_\Delta(P_1,P_2)=\frac{\bar{\mathcal{W}}^{exchange}_\Delta(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,
\end{split}
\end{equation}
$\mathcal{W}^{exchange}_\Delta$ and $\bar{\mathcal{W}}^{exchange}_\Delta$ are related by
\begin{equation}\label{calWWbcrossing}
\bar{\mathcal{W}}^{exchange}_\Delta(\eta)=\mathcal{W}^{exchange}_\Delta(1-\eta)\;.
\end{equation}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{Figexch}
\caption{Illustration of an exchange Witten diagram. One end of the bulk-to-bulk propagator is fixed at $N_c$, while the other end is connected to the cubic vertex and integrated over.}
\label{Fig:exch}
\end{center}
\end{figure}
\vspace{0.5cm}
\noindent{\bf Equation of motion relations}
\vspace{0.1cm}
\noindent The exchange Witten diagrams $W^{exchange}_\Delta$ and $\bar{W}^{exchange}_\Delta$ introduced above are related to the zero-derivative contact Witten diagram $V^{con,0}_{\Delta_1,\Delta_2}$ by the equation of motion operators. More precisely, let us define
\begin{equation}
\begin{split}
\mathbf{EOM}={}& L^2+\Delta(\Delta-d)\;,\\
\overline{\mathbf{EOM}}={}& \bar{L}^2+\Delta(\Delta-d)
\end{split}
\end{equation}
where $L^2$ and $\bar{L}^2$ are the conformal Casimir operators defined in (\ref{defL2}) and (\ref{defLbar2}). The exchange diagrams are then related to the contact diagram via
\begin{equation}
\begin{split}\label{eomWtoV}
\mathbf{EOM}[W^{exchange}_\Delta]={}&V^{con,0}_{\Delta_1,\Delta_2}\;,\\
\overline{\mathbf{EOM}}[\bar{W}^{exchange}_\Delta]={}&V^{con,0}_{\Delta_1,\Delta_2}\;.
\end{split}
\end{equation}
To prove these relations, let us notice that the integral (\ref{defW}) is conformally invariant
\begin{equation}\label{covW}
(L_1^{AB}+L_2^{AB}+\mathcal{L}^{AB})W^{exchange}_\Delta(P_1,P_2;N_c)=0\;.
\end{equation}
Here we are viewing $W^{exchange}_\Delta$ as a function of the bulk point $N_c$, and $\mathcal{L}^{AB}$ are the $AdS_{d+1}$ isometry generators at the point $N_c$. Using (\ref{covW}) twice, we get
\begin{equation}
L^2W^{exchange}_\Delta(P_1,P_2;N_c)=\mathcal{L}^2W^{exchange}_\Delta(P_1,P_2;N_c)
\end{equation}
where $\mathcal{L}^2=\frac{1}{2}\mathcal{L}^{AB}\mathcal{L}_{AB}$. Notice that $\mathcal{L}^2$ acts on the bulk coordinates as $\square$, and collapses the bulk-to-bulk propagator in (\ref{defW}) to a delta function by (\ref{eomGBB}). Integrating over $Z$ gives us the contact diagram (\ref{Vcon0}). The proof for $\bar{W}^{exchange}_\Delta$ is analogous.
We will also explicitly evaluate these exchange Witten diagrams, and study their decompositions into conformal blocks. We will delay the discussion until Section \ref{Sec:4}.
\subsection{Geodesic Witten diagrams}\label{Sec:geoW}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{FigWgeo}
\caption{Illustration of a geodesic Witten diagram. The integration of the cubic vertex is restricted to the geodesic line $\gamma_{12}$ connecting the two boundary insertions.}
\label{Fig:Wgeo}
\end{center}
\end{figure}
We can also define a variation of the exchange Witten diagrams, which is holographically dual to the conformal blocks (\ref{defg}), (\ref{defgbar}). These modified exchange Witten diagrams are known as the geodesic Witten diagrams \cite{Hijano:2015zsa}. These objects were also considered in \cite{daCunha:2016crm} in a different context. We define the dual of $g_\Delta(\eta)$, in analogy with (\ref{defW}), by
\begin{equation}\label{defWgeo}
W^{geo}_\Delta(P_1,P_2)=\int_{\gamma_{12}}d\gamma \; G^{\Delta}_{BB}(N_c,\gamma)G_{B\partial}^{\Delta_1}(\gamma,P_1)G_{B\partial}^{\Delta_2}(\gamma,P_2)\;,
\end{equation}
where $\gamma_{12}$ is the geodesic connecting $\vec{x}_1$ and $\vec{x}_2$ on the conformal boundary. In Poincar\'e coordinates, the geodesic $\gamma_{12}$ is just a semicircle.
Instead of integrating over the whole $AdS_{d+1}$, the integration is now restricted to the geodesic only. The geodesic Witten diagram is illustrated in Figure \ref{Fig:Wgeo}. After extracting a kinematic factor
\begin{equation}
W^{geo}_\Delta(P_1,P_2)=\frac{\mathcal{W}^{geo}_\Delta(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,
\end{equation}
the function $\mathcal{W}^{geo}_\Delta(\eta)$ is proportional to $g_\Delta(\eta)$. Similarly, we define
\begin{equation}\label{defWbargeo}
\bar{W}^{geo}_\Delta(P_1,P_2)=\int_{\gamma_{1\bar{2}}}d\gamma \; G^{\Delta}_{BB}(N_c,\gamma)G_{B\partial}^{\Delta_1}(\gamma,P_1)G_{B\partial}^{\Delta_2}(\gamma,\bar{P}_2)\;,
\end{equation}
and
\begin{equation}
(x_2^2)^{-\Delta_2}\bar{W}^{geo}_\Delta(P_1,P_2)=\frac{\bar{\mathcal{W}}^{geo}_\Delta(\eta)}{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}\;,
\end{equation}
where $\gamma_{1\bar{2}}$ is the geodesic connecting $\vec{x}_1$ and the image of $\vec{x}_2$ (see Figure \ref{Fig:Wgeob}). The geodesic Witten diagram $\bar{W}^{geo}_\Delta$ is dual to the image conformal block $\bar{g}_\Delta(\eta)$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{FigWgeob}
\caption{Illustration of a geodesic Witten diagram in the mirror channel. The geodesic $\gamma_{1\bar{2}}$ now connects point 1 and the image of point 2.}
\label{Fig:Wgeob}
\end{center}
\end{figure}
To prove their equivalence, we can use the equation of motion operators introduced in the last subsection. Let us act on the geodesic Witten diagrams with $\mathbf{EOM}$ and $\overline{\mathbf{EOM}}$, and use the definitions of $W^{geo}_\Delta$ and $\bar{W}^{geo}_\Delta$. The integrations along the geodesics preserve conformal invariance, which allows us to repeat the previous analysis and use the equation of motion for the bulk-to-bulk propagator. Note however that since the geodesic lines do not pass through the fixed point $N_c$ for generic end points $\vec{x}_1$, $\vec{x}_2$, the delta function has no support on the integration contour. Therefore, instead of generating contact diagrams we get
\begin{equation}
\mathbf{EOM}[W^{geo}_\Delta]=0\;, \quad \overline{\mathbf{EOM}}[\bar{W}^{geo}_\Delta]=0\;.
\end{equation}
On the boundary side, these two equations are just the quadratic conformal Casimir equations. It is also clear from the definitions (\ref{defWgeo}), (\ref{defWbargeo}) that $\mathcal{W}^{geo}_\Delta(\eta)$ and $\bar{\mathcal{W}}^{geo}_\Delta(\eta)$ satisfy the boundary conditions (\ref{gbc}), (\ref{gbarbc}). This concludes the proof that the geodesic Witten diagrams are the bulk dual of the conformal blocks.
\subsection{Comparison with interface CFT from the probe-brane setup}
To conclude this section, we give a quick comparison of the above discussion with the closely related interface/boundary CFT from the probe-brane setup \cite{DeWolfe:2001pq,Aharony:2003qf,Rastelli:2017ecj,Mazac:2018biw} (which is the simplest Karch-Randall setup \cite{Karch:2000gx,Karch:2001cw}). As we have seen in Section \ref{Sec:2}, the kinematics of the $\mathbb{RP}^d$ CFT and BCFT share many similarities. We now show how these kinematical similarities are extended into the bulk, and also point out a number of differences.
In the probe-brane setup, we choose a special slice of $AdS_d$ space inside $AdS_{d+1}$. There are local degrees of freedom living on the $AdS_d$ slice, and they are coupled to the bulk fields in $AdS_{d+1}$. However, the $AdS_d$ brane is treated as a probe and does not back-react on the geometry. In \cite{Rastelli:2017ecj,Mazac:2018biw}, ``straight'' probe branes are considered in great detail where the $AdS_d$ slice is embedded in $AdS_{d+1}$ as the restriction to $z_d=0$. The setup corresponds to CFTs with a straight co-dimension 1 interface at $x_d=0$. One can then use the method of images to take only half of the $AdS_{d+1}$ space with $z_d\geq 0$, and consider boundary CFTs defined on $x_d\geq 0$ with Dirichlet or Neumann boundary conditions, as was done in \cite{Mazac:2018biw}. Here we consider a slightly modified setup where the probe brane is a hemisphere in the Poincar\'e coordinates of $AdS_{d+1}$. It is related to the ``straight'' case by a conformal mapping. The method of images is similar in the spherical case (one can also first apply the method in the ``straight'' case and then perform the conformal mapping), and will not be elaborated here. We will therefore focus only on the probe brane case where the space continues beyond the interface.
As we pointed out in Section \ref{Sec:2.1}, the spherical boundary of a BCFT preserves the fixed embedding vector $N_B$ (\ref{NB}). In the bulk, the boundary of the BCFT is extended into the hemisphere
\begin{equation}
N_B\cdot Z=\frac{1-z_0^2-\vec{z}^2}{2z_0}=0\;,
\end{equation}
which is the probe $AdS_d$ brane. This hemisphere coincides with the inversion hemisphere $\mathcal{H}$ defined in (\ref{defcalH}). Note that in the $\mathbb{RP}^d$ CFT case the fixed vector $N_c$ corresponds to a point in the bulk, while in the BCFT case the fixed vector $N_B$ is a normal vector defining a fixed co-dimension 1 surface.
We can modify the Witten diagrams defined in Section \ref{Sec:3.2} to define their BCFT counterparts, by simply integrating over the whole hemisphere $\mathcal{H}$ instead of localizing on the north pole. For example, the BCFT one-point function is defined as
\begin{equation}
\langle \mathcal{O}_\Delta(P)\rangle_B=\int_{\mathcal{H}}dZ_{\mathcal{H}} G^\Delta_{B\partial}(Z_{\mathcal{H}},P)\sim \frac{1}{|1-x^2|^\Delta}\;,
\end{equation}
which reproduces the correct structure (\ref{B1pt}). The two-point contact and exchange Witten diagrams are respectively defined as
\begin{equation}\label{WBcon}
W^{con,0}_{B,\Delta_1,\Delta_2}(P_1,P_2)=\int_{\mathcal{H}}dZ_{\mathcal{H}} G^{\Delta_1}_{B\partial}(Z_{\mathcal{H}},P_1)G^{\Delta_2}_{B\partial}(Z_{\mathcal{H}},P_2)\;,
\end{equation}
\begin{equation}\label{WBexchange}
W^{exchange;bulk}_{B,\Delta}(P_1,P_2)=\int_{\mathcal{H}}dZ_{\mathcal{H}} \int_{AdS_{d+1}}dW G^{\Delta}_{BB}(Z_{\mathcal{H}},W) G^{\Delta_1}_{B\partial}(W,P_1)G^{\Delta_2}_{B\partial}(W,P_2)\;.
\end{equation}
It is easy to verify that a similar equation of motion identity relates the exchange Witten diagram to the contact Witten diagram
\begin{equation}
\mathbf{EOM}[W^{exchange;bulk}_{B,\Delta}]=W^{con,0}_{B,\Delta_1,\Delta_2}\;,
\end{equation}
by using similar arguments.
The integrals (\ref{WBcon}), (\ref{WBexchange}) can be mapped to the straight probe brane integrals studied in \cite{Rastelli:2017ecj}, by parameterizing the hemisphere $\mathcal{H}$ with the following Poincar\'e coordinates\footnote{Here we switched to the $(-,+,-,+,\ldots,+)$ signature for $\mathbb{R}^{d,2}$. We also need to perform the Wick rotation $z_d\to iz_d$ to make the probe $AdS_d$ brane Euclidean in order to compare with \cite{Rastelli:2017ecj}.}
\begin{equation}
Z_{\mathcal{H}}=\frac{1}{z_0}\left(z_d,0,\frac{1+(z_0^2-z_d^2+z_i^2)}{2},\frac{1-(z_0^2-z_d^2+z_i^2)}{2},z^i\right)\;,\quad i=1,\ldots,d-1\;.
\end{equation}
It then follows that these Witten diagrams are functions of the BCFT cross ratio $\xi$ defined in (\ref{defxi}), rather than the $\mathbb{RP}^d$ CFT cross ratio $\eta$ defined in (\ref{defeta}).
We can also define the geodesic Witten diagram similar to (\ref{defWgeo})
\begin{equation}
W^{geo;bulk}_{B,\Delta}(P_1,P_2)=\int_{\mathcal{H}}dZ_{\mathcal{H}}\int_{\gamma_{12}}d\gamma \; G^{\Delta}_{BB}(Z_{\mathcal{H}},\gamma)G_{B\partial}^{\Delta_1}(\gamma,P_1)G_{B\partial}^{\Delta_2}(\gamma,P_2)\;,
\end{equation}
as was first discussed in \cite{Rastelli:2017ecj}. A similar argument using the equation of motion operator shows that the geodesic Witten diagram is holographically dual to the bulk channel conformal block $g_{B,\Delta}^{bulk}(\xi)$ defined in (\ref{defgBbulk}).
Finally, in the probe brane setup we can define the boundary exchange Witten diagram
\begin{equation}
W^{exchange;boundary}_{B,\Delta}(P_1,P_2)=\int_{\mathcal{H}}dZ_{1,\mathcal{H}}dZ_{2,\mathcal{H}}G^{\Delta,\mathcal{H}}_{BB}(Z_{1,\mathcal{H}},Z_{2,\mathcal{H}}) G^{\Delta_1}_{B\partial}(Z_{1,\mathcal{H}},P_1)G^{\Delta_2}_{B\partial}(Z_{2,\mathcal{H}},P_2)
\end{equation}
where an interface field localized on $\mathcal{H}$ with dimension $\Delta$ is exchanged via the $AdS_d$ propagator $G^{\Delta,\mathcal{H}}_{BB}(Z_{1,\mathcal{H}},Z_{2,\mathcal{H}})$. These diagrams have no analogue in the $\mathbb{RP}^d$ CFT case, since the inversion hemisphere is not a boundary and there are no extra degrees of freedom living on it.
\section{Two-point exchange Witten diagrams}\label{Sec:4}
In this section, we study in detail the properties of the two-point exchange Witten diagrams defined in the previous section. In Section \ref{Sec:evaluate2pt} we explicitly evaluate these diagrams. In Section \ref{Sec:cbdecomp} we study the conformal block decomposition of Witten diagrams in different channels.
\subsection{Evaluating two-point exchange Witten diagrams}\label{Sec:evaluate2pt}
We will focus on the evaluation of the two-point exchange Witten diagram $W^{exchange}_\Delta$. The image diagram $\bar{W}^{exchange}_\Delta$ can be obtained from $W^{exchange}_\Delta$ via the crossing relation (\ref{calWWbcrossing}). The full $qAdS_{d+1}$ exchange Witten diagram $V^{exchange,\pm}_\Delta$ can then be assembled using (\ref{VasWWbar}). We first discuss the special case where $W^{exchange}_\Delta$ can be expressed as a finite sum of contact Witten diagrams. We then give the formula for the exchange diagram when the quantum numbers $\Delta_1$, $\Delta_2$, $\Delta$ are generic. Note that the calculation is exactly the same as doing only one of the two integrals in a scalar four-point exchange Witten diagram in $AdS_{d+1}$, since once we strip away the other two bulk-to-boundary propagators the integral becomes identical.
\vspace{0.5cm}
\noindent{\bf The truncated case}
\vspace{0.1cm}
\noindent Let us first consider a special case when $\Delta_1+\Delta_2-\Delta\in 2\mathbb{Z}_+$. We can use the vertex identity for scalar exchange \cite{DHoker:1999mqo} (see also Appendix A of \cite{Goncalves:2019znr}) to write the integrated vertex as a finite sum of products of two bulk-to-boundary propagators with shifted conformal dimensions. The exchange diagram $W^{exchange}_\Delta$ then becomes
\begin{equation}\label{Wtrunc}
\begin{split}
W^{exchange}_\Delta(P_1,P_2)={}&\sum_{k=k_{\min}}^{k_{\max}}a_k(\vec{x}_{12}^2)^{k-\Delta_2} G_{B\partial}^{k+\Delta_1-\Delta_2}(N_c,P_1)G_{B\partial}^k(N_c,P_2)\\
{}&=\sum_{k=k_{\min}}^{k_{\max}}a_k (\vec{x}_{12}^2)^{k-\Delta_2} V^{con,0}_{k+\Delta_1-\Delta_2,k}(P_1,P_2)
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
{}&k_{\min}=\frac{\Delta-\Delta_1+\Delta_2}{2}\;,\quad \quad k_{\max}=\Delta_2-1\;,\\
{}& a_{k-1}=\frac{(k-\frac{\Delta}{2}+\frac{\Delta_1-\Delta_2}{2})(k-\frac{d}{2}+\frac{\Delta}{2}+\frac{\Delta_1-\Delta_2}{2})}{(k-1)(k-1-\Delta_1+\Delta_2)}a_k\;,\\
{}& a_{\Delta_2-1}=\frac{1}{4(\Delta_1-1)(\Delta_2-1)}\;.
\end{split}
\end{equation}
Written in terms of the cross ratio, the exchange Witten diagram is a polynomial in $\eta^{-1}$
\begin{equation}
\mathcal{W}^{exchange}_\Delta(\eta)=\sum_{k=k_{\min}}^{k_{\max}}a_k\eta^{k-\Delta_2}\;.
\end{equation}
\vspace{0.5cm}
\noindent{\bf The general case}
\vspace{0.1cm}
\noindent In the general case, we can still express the exchange Witten diagram as an infinite sum of contact Witten diagrams. The integral has already been computed in Appendix C of \cite{Zhou:2018sfz}. Here we review the derivation and the result in the language of $\mathbb{RP}^d$ CFT.
The main idea for evaluating the integral is to use the equation of motion relation (\ref{eomWtoV}) to write down a differential equation for the exchange Witten diagram. Written in terms of the cross ratio, the equation of motion identity becomes
\begin{equation}
\mathbf{EOM}[\mathcal{W}^{exchange}_\Delta(\eta)]=1
\end{equation}
where the differential operator acts as
\begin{equation}
\begin{split}
\mathbf{EOM}[\mathcal{G}(\eta)]={}&4\eta^2(\eta-1)\mathcal{G}''(\eta)+\eta(4(\eta-1)(\Delta_1+\Delta_2+1)+2d)\mathcal{G}'(\eta)\\
{}&+((\Delta-\Delta_1-\Delta_2)(-d+\Delta+\Delta_1+\Delta_2)+4\Delta_1\Delta_2\eta)\mathcal{G}(\eta)\;.
\end{split}
\end{equation}
The differential equation should be supplemented by two boundary conditions:
\begin{enumerate}
\item[1)] From the OPE limit $\eta\to0$, we know $\mathcal{W}^{exchange}_\Delta(\eta)$ should behave as $\eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}}$.\footnote{Here we are assuming that $\Delta<\Delta_1+\Delta_2$ such that the single-trace contribution is leading.}
\item[2)] From the definition of the integral (\ref{defW}), $\mathcal{W}^{exchange}_\Delta(\eta)$ has to be smooth at $\eta=1$ (see \cite{DHoker:1998ecp}).
\end{enumerate}
The physical solution is a linear combination of a particular solution
\begin{equation}
f(\eta)={}_3F_2\left(1,\Delta_1,\Delta_2;\frac{\Delta_1+\Delta_2-\Delta}{2}+1,\frac{\Delta_1+\Delta_2+\Delta-d}{2}+1;\eta\right)\;,
\end{equation}
and a homogeneous solution, the conformal block $g_\Delta(\eta)$,
\begin{equation}\label{Wvalue}
\mathcal{W}^{exchange}_\Delta(\eta)=C_1 f(\eta)+C_2 g_\Delta(\eta)\;.
\end{equation}
The coefficients $C_1$, $C_2$ are given by
\begin{equation}
\begin{split}
C_1={}&-\frac{1}{(\Delta_1+\Delta_2-\Delta)(\Delta_1+\Delta_2+\Delta-d)}\;,\\
C_2={}&\frac{\Gamma(\frac{\Delta+\Delta_1-\Delta_2}{2})\Gamma(\frac{\Delta-\Delta_1+\Delta_2}{2})\Gamma(\frac{-\Delta+\Delta_1+\Delta_2}{2})\Gamma(\frac{-d+\Delta+\Delta_1+\Delta_2}{2})}{4\Gamma(\Delta_1)\Gamma(\Delta_2)\Gamma(-\frac{d}{2}+\Delta+1)}\;.
\end{split}
\end{equation}
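As a sketch of a consistency check, one can verify in a small-$\eta$ series expansion that $C_1 f(\eta)$ solves the inhomogeneous equation $\mathbf{EOM}[\cdot]=1$ while the conformal block is a homogeneous solution. The explicit hypergeometric form of the block used below, $g_\Delta(\eta)=\eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}}\,{}_2F_1\big(\tfrac{\Delta+\Delta_1-\Delta_2}{2},\tfrac{\Delta-\Delta_1+\Delta_2}{2};\Delta-\tfrac{d}{2}+1;\eta\big)$, is an assumed input consistent with the Casimir equation and the boundary condition (\ref{gbc}):

```python
# Sketch: series check that C1*f solves EOM[.] = 1 and that the block
# g_Delta(eta) = eta^((D-D1-D2)/2) 2F1((D+D1-D2)/2, (D-D1+D2)/2; D-d/2+1; eta)
# (assumed form, consistent with the Casimir equation) solves EOM[.] = 0.
import sympy as sp

eta = sp.symbols('eta', positive=True)
d = 3
D1, D2, D = sp.Rational(6, 5), sp.Rational(13, 10), sp.Rational(11, 10)
N = 8  # series truncation order

def eom(G):
    return sp.expand(4*eta**2*(eta - 1)*sp.diff(G, eta, 2)
                     + eta*(4*(eta - 1)*(D1 + D2 + 1) + 2*d)*sp.diff(G, eta)
                     + ((D - D1 - D2)*(-d + D + D1 + D2) + 4*D1*D2*eta)*G)

b1, b2 = (D1 + D2 - D)/2 + 1, (D1 + D2 + D - d)/2 + 1
C1 = -1/((D1 + D2 - D)*(D1 + D2 + D - d))

# truncated 3F2 series for f(eta); note (1)_k / k! = 1
f = sum(sp.rf(D1, k)*sp.rf(D2, k)/(sp.rf(b1, k)*sp.rf(b2, k))*eta**k
        for k in range(N))
res_f = sp.expand(eom(C1*f) - 1)
# coefficients below the truncation order are reliable and must vanish
assert all(res_f.coeff(eta, m) == 0 for m in range(N - 1))

p = (D - D1 - D2)/2
aa, bb, cc = (D + D1 - D2)/2, (D - D1 + D2)/2, D - sp.Rational(d, 2) + 1
g = eta**p*sum(sp.rf(aa, k)*sp.rf(bb, k)/(sp.rf(cc, k)*sp.factorial(k))*eta**k
               for k in range(N))
res_g = sp.expand(eom(g)*eta**(-p))
assert all(res_g.coeff(eta, m) == 0 for m in range(N - 1))
```

The vanishing of the constant term in the first check is precisely the statement that fixes $C_1$.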
The ratio $\frac{C_1}{C_2}$ is precisely fixed by the condition that the solution is regular at $\eta=1$. We can also write (\ref{Wvalue}) as two infinite series in $\eta$. Using the definition (\ref{defV0a}), (\ref{defV0b}) for the contact Witten diagram, we can then write $W^{exchange}_\Delta$ as two infinite sums of contact Witten diagrams
\begin{equation}
W^{exchange}_\Delta=\sum_{i=0}^\infty (\vec{x}_{12}^2)^i P_i V^{con,0}_{\Delta_1+i,\Delta_2+i}+\sum_{i=0}^\infty (\vec{x}_{12}^2)^{\frac{\Delta-\Delta_1-\Delta_2+2i}{2}}Q_i V^{con,0}_{\frac{\Delta+\Delta_1-\Delta_2}{2}+i, \frac{\Delta-\Delta_1+\Delta_2}{2}+i}
\end{equation}
where the coefficients $P_i$ and $Q_i$ are given by
\begin{equation}
P_i=\frac{(\Delta_1)_i (\Delta_2)_i}{(\Delta -\Delta_1-\Delta_2) (-d+\Delta +\Delta_1+\Delta_2) \left(\frac{-\Delta +\Delta_1+\Delta_2+2}{2}\right)_i \left(\frac{-d+\Delta +\Delta_1+\Delta_2+2}{2}\right)_i}\;,
\end{equation}
and
\begin{equation}
\begin{split}
Q_i={}&\frac{(-1)^i \Gamma \left(\frac{d-2 i-2\Delta}{2}\right)\sin \left(\frac{\pi (d-2 \Delta )}{2} \right)\Gamma \left(\frac{-d+\Delta +\Delta_1+\Delta_2}{2}\right)}{4 \pi \Gamma (i+1)\Gamma (\Delta_1) \Gamma (\Delta_2)}\\
{}&\times \frac{\Gamma \left(\frac{\Delta -\Delta_1+\Delta_2}{2}\right) \Gamma \left(\frac{\Delta +\Delta_1-\Delta_2}{2}\right) \Gamma \left(\frac{-\Delta +\Delta_1+\Delta_2}{2}\right)\Gamma \left(\frac{-\Delta +\Delta_1-\Delta_2+2}{2}\right) \Gamma \left(\frac{-\Delta -\Delta_1+\Delta_2+2}{2}\right) }{\Gamma \left(\frac{-\Delta +\Delta_1-\Delta_2-2 i+2}{2}\right)\Gamma \left(\frac{-\Delta -\Delta_1+\Delta_2-2 i+2}{2}\right)}\;.
\end{split}
\end{equation}
When $\Delta_1+\Delta_2-\Delta\in 2\mathbb{Z}_+$ the infinite sums truncate and reduce to (\ref{Wtrunc}).
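The coefficients $P_i$ and $Q_i$ can also be checked numerically, by resumming the two series in the cross ratio and comparing with the closed-form solution (\ref{Wvalue}). A sketch in Python/mpmath (the explicit ${}_2F_1$ form of the block $g_\Delta$ used below is an assumed input consistent with the Casimir equation):

```python
# Sketch: numerically resum W = sum_i P_i eta^i + sum_i Q_i eta^((D-D1-D2)/2 + i)
# and compare with the closed form C1*3F2 + C2*g_Delta. The 2F1 form of the
# block g_Delta is an assumption consistent with the Casimir equation.
from mpmath import mp, mpf, pi, gamma, rf, sin, factorial, hyp2f1, hyp3f2

mp.dps = 30
d, D1, D2, D = mpf(3), mpf('1.2'), mpf('1.3'), mpf('1.1')
eta = mpf('0.2')

b1, b2 = (D1 + D2 - D)/2 + 1, (D1 + D2 + D - d)/2 + 1
C1 = -1/((D1 + D2 - D)*(D1 + D2 + D - d))
C2 = (gamma((D + D1 - D2)/2)*gamma((D - D1 + D2)/2)*gamma((-D + D1 + D2)/2)
      * gamma((-d + D + D1 + D2)/2)/(4*gamma(D1)*gamma(D2)*gamma(-d/2 + D + 1)))
block = eta**((D - D1 - D2)/2)*hyp2f1((D + D1 - D2)/2, (D - D1 + D2)/2,
                                      D - d/2 + 1, eta)
closed = C1*hyp3f2(1, D1, D2, b1, b2, eta) + C2*block

def P(i):
    return (rf(D1, i)*rf(D2, i)/((D - D1 - D2)*(-d + D + D1 + D2)
            * rf((-D + D1 + D2 + 2)/2, i)*rf((-d + D + D1 + D2 + 2)/2, i)))

def Q(i):
    pre = ((-1)**i*gamma((d - 2*i - 2*D)/2)*sin(pi*(d - 2*D)/2)
           * gamma((-d + D + D1 + D2)/2)/(4*pi*factorial(i)*gamma(D1)*gamma(D2)))
    num = (gamma((D - D1 + D2)/2)*gamma((D + D1 - D2)/2)*gamma((-D + D1 + D2)/2)
           * gamma((-D + D1 - D2 + 2)/2)*gamma((-D - D1 + D2 + 2)/2))
    den = gamma((-D + D1 - D2 - 2*i + 2)/2)*gamma((-D - D1 + D2 - 2*i + 2)/2)
    return pre*num/den

series = (sum(P(i)*eta**i for i in range(60))
          + sum(Q(i)*eta**((D - D1 - D2)/2 + i) for i in range(60)))
assert abs(series - closed) < mpf('1e-20')
```

Note that $P_0=C_1$ and $Q_0=C_2$, as follows from the gamma function reflection formula.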
Finally, let us examine the behavior of the exchange Witten diagrams at $\eta\to\infty$. By using the equation of motion identities (\ref{eomWtoV}) and the behavior of the contact Witten diagram (\ref{VconRegge}), we find
\begin{equation}\label{WWbarRegge}
\mathcal{W}^{exchange}_\Delta(\eta)\to \eta^{-1}\;,\quad \bar{\mathcal{W}}^{exchange}_\Delta(\eta)\to \eta^{-1}\;,\quad \text{for } \eta\to\infty\;.
\end{equation}
\subsection{Conformal block decomposition of Witten diagrams}\label{Sec:cbdecomp}
\vspace{0.5cm}
\noindent{\bf The contact Witten diagram}
\vspace{0.1cm}
\noindent We now study the conformal block decomposition of two-point Witten diagrams. We start with a warmup case, namely the zero-derivative contact Witten diagram (\ref{defV0b}). It is straightforward to show that $\mathcal{V}^{con,0}_{\Delta_1,\Delta_2}(\eta)$ can be written as an infinite sum over double-trace conformal blocks
\begin{equation}\label{cfdecomVcon0}
\mathcal{V}^{con,0}_{\Delta_1,\Delta_2}(\eta)=\sum_{n}a_n g_{\Delta_n^{d.t.}}(\eta)=\sum_{n}a_n \bar{g}_{\Delta_n^{d.t.}}(\eta)
\end{equation}
where $\Delta_n^{d.t.}\equiv \Delta_1+\Delta_2+2n$, and
\begin{equation}\label{an}
a_n=\frac{(-1)^n \Gamma (n+\Delta_1) \Gamma (n+\Delta_2) \Gamma \left(-\frac{d}{2}+n+\Delta_1+\Delta_2\right)}{\Gamma (\Delta_1) \Gamma (\Delta_2) \Gamma (n+1) \Gamma \left(-\frac{d}{2}+\Delta_n^{d.t.}\right)}\;.
\end{equation}
The exchanged operators in each channel correspond to double-trace operators of the schematic form $:\mathcal{O}_1\square^n\mathcal{O}_2:$ with $n=0,1,\ldots$.
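As a numerical sketch of this decomposition, one can check that the double-trace sum indeed resums to the contact diagram, which in the cross-ratio normalization used in the equation of motion identity above equals $1$ (the ${}_2F_1$ form of the block below is an assumed input consistent with the Casimir equation):

```python
# Sketch: check numerically that sum_n a_n g_{Delta_n^{d.t.}}(eta) resums to
# the zero-derivative contact diagram, normalized here to V^{con,0}(eta) = 1.
# The 2F1 form of the block is an assumption consistent with the Casimir
# equation.
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 30
d, D1, D2 = mpf(3), mpf('1.2'), mpf('1.3')
eta = mpf('0.2')

def block(Delta):
    return (eta**((Delta - D1 - D2)/2)
            * hyp2f1((Delta + D1 - D2)/2, (Delta - D1 + D2)/2, Delta - d/2 + 1, eta))

def a(n):
    Dn = D1 + D2 + 2*n
    return ((-1)**n*gamma(n + D1)*gamma(n + D2)*gamma(-d/2 + n + D1 + D2)
            / (gamma(D1)*gamma(D2)*gamma(n + 1)*gamma(-d/2 + Dn)))

total = sum(a(n)*block(D1 + D2 + 2*n) for n in range(60))
assert abs(total - 1) < mpf('1e-20')   # matches V^{con,0}(eta) = 1
```

The same sum with $\bar{g}_{\Delta_n^{d.t.}}(\eta)=g_{\Delta_n^{d.t.}}(1-\eta)$ works as well, as required by the second equality in (\ref{cfdecomVcon0}).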
\vspace{0.5cm}
\noindent{\bf Exchange Witten diagram in the direct channel}
\vspace{0.1cm}
\noindent Let us now consider the conformal block decomposition of the exchange Witten diagram (\ref{defW}) in the direct channel. The Witten diagram can be written as a sum of a single-trace conformal block with dimension $\Delta$, dual to the exchanged field, and infinitely many double-trace conformal blocks
\begin{equation}\label{Wdecomdir}
\mathcal{W}^{exchange}_\Delta(\eta)=A\, g_\Delta(\eta)+\sum_{n}A_n g_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
The single-trace OPE coefficient can be extracted from the small $\eta$ expansion of (\ref{Wvalue}), and is associated to the term with the behavior $\eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}}$
\begin{equation}\label{WAcoe}
A=\frac{\Gamma \left(\frac{\Delta +\Delta_1-\Delta_2}{2}\right) \Gamma \left(\frac{\Delta -\Delta_1+\Delta_2}{2}\right) \Gamma \left(\frac{-\Delta +\Delta_1+\Delta_2}{2}\right) \Gamma \left(\frac{-d+\Delta +\Delta_1+\Delta_2}{2}\right)}{4 \Gamma (\Delta_1) \Gamma (\Delta_2) \Gamma \left(-\frac{d}{2}+\Delta +1\right)}\;.
\end{equation}
To extract the double-trace OPE coefficients, we use the equation of motion identity (\ref{eomWtoV}). Note that the $\mathbf{EOM}$ operator annihilates the single-trace conformal block $g_\Delta(\eta)$, while it multiplies the double-trace conformal blocks by constants
\begin{equation}
\begin{split}
\mathbf{EOM}[g_\Delta(\eta)]={}&0\;,\\
\mathbf{EOM}[g_{\Delta_n^{d.t.}}(\eta)]={}&(\Delta(\Delta-d)-\Delta_n^{d.t.}(\Delta_n^{d.t.}-d))g_{\Delta_n^{d.t.}}(\eta)\;.
\end{split}
\end{equation}
Using the conformal block decomposition (\ref{cfdecomVcon0}) of the contact Witten diagram, we find
\begin{equation} \label{WAnDDT}
A_n=\frac{a_n}{\Delta(\Delta-d)-\Delta_n^{d.t.}(\Delta_n^{d.t.}-d)}\;.
\end{equation}
Using these OPE coefficients, we can further expand the conformal blocks to obtain a small $\eta$ expansion for the exchange Witten diagram. This expansion can be compared with the expansion of (\ref{Wvalue}), which provides a consistency check of our results. By crossing symmetry, we also have
\begin{equation}
\bar{\mathcal{W}}^{exchange}_\Delta(\eta)=A\, \bar{g}_\Delta(\eta)+\sum_{n}A_n \bar{g}_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
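These decomposition coefficients can be checked numerically against the explicit result (\ref{Wvalue}). A sketch in Python/mpmath, using the assumed ${}_2F_1$ form of the blocks, and noting that the coefficient $C_2$ in (\ref{Wvalue}) coincides with the single-trace coefficient $A$:

```python
# Sketch: check the direct-channel decomposition W = A g_Delta + sum_n A_n g_n
# against the closed form C1*3F2 + C2*g_Delta; the coefficient C2 coincides
# with the single-trace coefficient A. The 2F1 form of the blocks is an
# assumed input consistent with the Casimir equation.
from mpmath import mp, mpf, gamma, hyp2f1, hyp3f2

mp.dps = 30
d, D1, D2, D = mpf(3), mpf('1.2'), mpf('1.3'), mpf('1.1')
eta = mpf('0.2')

def block(Delta):
    return (eta**((Delta - D1 - D2)/2)
            * hyp2f1((Delta + D1 - D2)/2, (Delta - D1 + D2)/2, Delta - d/2 + 1, eta))

def a(n):
    Dn = D1 + D2 + 2*n
    return ((-1)**n*gamma(n + D1)*gamma(n + D2)*gamma(-d/2 + n + D1 + D2)
            / (gamma(D1)*gamma(D2)*gamma(n + 1)*gamma(-d/2 + Dn)))

A = (gamma((D + D1 - D2)/2)*gamma((D - D1 + D2)/2)*gamma((-D + D1 + D2)/2)
     * gamma((-d + D + D1 + D2)/2)/(4*gamma(D1)*gamma(D2)*gamma(-d/2 + D + 1)))

def A_n(n):
    Dn = D1 + D2 + 2*n
    return a(n)/(D*(D - d) - Dn*(Dn - d))

b1, b2 = (D1 + D2 - D)/2 + 1, (D1 + D2 + D - d)/2 + 1
C1 = -1/((D1 + D2 - D)*(D1 + D2 + D - d))
W = C1*hyp3f2(1, D1, D2, b1, b2, eta) + A*block(D)   # closed form, with C2 = A
decomp = A*block(D) + sum(A_n(n)*block(D1 + D2 + 2*n) for n in range(60))
assert abs(W - decomp) < mpf('1e-20')
```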
\vspace{0.5cm}
\noindent{\bf Exchange Witten diagram in the crossed channel}
\vspace{0.1cm}
\noindent Finally we consider the conformal block decomposition of the exchange Witten diagram in the crossed channel. The decomposition consists of crossed channel double-trace conformal blocks only
\begin{equation}
\mathcal{W}^{exchange}_\Delta(\eta)=\sum_n B_n \bar{g}_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
To work out the decomposition coefficients, we will generalize the recursive techniques developed in \cite{Zhou:2018sfz}. We apply the equation of motion relation (\ref{eomWtoV}) to turn the exchange Witten diagram into a contact Witten diagram, which has already been decomposed into the crossed channel in (\ref{cfdecomVcon0}). On the other hand, the action of the $\mathbf{EOM}$ operator on $\bar{g}_{\Delta_n^{d.t.}}$ admits a simple three-term recursion relation
\begin{equation}\label{cfblockrecur}
\mathbf{EOM}[\bar{g}_{\Delta_n^{d.t.}}]=\mu_n \bar{g}_{\Delta_{n-1}^{d.t.}}+\nu_n \bar{g}_{\Delta_n^{d.t.}}+\rho_n \bar{g}_{\Delta_{n+1}^{d.t.}}
\end{equation}
where
\begin{eqnarray}
\nonumber \mu_n=&&(\Delta_1+\Delta_2-\Delta_n^{d.t.}) (d-\Delta_1-\Delta_2+\Delta_n^{d.t.}-2)\;,\\
\nonumber \nu_n=&&\frac{(d-2 \Delta_1-2 \Delta_2+2) (3 d-2 \Delta_1-2 \Delta_2-2) (d+2 \Delta_1-2 \Delta_2-2) (d-2 \Delta_1+2 \Delta_2-2)}{8 (d-2 \Delta_n^{d.t.}-2) (d-2 \Delta_n^{d.t.}+2)}\\
\nonumber &&-\left(\Delta_1-\frac{d}{2}\right)^2-\left(\Delta_2-\frac{d}{2}\right)^2+\frac{1}{2} \left(\Delta_n^{d.t.}-\frac{d}{2}\right)^2+d-\frac{1}{2}+\Delta (\Delta -d)\;,\\
\rho_n=&&\frac{\left((\Delta_1-\Delta_2)^2-{\Delta_n^{d.t.}}^2\right) (d-\Delta_1-\Delta_2-\Delta_n^{d.t.}) (2 d-\Delta_1-\Delta_2-\Delta_{n+1}^{d.t.})}{(d-2 (\Delta_n^{d.t.}+1))^2}\\
\nonumber &&\times\frac{(d+\Delta_1-\Delta_2-\Delta_{n+1}^{d.t.}) (d-\Delta_1+\Delta_2-\Delta_{n+1}^{d.t.})}{(d-2 \Delta_n^{d.t.})(d-2\Delta_{n+1}^{d.t.})}\;.
\end{eqnarray}
Note that for $n=0$, the coefficient $\mu_0$ vanishes. Therefore the action of the $\mathbf{EOM}$ operator preserves the double-trace spectrum. Using (\ref{cfblockrecur}), (\ref{Wvalue}) and (\ref{cfdecomVcon0}), we obtain the following inhomogeneous recursion relation for the decomposition coefficients
\begin{equation} \label{BnRecursion}
\rho_{n-1}B_{n-1}+\nu_n B_n+\mu_{n+1} B_{n+1}=a_n\;.
\end{equation}
For $n = 0$, the equation remains the same but without the $\rho_{n-1}B_{n-1}$ term. The coefficients $B_n$ with $n>0$ can then be recursively solved, after specifying the boundary condition $B_0$, which is extracted from the $\eta\to1$ limit of the exchange diagram (\ref{Wvalue})
\begin{equation}\label{B0BoundaryCondition}
\begin{split}
&B_0=\mathcal{W}^{exchange}_\Delta( \eta \rightarrow 1) \\
&= \frac{\Gamma\left(\frac{\Delta_1 + \Delta_2 - \Delta}{2} \right) \Gamma\left(\frac{\Delta_1 + \Delta_2 - d + \Delta}{2} \right) \Gamma\left( 1 - \frac{d}{2} \right) }{4 \Gamma (\Delta_{2})} \Bigg[\frac{\Gamma \left(\frac{\Delta + \Delta_1 - \Delta_2}{2}\right)\Gamma \left(\frac{\Delta - \Delta_1 + \Delta_2}{2}\right)}{\Gamma\left(\Delta_1 \right) \Gamma \left(\frac{ 2 - d + \Delta + \Delta_1 - \Delta_2}{2}\right) \Gamma \left(\frac{ 2 - d + \Delta - \Delta_1 + \Delta_2}{2}\right) } \\
& - \ {}_3 \tilde{F}_2 \left( \frac{2 + \Delta_1 - \Delta_2 - \Delta}{2}, \frac{2 + \Delta_1 - \Delta_2 - d + \Delta}{2}, 1 - \frac{d}{2}; 2 - \frac{d}{2}, 1 - \frac{d}{2} + \Delta_{1}; 1 \right) \Bigg]
\end{split}
\end{equation}
where ${}_3 \tilde{F}_2$ is the regularized hypergeometric function defined by
\begin{equation}
{}_3 \tilde{F}_2 (a_1, a_2, a_3; b_1, b_2; z) = \frac{{}_3 F_2 (a_1, a_2, a_3; b_1, b_2; z)}{\Gamma (b_1) \Gamma (b_2)}.
\end{equation}
By crossing symmetry, the conformal block decomposition for $\mathcal{W}^{exchange}_\Delta$ also gives
\begin{equation}
\bar{\mathcal{W}}^{exchange}_\Delta(\eta)=\sum_n B_n g_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
\section{Dimensional reduction}\label{Sec:5}
In \cite{Zhou:2020ptb} it was pointed out that a large class of Witten diagram recursion relations can be obtained from conformal block recursion relations, essentially by just replacing conformal blocks with the corresponding exchange Witten diagrams. This idea was demonstrated for four-point functions in generic CFTs, and two-point functions in CFTs with boundaries. Here we generalize similar statements to two-point functions in CFTs on $\mathbb{RP}^d$, and we will focus on the dimensional reduction relations.
\subsection{Reduction for conformal blocks}
Let us recall the relation between $\mathbb{RP}^d$ CFT conformal blocks and BCFT conformal blocks in the bulk channel (see (\ref{defg}) and (\ref{defgBbulk}))
\begin{equation}
g_\Delta(\eta)=(-1)^{\frac{\Delta-\Delta_1-\Delta_2}{2}}g_{B,\Delta}^{bulk}(-\eta)\;.
\end{equation}
The dimensional reduction formulae derived in \cite{Zhou:2020ptb} for $g_{B,\Delta}^{bulk}(\xi)$ therefore can be straightforwardly transformed into those for $g_\Delta(\eta)$. We have the following relation between conformal blocks in $d$ and $d-1$ dimensions
\begin{equation}\label{gdtodm1}
g^{(d)}_\Delta(\eta)=\sum_{j=0}^\infty \alpha^{(d)}_j(\Delta) g^{(d-1)}_{\Delta+2j}(\eta)
\end{equation}
where we have used the superscript to emphasize the dimensional dependence, and
\begin{equation}
\alpha^{(d)}_j(\Delta)=\frac{\Gamma \left(j+\frac{1}{2}\right) \left(\frac{1}{2} (\Delta +\Delta_1-\Delta_2)\right)_j \left(\frac{1}{2} (\Delta -\Delta_1+\Delta_2)\right)_j}{\sqrt{\pi } j! \left(\frac{1}{2} (-d+2 \Delta +2)\right)_j \left(j+\frac{1}{2} (-d+2 \Delta -1)+1\right)_j}\;.
\end{equation}
Moreover, a conformal block in $d-2$ dimensions can be expressed in terms of only two conformal blocks in $d$ dimensions
\begin{equation}\label{gdtodm2}
g^{(d-2)}_\Delta(\eta)=g^{(d)}_\Delta(\eta)+\beta(\Delta) g^{(d)}_{\Delta+2}(\eta)
\end{equation}
where
\begin{equation}
\beta(\Delta)=-\frac{(\Delta +\Delta_1-\Delta_2) (\Delta -\Delta_1+\Delta_2)}{(d-2 \Delta -4) (d-2 \Delta -2)}\;.
\end{equation}
Using $\bar{g}^{(d)}_\Delta(\eta)=g^{(d)}_\Delta(1-\eta)$, we also obtain similar dimensional reduction formulae for the image channel conformal blocks $\bar{g}^{(d)}_\Delta(\eta)$.
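Both reduction formulae can be checked numerically. A sketch in Python/mpmath, using the explicit ${}_2F_1$ form of the block (an assumed input consistent with the Casimir equation):

```python
# Sketch: numerical check of the dimensional reduction formulae for the
# blocks, using the (assumed) 2F1 form of g_Delta.
from mpmath import mp, mpf, pi, sqrt, gamma, rf, factorial, hyp2f1

mp.dps = 30
D1, D2 = mpf('1.2'), mpf('1.4')
eta = mpf('0.25')

def block(Delta, d):
    return (eta**((Delta - D1 - D2)/2)
            * hyp2f1((Delta + D1 - D2)/2, (Delta - D1 + D2)/2, Delta - d/2 + 1, eta))

def alpha(j, Delta, d):
    return (gamma(j + mpf('0.5'))*rf((Delta + D1 - D2)/2, j)*rf((Delta - D1 + D2)/2, j)
            / (sqrt(pi)*factorial(j)*rf((-d + 2*Delta + 2)/2, j)
               * rf(j + (-d + 2*Delta - 1)/2 + 1, j)))

def beta(Delta, d):
    return -((Delta + D1 - D2)*(Delta - D1 + D2)
             / ((d - 2*Delta - 4)*(d - 2*Delta - 2)))

d, D = mpf(4), mpf('2.1')
# g^{(d)} as an infinite sum of g^{(d-1)} blocks
rhs = sum(alpha(j, D, d)*block(D + 2*j, d - 1) for j in range(60))
assert abs(block(D, d) - rhs) < mpf('1e-20')
# g^{(d-2)} as just two g^{(d)} blocks
assert abs(block(D, d - 2) - block(D, d) - beta(D, d)*block(D + 2, d)) < mpf('1e-20')
```

Note that $\alpha^{(d)}_0(\Delta)=1$, so the leading OPE behavior matches on both sides.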
Let us comment that the recursion relation (\ref{gdtodm2}) is quite special, as it involves only finitely many terms. In fact, the inverse relation, which expresses a $d$ dimensional conformal block in terms of $d-2$ dimensional blocks, contains infinitely many conformal blocks. A relation similar to (\ref{gdtodm2}) was first derived in \cite{Kaviraj:2019tbg} for conformal blocks of four-point functions in CFTs without boundaries. That identity expresses a $d-2$ dimensional conformal block as a linear combination of five conformal blocks in $d$ dimensions. The existence of such a recursion was explained in terms of an $OSp(d+1,1|2)$ Parisi-Sourlas supersymmetry \cite{Parisi:1979ka} in a $d$ dimensional SCFT, which upon dimensional reduction gives rise to a non-supersymmetric CFT in $d-2$ dimensions. The relation (\ref{gdtodm2}) we wrote down here parallels the five-term relation in \cite{Kaviraj:2019tbg}, and therefore suggests that a similar story of Parisi-Sourlas supersymmetry and dimensional reduction can also be extended to CFTs on real projective space.
\subsection{Reduction for exchange Witten diagrams}
Similar to the observations in \cite{Zhou:2020ptb}, the recursion relations (\ref{gdtodm1}) and (\ref{gdtodm2}) can also be extended to imply relations for exchange Witten diagrams. Let us rescale the exchange Witten diagrams such that the single-trace conformal blocks appear with unit coefficients
\begin{equation}\label{defPolyakovb}
\mathcal{P}_\Delta(\eta)=\frac{1}{A}\mathcal{W}^{exchange}_\Delta(\eta)\;,\quad \bar{\mathcal{P}}_\Delta(\eta)=\frac{1}{A}\bar{\mathcal{W}}^{exchange}_\Delta(\eta)
\end{equation}
where $A$ is the coefficient of the single-trace conformal block (\ref{WAcoe}). We claim that we have the following dimensional reduction formulae
\begin{equation}\label{Pdtodm1}
\mathcal{P}^{(d)}_\Delta(\eta)=\sum_{j=0}^\infty \alpha^{(d)}_j(\Delta) \mathcal{P}^{(d-1)}_{\Delta+2j}(\eta)\;,
\end{equation}
\begin{equation}\label{Pdtodm2}
\mathcal{P}^{(d-2)}_\Delta(\eta)=\mathcal{P}^{(d)}_\Delta(\eta)+\beta(\Delta)\mathcal{P}^{(d)}_{\Delta+2}(\eta)\;.
\end{equation}
Similar relations also hold for the mirror channel exchange Witten diagram upon replacing $\mathcal{P}^{(d)}_\Delta(\eta)$ with $\bar{\mathcal{P}}^{(d)}_\Delta(\eta)$. In \cite{Zhou:2020ptb}, similar Witten diagram identities were proven by using simple Mellin space arguments. Unfortunately, the same arguments cannot be used here. We note that although a Mellin representation formalism can be developed for $\mathbb{RP}^d$ correlators similar to the BCFT case \cite{Rastelli:2017ecj}, it is not suitable for holographic correlators. To see this, we recall that contact Witten diagrams are polynomials in the cross ratio, and their Mellin transforms are ill-defined. Nevertheless, we can still prove (\ref{Pdtodm1}) and (\ref{Pdtodm2}) in position space by using the conformal block decomposition in the direct channel.
Let us denote the decomposition of the exchange Witten diagrams as
\begin{equation}
\mathcal{P}^{(d)}_\Delta(\eta)=g^{(d)}_\Delta(\eta)+\sum_{n=0}^\infty \mu^{(d)}_n(\Delta)g^{(d)}_{\Delta_{n}^{d.t.}}(\eta)
\end{equation}
where $\mu^{(d)}_n(\Delta)=\frac{A_n}{A}$ in relation to (\ref{Wdecomdir}). Substituting this decomposition into (\ref{Pdtodm2}), we find that the single-trace conformal blocks on both sides cancel thanks to (\ref{gdtodm2}). The double-trace conformal blocks must also match, which requires
\begin{equation}
\sum_{n=0}^\infty \mu^{(d-2)}_n(\Delta)g^{(d-2)}_{\Delta_{n}^{d.t.}}(\eta)=\sum_{n=0}^\infty \mu^{(d)}_n(\Delta)g^{(d)}_{\Delta_{n}^{d.t.}}(\eta)+\beta(\Delta) \sum_{n=0}^\infty \mu^{(d)}_n(\Delta+2)g^{(d)}_{\Delta_{n}^{d.t.}}(\eta)\;.
\end{equation}
We can use (\ref{gdtodm2}) again to turn $g^{(d-2)}_{\Delta_{n}^{d.t.}}(\eta)$ into $g^{(d)}_{\Delta_{n}^{d.t.}}(\eta)$, and arrive at the following condition
\begin{equation}
\mu^{(d-2)}_n(\Delta)+\beta(\Delta_{n-1})\mu^{(d-2)}_{n-1}(\Delta)=\mu^{(d)}_n(\Delta)+\beta(\Delta)\mu^{(d)}_n(\Delta+2)\;.
\end{equation}
This identity can be straightforwardly verified, using the explicit expressions for $\mu^{(d)}_n(\Delta)$ and $\beta(\Delta)$.
We can proceed similarly for the relation (\ref{Pdtodm1}). The single-trace operators again drop out because of the conformal block recursion relation. The double-trace coefficients need to be constrained, and the condition reads
\begin{equation}
\sum_{m+j=n} \mu^{(d)}_m(\Delta) \alpha^{(d)}_j(\Delta_m)=\sum_{k=0}^\infty \alpha^{(d)}_k(\Delta)\mu^{(d-1)}_n(\Delta+2k)\;.
\end{equation}
The infinite sum on the r.h.s. makes this identity difficult to verify analytically, but we have checked it numerically.
Another nontrivial crosscheck is to use (\ref{Pdtodm1}) twice to reproduce (\ref{Pdtodm2}). It is not difficult to find
\begin{equation}
\begin{split}
\mathcal{P}^{(d)}_\Delta(\eta)={}&\sum_{j,k}\alpha^{(d)}_j(\Delta)\alpha^{(d-1)}_k(\Delta+2j)\mathcal{P}^{(d-2)}_{\Delta+2j+2k}(\eta)\\
={}&\sum_{n=0}^\infty \frac{\left(\frac{\Delta +\Delta_1-\Delta_2}{2}\right)_n \left(\frac{\Delta -\Delta_1+\Delta_2}{2}\right)_n}{\left(-\frac{d}{2}+\Delta +1\right)_{2 n}}\mathcal{P}^{(d-2)}_{\Delta+2n}(\eta)\;.
\end{split}
\end{equation}
Using this identity in (\ref{Pdtodm2}), one can straightforwardly verify that the relation is valid.
\section{An analytic bootstrap approach for $\mathbb{RP}^d$ CFTs}\label{Sec:6}
In this section, we present an analytic bootstrap approach for studying CFTs on $\mathbb{RP}^d$. Part of our discussions forms a close analogy of the analysis for BCFT two-point functions in \cite{Mazac:2018biw} (see also the related works \cite{Mazac:2018ycv,Kaviraj:2018tfd,Mazac:2019shk,Caron-Huot:2020adz}). In Section \ref{Sec:basisfunctional} we argue that the double-trace conformal blocks in both the bulk channel and the mirror channels form a complete basis for two-point correlators. Their duals give a basis for analytic functionals, and we will explicitly construct their actions using exchange Witten diagrams. In Section \ref{Sec:dispersion} we give another construction of the functionals from the dispersion relation. We apply these analytic functionals in Section \ref{Sec:testfunctionals}, where we obtain the $\epsilon$-expansion result of $\mathbb{RP}^d$ $\varphi^4$ theory to $\epsilon^2$ order. We also perform an independent field theory check of our results in Section \ref{Sec:CFTEoM}.
\subsection{Space of functions, double-trace basis, and dual basis}\label{Sec:basisfunctional}
The study of Witten diagrams in Section \ref{Sec:4} motivates us to propose a natural basis for two-point functions. As we have seen, the exchange Witten diagrams admit the following decompositions in two channels
\begin{equation}\label{Wtwochannels}
\mathcal{W}^{exchange}_\Delta(\eta)=A\, g_\Delta(\eta)+\sum_{n}A_n g_{\Delta_n^{d.t.}}(\eta)=\sum_n B_n \bar{g}_{\Delta_n^{d.t.}}(\eta)\;,
\end{equation}
\begin{equation}\label{Wbartwochannels}
\bar{\mathcal{W}}^{exchange}_\Delta(\eta)=A\, \bar{g}_\Delta(\eta)+\sum_{n}A_n \bar{g}_{\Delta_n^{d.t.}}(\eta)=\sum_n B_n g_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
These identities show that any conformal blocks $g_\Delta(\eta)$, $\bar{g}_\Delta(\eta)$ with generic conformal dimension $\Delta$ can be expressed as linear combinations of double-trace conformal blocks $g_{\Delta_n^{d.t.}}(\eta)$, $\bar{g}_{\Delta_n^{d.t.}}(\eta)$ in both channels. This fact, loosely speaking, implies that $\{g_{\Delta_n^{d.t.}}(\eta),\bar{g}_{\Delta_n^{d.t.}}(\eta)\}$ form a new basis.
To phrase our statement more precisely, we need to define the space of functions $\mathcal{U}$ for the correlators $\mathcal{G}(\eta)$ which we are considering. We define $\mathcal{U}$ to be the space with the following ``Regge'' behavior
\begin{equation}\label{spaceU}
\mathcal{G}(\eta)\in \mathcal{U}\;,\;\text{if }\;\; |\mathcal{G}|\lesssim |\eta|^{-\epsilon}\;,\;\text{when }\;\; \eta\to\infty
\end{equation}
where $\epsilon$ is an infinitesimal positive number. For example, the mean field theory two-point function
\begin{equation}\label{Reggebehavior}
\langle\phi_\pm(x_1)\phi_\pm(x_2)\rangle=\frac{1}{(1+x_1^2)^{\Delta_\phi}(1+x_2^2)^{\Delta_\phi}}(\eta^{-\Delta_\phi}\pm(1-\eta)^{-\Delta_\phi})\;,
\end{equation}
belongs to this space when $\Delta_\phi>0$. The conformal blocks $g_\Delta(\eta)$, $\bar{g}_\Delta(\eta)$ are also in this space if the external dimensions satisfy $\min\{\Delta_1,\Delta_2\}>0$. On the other hand, the contact Witten diagrams are not in this space (see (\ref{VconRegge})). This avoids having relations among the basis vectors $\{g_{\Delta_n^{d.t.}}(\eta),\bar{g}_{\Delta_n^{d.t.}}(\eta)\}$, as a contact Witten diagram can be decomposed into only double-trace conformal blocks in either channel. Note that in the BCFT case, it was proven that two-point correlators in any unitary theory have a bounded Regge behavior when $\xi\to -1$ \cite{Mazac:2018biw}. The proof exploits the positivity of the decomposition coefficients in the boundary channel. By contrast, in the present case of $\mathbb{RP}^d$ CFTs, positivity is not {\it a priori} guaranteed in either channel even when the theory is unitary. The Regge behavior requirement (\ref{spaceU}) is therefore imposed by hand.
We claim that the double-trace conformal blocks $\{g_{\Delta_n^{d.t.}},\bar{g}_{\Delta_n^{d.t.}}\}$ form a basis for the space $\mathcal{U}$. A basis for the dual space $\mathcal{U}^*$ is given by the set of functionals $\{\omega_m,\bar{\omega}_m\}$, defined by dualizing the double-trace basis
\begin{equation}\label{orthonormal}
\begin{split}
{}&\omega_m(g_{\Delta_n^{d.t.}})=\delta_{mn}\;,\quad \omega_m(\bar{g}_{\Delta_n^{d.t.}})=0\;,\\
{}&\bar{\omega}_m(g_{\Delta_n^{d.t.}})=0\;,\quad \bar{\omega}_m(\bar{g}_{\Delta_n^{d.t.}})=\delta_{mn}\;.
\end{split}
\end{equation}
Although we do not have a general proof of this proposal (except for the $d=2$ case, which we prove in Section \ref{Sec:dispersion} using the dispersion relation), we will provide ample evidence which supports this conjecture.
The action of the basis functionals can be read off from the conformal block decompositions of exchange Witten diagrams. Acting on (\ref{Wtwochannels}) with $\omega_m$ and using the orthonormality relation (\ref{orthonormal}), we get
\begin{equation} \label{FunctionalActionBlockDirect}
\omega_m(g_\Delta)=-\frac{A_m}{A}\;.
\end{equation}
Acting with $\bar{\omega}_m$, we find
\begin{equation}
\bar{\omega}_m(g_\Delta)=\frac{B_m}{A}\;.
\end{equation}
The action of the functionals on the image channel conformal block $\bar{g}_\Delta(\eta)$ can be obtained from (\ref{Wbartwochannels}), and is related to the action on $g_\Delta(\eta)$ by crossing symmetry
\begin{equation}
\omega_m(\bar{g}_\Delta)=\bar{\omega}_m(g_\Delta)\;,\quad \bar{\omega}_m(\bar{g}_\Delta)=\omega_m(g_\Delta)\;.
\end{equation}
Let us consider the action of the functionals on a two-point function $\mathcal{G}\in \mathcal{U}$ with the following conformal block decomposition
\begin{equation}\label{calGexpandinggbar}
\mathcal{G}(\eta)=\sum_{k} \mu_{12k} g_{\Delta_{\mathcal{O}_k}}(\eta)=\pm \sum_{k} \mu_{12k} \bar{g}_{\Delta_{\mathcal{O}_k}}(\eta)\;.
\end{equation}
Applying the basis functionals allows us to extract the complete set of constraints in terms of the sum rules
\begin{equation}\label{sumrulea}
\sum_{k} \mu_{12k} \omega_n(g_{\Delta_{\mathcal{O}_k}})\mp \sum_{k} \mu_{12k} \omega_n(\bar{g}_{\Delta_{\mathcal{O}_k}})=0\;,
\end{equation}
\begin{equation}\label{sumruleb}
\sum_{k} \mu_{12k} \bar{\omega}_n(g_{\Delta_{\mathcal{O}_k}})\mp \sum_{k} \mu_{12k} \bar{\omega}_n(\bar{g}_{\Delta_{\mathcal{O}_k}})=0\;.
\end{equation}
The exchange Witten diagrams can also be viewed as the Polyakov-Regge blocks in the Polyakov-style bootstrap \cite{Polyakov:1974gs,Gopakumar:2016wkt,Gopakumar:2016cpb,Dey:2016mcs,Dey:2017fab,Gopakumar:2018xqi,Mazac:2018ycv,Kaviraj:2018tfd,Mazac:2018biw,Mazac:2019shk,Ferrero:2019luz,Penedones:2019tng,Sleight:2019ive,Caron-Huot:2020adz}, and can be used as a new decomposition basis. In terms of the rescaled exchange Witten diagrams (\ref{defPolyakovb}), the two-point function in (\ref{calGexpandinggbar}) can be rewritten as
\begin{equation}\label{Pblockexpand}
\mathcal{G}(\eta)=\sum_k \mu_{12k} (\mathcal{P}_{\Delta_{\mathcal{O}_k}}(\eta)\pm \bar{\mathcal{P}}_{\Delta_{\mathcal{O}_k}}(\eta))\;.
\end{equation}
To prove this relation, we can express (\ref{Wtwochannels}), (\ref{Wbartwochannels}) in terms of functional actions, and substitute in (\ref{Pblockexpand}) with
\begin{equation}\label{Ping}
\mathcal{P}_\Delta(\eta)=g_\Delta(\eta)-\sum_n \omega_n(g_\Delta) g_{\Delta_n^{d.t.}}(\eta)\;,
\end{equation}
\begin{equation}\label{Pbaring}
\bar{\mathcal{P}}_\Delta(\eta)=\sum_n \omega_n(\bar{g}_\Delta) g_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
We now have
\begin{equation}
\mathcal{G}(\eta)=\sum_{k} \mu_{12k} g_{\Delta_{\mathcal{O}_k}}(\eta)-\sum_{k} \mu_{12k}\bigg(\sum_n \omega_n(g_{\Delta_{\mathcal{O}_k}}) g_{\Delta_n^{d.t.}}(\eta)\mp\sum_n \omega_n(\bar{g}_{\Delta_{\mathcal{O}_k}}) g_{\Delta_n^{d.t.}}(\eta)\bigg)\;.
\end{equation}
Interchanging the action of the functionals and the sums, the second term becomes
\begin{equation}
\sum_n \omega_n\bigg(\sum_{k} \mu_{12k}(g_{\Delta_{\mathcal{O}_k}}(\eta)\mp\bar{g}_{\Delta_{\mathcal{O}_k}}(\eta))\bigg) g_{\Delta_n^{d.t.}}(\eta)\;,
\end{equation}
and vanishes because of the crossing equation (\ref{calGexpandinggbar}). We have therefore proven the equivalence between (\ref{Pblockexpand}) and the first identity in (\ref{calGexpandinggbar}). To prove the second identity, we just need to decompose the Polyakov-Regge blocks in the image channel
\begin{equation}\label{Pingbar}
\mathcal{P}_\Delta(\eta)=\sum_n \bar{\omega}_n(g_\Delta) \bar{g}_{\Delta_n^{d.t.}}(\eta)\;,
\end{equation}
\begin{equation}\label{Pbaringbar}
\bar{\mathcal{P}}_\Delta(\eta)=\bar{g}_\Delta(\eta)-\sum_n \bar{\omega}_n(\bar{g}_\Delta) \bar{g}_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
The alternative decomposition (\ref{Pblockexpand}) can also be taken as the starting point for the analytic bootstrap. In a generic interacting CFT, we do not expect operators with precise double-trace dimensions. However, if we use the conformal block decompositions (\ref{Ping}), (\ref{Pbaring}), (\ref{Pingbar}), (\ref{Pbaringbar}) of the Polyakov-Regge blocks, we encounter spurious double-trace operators in (\ref{Pblockexpand}). The requirement that these spurious operators should cancel gives sum rules for the OPE coefficients, which are identical to the conditions (\ref{sumrulea}), (\ref{sumruleb}).
\subsection{Functionals from dispersion relation}\label{Sec:dispersion}
As was pointed out in \cite{Mazac:2019shk}, analytic functionals and the dispersion relation for correlators \cite{Carmi:2019cub} (see also \cite{Bissi:2019kkx}) are closely related. In particular, the kernel of the dispersion relation can be viewed as the generating function for the kernels of the analytic functionals in their integral representation. Here we will demonstrate that a similar relation holds for $\mathbb{RP}^d$ CFTs by re-deriving the basis identified in Section \ref{Sec:basisfunctional} and constructing the dual functionals. For simplicity, we will only focus on $d=2$ and set $\Delta_1=\Delta_2=\Delta_\phi$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.65\textwidth]{Figcontour}
\caption{An illustration of the integral contours in the dispersion relation.}
\label{Fig:contours}
\end{center}
\end{figure}
We start with Cauchy's integral formula
\begin{equation}
\mathcal{G}(\eta)=\oint \frac{d \zeta}{2\pi i} \frac{\mathcal{G}(\zeta)}{\zeta-\eta}\;,
\end{equation}
with a contour encircling the point $\zeta=\eta$. We can deform the contour, and wrap it around the two branch cuts $[1,\infty)$, $(-\infty,0]$. We will denote these two deformed contours respectively as $C_1$ and $C_2$, as is illustrated in Figure \ref{Fig:contours}. Since we have assumed that the correlator has the Regge behavior (\ref{spaceU}), we can safely drop the arc at infinity. We therefore obtain
\begin{equation}
\mathcal{G}(\eta)=\mathcal{G}_1(\eta)+\mathcal{G}_2(\eta)
\end{equation}
where
\begin{eqnarray}
\mathcal{G}_1(\eta)&=&\int_{C_1}\frac{d \zeta}{2\pi i} \frac{\mathcal{G}(\zeta)}{\zeta-\eta}=\int\limits_1^\infty\frac{d \zeta}{2\pi i} \frac{{\rm Disc}_1[\mathcal{G}(\zeta)]}{\zeta-\eta} \;,\\
\mathcal{G}_2(\eta)&=&-\int_{C_2}\frac{d \zeta}{2\pi i} \frac{\mathcal{G}(\zeta)}{\zeta-\eta}=\int\limits_{-\infty}^0\frac{d \zeta}{2\pi i} \frac{{\rm Disc}_2[\mathcal{G}(\zeta)]}{\zeta-\eta}\;,
\end{eqnarray}
and
\begin{eqnarray}
{\rm Disc}_1[\mathcal{G}(\zeta)]&=&\mathcal{G}(\zeta+i0^+)-\mathcal{G}(\zeta-i0^+)\;,\quad \text{for}\;\zeta\in(1,\infty)\;,\\
{\rm Disc}_2[\mathcal{G}(\zeta)]&=&\mathcal{G}(\zeta+i0^+)-\mathcal{G}(\zeta-i0^+)\;,\quad \text{for}\;\zeta\in(-\infty,0)\;.
\end{eqnarray}
Let us define
\begin{equation}
k_h(\eta)=\eta^h{}_2F_1(h,h;2h,\eta)\;.
\end{equation}
These functions satisfy the following orthonormality condition
\begin{equation}\label{orthonormalk}
\oint_{|\eta|=\epsilon} \frac{d\eta}{2\pi i}\eta^{-2}k_{x+n}(\eta)k_{1-x-m}(\eta)=\delta_{mn}\;.
\end{equation}
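Since the residue only involves finitely many Taylor coefficients of the two hypergeometric series, this orthonormality can be checked order by order with exact rational arithmetic. The following Python sketch is purely illustrative (the function names are ours, and the generic value $x=3/10$ and the truncation range are arbitrary choices):

```python
from fractions import Fraction
from math import factorial

x = Fraction(3, 10)  # plays the role of Delta_phi; any generic value works

def poch(a, k):
    """Pochhammer symbol (a)_k for integer k >= 0."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

def coeff(h, k):
    """k-th Taylor coefficient of 2F1(h, h; 2h; eta)."""
    return poch(h, k) ** 2 / (poch(2 * h, k) * factorial(k))

def pairing(n, m):
    """Residue of eta^{-2} k_{x+n}(eta) k_{1-x-m}(eta) at eta = 0.

    The eta powers combine to eta^{n-m-1}, so the contour integral picks
    out the Taylor coefficient of order m - n of the product of the two
    hypergeometric series (and vanishes trivially for m < n).
    """
    if m < n:
        return Fraction(0)
    return sum(coeff(x + n, j) * coeff(1 - x - m, (m - n) - j)
               for j in range(m - n + 1))

for n in range(4):
    for m in range(4):
        assert pairing(n, m) == (1 if m == n else 0)
```

Exact equality (rather than a numerical tolerance) holds here because each pairing reduces to finitely many rational operations.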
Note that for $d=2$ and $\Delta_1=\Delta_2=\Delta_\phi$ the double-trace conformal blocks are just
\begin{equation}
g_{\Delta_n^{d.t.}}(\eta)=\eta^{-\Delta_\phi}k_{\Delta_\phi+n}(\eta)\;.
\end{equation}
We expect that the Cauchy kernel can be expanded in terms of the double-trace conformal blocks
\begin{equation}
\frac{1}{\zeta-\eta}=\sum_{n=0}^\infty H_n(\zeta)g_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
The coefficients $H_n(\zeta)$ can then be extracted using the orthonormality relation (\ref{orthonormalk})
\begin{equation}
H_n(\zeta)=\oint_{|\eta|=\epsilon}\frac{d\eta}{2\pi i}\frac{\eta^{\Delta_\phi-2}}{\zeta-\eta}k_{1-\Delta_\phi-n}(\eta)\;.
\end{equation}
The integral is simple to perform and gives
\begin{equation}
H_n(\zeta)=\frac{(-4)^{-n}(\Delta_\phi)_n(2\Delta_\phi-1)_n}{n!(\Delta_\phi-\frac{1}{2})_n}\zeta^{-1} {}_3F_2\left(1,-n,2\Delta_\phi+n-1;\Delta_\phi,\Delta_\phi;\zeta^{-1}\right)\;.
\end{equation}
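As a consistency check of this closed form (a cross-check of ours, not part of the derivation), one can extract $H_n(\zeta)$ directly from the contour integral: expanding the Cauchy kernel as a geometric series in $\eta$, the residue picks up finitely many Taylor coefficients of $k_{1-\Delta_\phi-n}$. The following Python sketch (with arbitrary rational sample values for $\Delta_\phi$ and $\zeta$) confirms that the residue computation reproduces the ${}_3F_2$ expression exactly for the first few $n$:

```python
from fractions import Fraction
from math import factorial

def poch(a, k):
    """Pochhammer symbol (a)_k for integer k >= 0."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

def t(h, q):
    """q-th Taylor coefficient of 2F1(h, h; 2h; eta)."""
    return poch(h, q) ** 2 / (poch(2 * h, q) * factorial(q))

def H_closed(n, zeta, dphi):
    """The closed-form kernel H_n(zeta) quoted above (dphi = Delta_phi)."""
    pre = Fraction(1, (-4) ** n) * poch(dphi, n) * poch(2 * dphi - 1, n) \
        / (factorial(n) * poch(dphi - Fraction(1, 2), n))
    z = 1 / zeta
    # 3F2(1, -n, 2*dphi + n - 1; dphi, dphi; 1/zeta): a degree-n polynomial
    f = sum(poch(1, k) * poch(-n, k) * poch(2 * dphi + n - 1, k)
            / (poch(dphi, k) ** 2 * factorial(k)) * z ** k for k in range(n + 1))
    return pre * f / zeta

def H_residue(n, zeta, dphi):
    """H_n(zeta) extracted directly from the contour integral: the residue
    pairs the geometric series of the Cauchy kernel with the Taylor
    coefficients of k_{1 - dphi - n}."""
    return sum(zeta ** (-p - 1) * t(1 - dphi - n, n - p) for p in range(n + 1))

dphi, zeta = Fraction(7, 10), Fraction(5, 2)
for n in range(6):
    assert H_closed(n, zeta, dphi) == H_residue(n, zeta, dphi)
```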
Therefore we have shown that $\mathcal{G}_1(\eta)$ can be expanded in terms of the bulk channel double-trace conformal blocks
\begin{equation}
\mathcal{G}_1(\eta)=\sum_{n=0}^\infty r_{n,1}\, g_{\Delta_n^{d.t.}}(\eta)
\end{equation}
where
\begin{equation}\label{defcn1}
r_{n,1}=\int_{C_1} \frac{d\zeta}{2\pi i} H_n(\zeta)\mathcal{G}(\zeta)\;.
\end{equation}
We can perform a similar analysis for $\mathcal{G}_2$. However, by crossing symmetry
\begin{equation}
\mathcal{G}_2(\eta)=\pm \mathcal{G}_1(1-\eta)
\end{equation}
where $\pm$ is the common parity of the two operators. It follows that $\mathcal{G}_2(\eta)$ admits an expansion into mirror channel double-trace conformal blocks
\begin{equation}
\mathcal{G}_2(\eta)=\sum_{n=0}^\infty r_{n,2}\, \bar{g}_{\Delta_n^{d.t.}}(\eta)
\end{equation}
where
\begin{equation}\label{defcn2}
r_{n,2}=-\int_{C_2}\frac{d\zeta}{2\pi i}H_n(1-\zeta)\mathcal{G}(\zeta)\;.
\end{equation}
In fact, the above decomposition is valid even when we do not assume crossing symmetry. To see this, we simply need to notice
\begin{equation}
\frac{1}{\zeta-\eta}=-\frac{1}{(1-\zeta)-(1-\eta)}=-\sum_{n=0}^\infty H_n(1-\zeta)\bar{g}_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
We have now established that the double-trace conformal blocks $g_{\Delta_n^{d.t.}}$ and $\bar{g}_{\Delta_n^{d.t.}}$ form a basis for functions satisfying the Regge behavior (\ref{spaceU}), which is captured by the decomposition
\begin{equation}
\mathcal{G}(\eta)=\sum_{n=0}^\infty r_{n,1}\, g_{\Delta_n^{d.t.}}(\eta)+\sum_{n=0}^\infty r_{n,2}\, \bar{g}_{\Delta_n^{d.t.}}(\eta)\;.
\end{equation}
The coefficients can then be interpreted as the actions of the dual functionals
\begin{equation} \label{FunctionalActionCorrelator}
r_{n,1}=\omega_n[\mathcal{G}(\eta)]\;,\quad r_{n,2}=\bar{\omega}_n[\mathcal{G}(\eta)]\;.
\end{equation}
Using their definitions (\ref{defcn1}), (\ref{defcn2}), we see that the functional kernels indeed follow from the dispersion relation kernel as we claimed earlier.
The discussion in this section is quite similar to the CFT$_1$ case discussed in Section 2 of \cite{Mazac:2019shk}. But we should notice that the basis in Section 2 of \cite{Mazac:2019shk} does not coincide with the expected basis from holography. Holography suggests a basis containing all conformal blocks with dimensions $2\Delta_\phi+2n$ and their derivatives with respect to the conformal dimension, while the basis from the Cauchy dispersion kernel consists of all conformal blocks with dimensions $2\Delta_\phi+n$ and no derivatives. This is different in the $\mathbb{RP}^d$ CFT case. We find that the Cauchy dispersion kernel gives exactly the same double-trace basis which we expect from holography.
\subsubsection*{Nonperturbative checks}
Let us now perform some quick checks of the equivalence between functional actions obtained from the dispersion relation, and the ones obtained from Witten diagrams. Consider the following crossing symmetric toy example of a correlation function
\begin{equation}
\mathcal{G}(\eta) = \frac{1}{\sqrt{\eta (1 - \eta)}}.
\end{equation}
We will set the external dimensions to be $\Delta_1 = \Delta_2 = 2$. The `correlator' can be decomposed into the $d = 2$ conformal blocks with a spectrum $\Delta=1+2n$, $n\in\mathbb{Z}_+$
\begin{equation}
\begin{split}
\mathcal{G}(\eta) &= \sum_{n = 1}^{\infty} \alpha_n g_{\Delta = 1 + 2 n}(\eta)\\
\alpha_n &= \frac{(-1)^n 2^{1-2 n} \left(n-\frac{1}{2}\right)! \left(2 (-1)^n \Phi \left(-1,1,n+\frac{1}{2}\right)-\pi \right)}{\sqrt{\pi } (n-1)!}
\end{split}
\end{equation}
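This decomposition can be verified numerically. For $d=2$ and $\Delta_1=\Delta_2=2$ the blocks take the form $g_\Delta(\eta)=\eta^{-2}k_{\Delta/2}(\eta)$ (consistent with the double-trace case quoted in the previous subsection), and the Lerch transcendent at these arguments reduces to $\Phi(-1,1,n+\tfrac12)=2(-1)^n\big(\tfrac{\pi}{4}-\sum_{j=0}^{n-1}\tfrac{(-1)^j}{2j+1}\big)$. The following Python sketch (illustrative only; the sample point $\eta=0.3$ and the truncation orders are arbitrary choices of ours) checks that the truncated sum reproduces the correlator:

```python
from math import pi, sqrt, gamma

def lerch_half(n):
    """Phi(-1, 1, n + 1/2) via Phi = 2(-1)^n (pi/4 - sum_{j<n} (-1)^j/(2j+1))."""
    s = sum((-1) ** j / (2 * j + 1) for j in range(n))
    return 2 * (-1) ** n * (pi / 4 - s)

def alpha(n):
    """OPE coefficient alpha_n quoted above; (n - 1/2)! = Gamma(n + 1/2)."""
    return ((-1) ** n * 2 ** (1 - 2 * n) * gamma(n + 0.5)
            * (2 * (-1) ** n * lerch_half(n) - pi)) / (sqrt(pi) * gamma(n))

def k_block(h, eta, terms=120):
    """k_h(eta) = eta^h 2F1(h, h; 2h; eta), truncated power series."""
    c, s = 1.0, 0.0
    for j in range(terms):
        s += c * eta ** j
        c *= (h + j) ** 2 / ((2 * h + j) * (j + 1))
    return eta ** h * s

eta = 0.3
# d = 2, Delta_1 = Delta_2 = 2: g_Delta(eta) = eta^{-2} k_{Delta/2}(eta),
# so the block of dimension Delta = 1 + 2n is eta^{-2} k_{n + 1/2}(eta).
rhs = sum(alpha(n) * eta ** (-2) * k_block(n + 0.5, eta) for n in range(1, 25))
lhs = 1.0 / sqrt(eta * (1.0 - eta))
assert abs(lhs - rhs) < 1e-10
```

The first coefficients come out as $\alpha_1=1$ and $\alpha_2=-1/4$, and the truncated sum agrees with $\mathcal{G}(\eta)$ to high accuracy.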
where $\Phi$ here is the Lerch transcendent function. Note that the mean field theory double-trace operators have conformal dimensions $\Delta=4+2n$, $n\in \mathbb{N}$. The OPE spectrum of the above correlator is therefore `maximally' different from the mean field spectrum, and is in this sense a `nonperturbative' check. Acting with the functionals on both sides using \eqref{FunctionalActionBlockDirect} and \eqref{FunctionalActionCorrelator} leads to the following constraint
\begin{equation}
\omega_{n} [\mathcal{G}(\eta)] = \int_1^{\infty} \frac{d \zeta}{2 \pi i} {\rm Disc}_1 [\mathcal{G}(\zeta) H_{n} (\zeta)] = -\sum_{m = 1}^{\infty} \alpha_m \left( \frac{A_n}{A} \right)_{\Delta= 1 + 2 m, \ \Delta_1 = \Delta_2 = 2}.
\end{equation}
We checked numerically that this constraint is true for $n = 0, 1$ and $2$.
A more physical example is given by the $2d$ Ising model, for which the exact solution is known. The $\sigma$ operator, which has conformal dimension $\Delta_{\sigma} = 1/8$, has a two-point function on $\mathbb{RP}^2$ given by (we assume that $\sigma$ has positive parity) \cite{Nakayama:2016cim}
\begin{equation}
\mathcal{G}_{\sigma}(\eta) = \frac{\sqrt{1-\sqrt{1-\eta }}+\sqrt{1-\sqrt{\eta }}}{(\eta (1-\eta ))^{1/8}} = \sum_{n = 0}^{\infty} \rho_{1,n} g_{\Delta = 4 n}(\eta) + \sum_{n = 1}^{\infty} \rho_{2,n} g_{\Delta = 1 + 4 n}(\eta).
\end{equation}
The coefficients $\rho_{1,n}$ and $\rho_{2,n}$ can be found recursively using the expansion of the correlator. Note that $\mathcal{G}_{\sigma}(\eta)$ approaches a constant as $\eta\to \infty$, and therefore does not belong to the space of functions we defined. In fact, a direct application of the functionals leads to a divergent sum. We instead perform the check on $\mathcal{G}_{\sigma}(\eta)/ \eta$, which has an improved Regge behavior. We would like the new correlator to have the same operator spectrum in the conformal block decomposition, and therefore we take the external dimensions to be $\Delta_1 = \Delta_2 = 9/8$ instead of $1/8$. The action of the functionals then requires
\begin{equation}
\begin{split}
\omega_{n} \left[\frac{\mathcal{G}_{\sigma}(\eta)}{\eta}\right] &= \int_1^{\infty} \frac{d \zeta}{2 \pi i} {\rm Disc}_1 \left[\frac{\mathcal{G}_{\sigma}(\zeta)}{\zeta} H_{n} (\zeta)\right] \\
&= -\sum_{m = 0}^{\infty} \rho_{1,m} \left( \frac{A_n}{A} \right)_{\Delta= 4 m, \ \Delta_1 = \Delta_2 = \frac{9}{8}} -\sum_{m = 1}^{\infty} \rho_{2,m} \left( \frac{A_n}{A} \right)_{\Delta= 4 m + 1, \ \Delta_1 = \Delta_2 = \frac{9}{8}} .
\end{split}
\end{equation}
We checked this relation numerically for $n = 0, 1$ and $2$ and it holds true.
\subsection{Perturbative applications of analytic functionals}\label{Sec:testfunctionals}
In this subsection, we apply our functionals to some perturbative examples. We start by checking the sum rules \eqref{sumrulea}, \eqref{sumruleb} on the mean field theory. We then consider small perturbations around the mean field theory and show how the data of the Wilson-Fisher model on $\mathbb{RP}^d$ can be obtained using these sum rules. Note that the mean field theory two-point function has the following conformal block decomposition
\begin{equation}\label{CBDGFF}
\mathcal{G}(\eta) = \frac{1}{\eta^{\Delta_{\phi}}} \pm \frac{1}{(1 - \eta)^{\Delta_{\phi}}} = g_{\Delta = 0}(\eta) + \sum_{n = 0}^{\infty} \mu_{\phi \phi n} g_{\Delta = \Delta_n^{d.t.}} (\eta)
\end{equation}
where $\Delta_n^{d.t.} = 2 \Delta_{\phi} + 2 n$ and the OPE coefficients are given by
\begin{equation}\label{GFFCoefficients}
\mu_{\phi \phi n}^{(0)} = \pm \frac{(\Delta_{\phi})_n (2 \Delta_{\phi} - \frac{d}{2} + 2 n)_{-n}}{(\Delta_{\phi} - \frac{d}{2} + n + 1)_{-n} n!}.
\end{equation}
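For $d=2$, where the blocks reduce to $g_{\Delta_n^{d.t.}}(\eta)=\eta^{-\Delta_\phi}k_{\Delta_\phi+n}(\eta)$ as in Section \ref{Sec:dispersion}, this decomposition can be checked directly. The following Python sketch is an illustration of ours (the sample values $\Delta_\phi=0.6$, $\eta=0.2$ and the positive-parity branch are arbitrary choices):

```python
from math import factorial, gamma

def poch(a, k):
    """Pochhammer (a)_k, allowing negative integer k via a Gamma ratio."""
    return gamma(a + k) / gamma(a)

def mu0(n, dphi, d=2):
    """Mean field OPE coefficient (plus-parity branch) quoted above."""
    return (poch(dphi, n) * poch(2 * dphi - d / 2 + 2 * n, -n)
            / (poch(dphi - d / 2 + n + 1, -n) * factorial(n)))

def k_block(h, eta, terms=120):
    """k_h(eta) = eta^h 2F1(h, h; 2h; eta), truncated power series."""
    c, s = 1.0, 0.0
    for j in range(terms):
        s += c * eta ** j
        c *= (h + j) ** 2 / ((2 * h + j) * (j + 1))
    return eta ** h * s

dphi, eta = 0.6, 0.2
# d = 2 blocks: g_Delta(eta) = eta^{-dphi} k_{Delta/2}(eta); the identity
# block (Delta = 0) is just eta^{-dphi}.
lhs = eta ** (-dphi) + (1 - eta) ** (-dphi)
rhs = eta ** (-dphi) + sum(mu0(n, dphi) * eta ** (-dphi) * k_block(dphi + n, eta)
                           for n in range(40))
assert abs(lhs - rhs) < 1e-10
```

For instance, $\mu^{(0)}_{\phi\phi 1}=\Delta_\phi/2$ in $d=2$, and the truncated block sum reproduces the mean field correlator to the stated accuracy.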
These OPE coefficients are just a simple modification of the BCFT case, which can be found for instance in \cite{Liendo:2012hy}. We use the superscript $0$ to indicate that we will soon perturb the mean field theory solution. The sum rules then tell us that the following must hold for all values of $n$
\begin{equation} \label{ConstraintGFF}
- \left( \frac{A_n}{A}\right)_{\Delta = 0} + \mu_{\phi \phi n}^{(0)} = \pm \left( \frac{B_n}{A} \right)_{\Delta = 0}.
\end{equation}
Now recall the expression of the coefficients $A_{n}$ and $A$ from \eqref{WAcoe} and \eqref{WAnDDT}. For $\Delta_1 = \Delta_2 = \Delta_{\phi}$ they take a simpler form
\begin{equation} \label{WCoefGFF}
\begin{split}
A_{n} &= \frac{(- 1)^{n } (\Delta_{\phi})_n^2 (2 \Delta_{\phi} - \frac{d}{2} + 2 n)_{-n}}{\left(\Delta(\Delta - d) - (2 \Delta_{\phi} + 2 n)(2 \Delta_{\phi} + 2 n - d) \right) n!} \\
A &= \frac{\Gamma\left( \frac{\Delta}{2} \right)^2 \Gamma\left( \Delta_{\phi} - \frac{\Delta}{2} \right) \Gamma\left( \Delta_{\phi} - \frac{(d - \Delta)}{2} \right)}{4 \Gamma(\Delta_{\phi})^2 \Gamma\left( \Delta + 1 - \frac{d}{2} \right)}.
\end{split}
\end{equation}
It is then clear that as we take $\Delta \rightarrow 0$ the expression for the coefficient $A$ diverges, while $A_n$ remains finite. So the first term in the constraint equation \eqref{ConstraintGFF}, $A_n/A$, does not contribute. The constraint then becomes
\begin{equation} \label{BFunctionalIdentity}
\left( \frac{B_n}{A} \right)_{\Delta = 0} = \pm \mu_{\phi \phi n}^{(0)}
\end{equation}
This constraint can be explicitly checked for $n = 0$ and $n = 1$ using the results for $B_n$ in \eqref{B0BoundaryCondition} and \eqref{BnRecursion}. For all other values of $n$, note that the recursion relation \eqref{BnRecursion} implies
\begin{equation}
\begin{split}
&\left( \frac{\rho_{n - 1} B_{n - 1}}{A} \right)_{\Delta = 0} + \left( \frac{\nu_n B_n}{A} \right)_{\Delta = 0} + \left( \frac{\mu_{n + 1} B_{n + 1}}{A} \right)_{\Delta = 0} = \left( \frac{a_n}{A} \right)_{\Delta = 0} = 0
\\
& \implies \left( \rho_{n - 1} \ \mu_{\phi \phi \, n-1}^{(0)} \ + \ \nu_n \ \mu_{\phi \phi n}^{(0)} \ + \ \mu_{n + 1} \ \mu_{\phi \phi \, n+1}^{(0)} \right)_{\Delta = 0} = 0
\end{split}
which can be easily checked to be true for all values of $n$. This completes our check of analytic functionals for mean field theory.
\subsubsection*{Wilson-Fisher model}
We now consider perturbations around the mean field solution such that the above OPE coefficients $\mu_{\phi \phi n}$ and dimensions receive small corrections. One such perturbation is the Wilson-Fisher fixed point in $d = 4 - \epsilon$ which is a perturbation of free field theory with $\Delta^{(0)}_{\phi} = \frac{d}{2} - 1$. It has a Lagrangian description, which in our normalization can be written as
\begin{equation}
S = \frac{\Gamma \left(\frac{d}{2} - 1 \right)}{4 \pi^{d/2}} \int d^d x \left( \frac{1}{2} (\partial_{\mu} \phi^I)^2 + \frac{\lambda}{4} (\phi^I \phi^I)^2 \right).
\end{equation}
But we will not need this Lagrangian description, and we will treat it as a perturbation of a mean field theory of $N$ free fields. We parametrize deviations from the mean field values as follows
\begin{equation}
\begin{split}
\mu_{\phi \phi n} &= \mu_{\phi \phi n}^{(0)} + \epsilon \ \mu_{\phi \phi n}^{(1)} + \epsilon^2 \ \mu_{\phi \phi n}^{(2)}, \hspace{1cm} \Delta_{\phi} = \frac{d}{2} - 1 + \epsilon^2 \gamma^{(2)}_{\phi} \\
\Delta & = \Delta_n^{d.t.} + \epsilon \gamma_n^{(1)} + \epsilon^2 \gamma_n^{(2)} = 2 \Delta_{\phi} + 2 n + \epsilon \gamma_n^{(1)} + \epsilon^2 \gamma_n^{(2)}
\end{split}
\end{equation}
where we used the well-known fact that in this model the first order correction to the anomalous dimension of $\phi$ vanishes. From \eqref{GFFCoefficients}, we see that for this free field value of $\Delta^{(0)}_{\phi}$, $\mu_{\phi \phi n}^{(0)} = \pm \delta_{n , 0}$, which truncates the functional equations. This leaves us with a finite number of terms on both sides. Using \eqref{orthonormal}, the sum rule \eqref{sumrulea} at order $\epsilon$ becomes
\begin{equation}
\mu_{\phi \phi n}^{(1)} \mp \left( \frac{A_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_{0}^{d.t.} \ + \ \epsilon \gamma_0^{(1)}} = \pm\left( \frac{B_n}{A}\right)^{O(\epsilon)}_{\Delta = 0} + \left( \frac{B_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_0^{d.t.} \ + \ \epsilon \gamma_0^{(1)}}
\end{equation}
where the superscript indicates that we pick out the order $\epsilon$ contribution. Expanding in $\epsilon$, we can check that the $(A_n/A)$ term does not contribute at order $\epsilon$. Also, using \eqref{BFunctionalIdentity} and \eqref{GFFCoefficients}, we can check that $(B_n/A)$ does not contribute for $\Delta = 0$. As for the other term on the right hand side, it only contributes at this order for $n = 0$ and $1$; using the recursion relation \eqref{BnRecursion}, we can check that all the other values of $n$ start contributing at order $\epsilon^2$. This gives the following results for the CFT data
\begin{equation}
\mu_{\phi \phi 0}^{(1)} = - \frac{\gamma^{(1)}_0}{2}, \ \ \ \mu_{\phi \phi 1}^{(1)} = \frac{\gamma^{(1)}_0}{4}, \ \ \ \mu_{\phi \phi n \ge 2}^{(1)} = 0
\end{equation}
This agrees with what was found in \cite{Hasegawa:2018yqg}. The fact that only two of the OPE coefficients are non-zero at this order implies that the functional equations also truncate to finitely many terms at the next order. At order $\epsilon^2$, the sum rule gives
\begin{equation}
\begin{split} \label{Epsilon2OPESumRule}
&\mu_{\phi \phi n}^{(2)} \mp \left( \frac{A_n}{A}\right)^{O(\epsilon^2)}_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} - \mu_{\phi \phi 0}^{(1)} \left( \frac{A_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)}} - \mu_{\phi \phi 1}^{(1)} \left( \frac{A_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}} = \\
&\pm \left( \frac{B_n}{A}\right)^{O(\epsilon^2)}_{\Delta = 0} + \left( \frac{B_n}{A}\right)^{O(\epsilon^2)}_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} \pm \mu_{\phi \phi 0}^{(1)} \left( \frac{B_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)}} \pm \mu_{\phi \phi 1}^{(1)} \left( \frac{B_n}{A}\right)^{O(\epsilon)}_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}}.
\end{split}
\end{equation}
For the identity block, using \eqref{BFunctionalIdentity} and \eqref{GFFCoefficients}, it is easy to check that
\begin{equation}
\left( \frac{B_n}{A}\right)^{O(\epsilon^2)}_{\Delta = 0} = \frac{\gamma_{\phi}^{(2)} (\Gamma(n))^2}{\Gamma(2 n)}, \ n \ge 1, \hspace{1 cm} \left( \frac{B_0}{A}\right)^{O(\epsilon^2)}_{\Delta = 0} = 0.
\end{equation}
For other values of $\Delta$, expanding the $A_n$-functionals is straightforward. To expand the $B_n$-functionals, which involve hypergeometric functions, we use the package $\mathtt{HypExp}$ \cite{Huber:2005yg}. We collect below the needed expansions for a few low-lying values of $n$
\begin{equation}
\begin{split}
\left( \frac{B_0}{A}\right)_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} &= - \frac{\gamma_0^{(1)}}{2} \epsilon - \frac{\left(\gamma_0^{(1)} + 2 \gamma_0^{(2)}\right)}{4} \epsilon^2; \ \left( \frac{B_0}{A}\right)_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}} = \frac{\gamma_1^{(1)} (\pi^2 - 6)}{6} \epsilon, \\
\left( \frac{B_1}{A}\right)_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} &= \frac{\gamma_0^{(1)}}{4} \epsilon + \frac{\left( \gamma_0^{(1)} (2\gamma_0^{(1)} - 1 ) + 4 \gamma_0^{(2)}\right)}{16} \epsilon^2; \ \left( \frac{B_1}{A}\right)_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}} = - \frac{\gamma_1^{(1)}}{2} \epsilon, \\
\left( \frac{B_2}{A}\right)_{\Delta = \Delta_{0}^{d.t.} + \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} &= - \frac{\gamma_0^{(1)} (\gamma_0^{(1)} - 1)}{24} \epsilon^2; \ \left( \frac{B_2}{A}\right)_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}} = \frac{\gamma_1^{(1)}}{12} \epsilon, \\
\left( \frac{B_3}{A}\right)_{\Delta = \Delta_{0}^{d.t.}+ \epsilon \gamma_0^{(1)} + \epsilon^2 \gamma_0^{(2)}} &= \frac{\gamma_0^{(1)} (\gamma_0^{(1)} - 1)}{480} \epsilon^2; \ \left( \frac{B_3}{A}\right)_{\Delta = \Delta_{1}^{d.t.} + \epsilon \gamma_1^{(1)}} = -\frac{\gamma_1^{(1)}}{90} \epsilon.
\end{split}
\end{equation}
Going to higher values of $n$ is also straightforward by using the recursion relation \eqref{BnRecursion}. Using these expansions of the coefficients, we obtain the results for $\mu_{\phi \phi n}^{(2)}$ with $n=0,1,2,3$
\begin{equation}
\begin{split}
\mu_{\phi \phi 0}^{(2)} &= \frac{\gamma_0^{(1)} (\gamma_1^{(1)} - 1)}{4} - \frac{\gamma_0^{(2)}}{2} \pm \frac{\gamma_0^{(1)} (\gamma_0^{(1)} - \gamma_1^{(1)})}{4}\\
\mu_{\phi \phi 1}^{(2)} &= \frac{\gamma_0^{(1)} (2 \gamma_0^{(1)} - 3 \gamma_1^{(1)} - 1)}{16} + \frac{\gamma_0^{(2)}}{4} \mp \frac{\gamma_0^{(1)} (3 \gamma_0^{(1)} + \gamma_1^{(1)} - 2)}{8} \pm \gamma_{\phi}^{(2)} \\
\mu_{\phi \phi 2}^{(2)} &= \pm \frac{ \gamma_0^{(1)}(\gamma_0^{(1)} - 1 + \gamma_1^{(1)})}{48} - \frac{ \gamma_0^{(1)}(\gamma_0^{(1)} - 1)}{24} - \frac{ \gamma_0^{(1)}\gamma_1^{(1)}}{36} \pm \frac{\gamma_{\phi}^{(2)}}{6} \\
\mu_{\phi \phi 3}^{(2)} &= \frac{\gamma_0^{(1)} (\gamma_0^{(1)} - 1)}{480} + \frac{\gamma_0^{(1)} \gamma_1^{(1)}}{320} \mp \frac{\gamma_0^{(1)}(\gamma_0^{(1)} - 1 + \gamma_1^{(1)})}{360} \pm \frac{\gamma_{\phi}^{(2)}}{30}.
\end{split}
\end{equation}
The bulk data of the $O(N)$ vector model at the Wilson-Fisher fixed point can be found in \cite{PhysRevD.7.2911, Rychkov:2015naa, Gliozzi:2017hni}
\begin{equation}
\gamma_0^{(1)} = \frac{N + 2}{N + 8}, \hspace{1cm} \gamma_1^{(1)} = 1, \hspace{1cm} \gamma_0^{(2)} = \frac{6 (N + 2) (N + 3)}{(N + 8)^3}, \hspace{1cm} \gamma_{\phi}^{(2)} = \frac{N + 2}{4 (N + 8)^2}.
\end{equation}
This then gives us the following $\mathbb{RP}^d$ OPE coefficients to order $\epsilon^2$
\begin{equation} \label{Epsilon2OPEResults}
\begin{split}
\mu_{\phi \phi 0} &= \pm 1 - \frac{N + 2}{2 (N + 8)} \epsilon - \frac{3 (N + 2) (2 N + 6 \pm (N + 8))}{2 (N + 8)^3} \epsilon^2 \\
\mu_{\phi \phi 1} &= \frac{N + 2}{4 (N + 8)} \epsilon - \frac{ (N + 2) }{4 (N + 8)^2} \left( \frac{76 + N(N + 10)}{2 (N + 8)} \pm (N - 2) \right) \epsilon^2 \\
\mu_{\phi \phi 2} &= \frac{N + 2}{48 (N + 8)^2} \left( \pm (N + 4) - \frac{4}{3} (N - 1) \right) \epsilon^2 \\
\mu_{\phi \phi 3} &= \frac{N + 2}{320 (N + 8)^2} \left( (N + 4) \mp \frac{8}{9} (N + 2) \pm \frac{8}{3} \right) \epsilon^2.
\end{split}
\end{equation}
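As a consistency check on the assembly, the $\epsilon^2$ coefficient of $\mu_{\phi \phi 0}$ above can be reproduced by plugging the bulk data into the expression for $\mu_{\phi \phi 0}^{(2)}$. A minimal sympy sketch (the helper names are ours, not from the text) verifying this symbolically for both parities:

```python
import sympy as sp

N = sp.symbols('N', positive=True)
g0_1, g1_1 = (N + 2)/(N + 8), sp.Integer(1)   # gamma_0^{(1)}, gamma_1^{(1)}
g0_2 = 6*(N + 2)*(N + 3)/(N + 8)**3           # gamma_0^{(2)}

checks = []
for s in (+1, -1):  # upper/lower sign
    # mu_{phi phi 0}^{(2)} assembled from the anomalous dimensions
    mu2_assembled = g0_1*(g1_1 - 1)/4 - g0_2/2 + s*g0_1*(g0_1 - g1_1)/4
    # epsilon^2 coefficient of mu_{phi phi 0} quoted in the final result
    mu2_quoted = -3*(N + 2)*(2*N + 6 + s*(N + 8))/(2*(N + 8)**3)
    checks.append(sp.simplify(mu2_assembled - mu2_quoted))
print(checks)  # [0, 0]
```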
We can obtain numerical estimates for the OPE coefficients in the $d = 3$ Ising model by plugging $\epsilon = 1$ and $N = 1$ into the above expressions. This gives in particular $\mu^{+}_{\phi \phi 0} = 0.728$ and $\mu^{+}_{\phi \phi 1} = 0.0478$, which can be compared with the results obtained by the bootstrap analysis \cite{Nakayama:2016cim}. The bootstrap result gives $\mu^{+}_{\phi \phi 0} \sim 0.70$ and $\mu^{+}_{\phi \phi 1} \sim 0.047$, in good agreement with our result. It is completely straightforward to obtain all the OPE coefficients at order $\epsilon^2$ using \eqref{Epsilon2OPESumRule}. On the other hand, since all the OPE coefficients are non-zero at this order, the sum rules at the next order will contain an infinite number of terms. The sum rules still put nontrivial constraints on the OPE coefficients, but it is not clear how these constraints can be solved analytically.
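The numerical values quoted above follow directly from \eqref{Epsilon2OPEResults}. A minimal evaluation in exact rational arithmetic (the helper function name is ours), for the $+$ parity:

```python
from fractions import Fraction as F

def ising_ope(N=1, eps=1):
    """Evaluate mu_{phi phi 0} and mu_{phi phi 1} at + parity (upper signs)."""
    mu0 = 1 - F(N + 2, 2*(N + 8))*eps \
            - F(3*(N + 2)*(2*N + 6 + (N + 8)), 2*(N + 8)**3)*eps**2
    mu1 = F(N + 2, 4*(N + 8))*eps \
            - F(N + 2, 4*(N + 8)**2)*(F(76 + N*(N + 10), 2*(N + 8)) + (N - 2))*eps**2
    return mu0, mu1

mu0, mu1 = ising_ope()
print(float(mu0), float(mu1))  # ~0.7284 and ~0.0478, cf. bootstrap 0.70 and 0.047
```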
\subsubsection*{Large N checks}
We now provide an independent consistency check of some of these results by considering the $O(N)$ vector model at large $N$ and comparing with the large $N$ limit of \eqref{Epsilon2OPEResults}.
Let us first note that to leading order in $\epsilon$, we can use the results from \eqref{Epsilon2OPEResults} to write the two-point function as
\begin{equation}
\langle \phi^I (x_1) \phi^J (x_2) \rangle = \frac{\delta^{IJ} \mathcal{G}(\eta)}{((1 + x_1^2)(1 + x_2^2))^{\Delta_{\phi}}}
\end{equation}
with
\begin{equation} \label{TwoPointEpsilon1}
\begin{split}
\mathcal{G}(\eta) &= g_{\Delta = 0} (\eta) + \mu_{\phi \phi 0} g_{\Delta = \Delta_0^{d.t.}} (\eta) + \mu_{\phi \phi 1} g_{\Delta = \Delta_1^{d.t.}} (\eta) \\
&= \frac{1}{\eta^{\frac{d}{2} - 1}} \left( 1 \pm \left(\frac{\eta}{1 - \eta} \right)^{\frac{d}{2} - 1} \pm \frac{\epsilon (N + 2)}{2 (N + 8)} \left(\frac{\eta}{1 - \eta} \right) \log \eta + \frac{\epsilon (N + 2)}{2(N + 8)} \log (1 - \eta) \right)
\end{split}
\end{equation}
\begin{comment}
So at leading order in $\epsilon$, this result is compatible with
\begin{equation} \label{TwoPointEpsilonE}
\mathcal{G}(\eta) = \frac{1}{\eta^{\Delta_{\phi}}} \bigg[ 1 \pm \bigg(\frac{\eta}{1 - \eta} \bigg)^{\frac{\Delta_{0}^{d.t.}}{2}} \bigg] (1 - \eta)^{\frac{\Delta_{0}^{d.t.}}{2} - \Delta_{\phi}}
\end{equation}
with
\begin{equation}
\Delta_{0}^{d.t.} = d - 2 + \epsilon \frac{N + 2}{N + 8}, \ \ \ \Delta_{\phi} = \frac{d}{2} - 1.
\end{equation}
This form for the two point function make manifest the expectation that in the limit $\eta \rightarrow 0$ and the limit $\eta \rightarrow 1$, the correlator should go like $\eta^{-\Delta_{\phi}}$ and $(1- \eta)^{- \Delta_{\phi}}$ respectively. The $\epsilon^2$ and higher corrections to the dimensions $\Delta_{0}^{d.t.}$ and $\Delta_{\phi}$ are also subleading in $N$ and go at least as $1/N$ in large $N$. This can be seen to hold upto $O(\epsilon^5)$ from results in \cite{Kleinert:2001ax}. We will use this fact and first find the large $N$ solution by assuming that the result \eqref{TwoPointEpsilonE} is exact as a function of $d$ at leading order in large $N$, and later justify that assumption.
\end{comment}
To develop a large $N$ expansion, we introduce the usual Hubbard-Stratonovich auxiliary field $\sigma$ and write down the action as
\begin{equation}
S = \frac{\Gamma \left(\frac{d}{2} - 1 \right)}{4 \pi^{d/2}} \int d^d x \left( \frac{1}{2} (\partial_{\mu} \phi^I)^2 + \frac{\sigma \phi^I \phi^I}{2} \right)
\end{equation}
where we omitted a $\sigma^2/ 4 \lambda$ term, which can be dropped at the fixed point. At leading order in large $N$, we get the following equation of motion for the $\phi$ correlator
\begin{equation}
(\nabla^2 - \langle \sigma (x_1)\rangle ) \langle \phi^I (x_1) \phi^J (x_2) \rangle = - \frac{4 \pi^{d/2}}{\Gamma \left( \frac{d}{2} - 1 \right)}\delta^{I J} \delta^d (x_1 - x_2).
\end{equation}
At large $N$ the scaling dimension of $\sigma$ is $2 + 1/N$, while the scaling dimension of $\phi$ is $d/2 - 1 + 1/N$. We can again express these correlators as
\begin{equation} \label{LargeNCorrParamet}
\langle \sigma (x_1)\rangle = \frac{a_{\sigma}}{(1 + x_1^2)^2}, \hspace{1 cm} \langle \phi^I (x_1) \phi^J (x_2) \rangle = \frac{\delta^{IJ} \mathcal{G}(\eta)}{((1 + x_1^2)(1 + x_2^2))^{\Delta_{\phi}}} = \frac{\delta^{IJ} G(\eta)}{((x_1 - x_2)^2)^{\Delta_{\phi}}}.
\end{equation}
Plugging this general form into the equation of motion, we get
\begin{equation}
4 \eta (1 - \eta) \frac{d^2 G (\eta)}{d \eta^2} + (8 (1 - \eta) -2 d) \frac{d G (\eta) }{d \eta} - a_{\sigma} G (\eta) = 0.
\end{equation}
This equation can be solved and the general solution is
\begin{equation} \label{LargeNTwoPointSol}
\begin{split}
G(\eta) =& C_1 \left( \frac{\eta}{1 - \eta} \right)^{\frac{d}{2} - 1} \ {}_2F_1 \left( \frac{1 + \sqrt{1 - a_{\sigma}}}{2}, \frac{1 - \sqrt{1 - a_{\sigma}}}{2}, 2 - \frac{d}{2}, 1 - \eta \right) \\
&+ C_2 \ {}_2F_1 \left( \frac{1 + \sqrt{1 - a_{\sigma}}}{2}, \frac{1 - \sqrt{1 - a_{\sigma}}}{2}, \frac{d}{2}, 1 - \eta \right).
\end{split}
\end{equation}
Recall that the crossing equation \eqref{crossingeqn} requires
\begin{equation}
\frac{G(\eta)}{\eta^{\frac{d}{2} - 1}} = \pm \frac{G(1 - \eta)}{(1 - \eta)^{\frac{d}{2} - 1}}
\end{equation}
which implies that the coefficients must satisfy
\begin{equation} \label{LargeNTwoPointCoeff}
\frac{C_2}{C_1} = \pm \frac{\Gamma \left( \frac{d - 1 + \sqrt{1 - a_{\sigma}}}{2} \right)\Gamma \left( \frac{d - 1 - \sqrt{1 - a_{\sigma}}}{2} \right) \Gamma\left(2 - \frac{d}{2} \right)}{\pi \Gamma\left( \frac{d}{2} \right)} \left( - \sin \left( \frac{\pi d}{2}\right) \mp \cos \left( \frac{\pi \sqrt{1 - a_{\sigma}}}{2} \right)\right).
\end{equation}
The overall constant can then be fixed by demanding that the leading term in the small $\eta$ expansion of $G$ is just 1. We can expand this solution in small $\eta$ as
\begin{equation}
\begin{split}
&G(\eta) = C_1 \left( \pm 1 \ \mp \ \frac{a_{\sigma} }{2 (d - 4) } \ \eta + O(\eta^2) \right) \\
& + C_1 \eta^{\frac{d}{2} - 1} \left( \frac{\pi \Gamma \left( 2 - \frac{d}{2} \right)}{ \left(\sin \left( \frac{ \pi d}{2} \right) \mp \cos \left( \frac{\pi \sqrt{1 - a_{\sigma}}}{2} \right) \right)
\Gamma \left( \frac{d}{2} \right) \Gamma\left(
\frac{3 - d - \sqrt{1 - a_{\sigma}}}{2} \right) \Gamma \left( \frac{3 - d + \sqrt{1 - a_{\sigma}}}{2} \right)} + O(\eta) \right).
\end{split}
\end{equation}
The first term in the first line is the contribution from the identity operator, while the second term of order $\eta$ is from the $\sigma$ operator of dimension $2$. The term in the second line represents the $\phi^2$ operator of dimension $d - 2$. In the large $N$ theory, we expect the $\phi^2$ operator of dimension $d - 2$ in the free theory to be replaced by the $\sigma$ operator of dimension $2$. This then requires us to set the term in the second line of the above equation to zero, which implies the following possible values of $a_{\sigma}$
\begin{equation}
a^{+}_{\sigma} = - (d - 2) (d - 4), \ - (d - 6)(d - 8), \ldots \hspace{1cm} a^{-}_{\sigma} = - (d - 4)(d - 6), \ - (d - 8)(d - 10), \ldots
\end{equation}
Note that for these values of $a_{\sigma}$, the coefficient $C_{2}$ also vanishes. Now we expect the large $N$ theory to match with the free theory in $d = 4$. This means that we must choose the value of $a_{\sigma}$ such that $a_{\sigma} = 0$ as $d \rightarrow 4$. This then picks out the solution for both $+$ and $-$ parity \footnote{In appendix \ref{AppendixFreeEnergy}, we provide another way to check this value of $a_{\sigma}$, where it occurs as a large $N$ saddle point of the free energy.}
\begin{equation} \label{TwoPointCriticalO(N)+-}
\begin{split}
&a^+_{\sigma} = - (d - 2) (d - 4) \implies G^+ (\eta) = \frac{1}{(1 - \eta)^{\frac{d}{2} - 1}}\\
&a^-_{\sigma} = - (d - 4) (d - 6) \implies G^-(\eta) = \frac{1 - 2 \eta}{(1 - \eta)^{\frac{d}{2} - 1}}.
\end{split}
\end{equation}
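As a sanity check, one can verify directly that these $G^{\pm}$ solve the large $N$ equation of motion with the quoted values of $a_{\sigma}$. A small sympy sketch (our own helper names; we sample a few dimensions to sidestep symbolic-exponent simplification):

```python
import sympy as sp

eta, d = sp.symbols('eta d')

def residuals(G, a_sigma):
    # LHS of the large-N equation: 4 eta (1-eta) G'' + (8(1-eta) - 2d) G' - a_sigma G
    expr = (4*eta*(1 - eta)*sp.diff(G, eta, 2)
            + (8*(1 - eta) - 2*d)*sp.diff(G, eta) - a_sigma*G)
    return [sp.simplify(expr.subs(d, dv)) for dv in (3, sp.Rational(7, 2), 5)]

G_plus = (1 - eta)**(1 - d/2)                # a_sigma^+ = -(d-2)(d-4)
G_minus = (1 - 2*eta)*(1 - eta)**(1 - d/2)   # a_sigma^- = -(d-4)(d-6)

res = residuals(G_plus, -(d - 2)*(d - 4)) + residuals(G_minus, -(d - 4)*(d - 6))
print(res)  # six zeros
```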
It can be checked that these results in $d = 4- \epsilon$ agree with the large $N$ limit of the $\epsilon$ expansion solution \eqref{TwoPointEpsilon1}. These two-point functions can be decomposed into conformal blocks of dimension $2 n + 2$ as follows
\begin{equation} \label{CBDLargeN-}
\begin{split}
\mathcal{G}^+(\eta) &= \frac{1}{(\eta(1 - \eta))^{\frac{d}{2} - 1}} = g_{\Delta = 0}(\eta) + \sum_{n = 0}^{\infty} \lambda^+_n g_{\Delta = 2 n + 2} (\eta) \\
\lambda^+_n &= \frac{\Gamma \left(\frac{d}{2}\right) \, _2F_1\left(-n-1,-n;\frac{1}{2} (d-4 n-2);1\right)}{\Gamma (n+2) \Gamma \left(\frac{d}{2}-n-1\right)}
\end{split}
\end{equation}
for the $+$ case and
\begin{equation} \label{CBDLargeN+}
\begin{split}
\mathcal{G}^-(\eta) &= \frac{1 - 2 \eta}{(\eta(1 - \eta))^{\frac{d}{2} - 1}} = g_{\Delta = 0}(\eta) + \sum_{n = 0}^{\infty} \lambda^-_n g_{\Delta = 2 n + 2} (\eta) \\
\lambda^-_n &= -\frac{ (-1)^n (d^2 - 4 d (n + 2) + 8 (1 + n)^2 + 4) \Gamma(1 - \frac{d}{2} + n) \Gamma(2 - \frac{d}{2} + n)^2}{4 \Gamma(2 - \frac{d}{2})^2 \Gamma(n + 2) \Gamma(2 - \frac{d}{2} + 2 n)}
\end{split}
\end{equation}
for the $-$ case. These coefficients can be found in a manner similar to the BCFT case \cite{Liendo:2012hy, Giombi:2020rmc}. We want to emphasize here that, unlike in mean field theory, the conformal blocks appearing here have dimensions $2 n + 2$. These OPE coefficients can be expanded in $\epsilon$ in $d = 4 - \epsilon$, and we find a precise match with the large $N$ limit of \eqref{Epsilon2OPEResults}.
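The decomposition of $\mathcal{G}^+$ above can also be checked numerically. Assuming the block normalization $g_{\Delta}(\eta) = \eta^{\Delta/2 - \Delta_{\phi}}\, {}_2F_1\left(\tfrac{\Delta}{2}, \tfrac{\Delta}{2}; \Delta - \tfrac{d}{2} + 1; \eta\right)$ with $\Delta_{\phi} = d/2 - 1$ (the hypergeometric structure that also appears in Section \ref{Sec:7}), a truncated sum over $\lambda^+_n$ reproduces $\mathcal{G}^+$ to high precision, e.g. in $d = 3$:

```python
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 30
d = mpf(3)
dphi = d/2 - 1

def block(Delta, x):
    # assumed RP^d conformal block normalization (leading term x^{Delta/2 - dphi})
    return x**(Delta/2 - dphi)*hyp2f1(Delta/2, Delta/2, Delta - d/2 + 1, x)

def lam_plus(n):
    # OPE coefficient of the Delta = 2n + 2 block in the '+' correlator
    return (gamma(d/2)*hyp2f1(-n - 1, -n, (d - 4*n - 2)/2, 1)
            / (gamma(n + 2)*gamma(d/2 - n - 1)))

x = mpf('0.3')
partial = block(0, x) + sum(lam_plus(n)*block(2*n + 2, x) for n in range(25))
exact = (x*(1 - x))**(-dphi)
print(abs(partial - exact))  # small (set by the truncation of the sum)
```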
\subsection{Using CFT equations of motion}\label{Sec:CFTEoM}
When a Lagrangian description of the CFT is available, a complementary approach to the analytic functionals is to use the CFT equations of motion.\footnote{Note that this is different from what we did in Section \ref{Sec:3}, where we used the equations of motion in the bulk $AdS_{d+1}/\mathbb{Z}_2$.} The essential idea was described for the CFT in flat space in \cite{Rychkov:2015naa} and extended to the case of BCFT in \cite{Giombi:2020rmc}. Here we will use this method to fix the two-point function of the field $\phi$ in the Wilson-Fisher model on $\mathbb{RP}^d$. This case is very similar to the BCFT case. Since the two-point function on the sphere quotient $S^d/\mathbb{Z}_2$ depends only on the cross-ratio $\eta$ and is proportional to $\mathcal{G}(\eta)$, in this subsection it will be more convenient to `undo' the Weyl transformation (\ref{MetricWeyl}) and work on the sphere quotient.
The action for the $\phi^4$ Wilson-Fisher model, including the conformal coupling term, can be written as
\begin{equation} \label{ActionSpherePhi4}
S = \frac{\Gamma \left(\frac{d}{2} - 1 \right)}{4 \pi^{d/2}} \int d^d x \left( \frac{1}{2} (\partial_{\mu} \phi^I)^2 + \frac{d (d - 2)}{4} \phi^I \phi^I + \frac{\lambda}{4} (\phi^I \phi^I)^2 \right).
\end{equation}
The two-point function on the sphere is
\begin{equation}
\langle \phi^I(x_1) \phi^J(x_2) \rangle = \frac{\delta^{IJ} \mathcal{G} (\eta)}{4}.
\end{equation}
Let us start with the free theory with $\lambda = 0$. The field $\phi^I$ satisfies $(\nabla^2 - d (d - 2)/4) \phi^I = 0$, which implies that
\begin{equation}
\begin{split}
&\left( \frac{1}{\sqrt{g}} \partial_{\mu} (g^{\mu \nu} \sqrt{g} \partial_{\nu}) - \frac{d (d - 2)}{4} \right) \mathcal{G}(\eta) = 0\\
&\eta (1 - \eta) \frac{d^2 \mathcal{G} }{d \eta^2} + d \left( \frac{1}{2} - \eta \right) \frac{d\mathcal{G} }{d \eta} - \frac{d (d - 2)}{4} \mathcal{G} = D^{(2)} \mathcal{G}(\eta) = 0.
\end{split}
\end{equation}
This equation can be solved, and the general solution is
\begin{equation} \label{TwoPointSolFree}
\mathcal{G}(\eta) = b_1 \left( \frac{1}{\eta^{\frac{d}{2} - 1}} + \frac{1}{(1 - \eta)^{\frac{d}{2} - 1}}\right) + b_2 \left( \frac{1}{\eta^{\frac{d}{2} - 1}} - \frac{1}{(1 - \eta)^{\frac{d}{2} - 1}}\right).
\end{equation}
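Both structures in \eqref{TwoPointSolFree} are indeed annihilated by $D^{(2)}$, which can be confirmed with a few lines of sympy (our own helper names; sampling some dimensions):

```python
import sympy as sp

eta, d = sp.symbols('eta d')

def D2(G):
    # D^{(2)} = eta(1-eta) d^2/deta^2 + d(1/2 - eta) d/deta - d(d-2)/4
    return (eta*(1 - eta)*sp.diff(G, eta, 2)
            + d*(sp.Rational(1, 2) - eta)*sp.diff(G, eta) - d*(d - 2)/4*G)

sols = [eta**(1 - d/2) + (1 - eta)**(1 - d/2),   # b_1 structure
        eta**(1 - d/2) - (1 - eta)**(1 - d/2)]   # b_2 structure
res = [sp.simplify(D2(G).subs(d, dv)) for G in sols for dv in (3, 4, 6)]
print(res)  # six zeros
```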
The constants are just fixed by the normalization, and we pick $b_1 = 1, b_2 = 0$ for $+$ parity, and vice versa for the $-$ parity. When we include interactions, the equation of motion gets modified to $(\nabla^2 - d (d - 2)/4) \phi^I(x) = \lambda \phi^I \phi^K \phi^K (x)$. This implies
\begin{equation}
D^{(2)} \mathcal{G}(\eta) = \frac{\lambda_* (N + 2) a_{\phi^2}}{4} \mathcal{G}(\eta) + O(\lambda_*^2)
\end{equation}
to leading order in $\lambda$. We can solve this perturbatively in $d = 4 - \epsilon$, where this model has a non-trivial fixed point. $\lambda_*$ is the fixed point value of the coupling and is equal to $2 \epsilon/ (N + 8)$ in this normalization, while $a_{\phi^2}/4$ is the one-point function of $\phi^2$ on the sphere. Since there is a factor of $\lambda_*$ on the right hand side, we can plug in the correlators in $d = 4$, and it will give us the two-point function on the left hand side, correct to order $\epsilon$. We can expand the differential operator and the correlator as follows
\begin{equation}
\begin{split}
\mathcal{G}(\eta) = \mathcal{G}_0(\eta) + \epsilon \mathcal{G}_1(\eta) + \epsilon^2 \mathcal{G}_2(\eta) + O(\epsilon^3) \\
D^{(2)} = D_0^{(2)} + \epsilon D_1^{(2)} + O(\epsilon^2)
\end{split}
\end{equation}
where $\mathcal{G}_0(\eta) $ is just given by \eqref{TwoPointSolFree} with $d = 4$. The equation of motion at first order in $\epsilon$ is
\begin{equation}
D_0^{(2)} \mathcal{G}_1(\eta) = \pm \frac{N + 2}{2 (N + 8)} \mathcal{G}_0(\eta) - D_1^{(2)} \mathcal{G}_0(\eta).
\end{equation}
This equation can also be solved to give
\begin{equation} \label{TwoPointSolEps}
\mathcal{G}_1(\eta) = \frac{c_1}{\eta} + \frac{c_2}{1 - \eta} + \frac{\log \eta}{2 \eta} \pm \frac{\log( 1 - \eta)}{2 ( 1 - \eta)} + \frac{N + 2}{2 (N + 8)} \left( \frac{\log (1 - \eta)}{ \eta} \pm \frac{\log \eta}{1 - \eta} \right).
\end{equation}
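With the integration constants set to zero, as dictated by normalization and crossing, one can verify that \eqref{TwoPointSolEps} indeed solves the order-$\epsilon$ equation. A sympy sketch for the $+$ parity (upper signs), writing $\kappa = (N+2)/(2(N+8))$ and expanding $D^{(2)} = D_0^{(2)} + \epsilon D_1^{(2)} + \ldots$ in $d = 4 - \epsilon$:

```python
import sympy as sp

eta, kappa = sp.symbols('eta kappa')  # kappa = (N+2)/(2(N+8))

def D0(G):
    # D^{(2)} at d = 4
    return eta*(1 - eta)*sp.diff(G, eta, 2) + (2 - 4*eta)*sp.diff(G, eta) - 2*G

def D1(G):
    # order-epsilon piece of D^{(2)} when d = 4 - epsilon
    return -(sp.Rational(1, 2) - eta)*sp.diff(G, eta) + sp.Rational(3, 2)*G

G0 = 1/eta + 1/(1 - eta)  # free '+' correlator at d = 4
G1 = (sp.log(eta)/(2*eta) + sp.log(1 - eta)/(2*(1 - eta))
      + kappa*(sp.log(1 - eta)/eta + sp.log(eta)/(1 - eta)))

residual = sp.simplify(D0(G1) - (kappa*G0 - D1(G0)))
print(residual)  # 0
```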
If we fix the normalization such that $\mathcal{G}(\eta) = \eta^{-1}$ as $\eta \rightarrow 0$, this fixes $c_1 = 0$. Also, the crossing symmetry \eqref{crossingeqn} requires $\mathcal{G}$ to be either symmetric or antisymmetric under $\eta \rightarrow 1 - \eta$, which sets $c_2 = 0$. This order $\epsilon$ correlator then agrees exactly with the result using functionals \eqref{TwoPointEpsilon1}. Now to go to the next order, note that in the two-point function $\phi^I (x_1) \phi^J(x_2)$, we can also apply the equation of motion to the other $\phi$. This gives the following fourth-order differential equation
\begin{equation}
D^{(4)}\mathcal{G}(\eta) =D^{(2)}( D^{(2)} \mathcal{G}(\eta)) = \frac{\lambda_*^2 (N + 2)}{16} (a_{\phi^2}^2 (N + 2)\mathcal{G}(\eta) + 2 \mathcal{G}(\eta)^3 ).
\end{equation}
We can again solve it perturbatively in $\epsilon$ by expanding $D^{(4)} = D_0^{(4)} + \epsilon D_1^{(4)} + \epsilon^2 D_2^{(4)} + O(\epsilon^3)$. The differential equation at $O(\epsilon^2)$ then becomes
\begin{equation}
D_0^{(4)} \mathcal{G}_2(\eta) = \frac{(N + 2)}{4 (N + 8)^2} \left((N + 2)\mathcal{G}_0(\eta) + 2 \mathcal{G}_0(\eta)^3 \right) - D_1^{(4)} \mathcal{G}_1(\eta) - D_2^{(4)} \mathcal{G}_0(\eta).
\end{equation}
The general solution of this equation is
\begin{equation} \label{TwoPointSolEps2}
\begin{split}
\mathcal{G}_2(\eta) &= \frac{d_1}{\eta} + \frac{d_2}{1 - \eta} + \frac{ d_3 \log \eta}{1 - \eta} + \frac{d_4 \log( 1 - \eta)}{\eta} \\
& - \frac{N + 2}{4(N + 8)^2} \left(\frac{ \log \eta}{\eta} \pm \frac{ \log( 1- \eta)}{1 - \eta} \right) + \frac{ \log^2 \eta}{8 \eta} \pm \frac{ \log^2( 1- \eta)}{8 (1 - \eta)} \\
& + \frac{(N + 2)^2}{8(N + 8)^2} \left(\frac{ \log^2 (1 - \eta)}{\eta} \pm \frac{ \log^2( \eta)}{1 - \eta} \right) + \frac{N + 2}{ 4 (N + 8)} \left(\frac{1}{\eta} \pm \frac{1}{1 - \eta} \right) \log \eta \log (1 - \eta) .
\end{split}
\end{equation}
To fix the constants, we again use the normalization and demand symmetry/antisymmetry under $\eta \rightarrow 1 - \eta$. This sets $d_1 = d_2 = 0$ and $d_4 = \pm d_3$. To fix $d_3$, we recall that in the direct channel, $\eta \rightarrow 0$, the correlator should behave as
\begin{equation}
\begin{split}
\mathcal{G}(\eta) &= \eta^{- \Delta_{\phi} } + \mu_{\phi \phi 0} \eta^{\frac{\Delta_0^{d.t.}}{2} - \Delta_{\phi}} + \textrm{higher orders in $\eta$} \\
&= \eta^{- \Delta_{\phi} } + \mu^{(0)}_{\phi \phi 0}
+ \epsilon \left(\mu^{(1)}_{\phi \phi 0} + \frac{\gamma_0^{(1)}}{2} \log \eta \right) \\
&+ \epsilon^2 \left( \mu^{(2)}_{\phi \phi 0} + (\mu^{(1)}_{\phi \phi 0} \gamma_0^{(1)} + \mu^{(0)}_{\phi \phi 0} \gamma_0^{(2)}) \frac{\log \eta}{2} + \mu^{(0)}_{\phi \phi 0} (\gamma_0^{(1)})^2 \frac{\log^2 \eta}{8} \right) + O(\eta).
\end{split}
\end{equation}
Comparing the $\log \eta$ terms at order $\epsilon^2$ with \eqref{TwoPointSolEps2} then tells us
\begin{equation}
\begin{split}
&\mu^{(1)}_{\phi \phi 0} \gamma_0^{(1)} + \mu^{(0)}_{\phi \phi 0} \gamma_0^{(2)} = 2 d_3 - \frac{N + 2}{2 (N + 8)} \\
\implies & d^+_3 = \frac{3 (N + 2) (3 N + 14)}{2 (N + 8)^3}, \ \ d^-_3 = -\frac{3 (N + 2) (N - 2)}{2(N + 8)^3}.
\end{split}
\end{equation}
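The values of $d_3^{\pm}$ can be verified symbolically from the matching condition, using $\mu^{(0)}_{\phi \phi 0} = \pm 1$ and $\mu^{(1)}_{\phi \phi 0} = -(N+2)/(2(N+8))$ read off from \eqref{Epsilon2OPEResults}. A short sympy sketch (helper names are ours):

```python
import sympy as sp

N = sp.symbols('N', positive=True)
g0_1 = (N + 2)/(N + 8)                 # gamma_0^{(1)}
g0_2 = 6*(N + 2)*(N + 3)/(N + 8)**3    # gamma_0^{(2)}
mu1 = -(N + 2)/(2*(N + 8))             # mu^{(1)}_{phi phi 0}, same for both parities

checks = []
for sign, d3 in [(+1, 3*(N + 2)*(3*N + 14)/(2*(N + 8)**3)),
                 (-1, -3*(N + 2)*(N - 2)/(2*(N + 8)**3))]:
    lhs = mu1*g0_1 + sign*g0_2         # mu^{(1)} gamma_0^{(1)} + mu^{(0)} gamma_0^{(2)}
    rhs = 2*d3 - (N + 2)/(2*(N + 8))
    checks.append(sp.simplify(lhs - rhs))
print(checks)  # [0, 0]
```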
This gives us an explicit expression for the complete two-point function to order $\epsilon^2$ for the Wilson-Fisher model on $\mathbb{RP}^d$. Expanding the two-point function in powers of $\eta$ and extracting the OPE coefficients, it is easy to check that this agrees with the results in \eqref{Epsilon2OPEResults}, which we found using functionals. The large $N$ limit of this solution also agrees with \eqref{TwoPointCriticalO(N)+-} in $d = 4- \epsilon$.
\section{Comments on relation to bulk reconstruction}\label{Sec:7}
Our discussion of $\mathbb{RP}^d$ CFTs can also be related to the bulk reconstruction program in the large $N$ limit. In this section we make a number of comments regarding the connection with previous works.
Let us begin by noticing that inserting a local operator in Euclidean $AdS_{d+1}$ has the same effect of breaking the isometry from $SO(d+1,1)$ to $SO(d,1)$ as performing the $\mathbb{Z}_2$ quotient (\ref{AdScalI}). This is because the $\mathbb{Z}_2$ quotient selects a special fixed point $N_c$, just as inserting a local bulk operator does. However, the identification under the inversion does not further change the Lie algebra of the residual symmetry group, and we will not impose such identifications in this section.
Notice that $N_c$ is now no longer a special point, because AdS space is homogeneous. Nevertheless, we will always use the AdS isometry generators to move the local bulk operator to $N_c$ without loss of generality, so that it is easier to make a connection with the discussions in Section \ref{Sec:3}.
We will compare the holographic objects considered in Section \ref{Sec:3} with those arising from the Hamilton-Kabat-Lifschytz-Lowe (HKLL) approach for constructing local bulk operators \cite{Hamilton:2005ju,Hamilton:2006fh,Hamilton:2006az}, which is perturbative in nature in the $1/N$ expansion.\footnote{There are also intrinsically non-perturbative and state-independent developments which exploit the identification of twisted Ishibashi states with bulk operators \cite{Miyaji:2015fia,Nakayama:2015mva, Verlinde:2015qfa, Nakayama:2016xvw,Goto:2016wme, Lewkowycz:2016ukf}, and are explored most extensively in two dimensions. In fact, twisted Ishibashi states can be considered even when the boundary spacetime does not have crosscap insertions. } In the HKLL approach, a bulk field at a point in AdS can be defined by smearing the CFT operator with the bulk-to-boundary propagator (we do not keep track of the overall normalizations in this section)
\begin{equation}\label{HKLL}
\Phi^{(0)}_\Delta(N_c)=\int dP\, G_{B\partial}^\Delta(N_c,P) \mathcal{O}_\Delta(P)\;.
\end{equation}
The bulk-boundary two-point function can be obtained by performing the above smearing in the CFT two-point function, and we get
\begin{equation}
\langle \Phi^{(0)}_\Delta(N_c) \mathcal{O}_\Delta(P)\rangle \propto G_{B\partial}^\Delta(N_c,P)\;.
\end{equation}
This reproduces the one-point function (\ref{AdS1pt}).
However, applying (\ref{HKLL}) to a CFT three-point function runs into the problem of non-vanishing commutators for space-like separated operators, as the prescription is only good for free particles. Doing the integral, we get \cite{Kabat:2011rz}
\begin{equation}
\langle \Phi^{(0)}_\Delta(N_c) \mathcal{O}_{\Delta_1}(P_1)\mathcal{O}_{\Delta_2}(P_2)\rangle\propto \frac{\eta^{\frac{\Delta-\Delta_1-\Delta_2}{2}} }{(1+x_1^2)^{\Delta_1}(1+x_2^2)^{\Delta_2}}{}_2F_1\left(\tfrac{\Delta+\Delta_1-\Delta_2}{2},\tfrac{\Delta+\Delta_2-\Delta_1}{2};\Delta-\tfrac{d}{2}+1;\eta\right)\;.
\end{equation}
We recognize that this bulk-boundary three-point function is nothing but the conformal block $g_\Delta(\eta)$, which is not surprising from the symmetry point of view.\footnote{One can act on it with the two-particle conformal Casimir operator on the boundary, and use the equation of motion identity for the bulk-to-boundary propagator, to show that it is an eigenfunction.} The conformal block has a branch cut starting at $\eta=1$ where points are space-like separated. The existence of this singularity indicates a failure of micro-causality. Meanwhile, we recall that in Section \ref{Sec:geoW} we found an alternative geometric representation for the conformal blocks. This gives the above three-point function an interpretation in terms of a geodesic Witten diagram. Using this picture, we can obtain an intuitive understanding of the singularity without computing the integral. We note that the point $\eta=1$ corresponds to the limit where one boundary point approaches the image of the other boundary point. In this limit, the geodesic line which connects the two boundary points goes through the fixed bulk point $N_c$ (see Figure \ref{Fig:Wgeoetaeq1}). This creates a short distance singularity in the integral, and makes the three-point function singular.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.58\textwidth]{FigWgeoetaeq1}
\caption{Illustration of the branch cut singularity in the conformal block from the geodesic Witten diagram picture. In the limit $\eta\to1$, the point 2 approaches the image of point 1. The geodesic line connecting these two points now goes through $N_c$, and the geodesic Witten diagram integral diverges.}
\label{Fig:Wgeoetaeq1}
\end{center}
\end{figure}
To resolve the singularity and restore micro-causality, \cite{Kabat:2015swa} proposed that one should correct $ \Phi^{(0)}_\Delta$ with infinitely many double-trace operators
\begin{equation} \label{BulkReconImproved}
\Phi_\Delta(N_c)=\Phi^{(0)}_\Delta(N_c)+\sum_{n=0}^\infty a^{12}_n \int dP\, G_{B\partial}^{\Delta_1+\Delta_2+2n}(N_c,P) :\mathcal{O}_{\Delta_1}\mathcal{O}_{\Delta_2}:(P)
\end{equation}
where $a^{12}_n$ contains a $1/N$ suppression so that both terms contribute at the same order. The coefficients of the double-trace operators can be systematically determined by cancelling the branch point singularity at $\eta=1$ \cite{Kabat:2015swa}. As a result, the three-point function $\langle \Phi_\Delta\mathcal{O}_{\Delta_1}\mathcal{O}_{\Delta_2}\rangle$ now contains not only the single-trace conformal block $g_{\Delta}(\eta)$, but also infinitely many double-trace conformal blocks $g_{\Delta_1+\Delta_2+2n}(\eta)$. One can imagine that all the contributions to $\langle \Phi_\Delta\mathcal{O}_{\Delta_1}\mathcal{O}_{\Delta_2}\rangle$ have been resummed. Then, the end result of this prescription for the bulk reconstruction should coincide with the exchange Witten diagram $W^{exchange}_\Delta$ defined in (\ref{defW}). To see this, we recall that the conformal block decomposition of $W^{exchange}_\Delta$ has the same structure as \eqref{BulkReconImproved}, and $W^{exchange}_\Delta$ is free of singularities at $\eta=1$. Note that there is one detail we have glossed over: there are also homogeneous solutions for $a^{12}_n$ which do not have branch singularities. But these solutions just correspond to contact Witten diagrams, which are polynomials of the cross ratio.
The above holographic reconstruction of the three-point function $\langle \Phi_\Delta\mathcal{O}_{\Delta_1}\mathcal{O}_{\Delta_2}\rangle$ can be alternatively phrased as a conformal bootstrap problem. We can ask the following question in the spirit of the seminal work \cite{Heemskerk:2009pn}: given the appearance of a single-trace operator with dimension $\Delta$, what is the total contribution to the field theory two-point function of $\mathcal{O}_{\Delta_1}$, $\mathcal{O}_{\Delta_2}$ at order $1/N$, dictated by the partially broken $SO(d,1)\subset SO(d+1,1)$ conformal symmetry?\footnote{This requires the single-trace operator $\mathcal{O}_\Delta$ to appear in the OPE of $\mathcal{O}_{\Delta_1}\times \mathcal{O}_{\Delta_2}$, and also to have a nonzero one-point function. The latter is possible because we assume the conformal symmetry to be partially broken. We emphasize again that the breaking of conformal symmetry is not due to placing the theory on $\mathbb{RP}^d$ (the space is still $\mathbb{R}^d$), but due to the presence of a bulk local operator (to be interpreted from the solution to the problem).} This question is similar in flavor to the question asked in \cite{Alday:2017gde} about four-point functions in CFTs with full conformal symmetry. Since we are in the large $N$ limit, the conditions of our question indicate that the conformal block decomposition should take the following form
\begin{equation}\label{calGPhiOO}
\begin{split}
\mathcal{G}_{\Phi\mathcal{O}\mathcal{O}}(\eta)={}&\mu g_\Delta(\eta)+\sum_{n=0}^\infty b_n g_{\Delta_1+\Delta_2+2n}(\eta)\\
={}&\sum_{n=0}^\infty c_n \bar{g}_{\Delta_1+\Delta_2+2n}(\eta)
\end{split}
\end{equation}
where only double-trace conformal blocks are allowed to appear in addition to the single-trace conformal block. We can view these expressions as the leading order deformation of the mean field theory two-point function by adding a single-trace operator.
In order to get rid of the anticipated ambiguities in the double-trace operators coming from contact diagrams, we should also impose a bound on the Regge behavior\footnote{Similar issues with contact diagrams can also arise in the four-point function problem, and can be eliminated by imposing conditions on the Regge growth.}
\begin{equation}
|\mathcal{G}_{\Phi\mathcal{O}\mathcal{O}}|\lesssim |\eta|^{-\epsilon}\;,\;\text{when }\;\; \eta\to\infty\;.
\end{equation}
The extra homogeneous solutions with just double-trace conformal blocks can always be conveniently added back at the very end. This conformal bootstrap problem can then be easily solved by using the analytic functionals which we introduced in Section \ref{Sec:basisfunctional}. Applying the basis of functionals to (\ref{calGPhiOO}), we find that
\begin{equation}
b_n=-\mu\,\omega_n(g_\Delta)\;,\quad c_n=\mu\,\omega_n(\bar{g}_\Delta)\;.
\end{equation}
Comparing with (\ref{Ping}), this indicates that $\mathcal{G}_{\Phi\mathcal{O}\mathcal{O}}(\eta)$ is just proportional to the uniquely defined Polyakov-Regge block $\mathcal{P}_\Delta(\eta)$, {\it i.e.}, an exchange Witten diagram with a local operator in the bulk AdS.
\section{Future directions}\label{Sec:8}
In this paper we performed an analytic study of CFTs on real projective space. We gave a detailed account of a toy model of holography on a $\mathbb{Z}_2$ quotient of AdS, and studied properties of Witten diagrams on this background. The investigation led to a basis of analytic functionals dual to double-trace conformal blocks. We explicitly constructed these functionals from the conformal block decomposition coefficients of exchange Witten diagrams. Although the functionals stem from a toy holography model, they apply universally to $\mathbb{RP}^d$ CFTs. In particular, we applied these functionals to study the $O(N)$ vector model in the $4-\epsilon$ expansion, and obtained one-point functions to order $\epsilon^2$. We also studied in detail the large $N$ $O(N)$ vector model on $\mathbb{RP}^d$ using independent field theory techniques, and obtained results that are consistent with the $\epsilon$-expansion. Our work leads to a number of interesting future directions.
An interesting extension of our work is to include fermions, and study models on real projective space such as QCD and the Gross-Neveu model. Including fermions is also necessary for considering theories with supersymmetry. The case of 4d $\mathcal{N}=4$ SYM on $\mathbb{RP}^4$ has been recently considered in \cite{Wang:2020jgh} using supersymmetric localization techniques. It will be nice to study it using other analytic techniques, such as those developed in this paper.
Another related direction to explore is thermal CFTs obtained by compactification on $S^1 \times \mathbb{R}^{d- 1}$ \cite{Iliesiu:2018fao}, where two-point functions also nontrivially depend on the spacetime coordinates. Similar to our setup, the new data of thermal CFTs enters as the one-point function coefficients. But unlike our case, spinning operators are also allowed to have non-vanishing thermal one-point functions. Furthermore, the conformal symmetry is fully broken to $U(1) \times O(d-1)$ in thermal CFTs, and two-point functions depend on two independent cross ratios instead of one. Despite the differences, it would be interesting to see if some of our techniques can be generalized to study that problem.
As we pointed out in Section \ref{Sec:5}, the existence of the two-term dimensional reduction formula for conformal blocks suggests an extension of the Parisi-Sourlas supersymmetry to real projective space. It would be very interesting to study in detail the realization of the symmetry in concrete models such as branched polymers, and test its equivalence with the Yang-Lee critical theory on a real projective space with two dimensions less.
A noticeable omission in the literature of $\mathbb{RP}^d$ CFTs is the top-down construction of their holographic duals. On the other hand, theories such as $\mathcal{N}=4$ SYM are completely well-defined on $\mathbb{RP}^4$ at weak coupling, and presumably will remain well-defined at strong coupling as well. Finding a dual description in IIB supergravity for the strong coupling limit should therefore be possible. It will be interesting to find such explicit backgrounds, which will provide the starting point for doing holographic calculations. Similarly, it would be interesting to investigate the same question for the Vasiliev higher-spin theory and further check the conjectured duality to $O(N)$ vector model \cite{Klebanov:2002ja} by using the results obtained in this paper.
Related to studying the holographic duals, an interesting question to ask is whether there are any universal results that can be derived for double-trace deformations of CFTs on $\mathbb{RP}^d$, similar in spirit to \cite{Witten:2001ua, Gubser:2002zh, Gubser:2002vv, Hartman:2006dy}. We studied the two-point function and free energy in the large $N$ critical $O(N)$ vector model, which can be obtained as a double-trace deformation of the free $O(N)$ model. It would be interesting to see if some of the results are model independent and hold true for more general double-trace deformations.
\section*{Acknowledgments}
The work of S.G. and H.K. is supported in part by the US NSF under Grants No. PHY-1620542 and PHY-1914860. The work of X.Z. is supported in part by the Simons Foundation Grant No. 488653.
1,314,259,996,574 | arxiv | \section*{Abstract}
{\bf
We investigate the momentum distribution function of a single distinguishable impurity particle which formed a polaron state in a gas of either free fermions or Tonks-Girardeau bosons in one spatial dimension. We obtain a Fredholm determinant representation of the distribution function for the Bethe ansatz solvable model of an impurity-gas $\delta$-function interaction potential at zero temperature, in both repulsive and attractive regimes. We deduce from this representation the fourth power decay at a large momentum, and a weakly divergent (quasi-condensate) peak at a finite momentum. We also demonstrate that the momentum distribution function in the limiting case of infinitely strong interaction can be expressed through a correlation function of the one-dimensional impenetrable anyons.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
Non-interacting Bose and Fermi systems have markedly different momentum distribution functions at low temperature. Bosons tend towards a macroscopic occupation of the zero-momentum state, and fermions spread over the volume of the Fermi sphere. When interparticle interactions are present, the distinction becomes far less evident. We know from exactly solvable models in one spatial dimension that some observables evolve smoothly from boson- to fermion-like behavior, as a function of the inter-particle interaction strength. An example is provided by the Lieb-Liniger model, which represents a gas of bosons interacting through a $\delta$-function potential of an arbitrary strength $g$ \cite{lieb_boseI_1963,lieb_boseII_1963}. The excitation spectrum of the model in the $g\to \infty$ limit, the Tonks-Girardeau gas, is the same as that of a free Fermi gas~\cite{tonks_complete_1936, girardeau_impurity_TG_60}. Furthermore, any excitation in the Lieb-Liniger gas is parametrized by a set of distinct integers, in the same way as for a free Fermi gas, giving rise to the notion of the Pauli principle for one-dimensional interacting bosons~\cite{korepin_book}. This is consistent with the fact that the low-energy, low-momentum excitations of interacting gapless one-dimensional Bose and Fermi systems can be interpreted as collective boson modes of a single effective field theory, a procedure called bosonization~\cite{gogolin_1dbook, giamarchi_book_1d}. Despite these similarities, the momentum distribution functions of the Tonks-Girardeau and free Fermi gases are radically different, as is seen from the exact~\cite{lenard_TG_64, lenard_TG_66} as well as asymptotic~\cite{vaidya_opdm_79, jimbo_painleve_80} formulas.
How do interactions shape the momentum distribution function of a single distinguishable mobile particle, an impurity, interacting with a one-dimensional system? It has been demonstrated in Ref.~\cite{frenkel_impurity_momentum_distribution_92} that the function $n(k)$, defined as the probability to find the impurity in the state having the momentum $k$, does not have a single-particle delta-peak $\delta(k)$ in one spatial dimension, for any non-zero value of the impurity-gas coupling strength. Instead, $n(k) \sim k^\nu$ in the $k\to 0$ limit. The value of $\nu$ was found only in the limit of vanishing impurity-gas coupling strength~\cite{frenkel_impurity_momentum_distribution_92}. Extending this result to an arbitrary coupling strength is a far-from-trivial problem. This is because the many-body spectrum of the whole system contains low-energy excitations with a quadratic dispersion relation. The application of the bosonization technique is not straightforward for such a spectrum~\cite{zvonarev_ferrobosons_07, akhanjee_ferrobosons_07}. The recently developed paradigm of non-linear Luttinger liquids \cite{imambekov_review_12} could perhaps be used to find $\nu$ for an arbitrary interaction strength. However, this has yet to be done. As for finding the exact shape of $n(k)$ in the whole range of values of the momentum $k$, the Bethe ansatz solution remains the only non-perturbative analytical approach available thus far.
In the present paper we investigate the shape of the momentum distribution function $n(k,Q)$ of an impurity interacting with a free Fermi gas in one spatial dimension. The system is in the polaron state, defined as the minimum energy state at a given total momentum $Q=P_\mathrm{imp} + \sum_{j=1}^{N}P_j$ (Ref.~\cite{frenkel_impurity_momentum_distribution_92} deals with $Q=0$ only). The impurity has the same mass as the gas particles, and interacts with the gas through a $\delta$-function potential of an arbitrary (positive or negative) strength $g$. The Hamiltonian reads
\begin{equation}\label{Hamiltonian general}
H = \frac{P_\mathrm{imp}^2}{2m}+\sum_{j=1}^N\frac{P_j^2}{2m}+g \sum_{j=1}^N \delta(x_j-x_\mathrm{imp}).
\end{equation}
Here, $x_j$ ($P_j$) is the coordinate (momentum) of a gas particle, $j = 1, \ldots, N,$ and $x_{\rm imp}$ ($P_{\rm imp}$) is the one of the impurity. Such a model is Bethe ansatz solvable; its eigenfunctions and spectrum have been found by McGuire~\cite{mcguire_impurity_fermions_65, mcguire_impurity_fermions_66}. McGuire's solution is a special case of the Bethe ansatz solution for the Gaudin-Yang model~\cite{gaudin_fermions_spinful_67, yang_fermions_spinful_67, gaudin_book}, having a peculiarity that any eigenfunction can be written as a single determinant resembling the Slater determinant for the free Fermi gas~\cite{edwards_impurity_90, castella_mob_impurity_93, recher_TGtoFF_det_PainleveV}. Such a representation, so far not available for any other interacting Bethe ansatz solvable model, enabled the derivation of an exact analytical expression for the time-dependent two-point impurity correlation function at zero~\cite{gamayun_impurity_Green_FTG_14} and arbitrary temperature~\cite{gamayun_impurity_Green_FTG_16}. Here, we present an exact analytical expression for $n(k,Q)$ in the limit of infinite system size, $L\to\infty$, valid for an arbitrary (positive or negative) coupling strength $g$ and zero temperature. The answer is given in terms of the Fredholm determinant of a linear integral operator of integrable type (see, e.g., section XIV.1 of \cite{korepin_book}). We use our exact analytical result (i) to obtain the large-momentum tails of $n(k,Q)$ and the root-mean-square uncertainty of the average momentum of the impurity; (ii) to extract a quasi-condensate-like divergence of $n(k,Q)$ at $k=Q$; (iii) to establish the correspondence between $n(k,Q)$ in the $g\to\infty$ limit and a correlation function of the one-dimensional impenetrable anyons.
The paper is organized as follows. In section \ref{sec:preliminaries} we define the model under consideration. In section~\ref{sec:Fdr} we summarize our exact analytical results expressed in terms of the Fredholm determinants. In sections~\ref{sec:slg} through~\ref{sec:kto0} we analyze various limiting cases of the formulas from section~\ref{sec:Fdr}. Section~\ref{sec:Bd} explains principal steps of the calculation used to get the Fredholm determinant representation of section~\ref{sec:Fdr}. We conclude in section~\ref{sec:conclusions}. The appendices are self-explanatory.
\section{Model \label{sec:preliminaries}}
Our objective is to compute the momentum distribution function of an impurity,
\begin{equation} \label{mainN}
n(k,Q) = \frac{L}{2\pi}\langle \mathrm{min}_Q |\psi_{k\downarrow}^\dagger \psi_{k\downarrow}| \mathrm{min}_Q\rangle,
\end{equation}
interacting with a free one-dimensional spinless Fermi gas at zero temperature. Here, $|\mathrm{min}_Q\rangle$ is a polaron state, defined as the minimum energy state of the system having the total momentum $Q$ and containing only one impurity. We discuss the properties of the polaron state later in this section. Note that our result for the function~\eqref{mainN} is also valid for the impurity immersed into the Tonks-Girardeau gas. This can be explained using the arguments given at the end of section~2 in Ref.~\cite{gamayun_impurity_Green_FTG_16}.
The Hamiltonian of the entire system is
\begin{equation}
H = H_\uparrow + H_\mathrm{imp}, \label{eq:htot}
\end{equation}
where
\begin{equation}
H_\uparrow = \int_0^L dx\, \psi^\dagger_{\uparrow}(x) \left(-\frac1{2m}\frac{\partial^2}{\partial x^2}\right) \psi_\uparrow(x) \label{eq:hff}
\end{equation}
is the Hamiltonian of the free Fermi gas, $m$ is the particle mass, and
\begin{equation}
H_\mathrm{imp} = \int_0^L dx\, \left[ \psi^\dagger_{\downarrow}(x) \left(-\frac1{2m}\frac{\partial^2}{\partial x^2}\right) \psi_\downarrow(x)+ g \psi^\dagger_\uparrow(x) \psi^\dagger_\downarrow(x) \psi_\downarrow(x) \psi_\uparrow(x) \right].
\end{equation}
The creation (annihilation) operators $\psi^\dagger_{\sigma}$ $(\psi_{\sigma})$ carry the subscript $\sigma=\uparrow$ for the spinless Fermi gas, and $\sigma=\downarrow$ for the impurity. We have
\begin{equation}
\psi_\sigma^\dagger(x) = \frac1{\sqrt L} \sum_p e^{-ipx} \psi^\dagger_{p\sigma}, \qquad p=\frac{2\pi n}{L}, \qquad n=0,\pm1,\pm2,\ldots.
\end{equation}
The Hamiltonian~\eqref{eq:htot} defines the fermionic Gaudin-Yang model~\cite{gaudin_fermions_spinful_67,yang_fermions_spinful_67,gaudin_book}, in which the number of the impurity particles,
\begin{equation}
N_\mathrm{imp} = \int_0^L dx\, \psi^\dagger_{\downarrow}(x) \psi_\downarrow(x)
\end{equation}
is arbitrary. However, the states with $N_\mathrm{imp}>1$ do not contribute to the function~\eqref{mainN}. The first-quantized form of the Hamiltonian~\eqref{eq:htot} with $N_\mathrm{imp}=1$ and $N$ particles from the Fermi gas is given by Eq.~\eqref{Hamiltonian general}. The Planck constant, $\hbar$, is equal to one in our units. A commonly used dimensionless form of the impurity-gas coupling strength $g$ is
\begin{equation}
\gamma = \frac{mg}{\rho_0}, \label{eq:gamma}
\end{equation}
where
\begin{equation}\label{rho0}
\rho_0 = \frac{N}L
\end{equation}
is the gas density. To further simplify notations, we let
\begin{equation}
m=1
\end{equation}
and measure all momenta in the units of the Fermi momentum,
\begin{equation} \label{kF1}
k_F = \pi \rho_0 =1.
\end{equation}
We restore $m$ and $k_F$ in the captions to the figures.
Equation~\eqref{mainN} can be written as
\begin{equation} \label{mainN2}
n(k,Q) = \frac1{2\pi} \int_0^L dy\, e^{iky}\varrho(y),
\end{equation}
where
\begin{equation} \label{rpp}
\varrho(y)= L\langle \mathrm{min}_Q|\psi_\downarrow^\dagger(y) \psi_\downarrow(0) |\mathrm{min}_Q\rangle
\end{equation}
is the $Q$-dependent reduced density matrix of the impurity. The normalization condition
\begin{equation} \label{nksf}
\sum_k n(k,Q)=\frac{L}{2\pi}
\end{equation}
is equivalent to
\begin{equation}
\varrho(0)=1.
\end{equation}
For the system in a finite volume $L$, periodic boundary conditions are imposed. That $n(k,Q)$ is real implies the involution
\begin{equation} \label{inv}
\varrho(-y)=\varrho^*(y),
\end{equation}
where the star stands for the complex conjugation. The symmetry
\begin{equation} \label{eq:nqinv}
n(-k,Q) = n(k,-Q)
\end{equation}
applied to Eq.~\eqref{inv} gives
\begin{equation}
\varrho^*(y)=\varrho(y), \qquad Q=0.
\end{equation}
In order to compute the function~\eqref{mainN} we use a form-factor summation approach. We write
\begin{equation}\label{npff}
n(k,Q) = \sum\limits_{p_1,p_2,\dots, p_{N}} |\langle N|\psi_{k \downarrow}|\mathrm{min}_Q\rangle|^2.
\end{equation}
Here,
\begin{equation}
|N\rangle = \psi_{p_1\uparrow} \cdots\psi_{p_N\uparrow}|0_\uparrow\rangle
\end{equation}
is the free Fermi gas state containing $N$ fermions with the momenta $p_1,\ldots,p_N$. The vacuum $|0_\sigma\rangle$, $\sigma=\uparrow,\downarrow$, is the state with no particles, $\psi_{p\sigma}|0_\sigma\rangle=0$. The sum in Eq.~\eqref{npff} is over the states whose momenta satisfy the constraint
\begin{equation}\label{kpQ}
k+\sum_{j=1}^N p_j =Q.
\end{equation}
Periodic boundary conditions imply the quantization of the momenta
\begin{equation} \label{qq}
p_j = \frac{2\pi n_j}{L}, \qquad n_j =0,\pm1,\pm2,\ldots, \qquad j=1,\ldots,N.
\end{equation}
The coordinate representation for $|N\rangle$ is the Slater determinant
\begin{equation} \label{eq:slaterff}
|N\rangle = \frac1{\sqrt{L^N N!}} \det\nolimits_N e^{ip_j x_l}, \qquad j,l=1,\ldots,N.
\end{equation}
All eigenstates of the Hamiltonian~\eqref{Hamiltonian general}, $|\mathrm{min}_Q\rangle$ being one of them, have been found in Refs.~\cite{mcguire_impurity_fermions_65, mcguire_impurity_fermions_66}. Let $|Q\rangle$ be an eigenstate having total momentum $Q$. Such a state is parametrized by the quasi-momenta $k_1,\ldots,k_{N+1}$ satisfying
\begin{equation} \label{QQ}
Q = \sum\limits_{j=1}^{N+1}k_j.
\end{equation}
The energy of the state $|Q\rangle$ reads
\begin{equation} \label{efin}
E(Q)= \sum_{j=1}^{N+1} \frac{k_j^2}2.
\end{equation}
Each $k_j$ should satisfy the equation
\begin{equation} \label{bethe1}
k_j = \frac{2\pi}{L}\left(n_j-\frac{\delta_j}{\pi}\right), \qquad n_j=0,\pm1,\pm2,\ldots, \qquad j=1,\ldots,N+1,
\end{equation}
where
\begin{equation} \label{bethe2}
\delta_j = \frac{\pi}{2} - \arctan (\Lambda - \alpha k_j), \qquad 0\le \delta_j<\pi.
\end{equation}
Here,
\begin{equation}
\alpha =\frac{2\pi}\gamma
\end{equation}
where $\gamma$ is given by Eq.~\eqref{eq:gamma}. Thus, one has a system of $N+1$ equations~\eqref{bethe1} for the variables $k_1,\ldots,k_{N+1}$ and $\Lambda$. These equations, called the Bethe equations, are coupled through Eq.~\eqref{QQ}. Any solution to this system has the following properties~\cite{mcguire_impurity_fermions_66}: (i) $\Lambda$ is real. (ii) If $\alpha\ge 0$ all $k_j$'s are real. (iii) If $\alpha<0$ either all $k_j$'s are real, or $k_1,\ldots,k_{N-1}$ are real, while $k_N$ and $k_{N+1}$ have a non-zero imaginary part, and $k_N=k_{N+1}^*$.
We will often use the following representation of the Bethe equations~\eqref{bethe1}:
\begin{equation} \label{ebethe1}
e^{ik_j L} = \frac{\nu_-(k_j)}{\nu_+(k_j)}, \qquad j=1,\ldots,N+1,
\end{equation}
where
\begin{equation} \label{nupm}
\nu_\pm(q)=\frac{1}{\alpha} \frac1{q-k_\mp},
\end{equation}
and
\begin{equation} \label{kpm}
k_{\pm} = \frac{\Lambda\pm i}{\alpha}.
\end{equation}
Taking the derivative of Eq.~\eqref{ebethe1} with respect to $\Lambda$ we get
\begin{equation} \label{kjLambda}
\frac{\partial k_j}{\partial \Lambda} = \frac{2}{L} \frac{\nu_-(k_j)\nu_+(k_j)}{1+\frac2L \alpha \nu_-(k_j)\nu_+(k_j)}, \qquad j=1,\ldots,N+1.
\end{equation}
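For a finite $N$, the coupled system~\eqref{QQ}, \eqref{bethe1} is straightforward to solve numerically: at a fixed $\Lambda$ the equations~\eqref{bethe1} decouple and can be iterated to a fixed point, after which $\Lambda$ is tuned by bisection to satisfy the constraint~\eqref{QQ}. The following Python sketch (ours, for illustration only; the values of $N$, $\gamma$, and $Q$ are arbitrary) implements this procedure in the units $m=k_F=1$, so that $L=\pi N$, and also checks Eq.~\eqref{kjLambda} against a finite difference, using $\nu_-(k_j)\nu_+(k_j)=1/[1+(\Lambda-\alpha k_j)^2]$.

```python
import numpy as np

def solve_k(Lam, n, alpha, L, iters=200):
    # Fixed-point iteration of Eqs. (bethe1)-(bethe2):
    # k_j = (2*pi/L)*(n_j - delta_j/pi), delta_j = pi/2 - arctan(Lam - alpha*k_j)
    k = 2*np.pi*n/L
    for _ in range(iters):
        delta = np.pi/2 - np.arctan(Lam - alpha*k)
        k = (2*np.pi/L)*(n - delta/np.pi)
    return k

def find_Lambda(Q, n, alpha, L, lo=-100.0, hi=100.0, iters=80):
    # Bisection on the constraint Q = sum_j k_j, Eq. (QQ);
    # the total momentum increases monotonically with Lambda.
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if solve_k(mid, n, alpha, L).sum() < Q:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

N, gamma, Q = 11, 5.0, 0.3            # illustrative values, |Q| < k_F
L = np.pi*N                           # units with k_F = pi*N/L = 1
alpha = 2*np.pi/gamma
n = np.arange(1, N + 2) - (N + 1)/2   # n_j = -(N+1)/2 + j, Eq. (njvac)
Lam = find_Lambda(Q, n, alpha, L)
k = solve_k(Lam, n, alpha, L)

# check dk_j/dLambda of Eq. (kjLambda) against a finite difference
eps = 1e-6
fd = (solve_k(Lam + eps, n, alpha, L) - k)/eps
s = 1/(1 + (Lam - alpha*k)**2)        # equals nu_-(k_j) nu_+(k_j)
formula = (2/L)*s/(1 + (2*alpha/L)*s)
```

The fixed-point map is a contraction (its factor is of order $2\alpha/L$), so the iteration converges rapidly for the system sizes used here.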
The point of focus of our paper is $n(k,Q)$ in the thermodynamic limit, defined as the limit of infinite system size, $L\to \infty$, at a constant density
\begin{equation}
\rho_0 = \frac{N}{L}=\mathrm{const}>0, \qquad N,L\to \infty.
\end{equation}
In what follows, we use $L\to\infty$ in place of $L,N\to\infty$ for simplicity of the notations. The choice of the boundary conditions should play no role for $n(k,Q)$ in the thermodynamic limit. The sum over momenta turns into the integral,
\begin{equation}
\frac{2\pi}{L}\sum_k \to \int_{-\infty}^\infty dk\, \qquad L\to\infty,
\end{equation}
and the normalization condition~\eqref{nksf} becomes
\begin{equation}
\int_{-\infty}^\infty dk\, n(k,Q)=1.
\end{equation}
In sections~\ref{sec:igr} through~\ref{s:igab} we proceed with solving the system of Eqs.~\eqref{QQ} and \eqref{bethe1} in the thermodynamic limit for the state $|\mathrm{min}_Q\rangle$ entering Eq.~\eqref{mainN}.
\subsection{Defining $|\mathrm{min}_Q\rangle$ for impurity-gas repulsion \label{sec:igr}}
In the case of the repulsive interaction, $\gamma \ge0$, the $L\to\infty$ limit of Eq.~\eqref{bethe2} reads
\begin{equation} \label{deltaA}
\delta_j = \frac{\pi}{2} - \arctan\left(\Lambda - \alpha \frac{2\pi n_j}{L}\right), \qquad j=1,\ldots,N+1, \qquad L\to\infty.
\end{equation}
We adopt the convention that the distinct integers $n_j$ are enumerated in increasing order, $n_1<\cdots<n_{N+1}.$ Equation~\eqref{QQ} turns into an algebraic relation between $\Lambda$ and $Q$:
\begin{equation}\label{phase}
Q= Q^D+ \Lambda Z + \frac1\pi [\arctan(\alpha+\Lambda)-\arctan(\alpha-\Lambda)]+ \alpha \varphi,
\end{equation}
where
\begin{equation} \label{ZZ0}
Z = \frac{\arctan(\alpha-\Lambda)+\arctan(\alpha+\Lambda)}{\alpha\pi}
\end{equation}
and
\begin{equation} \label{eq:varphi}
\varphi = \frac{1}{2\pi \alpha^2 }\ln\frac{1 +(\alpha-\Lambda)^2}{1+(\alpha+ \Lambda)^2}.
\end{equation}
The function $Q^D$ encompasses all $n_j$'s:
\begin{equation} \label{QD}
Q^D = \frac{2\pi}{L}\sum\limits_{j=1}^{N+1} n_j -1.
\end{equation}
The energy~\eqref{efin} turns into
\begin{equation} \label{et}
E(Q)= \frac12 \sum_{j=1}^{N+1} \left(\frac{2\pi}{L}n_j\right)^2 + E_\mathrm{min}(Q),
\end{equation}
where
\begin{equation}\label{Elambda}
E_\mathrm{min}(Q)= \frac1{\pi \alpha} -\frac{1+\alpha^2-\Lambda^2}{2\alpha}Z +\Lambda\varphi.
\end{equation}
Let
\begin{equation} \label{njvac}
n_j = -\frac{N+1}{2}+j, \qquad j = 1, \ldots, N+1.
\end{equation}
Such a choice leads to $Q^D=0$, and corresponds to the minimum energy state $|\mathrm{min}_Q\rangle$ for $-1\le Q\le 1$. Equation~\eqref{phase} turns into
\begin{equation}\label{phase2}
Q=\Lambda Z + \frac1\pi [\arctan(\alpha+\Lambda)-\arctan(\alpha-\Lambda)]+ \alpha \varphi.
\end{equation}
The parameter $\Lambda$ runs from $-\infty$ to $\infty$ when $Q$ runs from $-1$ to $1$. Equations~\eqref{Elambda} and~\eqref{phase2} determine $E_\mathrm{min}$ as a function of $Q$ for $-1\le Q\le 1$. The minimum energy state for $Q$ outside of that interval is parametrized by consecutive sets of $n_j$'s other than given by Eq.~\eqref{njvac}. The result is a smooth periodic function of $Q$, plotted in the left panel of Fig.~\ref{fig:energy}. Note that
\begin{equation}
E_\mathrm{min}(1)=0,
\end{equation}
and
\begin{equation} \label{ep1}
E_\mathrm{min}(0) = \frac{\alpha-(1+\alpha^2)\arctan\alpha}{\pi\alpha^2}.
\end{equation}
Therefore,
\begin{equation}
E_\mathrm{min}(1) - E_\mathrm{min}(0) \ge 0, \qquad 0\le\gamma\le \infty
\end{equation}
decreases from $1/2$ to zero when $\gamma$ increases from zero to infinity.
\begin{figure}
\includegraphics[width=\textwidth]{energy.pdf}
\caption{\label{fig:energy} Shown is the normalized minimum energy $[E_\mathrm{min}(Q)-E_\mathrm{min}(0)]/[E_\mathrm{min}(k_F)-E_\mathrm{min}(0)]$ as a function of the total momentum $Q$ for the repulsive, and the attractive gas state (two identical curves, left panel), and the attractive bound state (right panel). The absolute value of the impurity-gas interaction strength is $|\gamma|=10$. Note that $E_\mathrm{min}(Q)$ is $Q$-periodic with the period $2k_F$, it is plotted here for the two periods.}
\end{figure}
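Equations~\eqref{phase2} and~\eqref{Elambda} define $E_\mathrm{min}(Q)$ parametrically, so no root finding is needed to draw the left panel of Fig.~\ref{fig:energy}: one scans $\Lambda$ and plots $E_\mathrm{min}(\Lambda)$ against $Q(\Lambda)$. A minimal Python sketch (ours; the value of $\gamma$ is illustrative):

```python
import numpy as np

def Z(Lam, a):    # Eq. (ZZ0)
    return (np.arctan(a - Lam) + np.arctan(a + Lam))/(a*np.pi)

def phi(Lam, a):  # Eq. (eq:varphi)
    return np.log((1 + (a - Lam)**2)/(1 + (a + Lam)**2))/(2*np.pi*a**2)

def Q_of(Lam, a): # Eq. (phase2), valid for -k_F <= Q <= k_F
    return Lam*Z(Lam, a) + (np.arctan(a + Lam) - np.arctan(a - Lam))/np.pi + a*phi(Lam, a)

def E_min(Lam, a):  # Eq. (Elambda)
    return 1/(np.pi*a) - (1 + a**2 - Lam**2)/(2*a)*Z(Lam, a) + Lam*phi(Lam, a)

gamma = 10.0                          # illustrative repulsive coupling
a = 2*np.pi/gamma                     # alpha = 2*pi/gamma
Lam = np.linspace(-500, 500, 20001)   # Lambda parametrizes -1 < Q < 1
Q, E = Q_of(Lam, a), E_min(Lam, a)    # parametric curve (Q, E_min)
```

At $\Lambda=0$ the sketch reproduces Eq.~\eqref{ep1}, and the curve approaches $E_\mathrm{min}(\pm k_F)=0$ at large $|\Lambda|$.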
\subsection{Defining $|\mathrm{min}_Q\rangle$ for impurity-gas attraction: gas state \label{s:igag}}
The gas state is defined for the attractive interaction, $\gamma<0$, as the minimum energy state with all $k_j$'s real. Such a state has been realized for the Lieb-Liniger gas in an experiment with ultracold atoms~\cite{haller_superTonks_2009}. The analysis following the steps of section~\ref{sec:igr} leads to Eqs.~\eqref{Elambda} and~\eqref{phase2}, in which $\gamma$ is now negative. This results in $E_\mathrm{min}(Q)$ being an odd function of $\gamma$. Therefore, the function $[E_\mathrm{min}(Q)-E_\mathrm{min}(0)]/[E_\mathrm{min}(1)-E_\mathrm{min}(0)]$ coincides with the one for the repulsive case, plotted in the left panel of Fig.~\ref{fig:energy}. The function
\begin{equation}
E_\mathrm{min}(1) - E_\mathrm{min}(0) \le 0, \qquad -\infty\le\gamma\le 0
\end{equation}
decreases from zero to $-1/2$ when $\gamma$ increases from minus infinity to zero.
This means that the minimum energy state for a weak repulsion, $\gamma\ll 1$, does not go continuously to the gas state for a weak attraction, $-\gamma\ll 1$. Rather, it turns into the weakly attractive bound state, discussed in section~\ref{s:igab}.
\subsection{Defining $|\mathrm{min}_Q\rangle$ for impurity-gas attraction: bound state \label{s:igab}}
The bound state is the true minimum energy state for the attractive interaction, $\gamma<0$. That is, $k_j$'s are not required to be real, as it was for the gas state, section~\ref{s:igag}. As a result, the phase shifts take the form \eqref{deltaA} for the real $k_1,\ldots,k_{N-1}$, and~\cite{mcguire_impurity_fermions_66}
\begin{equation} \label{kc}
k_N = k_+ + \mathcal{O}(e^{-|g|L}), \qquad k_{N+1} = k_- + \mathcal{O}(e^{-|g|L}),
\end{equation}
where $k_\pm$ is defined by Eq.~\eqref{kpm}. Therefore, Eq.~\eqref{QQ} takes the form
\begin{equation} \label{qqb}
Q= Q^D+
\Lambda Z + \frac1\pi [\arctan(\alpha+\Lambda)-\arctan(\alpha-\Lambda)]+ \alpha \varphi +k_+ + k_-,
\end{equation}
where $Q^D$ is given by Eq.~\eqref{QD} with $j$ running from $1$ to $N-1$. As in the case $\gamma > 0$, we have $Q^D=0$ for the minimum energy states in the interval $-1 \le Q \le 1$:
\begin{equation} \label{qqb2}
Q= \Lambda Z_b + \frac1\pi [\arctan(\alpha+\Lambda)-\arctan(\alpha-\Lambda)]+ \alpha \varphi,
\end{equation}
where
\begin{equation} \label{ZZb0}
Z_b = Z+ \frac2\alpha.
\end{equation}
The function
\begin{equation} \label{eb}
E_\mathrm{min}(Q)= \frac1{\pi \alpha} -\frac{1+\alpha^2-\Lambda^2}{2\alpha}Z +\Lambda\varphi +\frac{k_+^2}2 +\frac{k_-^2}2 = 1+ \frac1{\pi \alpha} -\frac{1+\alpha^2-\Lambda^2}{2\alpha}Z_b +\Lambda\varphi
\end{equation}
entering Eq.~\eqref{et} is plotted in the right panel of Fig.~\ref{fig:energy}, and is a periodic function of $Q$. Unlike for the repulsive and the attractive gas state, (i) $\Lambda$ runs through the finite interval, $-\Lambda_F \le \Lambda \le \Lambda_F$, when $Q$ runs from $1$ to $-1$ in Eq.~\eqref{qqb2}; (ii) $E_\mathrm{min}(Q)$ has cusps at $Q=\pm 1, \pm2, \ldots$. One has
\begin{equation} \label{em1}
E_\mathrm{min}(0) = -\frac1{\alpha^2} +\frac{\alpha-(1+\alpha^2)\arctan\alpha}{\pi\alpha^2}.
\end{equation}
The function $E_\mathrm{min}(1)$ is obtained by substituting $\Lambda_F$ into Eq.~\eqref{eb}, and
\begin{equation}
E_\mathrm{min}(1) - E_\mathrm{min}(0) > 0, \qquad -\infty\le\gamma\le 0
\end{equation}
increases from $1/4$ to $1/2$ when $\gamma$ goes from minus infinity to zero.
\section{Fredholm determinant representation in the thermodynamic limit}
\label{sec:Fdr}
In this section we present the main results of our paper: exact analytic formulas for the impurity momentum distribution function $n(k,Q)$ at zero temperature and an arbitrary (positive or negative) impurity-gas interaction strength $g$. These formulas contain Fredholm determinants of linear integral operators. Let $V$ be an $M\times M$ matrix with the entries $V_{jl}=V(k_j,k_l)$, $I$ be the identity matrix, and
\begin{equation}
k_j = \frac{2(j-1)}{M-1}-1, \qquad j=1,\ldots,M .
\end{equation}
Then the Fredholm determinant is
\begin{equation} \label{fd}
\det(\hat I+ \hat V) = \lim\limits_{M\to\infty} \det\left(I+\frac{2}{M-1}V\right).
\end{equation}
The right hand side of Eq.~\eqref{fd} taken for a large but finite $M$ can be used to evaluate the Fredholm determinant numerically~\cite{bornemann_fredholm_10}. An equivalent definition,
\begin{equation}
\det (\hat I + \hat V) = \sum_{N=0}^\infty \frac1{N!} \int_{-1}^1 dk_1 \cdots \int_{-1}^1 dk_N
\begin{vmatrix}
V (k_1,k_1) &\dots & V(k_1,k_N) \\
\vdots & \ddots &\vdots \\
V (k_N,k_1) &\dots & V (k_N,k_N) \\
\end{vmatrix}, \label{eq:Frdet2}
\end{equation}
appears in the mathematical literature on the theory of linear integral operators (see, e.g., \cite{smirnov_book_highermathIV}, vol.~IV, p.~24). Naturally, $\hat V$ can be recognized as a linear integral operator with the kernel $V(q,q^\prime)$ on the domain $[-1,1]\times [-1,1]$. The necessary existence and convergence conditions are fulfilled for the operators encountered in our paper.
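For illustration, here is a minimal Python implementation of the discretization~\eqref{fd} (ours, not part of the derivation; it is checked on a rank-one kernel, for which the determinant is known in closed form):

```python
import numpy as np

def fredholm_det(kernel, M=400):
    # Discretization of Eq. (fd): det(I + V) ~= det(I + 2/(M-1) V(k_i, k_j))
    # on the uniform grid k_j = 2(j-1)/(M-1) - 1 covering [-1, 1]
    k = 2*np.arange(M)/(M - 1) - 1
    V = kernel(k[:, None], k[None, :])
    return np.linalg.det(np.eye(M) + (2/(M - 1))*V)

# sanity check on a rank-one kernel V(q,q') = f(q)f(q') with f(q) = e^q,
# for which det(I + V) = 1 + int_{-1}^{1} f(q)^2 dq = 1 + sinh(2)
exact = 1 + np.sinh(2.0)
approx = fredholm_det(lambda q, qp: np.exp(q)*np.exp(qp))
```

The error of this simple rule decreases as $1/M$; more sophisticated quadratures converge much faster~\cite{bornemann_fredholm_10}.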
The energy of the state $|\mathrm{min}_Q\rangle$ is a periodic function of $Q$, and $n(k,Q)$, defined by
Eq.~\eqref{mainN}, inherits this periodicity. We rewrite Eq.~\eqref{mainN2} as
\begin{equation} \label{mainN5}
n(k,Q) = \frac1{2\pi} \int_{-\infty}^\infty dy\, e^{iky}\varrho(y) = \frac1{\pi} \mathrm{Re}\left[\int_{0}^\infty dy\, e^{iky}\varrho(y)\right], \qquad L\to\infty.
\end{equation}
In what follows, we write $\varrho(y)$ explicitly for the positive values of $y$, and use the involution~\eqref{inv} to get it for the negative values.
\subsection{Impurity-gas repulsion \label{s:igrdet}}
The Fredholm determinant representation in the case of the repulsive impurity-gas interaction, $\gamma \ge 0$, reads
\begin{equation} \label{rhoTD1}
\varrho(y) = \det(\hat I+\hat{K}+\hat{W}) - \det(\hat I +\hat{K}).
\end{equation}
The identity operator is denoted by $\hat I$. The kernels of the linear integral operators $\hat K$ and $\hat W$, on the domain $[-1,1]\times [-1,1]$, are defined by
\begin{equation} \label{ke}
K(q,q^\prime) = \frac{e_+(q)e_-(q^\prime)-e_-(q)e_+(q^\prime)}{q-q^\prime},
\end{equation}
and
\begin{equation} \label{ke1}
W(q,q^\prime) =\frac{e_-(q)e_-(q^\prime)}{\pi Z},
\end{equation}
respectively. The kernel~\eqref{ke} belongs to a class of integrable kernels~\cite{korepin_book, its_diffeq_corrfunctions_90}. The functions $e_\pm$ are defined as
\begin{equation}\label{epm}
e_+(q) =\frac{1}{\pi} e^{iqy/2+i \delta(q)}, \qquad e_-(q) = e^{-iqy/2}\sin \delta(q),
\end{equation}
and
\begin{equation} \label{ZZ}
Z = \frac{\delta_+ - \delta_-}{\alpha\pi}, \qquad \delta_{\pm} = \delta(\pm 1).
\end{equation}
Here, the phase shift $\delta(q)$ is defined as
\begin{equation} \label{phase3}
\delta(k) = \frac{\pi}2 - \arctan(\Lambda - \alpha k), \qquad 0\le\delta<\pi,
\end{equation}
and the value of $\Lambda$ can be found as a function of $Q$ by using Eq.~\eqref{phase2}. The behavior of the momentum distribution function is illustrated in Fig.~\ref{NKboth}(a).
\begin{figure}
\includegraphics[width=\textwidth]{fignk.pdf}
\caption{\label{NKboth} Impurity's momentum distribution $n(k,Q)$ is shown for different values of the total momentum: $Q=0$ (black solid), $Q=0.8 k_F$ (red dashed), and $Q=k_F$ (blue dotted) lines, respectively. Note that $n(k,Q)$ is singular at $k=Q$. Sections~\ref{sec:slg} through \ref{sec:kto0} discuss the features revealed in this plot.}
\end{figure}
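The representation~\eqref{rhoTD1} is directly amenable to numerical evaluation through the discretization~\eqref{fd}. The Python sketch below (ours; the diagonal value of the kernel~\eqref{ke} is obtained from the $q^\prime\to q$ limit, and the value of $\gamma$ is illustrative) reproduces the normalization $\varrho(0)=1$ and the reality of $\varrho(y)$ at $Q=0$:

```python
import numpy as np

def rho(y, Lam, alpha, M=600):
    # Quadrature evaluation of Eq. (rhoTD1) with the kernels (ke) and (ke1),
    # using the discretization of Eq. (fd).
    k = 2*np.arange(M)/(M - 1) - 1                  # grid on [-1, 1]
    w = 2/(M - 1)
    delta = np.pi/2 - np.arctan(Lam - alpha*k)      # phase shift, Eq. (phase3)
    ep = np.exp(1j*k*y/2 + 1j*delta)/np.pi          # e_+(q), Eq. (epm)
    em = np.exp(-1j*k*y/2)*np.sin(delta)            # e_-(q), Eq. (epm)
    num = ep[:, None]*em[None, :] - em[:, None]*ep[None, :]
    dk = k[:, None] - k[None, :]
    K = np.divide(num, dk, out=np.zeros_like(num), where=(dk != 0))
    # q' -> q limit of Eq. (ke): K(q,q) = [i y e^{i delta} sin(delta) - delta'(q)]/pi
    dprime = alpha/(1 + (Lam - alpha*k)**2)
    np.fill_diagonal(K, (1j*y*np.exp(1j*delta)*np.sin(delta) - dprime)/np.pi)
    Z = (delta[-1] - delta[0])/(alpha*np.pi)        # Eq. (ZZ)
    W = np.outer(em, em)/(np.pi*Z)                  # Eq. (ke1)
    I = np.eye(M)
    return np.linalg.det(I + w*(K + W)) - np.linalg.det(I + w*K)

alpha = 2*np.pi/5.0          # gamma = 5 (illustrative)
r0 = rho(0.0, 0.0, alpha)    # Lambda = 0 corresponds to Q = 0, Eq. (phase2)
```

The check $\varrho(0)=1$ holds to the quadrature accuracy, and $\mathrm{Im}\,\varrho(y)$ vanishes at $Q=0$, in agreement with the involution~\eqref{inv}.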
\subsection{Impurity-gas attraction: gas state \label{s:iga1det}}
All formulas from section~\ref{s:igrdet} remain valid for the gas state after letting $\gamma$ be negative. The behavior of the momentum distribution function is illustrated in Fig.~\ref{NKboth}(b).
\subsection{Impurity-gas attraction: bound state \label{s:iga2det}}
The presence of the bound state qualitatively affects the Fredholm determinant representation for the function $\varrho(y)$, as compared with Eq.~\eqref{rhoTD1}:
\begin{equation} \label{rhoTD2}
\varrho(y) = e^{-i(k_++k_-)y} \left[\det(\hat I + \hat{K}_b + \hat W_b) - c\det (\hat I + \hat{K}_b)\right].
\end{equation}
Here,
\begin{equation}
c = 1 - \frac{2e^{ik_-y}(\alpha-y)}{\alpha^2 Z_b},
\end{equation}
the kernels of the linear integral operators $\hat K_b$ and $ \hat W_b$ are defined by
\begin{equation}
K_b(q,q^\prime) = K(q,q^\prime) + \frac{\alpha}\pi e_-(q)e_-(q^\prime) (e^{iqy}+e^{iq^\prime y}),
\end{equation}
and
\begin{equation}
W_b(q,q^\prime) = -\frac{e_-(q)e_-(q^\prime) f(q)f(q^\prime)}{\pi \alpha^2 Z_b},
\end{equation}
respectively. The function $f$ is defined as
\begin{equation}
f(q) = \frac{2ie^{iqy}+\alpha e^{ik_-y}(q-k_+)}{q-k_-},
\end{equation}
$k_+$ and $k_-$ are defined by Eq.~\eqref{kpm}, and
\begin{equation} \label{ZZb}
Z_b = \frac{\delta_+ -\delta_- +2\pi}{\alpha\pi}, \qquad \delta_{\pm} = \delta(\pm 1).
\end{equation}
The other functions entering Eqs.~\eqref{rhoTD2}--\eqref{ZZb} are defined in section~\ref{s:igrdet}. The typical behavior of the momentum distribution function is shown in Fig.~\ref{NKboth}(c).
\section{Limit of strong interaction, $|\gamma |\to\infty$ \label{sec:slg}}
Correlation functions of the model~\eqref{Hamiltonian general} in the $\gamma\to\infty$ limit have been represented as Fredholm determinants in Refs.~\cite{berkovich_fermi_87, izergin_impenetrable_fermions_98}. Using the Fredholm determinant representation, we demonstrate that the one-body density matrix $\varrho(y)$ in the $\gamma\to\infty$ limit can be written as a correlation function of the one-dimensional impenetrable anyons. Such a correspondence remains valid for the gas state in the $\gamma\to-\infty$ limit.
\subsection{Impurity-gas repulsion \label{s:igrslg}}
We begin by discussing the $\gamma\to\infty$ limit in the case of the impurity-gas repulsion. The kernels~\eqref{ke} and~\eqref{ke1} simplify significantly compared with the case of an arbitrary $\gamma$. Using that
\begin{equation}
\delta(q) = \frac\pi2 - \arctan\Lambda + \frac{q\alpha}{1+\Lambda^2}+\cdots, \qquad \alpha\to 0
\end{equation}
we have in the leading order in $\alpha$
\begin{equation}
Z= \frac{2}{\pi(1+\Lambda^2)}, \qquad \sin\delta(q)= \frac1{\sqrt{1+\Lambda^2}}, \qquad e^{2i\delta(q)} = \frac{i\Lambda-1}{i\Lambda+1}, \qquad \alpha\to 0.
\end{equation}
This gives us
\begin{equation} \label{keinf}
K(q,q^\prime) = \frac{\lambda}{\pi} \frac{\sin[(q-q^\prime)y/2]}{q-q^\prime},
\end{equation}
with
\begin{equation} \label{llambda}
\lambda = \frac{2i}{\Lambda-i}
\end{equation}
for the kernel~\eqref{ke}, and
\begin{equation} \label{ke1inf}
W(q,q^\prime) = \frac{e^{-iy (q+q^\prime)/2}}{2},
\end{equation}
for the kernel~\eqref{ke1}. The $\gamma\to\infty$ limit of Eq.~\eqref{phase2} reads
\begin{equation}
Q = \frac{2 \arctan(\Lambda)}{\pi}.
\end{equation}
Substituting this formula into Eq.~\eqref{llambda} we get
\begin{equation}
\lambda = -1 - e^{-i\pi Q}. \label{leQ}
\end{equation}
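A quick numerical consistency check (ours) of Eq.~\eqref{leQ}: substituting $\Lambda=\tan(\pi Q/2)$, the inverse of $Q=(2/\pi)\arctan\Lambda$, into Eq.~\eqref{llambda} indeed reproduces $\lambda=-1-e^{-i\pi Q}$:

```python
import numpy as np

Q = np.linspace(-0.95, 0.95, 39)
Lam = np.tan(np.pi*Q/2)             # inverse of Q = (2/pi) arctan(Lambda)
lam = 2j/(Lam - 1j)                 # Eq. (llambda)
target = -1 - np.exp(-1j*np.pi*Q)   # Eq. (leQ)
```

For example, $Q=0$ gives $\Lambda=0$ and $\lambda=-2$, while $Q=1/2$ gives $\Lambda=1$ and $\lambda=-1+i$.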
Let us now show how $\varrho(y)$ emerges in the model of one-dimensional impenetrable anyons~\cite{patu_LenardI_08}. Recall that the
anyon field operators satisfy the commutation relations
\begin{equation}
\psi_A(x_1)\psi_A^\dagger(x_2) = e^{-i\pi\kappa\,\mathrm{sgn}(x_1-x_2)}\psi_A^\dagger(x_2)\psi_A(x_1)+\delta(x_1-x_2),
\end{equation}
and
\begin{equation}
\psi^\dagger_A(x_1)\psi_A^\dagger(x_2) = e^{i\pi\kappa\,\mathrm{sgn}(x_1-x_2)}\psi_A^\dagger(x_2)\psi^\dagger_A(x_1).
\end{equation}
Here, $\mathrm{sgn}(x)=|x|/x$, $\mathrm{sgn}(0)=0$, and $\kappa$ is the statistics parameter. The correlation function $\langle \psi_A^\dagger(y) \psi_A(0)\rangle$ has the Fredholm determinant representation, given by Eq.~(4) from Ref.~\cite{patu_LenardI_08}. The transformation explained in Ref.~\cite{korepin_book} (see the discussion of the equivalence between Eqs.~(3.12) and~(3.13) in Ch.~XIII therein) leads us to the equality
\begin{equation} \label{rhoanyon}
\langle \psi_A^\dagger(y) \psi_A(0)\rangle = \varrho(y),
\end{equation}
where $\lambda$ entering the kernel~\eqref{keinf} is related to the statistical parameter $\kappa$ as follows:
\begin{equation}
\lambda = -1-e^{i\pi\kappa}. \label{lekappa}
\end{equation}
Comparing Eqs.~\eqref{leQ} and~\eqref{lekappa} we get
\begin{equation} \label{kappaQ}
\kappa = -Q
\end{equation}
for $\kappa$ and $Q$ in the interval between minus one and one. The left hand side of Eq.~\eqref{rhoanyon} has also been extensively evaluated numerically~\cite{santachiara_anyon_08, patu_anyon_momentum_15}. However, no connection between the mobile impurity and anyon correlation functions, as suggested by Eqs.~\eqref{rhoanyon} and~\eqref{kappaQ}, has been given in the literature. Furthermore, the Jordan-Wigner transformation
\begin{equation}
\psi_A(x) = e^{-i\pi(1+\kappa)N(x)} \psi_F(x), \qquad N(x) = \int_{-\infty}^x dx^\prime\, \psi^\dagger_F(x^\prime)\psi_F(x^\prime)
\end{equation}
connects the anyon field operators and the fermion operators. Therefore, the right hand side of Eq.~\eqref{rhoanyon} is a correlation function of a free spinless Fermi gas:
\begin{equation} \label{rhoFF}
\varrho(y) = \langle \mathrm{FS}| \psi_F^\dagger(y) e^{i\pi(\kappa+1)N(y)} e^{-i\pi(\kappa+1)N(0)} \psi_F(0) |\mathrm{FS}\rangle,
\end{equation}
where $|\mathrm{FS}\rangle$ stands for the Fermi sea. Since Eq.~(2.19) from Ch.~XIII in Ref.~\cite{korepin_book} gives
\begin{equation}
\det(\hat I+\hat K) = \langle\mathrm{FS}|e^{i\pi(\kappa+1) N(y)} e^{-i\pi(\kappa+1) N(0)}|\mathrm{FS}\rangle, \label{string}
\end{equation}
it is $\psi_F^\dagger$ and $\psi_F$ that lead to the emergence of the rank-one operator $\hat W$ in Eq.~\eqref{rhoTD1}. Note that the evaluation of the right hand side in Eqs.~\eqref{rhoFF} and \eqref{string} can be done by using Wick's theorem (for Eq.~\eqref{string} see, e.g., Ref.~\cite{zvonarev_string_09}), without any use of the coordinate representation of the wave functions of the model. Interestingly, in a recent work~\cite{yakaboylu_impurity_19} a two-dimensional impurity model has been linked to anyons, albeit in a different manner: there the statistical parameter is related to the impurity-phonon coupling.
\subsection{Impurity-gas attraction: gas state \label{s:iga1slg}}
We now turn to the case of the gas state for the impurity-gas attraction, introduced in section~\ref{s:igag}. The $\gamma\to -\infty$ limit of the Fredholm determinant representation introduced in section~\ref{s:iga1det} leads to the same formulas as the $\gamma\to\infty$ limit, discussed in section~\ref{s:igrslg}.
\subsection{Impurity-gas attraction: bound state \label{s:iga2slg}}
Finally, we consider the bound state for the impurity-gas attraction. We take the $\gamma\to -\infty$ limit in the formulas of section~\ref{s:iga2det} and get in the leading order
\begin{equation}
Z_b = \frac{2}\alpha, \qquad K_b(q,q^\prime) = K(q,q^\prime), \qquad W_b(q,q^\prime)=0, \qquad \gamma\to-\infty.
\end{equation}
Furthermore, it follows from Eq.~\eqref{qqb2} that
\begin{equation}
\Lambda = \frac1{2}\alpha Q, \qquad -1\le Q \le 1, \qquad \gamma\to -\infty.
\end{equation}
Therefore, we write the following asymptotic expression:
\begin{equation}
\varrho(y)= e^{y/\alpha} \left(1-\frac{y}\alpha\right), \qquad \gamma\to -\infty.
\end{equation}
Substituting this into Eq.~\eqref{mainN5} we get
\begin{equation} \label{nas0}
n(k,Q)=\frac2\pi \frac{\alpha}{(1+\alpha^2 k^2)^2}, \qquad \gamma\to -\infty.
\end{equation}
The $\gamma\to-\infty$ expansion~\eqref{nas0} is not a uniform estimate of the exact result for $n(k,Q)$, since it misses the divergence at $k=Q$, discussed in detail in section~\ref{sec:kto0}. Still, it conveys an important message: the impurity momentum distribution becomes completely flat, and infinitely broad, in the $\gamma\to -\infty$ limit.
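As a consistency check, Eq.~\eqref{nas0} is reproduced by a direct numerical Fourier transform of the asymptotic density matrix. The sketch below assumes the thermodynamic-limit convention $n(k)=\frac1\pi\int_0^\infty dy\,\mathrm{Re}[e^{iky}\varrho(y)]$ (cf. Eq.~\eqref{npp}) and takes the decaying branch $\varrho(y)=e^{-y/\alpha}(1+y/\alpha)$, $y\ge0$, with $\alpha>0$; both conventions are assumptions made for illustration.

```python
import math

ALPHA = 1.0  # illustrative value of alpha > 0

def rho(y):
    # gamma -> -infty asymptote of varrho, decaying branch for y >= 0 (assumed)
    return math.exp(-y / ALPHA) * (1.0 + y / ALPHA)

def n_numeric(k, ymax=60.0, steps=60000):
    # n(k) = (1/pi) Int_0^infty cos(k y) varrho(y) dy, composite trapezoid rule
    h = ymax / steps
    s = 0.5 * (rho(0.0) + math.cos(k * ymax) * rho(ymax))
    for i in range(1, steps):
        s += math.cos(k * i * h) * rho(i * h)
    return s * h / math.pi

def n_exact(k):
    # Eq. (nas0): n(k,Q) = (2/pi) alpha / (1 + alpha^2 k^2)^2
    return (2.0 / math.pi) * ALPHA / (1.0 + (ALPHA * k) ** 2) ** 2
```

The numerical transform agrees with Eq.~\eqref{nas0} to the accuracy of the quadrature; the normalization $\int dk\, n(k,Q)=1$ of the Lorentzian-squared profile then follows analytically.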
\section{Total momentum $Q=1+ 2 \times\mathrm{integer}$ \label{sec:tm1}}
The case
\begin{equation} \label{Qkf}
Q=1 + 2 \times\textrm{integer}
\end{equation}
is special (recall that $k_F=1$ everywhere but in the captions to the figures). One finds that $n(k,1)$ for the repulsive ground state, section~\ref{s:igrdet}, and the attractive gas state, section~\ref{s:iga1det}, coincide with the momentum distribution of a free Fermi gas. It follows from Eq.~\eqref{phase2} that $\Lambda=\infty$ at $Q=1$. We have in the leading order in $\Lambda$
\begin{equation}
\delta(k)= \frac1\Lambda, \qquad Z= \frac2{\pi\Lambda^2}, \qquad \Lambda\to\infty
\end{equation}
therefore
\begin{equation}
K(q,q^\prime) = 0, \qquad W(q,q^\prime)= \frac12 e^{-i(q+q^\prime)y/2}
\end{equation}
and Eq.~\eqref{rhoTD1} takes the form
\begin{equation}
\varrho(y) = \frac{\sin (y)}{y}, \qquad \Lambda\to\infty.
\end{equation}
Plugging this function into Eq.~\eqref{mainN5} we indeed get the momentum distribution of a free Fermi gas.
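This statement can be checked numerically from the Fourier relation between $n(k,Q)$ and $\varrho(y)$: inserting the flat free-Fermi-gas distribution $n(k)=1/2$ for $|k|\le k_F=1$ into the inverse transform $\varrho(y)=\int dk\, e^{-iky} n(k)$ (a normalization convention assumed here, consistent with Eq.~\eqref{npp}) recovers $\varrho(y)=\sin(y)/y$. A minimal sketch:

```python
import math

def rho_from_flat_n(y, steps=20000):
    # varrho(y) = Int_{-1}^{1} dk e^{-iky} (1/2) = Int_0^1 dk cos(k y), trapezoid rule
    h = 1.0 / steps
    s = 0.5 * (1.0 + math.cos(y))
    for i in range(1, steps):
        s += math.cos(i * h * y)
    return s * h

def rho_exact(y):
    # varrho(y) = sin(y)/y at Q = k_F, cf. the equation above
    return math.sin(y) / y if y != 0.0 else 1.0
```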
This result can also be obtained without using the Fredholm determinants. For the Gaudin-Yang model, Eq.~\eqref{eq:htot}, the Hamiltonian, the spin-ladder operator
\begin{equation}
S_- = \int_0^L dx\, \psi^\dagger_\downarrow(x) \psi_\uparrow(x) = \sum_p \psi^\dagger_{p\downarrow}\psi_{p\uparrow},
\end{equation}
and the total momentum $P$ all commute with each other. Therefore, any state $|\Psi_\mathrm{FF}\rangle$ of a free Fermi gas with $N+1$ particles can be turned into an eigenstate
\begin{equation} \label{psiS-}
|\Psi_i\rangle = \frac1{\sqrt{N+1}} S_-|\Psi_\mathrm{FF}\rangle
\end{equation}
of the Hamiltonian~\eqref{Hamiltonian general}, containing $N$ host particles and one impurity, and having the same energy and momentum as $|\Psi_\mathrm{FF}\rangle$. Furthermore,
\begin{equation} \label{npi}
n_i(p,Q) \equiv \langle\Psi_i|\psi^\dagger_{p\downarrow}\psi_{p\downarrow}|\Psi_i\rangle = \frac1{N+1} \langle\Psi_\mathrm{FF}|\psi^\dagger_{p\uparrow}\psi_{p\uparrow}|\Psi_\mathrm{FF}\rangle.
\end{equation}
The state~\eqref{psiS-} is the minimum energy state $|\mathrm{min}_Q\rangle$ for $Q$ given by Eq.~\eqref{Qkf} and $|\Psi_\mathrm{FF}\rangle$ being the minimum energy state of a free Fermi gas for the same $Q$. This can be shown very straightforwardly by examining the exact eigenfunctions and spectrum of the model~\eqref{Hamiltonian general}, see, for example, section 5 of Supplementary Information in Ref.~\cite{mathy_flutter_2012}. Equation~\eqref{npi} gives the momentum distribution of a free Fermi gas immediately.
The case of the bound state for the attractive interaction, sections~\ref{s:igab} and~\ref{s:iga2det}, is different. The shape of $n(k,Q)$ at $Q$ given by Eq.~\eqref{Qkf} is qualitatively the same as at any other value of $Q$. This is because the state~\eqref{psiS-} is not the minimum energy state of the Hamiltonian at any value of $Q$. We plot $n(k,1)$ in Fig.~\ref{NKboth}(c).
\section{$n(k,Q)$ in the $k\to\infty$ limit \label{sec:nkinf}}
The large $k$ limit of $n(k,Q)$, following Eq.~\eqref{mainN5}, is determined by an expansion of $\varrho(y)$ in the vicinity of $y=0$. It turns out that $\varrho$, $\partial_y \varrho$, and $\partial^2_y \varrho$ are continuous at $y=0$. Therefore
\begin{equation}
\langle P_\mathrm{imp}\rangle \equiv \int dk\, k n(k,Q) = i \partial_y \varrho(y), \qquad y=0 \label{eq:<P>}
\end{equation}
and
\begin{equation}
\langle P_\mathrm{imp}^2\rangle \equiv \int dk\, k^2 n(k,Q) = (i\partial_y)^2 \varrho(y), \qquad y=0. \label{eq:<P2>}
\end{equation}
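These moment relations can be illustrated at the exactly solvable point $Q=k_F$ of section~\ref{sec:tm1}, where $\varrho(y)=\sin(y)/y$: the derivative formulas give $\langle P_\mathrm{imp}\rangle=0$ and $\langle P_\mathrm{imp}^2\rangle=1/3$ (in units $k_F=1$), in agreement with the direct moments of the flat distribution $n(k)=1/2$ on $|k|\le1$. A finite-difference sketch:

```python
import math

def rho(y):
    # varrho(y) = sin(y)/y at Q = k_F
    return math.sin(y) / y if y != 0.0 else 1.0

h = 1e-3
# rho'(0); <P_imp> = i * rho'(0) = 0 since rho is real and even
p_avg = (rho(h) - rho(-h)) / (2.0 * h)
# <P_imp^2> = -(d/dy)^2 varrho at y = 0
p2_avg = -(rho(h) - 2.0 * rho(0.0) + rho(-h)) / h**2
# direct second moment of n(k) = 1/2 on [-1, 1]: Int k^2/2 dk = 1/3
p2_direct = 1.0 / 3.0
# root-mean-square deviation; -> 1/sqrt(3)
sigma = math.sqrt(p2_avg - p_avg**2)
```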
The third derivative of $\varrho(y)$ has a discontinuity at $y=0$. This implies for the leading term of the large $k$ expansion
\begin{equation}
n(k,Q)= \frac1{ 2\pi}\frac1{k^4} [\partial_y^3 \varrho(y=+0)- \partial_y^3 \varrho(y=-0)], \qquad k\to\pm\infty.
\end{equation}
Taking into account the involution~\eqref{inv} we arrive at
\begin{equation}
n(k,Q)= \frac{C}{k^4}, \qquad k\to\pm\infty, \label{eq:contact1}
\end{equation}
where
\begin{equation}
C=\frac1{ \pi}\mathrm{Re}\, \partial_y^3 \varrho(y=+0). \label{eq:tan}
\end{equation}
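For the explicit $\gamma\to-\infty$ asymptote of section~\ref{s:iga2slg} this relation between the third-derivative jump and the $k^{-4}$ tail can be verified in closed form: with the decaying branch $\varrho(y)=e^{-y/\alpha}(1+y/\alpha)$, $y\ge0$ (an assumed involution-extended form, with $\alpha>0$ for illustration), Eq.~\eqref{eq:tan} gives $C=2/(\pi\alpha^3)$, which is precisely the $k^{-4}$ coefficient of Eq.~\eqref{nas0}. A numerical sketch:

```python
import math

ALPHA = 1.3  # illustrative alpha > 0

def rho(y):
    # gamma -> -infty asymptote, decaying branch for y >= 0 (assumed)
    return math.exp(-y / ALPHA) * (1.0 + y / ALPHA)

# third derivative at y = +0, one-sided finite difference
h = 1e-3
d3 = (-rho(0.0) + 3.0 * rho(h) - 3.0 * rho(2.0 * h) + rho(3.0 * h)) / h**3
c_from_jump = d3 / math.pi  # Eq. (eq:tan); rho is real here
c_exact = 2.0 / (math.pi * ALPHA**3)

# k^-4 tail coefficient read off Eq. (nas0) at large k
k = 50.0
c_from_tail = k**4 * (2.0 / math.pi) * ALPHA / (1.0 + (ALPHA * k) ** 2) ** 2
```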
Each of Eqs.~\eqref{eq:<P>}, \eqref{eq:<P2>}, and~\eqref{eq:contact1} has a lot of physics behind it. We discuss them one by one.
\subsection{Analysis of $ \langle P_\mathrm{imp} \rangle$}
For the repulsive ground state, and the attractive gas state we have
\begin{equation}
\langle P_\mathrm{imp} \rangle = \frac{\Lambda}{\alpha} + \frac{\varphi}{Z}, \qquad \gamma>0\textbf{ ground state, and } \gamma<0 \textbf{ gas state} \label{eq:<P>1}
\end{equation}
where $Z$ is defined in Eq. \eqref{ZZ} and $\varphi$ by Eq.~\eqref{eq:varphi}. Recall that $\Lambda$ and $Q$ are connected by Eq.~\eqref{phase2}. Since $\langle P_\mathrm{imp} \rangle$ in Eq.~\eqref{eq:<P>1} is an odd function of $\gamma$, it is sufficient to examine the $\gamma>0$ case. For the attractive bound state $Z$ is replaced with $Z_b$, Eq.~\eqref{ZZb}. Hence,
\begin{equation}
\langle P_\mathrm{imp} \rangle = \frac{\Lambda}{\alpha} + \frac{\varphi}{Z_b}, \qquad \gamma<0 \textbf{ bound state}, \label{eq:<P>2}
\end{equation}
where $\Lambda$ and $Q$ are connected by Eq.~\eqref{qqb2}. Using the Hellmann-Feynman theorem as explained in Ref.~\cite{knap_flutter_signatures_2014} gives the average momentum of the impurity in terms of the group velocity,
\begin{equation} \label{vg}
\langle P_\mathrm{imp}\rangle = \frac{\partial E_\mathrm{min}(Q)}{\partial Q}.
\end{equation}
This leads us to Eqs.~\eqref{eq:<P>1} and~\eqref{eq:<P>2} immediately, consistent with the predictions from the Fredholm determinant representation.
The derivation of Eqs.~\eqref{eq:<P>1} and~\eqref{eq:<P>2} from the Fredholm determinant representation of Eq.~\eqref{eq:<P>} is performed in Appendix \ref{smalldistance}. Though Eqs.~\eqref{rhoTD1} and~\eqref{rhoTD2} look rather different, Eq.~\eqref{eq:<P>2} is connected to Eq.~\eqref{eq:<P>1} by merely a replacement of $Z$ with $Z_b$. Notably, such a replacement also works for the other observables considered in section~\ref{sec:nkinf}: $\langle P_\mathrm{imp}^2\rangle$, Eq.~\eqref{eq:<P2>}, and $C$, Eq.~\eqref{eq:tan}. We show $\langle P_\mathrm{imp}\rangle$ for several values of $\gamma$ in Fig.~\ref{groundp}.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{groundp.pdf}
\caption{\label{groundp} Average momentum $\langle P_\mathrm{imp}\rangle$ of the impurity is shown as a function of the total momentum $Q$. Panel (a) is for $\gamma>0$ ground state, Eq.~\eqref{eq:<P>1}, panel (b) is for $\gamma<0$ bound state, Eq.~\eqref{eq:<P>2}. The solid lines are for $|\gamma|=1$, $3$, $6$, and $10$ (top to bottom). Remarkably, they are continuous in (a), while experiencing a discontinuity in (b) at $Q=k_F$. They are not straight in (b), but this is barely seen with the unaided eye. The dotted and dashed lines stand for $\gamma=0$ and $|\gamma|=\infty$, respectively, and are straight.}
\end{center}
\end{figure}
One can see in this figure that Eq.~\eqref{eq:<P>1} produces a continuous function of $Q$, while Eq.~\eqref{eq:<P>2} exhibits a discontinuity at $Q=1$. Whether such a difference persists for any one-dimensional gas interacting with a mobile impurity is an open question.
Though the curves in Fig.~\ref{groundp}(b) are not straight lines, the difference cannot be seen with the naked eye. It follows from Eq.~\eqref{vg} that the slope of $\langle P_\mathrm{imp}\rangle$ at $Q=0$,
\begin{equation}
\langle P_\mathrm{imp}\rangle = \frac{Q}{m_*}, \qquad Q\to 0
\end{equation}
is set by the value of the effective mass $m_*$ defined by the expansion of $E(Q)$ at $Q=0$:
\begin{equation}
E(Q)- E(0)= \frac{Q^2}{2m_*}, \qquad Q\to0.
\end{equation}
The explicit form of $E(Q)$ is discussed in section~\ref{sec:preliminaries}. The analytic formula for $m_*$ corresponding to Eq.~\eqref{eq:<P>1} is
\begin{equation} \label{eq:<P>1mass}
m_* = \frac{2}\pi \frac{(\arctan\alpha)^2}{\arctan\alpha - \alpha(1+\alpha^2)^{-1}}, \qquad \gamma>0\textbf{ ground state, and } \gamma<0 \textbf{ gas state}
\end{equation}
(note that $m_*$ in this equation is an odd function of $\gamma$), and the formula for $m_*$ corresponding to Eq.~\eqref{eq:<P>2} is
\begin{equation} \label{eq:<P>2mass}
m_* = \frac{2}\pi \frac{(\pi+\arctan\alpha)^2}{\pi+\arctan\alpha - \alpha(1+\alpha^2)^{-1}}, \qquad \gamma<0 \textbf{ bound state}.
\end{equation}
The analytic expressions~\eqref{eq:<P>1mass} and \eqref{eq:<P>2mass} for the effective mass were obtained for the first time in the works~\cite{mcguire_impurity_fermions_65} and~\cite{mcguire_impurity_fermions_66}, respectively. The $\gamma\to \infty$ limit of Eq.~\eqref{eq:<P>1mass} is $m/m_*=0$: the impurity becomes infinitely heavy. This is contrasted with the $\gamma\to -\infty$ limit of Eq.~\eqref{eq:<P>2mass}, which is $m/m_*=1/2$: the mass of the impurity bound to the gas particles remains finite. A quantitative comparison between $m_*$ for $\gamma>0$ from Eq.~\eqref{eq:<P>1mass} and $m_*$ for $\gamma<0$ from Eq.~\eqref{eq:<P>2mass} is made in Fig.~\ref{meff}.
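The quoted limits can be checked numerically from Eqs.~\eqref{eq:<P>1mass} and~\eqref{eq:<P>2mass}, using the fact that $\alpha\to0$ as $|\gamma|\to\infty$ (assumed here, consistent with the limits discussed above; the bare mass is set to one in these units). A sketch:

```python
import math

def m_star_repulsive(alpha):
    # Eq. (eq:<P>1mass): gamma > 0 ground state / gamma < 0 gas state
    a = math.atan(alpha)
    return (2.0 / math.pi) * a**2 / (a - alpha / (1.0 + alpha**2))

def m_star_bound(alpha):
    # Eq. (eq:<P>2mass): gamma < 0 bound state
    a = math.pi + math.atan(alpha)
    return (2.0 / math.pi) * a**2 / (a - alpha / (1.0 + alpha**2))
```

At small $\alpha$ the first expression diverges as $3/(\pi\alpha)$, so $m/m_*\to0$, while the second tends to $2$, so $m/m_*\to1/2$.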
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{fmass.pdf}
\caption{\label{meff} The ratio of the impurity's bare and effective masses, $m/m_*$. The solid line is for the $\gamma>0$ ground state, Eq.~\eqref{eq:<P>1mass}, and the dashed line is for the $\gamma<0$ bound state, Eq.~\eqref{eq:<P>2mass}. The former line tends to zero, and the latter tends to $1/2$, in the $|\gamma|\to\infty$ limit.}
\end{center}
\end{figure}
\subsection{Analysis of the coefficient $C$ in the large $k$ expansion $n(k,Q)= C/k^4$ \label{sec:c}}
In this section we give the explicit analytic formula for the coefficient $C$ in Eq.~\eqref{eq:tan}. For the repulsive ground state, and the attractive gas state we have
\begin{equation}
C = \frac1\pi \left(\frac{2}{\pi \alpha^2 } - \frac{Z}{\alpha^2} - \frac{\varphi^2}{Z} \right), \qquad \gamma>0\textbf{ ground state, and } \gamma<0 \textbf{ gas state}, \label{eq:c1}
\end{equation}
where $Z$ is defined in Eq. \eqref{ZZ} and $\varphi$ by Eq.~\eqref{eq:varphi}. Recall that $\Lambda$ and $Q$ are connected by Eq.~\eqref{phase2}, and note that $C$ in Eq.~\eqref{eq:c1} is an even function of $\gamma$.
For the attractive bound state $Z$ is replaced with $Z_b$, Eq.~\eqref{ZZb}. Hence,
\begin{equation}
C = \frac1\pi \left(\frac{2}{\pi \alpha^2 } - \frac{Z_b}{\alpha^2} - \frac{\varphi^2}{Z_b} \right), \qquad \gamma<0\textbf{ bound state}. \label{eq:c2}
\end{equation}
The $\gamma\to\infty$ limit of Eq.~\eqref{eq:c1} reads
\begin{equation}
C = \frac{2}{3\pi^2} \left[\cos\left(\frac{\pi Q}{2}\right) \right]^2, \qquad |\gamma|\to\infty. \label{eq:c1inf}
\end{equation}
The $\gamma\to -\infty$ limit of Eq.~\eqref{eq:c2} is divergent, consistent with the analysis of section~\ref{s:iga2slg}. We show $C$ for several values of $\gamma$ in Fig.~\ref{cnk}.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{c.pdf}
\caption{\label{cnk} The contact $C$ as a function of the total momentum $Q$. Panel (a) is for $\gamma>0$ ground state (identical to $\gamma<0$ gas state), Eq.~\eqref{eq:c1}. Panel (b) is for $\gamma<0$ bound state, Eq.~\eqref{eq:c2}. The solid lines are for $|\gamma|=1$, $3$, $6$, and $10$ (bottom to top). The lines in (b) are not straight, but this is barely seen with the unaided eye. The dashed line in (a) is $C=2[\cos(\pi Q/2k_F)]^2/3\pi^2$, given for $|\gamma|=\infty$ by Eq.~\eqref{eq:c1inf}. }
\end{center}
\end{figure}
The case $Q=0$ can be compared with the existing literature. Equations~\eqref{eq:c1} and~\eqref{eq:c2} become
\begin{equation}
C= \frac{2(\alpha-\arctan\alpha)}{\pi^2 \alpha^3}, \qquad Q=0, \qquad \gamma>0\textbf{ ground state, and } \gamma<0 \textbf{ gas state},
\end{equation}
and
\begin{equation}
C= \frac{2(-\pi+\alpha-\arctan\alpha)}{\pi^2 \alpha^3}, \qquad Q=0, \qquad \gamma<0\textbf{ bound state},
\end{equation}
respectively. One can check that
\begin{equation}
C = \frac{\gamma^2}{2\pi^2} \frac{\partial E_\mathrm{min}}{\partial\gamma}, \qquad Q=0,
\end{equation}
where $E_\mathrm{min}$ is given by Eqs.~\eqref{ep1} and~\eqref{em1}, respectively. This result is consistent with the general principles determining the coefficient $C$ (sometimes referred to as the contact), developed in the works~\cite{tan_energerics_2008, tan_momentum_2008, barth_contact_1D_2011,doggen_contact_13}. Notably, the contact in the Lieb-Liniger gas~\cite{olshanii_shortdist_LiebLiniger_03} has the value $2/(3\pi^2)$ in the Tonks-Girardeau limit. This coincides with the value given by Eq.~\eqref{eq:c1inf} at $Q=0$.
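The $Q=0$ expressions can be probed numerically. In the $|\gamma|\to\infty$ limit (where $\alpha\to0$, as above) the ground-state/gas-state contact tends to the Tonks-Girardeau value $2/(3\pi^2)$, in agreement with Eq.~\eqref{eq:c1inf}, while the bound-state contact diverges. In the sketch below $\alpha$ is taken negative on the attractive bound-state branch, an illustrative assumption that makes $C$ positive there:

```python
import math

def c_repulsive_q0(alpha):
    # Q = 0 contact: gamma > 0 ground state / gamma < 0 gas state
    return 2.0 * (alpha - math.atan(alpha)) / (math.pi**2 * alpha**3)

def c_bound_q0(alpha):
    # Q = 0 contact: gamma < 0 bound state (alpha < 0 assumed on this branch)
    return 2.0 * (-math.pi + alpha - math.atan(alpha)) / (math.pi**2 * alpha**3)

# Tonks-Girardeau value, Eq. (eq:c1inf) at Q = 0
tg = 2.0 / (3.0 * math.pi**2)
```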
To what extent $C$ can be extracted numerically from the large momentum behavior of $n(k,Q)$ is illustrated in Fig.~\ref{tanC}. We evaluated $n(k,Q)$ from the Fredholm determinant representation presented in section~\ref{sec:Fdr}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{ctest.pdf}
\caption{\label{tanC} Shown is the convergence of $n(k,Q)$ to $Ck^{-4}$ in the large $k$ limit. The plots are from the numeric evaluation of the Fredholm determinant representation for $n(k,Q)$ given in section~\ref{sec:Fdr}, divided by the value of $C$ found analytically in section~\ref{sec:c}. The solid black lines are for $Q=0$. The dotted red, and dashed blue lines are for $Q=0.8k_F$: the former is for $k>0$, and the latter is for $k<0$. Note that the Fredholm determinants are numerics-friendly, but $n(k,Q)$ decays very fast with increasing $|k|$, and this makes the numerical evaluation of $C$ a challenge.}
\end{figure}
\subsection{Analysis of $ \langle P_\mathrm{imp}^2 \rangle$}
The average of $P_\mathrm{imp}^2$, Eq.~\eqref{eq:<P2>}, is expressed through $\langle P_\mathrm{imp}\rangle$ and $C$:
\begin{equation}
\sigma = \sqrt{\pi C (Z^{-1}-\alpha)} \label{eq:Psigmaresult}
\end{equation}
where, by definition,
\begin{equation}
\sigma = \sqrt{\langle P^2_\mathrm{imp} \rangle -\langle P_\mathrm{imp} \rangle^2}
\end{equation}
is a root-mean-square deviation. Equation~\eqref{eq:Psigmaresult} is valid for the repulsive ground state and attractive gas state. The result for the attractive bound state is obtained by replacing $Z$ with $Z_b$. Exemplary plots of $\sigma$ are shown in Fig.~\ref{fig:sigma}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{sigma.pdf}
\caption{\label{fig:sigma} The root-mean-square deviation $\sigma$ as a function of the total momentum $Q$. Panel (a) is for $\gamma>0$ ground state (identical to $\gamma<0$ gas state), Eq.~\eqref{eq:Psigmaresult}. Panel (b) is for $\gamma<0$ bound state. The solid lines are for $|\gamma|=1$, $3$, $6$, and $10$ (bottom to top). The lines in (b) are not straight, but this is barely seen with the unaided eye. The horizontal dashed line at $\sigma/k_F=1/\sqrt{3}$ in (a) is for $|\gamma|=\infty$. }
\end{center}
\end{figure}
\section{$n(k,Q)$ in the $k\to Q$ limit \label{sec:kto0}}
In this section we present the $y\to\infty$ expansion of $\varrho(y)$. We use it to prove the existence of the power-law singularity
\begin{equation} \label{dm}
n(k,Q) \sim \frac1{(k-Q)^\nu}, \qquad k\to Q,
\end{equation}
seen in Fig.~\ref{NKboth}, as well as to calculate the exponent $\nu$, and the numerical prefactor. So far, $\nu$ has only been found at $Q=0$ and $\gamma\to +0$ in Ref.~\cite{frenkel_impurity_momentum_distribution_92}; this result follows from our formulas as a particular case.
\subsection{Large $y$ expansion of $\varrho(y)$ in the case of impurity-gas repulsion \label{sec:yrhor}}
The density matrix and the momentum distribution are related by Eq.~\eqref{mainN5}. Both are $2k_F$-periodic in $Q$ (recall that $k_F=1$ everywhere but in the captions to the figures). This property together with Eq.~\eqref{eq:nqinv} makes it sufficient to examine $\varrho$ for $0\le Q \le1$ only. The large $y$ expansion of the determinant representation~\eqref{rhoTD1} can be obtained by a finite-size analysis of the form-factors followed by a resummation of the soft modes, along the lines of the works~\cite{shashi_prefactors_11, kitanine_formfactor_11, kozlowski_RH_sine_kernel_11, kitanine_formfactor_12, imambekov_review_12,kozlowski_microscopic_15}. We leave the details for a separate publication. The result is
\begin{equation} \label{rhoAAA}
\varrho(y) = \frac{\mathcal{A}e^{-iQy}}{(2iy)^{F_-^2}(-2iy)^{(1-F_+)^2}} + \frac{\tilde{\mathcal{A}}e^{-i(Q-2)y}}{(2iy)^{\tilde F_-^2}(-2iy)^{(1- \tilde F_+)^2}} +\cdots, \qquad y\to \infty.
\end{equation}
The numerical prefactor
\begin{equation} \label{constA}
\mathcal{A} =(2\pi)^{F_- -F_+ +1} e^{-\Delta}Z^{-1}G^2(F_+)G^2(1-F_-)
\end{equation}
depends on $\gamma$ and $Q$ through the phase shift~\eqref{phase3}:
\begin{equation}
F(k)= \frac{\delta(k)}\pi, \qquad F_\pm = F(\pm1).
\end{equation}
Here,
\begin{equation} \label{eq:Delta}
\Delta = \frac{1}{2} \int\limits_{-1}^1 dq \int\limits_{-1}^1 dq^\prime \left[\frac{F(q)-F(q^\prime)}{q-q^\prime}\right]^2 +\int\limits_{-1}^1 dq \frac{F_-^2-F^2(q)}{-1-q}
-\int\limits_{-1}^1 dq \frac{(1-F_+)^2 -[1-F(q)]^2}{1-q},
\end{equation}
the coefficient $Z$ is given by Eq.~\eqref{ZZ}:
\begin{equation} \label{eq:ZF}
Z= \frac{F_+ - F_-}{\alpha},
\end{equation}
and $G$ stands for the Barnes $G$-function, defined by the functional equation
\begin{equation}
G(z+1)= \Gamma(z)G(z),
\end{equation}
with the normalization $G(1)=1$, where $\Gamma(z)$ is the Euler Gamma function. The function $\tilde F$ entering the second term on the right hand side of Eq.~\eqref{rhoAAA} is
\begin{equation}
\tilde F(k) = F(k) +1,
\end{equation}
and $\tilde{\mathcal{A}}$ follows from $\mathcal{A}$ by replacing $F$ with $\tilde F$ in Eqs.~\eqref{constA}, \eqref{eq:Delta} and~\eqref{eq:ZF}. The second term on the right hand side of Eq.~\eqref{rhoAAA} is, generally, subleading -- it decays faster than the first one:
\begin{equation}
\tilde F_-^2 + (1- \tilde F_+)^2 \ge F_-^2 + (1- F_+)^2.
\end{equation}
However, the inequality turns into an equality at $Q=1$, that is, the subleading term becomes of the same order as the leading one, and their sum in Eq.~\eqref{rhoAAA} reproduces the exact formula
\begin{equation} \label{eq:rQ1}
\varrho(y)= \frac{\sin y}y , \qquad Q=1.
\end{equation}
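Evaluating prefactors such as~\eqref{constA} numerically requires the Barnes $G$-function. For positive integer arguments it is fixed entirely by the functional equation and normalization given above, $G(n)=\prod_{j=1}^{n-2}j!$; a minimal sketch (non-integer arguments require, e.g., an integral or product representation, not shown):

```python
import math

def barnes_g(n):
    # Barnes G at a positive integer n, built from G(1) = 1 and G(z+1) = Gamma(z) G(z)
    g = 1.0
    for z in range(1, n):
        g *= math.gamma(z)  # Gamma(z) = (z-1)!
    return g
```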
We show $\varrho(y)$ evaluated from the exact expression~\eqref{rhoTD1}, and the convergence of the asymptotic formula~\eqref{rhoAAA} to this exact expression in the panels (a) and (d) of Fig.~\ref{largeY}, respectively. We would like to emphasize that the decay rates of the leading and the first subleading terms in Eq.~\eqref{rhoAAA} are close to each other when $Q$ is close to one.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{opdm.pdf}
\caption{One-particle density matrix $\varrho(y)$ at $Q=0$ (black solid lines) and $Q= 0.8 k_F$ (red dotted lines). Top panels: absolute value of $\varrho(y)$ from the exact formula. Bottom panels: the absolute value of the ratio of $\varrho(y)$ from the exact and large-$y$-asymptotic formulas.}
\label{largeY}
\end{figure}
\subsection{Large $y$ expansion of $\varrho(y)$ in the case of impurity-gas attraction: gas state}
All formulas from section~\ref{sec:yrhor} remain valid for the gas state after letting $\gamma$ be negative. We show $\varrho(y)$ evaluated from the exact expression~\eqref{rhoTD1}, and the convergence of the asymptotic formula~\eqref{rhoAAA} to this exact expression in the panels (b) and (e) of Fig.~\ref{largeY}, respectively.
\subsection{Large $y$ expansion of $\varrho(y)$ in the case of impurity-gas attraction: bound state}
In the case of the attractive bound state, the explicit expression for $\varrho(y)$ is given by Eq.~\eqref{rhoTD2}, and the leading term in the $y\to\infty$ expansion reads
\begin{equation} \label{rhoAAb}
\varrho(y) = \frac{\mathcal{A}_b e^{-iQy}}{(2iy)^{(1-F_-)^2}(-2iy)^{F_+^2}}, \qquad y\to\infty,
\end{equation}
where
\begin{equation}\label{constB}
\mathcal{A}_b = \frac{8(2\pi)^{F_- -F_+}}{\pi |Z_b|} \frac{G^2(1+F_+)G^2(2-F_-)}{[1+(\alpha+\Lambda)^2]^2} e^{-\Delta_b}
\end{equation}
with
\begin{multline}
\Delta_b = \frac{1}{2} \int\limits_{-1}^1 dq \int\limits_{-1}^1 dq^\prime \left[\frac{F(q)-F(q^\prime)}{q-q^\prime}\right]^2 +\int\limits_{-1}^1 dq \frac{(1-F_-)^2-[1-F(q)]^2}{-1-q}\\
-\int\limits_{-1}^1 dq \frac{F_+^2 -F(q)^2}{1-q}- 4\alpha\int\limits_{-1}^1 dq \frac{F(q)(\Lambda -\alpha q)}{1+(\Lambda -\alpha q)^2},
\end{multline}
and $Z_b$ is given by Eq.~\eqref{ZZb}. The prefactors $\mathcal{A}$ and $\tilde{\mathcal{A}}$, Eq.~\eqref{constA}, depend on $\gamma$ and $Q$ through the phase shift only. By contrast, the prefactor $\mathcal{A}_b$, Eq.~\eqref{constB}, depends on $\gamma$ and $Q$ explicitly.
We show $\varrho(y)$ evaluated from the exact expression~\eqref{rhoTD2}, and the convergence of the asymptotic formula~\eqref{rhoAAb} to this exact expression in the panels (c) and (f) of Fig.~\ref{largeY}, respectively.
\subsection{The exponent $\nu$ and the prefactor in Eq.~\eqref{dm} for $n(k,Q)$}
The singular part of the momentum distribution, Eq.~\eqref{dm}, is fully characterized by the asymptotic expressions for $\varrho(y)$. Equation~\eqref{rhoAAA} leads to the exponent
\begin{equation} \label{nu}
\nu = 1 - F_-^2 - (1- F_+)^2, \qquad \gamma>0\textbf{ ground state, and } \gamma<0 \textbf{ gas state}
\end{equation}
and Eq.~\eqref{rhoAAb} leads to
\begin{equation} \label{nub}
\nu = 1 - (1-F_-)^2 - F_+^2, \qquad \gamma<0\textbf{ bound state}.
\end{equation}
Both Eqs.~\eqref{nu} and \eqref{nub} tend to the same value in the $|\gamma|\to\infty$ limit,
\begin{equation}
\nu = \frac{1- Q^2}2, \qquad |\gamma|\to\infty,
\end{equation}
which coincides with the result from Ref.~\cite{recher_TGtoFF_det_PainleveV}. This limiting value is indicated with the thin dotted line in Fig.~\ref{nufig}. One can also see that $\nu=0$ when $Q$ reaches the Fermi momentum for the $\gamma>0$ ground state and the $\gamma<0$ gas state. Recall that $n(k,Q)$ turns into the Fermi function at $Q=1$, as illustrated in the panels (a) and (b) of Fig.~\ref{NKboth} and discussed in section~\ref{sec:tm1}. The case of the $\gamma<0$ bound state is different: there $\nu$ is a non-trivial function of $\gamma$ at $Q=1$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{nu.pdf}
\caption{Exponent $\nu$ for the singularity $n(k,Q)\sim (k-Q)^{-\nu}$ in the $k\to Q$ limit, shown as a function of $\gamma$. The solid line is for the $\gamma>0$ ground state and the $\gamma<0$ bound state, the dashed red line is for the $\gamma<0$ gas state. The thin dotted line indicates the $|\gamma|\to \infty$ limit of $\nu$.} \label{nufig}
\end{figure}
Letting $Q=0$ and $\gamma\to+0$ in Eq.~\eqref{nu} we get
\begin{equation}
\nu = 1 - \frac{\gamma^2}{2\pi^4}+\cdots, \qquad Q=0, \qquad\gamma\to +0.
\end{equation}
This gives the same dependence on $\gamma$ as in Ref.~\cite{frenkel_impurity_momentum_distribution_92}.
\section{Determinant representation for finite $N$ \label{sec:Bd}}
In this section we present the impurity momentum distribution function $n(k,Q)$ for a finite particle number $N$ through determinants of finite-dimensional matrices. This result is crucial for deriving the Fredholm determinant representation of section~\ref{sec:Fdr}. Recall that we stick to the notations of the paper~\cite{gamayun_impurity_Green_FTG_16}, whenever possible.
Our starting point is Eq.~\eqref{npff}. We write the form-factor as given by Eq.~(5.23) from Ref.~\cite{gamayun_impurity_Green_FTG_16}:
\begin{equation} \label{overlap}
|\langle N|\psi_{k\downarrow}|\mathrm{min}_Q\rangle|^2 = \left(\frac{2}{L} \right)^{N}|\det D|^2 \left|\sum_{j=1}^{N+1} \frac{\partial k_j}{\partial\Lambda}\right|^{-1} \left|\prod_{j=1}^{N+1} \frac{\partial k_j}{\partial\Lambda}\right|.
\end{equation}
Here, $\partial k_j/\partial\Lambda$ is defined by Eq.~\eqref{kjLambda}, and
\begin{equation} \label{detd}
\det D=\begin{vmatrix}
\dfrac{1}{k_1-p_1} &\dots & \dfrac{1}{k_{N+1}-p_1}\\
\vdots & \ddots & \vdots \\
\dfrac{1}{k_1-p_N} &\dots & \dfrac{1}{k_{N+1}-p_N}\\
1 & \dots & 1\\
\end{vmatrix}
\end{equation}
for the determinant of the $(N+1)\times(N+1)$ matrix. The momentum $Q$ of the state $|\mathrm{min}_Q\rangle$ is the sum of the quasi-momenta $k_1,\ldots, k_{N+1}$, Eq.~\eqref{QQ}. How these quasi-momenta are specified is discussed in sections~\ref{sec:igr} through~\ref{s:igab}. The momentum of the state $|N\rangle$ is the sum of $p_1,\ldots, p_N$. Combining Eqs.~\eqref{kpQ} and~\eqref{QQ} implies the constraint
\begin{equation} \label{ks}
k+\sum\limits_{j=1}^N p_j=\sum\limits_{j=1}^{N+1} k_j
\end{equation}
for the sum over $p_1,\ldots, p_N$ in Eq.~\eqref{npff}.
We transform Eq.~\eqref{npff} by replacing the constraint~\eqref{ks} with the Kronecker delta:
\begin{equation} \label{sq2}
n(k,Q)= \frac1{N!}\sum_{p_1}\cdots\sum_{p_N} \delta_{k+\sum_{j=1}^N p_j,\sum_{j=1}^{N+1} k_j} |\langle N|\psi_{k\downarrow}|\mathrm{min}_Q\rangle|^2.
\end{equation}
The summations over $p_1,\ldots,p_N$ on the right hand side of Eq.~\eqref{sq2} run independently from each other. One can see from Eqs.~\eqref{overlap} and~\eqref{detd} that $\langle N|\psi_{k\downarrow}|\mathrm{min}_Q\rangle=0$ if $p_j=p_l$ at $j\ne l$. The factor $1/N!$ compensates for counting the form-factor multiple times under the permutations of $p_1,\ldots,p_N$. Equations~\eqref{mainN2} and~\eqref{inv}, and the representation
\begin{equation} \label{kdr}
\delta_{k+\sum_{j=1}^N p_j,\sum_{j=1}^{N+1} k_j} = \frac{1}{L} \int\limits_{-L/2}^{L/2} dy\, \exp\left[iy\left(k+\sum_{j=1}^N p_j-\sum_{j=1}^{N+1} k_j\right)\right]
\end{equation}
imply for Eq.~\eqref{sq2}
\begin{equation} \label{npp}
n(k,Q) = \frac{1}{L} \int\limits_{-L/2}^{L/2} dy\, e^{iky} \varrho(y) = \frac{2}{L}\int\limits_{0}^{L/2} dy \, \mathrm{Re} [e^{iky} \varrho(y)],
\end{equation}
where
\begin{equation} \label{rhoNs}
\varrho(y)=\frac1{N!}\sum_{p_1}\cdots\sum_{p_N} e^{iy \left(\sum_{j=1}^N p_j -\sum_{j=1}^{N+1}k_j \right)} |\langle N|\psi_{k\downarrow}|\mathrm{min}_Q\rangle|^2.
\end{equation}
The terms on the right hand side of Eq.~\eqref{rhoNs} are determined by Eq.~\eqref{overlap}, and $p_1,\ldots,p_N$ are quantized as given by Eq.~\eqref{qq}.
We now take the sum over $p_1,\ldots,p_N$ in Eq.~\eqref{rhoNs}. Let us consider the function
\begin{equation} \label{sid}
S = \frac1{N!} \sum_{p_1}\cdots \sum_{p_N} (\det D)^2 \prod_{j=1}^N f(p_j),
\end{equation}
where $\det D$ is defined by Eq.~\eqref{detd}, $f$ is an arbitrary function, and the $p_j$ are quantized as given by Eq.~\eqref{qq}. After some elementary transformations (used, for example, to get the identities in appendix B.3 from Ref.~\cite{gamayun_impurity_Green_FTG_16}) we arrive at the following representation for Eq.~\eqref{sid}:
\begin{equation} \label{sid2}
S= \sum_{m=1}^{N+1} \det[\alpha(m)_{jl}].
\end{equation}
Here,
\begin{equation}
\alpha(m)_{jl} = \left\{\begin{array}{ll} \displaystyle\sum\limits_{p} \dfrac{f(p)}{(k_j-p)(k_l-p)} & 1\le j\ne m\le N+1 \\&\\
1 & j=m
\end{array}\right.,
\end{equation}
and $p=2\pi n/L$, $n=0,\pm1,\pm2,\ldots$.
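The identity~\eqref{sid2} can be verified directly for small $N$. Both sides are linear in each value $f(p)$, so the identity must already hold when the sum over the $p$-lattice is truncated to an arbitrary finite set of points. A numerical sketch for $N=2$ (the values of $k_j$, the $p$-set, and $f$ are arbitrary test choices):

```python
import math
from itertools import product

K = [0.1, 0.7, 1.3]              # quasi-momenta k_1, k_2, k_3 (N = 2)
P = [-2.5, -0.9, 0.4, 1.9, 3.1]  # finite stand-in for the p-lattice
F = {p: math.exp(-p * p / 3.0) + 0.2 for p in P}  # arbitrary positive f

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# left hand side, Eq. (sid): (1/N!) sum_{p_1, p_2} (det D)^2 f(p_1) f(p_2)
lhs = 0.0
for p1, p2 in product(P, repeat=2):
    D = [[1.0 / (k - p1) for k in K],
         [1.0 / (k - p2) for k in K],
         [1.0, 1.0, 1.0]]
    lhs += det3(D) ** 2 * F[p1] * F[p2]
lhs /= math.factorial(2)

# right hand side, Eq. (sid2): sum over m of det[alpha(m)], row m replaced by ones
def s(j, l):
    return sum(F[p] / ((K[j] - p) * (K[l] - p)) for p in P)

rhs = sum(det3([[1.0] * 3 if j == m else [s(j, l) for l in range(3)]
                for j in range(3)]) for m in range(3))
```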
For $\gamma>0$ repulsive ground state and $\gamma<0$ attractive gas state the quasi-momenta $k_1,\ldots,k_{N+1}$ are real. This implies
\begin{equation}
|\det D|^2 = (\det D)^2.
\end{equation}
Furthermore, one can show that
\begin{equation}
\frac{\partial k_j}{\partial\Lambda}>0, \qquad -\infty<\Lambda<\infty, \qquad j=1,\ldots, N+1
\end{equation}
for any real-valued $k_j$ (see, for example, section 5.2 from Ref.~\cite{gamayun_impurity_Green_FTG_16}). We can therefore use the identity~\eqref{sid2} for the function~\eqref{rhoNs}, and get
\begin{equation} \label{rrnn}
\varrho(y) = \partial_\xi \det(A+\xi B) |_{\xi=0},
\end{equation}
where
\begin{equation} \label{Aij}
A_{jl} = \frac{2}{L}\sum_n \frac{e^{2\pi iyn/L}}{(k_j-2\pi n/L)(k_l-2\pi n/L)} e^{-iy(k_j+k_l)/2} \left|\frac{\partial k_j}{\partial \Lambda}\right|^{1/2}
\left|\frac{\partial k_l}{\partial \Lambda}\right|^{1/2}
\end{equation}
and
\begin{equation} \label{deltaAij}
B_{jl} = \left( \sum\limits_{m=1}^{N+1}\frac{\partial k_m}{\partial \Lambda} \right)^{-1} e^{-iy(k_j+k_l)/2}
\left|\frac{\partial k_j}{\partial \Lambda}\right|^{1/2} \left|\frac{\partial k_l}{\partial \Lambda}\right|^{1/2}.
\end{equation}
The matrix $B$ has rank one, and we can write Eq.~\eqref{rrnn} as
\begin{equation} \label{AijB}
\varrho(y) =\det(A+B) - \det A.
\end{equation}
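The step from Eq.~\eqref{rrnn} to Eq.~\eqref{AijB} relies on $B$ having rank one: by multilinearity of the determinant, all terms containing two or more rows of $B$ vanish, so $\det(A+\xi B)$ is an affine function of $\xi$, and the derivative at $\xi=0$ equals the finite difference $\det(A+B)-\det A$. A quick numerical sketch with arbitrary $3\times3$ test matrices:

```python
# arbitrary test data: A is a generic 3x3 matrix, B = u v^T has rank one
A = [[2.0, 0.3, 0.1], [0.2, 2.0, 0.4], [0.1, 0.2, 2.0]]
u = [1.0, 2.0, 3.0]
v = [0.5, -1.0, 2.0]
B = [[u[i] * v[j] for j in range(3)] for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def det_shift(xi):
    # det(A + xi B); affine in xi because B has rank one
    return det3([[A[i][j] + xi * B[i][j] for j in range(3)] for i in range(3)])
```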
We now turn to the $\gamma<0$ bound state. Here, $k_1,\ldots,k_{N-1}$ are real, and $k_N=k_{N+1}^*$ are complex. This implies
\begin{equation}
|\det D|^2 = -(\det D)^2.
\end{equation}
It follows from Eq.~\eqref{QQ} that
\begin{equation} \label{sumdk}
\sum\limits_{j=1}^{N+1}\frac{\partial k_j}{\partial \Lambda} = \frac{\partial Q}{\partial \Lambda}.
\end{equation}
Since $Q$ and $\Lambda$ are connected by Eq.~\eqref{qqb2}, we get
\begin{equation}
\sum\limits_{j=1}^{N+1}\frac{\partial k_j}{\partial \Lambda} = - \left|\sum\limits_{j=1}^{N+1}\frac{\partial k_j}{\partial \Lambda}\right|<0.
\end{equation}
Using the identity~\eqref{sid2} for the function~\eqref{rhoNs} we arrive at Eqs.~\eqref{rrnn}--\eqref{deltaAij}.
Later, we will use the following representation for the entries of the matrix~\eqref{Aij}:
\begin{equation} \label{Aij2}
A_{jl} = -\frac{ c(k_j)- c(k_l)}{k_j-k_l}e^{-iy(k_j+k_l)/2} \left|\frac{\partial k_j}{\partial \Lambda}\right|^{1/2}
\left|\frac{\partial k_l}{\partial \Lambda}\right|^{1/2},
\end{equation}
where
\begin{equation} \label{ek}
c(k)= \frac{2}{L}\sum_n \frac{e^{2\pi iy n/L}}{k-2\pi n/L}.
\end{equation}
The indeterminacy in Eq.~\eqref{Aij2} at $j=l$ can be resolved by L'H\^opital's rule, which amounts to making use of the expansion
\begin{equation}
c(k_l)= c(k_j) +(k_l-k_j) \left.\frac{\partial c(k)}{\partial k}\right|_{k=k_j} .
\end{equation}
That is,
\begin{equation} \label{Ajj}
A_{jj}=-e^{-iyk_j} \left| \frac{\partial k_j}{\partial \Lambda} \right| \left.\frac{\partial c(k)}{\partial k}\right|_{k=k_j},
\end{equation}
where $c(k)$ is given by Eq.~\eqref{ek} and $ \partial k_j/\partial\Lambda$ by Eq.~\eqref{kjLambda}.
Let us represent the function $c$ from Eq.~\eqref{ek} as
\begin{equation} \label{eek}
c(k) = \oint_\Gamma \frac{dz}{\pi} \frac{e^{izy}}{e^{iLz}-1} \frac{1}{k-z},
\end{equation}
where $\Gamma$ is a union of counter-clockwise-oriented contours around the points $z =2\pi n/L$. Assuming that $k$ is real, we deform $\Gamma$ into a contour encircling the point $z=k$, and two straight lines infinitesimally above and below the real axis:
\begin{equation} \label{ek1}
c(k) = 2 i \frac{e^{iky}}{e^{iLk}-1}- \int\limits_{-\infty+i0}^{\infty+i0} \frac{dz}{\pi} \frac{e^{izy}}{e^{iLz}-1} \frac{1}{k-z}
+ \int\limits_{-\infty-i0}^{\infty-i0} \frac{dz}{\pi} \frac{e^{iz(y-L)}}{1-e^{-iLz}} \frac{1}{k-z}.
\end{equation}
We assume $0 < y < L$; the result for $y=0$ and $y=L$ follows from the continuity of $\varrho(y)$. The first integral is equal to zero, which is seen by using Cauchy's residue theorem (the integration contour is extended to the closed one by adding a half-circle in the upper half-plane). The second integral is equal to zero for the same reason (the integration contour is extended to the lower half-plane). Therefore, we get for Eq.~\eqref{ek}:
\begin{equation} \label{eLinf}
c(k)= 2 i \frac{e^{iky}}{e^{ikL}-1}.
\end{equation}
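The resummation leading to Eq.~\eqref{eLinf} is easy to test numerically. The following snippet is an illustrative sanity check, not part of the derivation; the parameter values are arbitrary, and the sum is assumed to run over all integers $n$:

```python
import numpy as np

# Compare a symmetric truncation of Eq. (ek),
#   c(k) = (2/L) * sum_n exp(2*pi*1j*y*n/L) / (k - 2*pi*n/L),
# with the closed form of Eq. (eLinf), c(k) = 2i e^{iky} / (e^{ikL} - 1),
# valid for real k and 0 < y < L.  Parameter values are arbitrary.
L, y, k = 7.0, 2.3, 1.1

n = np.arange(-200_000, 200_001)          # symmetric truncation of the sum over n
c_sum = (2.0 / L) * np.sum(np.exp(2j * np.pi * y * n / L) / (k - 2.0 * np.pi * n / L))
c_closed = 2j * np.exp(1j * k * y) / (np.exp(1j * k * L) - 1.0)

print(abs(c_sum - c_closed))              # truncation error only
```

The truncation error decays with the cutoff because the phases $e^{2\pi iyn/L}$ make the tails of the sum oscillatory.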
We now introduce the function
\begin{equation} \label{edef}
e(k)= \frac{e^{ik y}}{\nu_-(k)}.
\end{equation}
Substituting the Bethe equations~\eqref{ebethe1} into Eq.~\eqref{eLinf} we find
\begin{equation} \label{eB}
c(k_j) = e(k_j), \qquad j=1,\ldots,N+1.
\end{equation}
Furthermore,
\begin{equation} \label{deLinf}
\frac{\partial c(k)}{\partial k} = ic(k) \left( y-\frac{L}{1-e^{-ikL}} \right),
\end{equation}
and
\begin{equation}
\frac{\partial e(k)}{\partial k} = ie(k)[y-i\alpha \nu_-(k)].
\end{equation}
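Equation~\eqref{deLinf} follows by direct differentiation of Eq.~\eqref{eLinf}; a quick finite-difference check (illustrative only, with arbitrary parameter values):

```python
import numpy as np

# Check Eq. (deLinf): for c(k) = 2i e^{iky}/(e^{ikL} - 1),
#   dc/dk = i c(k) (y - L/(1 - e^{-ikL})).
L, y = 7.0, 2.3

def c(k):
    return 2j * np.exp(1j * k * y) / (np.exp(1j * k * L) - 1.0)

k, h = 1.1, 1e-6
dc_num = (c(k + h) - c(k - h)) / (2.0 * h)                  # central difference
dc_formula = 1j * c(k) * (y - L / (1.0 - np.exp(-1j * k * L)))
print(abs(dc_num - dc_formula))
```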
Using Eqs.~\eqref{edef}--\eqref{deLinf} we get for Eq.~\eqref{Ajj}
\begin{equation} \label{Ajju}
A_{jj}= -i \left|\frac{\partial k_j}{\partial \Lambda}\right| \frac1{\nu_-(k_j)} \left[y-\frac{L}{2i}\frac1{\nu_+(k_j)} \right].
\end{equation}
This expression can be represented as follows:
\begin{equation}\label{Ajj2}
A_{jj}= 1- e^{-ik_j y} \left|\frac{\partial k_j}{\partial \Lambda}\right| \left. \frac{\partial e(k)}{\partial k} \right|_{k=k_j}.
\end{equation}
Thus, we can write the matrix~\eqref{Aij2} as
\begin{equation}\label{AAAij}
A_{jl} = \delta_{jl} -\frac{e(k_j)-e(k_l)}{k_j-k_l}e^{-iy(k_j+k_l)/2} \left|\frac{\partial k_j}{\partial \Lambda}\right|^{1/2}
\left|\frac{\partial k_l}{\partial \Lambda}\right|^{1/2}.
\end{equation}
Equation~\eqref{Ajj2} can be obtained from Eq.~\eqref{AAAij} by making use of L'H\^opital's rule.
Let us represent Eq.~\eqref{AAAij} as
\begin{equation} \label{AAAij2}
A_{jl} = \delta_{jl} + \frac{2\pi}L K(k_j,k_l), \qquad j,l=1,\ldots,N+1,
\end{equation}
where
\begin{equation} \label{KNp}
K(k_j,k_l)= \frac{e_+(k_j)e_-(k_l) - e_-(k_j)e_+(k_l)}{k_j-k_l}, \qquad j,l=1,\ldots,N+1.
\end{equation}
Here,
\begin{equation} \label{epmfinite}
e_+(k_j) = -\frac1\pi \frac{e^{ik_j y/2}}{\nu_-(k_j)} \left|\frac{L}2 \frac{\partial k_j}{\partial\Lambda}\right|^{1/2}, \qquad e_-(k_j) = e^{-ik_j y/2} \left|\frac{L}2 \frac{\partial k_j}{\partial\Lambda}\right|^{1/2},
\end{equation}
where $\partial k_j/\partial\Lambda$ is defined by the exact formula~\eqref{kjLambda}. The uncertainty in Eq.~\eqref{KNp} at $j=l$ can be resolved by L'H\^opital's rule. The matrix~\eqref{deltaAij} can be written as
\begin{equation} \label{bjl2}
B_{jl}= \frac{2\pi}{L} W(k_j,k_l), \qquad j,l=1,\ldots,N+1,
\end{equation}
where
\begin{equation} \label{Wij}
W(k_j,k_l) = \frac1\pi \left( \sum\limits_{m=1}^{N+1}\frac{\partial k_m}{\partial \Lambda} \right)^{-1} e_-(k_j)e_-(k_l), \qquad j,l=1,\ldots,N+1.
\end{equation}
Using Eqs.~\eqref{AAAij2}--\eqref{Wij} we get for Eq.~\eqref{AijB}
\begin{multline} \label{rhoN}
\varrho(y) = \det\left[\delta_{jl}+ \frac{2\pi}{L}K(k_j,k_l)+\frac{2\pi}{L} W(k_j,k_l)\right]\\
-\det\left[\delta_{jl}+ \frac{2\pi}{L}K(k_j,k_l)\right].
\end{multline}
Recall that we are working at a finite constant density, Eq.~\eqref{rho0}. The expression~\eqref{rhoN} is valid in the interval $0\le y\le L$.
Since the exact function $\varrho(y)$ is $L$-periodic and satisfies the involution~\eqref{inv}, it obeys the exact identity
\begin{equation} \label{rhoe}
\varrho(L-y)=\varrho^*(y).
\end{equation}
We have verified numerically that Eq.~\eqref{rhoN} with the kernels~\eqref{KNp}--\eqref{Wij} satisfies Eq.~\eqref{rhoe} for any $N$ and for $y$ in the interval $0\le y\le L$. We have also verified it for $N=2$ by symbolic computations with the \textsc{Mathematica} package.
Let us now discuss the case of the complex quasi-momenta: $\mathrm{Im}(k_N)<0$ and $\mathrm{Im}(k_{N+1})>0$, Eq.~\eqref{kc}. The representation \eqref{eek} leads to
\begin{equation} \label{ekpm}
c(k) = - \int\limits_{-\infty+i0}^{\infty+i0} \frac{dz}{\pi} \frac{e^{izy}}{e^{iLz}-1} \frac{1}{k-z}
+ \int\limits_{-\infty-i0}^{\infty-i0} \frac{dz}{\pi} \frac{e^{izy}}{e^{iLz}-1} \frac{1}{k-z}.
\end{equation}
The first (second) integral gives a non-zero contribution for $\mathrm{Im}(k)>0$ ($\mathrm{Im}(k)<0$). In both cases one arrives at Eq.~\eqref{eLinf}. Further analysis is the same as for the real quasi-momenta; it leads to Eqs.~\eqref{KNp}--\eqref{rhoN}. Note that
\begin{equation}
c(k_{N+1},L-y) = c^*(k_N,y),
\end{equation}
and the involution~\eqref{rhoe} holds true.
We plot $\varrho(y)$ in Fig.~\ref{fig:rhoNNN}. The top panels show that it oscillates if $Q\ne0$. The bottom panels (d) and (e) demonstrate that the oscillations are largely, but not fully, suppressed for the function $e^{iQy}\varrho(y)$. Since the number of gas particles used in the plot, $N=40$, is large, the residual oscillations seen in the bottom panels (d) and (e) can be attributed to the subleading term written explicitly on the right-hand side of Eq.~\eqref{rhoAAA}, valid in the thermodynamic limit. There are no visible oscillations in the bottom panel (f), consistent with the small contribution of the subleading terms to the asymptotic formula~\eqref{rhoAAb}. Note that the oscillations of the function $e^{iQy}\varrho(y)$ can also be seen in Fig.~4 of Ref.~\cite{recher_TGtoFF_det_PainleveV}, although the thermodynamic limit has not been taken in the analytic formulas used therein, and the period of the oscillations has not been identified.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{frl.pdf}
\caption{The reduced density matrix $\varrho(y)$ for a gas with $N=40$ particles. The red dotted lines show $\varrho(y)$ at the total momentum $Q=0$. The black solid (dashed) lines show the real (imaginary) part of $\varrho(y)$ in the upper panels, and of $e^{iQy}\varrho(y)$ in the lower panels, at $Q=0.8 k_F$. Note how effectively the factor $e^{iQy}$ suppresses the oscillations of $\varrho(y)$. \label{fig:rhoNNN}}
\end{figure}
The transition from Eq.~\eqref{rhoN} to the Fredholm determinant representations~\eqref{rhoTD1} and~\eqref{rhoTD2} is straightforward; the details are given in appendix~\ref{calc12}.
\section{Conclusion}
\label{sec:conclusions}
The main result of the present paper is the Fredholm determinant representation, Eqs.~\eqref{rhoTD1} and~\eqref{rhoTD2}, for the momentum distribution function, $n(k,Q)$, of an impurity which forms a polaron state with a free Fermi gas (or the Tonks-Girardeau gas \cite{tonks_complete_1936, girardeau_impurity_TG_60}). Using this representation we examined how the properties of the impurity depend on the strength $g$ of the impurity-gas $\delta$-function interaction potential, and on the value of the total momentum $Q$ of the system (which is the same as the momentum of the polaron). We have found that the formation of the bound state strongly affects the behavior of $n(k,Q)$. In the absence of the bound state $n(k,Q)$ turns into the Fermi function at $Q=1$ (recall that the momenta are given in units of the Fermi momentum $k_F$ everywhere except for the captions to the figures). This can be seen in Figs.~\ref{NKboth}(a) and~\ref{NKboth}(b). In the presence of the bound state $n(k,Q)$ has a weak singularity at $k=Q$ for any $Q$, including $Q=1$, and the Fermi function does not emerge. This can be seen in Fig.~\ref{NKboth}(c). The distinct role the bound state plays in the behavior of the impurity's momentum distribution function is reflected at the level of the dispersion relation of the polaron. Indeed, the group velocity of the polaron vanishes at $Q=1$ in the absence of the bound state, see Fig.~\ref{fig:energy}(a). Such a vanishing velocity is consistent with the impurity spreading over the Fermi sea and mimicking the distribution of the gas particles in the momentum space. In the presence of the bound state the group velocity of the polaron does not vanish at $Q=1$; therefore the momentum distribution of the impurity cannot have the shape of the Fermi distribution function.
Another distinct feature of the polaron in the presence of the bound state is the almost linear dependence of its group velocity on $Q$ for all values of the coupling strength, Fig.~\ref{groundp}(b). That is, the impurity can be viewed as a free particle with effective mass $m_*$, and its momentum is $\langle P_\mathrm{imp}\rangle \simeq Q/m_*$. This is also seen from perturbative calculations, Ref.~\cite{panochko_polaron_19}, which are not limited to the exactly solvable case considered in our paper.
We have used the exact wave functions and spectrum of the model. In a number of papers the mobile impurity problem is investigated by using approximate wave functions. Constructed from a few particle-hole excitations, Refs.~\cite{chevy_universal_2006, combescot_chevy_07}, these functions predict rather accurately some static properties, Ref.~\cite{giraud_impurity_09}, and time dynamics, Fig.~S4 in Ref.~\cite{mathy_flutter_2012}, of the mobile impurity in one dimension. To the best of our knowledge, the momentum distribution function has not been treated using the aforementioned basis of variational functions. Other natural ways to construct variational functions, by taking solely a product of coherent states~\cite{shashi_RF_14,shchadilova_polaron_dynamics_16,kain_HF_17}, including Gaussian-state correlations between different momentum modes~\cite{shchadilova_polaron_mass_16}, or correlations to an arbitrarily high order~\cite{mistakidis_impurity_19}, are also promising. How to perform a resummation of the excitations containing an arbitrary number of particle-hole pairs for weak impurity-gas coupling is discussed in Refs.~\cite{burovski_impurity_momentum_2014,gamayun_kinetic_impurity_TG_14,gamayun_quantum_boltzmann_14}.
An exciting development in ultracold atomic physics has made it possible to set up experiments on diffusion and drag of quantum impurities embedded in a degenerate ultracold gas. Special to one dimension is the observation of Bloch oscillations of a mobile impurity moving through a quantum fluid in the absence of a periodic lattice~\cite{meinert_bloch_16}. The momentum distribution function of the impurity has been measured in that experiment. However, there the impurity neither started out in the equilibrium ground state nor reached such a state in the course of the temporal evolution.
\section*{Acknowledgements}
We thank Vadim Cheianov, Eugene Demler, Pavel Dolgirev, Eoin Quinn, Michael Knap, and Ovidiu P\^{a}\c{t}u for their valuable comments on this work. We also acknowledge the comments from the referees, which helped us to improve the manuscript.
\paragraph{Funding information}
O.L. acknowledges support from the Russian Foundation for Basic Research under grant No.~18-32-20218. The work of M.~B.~Z. is supported by Grant No.~ANR-16-CE91-0009-01 and CNRS grant PICS06738.
\begin{appendix}
\section{The $L\to\infty$ limit of Eq.~\eqref{rhoN}: transformation to Eqs.~\eqref{rhoTD1} and~\eqref{rhoTD2} \label{calc12}}
In this appendix we explain how we arrive at the Fredholm determinant representations~\eqref{rhoTD1} and~\eqref{rhoTD2}, valid for $N\to\infty$, starting from Eq.~\eqref{rhoN}, valid for any finite $N$. Recall that we are working at a finite gas density; therefore $N\to\infty$ implies $L\to\infty$.
Combining the definitions~\eqref{nupm} and~\eqref{phase3} we write
\begin{equation}
e^{i\delta(q)} = - \left[\frac{\nu_+(q)}{\nu_-(q)}\right]^{1/2}, \qquad \sin\delta(q) = [\nu_+(q)\nu_-(q)]^{1/2}.
\end{equation}
The $L\to\infty$ limit of Eq.~\eqref{kjLambda} reads
\begin{equation} \label{dklambda2}
\frac{\partial k_j}{\partial \Lambda} = \frac{2}{L} \nu_-(k_j)\nu_+(k_j), \qquad j=1,\ldots,N+1, \qquad L\to\infty
\end{equation}
for the real $k_1,\ldots,k_{N+1}$. This way, we get the kernel~\eqref{ke} from Eq.~\eqref{KNp}. Combining Eqs.~\eqref{sumdk} and~\eqref{phase2} we have
\begin{equation}
\sum_{j=1}^{N+1}\frac{\partial k_j}{\partial\Lambda} =Z.
\end{equation}
This way, we get the kernel~\eqref{ke1} from Eq.~\eqref{Wij}. This completes the derivation of the Fredholm determinant representation~\eqref{rhoTD1}.
Now let us turn to the derivation of Eq.~\eqref{rhoTD2}. The quasi-momenta $k_N$ and $k_{N+1}$ are now complex, $k_N^*=k_{N+1}$. Combining Eqs.~\eqref{sumdk} and~\eqref{qqb2} we have
\begin{equation}
\sum\limits_{j=1}^{N+1}\frac{\partial k_j}{\partial \Lambda} =Z_b.
\end{equation}
Recall that $Z_b<0$. The $L\to\infty$ limit of $k_N$ and $k_{N+1}$ is given by Eq.~\eqref{kc}. The leading term in the large $L$ expansion of Eq.~\eqref{eLinf} in the interval $0\le y \le L$ is
\begin{equation} \label{tck+}
c(k_N) \to c(k_+) = e(k_+)= 2ie^{ik_+(y-L)}, \qquad L\to\infty,
\end{equation}
and
\begin{equation}\label{ckminus}
c(k_{N+1}) \to c(k_-) = e(k_-) = -2ie^{ik_-y}, \qquad L\to\infty.
\end{equation}
Substituting equation~\eqref{kc} into \eqref{kjLambda} we obtain
\begin{equation} \label{dklambda}
\frac{\partial k_N}{\partial\Lambda} = \frac{\partial k_{N+1}}{\partial\Lambda} = \frac1\alpha +\mathcal{O}(e^{-|g|L})
\end{equation}
in place of Eq.~\eqref{dklambda2} for $j=N,N+1$. Further, we limit $y$ to the interval $0\le y\le L/2$, which implies $e(k_+)=0$ for Eq.~\eqref{tck+}. This gives
\begin{equation}
e_+(k_N)\to e_+(k_+)=0, \qquad e_+(k_{N+1})\to e_+(k_-)= \frac{2i}{\pi} e^{ik_-y/2} \left|\frac{L}{2\alpha}\right|^{1/2},
\end{equation}
and
\begin{equation}
e_-(k_N)\to e_-(k_+)=e^{-ik_+y/2} \left|\frac{L}{2\alpha}\right|^{1/2}, \qquad e_-(k_{N+1})\to e_-(k_-)= e^{-ik_-y/2} \left|\frac{L}{2\alpha}\right|^{1/2}
\end{equation}
for the $L\to\infty$ limit of the functions $e_{\pm}$ defined by Eq.~\eqref{epmfinite}. Evidently,
\begin{equation}
e_+(k_j)=-\frac1\pi \frac{e^{ik_j y}}{\nu_-(k_j)}e_-(k_j), \qquad e_+(k_-) = \frac{2i}\pi e^{ik_-y}e_-(k_-).
\end{equation}
Therefore, we get for the $L\to\infty$ limit of the function~\eqref{KNp}
\begin{equation}
K(k_j,k_N)= -\frac1\pi \frac{e^{ik_j y}}{\nu_-(k_j)}\frac{e_-(k_j)e_{-}(k_+)}{k_j-k_+}, \qquad j=1,\ldots,N-1,
\end{equation}
and
\begin{equation}
K(k_j,k_{N+1})= -\frac1\pi \left[\frac{e^{ik_j y}}{\nu_-(k_j)}+2ie^{ik_-y}\right] \frac{e_-(k_j)e_-(k_-)}{k_j-k_-}, \qquad j=1,\ldots,N-1,
\end{equation}
and
\begin{equation}
K(k_N,k_{N+1})= -\frac\alpha\pi e^{ik_-y}e_-(k_+)e_-(k_-).
\end{equation}
For the diagonal terms we use Eq.~\eqref{Ajju}:
\begin{equation}
A_{NN}=0, \qquad A_{N+1,N+1}= \frac{2y}{\alpha},
\end{equation}
and combine it with Eq.~\eqref{AAAij2}. This gives
\begin{equation}
K(k_N,k_N)= \frac{\alpha}{\pi}e^{ik_+ y}[e_-(k_+)]^2, \qquad K(k_{N+1},k_{N+1})= \frac\alpha\pi e^{ik_-y}[e_-(k_-)]^2 \left(1-\frac{2y}\alpha\right).
\end{equation}
Using the identity
\begin{equation} \label{idr}
\det (M+\xi R) = (1-\xi) \det M +\xi \det(M+R),
\end{equation}
where $\xi$ is a number and $R$ is a rank-one matrix, we write Eq.~\eqref{rhoN} as
\begin{equation} \label{rhod}
\varrho(y)= \left. \partial_\xi \det\left(I +\frac{2\pi}{L}K+\xi\frac{2\pi}L W \right) \right|_{\xi=0}.
\end{equation}
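The identity~\eqref{idr} expresses the fact that the determinant is an affine function of a rank-one perturbation; it can be verified numerically for random data (an illustrative check, with arbitrary matrices):

```python
import numpy as np

# Check the rank-one identity (idr):
#   det(M + xi*R) = (1 - xi) det(M) + xi det(M + R)
# for a square matrix M and a rank-one matrix R.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
R = np.outer(rng.standard_normal(4), rng.standard_normal(4))   # rank one
xi = 0.37

lhs = np.linalg.det(M + xi * R)
rhs = (1.0 - xi) * np.linalg.det(M) + xi * np.linalg.det(M + R)
print(abs(lhs - rhs))
```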
The last two rows and columns of this matrix are special because $k_N$ and $k_{N+1}$ are complex:
\begin{equation} \label{axib}
\det\left(I +\frac{2\pi}{L}K+\xi\frac{2\pi}L W\right) = [e_-(k_-)e_-(k_+)]^2 \det \left(\begin{array}{ccc|c|c}
& & & &\\
& \mathcal{A}& & \mathcal{A}^+ & \mathcal{A}^-\\
& & & &\\
\hline
& \mathcal{A}^+& &a &b\\
\hline
& \mathcal{A}^-& &b &d
\end{array}
\right).
\end{equation}
Here,
\begin{equation}
\mathcal{A}_{jl} = \delta_{jl} + \frac{2\pi}L K_{jl} + \frac{2\pi}L \frac\xi{Z_b} \frac{e_-(k_j)e_-(k_l)}{\pi}, \qquad j,l=1,\ldots,N-1,
\end{equation}
\begin{equation}
\mathcal{A}^+_j = \frac{2\pi}{L}\frac{e_-(k_j)}\pi\left(-\alpha e^{ik_j y} + \frac\xi{Z_b} \right), \qquad j=1,\ldots,N-1,
\end{equation}
\begin{equation}
\mathcal{A}^-_j = \frac{2\pi}{L}\frac{e_-(k_j)}\pi\left[-f_1(k_j) + \frac{\xi}{Z_b}\right], \qquad j=1,\ldots,N-1,
\end{equation}
and
\begin{equation} \label{d2det}
\mathcal{D}\equiv \left(\begin{array}{cc}
a & b \\
b & d
\end{array}\right)= -\frac{2\pi}L \frac\alpha\pi e^{ik_-y}\left(\begin{array}{cc}
0 & 1 \\
1 & \frac{2y}{\alpha}
\end{array}\right) + \frac{2\pi}L \frac{\xi}{\pi Z_b} \left(\begin{array}{cc}
1 & 1 \\
1 & 1
\end{array}\right),
\end{equation}
where
\begin{equation}
f_1(q)= \frac{e(q)-e(k_-)}{q-k_-}.
\end{equation}
We calculate the determinant and the inverse of $\mathcal{D}$ omitting the terms which are higher than the first order in $\xi$:
\begin{equation}
\det\mathcal{D} = \left(\frac{2\pi}L\right)^2 e^{ik_-y} \frac{\alpha^2}{\pi^2} \left[-e^{ik_- y} + \frac{\xi}{Z_b}\frac{2}{\alpha}\left(1-\frac{y}\alpha\right)\right]
\end{equation}
and
\begin{equation}
\mathcal{D}^{-1}= \frac{L}{2\pi} \frac\pi\alpha e^{-ik_-y} \left[\left(\begin{array}{cc}
\frac{2y}{\alpha} & -1 \\
-1 & 0
\end{array}\right) - \frac{\xi}{Z_b} \frac{e^{-ik_-y}}\alpha \left(\begin{array}{cc}
\left(1-\frac{2y}{\alpha}\right)^2 & 1-\frac{2y}\alpha \\
1-\frac{2y}\alpha & 1
\end{array}\right) \right].
\end{equation}
Suppose $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, and $\mathcal{D}$ are arbitrary matrices of dimension $n \times n$, $n \times m$, $m \times n$, and $m \times m$, respectively. When $\mathcal{D}$ is invertible, one has the identity
\begin{equation}
\det {\begin{pmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{pmatrix}}=\det(\mathcal{D})\det(\mathcal{A}-\mathcal{B}\mathcal{D}^{-1}\mathcal{C}).
\end{equation}
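This is the standard Schur-complement factorization of a block determinant; a quick numerical illustration with random blocks (arbitrary data, for checking purposes only):

```python
import numpy as np

# Check det [[A, B], [C, D]] = det(D) det(A - B D^{-1} C) for invertible D.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 3.0 * np.eye(2)   # keep D well conditioned

full = np.block([[A, B], [C, D]])
lhs = np.linalg.det(full)
rhs = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)
print(abs(lhs - rhs))
```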
We use this identity for the determinant~\eqref{axib}, where $\mathcal{D}$ is given by Eq.~\eqref{d2det}.
We have
\begin{equation}
[\mathcal{B}\mathcal{D}^{-1}\mathcal{C}]_{jl} = \mathcal{A}^+_j \mathcal{D}^{-1}_{11} \mathcal{A}^+_l + \mathcal{A}^-_j \mathcal{D}^{-1}_{21} \mathcal{A}^+_l + \mathcal{A}^+_j \mathcal{D}^{-1}_{12} \mathcal{A}^-_l + \mathcal{A}^-_j \mathcal{D}^{-1}_{22} \mathcal{A}^-_l.
\end{equation}
This gives
\begin{multline}
[\mathcal{B}\mathcal{D}^{-1}\mathcal{C}]_{jl} = \frac{2\pi}L \frac{e_-(k_j)e_-(k_l)}{\pi} e^{-ik_-y} \left\{ -f_1(k_j)e^{ik_l y} - e^{ik_j y}f_1(k_l) +2y e^{ik_j y} e^{ik_l y} \right.\\ \left.
-\frac{\xi}{Z_b}\frac{e^{-ik_- y}}{\alpha^2}[f_2(k_j)f_2(k_l) -\alpha^2 e^{2ik_-y}] \right\},
\end{multline}
where
\begin{equation}
f_2(q) = \alpha e^{ik_-y} -(\alpha-2y)e^{iqy} -f_1(q).
\end{equation}
This leads to the following representation of Eq.~\eqref{rhod} in the $L\to\infty$ limit:
\begin{equation} \label{rhodw}
\varrho(y)= e^{-ik_+ y} \left. \partial_\xi \left\{ \left[-e^{ik_- y}+\frac{\xi}{Z_b}\frac{2}{\alpha}\left(1-\frac{y}\alpha\right)\right] \det\left(\hat I + \hat K_1 +\xi \hat V_1 \right) \right\} \right|_{\xi=0}.
\end{equation}
Here,
\begin{equation} \label{k1a}
K_1(q,q^\prime)= K(q,q^\prime) + \frac{e_-(q)e_-(q^\prime)}{\pi} e^{-ik_-y} \left[f_1(q)e^{iq^\prime y}+ f_1(q^\prime)e^{iqy}-2ye^{i(q+q^\prime)y} \right],
\end{equation}
and
\begin{equation}
V_1(q,q^\prime)= \frac{e^{-2ik_-y}}{\pi\alpha^2 Z_b} e_-(q)e_-(q^\prime) f_2(q)f_2(q^\prime).
\end{equation}
Using the identity~\eqref{idr} we transform Eq.~\eqref{rhodw} to
\begin{multline} \label{rho11}
\varrho(y)= e^{-i(k_++k_-)y} \left[\det(\hat I + \hat K_1 +\hat W_1) -c\det(\hat I+\hat K_1)\right] \\
= e^{-i(k_++k_-)y}(1-c) \det \left(\hat I + \hat{K}_1 + \frac1{1-c}\hat{W}_1 \right),
\end{multline}
where
\begin{equation}
W_1(q,q^\prime)= -\frac{1}{\pi\alpha^2 Z_b} e_-(q)e_-(q^\prime) f_2(q)f_2(q^\prime),
\end{equation}
and
\begin{equation} \label{c}
c = 1 - \frac{2e^{ik_-y}(\alpha-y)}{\alpha^2 Z_b}.
\end{equation}
One has
\begin{equation}
K_1(q,q^\prime) + \frac1{1-c}W_1(q,q^\prime) = K_b(q,q^\prime) + \frac1{1-c} W_b(q,q^\prime),
\end{equation}
where
\begin{equation}
K_b(q,q^\prime)= K(q,q^\prime) + \frac\alpha\pi e_-(q)e_-(q^\prime)(e^{iqy}+e^{iq^\prime y}),
\end{equation}
\begin{equation}
W_b(q,q^\prime) = -\frac{e_-(q)e_-(q^\prime)f(q)f(q^\prime)}{\pi\alpha^2 Z_b},
\end{equation}
and
\begin{equation}
f(q) = \alpha (e^{ik_-y} + e^{iqy}) - f_1(q).
\end{equation}
We get for Eq.~\eqref{rho11}
\begin{equation}
\varrho(y) = e^{-i(k_++k_-)y}(1-c) \det \left(\hat I + \hat{K}_b + \frac1{1-c}\hat{W}_b \right).
\end{equation}
Using the identity~\eqref{idr} we arrive at the expression
\begin{equation}
\varrho(y) = e^{-i(k_++k_-)y} \left[ \det \left(\hat I + \hat{K}_b + \hat{W}_b \right) - c \det\left(\hat I + \hat{K}_b \right) \right].
\end{equation}
This is the desired representation~\eqref{rhoTD2}.
\section{Small distance expansion of the reduced density matrix}
\label{smalldistance}
In this appendix we derive the formulas presented in section~\ref{sec:nkinf}. We start from the finite-size expression for the reduced density matrix, Eq.~\eqref{rhoN}. This way the repulsive ground state, the attractive gas state, and the attractive bound state are all treated at once.
The expansion of the kernels~\eqref{KNp} and~\eqref{Wij} up to order three in $y$ reads
\begin{multline} \label{eq:knexp1}
\frac{2\pi}L \left|\frac{\partial k_j}{\partial\Lambda} \frac{\partial k_l}{\partial \Lambda} \right|^{-1/2} K(k_j,k_l) = -\alpha +i(i+\Lambda)y -\frac{i}2y\alpha(k_j+k_l)\\
+ \left[\frac{\alpha}8 y^2 - \frac{i}{24} (i+\Lambda)y^3\right](k_j^2 - 2k_j k_l +k_l^2) + \frac{i}{48} \alpha y^3 (k_j^3-k_j^2 k_l - k_j k_l^2 + k_l^3) + \cdots,
\end{multline}
and
\begin{multline} \label{eq:knexp2}
\frac{2\pi}L \left(\sum_{m=1}^{N+1}\frac{\partial k_m}{\partial\Lambda}\right) \left|\frac{\partial k_j}{\partial\Lambda} \frac{\partial k_l}{\partial \Lambda} \right|^{-1/2} W(k_j,k_l) = 1 -\frac{i}2y(k_j+k_l)\\
-\frac18 y^2(k_j^2 + 2k_j k_l +k_l^2) + \frac{i}{48} y^3 (k_j^3+3k_j^2 k_l +3k_j k_l^2 + k_l^3) + \cdots ,
\end{multline}
respectively.
After substituting the expansions~\eqref{eq:knexp1} and~\eqref{eq:knexp2} into the determinants on the right hand side of Eq.~\eqref{rhoN} we use the following identity
\begin{equation}
\det\nolimits_{N+1}(I+ UV^T)= \det\nolimits_s(I+V^TU).
\end{equation}
Here, $U$ and $V$ are $(N+1)\times s$ matrices with columns formed by the $(N+1)$-component vectors $u_1,\ldots,u_s$ and $v_1,\ldots,v_s$, respectively. As a result (the \textsc{Mathematica} package has been used to evaluate the determinants) we expanded Eq.~\eqref{rhoN} up to order three in $y$:
\begin{multline} \label{rhoN22}
\varrho(y) = 1 +\frac{-iy}{S_0} S_1 +\frac{(-iy)^2}{2 S_0} \left(S_2-\alpha S_0 S_2+\alpha S_1^2\right) \\
+ \frac{(-iy)^3}{6S_0} \left[S_3-(\Lambda +i)(S_0S_2-S^2_1)-\alpha S_0S_3 +\alpha S_1S_2 \right] + \cdots,
\end{multline}
where
\begin{equation} \label{eq:sndef}
S_n = \sum\limits_{j=1}^{N+1} k_j^{n} \frac{\partial k_j}{\partial \Lambda}.
\end{equation}
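The determinant-reduction identity $\det(I+UV^{T})=\det(I+V^{T}U)$ employed above (a form of the Sylvester, or Weinstein--Aronszajn, identity) can be checked numerically with random matrices (illustrative data):

```python
import numpy as np

# Check det(I + U V^T) = det(I + V^T U) for (N+1) x s matrices U, V.
rng = np.random.default_rng(2)
N, s = 9, 3
U = rng.standard_normal((N + 1, s))
V = rng.standard_normal((N + 1, s))

lhs = np.linalg.det(np.eye(N + 1) + U @ V.T)    # (N+1) x (N+1) determinant
rhs = np.linalg.det(np.eye(s) + V.T @ U)        # s x s determinant
print(abs(lhs - rhs))
```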
We now take the thermodynamic limit in Eq.~\eqref{eq:sndef}. For the repulsive ground state and the attractive gas state we have
\begin{equation}
S_0 = Z ,
\end{equation}
\begin{equation}
S_1 = \frac{\Lambda }{\alpha}Z + \varphi,
\end{equation}
\begin{equation}
S_2 = \frac{\Lambda^2 -1}{\alpha^2 }Z + \frac{2}{\pi \alpha^2} + \frac{2 \Lambda }{\alpha} \varphi,
\end{equation}
\begin{equation}
S_3 = \frac{\Lambda^3 -3}{\alpha^3}\Lambda Z + \frac{4\Lambda}{\pi \alpha^3} + \frac{3\Lambda^2 -1 }{\alpha^2} \varphi.
\end{equation}
Here,
\begin{equation}
\varphi = \frac{1}{2\pi \alpha^2 }\ln\frac{1 +(\alpha-\Lambda)^2}{1+(\alpha+ \Lambda)^2},
\end{equation}
and $Z$ is given by Eq.~\eqref{ZZ}. Notably, the result for the attractive bound state follows by just replacing $Z$ with $Z_b$, Eq.~\eqref{ZZb}.
Finally, the expansion~\eqref{rhoN22} gives for Eqs.~\eqref{eq:<P>}, \eqref{eq:<P2>}, and \eqref{eq:tan}
\begin{equation}
\langle P_{\rm imp} \rangle = \frac{S_1}{S_0},
\end{equation}
\begin{equation}
\langle P^2_{\rm imp} \rangle = \frac{S_2+ \alpha (S_1^2 - S_0S_2)}{S_0},
\end{equation}
and
\begin{equation}
C = \frac{S_2S_0-S_1^2}{\pi S_0},
\end{equation}
respectively. This leads us to the results discussed in section~\ref{sec:nkinf}.
\end{appendix}
\section{Introduction}
Semirings and semimodules, and their applications, arise in various branches of mathematics, computer science, physics, as well as in many other areas of modern science (see, for example \cite{golan:sata}). Involutive residuated lattices arose in the literature as generalizations of classical propositional logic and classical linear logic (\cite{GJKO}). The article is organized as follows. In Section 1, for the reader's convenience, we provide all the necessary notions on semirings and involutive residuated lattices. Then, we prove a term equivalence between involutive residuated lattices and a special class of semirings that we shall call involutive $0$-free semirings. This categorical isomorphism helps us to find a necessary and sufficient condition for $[0,1]$ to be a subalgebra of an involutive residuated lattice. In particular, we show that $[0,1]$ is a subalgebra of an involutive residuated lattice if and only if $0$ is a multiplicatively idempotent element.
In the second section we focus our attention on those involutive residuated lattices for which $0$ is the bottom element. In this case $1$ is the top and so the lattice is bounded. We consider involutive semirings and define involutive semimodules. Then we characterize injective and projective involutive semimodules, generalizing a similar result for MV-semirings in \cite{dlnv:isaidscmv}. Indeed, involution seems to play a crucial role, and the results for MV-semirings seem to be generalizable whenever the Mundici functor is not involved. We show, for example, that for a finite commutative involutive semiring, injective and projective finitely generated semimodules coincide, and we show, by providing a counterexample, that without the involution this is not true.
This leads us to observe that, even though the involution appears only in the semiring and does not affect the structure of the semimodule at all, it still plays a fundamental role in the study of injective and projective semimodules.
Furthermore, we restate a well-known characterization of injective semimodules over a semiring $A$ in terms of the join-semilattice $Id(A)$ (the ideals of $A$ considered as a join-semilattice) with the reverse order.
\section{Involutive Semirings}
A \emph{0-free} \emph{semiring} is an algebra $(A, +, \cdot, 1)$ such that
\begin{itemize}
\item $(A, +)$ is a commutative semigroup,
\item $(A, \cdot, 1)$ is a monoid and
\item $a (b+c)=a b + a c$ and $(a+b) c=a c + b c$ for all $a, b, c \in A$
\end{itemize}
where, as usual, $ab$ is short for $a\cdot b$. A \emph{semiring} $(A,+,0,\cdot,1)$ satisfies the same axioms, as well as $a+0=a$ and $a0=0=0a$ for all $a\in A$. A \emph{semifield} is a semiring in which all non-zero elements have a multiplicative inverse.
A (0-free) semiring $A$ is \emph{commutative} if $a \cdot b = b \cdot a$ for all $a, b \in A$; it is \emph{1-bounded} if $a+1=1$,
and (\emph{additively}) \emph{idempotent} if $a+a=a$ for every $a \in A$ (or equivalently $1+1=1$).
Note that an idempotent semiring has a natural order defined on it by $x \leq y \iff x+y=y$, in which case $+$ is denoted by $\vee$ and $(A,\vee)$ is a join-semilattice. In the 1-bounded case, the identity $1$ is the top element. Such
algebras are also called \emph{integral}, but we do not use this terminology here since integrality has a different meaning in ring theory.
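As a concrete illustration (an aside, not used in what follows), the max-plus semiring $(\mathbb{R},\max,+,0)$ is an additively idempotent $0$-free semiring whose natural order is the usual order on the reals; this is easy to check by brute force on a finite sample:

```python
# The max-plus (tropical) semiring (R, max, +, 0): join = max, product = ordinary
# addition, multiplicative identity = the real number 0.  Brute-force check of
# the 0-free semiring axioms, idempotency, and the natural order on a sample.
from itertools import product

join = max
prod = lambda a, b: a + b
one = 0.0
sample = [-2.0, -0.5, 0.0, 1.0, 3.5]

for a, b, c in product(sample, repeat=3):
    assert prod(a, join(b, c)) == join(prod(a, b), prod(a, c))   # left distributivity
    assert prod(join(a, b), c) == join(prod(a, c), prod(b, c))   # right distributivity
    assert prod(prod(a, b), c) == prod(a, prod(b, c))            # associativity
for a in sample:
    assert join(a, a) == a                                       # additive idempotency
    assert prod(one, a) == a == prod(a, one)                     # identity
# natural order x <= y iff x + y = y, i.e. max(x, y) = y: the usual order on R
order_ok = all((join(a, b) == b) == (a <= b) for a in sample for b in sample)
print(order_ok)
```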
A \emph{residuated join-semilattice} or \emph{0-free residuated idempotent semiring} is an algebra $(A, \vee, \cdot, 1, \backslash, /)$ such that
\begin{itemize}
\item $(A, \vee)$ is a semilattice, with partial order defined by $x\le y\iff x\vee y=y$,
\item $(A, \cdot, 1)$ is a monoid and
\item \textup{(res)} \quad $xy \le z \iff x \le z/y \iff y \le x\backslash z$
\quad holds for all $x,y,z\in A$.
\end{itemize}
A \emph{pointed residuated join-semilattice} $(A,\vee,\cdot,1,\backslash,/,0)$ is a residuated join-semilattice with an additional constant $0$. Note that this constant need not be the least element. An \emph{involutive residuated lattice} is a pointed residuated join-semilattice that satisfies
${\sim}{-}x = x = -{\sim}x$ for all $x\in A$, where ${\sim}x = x\backslash 0$ and $-x=0/x$.
The operations $\sim,-$ are order-reversing, and are called \emph{left} and \emph{right linear negation}. The residuation equivalences (res) can be replaced by four identities; hence involutive residuated lattices form a variety, denoted by \textsf{InRL}.
It is well known (and easy to see) that
$\backslash,/,0$ can be expressed by the linear negations and the monoid operation:
$$x\backslash y = {\sim}((-y)x), \quad
x/y = -(y({\sim}x)) \ \text{ and } \ 0 = {\sim}1 = -1.$$
Residuation also implies that $\cdot$ distributes over $\vee$, hence $(A,\vee,\cdot,1)$ is a $0$-free idempotent semiring.
Note that the constant $0$ is an additive identity if and only if $0$ is the bottom element, or equivalently $1$ is the top element, i.\,e., the semiring is
$1$-bounded. It follows from (res) that $x0=0=0x$, so a $1$-bounded
involutive residuated lattice has a semiring reduct.
If $\cdot$ is commutative then $x\backslash y=y/x$, hence ${\sim}x=-x$.
An \emph{MV-algebra} is a $1$-bounded commutative involutive residuated
lattice that satisfies $x\vee y=(x/y)\backslash x$
(though these algebras are usually defined
using the operations $\oplus,-,0$, where $x\oplus y = {\sim}(-y\cdot -x)$) \cite{cdm:afomvr}.
An \emph{MV-semiring} \cite{dr:sasiimva} is an algebra $(A,\vee,0,\cdot,1,-)$ such
that
\begin{itemize}
\item $(A,\vee,0,\cdot,1)$ is a commutative idempotent semiring,
\item $x \le y \iff x \cdot -y = 0$ and
\item $x\vee y=-(-x\cdot -(-x\cdot y))$ for all $x,y\in A$.
\end{itemize}
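As a concrete example, the finite {\L}ukasiewicz chain $\{0,1/n,\ldots,1\}$ with $x\vee y=\max(x,y)$, $x\cdot y=\max(0,x+y-1)$ and $-x=1-x$ is an MV-semiring; the axioms can be checked by brute force (an illustrative sanity test, not a proof):

```python
# Brute-force check of the MV-semiring axioms on the Lukasiewicz chain
# {0, 1/n, ..., 1} with join = max, x.y = max(0, x + y - 1), -x = 1 - x.
from fractions import Fraction
from itertools import product

n = 5
A = [Fraction(i, n) for i in range(n + 1)]
join = max
mult = lambda x, y: max(Fraction(0), x + y - 1)
neg = lambda x: 1 - x

for x, y in product(A, repeat=2):
    assert (x <= y) == (mult(x, neg(y)) == 0)                     # second axiom
    assert join(x, y) == neg(mult(neg(x), neg(mult(neg(x), y))))  # third axiom
for x, y, z in product(A, repeat=3):
    assert mult(x, join(y, z)) == join(mult(x, y), mult(x, z))    # distributivity
print("ok")
```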
Proposition 4.11 in \cite{dr:sasiimva} shows that MV-algebras and MV-semirings are
term-equivalent. This result has led to fruitful interaction between
research in fuzzy logic and semirings/semimodules \cite{dlnv:isaidscmv,dr:sasiimva}. Below we show that the term-equivalence lifts to involutive residuated lattices and a class of
non-commutative 0-free semirings. This expands the applicability of semiring
techniques to involutive residuated lattices since 1-boundedness and the third axiom of MV-semirings are not required to hold. The general term-equivalence was first shown between coupled semirings and involutive residuated lattices in \cite{jip:raisgbia}.
The variety of involutive residuated lattices is considerably more general than the variety of MV-algebras since the latter have distributive lattice reducts, while there are non-distributive 1-bounded involutive residuated lattices (the smallest examples have seven elements).
An \emph{involutive semiring} is an algebra $(A, \vee, \cdot, 1, {\sim}, -)$ such that
\begin{itemize}
\item $(A, \vee, \cdot, 1)$ is a $0$-free idempotent semiring and
\item $x \le y \iff x \cdot {\sim}y \le -1 \iff -y\cdot x\le -1$ for all $x,y\in A$.
\end{itemize}
The element $-1$ is denoted by $0$, although it need not be the bottom
element of the join-semilattice.
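As a quick sanity check of this definition (illustrative only), the commutative {\L}ukasiewicz chain $\{0,1/n,\ldots,1\}$, with $\vee=\max$, $x\cdot y=\max(0,x+y-1)$ and ${\sim}x=-x=1-x$, satisfies the defining equivalence; moreover, the operations $x\backslash z={\sim}((-z)x)$ and $x\wedge y={\sim}(-x\vee-y)$, reconstructed from the negations as in the proof of Theorem~\ref{termeq} below, satisfy residuation and the lattice absorption laws:

```python
# Brute-force check on the (commutative) Lukasiewicz chain {0, 1/n, ..., 1}:
# the involutive-semiring equivalence x <= y iff x.(~y) <= -1 holds, and the
# reconstructed residual and meet satisfy (res) and absorption.
# In the commutative case ~x = -x = 1 - x.
from fractions import Fraction
from itertools import product

n = 4
A = [Fraction(i, n) for i in range(n + 1)]
mult = lambda x, y: max(Fraction(0), x + y - 1)
neg = lambda x: 1 - x
zero = neg(Fraction(1))                       # 0 = -1, here the bottom element

for x, y in product(A, repeat=2):
    assert (x <= y) == (mult(x, neg(y)) <= zero)          # defining equivalence
res = lambda x, z: neg(mult(neg(z), x))                   # x \ z = ~((-z) x)
meet = lambda x, y: neg(max(neg(x), neg(y)))              # x /\ y = ~(-x \/ -y)
for x, y, z in product(A, repeat=3):
    assert (mult(x, y) <= z) == (y <= res(x, z))          # (res)
for x, y in product(A, repeat=2):
    assert meet(x, max(x, y)) == x == max(x, meet(x, y))  # absorption
print("ok")
```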
\begin{thm}\label{termeq}
Involutive residuated lattices are term-equivalent to involutive semirings.
\end{thm}
\begin{proof}
As mentioned before, involutive residuated lattices have 0-free semiring
reducts, and $x\le y$ is equivalent to $x\le -{\sim}y=0/{\sim}y$, which
by (res) is equivalent to $x\cdot {\sim}y\le 0=-1$. The equivalence
$x\le y\iff -y\cdot x\le -1$ is proved similarly, showing that any involutive
residuated lattice is an involutive semiring.
Conversely, let $A$ be an involutive semiring and define $x\wedge y=
{\sim}(-x\vee -y)$. It remains to prove the identities
${\sim}{-}x=x=-{\sim}x$, that $(A,\vee,\wedge)$ is a lattice
and that (res) holds.
To prove the identity ${\sim}{-}x=x$, note that $-x\leq y \iff -x\cdot {\sim}y\leq 0 \iff {\sim}y\leq x$. Substituting $y$ by $-x$ we get ${\sim}{-}x\leq x$, hence ${\sim}{-}{\sim}{-}x\leq {\sim}{-}x\leq x$, or equivalently $-x\leq -{\sim}{-}x$.
Similarly we can substitute $x$ by ${\sim}y$ obtaining $-{\sim}y\leq y$, and replacing $y$ by $-x$ we have $-{\sim}{-}x\leq -x$, hence the identity $-x=-{\sim}{-}x$ holds.
Now $x\leq x$ implies $-x \cdot x\leq 0$, hence $-0 \cdot (-x \cdot x)\leq 0$. From the preceding identity it follows that $-{\sim}{-}0 \cdot (-{\sim}{-}x \cdot x)\leq 0$, which implies ${-}{\sim}{-}x \cdot x\leq {\sim}{-}0\leq 0$ and therefore $x\leq {\sim}{-}x$. So we have shown that the identity ${\sim}{-}x=x$ holds, and $-{\sim}x=x$ is proved similarly.
Next, observe that $x\le y\iff -y\cdot x\le 0\iff -y\cdot {\sim}-x\le 0\iff
-y\le -x$. A similar calculation for $\sim$ shows that the unary operations
in an involutive semiring are order-reversing inverses of each other.
Since $(A,\vee)$ is a join-semilattice, it follows that $(A,\wedge)$
is a meet-semilattice with respect to the same order.
We now prove the absorption laws:
$x \wedge (x \vee y)= {\sim} (-x \vee -(x \vee y))={\sim}{-}x = x$ since $x \leq x \vee y$ implies $-(x \vee y) \leq -x$. Similarly
$x \vee (x \wedge y) = x\vee {\sim} (-x \vee -y) = x$ since $-x \leq -x \vee -y$ implies ${\sim}(-x \vee -y) \leq x$. Hence $(A,\vee,\wedge)$ is a lattice.
Finally we prove that if the residuals are defined as $x \backslash z = {\sim} (-z \cdot x)$ and $z/y = -(y \cdot {\sim} z)$ then (res) holds:
$$y \leq {\sim} (-z \cdot x) \Leftrightarrow -z \cdot x \leq -y \Leftrightarrow -z \cdot x \cdot y \leq -1 \Leftrightarrow -z \leq -(x \cdot y) \Leftrightarrow xy \leq z.$$
The second equivalence of (res) is proved similarly, hence any involutive
semiring determines an involutive residuated lattice. The term-equivalence is established by observing that $x\backslash 0={\sim}(-0\cdot x)={\sim}(-{\sim}1\cdot x)={\sim}x$ and likewise $0/x=-x$.
\end{proof}
In the next result we use standard interval notation, so $[0,1] = \{ a \mid 0 \leq a \leq 1\}$.
\begin{thm}
In any involutive semiring (equivalently involutive residuated lattice) the interval $[0,1]$ is a subalgebra if and only if $0$ is a multiplicative idempotent element, i.\,e., $0\cdot 0=0$.
\end{thm}
\begin{proof}
Assume $[0,1]$ is a subalgebra of an involutive semiring. Then $0\le 1$ since
any subalgebra contains the constant $1$. We also have that $0 \leq 1 \iff 0 \leq -{\sim}1 \iff 0 \cdot 0 \leq 0$. The reverse inequality $0\le 0\cdot 0$
holds because any subalgebra is closed under $\cdot$ and hence we obtain $0 \cdot 0 = 0$.
Conversely, assume $0\cdot 0=0$. The equivalence $0 \cdot 0 \leq 0 \iff 0 \leq 1$ shows that $0,1$ are in the interval $[0,1]$. The interval is certainly closed under joins, and closure under ${\sim}$ follows from
$$
0\le a\le 1\quad\iff\quad 0={\sim}1\le{\sim}a\le{\sim}0=1.
$$
The argument for closure under $-$ is the same.
As regards the multiplication we have that if $a, b \in [0,1]$ then $ab \leq a,b$ and in particular $ab \leq 1$.
Observe that if $a \leq b$ and $c \leq d$, we have that $ac \leq bd$, so $0 \leq a, b$ implies $0 \cdot 0 \leq ab$. Since we assume $0 \cdot 0 = 0$ we have that $0 \leq ab$, hence $ab\in [0,1]$.
\end{proof}
In an involutive residuated lattice $0$ is the bottom element if and only if $1$ is the top element.
Hence \emph{1-bounded involutive semirings} can be defined as idempotent semirings with two unary operations ${\sim},-$ such that
$$
x \le y \iff x \cdot {\sim}y =0 \iff -y\cdot x=0.
$$
\section{Semimodules over 1-bounded involutive semirings}
Let $A$ be a semiring. A (left) $A$-\emph{semimodule} is a commutative monoid
$(M, +, 0)$ with a scalar multiplication $\cdot : A \times M \to M$, such that the following conditions hold for all $a, b \in A$ and $x, y \in M$:
\begin{itemize}
\item $(ab) \cdot x= a \cdot (b \cdot x)$
\item $a \cdot (x + y) = (a \cdot x) + (a \cdot y)$
\item $(a + b) \cdot x = (a \cdot x) + (b \cdot x)$
\item $0_{A} \cdot x= 0_{M}= a \cdot 0_{M}$
\item $1 \cdot x=x$.
\end{itemize}
For example, any semiring $A$ can be considered a \emph{regular} left $A$-semimodule with scalar multiplication $a\cdot x=ax$ for all $a,x\in A$.
The definition of right $A$-semimodules is completely analogous. From now on, we refer generically to semimodules without specifying left or right, and we use the notation of left semimodules.
Since the intersection of $A$-semimodules is an $A$-semimodule, we can define finitely generated and cyclic semimodules in the standard way; in particular, an $A$-semimodule $M$ is cyclic if and only if there exists $m \in M$ such that $M = Am = \{am \mid a \in A\}$.
An example of a cyclic left semimodule over a semiring $A$ is $Ax = \{ax \mid a \in A\}$ for $x \in A$; this is the principal left ideal of the semiring $A$ generated by $x$.
If $A$ is additively idempotent, then any $A$-semimodule is also
additively idempotent, hence a join-semilattice with $0$ (since $x=1x=(1+1)x=x+x$). In this case
we write $\vee$ instead of $+$ and often make use of the natural order given
by $x\le y \iff x\vee y=y$.
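For instance (a standard observation that is used repeatedly below, e.\,g.\ when forming $\mathit{Hom}_{\mathbb B}(A,\mathbb B)$), the semimodules over the Boolean semifield $\mathbb B=\{0,1\}$ are precisely the join-semilattices with least element $0$: the scalar multiplication is forced by the axioms, since
$$
1\cdot x = x \qquad\text{and}\qquad 0\cdot x = 0.
$$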
Let $(M, +, 0)$ and $(N, +, 0)$ be two semimodules over a semiring $A$. For any subsemiring $B$ of $A$, we can consider $M,N$ as semimodules over $B$. A \emph{$B$-semimodule homomorphism} is a function $f: M \to N$ such that $f(m + m')=f(m) + f(m')$ and $f(b \cdot m)=b \cdot f(m)$ for all $m, m' \in M$ and $b \in B$. The set of all such homomorphisms is denoted by $\mathit{Hom}_B(M,N)$. If we take $M$ to be the semiring $A$, considered as a semimodule over itself, then $\mathit{Hom}_B(A,N)$ is an $A$-semimodule with pointwise addition and scalar multiplication $a\cdot f$ given by $(a\cdot f)(t)=f(ta)$ for all $t\in A$. Note that $a$ has to act from the right if $A$ is noncommutative, while for commutative $A$ it holds in general that $\mathit{Hom}_B(M,N)$ is an $A$-semimodule.
Let $A$ be a semiring and $(M, +, 0)$ an $A$-semimodule.
An $A$-semimodule $E$ is \textit{injective} if and only if, given an $A$-semimodule $M$ and a subsemimodule $N$ of $M$ with inclusion map $\iota: N \to M$, any semimodule homomorphism $\alpha$ from $N$ to $E$ can be extended to a semimodule homomorphism $\beta$ from $M$ to $E$ such that $\beta\iota=\alpha$.
\medskip
\begin{center}
\begin{tikzcd}
N \arrow [d, hook, "\iota"] \arrow[r, "\alpha"] & E\\
M \arrow [ur,dashed,"\beta"]
\end{tikzcd}
\end{center}
\medskip
A semiring $A$ is called \textit{self-injective} if the regular $A$-semimodule $A$ is injective.
Recall that $\mathbb B$ denotes the 2-element Boolean semifield. A semimodule $M$ is a \emph{retract} of a semimodule $M'$ if there exist homomorphisms $r:M'\to M,s:M\to M'$ such that the composition $rs$ is the identity map on $M$.
\begin{thm}\label{inj} \textup{(}\cite{dlnv:isaidscmv}\textup{)}
Let $A$ be an additively idempotent semiring and $M$ an $A$-semimodule. Then $M$ is injective if and only if there exists a set $X$ such that $M$ is a retract of the $A$-semimodule $\mathit{Hom}_{\mathbb{B}} (A, \mathbb{B})^{X}$.
\end{thm}
Let $A$ be a semiring. An $A$-semimodule $P$ is \textit{projective} if the following condition holds: if $\varphi: M \longrightarrow N$ is a surjective $A$-homomorphism of $A$-semimodules and if
$\alpha: P \longrightarrow N$ is an $A$-homomorphism then there exists an $A$-homomorphism $\beta: P \longrightarrow M$
satisfying $\varphi\beta = \alpha$.
\medskip
\begin{center}
\begin{tikzcd}
& M \arrow[d, two heads, "\varphi"] \\
P \arrow[ur, dashed, "\beta"] \arrow[r, "\alpha"] & N
\end{tikzcd}
\end{center}
\medskip
It is well-known that in any variety of algebras the projective objects are the retracts of free objects.
In the category of semimodules over a semiring $A$, the free object over a set $X$ is $A^{(X)}=\{f:X\to A\mid f(x)=0\text{ for all but finitely many $x\in X$}\}$ (\cite{dr:sasiimva}). So, we obtain the following characterization of projective semimodules.
\begin{thm}
Let $A$ be a semiring. An $A$-semimodule $P$ is projective if and only if it is a retract of the semimodule $A^{(X)}$ for some set $X$.
\end{thm}
\section{Injective and projective semimodules over involutive semirings}
Let $A=(A, \vee, \cdot, 0, 1)$ be an additively idempotent semiring and $(M, \vee, 0)$ a semimodule over it.
$M$ is called \emph{MID-complete} if the join-semilattice $M$ is a complete lattice and satisfies the meet infinite distributive identity (MID), i.\,e.
$m\vee \bigwedge_{i \in I} m_{i} = \bigwedge_{i \in I} (m \vee m_{i})$ for all $m \in M$ and all families $(m_i)_{i\in I}$ in $M$.
A join-semilattice $(A, \vee, 0)$ is \emph{join-distributive} if for all elements $a, b_0, b_1$ of $A$ with $a \leq b_0 \vee b_1$ there exist $a_0, a_1 \in A$ such that $a_0 \leq b_0$, $a_1 \leq b_1$ and $a = a_0 \vee a_1$.
An \emph{ideal} of a join-semilattice $(A, \vee, 0)$ is a subset $I$ of $A$ such that
\begin{itemize}
\item if $a, b \in I$, then $a \vee b \in I$;
\item if $ a \in I$, $b \in A$ and $b \leq a$, then $b \in I$.
\end{itemize}
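For example, every principal downset ${\downarrow}a=\{x\in A\mid x\le a\}$ is an ideal. Moreover, in a finite join-semilattice every ideal $I$ is principal, since
$$
I \;=\; {\downarrow}\textstyle\bigvee I,
$$
a fact that is used below.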
In the rest of the section, by an ideal of a semiring $A$ we always mean an ideal of its join-semilattice reduct in the sense of the above definition, and not an ideal of the semiring.
The following result is well known from lattice theory.
\begin{lem}
Let $A$ be a join-semilattice. Then the lattice of ideals $(\mathit{Id}(A), \cap, \wedge)$, ordered by reverse inclusion, is complete. If $A$ is join-distributive, then $\mathit{Id}(A)$ is MID-complete
(i.\,e. $J \cap \bigwedge_{i \in I} J_{i} = \bigwedge_{i \in I} (J \cap J_{i})$ for any $J, J_{i} \in \mathit{Id}(A)$).
\end{lem}
\begin{proof}
For completeness it suffices to observe that for any $J_{i} \in \mathit{Id}(A)$ we have $\bigvee_{i \in I} J_{i} = \bigcap_{i \in I} J_{i}$, and since the set of ideals of a join-semilattice is closed under arbitrary intersections, the lattice is complete. For the second part, observe that $\bigwedge_{i \in I} J_{i} = \{ \bigvee_{k=1}^n a_{i_k} \mid a_{i_k} \in J_{i_k}, \{i_1, \dots, i_n\} \subseteq I, n \in \mathbb{N} \}$.
This set is obviously closed under joins; to see that it is downward closed, consider an element $x \leq a \in \bigwedge_{i \in I} J_{i}$. Then $a = a_{i_{1}} \vee \dots \vee a_{i_{n}}$, where $a_{i_{k}} \in J_{i_{k}}$ for every $k=1,\dots,n$.
Since $A$ is join-distributive (and hence, by an easy induction, join-distributive over finite joins), there exist elements $a_{i_{1}}', \dots, a_{i_{n}}'$ such that $a_{i_{k}}' \leq a_{i_{k}}$ for every $k \in \{1, \dots , n\}$ and $x = a_{i_{1}}' \vee \dots \vee a_{i_{n}}'$. Since every ideal $J_{i_{k}}$ is downward closed, we have $a_{i_{k}}' \in J_{i_{k}}$ for every $k \in \{1, \dots , n\}$, so $x \in \bigwedge_{i \in I} J_{i}$.
It is now straightforward to see that $J \cap (\bigwedge_{i \in I} J_{i}) = \bigwedge_{i \in I} (J \cap J_{i})$ and the proof is complete.
\end{proof}
For an idempotent semiring $A$, an element $a\in A$ and an ideal $I\subseteq A$, define scalar multiplication by $a\cdot I=\{x\in A\mid xa\in I\}$.
Then $a\cdot I$ is also an ideal of $A$, and
it is straightforward to check that $(\mathit{Id}(A),\cap,A)$ is an $A$-semimodule, ordered by reverse inclusion and with zero element $A$. Recall also that for a semiring
homomorphism $f:A\to B$, $\mathit{Ker}(f)=\{x\in A\mid f(x)=0\}$, and this is a member of $\mathit{Id}(A)$.
\begin{thm}\label{homid}
Let $A$ be an additively idempotent semiring. Then $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ and $\mathit{Id}(A)$ are isomorphic as $A$-semimodules.
\end{thm}
\begin{proof} As noted above, $\mathit{Ker}$ is a map from $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ to $\mathit{Id}(A)$,
and since a function $f:A\to\mathbb B$ is determined by the preimage of $\{0\}$, the map $\mathit{Ker}$ is a bijection. For $f,g\in \mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ and $a\in A$ we have
$$\mathit{Ker}(f\vee g)=\{x\in A\mid (f\vee g)(x)=0\}=\mathit{Ker}(f)\cap \mathit{Ker}(g) \text{ and}$$
$$\mathit{Ker}(a\cdot f)=\{x\in A\mid (a\cdot f)(x)=0\}=\{x\in A\mid f(xa)=0\},$$
which agrees with $a\cdot \mathit{Ker}(f)=\{x\in A\mid xa\in \mathit{Ker}(f)\}$.
\end{proof}
With this result we can restate Theorem~\ref{inj}.
\begin{cor}\label{inject}
Let $A$ be an additively idempotent semiring and $M$ an $A$-semimodule. Then, $M$ is injective if and only if $M$ is a retract of $\mathit{Id}(A)^{X}$ for some set $X$.
\end{cor}
\begin{lem}
Let $(B, \vee, 0)$ and $(M, \vee, 0)$ be two semimodules over an idempotent semiring $A$ and suppose that $M$ is a retract of $B$. If $B$ is MID-complete
then $M$ is also MID-complete.
\end{lem}
\begin{proof}
Let $\alpha: M \to B$ and $\beta : B \to M$ be the two homomorphisms which determine the retraction.
We prove that $M$ is a complete semimodule and that $\bigwedge_{i\in I}m_i = \beta(\bigwedge_{i \in I} \alpha(m_{i}))$.
Indeed, we first note that $$m_i = \beta\alpha(m_i)\geq \beta\Big(\bigwedge_{j\in I}\alpha(m_j)\Big)$$ for all $i\in I$, since $\alpha(m_i)\geq\bigwedge_{j\in I}\alpha(m_j)$ and $\beta$ is order-preserving. If $m'\in M$ and $m_i\geq m'$ for all $i\in I$, then we have that $\alpha(m_i)\geq \alpha(m')$ for all $i\in I$, and hence $\bigwedge_{i\in I}\alpha(m_i)\geq \alpha(m')$. This implies that $\beta(\bigwedge_{i\in I}\alpha(m_i))\geq \beta(\alpha(m')) = m'$. Therefore, $\bigwedge_{i\in I}m_i$ exists in $M$ and is equal to $\beta(\bigwedge_{i\in I}\alpha(m_i))$. So $M$ is complete.
Since $B$ satisfies the MID law, we have
\begin{equation*}
\begin{array}{rcl}
m \vee \bigwedge_{i\in I}m_i &=& \beta(\alpha(m)) \vee \beta(\bigwedge_{i\in I}\alpha(m_i))= \beta(\alpha(m) \vee \bigwedge_{i\in I}\alpha(m_i))\\
&=& \beta(\bigwedge_{i\in I}(\alpha(m)\vee\alpha(m_i)))= \beta(\bigwedge_{i\in I}\alpha(m\vee m_i))\\
&=&\bigwedge_{i\in I}(m\vee m_i),
\end{array}
\end{equation*}
so, $M$ is MID-complete and the statement is proved.
\end{proof}
\begin{thm}
\label{Theorem 9}
Let $A$ be a distributive and additively idempotent semiring and $M$ an injective semimodule over $A$. Then $M$ is MID-complete.
\end{thm}
\begin{proof}
By Theorem~\ref{inj}, $M$ is injective if and only if it is a retract of $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})^{X}$ for some set $X$. Since $A$ is distributive, $\mathit{Id}(A)$ is MID-complete by the first lemma of this section, and since $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ and $\mathit{Id}(A)$ are isomorphic as $A$-semimodules (and in particular as join-semilattices), $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ is MID-complete, and so is $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})^{X}$. From the previous lemma we obtain that $M$ is MID-complete.
\end{proof}
Recall that in a pointed residuated join-semilattice we define $-x=0/x$ and ${\sim}x=x\backslash 0$.
\begin{thm}
Let $A$ be a finite $1$-bounded pointed residuated join-semilattice.
Then $A$ is an involutive semiring if and only if $A$ and $\mathit{Id}(A)$
are isomorphic as $A$-semimodules via the map $\Phi(a)={\downarrow}{-}a$.
\end{thm}
\begin{proof}
By Theorem~\ref{homid} we can consider $\mathit{Id}(A)$ in place of $\mathit{Hom}_\mathbb B(A,\mathbb B)$.
First, assume $A$ is a finite $1$-bounded involutive semiring and define a map $\Phi : A \to \mathit{Id}(A)$ by $\Phi (a) = {\downarrow}{{-}a}=\{x\in A\mid x\le {-}a\}$, where $-a=0/a$. Since every ideal of a finite join-semilattice is principal, and since $-$ is a bijection, this map is also bijective. It is order-preserving since $-$ is order-reversing and $\mathit{Id}(A)$ is ordered by reverse inclusion, hence
$\Phi(a\vee b)=\Phi(a)\cap\Phi(b)$. The following calculation shows that $\Phi$ preserves scalar multiplication:
$$b\cdot \Phi(a)=\{x\in A\mid xb\le-a\}=\{x\in A\mid xba\le 0\}=\{x\in A\mid x\le-(ba)\}=\Phi(ba).$$
Conversely, assume $A$ is a finite $1$-bounded residuated join-semilattice, and $A$, $\mathit{Id}(A)$ are isomorphic as $A$-semimodules via the map $\Phi(a)={\downarrow}{-}a$, where $-a=0/a$ and
$0$ is the bottom element of $A$. Let $f(a)=\bigvee\Phi(a) = -a$. Since $A$ and $\mathit{Id}(A)$ are assumed to be isomorphic, $f$ is a bijection. From residuation it follows that $x\le 0/y\iff xy\le 0\iff y\le x\backslash 0$, hence $-,\sim$ form a Galois connection, hence ${-}{\sim}$ and ${\sim}{-}$ are closure operators and ${-}{\sim}{-}x=-x$. Since $f(x)=-x$ is a bijection, we get ${\sim}{-}x=x$
and ${-}{\sim}x=x$, so $A$ is an involutive semiring by Theorem~\ref{termeq}.
\end{proof}
The previous theorem together with Corollary~\ref{inject} gives the following result.
\begin{cor}
Let $A$ be a finite $1$-bounded involutive semiring and $M$ a semimodule over $A$. Then $M$ is injective if and only if it is a retract of $A^X$ for some set $X$.
\end{cor}
\begin{thm}\label{injiffproj}
Let $A$ be a finite $1$-bounded involutive semiring and $M$ a finitely generated $A$-semimodule. Then, $M$ is injective if and only if it is projective.
\end{thm}
\begin{proof}
Since $A \cong \mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ as $A$-semimodules, we have that retracts of $A^{X}$ for some finite set $X$ (projective semimodules) are exactly the retracts of $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})^{X}$ (injective semimodules).
\end{proof}
One may wonder in which cases injective and projective semimodules coincide, and in particular whether the hypotheses on the semiring in the above theorem can be weakened. As far as involutivity is concerned, the answer is no, as the following example shows.
\begin{example}
Consider the three-element idempotent semiring $A = \{0, a, 1\}$ with $0 < a < 1$ and $a \cdot a = a$. Injective and projective semimodules over this semiring do not coincide. First of all observe that $\mathit{Id}(A) = \{\{0\}, {\downarrow}a, A\}$. We know that $A$ is a projective semimodule over itself; we now prove that $A$ cannot be injective. Suppose $A$ is self-injective. Since $A$ is finitely generated, it would then be a retract of $\mathit{Id}(A)^n$ for some finite $n \in \mathbb{N}$, so there would be an $A$-semimodule homomorphism $\Phi: \mathit{Id}(A)^n \to A$ with $\mathit{Im}(\Phi) = A$. Note that $(\{0\}, \{0\}, \dots, \{0\})$ is the top element of $\mathit{Id}(A)^n$ with respect to reverse inclusion. If $\Phi( \{0\}, \{0\}, \dots, \{0\})$ is $a$ or $0$, then $|\mathit{Im}(\Phi)|\leq 2$ because $\Phi$ is order-preserving, and in particular $\mathit{Im}(\Phi) \neq A$. Hence $\Phi( \{0\}, \{0\}, \dots, \{0\}) = 1$; but, since $a \cdot \{0\} = \{x \in A \mid xa = 0\} = \{0\}$, in this case
\[
1 = \Phi( \{0\}, \dots, \{0\}) = \Phi( a \cdot(\{0\}, \dots, \{0\})) = a \cdot \Phi( \{0\}, \dots, \{0\}) = a \cdot 1 = a
\]
which is absurd.
\end{example}
\begin{thm}
Let $A$ be a finite $1$-bounded involutive semiring and $M$ a cyclic $A$-semimodule. Then the following are equivalent:
\begin{enumerate}
\item $M \cong Au$ for some $u \in A$ multiplicatively idempotent (i.\,e. $u \cdot u = u$);
\item $M$ is projective;
\item $M$ is injective.
\end{enumerate}
\end{thm}
\begin{proof}
The equivalence between (1) and (2) is true for any semiring (see \cite[Remark 3.4]{dr:sasiimva}). The equivalence between (2) and (3) is proved in the previous theorem.
\end{proof}
\begin{lem} \textup{(}\cite{dlnv:isaidscmv}\textup{)}
Let $A= \prod_{i\in I}A_i$ be a direct product of semirings $A_i$. Then $A$ is self-injective if and only if each $A_i$ is self-injective.
\end{lem}
\begin{cor}
Every direct product of finite involutive semirings is self-injective.
\end{cor}
\begin{proof}
Let $A$ be a finite involutive semiring. Then $A$ is self-injective, since it is isomorphic to $\mathit{Hom}_{\mathbb{B}}(A, \mathbb{B})$ as an $A$-semimodule; the corollary now follows from the preceding lemma.
\end{proof}
\section{Strong semimodules and semimodules over n-potent involutive semirings}
A semimodule $M$ over an involutive semiring $A$ is \emph{strong} if for all $a,b\in A$
$$
\forall m\in M\ (a \cdot m = b \cdot m)\implies \forall m\in M\ (-a \cdot m = -b \cdot m\text{ and }{\sim}a \cdot m = {\sim}b \cdot m).
$$
A semiring $A$ is called \emph{nilpotent} if for every $a \in A$ with $a \neq 1$ there exists an $n \in \mathbb{N}$ such that $a^{n}=0$.
An $A$-semimodule $M$ is \emph{faithful} if the action of each $a \neq 0$ in $A$ on $M$ is nontrivial, i.\,e. $a \cdot x \neq 0$ for some $x \in M$.
\begin{thm}
Let $A$ be a nilpotent $1$-bounded involutive semiring and $M$ a nontrivial $A$-semimodule. Then $M$ is a strong semimodule if and only if $M$ is faithful.
\end{thm}
\begin{proof}
Note that for any $a \in A$ we have that $a \cdot ({\sim}a) = (-a) \cdot a = 0$.
Suppose $M$ is faithful and let $a \cdot x= b \cdot x$, for all $x \in M$. Then we have $0=((-a)a) \cdot x=((-a)b) \cdot x$ for all $x \in M$ and also $((-b)a) \cdot x=0$ for all $x \in M$. Since $M$ is faithful we have $(-a)b=(-b)a=0$, which imply respectively that $b\le a$ and $a \le b$. Consequently we have $a=b$ and obviously $-a \cdot x= -b \cdot x$ and ${\sim}a \cdot x = {\sim}b \cdot x$ for all $x \in M$.
Vice versa, suppose $M$ is strong and that $a \cdot x=0=0 \cdot x$ for all $x \in M$, for some $0 \neq a$ in $A$. Then we have $-a \cdot x=-0 \cdot x= 1 \cdot x=x$ for all $x \in M$, which implies $(-a)^{n} \cdot x=x$ for all $x \in M$ and $n \in \mathbb{N}$. But, since $A$ is nilpotent, we have that $-a=1$ and so $a=0$, which contradicts the hypothesis.
\end{proof}
A semiring $A$ is \textit{multiplicatively idempotent} if $x \cdot x = x$ for every $x \in A$.
\begin{thm}
A $1$-bounded involutive semiring $A$ is multiplicatively idempotent if and only if $A$ is a Boolean algebra.
\end{thm}
\begin{proof}
From $xx=x\le 1$ it follows that $x \cdot y = x \wedge y$: indeed $xy\le x$ and $xy\le y$ by $1$-boundedness, and $z\le x$, $z\le y$ imply $z=zz\le xy$. Hence the semiring is commutative and, in particular, $-x={\sim} x$. Defining $x \rightarrow y$ as ${\sim}((-y) \cdot x) = - ((-y) \cdot x)$, we obtain that $(A, \vee, \wedge, \rightarrow, 0, 1)$ is a Heyting algebra. With $\neg x$ defined as $x \rightarrow 0$, we compute $\neg \neg x = (x \rightarrow 0) \rightarrow 0 = - (1 \cdot (-x)) = -(-x)=x$. Therefore the Heyting algebra is a Boolean algebra.
\end{proof}
For $n \in \mathbb{N}$, a semiring $A$ is $n$-\emph{von Neumann regular} if for every $a \in A$ there exists $b \in A$ such that $a^n = a^n \cdot b \cdot a^n$. A $1$-von Neumann regular semiring is simply called \emph{von Neumann regular}.
A semiring $A$ is $n$-\emph{potent} if $a^n = a^{n+1}$ for every $a \in A$, in particular a semiring $A$ is 1-potent if and only if it is multiplicatively idempotent.
\begin{thm}
Let $A$ be a 1-bounded idempotent semiring and $n\in\mathbb N$. Then $A$ is $n$-von Neumann regular if and only if $A$ is $n$-potent.
\end{thm}
\begin{proof} We first prove the result for $n=1$.
$(\Rightarrow)$ Note that, since $A$ is 1-bounded, we have $a \cdot a \leq a$ for any $a \in A$. Suppose that $A$ is von Neumann regular, say $a = a \cdot b \cdot a$. Then $a \vee (a \cdot a) = (a \cdot b \cdot a) \vee (a \cdot 1 \cdot a) = a \cdot (b \vee 1) \cdot a = a \cdot a$, since $b \vee 1 = 1$. Together with $a \cdot a \leq a$, this gives $a \cdot a = a$.
$(\Leftarrow)$ Suppose $A$ is multiplicatively idempotent, then $a = a \cdot a = a \cdot 1 \cdot a$.
Now let $n\in\mathbb N$. From the two implications proved above,
we have that $A$ is $n$-von Neumann regular iff $a^{2n}=a^{n}$. Since $a^{2n} \leq a^{n+1} \leq a^{n}$, this implies $a^{n+1}=a^{n}$. Obviously $a^{n}=a^{n+1}$ implies $a^{2n}=a^{n}$ for any $a \in A$.
\end{proof}
\begin{thm}
For a $1$-bounded involutive semiring $A$, the following statements are equivalent:
\begin{enumerate}
\item Every left principal semiring ideal $A a$ of $A$ is injective as a semimodule;
\item $A$ is a self-injective von Neumann regular semiring;
\item $A$ is a complete Boolean algebra.
\end{enumerate}
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2). We need only show that $A$ is a von Neumann regular semiring. Let $a \in A$. By condition (1), $A \cdot a$ is an injective $A$-semimodule, so there exists an $A$-homomorphism $f: A\longrightarrow A \cdot a$ such that $f|_{A \cdot a} = id_{A \cdot a}$ (extend the identity of $A \cdot a$ along the inclusion $A \cdot a \hookrightarrow A$). This implies that $$a = f(a) = f(a\cdot 1) = af(1).$$ On the other hand, since $f(1) \in A \cdot a$, there exists an element $b\in A$ such that $f(1) = b\cdot a$, and hence $a = a\cdot b\cdot a$. Thus, $A$ is von Neumann regular.
(2)$\Rightarrow$(3). Since $A$ is a self-injective semiring, the lattice $A$ is complete by Theorem \ref{Theorem 9}.
From the previous theorems we know that, since $A$ is von Neumann regular, it is multiplicatively idempotent and consequently a Boolean algebra.
(3)$\Rightarrow$(1). Suppose $A$ is a complete Boolean algebra. By \cite[Corollary 2]{fofa:iopoba}, $A$ is a self-injective semiring. Take any $a\in A$. We have $a\cdot a = a$. Define two $A$-homomorphisms $\alpha: A \cdot a\longrightarrow A$ and $\beta: A\longrightarrow A \cdot a$ by setting $\alpha(b\cdot a) = b\cdot a$ and $\beta(b) = b\cdot a$ for all $b\in A$. It is obvious that $\beta\alpha = id_{A \cdot a}$; that is, $A \cdot a$ is a retract of the $A$-semimodule $A$. Since $A$ is self-injective, by \cite[Lemma 3.1]{dlnv:isaidscmv} $A \cdot a$ is an injective $A$-semimodule, and hence statement (1) holds, finishing the proof.
\end{proof}
\begin{thm}
Let $A$ be a 1-bounded semiring. Then, for a fixed $n \in \mathbb{N}$, the following statements are equivalent:
\begin{enumerate}
\item for every $a \in A$ the cyclic semimodule generated by $a^n$ is injective as an $A$-semimodule;
\item $A$ is self-injective and $n$-potent.
\end{enumerate}
\end{thm}
\begin{proof}
$(1) \Rightarrow (2)$ Obviously $A$ is self-injective, since it is the cyclic semimodule generated by $1^n$. If $A \cdot a^n$ is injective, then there exists an $A$-homomorphism $f: A \to A \cdot a^n$ such that $f|_{A \cdot a^n} = id_{A \cdot a^n}$. This implies that $a^n=f(a^n)=f(a^n \cdot 1)= a^n \cdot f(1)$. Since $f(1) \in A \cdot a^n$, there exists an element $b \in A$ such that $f(1)=b \cdot a^n$, so $a^n= a^n \cdot b \cdot a^n$.
Hence $A$ is $n$-von Neumann regular, and by the previous theorem $a^n = a^{n+1}$ for every $a \in A$.\\
$(2) \Rightarrow (1)$ Define $\alpha: A \cdot a^n \to A$ by $\alpha (b \cdot a^n) = b \cdot a^n$ and $\beta : A \to A \cdot a^n$ by $\beta (b)= b \cdot a^n$. Then $\beta \alpha (b \cdot a^n) = b \cdot a^{2n}$. Since $a^n=a^{n+1}$ implies $a^{2n}=a^n$, and consequently $b \cdot a^{2n} = b \cdot a^n$, we have $\beta \alpha = id_{A \cdot a^n}$. So $A \cdot a^n$ is a retract of $A$, which is self-injective; this implies that $A \cdot a^n$ is injective too.
\end{proof}
As an example, consider the finite commutative linearly ordered $1$-bounded involutive semiring $C=\{0, a, b, 1\}$, where $0<a<b<1$, $a \cdot a=0$ and $b \cdot b = b$.
\medskip
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node (1) at (0,2) {$1$};
\node (b) at (0,1) {$b$};
\node (b') at (2,1) {$b^2=b$};
\node (a) at (0,0) {$a$};
\node (a') at (2,0) {$a^2=0$};
\node (0) at (0,-1) {$0$};
\draw (0)--(a)--(b)--(1);
\end{tikzpicture}
\end{center}
\medskip
It is easy to see that $C$ is 2-potent, and
$C$ is self-injective since it is a projective $C$-semimodule and therefore injective
by Theorem~\ref{injiffproj}.
Hence all the cyclic semimodules of the form $Cc^{n}$ for some $c \in C$ are injective and projective. In particular, the semimodules
$\{0\}$, $Cb$ and $C$ are injective and, using Theorem~\ref{injiffproj} again, also projective.
\noindent\textbf{Acknowledgement}. The second author is very grateful for support from the SYSMICS project and for the opportunity to spend a month doing research at Chapman University.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\comment}[1]{\vspace{5mm}\hspace{15mm}
\begin{minipage}{5in}\baselineskip 7mm\large
{{\bf Comment/Question: } #1}
\end{minipage}\vspace{5mm}
\typeout{Here is a comment line}}
\newcommand{\junk}[1]{}
\renewcommand{\H}{{\mathcal H}}
\newcommand{\delay}[1]{\mbox{$\overleftarrow{#1}$}}
\newcommand{\floor}[1]{\mbox{$\lfloor{#1}\rfloor$}}
\newcommand{\ceiling}[1]{\mbox{$\lceil{#1}\rceil$}}
\newcommand{\zwei}[2]{\left[ \begin{array}{c}
#1 \\ #2 \end{array} \right]}
\newenvironment{liste}{\begin{list}{--\hfill}{\topsep-1.4ex \labelwidth.4cm
\leftmargin.5cm \labelsep.1cm \rightmargin0cm \parsep0ex \itemsep.6ex
\partopsep1.4ex}}{\end{list}}
\newcounter{abc}
\newenvironment{romanlist}{\begin{list}{(\roman{abc})\hfill}{\usecounter{abc}
\topsep.5ex \labelwidth.6cm \leftmargin.7cm \labelsep.1cm
\rightmargin0cm \parsep0ex \itemsep.6ex
\partopsep1.6ex}}{\end{list}}
\newenvironment{alphalist}{\begin{list}{(\alph{abc})\hfill}{\usecounter{abc}
\topsep.5ex \labelwidth.6cm \leftmargin.7cm \labelsep.1cm
\rightmargin0cm \parsep0ex \itemsep.6ex
\partopsep1.6ex}}{\end{list}}
\newenvironment{arabiclist}{\begin{list}{(\arabic{abc})\hfill}{\usecounter{abc}
\topsep.5ex \labelwidth.6cm \leftmargin.7cm \labelsep.1cm
\rightmargin0cm \parsep0ex \itemsep.6ex
\partopsep1.6ex}}{\end{list}}
\newenvironment{algo}{\begin{list}{{\rmfamily\bf
Step~\arabic{abc}:}\hfill}{\usecounter{abc}
\topsep.5ex \labelwidth1.7cm \leftmargin1.7cm \labelsep0cm
\rightmargin0cm \parsep0ex \itemsep.6ex
\partopsep1.6ex}}{\end{list}}
\newcommand{\vier}[4]{\left[ \begin{array}{cc}
#1 & #2 \\ #3 & #4 \end{array} \right]}
\title{The Existence of Strongly-MDS Convolutional Codes\footnote
{This work was supported in part by NSF grants DMS-00-72383 and
CCR-02-05310.}
}
\author{
Ryan Hutchinson\\
{\small Department of Mathematics and Computer Science}\vspace{-2mm}\\
{\small Bemidji State University}\vspace{-2mm}\\
{\small {\em e-mail:} rhutchinson@bemidjistate.edu}
}
\begin{document}
\maketitle
\begin{abstract}
It is known that maximum distance separable and maximum distance
profile convolutional codes exist over large enough finite fields of
any characteristic for all parameters $(n,k,\delta )$. It has been
conjectured that the same is true for convolutional codes that are
strongly maximum distance separable. Using methods from linear
systems theory, we resolve this conjecture by showing that, over a
large enough finite field of any characteristic, codes which are
simultaneously maximum distance profile and strongly maximum
distance separable exist for all parameters $(n,k,\delta )$.\\
\noindent {\bf Keywords:} MDS codes, convolutional codes, column
distances, linear systems, minimal partial realization problem.
\end{abstract}
\section{Introduction}
In recent literature on convolutional codes, several new classes of
codes with optimal distance properties have been introduced. These
classes of codes are known as maximum distance separable (MDS)
codes, maximum distance profile (MDP) codes, and strongly MDS (sMDS)
codes. MDS codes are characterized by the property that they have
the maximum possible free distance for a given choice of code
parameters. sMDS codes are a subclass of MDS codes having the
property that this maximum possible free distance is attained at the
earliest possible encoding step. MDP codes are characterized by the
property that their column distances grow at the maximum possible
rate for a given choice of code parameters.
In~\cite{ro99a1}, it is shown that MDS convolutional codes exist for
all parameters $(n,k,\delta )$ over sufficiently large finite
fields; in~\cite{12}, a similar result is obtained for codes having
the MDP property. In~\cite{gl03r}, sMDS convolutional codes are
introduced and studied, and they are shown to exist for parameters
$(n,k,\delta )$ satisfying $(n-k) \mid \delta$. In addition, it is
conjectured that convolutional codes possessing the MDP and sMDS
properties together exist for all $(n,k,\delta )$. In this work, we
show that this conjecture is correct. The approach used is
systems-theoretic in nature; to obtain the proof, we make use of the
well-known interpretation of a convolutional code as an
input-state-output linear system as well as results from partial
realization theory.
The structure of this paper is as follows. In Section 2, we review
relevant ideas from the theory of convolutional codes. We recall as
well a connection between convolutional codes and input-state-output
linear systems that we will use to obtain our results. In Section 3,
we use a linear systems representation of convolutional codes to
give a characterization of the sMDS property. In Section 4, we use
this characterization to show the existence, for all parameters
$(n,k,\delta )$, of codes possessing both the MDP and sMDS
properties.
\section{Convolutional Codes and Linear Systems}
In this section, we recall some facts about convolutional codes and
their connection with linear systems. Throughout this paper, $0$
will be understood to be the zero matrix or vector of the
appropriate size. Let $k$ and $n$ be positive integers with $k<n$,
$p$ a prime number, ${\mathbb K}$ the algebraic closure of the prime field
${\mathbb F} _p$, and ${\mathbb F}$ a finite subfield of ${\mathbb K}$.
\begin{defi}
A {\em convolutional code} $\mathcal{C}$ of {\em rate} $k/n$ is a
rank-$k$ direct summand of ${\mathbb F} [s]^n$.
\end{defi}
$\mathcal{C}$ is a free ${\mathbb F} [s]$-module and may thus be viewed as
the column space of a full-rank matrix $G(s)\in {\mathbb F} [s]^{n\times k}$,
called a {\em generator matrix} for $\mathcal{C}$. Two full-rank
$n\times k$ matrices $G_1(s)$ and $G_2(s)$ generate the same code if
and only if there exists a unimodular matrix $U(s)\in {\mathbb F}
[s]^{k\times k}$ such that $G_1(s)=G_2(s)U(s)$.
When convenient, we will (at times with a slight abuse of notation)
make use of the fact that ${\mathbb F} [s]^n$ and ${\mathbb F} ^n[s]$ are isomorphic
${\mathbb F} [s]$-modules and think of codewords as elements of ${\mathbb F} ^n[s]$.
For example, the columns of a generator matrix $G(s)$ may be thought
of as polynomials with coefficients in ${\mathbb F} ^n$; we refer to the
degrees of these polynomials as the {\em column degrees} of $G(s)$
and denote the degree of the $j$th column by $\delta _j$. The {\em
high-order coefficient matrix} of $G(s)$, denoted $G_{\infty}$, is the
matrix whose $j$th column is the coefficient vector of $s^{\delta
_j}$ in the $j$th column of $G(s)$. In general, $G_{\infty}$ need
not have full rank. It is always possible, though, to find a
unimodular matrix $U(s)\in {\mathbb F} [s]^{k\times k}$ such that $G(s)U(s)$
has a full-rank high-order coefficient matrix (see~\cite{17}). If
$G_{\infty}$ has full rank, then $G(s)$ is called a {\em minimal
generator matrix}.
An important invariant of a convolutional code is its degree,
defined as follows:
\begin{defi}
The {\em degree} $\delta$ of a convolutional code $\mathcal{C}$ is
the maximal degree of a (polynomial) determinant of a $k\times k$
submatrix of a generator matrix of $\mathcal{C}$.
\end{defi}
This definition makes sense, as multiplication by a unimodular
matrix preserves the degrees of such determinants. We note that, if
$G(s)$ is a minimal generator matrix of $\mathcal{C}$ with column
degrees $\delta _1,\ldots ,\delta _k$, then $\delta =\sum _{j=1}^{k}
\delta _j$. A code of rate $k/n$ and degree $\delta$ will be
referred to as an $(n,k,\delta)$-code.
We turn next to notions of distance. We first recall the definition
of Hamming weight:
\begin{defi}
Let $v\in {\mathbb F} ^n$ and $v(t):=\sum _{t=0}^{d} v_ts^t \in {\mathbb F} ^n[s]$.
The {\em Hamming weight of $v$}, wt($v$), is the number
of nonzero components of $v$. The Hamming weight of $v(t)$ is $\text{wt}(v(t)):=\sum _{t=0}^{d} \text{wt}(v_t)$.
\end{defi}
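As a quick computational companion to this definition (illustrative only, not part of the formal development), Hamming weights of vectors and of vector polynomials can be computed coefficientwise; the Python sketch below represents a vector polynomial by its list of coefficient vectors.

```python
def wt(v):
    """Hamming weight of a vector: the number of nonzero components."""
    return sum(1 for c in v if c != 0)

def wt_poly(coeffs):
    """Hamming weight of a vector polynomial sum_t v_t s^t, given as
    the list of its coefficient vectors v_0, ..., v_d."""
    return sum(wt(v) for v in coeffs)

# e.g. v = (1,0,2) + (0,0,0) s + (0,3,0) s^2 over a small prime field
coeffs = [(1, 0, 2), (0, 0, 0), (0, 3, 0)]
```

Here $\mathrm{wt}(v_0)=2$, the middle coefficient contributes nothing, and the total weight of the polynomial is $3$.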
For the purpose of error control coding, it is important that the
minimum weight among the codewords of a code be as large as
possible. This leads to the concept of free distance:
\begin{defi}
The {\em free distance} of a convolutional code $\mathcal{C}$ is
\begin{equation*}
d_{free}(\mathcal{C}):=\min \{{\rm wt}(v(t))\,
\vline\,\,v(t)\in\mathcal{C} \backslash \{ 0 \}\} .
\end{equation*}
\end{defi}
Column distances also play an important part in what follows. They
measure the minimum possible distance between truncated codewords:
\begin{defi}
Let $\mathcal{C}$ be a convolutional code. For $j\in {\mathbb N} _{0}$, the
{\em $j$th column distance} of $\mathcal{C}$ is
$$
d_j^c(\mathcal{C}):=\min \left\{\sum_{t=0}^j{\rm wt}(v_t)\,\vline\,
v(t)\in \mathcal{C}\,\, \text{and}\,\, v_0\neq 0_{n} \right\},
$$
where we set $v_t:=0_n$ for $t>\text{deg }v(t)$.
\end{defi}
The following result gives upper bounds for the column distances and
the free distance of a convolutional code.
\begin{prop} \label{P-dcj.bound}
Let $\mathcal{C}$ be an $(n,k,\delta)$-code.
\begin{enumerate}
\item For every $j\in {\mathbb N} _{0}$,
\[
d_j^c(\mathcal{C})\leq(n-k)(j+1)+1.
\]
If $d_j^c(\mathcal{C})=(n-k)(j+1)+1$ for some $j$, then
$d_i^c(\mathcal{C})=(n-k)(i+1)+1$ when $i\in \{0,\ldots ,j \}$.
\item \begin{equation*} \label{G-Singleton}
d_{free}(\mathcal{C})\leq
(n-k)\Big(\Big\lfloor\frac{\delta}{k}\Big\rfloor+1\Big) +\delta+1.
\end{equation*}
\end{enumerate}
\end{prop}
Statement 1 is proved in~\cite{gl03r}, and statement 2 is proved
in~\cite{ro99a1}. The bound in 2 is called the {\em generalized
Singleton bound}.
Set $L:=\Big\lfloor\frac{\delta}{k}
\Big\rfloor+\Big\lfloor\frac{\delta}{n-k}\Big\rfloor$ and
$M:=\Big\lfloor\frac{\delta}{k}\Big\rfloor+
\Big\lceil\frac{\delta}{n-k}\Big\rceil$. We are now ready to define
the code properties of interest in this work:
\begin{defi}\label{DistProp}
Let $\mathcal{C}$ be an $(n,k,\delta)$-code. Then,
\begin{enumerate}
\item $\mathcal{C}$ is called a {\em maximum distance profile code} ({\em MDP
code}) if
$$
d_L^c(\mathcal{C})=(n-k)(L+1)+1.
$$
\item $\mathcal{C}$ is called a {\em maximum distance separable code} ({\em MDS code}) if
\[
d_{free}(\mathcal{C})=(n-k)\Big(\Big\lfloor\frac{\delta}{ k}
\Big\rfloor+1\Big)+\delta+1.
\]
\item $\mathcal{C}$ is called a {\em strongly MDS code} ({\em sMDS code}) if
\[
d_M^c(\mathcal{C})=(n-k)\Big(\Big\lfloor\frac{\delta}{k}
\Big\rfloor+1\Big)+\delta+1.
\]
\end{enumerate}
\end{defi}
Using the fact that no column distance of $\mathcal{C}$ can exceed
the generalized Singleton bound, one can show that $L$ is the
largest possible value of $j$ for which $d_j^c(\mathcal{C})$ can
attain the upper bound in statement 1 of
Proposition~\ref{P-dcj.bound}. If $\mathcal{C}$ is an MDP code,
then, by Proposition~\ref{P-dcj.bound}, $d_i^c(\mathcal{C})$ attains
this upper bound when $i\in \{0,\ldots ,L \}$. Thus, statement 1 says
that the column distances of an MDP code remain maximal for as long
as this is possible. Similarly, one can show that, if $j<M$, then
$d_j^c(\mathcal{C})<(n-k)\Big(\Big\lfloor\frac{\delta}{k}\Big\rfloor+1\Big)+\delta+1$.
Thus, 3 says that, for an sMDS code, the sequence
$\{d_j^c(\mathcal{C})\} _{j\geq 0}$ attains the generalized
Singleton bound at the smallest possible value of $j$.
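The quantities $L$, $M$, the remainder $r$ of $\delta$ on division by $n-k$, and the two bounds of Proposition~\ref{P-dcj.bound} are elementary to evaluate; the Python sketch below (illustrative only, outside the formal development) computes them for sample parameters.

```python
def code_bounds(n, k, delta):
    """For an (n, k, delta)-code, return L, M, r, the generalized
    Singleton bound, and the column-distance bound (n-k)(L+1)+1."""
    L = delta // k + delta // (n - k)
    M = delta // k + -(-delta // (n - k))   # integer ceiling of delta/(n-k)
    r = delta % (n - k)
    singleton = (n - k) * (delta // k + 1) + delta + 1
    col_bound_L = (n - k) * (L + 1) + 1
    return L, M, r, singleton, col_bound_L
```

For instance, for $(n,k,\delta)=(4,1,2)$ one gets $L=2$, $M=L+1=3$, $r=2$, generalized Singleton bound $12$, and $L$th column-distance bound $10$; when $r=0$, as for $(3,1,2)$, the two bounds coincide and $L=M$.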
In the second part of this section, we introduce a connection
between convolutional codes and linear systems that we will use to
obtain our results. Background information for this discussion and
applications of ideas from systems theory to the construction of
convolutional codes may be found, for example, in~\cite{Antsaklis,
8,ro96a1, ro99a, 7}.
Let $A \in {\mathbb K}^{\delta
\times \delta}, \,\,\, B \in {\mathbb K}^{\delta \times k}, \,\,\, C \in
{\mathbb K}^{(n-k) \times \delta}$, and $D \in {\mathbb K}^{(n-k) \times k}$. Note
that, since the number of entries in the matrices $(A,B,C,D)$ is
finite, these matrices are actually defined over a finite subfield
${\mathbb F}$ of ${\mathbb K}$. The matrices $(A,B,C,D)$ describe a time-invariant
linear system through the equations
\begin{eqnarray} \label{iso}
x_{t+1} & = & Ax_t+Bu_t, \nonumber \\ y_t & = & Cx_t+Du_t,\\
x_0&=&0,\nonumber
\end{eqnarray}
where $x_t \in {\mathbb F} ^{\delta}$, $u_t \in {\mathbb F} ^k$, and $y_t \in {\mathbb F}
^{n-k}$ are called the {\em state vector}, {\em input vector}, and
{\em output vector} at time $t$, respectively. The matrix quadruple
$(A,B,C,D)$ is called a {\em realization} for the system. We recall
the following well-known definition:
\begin{defi}\label{DefB}
$(A,B)$ is called a {\em reachable pair} if
$$
{\rm rank} \left(\left[\begin{array}{ccccc}
B & AB & \cdots & A^{\delta -2}B & A^{\delta -1}B
\end{array}
\right]\right)=\delta.
$$
$(A,C)$ is called an {\em observable pair} if
$$
{\rm rank} \left(\left[\begin{array}{ccccc}
C^T & (CA) ^T & \cdots & (CA^{\delta -2})^T & (CA^{\delta
-1})^T
\end{array}
\right] ^{\it T}\right)=\delta.
$$
\end{defi}
If $(A,B)$ is a reachable pair and $(A,C)$ is an observable pair,
then $(A,B,C,D)$ is called a {\em minimal realization}. In this
case, $\delta$ is called the {\em McMillan degree} of the system. We
denote by $S ^{\delta}_{k,n}$ the set of minimal realizations of
systems over ${\mathbb K}$ having input vectors of size $k$, output vectors
of size $n-k$, and McMillan degree $\delta$.
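Reachability and observability are rank conditions and can be verified mechanically over a prime field. The Python sketch below (an illustrative aid with a naive Gaussian elimination modulo a prime $p$; the example pair in the usage note is ad hoc, not taken from the text) tests the reachability condition of Definition~\ref{DefB}.

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), p prime, via Gaussian
    elimination; inverses come from Fermat's little theorem."""
    M = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def mat_mul(A, B, p):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def is_reachable(A, B, p):
    """(A, B) is reachable iff [B  AB ... A^{delta-1}B] has rank delta."""
    delta = len(A)
    blocks, cur = [], B
    for _ in range(delta):
        blocks.append(cur)
        cur = mat_mul(A, cur, p)
    R = [sum((blk[i] for blk in blocks), []) for i in range(delta)]
    return rank_mod_p(R, p) == delta
```

Observability of $(A,C)$ can be tested the same way by applying `is_reachable` to the transposed pair, since the observability matrix is the transpose of a reachability matrix.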
Let $\{ x_t \} _{t\geq 0}$ be a sequence of vectors in ${\mathbb F}
^{\delta}$ and $\{\binom{y_t}{u_t}\}_{t\geq 0}$, where $y_t \in {\mathbb F}
^{n-k}$ and $u_t \in {\mathbb F} ^k$, a sequence of vectors in ${\mathbb F} ^n$ having
the following properties:
\begin{enumerate}
\item Equations~(\ref{iso}) are satisfied for all $t\in {\mathbb N}_{0}$;
\item There exists a $d\in{\mathbb N}_{0}$ such that $x_{d +1}=0$ and $u_t=0$
for $t\geq d+1$.
\end{enumerate}
These properties guarantee that the sequence
$\{\binom{y_t}{u_t}\}_{t\geq 0}$ has finite weight. We refer to the
truncated sequence $\{\binom{y_t}{u_t} \} _{t=0}^{d}$ as a {\em
finite-weight sequence for $(A,B,C,D)$}. The following remarks
connect finite-weight sequences and codewords.
Let $(A,B,C,D)\in S _{k,n}^{\delta}$. The corresponding transfer
function is $T(s) := C(sI - A)^{-1} B + D$. Let $Q^{-1}(s)P(s)$ be a
left coprime factorization of $T(s)$, and set $H(s) := [-Q(s)~
P(s)]$. Set
$$
y(s) := y_0 s^{d} + y_1 s^{d - 1} + \cdots + y_{d} \in {\mathbb F} ^{n-k}[s]
$$
and
$$
u(s) := u_0 s^{d} + u_1 s^{d - 1} + \cdots + u_{d} \in {\mathbb F} ^k[s],
$$
and use their coefficients to form the vector sequence
$\{\binom{y_t}{u_t} \} _{t=0}^{d}$. We then have the following
equivalent conditions; see~\cite{8,ro99a,7} for more details:
\begin{enumerate}
\item The set $\{\binom{y_t}{u_t} \} _{t=0}^{d}$ of vectors is a finite-weight sequence for $(A,B,C,D)$.
\item
\begin{equation*}
\left[
\begin{tabular}{lllccccc}
$0$ & $\cdots\cdots\cdots\cdots $ & \multicolumn{1}{l|}{$0$} & $A^d
B$ & $ A^{d -1}B$ & $\cdots $ & $AB$ & $B$ \\ \hline
\multicolumn{3}{c|}{} & $D$ &$0$ &$\cdots$ &$\cdots$ &$0$ \\
\multicolumn{3}{c|}{} & $CB$ & $D$ &$\ddots$ & &$\vdots$ \\
\multicolumn{3}{c|}{$-I_{(d +1)(n-k)}$} & $CAB$ & $CB$ &$\ddots$ &$\ddots$ &$\vdots$ \\
\multicolumn{3}{c|}{} & $\vdots $ &$\vdots$ & $\ddots$ & $\ddots$ &$0$ \\
&&\multicolumn{1}{l|}{} & $CA^{d -1}B$ & $CA^{d -2}B$ & $ \cdots $ &
$CB$ & $D$
\end{tabular}
\right] \left[
\begin{array}{c}
y_0 \\
y_1 \\
\vdots \\
y_{d} \\ \hline
u_0\\
u_1 \\
\vdots \\
u_{d}
\end{array}
\right] =0.
\end{equation*}
\item There exists a `state vector polynomial'
$$
x(s) = x_0s^{d}+x_{1}s^{{d}-1} + \cdots + x_{d} \in {\mathbb F} ^{\delta}[s]
$$
such that
\begin{equation*}
\label{kern} \left[
\begin{array}{ccc}
sI-A&0&-B\\ -C&I_{n-k}&-D
\end{array}
\right]\left[
\begin{array}{c}
x(s)\\y(s)\\u(s)
\end{array}
\right] =0.
\end{equation*}
\item
$ H(s)\zwei{y(s)}{u(s)}= [-Q(s) \ P(s)] \zwei{y(s)}{u(s)}=0. $
\item $y(s)=T(s)u(s)$.
\end{enumerate}
Further, the right ${\mathbb F} [s]$-kernel of $H(s)$ is an $(n,k,\delta
)$-code $\mathcal{C}$.
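The block Toeplitz rows in condition 2 simply restate the convolution $y_t = Du_t + \sum_{i=1}^{t} CA^{i-1}Bu_{t-i}$ obtained by unrolling~(\ref{iso}) from $x_0=0$. The Python sketch below (illustrative only; the matrices in the usage note are ad hoc) simulates the state equations modulo a prime and confirms that the simulated outputs agree with this convolution.

```python
def simulate(A, B, C, D, inputs, p):
    """Run x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t with x_0 = 0,
    all arithmetic modulo the prime p; return the output sequence."""
    def mv(M, v):
        return [sum(a * b for a, b in zip(row, v)) % p for row in M]
    def add(u, v):
        return [(a + b) % p for a, b in zip(u, v)]
    x, ys = [0] * len(A), []
    for u in inputs:
        ys.append(add(mv(C, x), mv(D, u)))
        x = add(mv(A, x), mv(B, u))
    return ys

def toeplitz_outputs(A, B, C, D, inputs, p):
    """y_t = D u_t + sum_{i=1..t} C A^{i-1} B u_{t-i}: the relation
    encoded by the block Toeplitz rows of condition 2."""
    def mv(M, v):
        return [sum(a * b for a, b in zip(row, v)) % p for row in M]
    def mm(X, Y):
        return [[sum(a * b for a, b in zip(row, col)) % p
                 for col in zip(*Y)] for row in X]
    markov, CAi = [D], C          # Markov parameters D, CB, CAB, ...
    for _ in range(len(inputs) - 1):
        markov.append(mm(CAi, B))
        CAi = mm(CAi, A)
    ys = []
    for t in range(len(inputs)):
        y = [0] * len(C)
        for i in range(t + 1):
            y = [(a + b) % p
                 for a, b in zip(y, mv(markov[i], inputs[t - i]))]
        ys.append(y)
    return ys
```

For the nilpotent pair $A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$, $B=\begin{bmatrix}0\\1\end{bmatrix}$, $C=\begin{bmatrix}1&0\end{bmatrix}$, $D=\begin{bmatrix}1\end{bmatrix}$ over ${\mathbb F} _7$, both computations return the same output sequence, as they must for any realization.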
The code $\mathcal{C}$ is not quite suitable for our purposes. This
is due to the fact that the finite-weight sequence
$$
\binom{y_{0}}{u_{0}},\binom{y_{1}}{u_{1}},\ldots ,\binom{y_{d
-1}}{u_{d -1}},\binom{y_{d}}{u_{d}}
$$
corresponds with the codeword
$$
\binom{y_{d}}{u_{d}}+\binom{y_{d -1}}{u_{d -1}}s+\cdots
+\binom{y_{1}}{u_{1}}s^{d -1}+\binom{y_{0}}{u_{0}}s^{d} \in
\mathcal{C}.
$$
Working in the systems setting, we will show there is a realization
$(A,B,C,D)\in S ^{\delta}_{k,n}$ for which any finite-weight
sequence
$$
\binom{y_{0}}{u_{0}},\binom{y_{1}}{u_{1}},\ldots ,\binom{y_{d
-1}}{u_{d -1}},\binom{y_{d}}{u_{d}}
$$
(with $u_0 \neq 0$) formed using (\ref{iso}) has the properties that
$$
\sum_{t=0}^L \text{wt}\left (\binom{y_{t}}{u_{t}}\right )
=(L+1)(n-k)+1
$$
and
$$
\sum_{t=0}^M \text{wt}\left (\binom{y_{t}}{u_{t}}\right )\geq
(n-k)\Big(\Big\lfloor\frac{\delta}{ k} \Big\rfloor+1\Big)+\delta+1.
$$
Due to the order reversal noted above, it will not necessarily be
true that $d_L^c(\mathcal{C})=(L+1)(n-k)+1.$ The next result shows
how to overcome this problem:
\begin{prop}
Let $\mathcal{C}$ be an $(n,k,\delta )$-code with minimal generator
matrix $G(s)$. Let $\overline{G(s)}$ be the matrix obtained by
replacing each entry $p_{ij}(s)$ of $G(s)$ by
$\overline{p_{ij}(s)}:=s^{\delta _j}p_{ij}(s^{-1})$, where $\delta
_j$ is the $j$th column degree of $G(s)$. Then, $\overline{G(s)}$ is
a minimal generator matrix of an $(n,k,\delta )$-code
$\overline{\mathcal{C}}$, and
$$
\binom{y_{0}}{u_{0}} +\binom{y_{1}}{u_{1}} s + \cdots +
\binom{y_{d -1}}{u_{d -1}} s^{d
-1} +\binom{y_{d}}{u_{d}} s^{d} \in\overline{\mathcal{C}}
$$
if and only if
$$
\binom{y_{d}}{u_{d}}+\binom{y_{d -1}}{u_{d -1}}s+\cdots
+\binom{y_{1}}{u_{1}}s^{d -1}+\binom{y_{0}}{u_{0}}s^{d} \in
\mathcal{C}.
$$
\end{prop}
\begin{proof}
First, $\overline{G(s)}$ has rank $k$, since a $k\times k$ minor of
$\overline{G(s)}$ is zero if and only if the corresponding minor of
$G(s)$ is.
Next, let $G_0$ denote the $n\times k$ matrix whose $ij$th entry is
$p_{ij}(0)$ and $\overline{G} _0$ the $n\times k$ matrix whose
$ij$th entry is $\overline{p_{ij}(0)}$. Because $\mathcal{C}$ is a
summand of ${\mathbb F} [s]^n$, $G_0$ has full rank, so that each column of
$G_0$ has at least one nonzero entry. This means that
$G_0=\overline{G} _{\infty}$, $\overline{G} _{\infty}$ has full
rank, corresponding columns of $G(s)$ and $\overline{G(s)}$ have the
same column degrees, and $\overline{\overline{G(s)}} =G(s)$. From
the definition of $\overline{G(s)}$, we have that
$G_{\infty}=\overline{G} _0$; since $G(s)$ is minimal, $\overline{G}
_0$ also has full rank.
Suppose $p(s)\in {\mathbb F} [s]$ has degree $d$ and is a common divisor of
the $k\times k$ minors of $\overline{G(s)}$. Since $\overline{G}
_0$ has full rank, $p(0)\neq 0$, so that $s^dp(s^{-1})$ has degree
$d$. Since $\overline{\overline{G(s)}} =G(s)$, $s^dp(s^{-1})$ is a
common divisor of the $k\times k$ minors of $G(s)$. As $\mathcal{C}$
is a summand of ${\mathbb F} [s]^n$, it follows that $d=0$, so that the only
common divisors of the $k\times k$ minors of $\overline{G(s)}$ are
the nonzero elements of ${\mathbb F}$. Thus, the column space of
$\overline{G(s)}$ is a summand of ${\mathbb F} [s]^n$, which means that it is
a rate $k/n$ convolutional code $\overline{\mathcal{C}}$. It follows
from the remarks in the preceding paragraph that $\overline{G(s)}$
is a minimal generator matrix of $\overline{\mathcal{C}}$ and that
$\overline{\mathcal{C}}$ has degree $\delta$.
Consider the vector polynomials
$$
v(s):=v_{d}+v_{d -1}s+\cdots +v_{1}s^{d -1}+v_{0}s^{d}
$$
and
$$
\overline{v(s)}:=v_{0}+v_{1}s+\cdots +v_{d -1}s^{d -1}+v_{d}s^{d}
$$
in ${\mathbb F} ^n[s]$, and note that $\overline{v(s)} =s^{d}v(s^{-1})$.
Thinking of $v(s)$ and $\overline{v(s)}$ as column vectors in ${\mathbb F}
[s]^n$, we observe that a $(k+1)\times (k+1)$ minor of
$\begin{bmatrix} G(s)&\vline&v(s)\end{bmatrix}$ is zero if and only
if the corresponding minor of $\begin{bmatrix}\overline{G(s)}&\vline
&\overline{v(s)}
\end{bmatrix}$ is. Since $\mathcal{C}$ and $\overline{\mathcal{C}}$
are summands of ${\mathbb F} [s]^n$, this means that $v(s)\in \mathcal{C}$ if
and only if $\overline{v(s)}\in \overline{\mathcal{C}}$.
\end{proof}
This result, together with the remarks preceding it, shows that
$$
\binom{y_{0}}{u_{0}},\binom{y_{1}}{u_{1}},\ldots ,\binom{y_{d
-1}}{u_{d -1}},\binom{y_{d}}{u_{d}}
$$
is a finite-weight sequence for $(A,B,C,D)$ if and only if
$$
\binom{y_{0}}{u_{0}} +\binom{y_{1}}{u_{1}} s + \cdots +
\binom{y_{d -1}}{u_{d -1}} s^{d
-1} +\binom{y_{d}}{u_{d}} s^{d} \in\overline{\mathcal{C}}.
$$
The code $\overline{\mathcal{C}}$ will then have the property that
$d_M^c(\overline{\mathcal{C}})=(n-k)\Big
(\Big\lfloor\frac{\delta}{k} \Big\rfloor +1\Big )+\delta +1$. For
the rest of the paper, we will refer to the code
$\overline{\mathcal{C}}$ as the code represented by the matrices
$(A,B,C,D)$.
\section{Trivial Rank Deficiency and the sMDS Property}
In this section, we give conditions on the entries of the matrices
in a realization $(A,B,C,D)\in S ^{\delta}_{k,n}$ guaranteeing that
the convolutional code these matrices represent has both the MDP and
sMDS properties. For $(A,B,C,D)\in S ^{\delta}_{k,n}$ and $j\in {\mathbb N}
_{0}$, we form the matrices
\begin{equation} \label{Bl-To}
\mathcal{T}_j :=
\left[
\begin{array}{ccccc}
D & 0 &\cdots &\cdots &0 \\
CB & D &\ddots & &\vdots \\
CAB & CB &\ddots &\ddots &\vdots \\
\vdots & \vdots &\ddots &\ddots &0 \\
CA^{j -1}B & CA^{j -2}B &\cdots & CB & D
\end{array}
\right].
\end{equation}
\begin{notation}
Let $l_1,l_2 \in {\mathbb N}$ satisfy $1\leq l_1\leq (j+1)(n-k)$ and $1\leq
l_2\leq (j+1)k$. Let $1\leq i_1<\cdots <i_{l_1}\leq (j+1)(n-k)$ and
$1\leq j_1<\cdots <j_{l_2}\leq (j+1)k$ be two sequences of integers.
We denote by $(\mathcal{T} _j)_{j_1,\ldots ,j_{l_2}}^{i_1,\ldots
,i_{l_1}}$ the $l_1\times l_2$ submatrix obtained from $\mathcal{T}
_j$ by intersecting rows $i_1,\ldots ,i_{l_1}$ and columns
$j_1,\ldots ,j_{l_2}$.
\end{notation}
\noindent Notice that, if $(\mathcal{T} _j)_{j_1}^{i_1}\not =0$,
then $j_1\leq\lceil\frac{i_1}{n-k}\rceil k$.
In what follows, the notion of trivial rank deficiency plays an
important role. To define trivial rank deficiency, we think of
replacing the entries of the block matrices in $\mathcal{T}_j$ with
the indeterminates of the polynomial ring
$R:={\mathbb K}[x_1,x_2,\ldots,x_{(j+1)(n-k)k}]$. Specifically, we replace
the entry $(s,t)$ of the matrix $D$ with the indeterminate $x_{(s -
1)k + t}$ and the entry $(s,t)$ of the matrix $CA^{i}B$ with the
indeterminate $x_{(i+1)(n - k)k + (s - 1)k + t}$. The zero entries
above the block diagonal remain zero.
\begin{defi}\label{trd}
Let $c$ be an integer with $0\leq c\leq n-k-1$, and let $l$ be an
integer satisfying $1\leq l\leq \min \{ (j+1)(n-k)-c,(j+1)k\}$. A
square submatrix of $\mathcal{T}_j$ is said to be {\em trivially
rank deficient} if the determinant of this submatrix is zero when it
is viewed as a matrix over $R$ in the manner described above. A
submatrix
$(\mathcal{T}_j)^{i_1,i_2,\ldots,i_{l+c}}_{j_1,j_2,\ldots,j_l}$ of
$\mathcal{T}_j$ is called {\em trivially rank deficient} if all
${l+c \choose l}$ $l\times l$ submatrices of
$(\mathcal{T}_j)^{i_1,i_2,\ldots ,i_{l+c}} _{j_1,j_2,\ldots,j_l}$
are trivially rank deficient.
\end{defi}
To say that $(\mathcal{T}_j)^{i_1,i_2,\ldots ,i_{l+c}}
_{j_1,j_2,\ldots,j_l}$ is trivially rank deficient is to say that
$(\mathcal{T}_j)^{i_1,i_2,\ldots ,i_{l+c}} _{j_1,j_2,\ldots,j_l}$
has less than full rank regardless of how elements of ${\mathbb K}$ are
substituted for the indeterminates of $R$. The next lemma shows how
to determine if a given submatrix is trivially rank deficient.
\begin{lemma}\label{B}
Let $l$ be an integer with $1\leq l\leq \min \{
(j+1)(n-k)-c,(j+1)k\}$, and let $(\mathcal{T}_j)^{i_1,i_2,\ldots
,i_{l+c}}_{j_1,j_2,\ldots ,j_l}$ be an $(l+c)\times l$ submatrix of
$\mathcal{T}_j$. Then, the following are equivalent:
\begin{enumerate}
\item $(\mathcal{T}_j)^{i_1,i_2,\ldots ,i_{l+c}}_{j_1,j_2,\ldots ,j_l}$ is trivially
rank deficient.
\item $(\mathcal{T}_j)^{i_{1+c},i_{2+c},\ldots ,i_{l+c}}_{j_1,j_2,\ldots ,j_l}$ is trivially rank deficient.
\item The inequality
$$
j_t>\Big\lceil\frac{i_{t+c}}{n-k}\Big\rceil k
$$
holds for some $t\in\{ 1,\ldots ,l \}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For notational convenience, we set $(\mathcal{T} _j)_{\bar j}^{\bar
i}:=(\mathcal{T}_j)^{i_{1+c},i_{2+c},\ldots
,i_{l+c}}_{j_1,j_2,\ldots ,j_l}$ and $(\mathcal{T} _j)^{\tilde
i}_{\bar j}:=(\mathcal{T}_j)^{i_1,i_2,\ldots
,i_{l+c}}_{j_1,j_2,\ldots ,j_l}$.\\
${\bf 1 \Longrightarrow 2}$: Suppose $(\mathcal{T} _j)^{\tilde
i}_{\bar j}$ is trivially rank deficient. Then, by definition, all
${l+c \choose l}$ $l\times l$ submatrices of $(\mathcal{T}
_j)^{\tilde i}_{\bar j}$ are trivially rank deficient. In
particular, $(\mathcal{T} _j)_{\bar j}^{\bar i}$ is trivially rank deficient.\\
${\bf 2 \Longrightarrow 3}$: Suppose that $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ is trivially rank deficient. We first use induction on
$l$ to prove that $(\mathcal{T} _j)_{\bar j}^{\bar i}$ is lower
block triangular and has a 0 on its diagonal. If $l=1$ and
$(\mathcal{T} _j)_{\bar j}^{\bar i}$ is trivially rank deficient,
then the claim is trivially true. Suppose $l$ satisfies $2\leq l
\leq \min \{ (j+1)(n-k)-c,(j+1)k\}$, that the induction hypothesis
is satisfied for $1,2,\ldots ,l-1$, and that $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ is trivially rank deficient. If $(\mathcal{T}
_j)_{j_1}^{i_{l+c}} =0$, then every entry in $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ is 0, and thus all diagonal entries are 0. If
$(\mathcal{T} _j)_{j_1}^{i_{l+c}}\neq 0$, then let $x_{\iota}$ be
the indeterminate corresponding with $(\mathcal{T}
_j)_{j_1}^{i_{l+c}}$ when $\mathcal{T} _j$ is viewed over $R$ in the
manner described before Definition \ref{trd}. Notice that, when
$(\mathcal{T} _j)_{\bar j}^{\bar i}$ is viewed in this way, the
indeterminate $x_{\iota}$ appears exactly once. Since $x_{\iota}$ is
transcendental over ${\mathbb K} (x_1,\ldots ,x_{\iota -1})$, doing a
cofactor expansion along the first column of $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ (still viewing $(\mathcal{T} _j)_{\bar j}^{\bar i}$
over $R$) shows that the $(l - 1)\times (l - 1)$ submatrix
$(\mathcal{T}_j)^{i_{1+c},i_{2+c},\ldots ,i_{l-1+c}}_{j_2,j_3,\ldots
,j_l}$ is trivially rank deficient. By the induction hypothesis,
$(\mathcal{T}_j)^{i_{1+c},i_{2+c},\ldots ,i_{l-1+c}}_{j_2,j_3,\ldots
,j_l}$ is lower block triangular and has a 0 on its diagonal. It
follows that there is an integer $h$ satisfying $1\leq h\leq l-1$
such that $(\mathcal{T} _j)_{j_{h+1}}^{i_{h+c}}= 0$. This, in turn,
means that $(\mathcal{T} _j)_{\bar j}^{\bar i}$ is lower block
triangular. Because we assumed that $(\mathcal{T} _j)_{\bar j}^{\bar
i}$ is trivially rank deficient, it follows that at least one of
$(\mathcal{T}_j)^{i_{1+c},i_{2+c},\ldots ,i_{h+c}}_{j_1,j_2,\ldots
,j_h}$ and $(\mathcal{T}_j)^{i_{h+1+c},i_{h+2+c},\ldots
,i_{l+c}}_{j_{h+1},j_{h+2},\ldots ,j_l}$ is trivially rank
deficient. By the induction hypothesis, at least one of these
submatrices is lower block triangular and has a 0 on its diagonal.
As the diagonals of these submatrices lie on the diagonal of
$(\mathcal{T} _j)_{\bar j}^{\bar i}$, the claim follows.
Next, we note that the diagonal entries of $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ are the entries $(\mathcal{T} _j)_{j_1}^{i_{1+c}}$,
$(\mathcal{T} _j)_{j_2}^{i_{2+c}}$, $\ldots$ , $(\mathcal{T}
_j)_{j_l}^{i_{l+c}}$. From the structure of $\mathcal{T} _j$, it is
clear that, when $(\mathcal{T} _j)_{\bar j}^{\bar i}$ is viewed over
$R$, a diagonal entry $(\mathcal{T} _j)_{j_t}^{i_{t+c}}$ is 0 if and
only if
$$
j_t > \Big\lceil\frac{i_{t+c}}{n-k}\Big\rceil k.
$$
It follows that
$$
j_t > \Big\lceil\frac{i_{t+c}}{n-k}\Big\rceil k
$$
for some $t\in \{1,\ldots ,l\}$.
\\
${\bf 3 \Longrightarrow 1}$: If
$$
j_t>\Big\lceil\frac{i_{t+c}}{n-k}\Big\rceil k
$$
for some $t\in\{ 1,\ldots ,l \}$, then $(\mathcal{T} _j)_{\bar
j}^{\bar i}$ has a 0 on its diagonal and is lower block triangular.
$(\mathcal{T} _j)_{\bar j}^{\bar i}$ is therefore trivially rank
deficient. Let $(\mathcal{T}_j)^{w_1,w_2,\ldots
,w_l}_{j_1,j_2,\ldots ,j_l}$ be a submatrix of $(\mathcal{T}
_j)^{\tilde i}_{\bar j}$. Since $w_t\leq i_{t+c}$,
$$
j_t>\Big\lceil\frac{w_t}{n-k}\Big\rceil k
$$
holds as well. As before, it follows that
$(\mathcal{T}_j)^{w_1,w_2,\ldots ,w_l}_{j_1,j_2,\ldots ,j_l}$ is
trivially rank deficient. Consequently, all ${l+c \choose l}$
$l\times l$ submatrices of $(\mathcal{T} _j)^{\tilde i}_{\bar j}$ are
trivially rank deficient, so that $(\mathcal{T} _j)^{\tilde i}_{\bar
j}$ is trivially rank deficient.
\end{proof}
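For small parameters, condition 3 of Lemma~\ref{B} can be cross-checked combinatorially: for the staircase zero pattern of $\mathcal{T}_j$, a square submatrix admits a permutation diagonal of generically nonzero entries exactly when condition 3 fails (with $c=0$). The brute-force Python sketch below (illustrative only, outside the formal development) verifies this equivalence exhaustively for $n-k=k=2$ and $j=2$.

```python
from itertools import combinations, permutations
from math import ceil

def pattern_entry(i, j, nk, k):
    """Generic zero pattern of T_j: the (i, j) entry (1-based) is a free
    indeterminate iff j <= ceil(i/(n-k)) * k, and identically 0 otherwise."""
    return j <= ceil(i / nk) * k

def no_generic_diagonal(rows, cols, nk, k):
    """True iff every permutation diagonal of the submatrix meets an
    identically zero entry of the pattern."""
    return all(any(not pattern_entry(i, j, nk, k)
                   for i, j in zip(rows, perm))
               for perm in permutations(cols))

def lemma_b_condition(rows, cols, nk, k):
    """Condition 3 with c = 0: j_t > ceil(i_t/(n-k)) * k for some t."""
    return any(j > ceil(i / nk) * k for i, j in zip(rows, cols))

# exhaustive cross-check for n-k = k = 2 and j = 2 (a 6 x 6 pattern)
nk, k, J = 2, 2, 2
for l in range(1, 4):
    for rows in combinations(range(1, (J + 1) * nk + 1), l):
        for cols in combinations(range(1, (J + 1) * k + 1), l):
            assert no_generic_diagonal(rows, cols, nk, k) == \
                   lemma_b_condition(rows, cols, nk, k)
```

The equivalence reflects a Hall-type matching argument for staircase patterns: with rows and columns sorted, a matching of generically nonzero entries exists precisely when the identity diagonal works.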
We next characterize the MDP and sMDS properties in terms of trivial
rank deficiency. We denote by $r$ the difference of the generalized
Singleton bound and the upper bound for the $L$th column distance:
\begin{multline*}
r:=(n-k)\Big (\Big \lfloor \frac{\delta}{k} \Big \rfloor +1\Big ) +\delta
+1-\Big (\Big \lfloor
\frac{\delta}{k} \Big \rfloor +\Big \lfloor \frac{\delta}{n-k} \Big \rfloor +1\Big
)(n-k)-1=\\ \delta -\Big \lfloor
\frac{\delta}{n-k} \Big \rfloor (n-k).
\end{multline*}
Note that $r$ is the remainder of $\delta$ on division by $n-k$. If
$r=0$, then $L=M$, and a code is MDP if and only if it is sMDS. This
case was considered in~\cite{gl03r} and~\cite{12}, so we will assume
that $r\in \{1,\ldots ,n-k-1\}$. In this situation, $M=L+1$.
\begin{theo}\label{CharsMDS}
Let $(A,B,C,D)\in S ^{\delta}_{k,n}$ and ${\mathcal C}$ be the
$(n,k,\delta)$-code represented by $(A,B,C,D)$. Then, $\mathcal{C}$
is an MDP code if and only if every square submatrix of $\mathcal{T}
_L$ that is not trivially rank deficient has full rank. ${\mathcal C}$ is an
sMDS code if and only if, for every integer $l$ satisfying $1\leq
l\leq \min\{(M+1)(n-k)-(n-k-r),(M+1)k\}$, every submatrix
$(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{l+n-k-r}}_{j_1,j_2,\ldots
,j_l}$ that is not trivially rank deficient has full rank.
\end{theo}
\begin{proof}
The first statement is~\cite[Corollary 2.5]{12}. We consider next
the second statement.\\
$\Longleftarrow$: Suppose that
$$
v:=\left[
\begin{array}{ccccccccc}
y_0^T & y_1^T &\ldots & y_M^T & \vline&u_0^T&u_1^T & \cdots &
u_M^T
\end{array}\right]^T
$$
is formed from the first $M+1$ vectors of a finite-weight sequence
for $(A,B,C,D)$ with $u_0\not = 0$, so that the matrix equation
\begin{equation*}
\left[
\begin{tabular}{lllccccc}
\multicolumn{3}{c|}{} & $D$ &$0$ &$\cdots$ &$\cdots$ &$0$ \\
\multicolumn{3}{c|}{} & $CB$ & $D$ &$\ddots$ & &$\vdots$ \\
\multicolumn{3}{c|}{$-I_{(M +1)(n-k)}$} & $CAB$ & $CB$ &$\ddots$ &$\ddots$ &$\vdots$ \\
\multicolumn{3}{c|}{} & $\vdots $ &$\vdots$ & $\ddots$ & $\ddots$ &$0$ \\
&&\multicolumn{1}{l|}{} & $CA^{M -1}B$ & $CA^{M -2}B$ & $ \cdots $ &
$CB$ & $D$
\end{tabular}
\right] v =0_{(M +1)n}
\end{equation*}
is satisfied, and denote the weight of
$$
u:=\left[
\begin{array}{cccc}
u_0^T & u_1^T &\cdots & u_M^T
\end{array}
\right]^T
$$
by $w$. For $t\in \{1,\ldots ,w \}$, let $j_t$ denote the position
of the $t$th nonzero entry in $u$, and let $\bar u$ denote the
vector obtained from $u$ by deleting all of the zero entries.
Suppose that $(\mathcal{T} _M)_{\bar j}^{\tilde
i}:=(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{w+n-k-r}}_{j_1,j_2,\ldots
,j_w}$ is a submatrix of $\mathcal{T}_M$ such that
\begin{equation}\label{equat}
(\mathcal{T} _M)_{\bar j}^{\tilde i}\bar u=0.
\end{equation}
Since $u_0 \not = 0$, we have that
$$
j_1\leq k \leq \Big \lceil \frac{i_{1}}{n-k} \Big \rceil k\leq \Big \lceil \frac{i_{1+n-k-r}}{n-k} \Big \rceil
k.
$$
By Lemma \ref{B}, $(\mathcal{T}_M)^{i_1,i_2,\ldots
,i_{1+n-k-r}}_{j_1}$ is not trivially rank deficient, so that it has
full rank. This means that at least one of its entries is nonzero,
and, since (\ref{equat}) holds, it follows that
$$
j_2\leq \Big \lceil \frac{i_{1+n-k-r}}{n-k} \Big \rceil k\leq \Big \lceil \frac{i_{2+n-k-r}}{n-k} \Big \rceil
k.
$$
By Lemma \ref{B}, $(\mathcal{T}_M)^{i_1,i_2,\ldots
,i_{2+n-k-r}}_{j_1,j_2}$ is not trivially rank deficient, so that it
has full rank. Consequently, at least one $2\times 2$ minor of
$(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{2+n-k-r}}_{j_1,j_2}$ is
nonzero. Again, since (\ref{equat}) holds, it follows that
$$
j_3\leq \Big \lceil \frac{i_{2+n-k-r}}{n-k} \Big \rceil k\leq \Big \lceil \frac{i_{3+n-k-r}}{n-k} \Big \rceil
k.
$$
Continuing, we see that, for $t\in \{ 1,\ldots ,w \}$,
$$
j_t\leq \Big \lceil \frac{i_{t+n-k-r}}{n-k} \Big \rceil k.
$$
A final application of Lemma~\ref{B} gives that $(\mathcal{T}
_M)_{\bar j}^{\tilde i}$ is not trivially rank deficient. By
hypothesis, it must have full rank, which contradicts the assumption
that $(\mathcal{T} _M)_{\bar j}^{\tilde i}\bar u=0$. Consequently,
at most $w+n-k-r-1$ rows of $(\mathcal{T} _M)_{\bar j}^{\tilde i}$
are in the left kernel of $\bar u$. It follows that $v$ has weight
at least $w + ((M + 1)(n - k) - (w + n - k - r - 1)) = M(n - k) + 1
+ r = (L + 1)(n - k) + 1 + r$, which means that $d_M^c(\mathcal{C})
\geq (L + 1)(n - k) + 1 + r$. Recalling Proposition
\ref{P-dcj.bound} and the definition of $r$, we conclude that
$d_M^c(\mathcal{C})= (L + 1)(n - k) + 1 + r$, so that $\mathcal{C}$
is sMDS.
$\Longrightarrow:$ We prove the contrapositive. Suppose that the
matrix\eqr{Bl-To} has a $(w + n - k - r)\times w$ submatrix
$(\mathcal{T} _M)_{\bar j}^{\tilde
i}:=(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{w+n-k-r}}_{j_1,j_2,\ldots
,j_w}$ that is not trivially rank deficient and that has less than
full rank. There then exists a nonzero vector $\bar u$ of weight
$w'\leq w$ such that $(\mathcal{T} _M)_{\bar j}^{\tilde i}\bar u=0$.
Let
$$
u:=\left[
\begin{array}{cccc}
u_0^T & u_1^T &\cdots & u_M^T
\end{array}
\right]^T\in {\mathbb F} ^{(M+1)k}
$$
be the vector in which the $j_t$th entry is the $t$th entry of $\bar
u$ and all other entries are zero; because of the block Toeplitz
structure of $\mathcal{T} _M$, we may assume that $u_0\neq 0$. Using
(\ref{iso}), we form the vector
$$
v:=\left[
\begin{array}{ccccccccc}
y_0^T & y_1^T & \cdots & y_M^T & \vline& u_0^T&
u_1^T &
\cdots & u_M^T
\end{array}
\right]^T.
$$
Because $[-I_{(M+1)(n-k)}\mid \mathcal{T} _M]v=0$, the weight of $v$
is at most $w' + (M + 1)(n - k) - (w + n - k - r)\leq (M + 1)(n - k)
- (n - k - r) = (L + 1)(n - k) + r<(L+1)(n-k)+r+1$. We may choose
additional information vectors $u_{M+1},\ldots ,u_{d}$ so that
$x_{d+1}=0$ (see, for example,~\cite{Antsaklis}); in other words, it
is possible to extend $v$ into a finite-weight sequence for
$(A,B,C,D)$ whose truncation at time $M$ has weight less than the
generalized Singleton bound.
Thus, $d_M^c(\mathcal{C})<(L+1)(n-k)+r+1$, so that $\mathcal{C}$ is
not an sMDS code.
\end{proof}
Theorem \ref{CharsMDS} gives polynomial conditions on the entries of
a realization $(A,B,C,D)\in S ^{\delta}_{k,n}$ that may be used to
determine whether or not the convolutional code these matrices
represent has the MDP and sMDS properties. In the next section, we
use this information to show that we can find a realization
$(A,B,C,D)\in S ^{\delta}_{k,n}$ representing an $(n,k,\delta
)$-code that has the MDP and sMDS properties.
\section{Proof of the Existence of sMDS Convolutional Codes}
Recall that we defined the block matrices making up $\mathcal{T} _M$
in terms of matrices $(A,B,C,D)$. In this section, we will work in
the opposite direction. Let $\{ F_0,F_1,\ldots ,F_j \}$ be a
sequence of matrices in ${\mathbb K} ^{(n-k)\times k}$. Slightly abusing
notation, we set
\begin{equation} \label{Bl-To2}
\mathcal{T}_j :=
\left[
\begin{array}{cccc}
F_0 & 0 &\cdots &0 \\
F_1 & F_0 &\ddots &\vdots \\
\vdots & \vdots &\ddots &0 \\
F_j & F_{j-1} &\cdots & F_0
\end{array}
\right].
\end{equation}
The plan is to show the existence of a sequence $\{ F_0,F_1,\ldots
,F_M \}$ of matrices in ${\mathbb K} ^{(n-k)\times k}$ such that
\begin{enumerate}
\item $\mathcal{T} _M$
has the property that, for all integers $l$ with $1\leq l\leq \min
\{(M+1)(n-k)-(n-k-r),(M+1)k \}$, every submatrix
$(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{l+n-k-r}}_{j_1,j_2,\ldots
,j_l}$ that is not trivially rank deficient has full rank;
\item there is a minimal partial realization
$(A,B,C,D)\in S ^{\delta}_{k,n}$ of this matrix sequence (this means
that $D=F_0$ and $CA^{i-1}B=F_i$ for $1\leq i\leq M$).
\end{enumerate}
The matrices $(A,B,C,D)$ will represent the desired code. We begin
with the following lemma.
\begin{lemma}\label{Ex1}
There exists a sequence $\{ F_0,F_1,\ldots ,F_L \}$ of matrices in
${\mathbb K} ^{(n-k)\times k}$ such that every square submatrix of
$\mathcal{T} _L$ that is not trivially rank deficient has full rank.
\end{lemma}
\begin{proof}
Note that we may think of such a matrix sequence $\{ F_0,F_1,\ldots ,F_L \}$ as a point in ${\mathbb K} ^{(L+1)(n-k)k}$. To begin, think of the matrix (\ref{Bl-To2}) with $j=L$ as being defined over the polynomial ring ${\mathbb K}
[x_1,x_2,\ldots,x_{(L+1)(n-k)k}]$, the entries corresponding with the indeterminates of this ring in a manner
analogous to that in the previous section. When viewed in this way, the determinant of a square submatrix of
$\mathcal{T} _L$ that is not trivially rank deficient is a nonzero polynomial in ${\mathbb K}
[x_1,x_2,\ldots,x_{(L+1)(n-k)k}]$, and there is a finite number of such polynomials. The solution sets of these polynomials
make up a proper algebraic subset of ${\mathbb K} ^{(L+1)(n-k)k}$, the complement of which is a nonempty Zariski open set. Choose
$\left \{ F_0,\ldots ,F_L \right \}$ to be a point in this open set.
\end{proof}
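This genericity argument can be illustrated numerically: over the reals, a random choice of $\{F_0,\ldots,F_L\}$ lies outside the bad algebraic subset with probability one. The sketch below uses the parameters $n-k=1$, $k=2$, $L=3$, chosen only so that the leading principal square submatrices of $\mathcal{T}_L$ are never trivially rank deficient; for generic random entries they then all have full rank:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_toeplitz(F):
    """Lower block-triangular Toeplitz matrix built from [F_0, ..., F_j]."""
    j = len(F) - 1
    p, q = F[0].shape
    T = np.zeros(((j + 1) * p, (j + 1) * q))
    for row in range(j + 1):
        for col in range(row + 1):
            T[row * p:(row + 1) * p, col * q:(col + 1) * q] = F[row - col]
    return T

# n - k = 1, k = 2, L = 3: pick the F_i at random, i.e. a "generic"
# point of K^{(L+1)(n-k)k}, here over the reals for illustration.
L = 3
F = [rng.standard_normal((1, 2)) for _ in range(L + 1)]
T_L = block_toeplitz(F)

# With these parameters the leading principal s x s submatrices are not
# trivially rank deficient; generically they all have full rank.
full_rank = all(np.linalg.matrix_rank(T_L[:s, :s]) == s
                for s in range(1, L + 2))
```

Here \texttt{full\_rank} is \texttt{True} for generic draws; the exceptional choices form a measure-zero set.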
To determine the degree of a minimal partial realization of a matrix
sequence $\{ F_0,F_1,\ldots ,F_M \}$, we consider the matrices
$$
\mathcal{F}_{x,y}:= \left[
\begin{array}{cccc}
F_1 & F_2 & \cdots & F_y \\
F_2 & F_3 & \cdots & F_{y+1} \\
\vdots & \vdots & & \vdots \\
F_x & F_{x+1}& \cdots & F_{x+y-1}
\end{array}
\right].
$$
In~\cite[Lemma 3]{te70}, it is shown that the degree of a minimal
partial realization of $\{ F_0,F_1,\ldots ,F_M \}$ is given by the
expression
\begin{equation}\label{rank}
\sum_{x =1}^{M} \text{rank } \mathcal{F} _{x ,M+1-x} - \sum_{x
=1}^{M-1} \text{rank } \mathcal{F} _{x ,M-x}.
\end{equation}
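The expression (\ref{rank}) is straightforward to evaluate numerically. The following sketch (Python/NumPy, real entries for illustration) forms the block-Hankel matrices $\mathcal{F}_{x,y}$ from $F_1,\ldots,F_M$ and returns the minimal partial realization degree; note that $F_0$ does not enter the computation:

```python
import numpy as np

def hankel_block(F, x, y):
    """F_{x,y}: the block-Hankel matrix whose (i, j) block is F_{i+j+1}.

    F is the list [F_1, ..., F_M] (so F[i] holds F_{i+1}); the call
    requires x + y - 1 <= M.
    """
    return np.block([[F[i + j] for j in range(y)] for i in range(x)])

def minimal_degree(F, M):
    """Evaluate sum_x rank F_{x,M+1-x} - sum_x rank F_{x,M-x}, eq. (rank)."""
    r1 = sum(np.linalg.matrix_rank(hankel_block(F, x, M + 1 - x))
             for x in range(1, M + 1))
    r2 = sum(np.linalg.matrix_rank(hankel_block(F, x, M - x))
             for x in range(1, M))
    return r1 - r2
```

For instance, the scalar Markov parameters $F_i = 2^{i-1}$ come from the degree-one realization $A=2$, $B=C=1$, and the formula indeed returns 1.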
The next results show that, starting with a matrix sequence $\{
F_0,F_1,\ldots ,F_L \}$ as described in Lemma \ref{Ex1}, we can find
a matrix $F_M$ so that the expression (\ref{rank}) evaluates to
$\delta$.
\begin{lemma}\label{FM1}
Let $\left\{F_0,\ldots,F_M\right\}$ be a sequence of matrices in ${\mathbb K}
^{(n-k)\times k}$ such that every square submatrix of $
\mathcal{T}_L$ that is not trivially rank deficient has full rank.
Then,
\begin{enumerate}
\item For $x\in \{1,\ldots ,M-1 \}$, ${\rm rank}\, \mathcal{F} _{x ,M-x }=\min \{ x
(n-k),(M-x )k \}$.
\item If ${\rm rank}\, \mathcal{F}_{x
,M+1-x} <\min \{ x (n-k),(M+1-x )k \}$, then $x=\lceil M\frac{k}{n}
\rceil$. If $x\in \{1,\ldots ,M \} \backslash \{ \lceil M\frac{k}{n}
\rceil \}$, then ${\rm rank}\, \mathcal{F} _{x ,M+1-x }=\min \{ x
(n-k),(M+1-x ) k \}$.
\item Set $\bar x:=\lceil M\frac{k}{n} \rceil$. The expression
(\ref{rank}) reduces to ${\rm rank}\, \mathcal{F} _{\bar x,M+1-\bar x}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To verify the first claim, observe that $\mathcal{F} _{x ,M-x }$
differs by a column permutation from a submatrix of $\mathcal{T} _L$
that has full rank.
For the second claim, suppose first that $x(n-k)\leq (M+1-x)k$. The
hypothesis is then that ${\rm rank}\, \mathcal{F}_{x ,M+1-x } <x (n-k)$. If
$x <M$, it follows from 1 that ${\rm rank}\, \mathcal{F}_{x , M-x } =\min
\{ x (n-k),(M-x )k \}$, which means that $x (n-k)>(M-x )k$.
Together, this gives
$$
(M-x )k<x (n-k)\leq (M+1-x )k
$$
(note that the first inequality also holds if $x =M$). This can be
rewritten as
$$
M\frac{k}{n} <x \leq (M+1)\frac{k}{n}.
$$
If we suppose instead that $(M+1-x )k\leq x (n-k)$, similar
reasoning leads to
$$
(M+1)\frac{k}{n} \leq x <M\frac{k}{n} +1.
$$
In all, we have
$$
M\frac{k}{n} <x <M\frac{k}{n} +1.
$$
Since $x$ is an integer, $x=\lceil M\frac{k}{n} \rceil$. The second
statement follows immediately.
The third claim follows directly from the first two, since $x<\bar x
\implies x(n-k)<(M-x)k$ and $x>\bar x \implies x(n-k)>(M+1-x)k$.
\end{proof}
\begin{theo}\label{FM3}
Let $\left\{F_0,\ldots ,F_L\right\}$ be a sequence of matrices in
${\mathbb K} ^{(n-k)\times k}$ such that every square submatrix of
$\mathcal{T}_L $ that is not trivially rank deficient has full rank.
Then, one can find a matrix $F _M\in {\mathbb K} ^{(n-k)\times k}$ such that
\begin{enumerate}
\item the matrix
$$
\mathcal{F}_{\bar x,M+1-\bar x}= \left[
\begin{array}{cccc}
F_1 & F_2 & \cdots & F_{M+1-\bar x} \\
F_2 & F_3 & \cdots & F_{M+2-\bar x} \\
\vdots & \vdots & & \vdots \\
F_{\bar x} & F_{\bar x +1}& \cdots & F_M
\end{array}
\right]
$$
has rank $\delta$.
\item the matrix $\mathcal{T}_M$ has the property that, for every
integer $l$ with $1\leq l\leq \min \{(M+1)(n-k)-(n-k-r),(M+1)k \}$,
every submatrix $(\mathcal{T}_M)^{i_1,i_2,\ldots
,i_{l+n-k-r}}_{j_1,j_2,\ldots ,j_l}$ that is not trivially rank
deficient has full rank.
\end{enumerate}
\end{theo}
\begin{proof}
We may write
$$
\delta =\Big\lfloor \frac{\delta}{n-k} \Big\rfloor
(n-k)+r=\Big\lfloor \frac{\delta}{k} \Big\rfloor k+r',
$$
where $1\leq r<n-k$ and $0\leq r'<k$. Since
$$
M=L+1=\Big\lfloor \frac{\delta}{n-k} \Big\rfloor +\Big\lfloor
\frac{\delta}{k} \Big\rfloor +1,
$$
we see that
\begin{align*}
\frac{Mk}{n}&=\Big\lfloor \frac{\delta}{n-k} \Big\rfloor \frac{k}{n}
+\Big\lfloor \frac{\delta}{k} \Big\rfloor \frac{k}{n} +\frac{k}{n}
=\Big\lfloor \frac{\delta}{n-k} \Big\rfloor -\Big\lfloor
\frac{\delta}{n-k} \Big\rfloor \frac{n-k}{n} +\Big\lfloor
\frac{\delta}{k} \Big\rfloor \frac{k}{n} +\frac{k}{n}\\
&=\Big\lfloor \frac{\delta}{n-k} \Big\rfloor -\frac{\delta -r}{n}
+\frac{\delta -r'}{n} +\frac{k}{n} =\Big\lfloor \frac{\delta}{n-k}
\Big\rfloor +\frac{k-r'+r}{n}.
\end{align*}
Since $1<k-r'+r<n$, we have $\lfloor \frac{\delta}{n-k} \rfloor
=\bar x -1$, so that $\delta =(\bar x -1)(n-k)+r$ and
$$
\frac{Mk}{n} =\bar x -1+\frac{k-r'+r}{n}.
$$
Multiplying both sides by $n$ and subtracting $\bar x k$ from both
sides, we get
$$
(M-\bar x )k=(\bar x -1)(n-k)+r-r'=\delta -r',
$$
from which it follows that $(M-\bar x )k\leq \delta$. Since $r'
<k$, it also follows that $\delta <(M+1-\bar x )k$.
We next want to see that we may find a matrix $F_M$ as described in
the statement of the theorem. We first consider the top $r$ rows of
$F_M$. Using the same reasoning as in the proof of Lemma \ref{Ex1},
we may find elements of ${\mathbb K}$ to form these top $r$ rows so that all
square submatrices of the top $M(n-k)+r$ rows of $\mathcal{T} _M$
that are not trivially rank deficient have full rank. In
particular, all square submatrices of the top $\delta$ rows of
$\mathcal{F}_{\bar x,M+1-\bar x}$ have full rank. Denote the
$r\times k$ matrix consisting of these $r$ rows by $F_M'$. Since
$\delta <(M+1-\bar x )k$, $\text{rank } \mathcal{F} _{\bar x
,M+1-\bar x} \geq
\delta$ will hold regardless of how the entries of the bottom $n-k-r$ rows of $F_M$ are chosen. To find entries for these rows
so that ${\rm rank}\, \mathcal{F}_{\bar x,M+1-\bar x} =\delta$, consider the top
$\delta$ rows of $\mathcal{F} _{\bar x ,M-\bar x}$. Since $\delta
\geq (M-\bar x )k$, we may choose $M-\bar x$ of these $\delta$ rows
to form an $(M-\bar x )k\times (M-\bar x )k$ submatrix that
necessarily has full rank. This means that the last $n-k-r$ rows of
$\mathcal{F} _{\bar x ,M-\bar x }$ may each be expressed as a linear
combination of the rows of our chosen submatrix. Consequently, we
may take the last $n-k-r$ rows of $F_M$ to be the corresponding
linear combinations of the rows of
$$
\left[
\begin{array}{c}
F_{M+1-\bar x} \\
F_{M+2-\bar x} \\
\vdots \\
F_M'
\end{array}
\right]
$$
extending the rows of our chosen submatrix. With this, we have
found an $F_M$ so that $\text{rank } \mathcal{F} _{\bar x ,M+1-\bar
x } =\delta$.
Suppose finally that $(\mathcal{T} _M)_{\bar j}^{\bar
i}:=(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_{l+n-k-r}}_{j_1,j_2,\ldots
,j_l}$ is a submatrix of $\mathcal{T}_M$ that is not trivially rank
deficient and does not have full rank. Then, in particular,
$(\mathcal{T}_M)^{i_1,i_2,\ldots ,i_l}_{j_1,j_2,\ldots ,j_l}$ does
not have full rank. Since $(\mathcal{T}_M)^{i_1,i_2,\ldots
,i_l}_{j_1,j_2,\ldots ,j_l}$ is contained in the top $M(n-k)+r$ rows
of $\mathcal{T} _M$, it must be trivially rank deficient. By Lemma
\ref{B}, there exists a smallest integer $t\in \{ 1,\ldots ,l \}$
such that
$$
j_t>\Big\lceil \frac{i_t}{n-k} \Big\rceil k.
$$
Since $(\mathcal{T} _M)_{\bar j}^{\bar i}$ is not trivially rank
deficient, it also follows from Lemma \ref{B} that
$$
j_{\tau}\leq\Big\lceil\frac{i_{\tau +n-k-r}}{n-k}\Big\rceil k \,\,\, \forall \,\tau \in
\{1,\ldots ,l \},
$$
so that $(\mathcal{T} _M)_{\bar j}^{\tilde i}:=(\mathcal{T}
_M)^{i_{t+n-k-r},i_{t+1+n-k-r},\ldots
,i_{l+n-k-r}}_{j_t,j_{t+1},\ldots ,j_l}$ is not trivially rank
deficient. Since $j_t>k$, $(\mathcal{T} _M)_{\bar j}^{\tilde i}$
must be a submatrix of $\mathcal{T} _L$. Thus, $(\mathcal{T}
_M)_{\bar j}^{\tilde i}$ has full rank. Recalling how $t$ was
chosen, we conclude that $(\mathcal{T} _M)_{\bar j}^{\bar i}$ has
full rank. This is a contradiction. We conclude that, if a submatrix
$(\mathcal{T} _M)_{\bar j}^{\bar i}$ is not trivially rank
deficient, then it has full rank.
\end{proof}
\begin{cor}\label{FM4}
Let $\left\{F_0,\ldots ,F_M\right\}$ be as in Theorem~\ref{FM3}.
Then, the expression (\ref{rank}) evaluates to $\delta$.
\end{cor}
We are now ready to finish our existence proof.
\begin{theo}\label{Ex4}
An MDP and sMDS $(n,k,\delta )$-code exists over a sufficiently
large finite field.
\end{theo}
\begin{proof}
By Lemma \ref{Ex1} and Theorem~\ref{FM3}, we can find a sequence
$\left\{F_0,\ldots ,F_M\right\}$ of matrices in ${\mathbb K} ^{(n-k)\times
k}$ such that
\begin{enumerate}
\item every square submatrix of $\mathcal{T}_L$ that is not
trivially rank deficient has full rank;
\item every $(l+n-k-r)\times l$
submatrix of $\mathcal{T}_M$ that is not trivially rank deficient
has full rank;
\item the minimum possible degree of a partial realization of
$\left\{F_0,\ldots ,F_M\right\}$ is $\delta$.
\end{enumerate}
Since the matrices $\left\{F_0,\ldots ,F_M\right\}$ have only
finitely many entries, these entries all belong to some
finite subfield ${\mathbb F}$ of ${\mathbb K}$. From~\cite[Theorem 1]{te70}, there is
a minimal realization $(A,B,C,D)\in S ^{\delta}_{k,n}$ of the
sequence $\left\{F_0,\ldots,F_M\right\}$ with entries in ${\mathbb F}$. By
Theorem~\ref{CharsMDS}, the $(n,k,\delta )$-code represented by
$(A,B,C,D)$ is both MDP and sMDS.
\end{proof}
With this, we have shown that the conjecture in~\cite{gl03r} is
correct: codes having both the MDP and sMDS properties exist for all
parameters $(n,k,\delta )$. It remains an open problem how to
explicitly construct matrices of the form (3.4) leading to
codes with these properties; this must be
left for future research.\\
\noindent {\bf Acknowledgements:} The author wishes to thank
Joachim Rosenthal, Heide Gluesing-Luerssen, and Jos\'e Ignacio
Iglesias Curto for helpful comments during the preparation of this
paper. He also wishes to thank the anonymous referees for their
careful readings and detailed comments.
\section{Background}
O'Shea et al.~\cite{o2016convolutional} first introduced deep learning approaches to I/Q modulation pattern classification, showing great accuracy improvements over traditional statistical methods. In~\cite{o2018over}, a thorough exercise in various methods for radio signal classification was conducted, showing that deep learning architectures such as ResNets significantly outperform advanced statistical machine learning methods such as gradient-boosted trees with hand-crafted high-order statistical features. In~\cite{ramjee2019fast}, Ramjee et al. furthered this work by trying many different deep learning approaches to classifying modulated radio signals and investigated how best to reduce the training time of the approaches. The deep learning architectures investigated included DenseNet, Convolutional Long Short-term Deep Neural Network (CLDNN), ResNet, and LSTM architectures. Their results showed that the ResNet outperformed all other architectures across all SNRs tested. Ramjee et al. demonstrated that, in general, more sophisticated architectures outperform those demonstrated by O'Shea et al., since they are capable of mapping new non-linear relationships in addition to having many more parameters to optimize over.
Work in this problem space is generally aimed at improving on the modulation classification task developed in~\cite{o2016convolutional}. However, direct comparison is difficult because of: (a) inconsistency in reporting the train/test split on a dataset~\cite{west2017deep,luo2019radio}; (b) inconsistency in how the train/test sets are formed, i.e., whether or not the data is randomly shuffled~\cite{xu2020spatiotemporal}; and (c) use of a simpler dataset~\cite{lin2020hybrid,wu_sun_wei_zhao} that has many more samples per class and fewer classes. To the best of our knowledge, few publications make any effort toward repeatability or reproducibility.
In this study, we test our methods against approaches that have performed competitively in the literature, if not at the state of the art. We demonstrate that our methods substantially outperform these approaches: complex models reach a peak classification accuracy of 92.4\% compared to a peak of 83.6\%, outperforming parameter-matched models by over 10\% and models with equivalent inference speed by over 10\%.
\section{Complex Convolutions for Real-Valued Inputs}
\label{our_way}
Convolutional neural networks do not actually compute convolutions but rather cross-correlations. In applications such as telecommunications, computing a convolution in the complex domain is important for filtering a complex data stream (known as I/Q data) and extracting features for classification. Treating a two-dimensional array consisting of real and imaginary components as a two-dimensional array of real values does not allow the network to learn from the joint correlations that inherently exist in I/Q data~\cite{krzyston2020complex, krzyston_icpr}.
Recent work~\cite{krzyston2020complex} showed how real-valued deep learning frameworks can compute complex-valued convolutions using standard convolutional layers followed by a linear combination. Implementing complex convolutions in the architecture from~\cite{o2016convolutional} drastically improved classification accuracy. Further work~\cite{krzyston_icpr} showed that complex convolutions are able to learn better feature representations in noisy environments and are more effective than naively adding more parameters to a network.
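The linear-combination trick can be illustrated directly in NumPy: a complex convolution of an I/Q stream with a complex kernel decomposes into four real convolutions, combined as $(a+bi)*(c+di)=(a*c-b*d)+i(a*d+b*c)$. The exact layer arrangement in~\cite{krzyston2020complex} differs in detail; this only demonstrates the underlying identity:

```python
import numpy as np

def complex_conv_via_real(x_r, x_i, w_r, w_i):
    """Complex convolution assembled from four real convolutions.

    (x_r + i x_i) * (w_r + i w_i)
      = (x_r * w_r - x_i * w_i) + i (x_r * w_i + x_i * w_r),
    where * is real 1-D convolution, as a real-valued framework would
    compute it channel by channel.
    """
    real = np.convolve(x_r, w_r) - np.convolve(x_i, w_i)
    imag = np.convolve(x_r, w_i) + np.convolve(x_i, w_r)
    return real, imag

# Toy I/Q stream and complex kernel
x = np.array([1.0 + 2.0j, -1.0 + 0.5j, 0.0 - 1.0j])
w = np.array([0.5 - 1.0j, 2.0 + 0.0j])

real, imag = complex_conv_via_real(x.real, x.imag, w.real, w.imag)
reference = np.convolve(x, w)  # NumPy's native complex convolution
```

The two results agree exactly, since the decomposition is just complex multiplication written out in real arithmetic.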
Recently, another approach was developed to enable deep learning paradigms to compute complex convolutions~\cite{chakraborty2019surreal}. This work takes a geometric approach to understanding the relationship between the real and imaginary components by defining the convolution as a weighted Fr\'{e}chet mean on a Lie group. This new form of convolution necessitated the development of a new activation function, $G$-transport. This method slightly underperformed the work of~\cite{o2016convolutional} while producing a model that was 70\% of its size.
For comparison, our method of computing complex convolutions, originally demonstrated in~\cite{krzyston2020complex}, used 1.0038 times as many parameters as the network in~\cite{o2016convolutional}. Although our complex convolutions use twice as many parameters as traditional convolutions, the overall increase is slight because the majority of the parameters are in the dense layers. Our method nevertheless outperformed~\cite{o2016convolutional} with statistical significance over five trials~\cite{krzyston_icpr}. Further, a qualitative analysis of activation maximizations in~\cite{krzyston_icpr} showed that the features learned with complex convolutions better capture the relationship between the I and Q components than those of traditional CNNs.
\section{High-Capacity CNN Architectures with Complex Convolutions}
To date, Krzyston et al. have tested complex convolutions by linear combination~\cite{krzyston2020complex, krzyston_icpr} only on the low-capacity CNN described in~\cite{o2016convolutional}, which has two convolutional layers and two dense layers. The network in~\cite{krzyston2020complex}, referred to here as Krzyston 2020, differs only in that the first convolutional layer is complex.
Following the work of AlexNet in 2012~\cite{NIPS2012_4824}, many follow-up works evaluated the relationship between the depth of a network and improvements in performance~\cite{simonyan2014very, szegedy2015going}. However, gradients shrink as networks get deeper, preventing deep architectures from learning; this is better known as the vanishing gradient problem.
The vanishing gradient problem was ultimately addressed by Residual Networks~\cite{he2016deep}, better known as ResNets. ResNets immediately became the state of the art across computer vision and remain a leading architecture for image classification and pattern recognition problems. Residual Networks were followed by the development of Densely Connected Networks~\cite{huang2017densely}, known as DenseNets.
In this work, we integrate complex convolutions into state of the art CNN paradigms, Residual Networks, Densely Connected Networks, as well as a combination of the two, which we call a Dense ResNet, inspired by~\cite{liublind}.
\subsection{Residual Networks}
Residual connections~\cite{he2016deep} in neural networks were proposed to address the problem of vanishing gradients and to enable networks to be deeper than originally thought possible. By adding the input of a block to its output via an identity shortcut, features are able to propagate further through the network, suggesting more robust representation learning, and the resulting networks prove easier to optimize. Figure~\ref{fig:res} shows how a residual connection is formed.
\begin{figure}
\centering
\includegraphics[width=2 in]{residual_h5.eps}
\caption{Residual connection, originally described in~\cite{he2016deep}.}
\label{fig:res}
\end{figure}
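In code, the shortcut of Figure~\ref{fig:res} amounts to adding the block input back onto the block output. Below is a minimal NumPy sketch, with plain matrix multiplies standing in for the convolutional layers of an actual ResNet block:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(W2 @ relu(W1 @ x) + x): the identity shortcut lets
    features (and gradients) bypass the transformation entirely."""
    return relu(W2 @ relu(W1 @ x) + x)

x = np.array([1.0, -2.0, 3.0])
# With zero weights the block reduces to relu(x): the identity path alone.
W_zero = np.zeros((3, 3))
y = residual_block(x, W_zero, W_zero)
```

The zero-weight case makes the point of the construction: even if the learned transformation contributes nothing, the input still passes through unchanged.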
Utilizing the PyTorch ResNet GitHub repo~\cite{pytorch_res}, we develop two smaller ResNets, ResNet-18 and ResNet-34: smaller to keep inference speeds low, and two different sizes to test whether more layers lead to a meaningful improvement in classification performance. The number indicates the number of computational layers, counted in the same manner as the more well-known ResNets. We developed the complex convolution variants, named ResNet-18 C and ResNet-34 C respectively, in which all convolutional layers compute complex convolutions.
\subsection{Densely Connected Networks}
Dense connections~\cite{huang2017densely} in neural networks were proposed as an alternative to residual connections for addressing the vanishing gradient problem while adding far fewer parameters to optimize than ResNets. In dense connections, each `block', composed of convolutional layers in series, is connected to every other block, and the feature maps from one layer are concatenated to the input of the next. Figure~\ref{fig:dense} shows a DenseNet comprised of four densely connected dense blocks.
\begin{figure}
\centering
\includegraphics[width=3.39 in]{Dense4.eps}
\caption{DenseNet architecture, originally described in~\cite{huang2017densely}, with only four dense blocks.}
\label{fig:dense}
\end{figure}
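The concatenation pattern of a dense block can be sketched as follows: each layer receives the concatenation of all earlier feature maps, so with growth rate $g$ the feature count grows linearly. This is a simplified NumPy illustration with toy layers; real DenseNet layers are convolution, batch-norm, and ReLU stacks:

```python
import numpy as np

def dense_block(x, layers):
    """Each layer sees the concatenation of the input and all earlier
    feature maps; its own output is appended for the next layer."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features))
        features.append(out)
    return np.concatenate(features)

# Toy layers: each maps its (growing) input to a fixed g-feature output.
g = 2                               # growth rate
x = np.array([1.0, 2.0, 3.0])       # 3 input features
layers = [lambda v, n=n: np.full(g, float(n)) for n in range(4)]
out = dense_block(x, layers)        # 3 + 4 * g = 11 features
```

Note how the output retains the original input features at its front: nothing is overwritten, only appended.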
Utilizing the PyTorch DenseNet GitHub repo~\cite{pytorch_dense}, we develop two smaller DenseNets and their complex convolution variants, DenseNet-57 (C) and DenseNet-73 (C).
\subsection{Dense ResNets}
Inspired by the work done in~\cite{liublind}, we developed an architecture that utilizes both dense and residual connections: Dense ResNets. Dense ResNets leverage both means of addressing the vanishing gradient problem and of improving feature propagation. Following~\cite{liublind}, the Dense ResNets have six blocks, each comprised of four of the residual connections seen in Figure~\ref{fig:res} (totaling eight convolutional layers per block), followed by two dense layers. Additionally, the kernel size decreases every other block, from 7 to 5 to 3.
With the same motivation as for the smaller ResNets and DenseNets, we developed two sizes of the Dense ResNet, Dense ResNet-35 and Dense ResNet-68, which differ in the number of blocks: Dense ResNet-68 has six blocks and Dense ResNet-35 has three, each with descending kernel sizes. The number indicates the total number of computational layers in the architecture, counted in the same fashion as for a ResNet or DenseNet architecture. Complex convolutional variants were created, denoted Dense ResNet-35 C and Dense ResNet-68 C respectively.
\section{Experimental Design}
These architectures were trained and tested on the RadioML 2016.10a open-source dataset used in~\cite{o2016convolutional}. This standard baseline dataset for I/Q modulation classification consists of 11 modulations (8 digital and 3 analog) at SNR levels from -20 to 18 dB in 2 dB steps. Additionally, the dataset includes variation in the following properties: center frequency offset, sample clock rate, sample clock offset, and initial phase. There are 1,000 samples of each modulation scheme at each SNR value. The shape of each sample is 2 x 128, representing 128 samples in time and 2 channels, I and Q~\cite{o2016convolutional}.
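The resulting bookkeeping is simple arithmetic; a quick sanity check in Python (the shapes follow directly from the numbers above):

```python
# RadioML 2016.10a bookkeeping: 11 modulations x 20 SNR levels
# (-20 to 18 dB in 2 dB steps) x 1,000 examples, each of shape 2 x 128.
n_mods = 11
snrs = list(range(-20, 20, 2))
n_per_class = 1000

n_total = n_mods * len(snrs) * n_per_class
data_shape = (n_total, 2, 128)
```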
\begin{figure}
\centering
\includegraphics[width=3.39 in]{RML2016_train_-20_18_test_-20_18_SNR_Classification_edited.eps}
\caption{Average classification accuracy as a function of SNR with standard deviation bars. The boxes enclose the accuracies of the complex and traditional convolutional high-capacity architectures respectively.}
\label{fig:class_all}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=4.3 in]{RML2016_Summary_Plot.eps}
\caption{Average overall accuracies, with standard deviation bars, for all models tested. The acronyms are as follows: Krzyston is from~\cite{krzyston2020complex}, RN = ResNet, DN = DenseNet, DRN = Dense ResNet, and C = utilizes complex convolutions. The unpaired Student's t-test was used to compute p-values determining the statistical significance of one model's performance versus another.}
\label{fig:summary}
\end{figure*}
The data was shuffled across both the modulation formats and SNR levels, then split 50/50 into train/test sets. Each architecture performed the classification task five times, reshuffling to obtain new train/test sets each time. Inference speed was quantified as the time for a trained network to make a prediction, with the test samples and trained model loaded onto the GPU. All trials were performed on a single NVIDIA GeForce GTX 1080 Ti GPU.
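This shuffle-and-split procedure can be sketched generically (not the authors' exact code; NumPy, with a toy array standing in for the dataset):

```python
import numpy as np

def split_half(X, y, seed):
    """Shuffle examples across modulations and SNRs, then split 50/50."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    return (X[idx[:half]], y[idx[:half]]), (X[idx[half:]], y[idx[half:]])

X = np.arange(20).reshape(10, 2)   # 10 toy examples
y = np.arange(10)
(train_X, train_y), (test_X, test_y) = split_half(X, y, seed=0)
```

Re-running with a fresh seed yields a new shuffle, matching the reshuffling performed for each of the five trials.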
\section{Results}
In Figure~\ref{fig:class_all}, the averaged classification accuracy with standard deviation bars is plotted as a function of SNR. Boxed and shown by the darker colored plots, all of the complex variants of the high-capacity architectures outperform their traditional convolution counterparts, especially as SNR increases above -6 dB. For all SNRs greater than -2 dB, these complex variants improve accuracy by at least 12\%. Further, the addition of skip connections enabled the complex networks to perform much better than the Krzyston 2020 network. Krzyston 2020 outperforms all of the higher-capacity networks utilizing traditional convolutions when tested on samples below 2 dB SNR. The Dense ResNet-68 C achieved state-of-the-art performance with an average overall accuracy of 61.5\%, peaking at 92.4\% at 14 dB SNR. It achieves over 80\% accuracy for SNRs greater than -2 dB while requiring only 128 digital I/Q samples.
Figure~\ref{fig:summary} compares the classification accuracies, over five trials, of the complex convolutional networks versus the traditional variants as well as the Krzyston 2020 network. All of the high-capacity complex architectures outperformed their traditional counterparts and the Krzyston 2020 architecture with statistical significance. The unpaired Student's t-test was used to compute p-values.
\begin{figure}
\centering
\includegraphics[width=3.39 in]{Params_v_Acc.eps}
\caption{Average classification accuracy plotted against the number of parameters in the architecture, in millions.}
\label{fig:param_v_acc}
\end{figure}
Figure~\ref{fig:param_v_acc} shows the relationship between average overall accuracy and the number of trainable parameters for each model. High-capacity architectures nearly doubled in complexity when enabled to compute complex convolutions, due to the large number of convolutional layers in each architecture and the 2 x $m$ kernel size that complex convolutions leverage. However, when comparing models with nearly the same number of parameters, the ability to perform complex convolutions substantially improves performance. For example, the Dense ResNet-68 and Dense ResNet-35 C models have nearly the same number of parameters, as do the ResNet-34 and ResNet-18 C architectures, yet in both cases the architecture able to compute complex convolutions outperforms the other, on average, by 10.15\% and 12.49\% respectively. Further, Figure~\ref{fig:param_v_acc} shows there was no substantial performance benefit from adding more layers/parameters to the models.
Figure~\ref{fig:speed_v_acc} shows the relationship between average overall accuracy and average normalized inference speed. The average inference speed of each network was normalized to that of the Krzyston 2020 network, which was 0.398 $\mu$s/test sample. Complex models tend to take longer to compute inferences. However, a complex network greatly outperforms a traditional architecture of equal inference speed. For example, the Dense ResNet-68 and Dense ResNet-35 C models compute inferences at the same speed, but the Dense ResNet-35 C outperforms it by 10.15\%.
\begin{figure}
\centering
\includegraphics[width=3.39 in]{Speed_v_Acc.eps}
\caption{Average classification accuracy plotted against average inference speeds, normalized by the Krzyston 2020 network~\cite{krzyston2020complex} (0.398 $\mu$s/sample).}
\label{fig:speed_v_acc}
\end{figure}
\section{Conclusion}
In this work, we examined the modulation classification performance of high-capacity architectures when enabled to compute complex convolutions. Combining complex convolutions and various types of skip connections enables state-of-the-art performance on I/Q modulation classification. An architecture's ability to compute complex convolutions yields over 10\% higher accuracy than simply using a larger architecture with the same number of parameters or an architecture that infers at the same speed.
Future work includes speeding up the performance of these higher-capacity, complex convolutional networks for real-world applications via quantization/pruning. Further, recent articles~\cite{spooner_2020_bpsk,spooner_2020_more} detailed fundamental issues with the RML2016a and RML2016b datasets, which are commonly used in the field. The impacts of these issues on classification performance and on the generalizability of the trained models are an area of future investigation.
\vfill
\pagebreak
\Urlmuskip=0mu plus 1mu\relax
\bibliographystyle{IEEETran}
\section{Introduction}
The discovery of the `butterfly effect' \cite{Lo63a} effectively ended the idea that weather forecasting can be understood purely as the problem of integrating a deterministic system forward in time. Instead, the problem of accurate weather forecasting becomes one of determining, from a given initial state, the likely trajectories of the atmosphere on its underlying attractor \cite{Slingo2011}. Similarly, the problem of producing reliable climate projections can be understood as determining how, and to what extent, the likelihood of traversing different trajectories changes in the presence of an external forcing \cite{Corti1999, Palmer1999, Woollings2010a}. As such, it becomes natural to ask whether the climate attractor exhibits significant deviations from Gaussianity, since such deviations, even locally, may strongly constrain the available trajectories. In other words, understanding the `shape' of the attractor becomes a problem of great practical importance.
The study of local non-Gaussianity in the atmosphere has classically been done under the guise of so-called regimes \cite{Vautard1990, Michelangeli1995, Lorenz2006}. However, despite being studied since the 1970s, no clear-cut and generally accepted definition of a regime exists. This is likely because of the wide variety of behaviour exhibited by dynamical systems that are considered to have regimes. Indeed, most definitions found in existing studies, often stated only implicitly, are based either on density considerations, where a regime corresponds to a region of above-average density (`clusters') \cite{Stephenson04, Vautard1990, Straus2010}, or on temporal persistence criteria, whereby a regime is a phenomenon with a clear lifecycle and longer-than-average lifespan \cite{Mo1987, Lorenz2006, Franzke2008, Falkena2020}; some studies also combine the two \cite{Falkena2020}. These definitions suffer from two key problems. Firstly, the algorithms involved often require essentially ad-hoc choices up front, such as the choice of the number of clusters in $K$-means clustering algorithms, or temporal persistence thresholds, which directly influence the output regimes. This adds an additional layer of complexity to the analysis, as it can be hard to motivate the choice of one parameter over another. Secondly, and more seriously, as we will show, any simplistic definition based on density and temporal persistence invariably fails to account for one or more classical regime systems in the literature. While a more technical definition based on exact solutions (e.g., fixed points and periodic orbits) of the flow can work, and can even be computationally tractable for relatively low-dimensional systems \cite{gibson2008visualizing, ding2016estimating}, such definitions suffer from the `curse of dimensionality' and are generally limited to simple systems.
In a state-of-the-art application of these concepts to atmospheric modelling, \cite{lucarini2020new} identified unstable periodic orbits (UPOs) corresponding to zonal and blocking events in a low resolution quasi-geostrophic model. However, the dimensionality of this model still sits well below that of weather and climate models, let alone the physical system itself. Inherently, UPOs and their stability are model features, limited by the accuracy of their associated models and such analysis cannot be directly applied to observational data.
The emergence of the field of topological data analysis \cite{carlsson08} offers another perspective, by bringing the attention away from specific dynamical properties, such as density and temporal persistence, and back to a more general consideration of the `shape', i.e., the \emph{topology}, of dynamical systems. We will argue that the only clearly unifying feature shared between a number of classic examples of regime systems (Lorenz `63, Lorenz `96, Charney-deVore and the North Atlantic jet) is their non-trivial topology, and that the particular topological structure associated with these systems captures well their most familiar features. This suggests the following informal definition of regimes: a dynamical system supported on an attractor $A$ is said to exhibit regime structure if $A$ has non-trivial topological structure. To make this informal definition more formal, we make use of persistent homology \cite{otter2017roadmap}, a technique in topological data analysis that gives a principled way of computing homology groups, and hence topological invariants, for point-cloud data sets. In order to overcome the problem that point clouds (i.e., a disconnected set of points) have trivial topology, in persistent homology one constructs a filtration of homology groups, by thickening each point with a ball of radius $r$ and computing homology for increasing values of $r$. Because many of the most iconic topological features of dynamical systems, such as the two holes in the Lorenz `63 system (cf. Figure \ref{fig:lorenz_holes_evolution}), vanish in the limit of infinitely many points, it turns out to be crucial to augment the standard filtration by taking into account density. To do this, we compute, for any dynamical system, a \emph{bifiltration} of topological invariants, which encodes non-trivial topological features of the attractor, such as connected components (i.e. well-separated regions of points), holes (e.g. as in Lorenz `63) and higher-dimensional voids. 
The topological non-triviality of this bifiltration then becomes a way of defining regime structure in a robust and computationally tractable way.
Let us make some remarks on the technical benefits of this approach. Firstly, our method does not require an up-front choice of parameters that directly influences the regime structure found, unlike, for instance, K-means clustering. Secondly, homological computations do not essentially depend on the dimensionality of the data set, meaning this method does not suffer from the usual `curse of dimensionality'. Finally, persistent homology is model-independent, in that it does not require any prior knowledge of the underlying equations defining the data set, unlike, for example, the computation of UPOs. This method therefore provides, in principle, a way to search for regime structure in arbitrary high-dimensional data sets with no a priori knowledge of whether regimes are present.
The potential of topological techniques for dynamical systems analysis was first suggested in, among others, \cite{Maletic2016}, in which it was demonstrated that persistent homology can locate the holes of the Lorenz `63 system. Of particular relevance is the recent work of \cite{Gokhan2020}, which uses persistent homology to obtain a simplified representation of chaotic dynamics, an approach similar in spirit to ours. A recent application of persistent homology to the real atmosphere is \cite{Muszynski2019}, which studies `atmospheric rivers' using a combination of homology and machine learning. For a different application of topological ideas to ocean modelling, see \cite{Stanley2019}. There are several additional lines of work that have used methods from topology to study dynamical systems, such as \cite{KM16} and \cite{KLT16}, to name a few.
Finally, a cautionary note on language. The use of the word `persistent' in `persistent homology' comes from the way in which topological features that persist for a certain number of filtration values are considered to give meaningful information. In particular, there is no relationship with the temporal persistence of regime states. To avoid ambiguity, in this manuscript the word `persistence' will always refer to topological persistence, while when referring to temporal persistence of regime states, we will make use of the qualifier `temporal'.
The paper is structured as follows. In Section \ref{sec:data} we provide details of the dynamical systems used, including the observational atmospheric data. In Section \ref{sec:background}, details concerning persistent homology are presented, and the need for a bifiltration is motivated. In Section \ref{sec:methodology} we detail the algorithmic procedure used to compute topological metrics. The results of applying this methodology to our suite of data sets are shown and discussed in Sections \ref{sec:results} and \ref{sec:discussion}, with conclusions and future directions in Section \ref{sec:conclusions}.
\section{Data}
\label{sec:data}
\subsection{Lorenz `63}
The Lorenz `63 system, first introduced and studied in \cite{Lo63a}, is a chaotic dynamical system in three variables $x,y,z$ defined by the equations
\begin{equation}
\begin{split}
\frac{dx}{dt} &= \sigma (y-x), \\
\frac{dy}{dt} &= x(\rho -z)-y, \\
\frac{dz}{dt} &= xy - \beta z,
\end{split}
\end{equation}
and represents a highly simplified model of atmospheric convection. The attractor famously resembles a butterfly, with two regimes corresponding to the two `wings'; its regime behaviour has been extensively studied \cite{Palmer1994, Yadav2005}. Here we use the standard choice of constants $\sigma, \beta$ and $\rho$, namely $\sigma=10, \beta=8/3, \rho=28$. We generate a timeseries of 20000 points by integrating the equations with a forward Euler scheme at a timestep $dt=5\cdot 10^{-5}$.
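For concreteness, the forward Euler integration described above can be sketched as follows. This is a minimal illustration (the function names are ours); for production runs one would typically prefer a higher-order or adaptive integrator.

```python
# Minimal sketch of the forward Euler integration of the Lorenz '63
# system, with the standard parameter values used in the text.
SIGMA, BETA, RHO = 10.0, 8.0 / 3.0, 28.0

def lorenz63_step(x, y, z, dt=5e-5):
    """Advance the state (x, y, z) by one forward Euler step."""
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def integrate(n_steps, state=(1.0, 1.0, 1.0), dt=5e-5):
    """Generate a timeseries of n_steps points from an initial state."""
    traj = [state]
    for _ in range(n_steps):
        state = lorenz63_step(*state, dt=dt)
        traj.append(state)
    return traj
```

With `n_steps=20000` and `dt=5e-5` this reproduces the sampling used here; the initial condition is arbitrary, since any generic trajectory settles onto the attractor.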
\subsection{Charney--DeVore}
\label{sec:cdv}
The Charney--DeVore (CdV) model, first derived in \cite{Charney1979}, provided one of the first examples of multiple equilibria in an atmospheric model, and can be thought of as a crude model of large-scale midlatitude blocking dynamics. It is based on a severe spectral truncation of the barotropic vorticity equation in a $\beta$-plane channel, shown in Equation (\ref{eq:master}), where $\Psi$ is a streamfunction, $\gamma h$ is an orographic profile and $\Psi^*$ is an external forcing.
\begin{equation}
\frac{\partial }{\partial t}\nabla^{2}\Psi=-J(\Psi, \nabla^{2}\Psi +\gamma h) -\beta\frac{\partial\Psi}{\partial x}-C(\Psi-\Psi^{*})
\label{eq:master}
\end{equation}
While in \cite{Charney1979} the main focus is on a three-mode truncation of the system, when a marginally less severe truncation keeping three zonal and two meridional modes is applied, the p.d.e.\ reduces to the six-equation o.d.e.\ system shown in Equation (\ref{eq:xx}), containing quadratic non-linearities and linear Coriolis, orographic, and relaxation terms.
\begin{equation}\label{eq:xx}
\begin{split}
\dot{x_{1}}& =\tilde{\gamma_{1}}x_{3} -C(x_{1} -x_{1}^*)\\
\dot{x_{2}}& = \beta_{1} x_{3} -\alpha_{1}x_{1}x_{3} -\delta_{1}x_{4}x_{6} -C(x_{2} -x_{2}^*)\\
\dot{x_{3}}& =-\beta_{1} x_{2} -\gamma_{1}x_{1} +\alpha_{1}x_{1}x_{2} +\delta_{1}x_{4}x_{5} -C(x_{3} -x_{3}^*)\\
\dot{x_{4}}& =\tilde{\gamma_{2}}x_{6} +\epsilon(x_{2}x_{6} - x_{3}x_{5}) -C(x_{4} -x_{4}^*)\\
\dot{x_{5}}& =\beta_{2} x_{6} -\alpha_{2}x_{1}x_{6} -\delta_{2}x_{3}x_{4} -C(x_{5} -x_{5}^*)\\
\dot{x_{6}}& =-\beta_{2} x_{5} -\gamma_{2}x_{4} +\alpha_{2}x_{1}x_{5} +\delta_{2}x_{2}x_{4} -C(x_{6} -x_{6}^*)
\end{split}
\end{equation}
A parameter set for which this model produces self-sustaining chaotic dynamics was found in \cite{Crommelin2004}, and we use those same parameters here (see ibid.\ for a full discussion of the constants and parameter values, and of the meaning of each term in Equation \ref{eq:xx}). An interactive simulation showing the evolution of this system can be found at \url{joshdorrington.github.io/cdv_simulator/}.
This system is included here because it exhibits multimodality (i.e., regimes) in a model which is significantly more complex and more physically interpretable than the Lorenz `63 system, and which also has a particularly challenging phase-space structure. The regime dynamics are of Pomeau--Manneville type \cite{Pomeau1980} in that they consist of long-lived quasi-stationary periods in the vicinity of a weakly unstable fixed point, punctuated by a `bursting' behaviour and a transition to chaotic flow. These chaotic transients shadow unstable homoclinic orbits radiating from the fixed point, and so lend considerable structure to the model attractor, with a series of strongly preferred looping trajectories. A timeseries of 20000 points was generated by integrating the equations with a forward Euler scheme at a timestep $dt=2\cdot 10^{-4}$. In order to visualise the data in three dimensions, a truncation of the six-dimensional space is required. Because around 98\% of the variance is explained by the first three empirical orthogonal functions (EOFs), we use these to define a truncated space. Homological computations were found to be essentially unchanged when using the truncated space or all six dimensions, so the truncated space is used in all computations.
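The EOF truncation used here is a standard principal component computation; a minimal sketch using NumPy follows (the function name is ours, and this is an illustration of the technique rather than the exact implementation used in the paper):

```python
import numpy as np

def eof_truncate(X, n_eofs=3):
    """Project an (n_samples, n_dims) data matrix onto its leading EOFs.

    The EOFs are the principal directions of the anomaly field; the
    returned explained-variance fractions let one verify how much
    variance the truncation retains (around 98% for CdV in the text).
    """
    anomalies = X - X.mean(axis=0)
    # SVD of the anomaly matrix: the rows of Vt are the EOFs.
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    # Principal components: projection onto the leading EOFs.
    truncated = anomalies @ Vt[:n_eofs].T
    return truncated, explained[:n_eofs]
```

The same routine is reused below for the Lorenz `96 and observational data, where truncated EOF subspaces are also employed.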
\subsection{Lorenz `96}
The Lorenz `96 model was introduced in \cite{Lorenz1996} as an idealised, chaotic model of the atmosphere which is of greater complexity than the Lorenz `63 system \cite{Karimi2010}. It is defined in our case by coupling eight variables $X_k$, $k=1, \ldots, 8$, representing large-scale variability, with 32 variables $Y_j$, $j=1,\ldots, 32$, representing small-scale variability, using the following equations:
\begin{equation}
\begin{split}
\dot{X_k} &= -X_{k-1}(X_{k-2} - X_{k+1}) - X_k + F
- \frac{hc}{b} \sum_{j=J(k-1)+1}^{kJ} Y_j; k=1, \ldots, K,\\
\dot{Y_j} &= -cbY_{j+1}(Y_{j+2} - Y_{j-1}) - cY_j
+ \frac{hc}{b}X_{int[(j-1)/J]+1}; j=1,\ldots,JK.
\end{split}
\end{equation}
Cyclic boundary conditions are then imposed: $X_{k+K} = X_k, Y_{j+JK}=Y_{j}$. The parameters are chosen as in \cite{Christensen2015}, which also discusses the meaning of the different constants.
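The coupled tendencies above, including the cyclic boundary conditions, can be sketched compactly in NumPy, where `np.roll` implements the cyclic index shifts (the function name is ours; this is an illustrative sketch, not the integration code used in the paper):

```python
import numpy as np

def l96_tendencies(X, Y, F, h, c, b):
    """Tendencies of the two-scale Lorenz '96 system.

    X has length K (large-scale), Y has length J*K (small-scale).
    Cyclic boundary conditions are handled implicitly by np.roll:
    np.roll(X, 1)[k] = X[k-1], np.roll(X, -1)[k] = X[k+1], etc.
    """
    K, JK = len(X), len(Y)
    J = JK // K
    # Large scale: advection, damping, forcing, small-scale coupling.
    dX = (-np.roll(X, 1) * (np.roll(X, 2) - np.roll(X, -1)) - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    # Small scale: fast advection, damping, large-scale coupling.
    dY = (-c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY
```

Each $X_k$ is coupled to its own block of $J$ small-scale variables, which is what the `reshape`/`repeat` pair expresses.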
Due to the interpretation of the equations in terms of large-scale modes coupled to small-scale modes, Lorenz `96 has been utilised in several studies looking at different ways to parameterise unresolved sub-grid scale variability in forecast systems \cite{Wilks2005, Christensen2015, Vissio2018, Gagne2020}. Its regime structure has been considered in, e.g., \cite{Lorenz2006} and \cite{Christensen2015}. The analysis of the latter also makes it clear that the key regime variability is concentrated in the first four EOFs; computations are therefore always done in the subspace spanned by these.
\subsection{Observational data: Euro-Atlantic jet regimes}
To represent real atmospheric data, two so-called reanalysis data sets are used. Actual observational data, whether from stations or satellite, are always unevenly distributed in time and space and therefore contain gaps. Reanalysis data fills in these gaps by blending observations with short-range weather forecasts using data assimilation methods. Two such reanalysis data sets are used here: ERA20C \cite{Poli2016}, which covers the period 1900--2010, and ERA-Interim \cite{Dee2011}, which covers the period 1979--2015. Because the period prior to 1979 suffers from a lack of satellite data, the ERA-Interim data set is generally considered more reliable. However, for the purpose of this paper, where we are looking to detect fine structure in phase space, we present results using ERA20C only. ERA-Interim data was found to produce qualitatively similar results, and so is not shown.
The general suitability of ERA-20C for regime-based studies has been commented on in previous studies \cite{Parker2019, Strommen2020}, and essentially relies on the fact that there is a long and consistent record of surface observations in the Euro-Atlantic sector, which will be our area of interest. The existence and properties of regimes in the wintertime Euro-Atlantic circulation has been extensively studied, either through the prism of pressure fields, typically geopotential height at 500hPa, or winds, in the form of zonal winds at 850hPa (hereafter ua850). Studies based on pressure data \cite{Vautard1990, Michelangeli1995, Dawson2012, Dorrington2020, Falkena2020} typically use clustering algorithms to classify distinct circulation regimes. On the other hand, wind data is usually processed more directly in order to capture the variability of the North Atlantic eddy-driven jetstream, a relatively coherent stream of zonal winds. By measuring the location of the maximum wind-speed of the jet, one can define the latitude of the jet on any given day: the histogram of this jet-latitude index is visibly and robustly trimodal, suggesting the existence of three distinct regimes \cite{Woollings2010b}. The differences between these two perspectives, which would a priori be expected to be equivalent, can be reconciled by taking into account the added variability coming from the speed of the jet, after which both pressure and wind data suggest three very robust regimes \cite{Madonna2017, Strommen2020}. Applications to predictability have been studied in both contexts, see, e.g., \cite{Cassou2008, Strommen2020}.
In this paper we will focus on how our framework views these three jet regimes, and so define a data set we will refer to as `JetLat'. This is a 3-dimensional data set consisting of the daily jet latitude and the daily values of the first and second principal components of ua850 anomalies. Data is always restricted to the North Atlantic region, defined by 15N--75N, 300E--360E, and to the winter season December--January--February (DJF). The jet latitude was computed using the methodology of \cite{Parker2019}, which also includes a discussion of the jet in ERA20C.
We note that we choose to use a data set which explicitly contains the jet latitude, already known to be multimodal, because we wish to validate our methodology against known regime systems before applying it to less well-understood contexts. The question of locating these jet regimes using unprocessed data (i.e., data not containing any prior knowledge of the jet-latitude index) will be discussed in the conclusions, in Section \ref{sec:conclusions}. The results of applying our methodology to pressure data will be discussed in Section \ref{sec:results}.
\section{Persistent homology for dynamical systems}
\label{sec:background}
Over the last 20 years, methods from the mathematical area of topology have been increasingly used to study data analysis problems. In this section we discuss how some of these methods can be used to study dynamical systems.
In topology one is interested in studying properties of shapes that do not change when one continuously deforms the shape, for instance when one squeezes or bends it.
If one considers an annulus as in Figure~\ref{F:annulus}, then no matter how the annulus is bent or stretched, it will still be composed of one piece and have one loop. One says that the number of pieces and loops of a shape are topological invariants. On the other hand, deformations that are not allowed include cutting or gluing. If one were to cut the annulus in half, as illustrated in Figure~\ref{F:annulus}, one would break it into separate pieces with no loops.
One can think of these invariants as giving a very coarse description of the shape of a space or of data.
\begin{figure}[h!]
\centering
(a)\includegraphics[scale=0.15]{Figure1_a.png} \hspace{0.5cm}
(b)\includegraphics[scale=0.2]{Figure1_b.png} \hspace{0.5cm}
(c) \includegraphics[scale=0.15]{Figure1_c.png}
\caption{(a) An annulus, and (b) a shape obtained by continuously deforming the annulus, which has the same number of pieces (components) and loops as the annulus. (c) A space obtained from the annulus by a deformation that is not continuous, and thus with a different number of pieces and loops. }
\label{F:annulus}
\end{figure}
The fact that topological invariants are very coarse is useful for data analysis, because, in particular, they do not depend on the choice of parametrisation, coordinates, or ambient dimension, and thus they are independent of many choices introduced during preprocessing steps. This is an aspect that is crucial in our work.
There are different ways to use topology to study data. Persistent homology, which is the method that we use in our work, is one of the standard techniques and has been very successful in many applications. In the remaining part of this section we first introduce persistent homology, and we then explain how it can be used to study the specific types of dynamical systems that we study in our work.
\begin{figure}[h!]
\includegraphics[scale=0.6]{Figure2.png}
\caption{(a) A set of points lying on a plane, with similarity given by proximity in the Euclidean distance. (b) A filtration of nested spaces, called a `Vietoris--Rips complex', obtained by connecting points within a certain distance by an edge, and filling in resulting triangles. Barcodes describing the lifetime of (c) components and (d) holes. We illustrate, using a yellow vertical dashed line, how we can read off information from the barcodes: at filtration value $1.4$ there are two components (the yellow line intersects two lines in the barcode for the components) and two holes (the yellow line intersects two lines in the barcode for the holes). }\label{F:example PH}
\end{figure}
\subsection{Persistent homology}
\label{sec:pers_hom}
Given experimental data composed of points or vectors representing measurements, together with a measure of similarity (e.g., given by proximity, or correlation), in persistent homology one considers a thickening of the data set at increasing similarity scales, see Figure~\ref{F:example PH} for an example. This process yields a nested sequence (a so-called `filtration') of increasingly thickened spaces. One then analyses the evolution (`persistence') of components, holes, voids, and higher-dimensional holes (`topological features') across the filtration.
The barcode is an algebraic invariant that summarises how the topological features evolve across the filtration: the left endpoint of an interval represents the
birth of a feature, while its right endpoint represents the death of the same feature. When a
feature is still `alive' at the largest thickening scale that one considers, the lifetime
interval is an infinite interval. For instance, we can read off from Figure~\ref{F:example PH}(c) that there are two components that have significantly longer lifetimes than the others (corresponding to the cluster of points forming a figure eight on the left of the figure, and the cluster of remaining points on the right), while from Figure~\ref{F:example PH}(d) we can infer that there are two holes that live much longer than the others, which correspond to the two holes in the figure-eight cluster.
The interpretation of the intervals in the barcode that we have given here is only one of the possible applications of persistent homology to the study of data. In other types of applications, it might be the intervals of a certain length, and not necessarily the longest ones, that encode significant information; see, for instance, \cite{Bendich2016,bubenik2020persistent}. In particular, the interpretation of the barcode is application specific. We discuss how we interpret the barcode in our work in more detail in Section \ref{sec:significance_test}.
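To make the construction concrete, the barcode for components ($0$-dimensional homology) of a Vietoris--Rips filtration can be computed by a single union-find pass over the edges sorted by length. The following is a self-contained toy version (for holes and higher-dimensional features one would use an optimised library; the function names are ours):

```python
from itertools import combinations
from math import dist

def zero_dim_barcode(points):
    """Barcode of connected components for the Vietoris-Rips filtration.

    Every point is born at filtration value 0; when two components merge
    at edge length r, one of them dies, giving the interval [0, r).  The
    component that survives all merges gets an infinite interval.
    """
    parent = list(range(len(points)))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    bars = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # two components merge: one interval ends here
            parent[ri] = rj
            bars.append((0.0, dist(points[i], points[j])))
    bars.append((0.0, float("inf")))  # the component that never dies
    return bars
```

Applied to two well-separated clusters, this produces exactly the picture of Figure \ref{F:example PH}(c): short intervals for within-cluster merges and one long interval per cluster.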
\subsection{Computational complexity of PH}
\label{sec:complexity}
The theory behind (one-parameter) persistent homology is well understood, and amounts to standard linear algebra. Conversely, the computation of the barcode is expensive, since the computational complexity can, in the worst case, grow exponentially in the size of the input data. To sidestep such difficulties, in this work we use optimised algorithms and sparsification techniques; see also Section \ref{sec:parameters}.
While the computational complexity of persistent homology depends on the size of the input data, and thus on the number of measurements, for the types of filtered spaces considered here it does not depend on the dimension of the ambient space. We refer the reader to the survey \cite{otter2017roadmap} for a detailed discussion of the computational complexity of the main persistent homology algorithms. A consequence for our work is that we can add more variables to our models without affecting the computational cost. This is one of the reasons that persistent homology is so effective for the study of dynamical systems.
\subsection{Optimal representatives of cycles}\label{SS:repr cycles}
\begin{figure}[ht]
\centering
(a) \begin{tikzpicture}
\node at (0,0) {$\bullet$};
\node at (0.9,0) {$\bullet$};
\node at (2,0) {$\bullet$};
\node at (2,1) {$\bullet$};
\node at (2,2) {$\bullet$};
\node at (1.2,2) {$\bullet$};
\node at (0,2) {$\bullet$};
\node at (0,0.8) {$\bullet$};
\node at (0,1.3) {$\bullet$};
\node at (0.7,0.7) {$\bullet$};
\node at (1.2,0.6) {$\bullet$};
\draw[thick, -,fill=yellow!20] (0,0)--(0.9,0)--(0.7,0.7)--(0,0);
\draw[thick, -,fill=yellow!20] (2,0)--(0.9,0)--(0.7,0.7)--(2,0);
\draw[thick, -,fill=yellow!20] (2,1)--(1.2,2)--(2,2)--(2,1);
\draw[thick, -,fill=yellow!20] (0,2)--(0,1.3)--(1.2,2)--(0,2);
\draw[thick, -,fill=yellow!20] (0,1.3)--(0,0.8)--(0.7,0.7)--(0,1.3);
\draw[thick, -,fill=yellow!20] (0,0)--(0,0.8)--(0.7,0.7)--(0,0);
\draw[thick, -,fill=yellow!20] (2,0)--(2,1)--(1.2,0.6)--(2,0);
\draw[thick, -,fill=yellow!20] (2,0)--(0.7,0.7)--(1.2,0.6)--(2,0);
\draw[thick, -, cyan] (0.7,0.7)--(1.2,0.6)--(2,1)--(1.2,2)--(0,1.3)--(0.7,0.7);
\draw[thick, -, purple] (0,0)--(2,0)--(2,2)--(0,2)--(0,0);
\end{tikzpicture}
\caption{(a) Two representatives of the same $1$-cycle: a non-optimal representative (purple), and an optimal one (cyan).
}\label{F: cycles}
\end{figure}
Given an interval in the barcode describing the lifetime of a component or hole, we are interested in studying the points in the data that correspond to such a component or hole. Such points are called `representatives' for cycles in dimension $0$ ($0$-cycles, i.e., components) and in dimension $1$ ($1$-cycles, i.e., holes).
Ideally, we want to be able to choose representatives that are easily interpretable from a geometric point of view. For instance, we might want representatives for $1$-dimensional cycles to have minimal length in a suitable sense, see the illustration in Figure~\ref{F: cycles}.
Thus, we are interested in representatives that satisfy some minimality condition: for holes we
compute optimal representatives for $1$-cycles \cite{DHM19} using the software Persloop \cite{persloop}, while for components, we use representatives to find all the points in a component.
We note that finding optimal representatives for cycles is a challenging problem; the software Persloop implements an algorithm that gives a heuristic approximation for $1$-cycles in $3$D, but which might fail to give meaningful $1$-cycles on higher dimensional data sets.
\subsection{Multiparameter persistent homology}
In many application problems, one might wish to study filtrations that depend on more than one parameter. For instance, consider the point cloud in $\mathbb{R}^2$ in Figure~\ref{fig:bifiltration pointcloud}. If one were to consider only the points belonging to higher-density regions, one could associate to these points a distance-based filtration, as illustrated in Figure \ref{F:example PH} and discussed in Section \ref{sec:pers_hom}. Then, by computing the persistent homology of such a filtration, one could read off from the barcodes that the point cloud has a long-lived component and a long-lived hole. For such a data set, it might be difficult in practice to choose the right density value, and therefore one would ideally wish to consider point clouds thresholded at all possible density values, thus obtaining a bifiltration, as illustrated in Figure \ref{fig:bifiltration}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\foreach \position in {
(2.6,2.1), (2.9,2.4), (2.4,2.9), (2.7,3.5),(3.2,1.95),(3.6,1.9),
(4.0,1.9), (4.8,2), (5.1,2.6), (5.1,3.2), (4.9,3.9), (4.1,4.0),(4.6,3.8),(5,3.6),(4.2,3.9),(3.5,4.), (2.9,3.9),(2.5,2.2), (2.4,3),(2.8,2.4), (2.6,3.4),(3,1.9),(3.8,1.9), (4.1,1.9), (4.6,1.9), (5,2.4), (5.1,3.4), (4.2,4.0), (4.6,3.8),(4.7,3.85),(4.5,3.85),(4.4,3.93),(4.35,3.9),(4,3.9),(5,3.8),(4.3,3.9),(3.7,4), (2.9,4),(3.5,4),(3.6,3.95),(3.2,4.1),(3.3,4.15),(3.1,4),(3,4),(2.9,3.9),(2.8,3.8),(2.7,3.95),(5.2,3.5),(5.2,3),(2.5,2.7), (2.6,2.6),(2.7,2.5),(2.8,2.4),(2.5,2.6), (2.6,2.55),(2.7,2.2),(2.8,2.25),(2.6,2.35),(2.7,2.3),(2.8,2.35),(3,2.1),(3.1,2.1),(3.2,2.05),(3.2,2.1),(3.4,2.0),(3.45,1.95),(4.5,2),(4.7,2),(4.4,2),(4.3,1.95),(5,2.4),(5,2.35),(4.9,2.3),(4.9,2.2),
(5.1,3),(5.15,3.1),(5.2,3.3),(5.15,3.2),(5.15,3.3),(5.03,2.9),(5.1,2.8),(2.5,3.5),(2.6,3.4),(2.6,3.3),(2.55,3.25),(2.4,3.25),(2.4,3.15),(2.6,3.6),(2.65,3.65),(3.8,3.9),(3.88,3.95),(4,2.7),(4.3,3.15),(3.4,3.3),(7,3.5),(3.6,1),(6,3)
}\node at \position []{$\bullet$};
\end{tikzpicture}
\caption{A finite set of points in $\mathbb{R}^2$ for which a distance-based filtration might fail to capture interesting topological information.}
\label{fig:bifiltration pointcloud}
\end{figure}
The theory of persistent homology does not generalise straightforwardly to filtrations that depend on more than one parameter. In particular, there is no generalisation of the barcode, as described in Section \ref{sec:pers_hom} and illustrated in Figure \ref{F:example PH}, to multifiltrations. Finding appropriate ways to quantify the `persistence' of topological invariants, such as the number of components or holes, is currently one of the most active areas of research in TDA, and several researchers have proposed invariants that are computable and capture in an appropriate sense what it means for topological features to be `persistent'. In Section \ref{sec:rivet} we discuss one such approach.
\begin{figure}[ht]
\centering
\input{bifiltration.tikz}
\caption{A bifiltration obtained by decreasing density and increasing distance: given the finite set $X$ of points in $\mathbb{R}^2$ illustrated in Figure \ref{fig:bifiltration pointcloud}, and a density estimation, we consider subsets $X'\subset X$ of points having density above a certain threshold. For each subset $X'$ of points we then construct a distance-based filtration by taking balls with increasing radii centered at the points.}
\label{fig:bifiltration}
\end{figure}
\subsubsection{Barcodes along one-dimensional subspaces}
\label{sec:rivet}
In one approach to defining invariants for multiparameter persistence that are suitable for applications, researchers study ways to restrict a bifiltration, such as the one in Figure \ref{fig:bifiltration}, to one-dimensional subspaces, and then compute barcodes along such restrictions \cite{LW15,BCF+08}.
\begin{figure}[ht]
\centering
\input{bifiltration_restr.tikz}
\caption{Barcodes for holes along restrictions to vertical lines of the bifiltration from Figure \ref{fig:bifiltration}. We note that barcodes for the first two lines are empty. }
\label{fig:bifiltration slice}
\end{figure}
As illustrated in Figure \ref{fig:bifiltration slice}, restricting oneself to points up to a specific density threshold amounts to considering a filtration of spaces along a vertical line in the bifiltration. By studying persistent homology of this filtration, we are thus computing the barcode of the restriction of the bifiltration along this line. More generally, one could consider lines with any slope in the $2$-parameter space, and then compute the barcode of the restriction of the bifiltration along this line. It is known that this process is robust in an appropriate sense only for lines having positive slope, see the discussion in \cite[Section 1.5]{LW15}. In particular, here, if we consider filtrations for different density threshold levels, we might observe intervals suddenly appearing or disappearing in the corresponding barcodes.
Lesnick and Wright implemented their methods \cite{LW15} in the software package RIVET \cite{rivet}, which is currently the only existing software package for the computation of multiparameter persistent homology.
In Figure \ref{fig:rivet ex}, we illustrate how one can use RIVET to compute such barcodes for the example bifiltration that we gave in Figure \ref{fig:bifiltration}. Unfortunately, the current implementation in RIVET is not memory-efficient enough for the types of data sets that we study in our work: if one is interested in computing barcodes to study the lifetime of loops, the software can only handle data sets of a few hundred points. One main direction that we plan to pursue in future work is therefore the optimisation of the computations implemented in RIVET. In particular, we plan to compute barcodes along restrictions to lines with positive slope, to obtain a method that is robust.
\begin{figure}[ht]
\centering
(a)\includegraphics[scale=0.1]{Figure7_a.png}
(b)\includegraphics[scale=0.1]{Figure7_b.png}
\caption{An example of the RIVET interface for the bifiltration in Figure \ref{fig:bifiltration}, for (a) components and (b) holes. We reproduce screenshots of the RIVET GUI; on the left-hand side of each panel the user can probe the $2$-parameter space with a blue line, while on the right-hand side the interface displays the barcode along that line. }
\label{fig:rivet ex}
\end{figure}
\subsection{Bifiltrations for dynamical systems}
\label{sec:bifiltrations}
The need to consider not just a filtration of distances, as in the standard method of one-parameter persistent homology, but a bifiltration of distance and density, can be motivated here in two ways. Firstly, and most fundamentally, the dynamical systems we are interested in are always \emph{continuous}, and so no two regions on the attractor can be fully disconnected from each other. In fact, the connectedness of the attractor of a continuous dynamical system can be proved mathematically, given a suitable definition of `attractor' \cite{Gobbino1997}, implying that persistent homology will never detect more than one long-lived connected component from a generic sample of the system. The second reason can be understood by considering the Lorenz `63 system. In Figure~\ref{fig:lorenz_holes_evolution} we demonstrate a particular feature of the system, namely that the two iconic holes `close up' as one increases the sample size. In other words, topological features that would be detected for any finite, generic sample can disappear in the limit of infinitely many points, implying that, somewhat counter-intuitively, having too many points can be a problem! These two observations suggest that a naive application of persistent homology to a continuous dynamical system may easily fail to detect both long-lived connected components and long-lived holes.
The basic underlying problem is that in one-parameter persistent homology one computes a filtration by increasing Euclidean distances between the points, ignoring any variations in density. However, the regimes classically identified with clustering methods typically correspond to regions of above-average density, suggesting that the connected components we are interested in should be relative to density. Furthermore, in the Lorenz `63 system, the reason any generic sample of the attractor yields visually clear holes is the fact that the regions of phase space close to the centre of the holes, i.e., near the fixed points, are very low density regions. Therefore, the holes in the system are only identified in data with respect to some measure of density.
This problem is analogous to the fundamental problem solved by persistent homology, that point-cloud data sets have trivial topology. The solution there, of thickening the points according to a parameter choice (the radius) and then computing a filtration, can be mimicked to account for density. By specifying a density threshold, one can restrict the data to only those regions where the density exceeds the threshold (e.g. the 20\% densest points) before carrying out the persistent homology computation. Letting this density threshold vary generates a density-filtration of persistent homology computations, i.e., a distance-density \emph{bifiltration} of homology groups.
We give an example of such a bifiltration in Figure \ref{fig:bifiltration}.
We note that the bifiltration, and hence the corresponding topological properties that one observes, depend on a choice of density estimation function.
As a final remark, we note that it may be possible to achieve good results by extending the filtration to other measures besides density. In particular, our tests suggest that using phase space velocities can, in some situations, be useful: prior knowledge of the system of interest might inform more particular choices. For the context of this paper, we will only consider density.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure8.png}
\caption{In (a)-(c): the Lorenz `63 system visualised using 10000, 20000 and 50000 timesteps respectively. In (d), the Lorenz `63 system using 50000 timesteps, with colours representing the density, as measured with the kernel density estimator.}
\label{fig:lorenz_holes_evolution}
\end{figure}
\section{Computational methodology}
\label{sec:methodology}
We now describe the full algorithm that we perform to analyse a given data-set sample. The basic method is the following:
\begin{itemize}
\item[(1)] Normalise each dimension in the data set to have unit variance.
\item[(2)] Estimate the density at every point in your normalised data-set sample $D$.
\item[(3)] Pick a percentage threshold $P\%$. Select the sub-sample $D_P$ defined by the upper $P$th density percentile of $D$, i.e. the $P\%$ densest points of $D$.
\item[(4)] Compute persistent homology for $D_P$ and extract the topological features of interest: birth/death times for each cycle detected; the points belonging to each of the five longest-lived connected components; a topological representative of each of the five longest lived loops.
\item[(5)] Repeat for $P = 10\%, 20\%, \ldots, 100\%$ and examine the features that appear in the resulting bifiltration.
\end{itemize}
We note that the normalisation is important to ensure that interesting structure is not missed purely by virtue of existing along a direction in phase space with smaller magnitudes, such as loops that appear as `squashed' ellipses in the raw data. Further details on the other steps now follow.
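Steps (1)--(3) of this procedure can be sketched in Python with NumPy and SciPy. The synthetic test data and variable names below are illustrative, and the persistent homology of each sub-sample (steps (4)--(5)) is indicated only as a comment, since it relies on the Gudhi machinery described later in this section.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_filter(data, P):
    """Steps (1)-(3): normalise, estimate density, keep the P% densest points.

    data : (n_points, n_dims) array of phase-space samples.
    P    : percentage threshold; P=20 keeps the 20% densest points.
    """
    # (1) Normalise each dimension to unit variance.
    normed = data / data.std(axis=0)
    # (2) Gaussian KDE at every point of the normalised sample
    #     (Scott's-rule bandwidth, the scipy default).
    density = gaussian_kde(normed.T)(normed.T)
    # (3) Keep the sub-sample D_P in the upper P-th density percentile.
    cutoff = np.percentile(density, 100 - P)
    return normed[density >= cutoff]

# Illustrative use on a synthetic sample: a noisy circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
cloud = np.column_stack([np.cos(theta), np.sin(theta)])
cloud += 0.1 * rng.normal(size=cloud.shape)
# Steps (4)-(5): persistent homology of each D_P would be computed here,
# e.g. with gudhi.RipsComplex, for P = 10%, 20%, ..., 100%.
subsamples = {P: density_filter(cloud, P) for P in range(10, 101, 10)}
```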
\subsection{Density estimation}
\label{sec:density_estimation}
The primary method used was a kernel density estimator (KDE) with a Gaussian kernel \cite{Marron2007}, computed using inbuilt functions of the scipy python package \cite{2020SciPy-NMeth}; we used the default option of Scott's Rule to determine the bandwidth. Using a KDE has two clear advantages for topological applications. Firstly, it produces smooth estimates, which avoids potential issues whereby outlier points remain even after a severe density threshold has been applied. Such outliers will often appear as spurious long-lived connected components, effectively just adding noise to the analysis. Secondly, KDEs are well suited to represent multimodality, a key feature we want to capture.
A second, cruder method was also tested, which involved directly binning the space and counting the datapoints in each bin. To facilitate the computations, this density estimate was carried out in the space spanned by the first three EOFs, under the assumption that the resulting estimate would be accurate for the scales we were interested in studying. A fixed number of $160^d$ bins were used in each case, where $d$ is the dimension of the data set.
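The two estimators can be sketched as follows; the sample data are illustrative, and the bin count is reduced from the $160$ per dimension used in our computations to keep the example light.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_density(data):
    """Smooth estimate: Gaussian KDE with Scott's-rule bandwidth (scipy default)."""
    return gaussian_kde(data.T)(data.T)

def binned_density(data, bins=160):
    """Cruder estimate: count the points falling in each of bins**d boxes,
    then read the count back off at every sample point."""
    counts, edges = np.histogramdd(data, bins=bins)
    # Locate each point's bin index along every dimension.
    idx = tuple(np.clip(np.searchsorted(e, x, side="right") - 1, 0, bins - 1)
                for e, x in zip(edges, data.T))
    return counts[idx]

# A dense cluster inside a sparse background: both estimators should
# assign higher density to the cluster points.
rng = np.random.default_rng(1)
dense = rng.normal(0.0, 0.1, size=(500, 2))
sparse = rng.uniform(-3.0, 3.0, size=(200, 2))
data = np.vstack([dense, sparse])
```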
In our results, the KDE produced good results in all cases except for the CdV system. As will be shown, the CdV system exhibits some very fine-scale structure in the form of `thin', low-density loops that emerge within a larger, more chaotically-inhabited, low-density region. The Gaussian KDE we used was found to smear away a lot of this structure, while the direct binning method picked out these features easily. In the other data sets, both the KDE and direct binning methods produce qualitatively similar results, but the KDE exhibits a notably smoother estimate, as expected. For this reason, results obtained using the KDE are shown for all data sets except CdV, where the results obtained with direct binning are shown instead. It would clearly be of interest to address the question of whether a more appropriate density estimation method (e.g. choice of bandwidth) might yield good results in all cases, but this is left for future work.
\subsection{Computation of persistent homology and representative cycles}
\label{sec:parameters}
Persistent homology is always computed using the Vietoris--Rips complex (see Figure \ref{F:example PH}) and the python package Gudhi \cite{gudhi}. Gudhi takes as input both the data set and several user-specified input parameters, the choice of which we now outline.
\begin{itemize}
\item \verb|max_edge|: This parameter determines the maximal distance threshold to consider in the filtration. Setting this as the maximal distance between any two points in the data set guarantees that the filtration terminates (i.e., ends with a single connected component), so this parameter can always be chosen in a principled manner. Because all our data sets were normalised, we were able to set \verb|max_edge| $=5.0$ for all data sets.
\item \verb|min_pers|: This parameter determines the minimal lifespan that a computed homological cycle needs to attain in order to be included in the final output from Gudhi. The choice of this parameter therefore determines the scales of the topological features one wants to consider. For this reason, a principled choice of the parameter requires some prior knowledge about the `grid-scale' of the data set. In practice, the only downside in setting this parameter as very small, thereby retaining features at all scales, is an associated increase in computational cost. Because all our data sets are normalised, we found that a parameter choice between $0.15$ and $0.50$ gave good answers at low cost for all data sets. The higher value was used for CdV, as the main features there exist at higher scales, while for systems like JetLat, with subtler behaviour, the smaller value was used.
\item \verb|sparse|: This parameter is internal to Gudhi's algorithm, and determines the extent to which the computed Rips complex is sparsified before computing persistent homology. This was set to $0.7$ for all data sets.
\item \verb|pre_sparse|: This parameter is fed in to the Gudhi \verb|sparsify_point_set| routine, which is used to perform a preliminary sparsification of the data set prior to carrying out computations. The routine is built to sparsify data sets in a way which does not change the topology, e.g., by replacing densely connected regions with a sparser set of points covering the same region. Because our data sets are always filtered by density prior to computation, such a sparsification has no impact on our results, but allows the computations to be sped up significantly. In fact, for large data sets with a time dimension exceeding 30000 timesteps, computations would typically run out of memory and crash. Setting an appropriate value of \verb|pre_sparse|, which greatly reduces the number of points, was therefore crucial. In practice, we set this value as the smallest positive number which would allow the computations to finish at a reasonable rate. A value of $0.05$ was found to be suitable for Lorenz `63 and Lorenz `96, while $0.005$ worked best for CdV. For the JetLat data set, where the total number of time-steps available is only around 10000, this sparsification step was not necessary and hence not carried out.
\end{itemize}
After computing the filtration and homology at a given density threshold, the five longest-lived components and loops were identified. Explicitly determining the points belonging to each connected component can be done easily using output from Gudhi, which gives the full filtration. By keeping track of which points are linked up as the filtration radius grows, basic python code suffices to determine all the components; the code used is freely available online (see the Appendix).
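As an illustration of this bookkeeping, the following is a minimal reconstruction (not the released code) of how a small union-find over the short edges recovers the components at a fixed filtration radius:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def components_at_radius(points, radius):
    """Connected components of the Vietoris--Rips complex at a fixed
    filtration radius, via union-find over the edges shorter than it."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    dist = squareform(pdist(points))
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= radius:       # edge enters the complex: merge
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    # Largest components first, mirroring the "five longest-lived" bookkeeping.
    return sorted(groups.values(), key=len, reverse=True)
```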
We note that obtaining a representative cycle of loops is significantly harder, as discussed in Section \ref{SS:repr cycles}.
\subsection{Sensitivity to parameter choices}
Several tests were carried out to determine the sensitivity of our results to the parameter choices described in the previous section. A selection of density thresholds for the different data sets were chosen at random, and standard birth/death plots produced using Gudhi for the resulting filtered data sets. It was found that the qualitative features of these birth/death plots did not appreciably change in response to mild perturbations of the parameters, implying that the basic topological features, as summarised in our bifiltration plots, do not depend sensitively on our choices. The size and location of connected components was also found to be largely insensitive to such parameter changes.
On the other hand, the representatives of loops, as computed with PersLoop, were found to exhibit sensitive dependence, in particular on the \verb|pre_sparse| parameter. A small perturbation of this parameter would often lead to the software not terminating properly, or producing a very different representative loop. A similar phenomenon was observed when keeping parameters fixed, but changing other aspects of pre-processing, such as the choice of density filtering or the use of EOF data versus raw data. The reader should therefore be cautioned that the representative cycles we show in our plots are not to be viewed as reliable output from a stable algorithm. Rather, they are included to demonstrate that the topological features seen in our bifiltration plots can, in principle, be visualised in the data itself, and really do correspond to the features one expects.
\subsection{Significance testing}
\label{sec:significance_test}
Any reasonable definition of a regime system should exclude a Gaussian distribution from being one. Therefore, in order to assess whether features identified in our bifiltration methodology are more than just sampling noise, we implemented the following procedure. First, we draw a random sample of 10000 points from a three-dimensional Gaussian distribution with unit variance. Second, we run this sample through the methodology described at the beginning of Section \ref{sec:methodology}. The maximal lifespans of both the connected components and loops obtained at any of the density thresholds are kept: the whole procedure is then repeated ten times and the maximal lifespan obtained across all random draws is used as a measure of noise. Specifically, features with a lifespan close to this value, of around $0.4$, are likely to be noise coming from grid-scale sampling variability, while features with a lifespan greatly exceeding this are likely to be indicative of significant non-trivial topology. We note that when carrying out this procedure, connected components in the Gaussian samples containing 3 or fewer points were not included, because one or two big outlier points can easily produce very long-lived `components'. For consistency, components with 3 or fewer points that are detected in any data set are always clearly marked in plots.
Finally, we note that because all our data sets are normalised prior to computing homology, the unit variance Gaussian offers an appropriate comparison for all the data sets that we consider.
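A simplified version of this benchmark, restricted to connected components, can be sketched by exploiting the fact that the finite $H_0$ lifespans of a Rips filtration are exactly the edge lengths of a Euclidean minimum spanning tree. The sample sizes below are reduced for illustration, and since this sketch does not exclude small outlier components, its output will typically exceed the value of around $0.4$ quoted above for 10000-point samples.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def max_h0_lifespan(points):
    """Longest finite lifespan of a connected component in the Rips
    filtration: this equals the longest edge of a Euclidean minimum
    spanning tree (assumes no duplicate points, whose zero distance
    csgraph would read as a missing edge)."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return mst.toarray().max()

def gaussian_noise_level(n_points, dims=3, repeats=10, seed=0):
    """Largest component lifespan observed across repeated draws from a
    unit-variance Gaussian; features exceeding this are deemed significant."""
    rng = np.random.default_rng(seed)
    return max(max_h0_lifespan(rng.standard_normal((n_points, dims)))
               for _ in range(repeats))
```

With the sample size and ten repeats used in the paper, `gaussian_noise_level(10000)` plays the role of the stippled noise line in the bifiltration plots.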
\section{Results}
\label{sec:results}
For each data set, we now produce a standard bifiltration plot summarising the lifespans of persistent cycles across a range of density thresholds. In addition, to visualise these topological features, particular density thresholds are hand-picked for each data set and plotted, together with a visualisation of either the connected components or the representatives of loops present at that threshold.
\subsection{The Gaussian}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure9.png}
\cprotect\caption{A distance-density bifiltration of a unit variance Gaussian. For each density threshold on the $x$-axis, the lifespans of the 5 longest-lived connected components (red dots if the component contains more than 3 points; red stars otherwise) and the 5 longest-lived loops (blue triangles) are plotted. The stippled line shows the largest lifespan obtained across multiple Gaussian samples. The meaning of the \verb|min_pers| parameter is explained in Section \ref{sec:parameters}.}
\label{fig:gaussian_bifiltration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure10.png}
\caption{In (a): a random sample of a unit variance Gaussian. In (b): the 70\% densest points of the sample; (c): the 40\% densest points; (d): the 10\% densest points. In (b)-(d), points are coloured according to the connected component they live in: longest-lived (red colour), 2nd longest lived (blue colour) and 3rd longest lived (green colour).}
\label{fig:gaussian_thresholds}
\end{figure}
As explained in Section \ref{sec:significance_test}, results from the unit variance Gaussian are used to estimate the significance of features obtained for all other data sets, since any definition of regimes should exclude the Gaussian from having any. We therefore first present results for a randomly drawn sample of 10000 points from such a distribution. These are shown in Figure \ref{fig:gaussian_bifiltration}. As expected, no non-trivial topological features are detected in this data set, with each density threshold exhibiting only a single connected component (the red dot at infinity) and some spurious outliers (the red stars) at the `grid-scale'. The loops found (blue triangles) are all extremely close to the minimum persistence choice, implying that these were only barely registered by the algorithm and do not persist for notably longer than isolated outlier components.
The seeming change in behaviour at the 100\% threshold, where no density filtering has been applied, is due to the existence of big outliers in the raw sample. This is clearly seen in Figure \ref{fig:gaussian_thresholds}, showing the Gaussian sample at various thresholds. Because even an extremely mild density threshold immediately removes the big outliers seen in Figure \ref{fig:gaussian_thresholds}(a), the possible lifespan of small components with 3 or fewer points drops dramatically from the 100\% to the 90\% threshold. This is also why the longest-lived loops are found at the 100\% threshold. As is clear from Figure \ref{fig:gaussian_thresholds}, these loops are just noise, and indeed any representatives of these produced by PersLoop (not shown) are visually confirmed as such. Figures \ref{fig:gaussian_thresholds}(b)-(d) also highlight the 3 longest-lived connected components at each threshold. It can be seen that this yields one component containing almost all points, and two components consisting of one or two points that simply happen to be fractionally further removed from the rest of the point mass.
These observations already confirm that our methodology correctly identifies the Gaussian as having no non-trivial topology at any density threshold. The comparison of Figure \ref{fig:gaussian_bifiltration} with the equivalent plots for other data sets, to which we now turn, will make this even clearer.
\subsection{Lorenz `63}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure11.png}
\cprotect\caption{A distance-density bifiltration of the Lorenz `63 system. For each density threshold on the $x$-axis, the lifespans of the 5 longest-lived connected components (red dots if the component contains more than 3 points; red stars otherwise) and the 5 longest-lived loops (blue triangles) are plotted. The stippled line shows the largest lifespan expected from Gaussian noise. The meaning of the \verb|min_pers| parameter is explained in Section \ref{sec:parameters}.}
\label{fig:lorenz_bifiltration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure12.png}
\caption{In (a): a long integration of the Lorenz `63 system. In (b): the 80\% densest points; (c): the 60\% densest points; (d): the 20\% densest points. In (b) and (c), representatives of the 2 (respectively 1) longest-lived loops are overlain. The longest-lived loop is always in red, the 2nd longest in blue. In (d), the longest-lived connected component is marked in red.}
\label{fig:lorenz_thresholds}
\end{figure}
Figure \ref{fig:lorenz_bifiltration} shows the bifiltration plot of the Lorenz `63 system. This plot can be understood by reference to Figure \ref{fig:lorenz_thresholds}, which visualises the system, and the longest-lived components/loops, at different thresholds. At low density thresholds, as shown in Figure \ref{fig:lorenz_thresholds}(d), there is just one connected component, corresponding to the dense central region between the two wings. Because the density is concentrated in this area, as seen in Figure \ref{fig:lorenz_holes_evolution}(d), there is no trace of the two wings until one moves to higher thresholds. At the 60\% threshold, Figure \ref{fig:lorenz_thresholds}(c), one of the two wings closes up, at which point an extremely long-lived hole emerges in the bifiltration plot: the representative produced by PersLoop confirms that this corresponds to the right wing. At the 70\% threshold, the other wing closes up as well, after which one retains two long-lived holes for all further density thresholds. Figure \ref{fig:lorenz_thresholds}(b) confirms that the two holes found by Gudhi at this point correspond to the two holes in the wings. Note that the apparent asymmetry between the two loops is due to sampling variability.
Two other points are worth observing in Figure \ref{fig:lorenz_bifiltration}. Firstly, besides the key topological features coming from the wings, all other features have lifespans at the \verb|min_pers| threshold, implying that these features exist only at or below the grid-scale of Lorenz `63. Secondly, these grid-scale features have lifespans below what is expected from a Gaussian bifiltration, demonstrating that our significance test has correctly classified them as noise. By contrast, the lifespans of the two loops, and of one connected component, greatly exceed Gaussian noise. The conclusion from our methodology is therefore that the Lorenz `63 system has two significant holes, corresponding precisely to the two regimes defined by the wings, and is otherwise fully connected.
\subsection{Lorenz `96}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure13.png}
\cprotect\caption{A distance-density bifiltration of the Lorenz `96 system. For each density threshold on the $x$-axis, the lifespans of the 5 longest-lived connected components (red dots if the component contains more than 3 points; red stars otherwise) and the 5 longest-lived loops (blue triangles) are plotted. The stippled line shows the largest lifespan expected from Gaussian noise. The meaning of the \verb|min_pers| parameter is explained in Section \ref{sec:parameters}.}
\label{fig:lorenz96_bifiltration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure14.png}
\caption{In (a): a long integration of the Lorenz `96 system. In (b): the 50\% densest points; (c): the 20\% densest points; (d): the 10\% densest points. In (b) and (d), representatives of the 3 longest-lived connected components are marked with colours, while in (c), the 3 longest-lived loops are overlain in colour. In all cases, the longest-lived feature is in red, the 2nd longest-lived in blue and the 3rd longest-lived in green.}
\label{fig:lorenz96_thresholds}
\end{figure}
Figure \ref{fig:lorenz96_bifiltration} shows the bifiltration plot for the Lorenz `96 system, which suggests the existence of a considerable amount of significant topological structure. Figure \ref{fig:lorenz96_thresholds} shows some of this structure at different thresholds, though we remind the reader that because the homological computations in this case were done using a 4-dimensional EOF truncation, our 3-dimensional projections necessarily obscure some of the features. We also note that, due to the limitations of PersLoop, optimal loops were computed using the space spanned by the first three EOFs only, which also leads to some minor distortions.
The characteristic looping behaviour of the system is already visible in the unfiltered data set, Figure \ref{fig:lorenz96_thresholds}(a), reflecting the rotational symmetry in the defining equations. The looping trajectories result in regions which, after an appropriate density threshold is imposed, appear as holes in an otherwise connected space, as in Figure \ref{fig:lorenz96_thresholds}(b). The most prominent loop appearing in this manner is the one circling the full perimeter of the space, as seen in Figure \ref{fig:lorenz96_thresholds}(c). Note that PersLoop identifies this loop as the 3rd longest-lived at the 20\% threshold. The representatives found for the longest and 2nd longest-lived loops are made to look particularly spurious due to the flattening of the 4th dimension, but are in any case examples of the way in which PersLoop sometimes produces representatives that are far from optimal. For very severe density thresholds, such as the 10\% threshold shown in Figure \ref{fig:lorenz96_thresholds}(d), the data set splits up into distinct components, implying significant local variations in density across the attractor.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure15.png}
\caption{The two regimes $A$ (blue) and $B$ (red), as defined in \cite{Christensen2015}, marked in the space spanned by the first 3 principal components of Lorenz `96. The full data set is shown in transparent black in the background.}
\label{fig:lorenz96_regimes}
\end{figure}
To see how this topological structure relates to the more classical approach to regimes in Lorenz `96, recall the approach taken by Lorenz \cite{Lorenz2006}, further expanded on in \cite{Christensen2015}, to which the reader is referred for this discussion. In the latter, the dynamics are first projected onto the two-dimensional space spanned by the magnitudes of the concatenated principal component vectors $[PC1,PC2], [PC3,PC4]$. Two local peaks in temporal persistence are identified in this space, clearly visible in Figure 7(c) of that paper, and these are used to define two regimes denoted $A$ and $B$. Regime $A$ corresponds to the bottom right-hand corner of the concatenated space, which is also where the density is concentrated (cf. subplot (a) of the same figure), while regime $B$ corresponds to a very low-density region in the top left-hand corner. In Figure \ref{fig:lorenz96_regimes}, points loosely corresponding to these two corners of phase space have been marked, with the top left-hand corner defined by $|[PC1,PC2]|<3$ and $14>|[PC3,PC4]|>10$, and the bottom right-hand corner by $15>|[PC1,PC2]|>10$ and $|[PC3,PC4]|<5$. This clearly suggests that regime $A$ corresponds to the densely populated loop around the outer perimeter, while regime $B$ corresponds to the low-density hole in the centre\footnote{We remind the reader again that the squashing away of the fourth dimension gives the appearance of regime $B$ spilling out into the perimeter.}. In other words, the regimes diagnosed in \cite{Christensen2015} correspond to topological features of the system that are detectable with persistent homology.
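For concreteness, the corner definitions above translate into boolean masks as follows; the input array of principal components is hypothetical, and only the quoted thresholds are taken from the text.

```python
import numpy as np

def regime_masks(pcs):
    """Boolean masks for the two phase-space corners described above.

    pcs : (n_times, 4) array whose columns are PC1..PC4 (illustrative
    layout; the thresholds are those quoted in the text)."""
    m12 = np.linalg.norm(pcs[:, :2], axis=1)   # |[PC1, PC2]|
    m34 = np.linalg.norm(pcs[:, 2:4], axis=1)  # |[PC3, PC4]|
    regime_B = (m12 < 3) & (m34 > 10) & (m34 < 14)  # top left-hand corner
    regime_A = (m12 > 10) & (m12 < 15) & (m34 < 5)  # bottom right-hand corner
    return regime_A, regime_B
```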
\subsection{Charney-deVore}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure16.png}
\cprotect\caption{A distance-density bifiltration of the CdV system. For each density threshold on the $x$-axis, the lifespans of the 5 longest-lived connected components (red dots if the component contains more than 3 points; red stars otherwise) and the 5 longest-lived loops (blue triangles) are plotted. The stippled line shows the largest lifespan expected from Gaussian noise. The meaning of the \verb|min_pers| parameter is explained in Section \ref{sec:parameters}.}
\label{fig:cdv_bifiltration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure17.png}
\caption{In (a): a long integration of the CdV system. In (b): the 50\% densest points; (c): the 20\% densest points; (d): the 10\% densest points. In (b) and (c), representatives of the 2 longest-lived loops are overlain in colour, while in (d), the 2 longest-lived components are marked in colour. In all cases, the longest-lived feature is in red and the 2nd longest-lived in blue.}
\label{fig:cdv_thresholds}
\end{figure}
Figure \ref{fig:cdv_bifiltration} shows the bifiltration results for the CdV system: we remind the reader that the computations are done using the space spanned by the first three EOFs. The most notable features are a number of long-lived loops that emerge at density thresholds between $50\%$ and $90\%$. The existence of such loops can already be seen by eye in the raw data set, shown in Figure \ref{fig:cdv_thresholds}(a). As noted in Section \ref{sec:density_estimation}, these loops are low-dimensional, preferred trajectories shadowing unstable homoclinic orbits, separated by sparsely populated regions. The use of the direct binning method to estimate density effectively highlights these loops, and the representatives found by PersLoop, as in Figure \ref{fig:cdv_thresholds}(b) and (c), confirm that these are precisely the long-lived loops identified in Figure \ref{fig:cdv_bifiltration}.
In terms of connected components, the only threshold at which there appears to be more than one connected component with at least four points is the $20\%$ threshold. However, manual inspection here reveals that this second component in fact contains \emph{exactly} four points, and can therefore be considered noise, as with the spurious components seen at the $40\%$ and $90\%$ thresholds. Therefore, from the perspective of the bifiltration, CdV can be thought of as a dense central region with low-density loops spiraling outward. This neatly matches the dynamics one observes in numerical simulation, and the theoretical understanding of the CdV system as chaotic transients bursting from a weakly unstable near-equilibrium.
In the classical perspective, CdV has two persistent regimes associated with orbits slowing as they enter the neighbourhood of one of two fixed points. One of these fixed points, associated with blocking, is located close to the dense central region, while the other, more zonally symmetric fixed point lies close to the back left corner, when viewed as in Figure \ref{fig:cdv_thresholds}, and the loops pass close to this region. As mentioned in Section \ref{sec:cdv}, the regime dynamics in CdV are asymmetrical, in that the blocked regime is quasistationary and experiences almost deterministic evolution, while the zonal regime is characterised by turbulent chaotic behaviour. From this we can understand why the quasistationary blocking state is associated with a connected component, while the zonal state is not. Instead, the zonal regime can be understood as a consequence of the many looping trajectories visiting a common, disparate region of phase space.
\subsection{The North Atlantic Jet}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure18.png}
\cprotect\caption{A distance-density bifiltration of the JetLat data set. For each density threshold on the $x$-axis, the lifespans of the 5 longest-lived connected components (red dots if the component contains more than 3 points; red stars otherwise) and the 5 longest-lived loops (blue triangles) are plotted. The stippled line shows the largest lifespan expected from Gaussian noise. The meaning of the \verb|min_pers| parameter is explained in Section \ref{sec:parameters}.}
\label{fig:jetlat_bifiltration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure19.png}
\caption{In (a): the raw JetLat data set. In (b): the 57\% densest points; (c): the 50\% densest points; (d): the 10\% densest points. In (b)-(d), representatives of the 3 longest-lived connected components are marked with colours. The longest-lived feature is in red, the 2nd longest-lived in blue and the 3rd longest-lived in green.}
\label{fig:jetlat_thresholds}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure20.png}
\caption{Composites of zonal wind anomalies at 850hPa for the ERA20C data set across all winter days between 1900 and 2010 that (a) belong to the longest-lived JetLat component; (b) belong to the 2nd longest-lived JetLat component.}
\label{fig:jetlat_composites}
\end{figure}
We finally test our method using the JetLat data set, capturing variability of the North Atlantic eddy-driven jet. Figure \ref{fig:jetlat_bifiltration} shows the result of the bifiltration computation, while the raw data set is shown in Figure \ref{fig:jetlat_thresholds}(a). The only evidence of non-trivial topology emerges when restricting to the 10\% densest points, at which point the data set splits cleanly into two connected components, as shown in Figure \ref{fig:jetlat_thresholds}(d). The lifespans of both components greatly exceed anything expected from Gaussian noise, and their sizes are also considerable, containing around 900 and 100 points respectively. Figure \ref{fig:jetlat_composites} shows composites of zonal wind anomalies of ERA20C across all days belonging to these two long-lived components, identifying the longest-lived one as the Central jet latitude mode and the 2nd longest-lived as the Northern jet latitude mode.
Since one dimension of the JetLat data set contains the jet latitude index, which is trimodal in and of itself, the a priori expectation might be that the data set should split into three connected components, not two. However, making the density filtration finer did not change the result, suggesting this is a robust outcome of our methodology. To understand why this happens, Figure \ref{fig:jetlat_pdf} shows the JetLat probability distribution function (pdf), as computed using the kernel density estimator. In panel (a), the raw data set is plotted with colours indicating density, while in (b), density is plotted as a function of jet latitude and $PC1$ (the first two dimensions of JetLat). In this latter panel, the points corresponding to the two long-lived components at the $10\%$ threshold have been coloured in, with red being the longest-lived and blue the 2nd longest-lived. While panel (a) already suggests that there are two, rather than three, clearly marked peaks in density, panel (b) most clearly explains what is happening. Viewed on its own, the jet latitude index is clearly trimodal, but the situation changes when it is extended out across multiple dimensions. While the Northern peak remains clearly separated from the Central peak, the Southern peak becomes smeared out across the space spanned by the two principal components, leaving it resembling a `shoulder', rather than a clear peak. Because our density thresholds amount to taking horizontal slices across this space, the bifiltration is able to find the Central and Northern peaks, but not the Southern. The implications of this are discussed in the next section.
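The effect can be illustrated in one dimension, where a density threshold amounts to counting the connected intervals of a superlevel set: a shoulder that is not a local maximum never contributes an interval of its own, whatever the level. The analytic density below is purely illustrative and is not fitted to the JetLat data.

```python
import numpy as np

def superlevel_components_1d(x, f, level):
    """Number of connected intervals on which f exceeds the level, i.e.
    the component count of a horizontal slice through a 1-d density."""
    above = f >= level
    # Count the starts of maximal runs of True values.
    return int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))

# Two clear peaks plus a shoulder attached to the right-hand peak.
x = np.linspace(-5.0, 5.0, 2001)
f = (np.exp(-(x + 2.0) ** 2)                 # left peak
     + np.exp(-x ** 2)                       # right peak
     + 0.4 * np.exp(-((x - 1.2) ** 2) / 2))  # shoulder, not a local maximum
counts = [superlevel_components_1d(x, f, lvl)
          for lvl in np.linspace(0.05, 1.1, 50)]
```

Every slice finds one or two components, never three: the shoulder is invisible to horizontal slicing, just as the Southern jet mode is to our density filtration.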
Note that when computing a bifiltration using the first 3 or 4 principal components of geopotential height anomalies at 500hPa, the features detected are all at the level of Gaussian noise. This is consistent with the findings of \cite{Stephenson04}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure21.png}
\caption{Visualising the pdf of the JetLat data set. In (a), the 3-dimensional data with colours indicating the density, as estimated with a kernel density estimator. In (b), density as a function of the two first dimensions of the JetLat data set, i.e. jet latitude and the first principal component of ua850. In (b), points belonging to the longest-lived connected component are coloured red, while points belonging to the 2nd longest-lived component are coloured blue.}
\label{fig:jetlat_pdf}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Strengths and weaknesses of our methodology}
The results in the previous section suggest that the bifiltration methodology succeeds in identifying whether a data set has non-trivial topological structure or not. In particular, it rejects a Gaussian distribution as having any, and correctly detects the relevant structure for four examples of data sets generally considered to have regimes. We further showed that the topological structure encodes, in different ways, the regime behaviour. For Lorenz `63, the regimes correspond to two holes; for Lorenz `96 to a loop and a hole; for CdV to a dense, connected region and several loops emanating from this; and for JetLat, to two dense, connected components.
The main apparent shortcoming of the methodology, besides the instabilities associated with trying to compute optimal representatives of loops, was the inability to identify three distinct regimes in the JetLat data set. As explained in the discussion of the JetLat data set in Section \ref{sec:results}, this is due to the fact that, when viewed across multiple dimensions, the Southern jet latitude mode appears less as a distinct peak and more as an extended shoulder, which the horizontal density slices of our filtration cannot easily capture. The obvious way to attempt to remedy this is to consider slices with positive slope. As explained in Section \ref{sec:bifiltrations}, this is also required to make the evolution of topological features continuous across the bifiltration, implying that this is a natural way to improve our methodology for stability reasons alone. We hope to examine this, using the RIVET software (cf. Section \ref{sec:rivet}), in future work. It has also been noted \cite{Hazelton2003} that Gaussian kernels can sometimes flatten peaks too much: a more thorough examination of optimal density estimators for our data sets is for this reason another avenue of future work.
While the failure to detect the Southern jet mode should probably be viewed as a shortcoming, we would also suggest that this failure may shed some light on a few curious features in the literature. Firstly, many studies have tried to diagnose regimes in the Euro-Atlantic sector, and, depending on the choice of input data, pre-processing steps and diagnostics, these studies have suggested there may be anywhere between 2 and 6 regimes (see \cite{Hannachi2019}, \cite{Dorrington2020}, \cite{Dawson2012}, \cite{Madonna2017}, \cite{Falkena2020} respectively for examples of each number). While the ambiguity between the choices 3, 4 and 5 is at least in part due to the confounding influence of the jet speed \cite{Dorrington2020}, and the choice of 2 regimes usually corresponds to the North Atlantic Oscillation dipole \cite{Woollings2008, Hannachi2019}, the striking divergence in the number of regimes across studies using similar techniques is still somewhat puzzling. Our results suggest that one possible reason for this is that, depending on what angle one views the Euro-Atlantic circulation from, different regimes may appear either as clearly distinct peaks or more ambiguous and hard to detect shoulders.
Secondly, in \cite{Strommen2020}, the ability of a numerical weather forecast model to make skillful predictions of the Euro-Atlantic circulation was studied from the perspective of the three jet latitude regimes. It was found that the model was able to skillfully detect changes in the Northern mode compared to the Southern and Central modes, but was not able to robustly separate between the Southern and Central modes. In other words, from the perspective of the forecast model, the jet appeared to behave as if it had 2, not 3, regimes. By considering Figure \ref{fig:jetlat_pdf}(b), it is perhaps not surprising that an imperfect model may struggle to reproduce the more subtle behaviour of the Southern shoulder, and produce a cruder approximation of the pdf as having just two peaks. A comparison between this figure and an equivalent one for model data (not shown) does suggest the model has a notably flatter Southern peak.
\subsection{Why a simpler definition of regime fails}
We have shown that non-trivial topological structure, as measured with a bifiltration of homology, provides a unifying way of understanding the main examples of non-linear dynamical systems generally considered as having regimes. Because this comes at the cost of introducing an extra level of abstraction, it is reasonable to ask if a similar unification could be achieved using the more common ways of understanding regimes, namely density peaks (i.e., clustering) or temporal persistence. We will now show that, on the face of it, no such simpler unification appears possible.
To see this, first notice that while for JetLat the two regimes correspond clearly to local maxima in density, both Lorenz `96 and CdV are examples where the densities of the two regimes are wildly different. For Lorenz `63, while a bimodal pdf can be obtained by time-averaging \cite{Corti1999}, Figure \ref{fig:lorenz_holes_evolution}(d) makes it clear that the regions defined by the two regimes (i.e., the two wings) are, in the raw data set, not local density maxima. Hence a definition of regimes as local density maxima/minima or clustering will invariably fail to account for one of these systems.
Next, one might consider a criterion based on any of the closely related concepts of temporal persistence, average residence times or phase speed velocities. However, here too one finds that the behaviour of the different systems differs dramatically. In the Lorenz `63 system, temporal persistence peaks (and velocities are smallest) at the dense region connecting the two wings, while temporal persistence is in general minimal\footnote{The exception being the extremely rare trajectories that pass sufficiently close to either fixed point.} in the wings themselves, where velocities peak. On the other hand, for Lorenz `96, both regimes correspond to peaks in temporal persistence/residence time, as mentioned already, while in CdV the two regimes are broadly asymmetric in terms of their temporal persistence and velocities, with the blocking regime featuring high temporal persistence/low velocities and the zonal regime favouring low temporal persistence/high velocities. Even in the real atmosphere, the behaviour does not appear to be uniform. Already in \cite{Woollings2010b}, where the jet latitude regimes were first presented, it was noted that the forcing on the jet by transient eddies, thought to be a key driver in generating temporal persistence, appears to operate similarly at all latitudes, not just at the peaks of the trimodal distribution. In other words, the extent to which the three jet latitude modes can be characterised as having higher-than-average temporal persistence is ambiguous. This ambiguity is further supported by the results of \cite{Faranda2017}, which examined the closely related 4-regime picture of the North Atlantic. By computing a measure of both local temporal persistence and local density, they locate the four regimes in distinct quadrants of temporal persistence-density space, implying the regimes all have strikingly different characteristics.
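For concreteness, average residence times of the kind discussed here can be computed directly from a labeled trajectory. The sketch below (using a hypothetical regime sequence, not our data or the diagnostics of the cited studies) illustrates how strongly such a diagnostic depends on the system at hand: one label can be highly persistent while another is transient.

```python
from itertools import groupby

def mean_residence_times(labels):
    """Mean length of consecutive runs of each regime label in a time series."""
    runs = {}
    for label, group in groupby(labels):
        runs.setdefault(label, []).append(sum(1 for _ in group))
    return {label: sum(v) / len(v) for label, v in runs.items()}

# Hypothetical regime sequence: regime 'A' is persistent, regime 'B' is transient.
series = list("AAAAAB" * 3)
assert mean_residence_times(series) == {"A": 5.0, "B": 1.0}
```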
A simplistic definition of regimes based on temporal persistence, residence times or velocities will, therefore, inevitably fail to capture the behaviour in one or more of these systems. It is also clear from this discussion that the situation cannot be salvaged by using a definition combining both notions. Hence it seems, to these authors, not to be possible to find a definition of regimes, using density or temporal persistence alone, that unifies all the systems we considered. While an alternative definition of regimes based on fixed points, UPOs or other `exact solution' techniques might seem plausible, computing such solutions is extremely computationally demanding, and state-of-the-art techniques are only able to handle systems of significantly lower dimensionality than existing climate models \cite{lucarini2020new}. More crucially, these techniques are inherently model features, in that they rely on being able to integrate the model dynamics. Given that models are known to exhibit systematic biases in their regime structure \cite{Fabiano2020}, inferring conclusions about the real atmosphere based on results obtained from models would require considerable care. It is therefore not currently clear how such `exact solution' techniques can be applied to observational data sets.
\section{Conclusions and further directions}
\label{sec:conclusions}
In this paper we have argued that the unifying feature across the most well-known examples of regime systems is their non-trivial topological structure. We showed that using persistent homology, one can compute a bifiltration of topological invariants which encodes such non-trivial structure. By carrying out this computation for four classical regime systems (Lorenz `63, Lorenz `96, Charney-deVore and the North Atlantic jet), we showed that the information detected in such a bifiltration encodes the key features of each system associated with their regimes. It was pointed out that these systems also exhibit widely differing behaviour in terms of the density and temporal persistence of their regimes, suggesting that no simple definition of regime structure based solely on these notions is likely to be general enough to capture all of them.
These results justify our proposed definition: a dynamical system is said to have regime structure if its attractor exhibits non-trivial topology. This definition can obviously be adjusted to refer to local regions of phase space only, to account for, e.g., the Euro-Atlantic sector as a particular region in the larger climate attractor. Our methodology shows that, besides capturing a sufficiently wide variety of behaviour, this definition has the important quality of being computationally tractable for the size of data sets typically used in meteorology and climate science. Furthermore, far from being simply a mathematically neat abstraction, we argue that this topological perspective offers concrete practical benefits, for three main reasons.
To understand the first reason, it is helpful to recall, as discussed in the introduction, that the raison d'être of regimes is to understand questions of predictability across multiple timescales. An overemphasis on properties related to density (as in clustering methods) or temporal persistence may end up obfuscating analysis, not only because regime systems can have a wide variety of behaviour with respect to these notions, but, crucially, because the most salient information may be located in entirely different aspects of the system. The CdV system is an instructive example in this regard. While its classical regimes are associated with fixed points, their most striking impact is the tight, looping behaviour they generate (cf. Figure \ref{fig:cdv_thresholds}). Knowing that the system is on such a narrowly defined trajectory provides significantly more information than simply knowing that the system is in the vicinity of a fixed point. From a topological perspective, where no knowledge of fixed points is implicit, these loops are what stand out as the major feature of CdV, implying that focusing attention on such features can highlight information which is otherwise being overlooked. This potential of topological methods to obtain efficient, simplified representations of chaotic dynamics was also noted in \cite{Gokhan2020} using different ideas.
The second and third reasons relate to the technical benefits of persistent homology algorithms. Unlike many existing algorithms for regime analysis, such as $K$-means clustering, persistent homology is effectively non-prescriptive. That is, the only parameters required for the algorithm are generic to the system, such as a measure of the spatial scales of the system, as opposed to parameters that explicitly influence the diagnosed regimes, such as the choice of $K$ in $K$-means. Homological techniques are therefore particularly well-suited to studying systems where prior knowledge of regime structure is less clear. The ability of our technique to capture the regime behaviour associated to several classical systems lends confidence in its ability to locate relevant structure in such contexts, such as the real atmosphere.
Finally, we note that several studies suggest that the multimodal behaviour observed in the North Atlantic jet involves not just changes to zonal winds, but the complex interplay between winds, pressure and temperatures, in the form of meridional heat transport and baroclinicity \cite{Novak2015}. There are tantalising clues that genuine loops in the attractor might be detectable when taking this into account (cf. \cite{Novak2017}, Figures 4 and 5). It is therefore plausible that the detection of significant loops in the climate attractor requires a technique that can gracefully handle multiple dimensions of data encoding several atmospheric variables. The fact that homological computations are completely exempt from the `curse of dimensionality' (cf. Section \ref{sec:complexity}) implies that persistent homology is, in principle, such a tool. Turning this principle into realisable practice will, however, require improvements to two branches of software: software capable of carrying out more subtle bifiltrations (such as RIVET), and software capable of producing stable optimal representatives of homology classes (such as PersLoop). It is the hope of these authors that such improvements might allow for the detection of robust regime structure in the atmosphere using unprocessed, but high-dimensional, raw data.
\section*{Acknowledgements}
We thank Sayan Mandal and Tamal Dey for their assistance with the software package PersLoop, as well as Hannah Christensen for sharing Lorenz `96 data and helpful conversations.
KS was funded by a Thomas Philips and Jocelyn Keene Junior Research Fellowship in Climate Science at Jesus College, Oxford. JD is funded by NERC Grant NE/L002612/1. MC was supported by a grant from the Office of Naval Research Global.
\section{Conclusion}
To address the memory-bound limitation of LBM,
we design \added[]{a new 3D parallel memory-aware LBM algorithm} that systematically combines single copy distribution, single sweep, swap algorithm, prism traversal, and merging two collision-streaming cycles.
We also maintain thread safety and reduce the synchronization cost of the parallel implementation.
The parallel 3D memory-aware LBM outperforms state-of-the-art LBM software by up to 89.2\% on a Haswell node, 84.6\% on a Skylake node and 38.8\% on a Knights Landing node, respectively.
Our future work is to
\added[id=fu]{merge more time steps on distributed-memory systems and on GPUs}.
\bibliographystyle{splncs04}
\subsection{Parallel 3D Memory-aware LBM}
\label{sec:3D-omp-mem-aware-prism}
To support manycore systems, we choose OpenMP~\cite{OPENMP} to realize the parallel 3D memory-aware LBM algorithm~\footnote{\cite{slaughter2020task} states that when the minimum effective task granularity (METG) of a parallel runtime system is smaller than the task granularity of large-scale LBM simulations, all of these runtime systems can deliver good parallel performance.}.
Fig.\ref{fig:3D-2step-prism-omp-A} illustrates its idea on an $8\times4\times4$ cuboid, which is evenly partitioned between two threads along the X-axis (\textit{height}).
Then each thread traverses a $4\times4\times4$ sub-domain with prism stride $tile=4$.
Line 4 in Alg.\ref{alg:3D-2step-prism-omp} defines the start and end layer index of each thread's sub-domain, thus the end layers $myEndX$ are ``\textit{intersections}" (e.g., layers 4 and 8).
Fig.\ref{fig:3D-2step-prism-omp-1} shows the initial state at time step $t$.
In addition, the parallel 3D memory-aware Alg.\ref{alg:3D-2step-prism-omp} \added[]{consists of} three stages: Preprocessing, Sub-domain computation, and Post-processing.
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.205\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-1.pdf}
\caption{\small Initialization.}
\label{fig:3D-2step-prism-omp-1}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-2.pdf}
\caption{\small Stage I.}
\label{fig:3D-2step-prism-omp-2}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-3.pdf}
\caption{\small Stage II: Case 1.}
\label{fig:3D-2step-prism-omp-3}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-case-2.pdf}
\caption{\small Stage II: Case 2.}
\label{fig:3D-2step-prism-omp-case-2}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-case-0.pdf}
\caption{\small Stage II: Case 0.}
\label{fig:3D-2step-prism-omp-case-0}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-7.pdf}
\caption{\small Stage II: Case 3.}
\label{fig:3D-2step-prism-omp-7}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-12.pdf}
\caption{\small Stage II: Case 4.}
\label{fig:3D-2step-prism-omp-12}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-prism-omp-legend.pdf}
\caption{\small Legends.}
\label{fig:3D-2step-prism-omp-legend}
\end{subfigure}
\caption{\small Parallel 3D two-step memory-aware LBM on a $8\times4\times4$ cuboid.}
\label{fig:3D-2step-prism-omp-A}
\end{figure}
\begin{algorithm}[h!]
\caption{ Parallel 3D Memory-aware LBM}
\label{alg:3D-2step-prism-omp}
\scriptsize{
\begin{algorithmic} [1]
\For {iT = 0; iT $<$ N; iT += 2}
\State \textbf{\#pragma omp parallel default(shared)\{}
\State $sub\_h = lx / nthreads$; // height of each thread's sub-domain
\State myStartX = $1 + thread\_id \times sub\_h$; myEndX = $(thread\_id+1) \times sub\_h$;
\Statex \textit{/* Stage I: First collide \& revert on the intersection layer.*/}
\State $collide$ \& $revert$ on all $ly\times lz$ cells on layer iX = myEndX;
\State \textbf{\#pragma omp barrier}
\Statex \textit{/* Stage II: Main computation in each thread's sub-domain.*/}
\For {outerX = myStartX; outerX $\leq$ myEndX; outerX += tile}
\For {outerY = 1; outerY $\leq$ $ly$ + tile - 1 ; outerY += tile}
\For {outerZ = 1; outerZ $\leq$ $lz$ + 2 * (tile - 1); outerZ += tile}
\For {innerX=outerX; innerX$\leq$MIN(outerX+tile-1, myEndX); ++innerX, ++dx}
\State minY = outerY - dx; maxY = minY + tile - 1; dy = 0; \added[]{/* forward shift */}
\For {innerY=MAX(minY, 1); innerY$\leq$MIN(maxY, $ly$); ++innerY, ++dy}
\State minZ = outerZ - dx - dy; maxZ = minZ + tile - 1; \added[]{/* leftward shift */}
\For {innerZ = MAX(minZ, 1); innerZ $\leq$ MIN(maxZ, $lz$); ++innerZ}
\Statex // \textit{Case 0: First $collide$ \& $stream$ on the first row and column of each layer except the intersection layers.}
\If{innerX != myEndX \&\& (innerX == 1 or innerY == 1 or innerZ == 1)}
\State First $boundary\_cell\_comp(innerX, innerY, innerZ)$;
\State \textbf{continue;}
\EndIf
\Statex // \textit{Case 1: First $collide$ \& $stream$ on layer $myStartX$:}
\If{innerX == myStartX}
\State First $adaptive\_collide\_stream$(innerX, innerY, innerZ);
\Statex // \textit{Case 2: First $collide$ \& $stream$ on $myStartX + 1$; Second $collide$ \& $revert$ on $myStartX$:}
\ElsIf{innerX == myStartX + 1}
\State First $adaptive\_collide\_stream$(innerX, innerY, innerZ);
\State Second $collide$ \& $revert$ on (innerX-1, innerY-1, innerZ-1);
\State Handle the second $collide$ \& $revert$ of neighbors at certain boundary locations;
\Statex // \textit{Case 3: First $stream$ on layer $myEndX$; Second $collide$ \& $stream$ under one layer:}
\ElsIf{innerX == myEndX}
\State First $adaptive\_stream$(innerX, innerY, innerZ);
\State Second $adaptive\_collide\_stream$(innerX-1, innerY-1, innerZ-1);
\State $boundary\_neighbor\_handler$ (innerX, innerY, innerZ);
\Statex // \textit{Case 4: first $collide$ \& $stream$ on other layers; Second $collide$ \& $stream$ under one layer:}
\Else
\State First $adaptive\_collide\_stream$(innerX, innerY, innerZ);
\State Second $adaptive\_collide\_stream$(innerX-1, innerY-1, innerZ-1);
\State $boundary\_neighbor\_handler$(innerX, innerY, innerZ);
\EndIf
\EndFor
\EndFor
\EndFor
\EndFor
\EndFor
\EndFor
\State \textbf{\#pragma omp barrier}
\Statex \textit{/* Stage III: second $collide$ \& $stream$ on the intersection; then second $stream$ on the layer $myStartX$. */}
\State $adaptive\_collide\_stream$ at all $ly\times lz$ cells on layer iX = myEndX;
\State \textbf{\#pragma omp barrier}
\State $stream$ at all $ly\times lz$ cells on layer iX = myStartX;
\State \textbf{\}}
\EndFor
\end{algorithmic}
}
\end{algorithm}
\begin{enumerate}
\itemsep0em
\item \textbf{Stage I (Preprocessing)} \textit{line 5 in Alg.\ref{alg:3D-2step-prism-omp}}:
In Fig.\ref{fig:3D-2step-prism-omp-2}, threads 0 and 1 compute the first $collide$ and $revert$ on the ``intersection" layers 4 and 8, respectively,
and then change them to pink.
\item \textbf{Stage II (Sub-domain computation)} handles five cases from step 2 to 7.
In \textit{case 0} (\textit{lines 15$\sim$17 in Alg.\ref{alg:3D-2step-prism-omp}}), when threads 0 and 1 access the cells on the first row and column of each layer (except the ``intersection" layers),
we execute the first $boundary\_cell\_comp$ on them and change them to orange.
\item Fig.\ref{fig:3D-2step-prism-omp-3} shows \textit{case 1} (\textit{lines 18$\sim$19 in Alg.\ref{alg:3D-2step-prism-omp}}).
When threads 0 and 1 access the cells on layers $myStartX$ (iX = 1 \& 5), respectively,
we execute the $adaptive\_collide\_stream$ on them to compute at time step $t$,
and then change the boundary cells to orange and the inner cells to red.
\item Fig.\ref{fig:3D-2step-prism-omp-case-2} shows \textit{case 2} (\textit{lines 20$\sim$23 in Alg.\ref{alg:3D-2step-prism-omp}}).
When threads 0 and 1 are on layers $myStartX+1$ (iX = 2 \& 6), respectively,
we execute the first $adaptive\_collide\_stream$ at time step $t$
and change boundary cells to orange and inner cells to red.
Meanwhile, cells (5,1,1) and (1,1,1)
have collected the data dependencies needed to $collide$ at time step $t+1$,
so we execute the second $collide$ and $revert$ (but without $stream$) on them, and change them to light purple.
\item Fig.\ref{fig:3D-2step-prism-omp-case-0} shows that when continuing traversal in Prism 1,
thread 0 and 1 are on layer iX = 3 \& 6.
Since the cells traversed in this figure are in the first row and column, case 0 is used here, otherwise, case 4 is used.
\item Fig.\ref{fig:3D-2step-prism-omp-7} shows \textit{case 3} (\textit{lines 24$\sim$27 in Alg.\ref{alg:3D-2step-prism-omp}}).
When threads 0 and 1 are on the intersection layers (iX = 4 \& 8),
we execute the remaining first $stream$ at time step $t$, the $collide$ and $revert$ having been pre-processed in Stage I.
Then, once the cells one layer below (iX = 3 \& 7) have collected their data dependencies at time step $t+1$,
we execute the second $adaptive\_collide\_stream$ on them.
\item Fig.\ref{fig:3D-2step-prism-omp-12} shows \textit{case 4} (\textit{lines 28$\sim$31 in Alg.\ref{alg:3D-2step-prism-omp}}).
When threads 0 and 1 are on the other layers of their sub-domains,
we conduct the first \textit{adaptive\_collide\_stream} on (innerX, innerY, innerZ) at time step $t$,
and then the second \textit{adaptive\_collide\_stream} on (innerX-1, innerY-1, innerZ-1) at time step $t+1$.
Then we call \textit{boundary\_neighbor\_handler} to compute the neighbors of (innerX, innerY, innerZ) at certain locations at time step $t+1$.
\item \textbf{Stage III (Post-processing)} \textit{lines 33$\sim$35 in Alg.\ref{alg:3D-2step-prism-omp}}:
Firstly, since Stage I and case 3 have completed the first computation on the intersection layers,
we wrap up the second $collide$ and $stream$ on the intersections.
Secondly, since case 2 has executed the second $collide$ and $revert$ on the first layer $myStartX$ of each sub-domain, only the second $stream$ remains to be executed.
\end{enumerate}
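The sub-domain bookkeeping in lines 3$\sim$4 of Alg.\ref{alg:3D-2step-prism-omp} can be checked with a small sketch (written in Python purely for illustration; the algorithm itself is C/OpenMP, and we assume here, as in the figure, that the height is divisible by the thread count):

```python
def partition_layers(lx, nthreads):
    """Mirror lines 3-4 of the algorithm: split layers 1..lx into contiguous
    per-thread sub-domains (assuming lx is divisible by nthreads)."""
    sub_h = lx // nthreads
    return [(1 + tid * sub_h, (tid + 1) * sub_h) for tid in range(nthreads)]

# The 8x4x4 example: two threads partition height 8 along the X-axis.
domains = partition_layers(8, 2)
assert domains == [(1, 4), (5, 8)]     # myEndX = 4 and 8 are the intersection layers
covered = [x for start, end in domains for x in range(start, end + 1)]
assert covered == list(range(1, 9))    # every layer is owned by exactly one thread
```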
\textbf{How to Handle Thread Safety near Intersection Layers:}
\begin{figure}[t!]
\centering
\includegraphics[width=0.73\textwidth]{figures/3D-alg/3D-omp-dependency-3.pdf}
\caption{\small Handle thread safety on intersection layers.
}
\label{fig:3D-omp-dependency}
\end{figure}
We aim to keep thread safety and minimize the synchronization cost \added[]{during parallel executions.
To this end,} we need to carefully design the initial state of each thread \added[]{so that the majority of the computation stays in each thread's local sub-domain: when each sub-domain is large enough, threads compute locally most of the time and only synchronize at the intersection layers}.
The left part of Fig.\ref{fig:3D-omp-dependency} shows the view of Fig.\ref{fig:3D-2step-prism-omp-A} along X-Z axis, and layer 4 is the intersection layer that partitions two threads' sub-domains.
The right part shows the data dependencies near the intersection layer in two time steps.
In the figure, the red block represents Stage I of Alg.\ref{alg:3D-2step-prism-omp}, yellow blocks Stage II, and green blocks Stage~III.
The arrows indicate that data are transferred from layer A to B by using a procedure (or B depends on A).
\added[]{Three non-trivial dependencies near the intersection layers require careful handling to preserve thread safety.}
(1)
Since the swap algorithm only streams data to half of the neighbors under one layer,
the $swap\_stream$ on layer 5 ---the first layer of thread 1's sub-domain--- should be delayed after the $revert$ on layer 4 in thread 0's sub-domain.
Thus, in Stage I, we pre-process $collide$ and $revert$ at time step $t$ but without $stream$ on layer 4, since
$stream$ on layer 4 depends on the post-collision on layer 3, which has not been computed yet.
(2) In Stage II, the second $swap\_stream$ on layer 6 called by the case 4 procedure should be delayed after the second $revert$ but without $swap\_stream$ on layer 5.
This is because thread 1 cannot guarantee that thread 0 has completed the second $swap\_stream$ on layer 4.
To keep thread safety, $swap\_stream$ on layer 5 is delayed to Stage III.
(3) Thus, in Stage III, the second $swap\_stream$ on layer 5 is delayed after the second $swap\_stream$ on layer 4.
Above all,
since the \added[id=fu]{major} computation happens in Stage II within each thread's sub-domain,
we avoid the frequent ``layer-wise" thread synchronizations \added[]{that occur} in wave-front parallelism.
Besides, we only synchronize at the intersection layers every two time steps,
hence the overhead of the three \textit{barriers} in Alg.\ref{alg:3D-2step-prism-omp}
becomes much smaller.
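The saving can be made concrete with back-of-the-envelope counts (assuming, hypothetically, one synchronization per layer per time step for wave-front parallelism, versus the three barriers per merged two-step iteration of Alg.\ref{alg:3D-2step-prism-omp}):

```python
def wavefront_barriers(lx, nsteps):
    """Wave-front parallelism: assume one layer-wise synchronization
    per layer per time step (a simplifying assumption)."""
    return lx * nsteps

def memory_aware_barriers(nsteps):
    """Parallel memory-aware LBM: three barriers per merged two-step iteration."""
    return 3 * (nsteps // 2)

# Hypothetical 256-layer domain over 1000 time steps:
assert wavefront_barriers(256, 1000) == 256_000
assert memory_aware_barriers(1000) == 1_500
```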
\subsection{Sequential 3D Memory-aware LBM}
\label{sec:3D-seq-mem-aware}
We design and develop the sequential 3D memory-aware LBM (shown in Alg.\ref{alg:3D-2step-seq-prism}),
based on the latest efficient Fuse Swap LBM,
by adding two more features: merging two collision-streaming cycles to exploit temporal locality,
and introducing the prism traversal to exploit spatial locality.
Fig.\ref{fig:3D-2step-seq} shows an example of how to merge two collision-streaming cycles given a $4\times4\times4$ cube:
\begin{enumerate}
\itemsep0em
\item Fig.\ref{fig:3D-2step-1} shows the initial state of all cells at the current time step $t$.
Green cells are on boundaries, and blue cells are located in the inner bulk domain.
\item In Fig.\ref{fig:3D-2step-2}, we compute the first $collide$, $revert$ and $boundary\_swap\_stream$ row by row
on the bottom layer iX = 1.
After a cell completes the first computation, we change it to orange.
\item In Fig.\ref{fig:3D-2step-3}, we compute the first $collide$ and $boundary\_swap\_stream$ row by row up to cell (2,2,1) on the second layer iX = 2.
\item In Fig.\ref{fig:3D-2step-4}, cell (2,2,2) completes its first $collide$ and $swap\_stream$, so we change it to red since it is an inner cell. Then we observe that cell (1,1,1) is ready for the second $collide$, so we change it to yellow.
\item In Fig.\ref{fig:3D-2step-5}, we execute the second $collide$ and $boundary\_swap\_stream$ on cell (1,1,1), and change it to purple.
\end{enumerate}
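The merging of two collision-streaming cycles can be sketched on a 1D toy stencil (a hypothetical local rule standing in for the actual collide-and-stream operators, not D3Q19): the second update at cell $i-1$ is issued as soon as the first update at cell $i$ completes, mirroring the trailing (innerX-1, innerY-1, innerZ-1) pattern, and a short post-processing step finishes the last interior cell, analogous to Stage III.

```python
def step(f, g):
    """One full sweep: new[i] = g(f[i-1], f[i], f[i+1]); boundary cells are copied."""
    n = len(f)
    return [f[i] if i in (0, n - 1) else g(f[i - 1], f[i], f[i + 1]) for i in range(n)]

def two_steps_fused(f, g):
    """Merge two sweeps (n >= 4): the first update runs at cell i, the second
    trails at cell i-1 as soon as its inputs f1[i-2..i] are ready; the last
    interior cell is finished in a post-processing step."""
    n = len(f)
    f1, f2 = f[:], f[:]                                   # after step t and step t+1
    for i in range(1, n - 1):
        f1[i] = g(f[i - 1], f[i], f[i + 1])               # first update at cell i
        if i >= 2:
            f2[i - 1] = g(f1[i - 2], f1[i - 1], f1[i])    # second update at cell i-1
    f2[n - 2] = g(f1[n - 3], f1[n - 2], f1[n - 1])        # post-processing: last cell
    return f2

g = lambda a, b, c: 0.25 * a + 0.5 * b + 0.25 * c         # toy local rule, not D3Q19
f0 = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 0.0]
assert two_steps_fused(f0, g) == step(step(f0, g), g)     # fused == two separate sweeps
```

The fused sweep produces bit-identical results to two separate sweeps while touching each value only once per pair of time steps, which is the temporal-locality gain the algorithm targets.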
\begin{figure}[h!]
\captionsetup{justification=raggedright,format=hang}
\centering
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-1.pdf}
\caption{\scriptsize Initialization.}
\label{fig:3D-2step-1}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-2.pdf}
\caption{\scriptsize First computation on layer iX=1.}
\label{fig:3D-2step-2}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-3.pdf}
\caption{\scriptsize First computation on layer iX=2.}
\label{fig:3D-2step-3}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-4.pdf}
\caption{\scriptsize First computation on cell (2,2,2).}
\label{fig:3D-2step-4}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-5.pdf}
\caption{\scriptsize Second computation on cell (1,1,1).}
\label{fig:3D-2step-5}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-2step-legend-2.pdf}
\caption{\scriptsize Legends.}
\label{fig:3D-2step-6}
\end{subfigure}
\caption{\small 3D sequential two-step memory-aware LBM on a $4\times4\times4$ cube lattice.}
\label{fig:3D-2step-seq}
\end{figure}
\begin{figure}[b!]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-prism-1.pdf}
\caption{\small Layer iX=1.}
\label{fig:3D-seq-prism-1}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-prism-2.pdf}
\caption{\small Layer iX=2.}
\label{fig:3D-seq-prism-2}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-prism-3.pdf}
\caption{\small Layer iX=3.}
\label{fig:3D-seq-prism-3}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-prism-4.pdf}
\caption{\small Layer iX=4.}
\label{fig:3D-seq-prism-4}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{figures/3D-alg/prism-9-10.pdf}
\caption{\small Prisms 9 and 10 are parallelepiped-shaped. Layer iX=4 is on the top.}
\label{fig:3D-seq-prism-9-10}
\end{subfigure}
\quad
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=0.4\textwidth]{figures/3D-alg/3D-swap-stream-1D-2.pdf}
\caption{Planar slice when cutting Fig.\ref{fig:3D-swap-stream} (swap stream operation) along Y-Z plane.}
\label{fig:3D-swap-stream-yz}
\end{subfigure}
\caption{Sequential 3D prism traversal on a $4 \times 16 \times 16$ cuboid box.}
\label{fig:3D-seq-prism}
\end{figure}
To further increase data reuse, we optimize the algorithm's spatial locality by designing a ``prism traversal'' method, so named because the traversal shape forms a 3D pyramid prism or a parallelepiped prism.
We use an example to explain its access pattern in a $4 \times 16 \times 16$ cuboid with stride $tile=4$.
Fig.\ref{fig:3D-seq-prism-1}$\sim$\ref{fig:3D-seq-prism-4} show the four separate $16 \times 16$ layers of the cuboid from bottom to top.
The cells with the same number on the four layers constitute a \textit{prism} (e.g., the cells numbered 1 in Fig.\ref{fig:3D-seq-prism-1}$\sim$\ref{fig:3D-seq-prism-4} form the pyramid-shaped ``Prism 1'').
Within each prism, we still go first along the Z-axis, then along the Y-axis, and finally upward along the X-axis.
We then traverse prism-wise from Prism 1 to Prism 30.
Finally, if a cuboid is much larger than this example,
the majority of prisms have ``parallelepiped'' shapes like Prisms 9 and 10 in Fig.\ref{fig:3D-seq-prism-9-10}.
The planar slice of a prism is either a triangle or a parallelogram because of the $swap\_stream$ operation.
Cutting Fig.\ref{fig:3D-swap-stream} ($swap\_stream$) along the Y-Z plane yields the planar slice shown in Fig.\ref{fig:3D-swap-stream-yz}.
We observe that a cell (star) swaps with its lower-right neighbor (orange) at direction 9.
In other words, when the orange cell swaps with the row above,
its neighbor ``shifts'' one cell \textit{leftward}.
Similarly, if we cut Fig.\ref{fig:3D-swap-stream} ($swap\_stream$) along the X-Y plane,
when a cell swaps data with the row above, its neighbor ``shifts'' one cell \textit{forward}.
Thus, when we traverse $tile$ cells along the Z-axis at row $iY$,
they can swap with $tile$ cells shifted one cell leftward at row $iY+1$,
which yields the parallelograms in Fig.\ref{fig:3D-seq-prism-1}$\sim$\ref{fig:3D-seq-prism-4}.
When the shift encounters domain boundaries, we truncate the parallelograms and obtain isosceles right triangles or partial parallelograms. Finally, we can safely combine ``prism traversal'' with merging two collision-streaming cycles, since under the above traversal order the cell at the left-forward-down corner is already in a post-collision state and ready for the second computation.
\begin{algorithm}[h!]
\caption{3D Sequential Memory-aware LBM}\label{alg:3D-2step-seq-prism}
\scriptsize{
\begin{algorithmic} [1]
\State tile := stride of the prism traversal
\For {iT = 0; iT $<$ N; iT += 2}
\For {outerX = 1; outerX $\leq$ $lx$; outerX += tile}
\For {outerY = 1; outerY $\leq$ $ly$ + tile - 1; outerY += tile}
\For {outerZ = 1; outerZ $\leq$ $lz$ + 2* (tile - 1); outerZ += tile}
\For {innerX=outerX; innerX $\leq$ MIN(outerX+tile-1, $lx$); ++innerX, ++dx}
\State minY = outerY - dx; maxY = minY + tile - 1; dy = 0; \added[]{/* forward shift */}
\For {innerY=MAX(minY, 1); innerY $\leq$ MIN(maxY, $ly$); ++innerY, ++dy}
\State minZ = outerZ - dx - dy; maxZ = minZ + tile - 1; \added[]{/* leftward shift */}
\For {innerZ=MAX(minZ, 1); innerZ $\leq$ MIN(maxZ, $lz$); ++innerZ}
\Statex \hspace*{1cm}/* (1) First computation at time step $t$. */
\State $adaptive\_collide\_stream$(innerX, innerY, innerZ);
\Statex \hspace*{1cm}/* (2) Second computation at time step $t+1$. */
\If{innerX $>$ 1 \&\& innerY $>$ 1 \&\& innerZ $>$ 1}
\State $adaptive\_collide\_stream$(innerX-1, innerY-1, innerZ-1);
\EndIf
\Statex \hspace*{1cm}/* (3) Second computation of neighbors at certain locations. */
\State $boundary\_neighbor\_handler$(innerX, innerY, innerZ);
\EndFor
\EndFor
\EndFor
\EndFor
\EndFor
\EndFor
\State Second $collide$, $revert$ \& \textit{boundary\_swap\_stream} on the top layer iX = $lx$.
\EndFor
\Function {$boundary\_cell\_comp$}{iX, iY, iZ}
\State $collide$, $revert$, \& \textit{boundary\_swap\_stream} on (iX, iY, iZ) to half of its neighbors;
\EndFunction
\Function {$adaptive\_collide\_stream$}{iX, iY, iZ}
\If{(iX, iY, iZ) is on the boundary}
\State $boundary\_cell\_comp$(iX, iY, iZ);
\Else
\State $collide$ \& $swap\_stream$ on (iX, iY, iZ) to half of its neighbors;
\EndIf
\EndFunction
\Function {$boundary\_neighbor\_handler$}{iX, iY, iZ}
\Statex // Handle the second computation of (iX, iY, iZ)'s neighbors at certain locations.
\If{iZ $==$ $lz$} // (iX, iY, iZ) is the last cell of a row.
\State $boundary\_cell\_comp$ (iX-1, iY-1, iZ);
\EndIf
\If{iY $==$ $ly$ \&\& iZ $>$ 1} // (iX, iY, iZ) is in the last row of a layer.
\State $boundary\_cell\_comp$(iX-1, iY, iZ-1);
\EndIf
\If{iY $==$ $ly$ \&\& iZ $==$ $lz$} // (iX, iY, iZ) is the last cell on a layer.
\State $boundary\_cell\_comp$(iX-1, iY, iZ);
\EndIf
\EndFunction
\end{algorithmic}
}
\end{algorithm}
Alg.\ref{alg:3D-2step-seq-prism} presents the sequential 3D memory-aware LBM.
Lines $6\sim10$ traverse the domain prism-wise with stride $tile$.
Lines $11\sim14$ merge the computation of two time steps.
The first $stream$ starting from the bottom layer iX = 1 in Line 11 is necessary due to the data dependency of the second computation.
In particular, the if-statement in Line 13 ensures that the cell to compute at time step $t+1$ is in a post-collision state, regardless of whether D3Q15, D3Q19, D3Q27, or an extended lattice model is used.
For simplicity, Lines 16$\sim$29 define three helper functions.
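The boundary truncation in Lines $7\sim10$ reduces to simple index clamping. The following C++ sketch (the names are ours, not taken from the implementation) computes the clamped Z-range of one prism row:

```cpp
#include <algorithm>

// Clamped [lo, hi] Z-range of one prism row: rows shift one cell leftward
// per step in X (dx) and per step in Y (dy), then are truncated at the
// domain boundary [1, lz], yielding the parallelograms and triangles of
// the prism traversal.
struct Range { int lo, hi; };

Range prismRowZ(int outerZ, int dx, int dy, int tile, int lz) {
    int minZ = outerZ - dx - dy;                       // leftward shift
    int maxZ = minZ + tile - 1;
    return { std::max(minZ, 1), std::min(maxZ, lz) };  // truncation
}
```

For example, with $tile=4$ and $lz=16$, the row at $outerZ=1$ with $dx=dy=1$ is truncated to the range $[1,2]$, one of the truncated shapes visible in Fig.\ref{fig:3D-seq-prism-1}$\sim$\ref{fig:3D-seq-prism-4}.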
\subsection{Performance of Parallel 3D Memory-aware LBM}
\label{subsec:3D-strong-scaling}
Given $N$ cores, Palabos LBM solvers partition the simulation domain evenly along three axes among $N_{z} \times N_{y} \times N_{x} = N$ MPI processes,
following the underlying memory layout of cells along the Z-axis, then Y, and finally X.
By contrast, our 3D memory-aware LBM partitions a domain only along the X-axis among $N$ OpenMP threads.
Hence, Palabos LBM solvers have a smaller Y-Z layer size per core than our algorithm and enjoy closer memory page alignment, especially for a large domain.
To exclude the effect of the different partitioning methods, while the input of Palabos LBM solvers remains a cube, our 3D memory-aware LBM takes two different inputs.
First, it takes the ``equivalent dimension'' of those cubes as input, such that
a thread in our algorithm and a process in Palabos compute sub-domains of the same dimension after their respective partitioning.
Second, it simply takes the identical cube input.
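Concretely, if Palabos factorizes the $N$ processes as $N_x \times N_y \times N_z$ over a cube of edge $L$, the equivalent dimension can be computed as below (a C++ sketch; the function name is ours):

```cpp
// Given a cube edge L, a core count N, and Palabos' process grid
// (Nx, Ny, Nz) with Nx*Ny*Nz == N, return the "equivalent dimension"
// (lx, ly, lz) such that one OpenMP thread (partitioning along X only)
// computes a sub-domain of the same shape L/Nx x L/Ny x L/Nz as one
// Palabos MPI process.
struct Dim3 { int lx, ly, lz; };

Dim3 equivalentInput(int L, int N, int Nx, int Ny, int Nz) {
    return { N * (L / Nx), L / Ny, L / Nz };
}
```

For $L=840$ and $N=6$ cores with a $1 \times 2 \times 3$ process grid, this yields $5040 \times 420 \times 280$, matching Tab.\ref{tbl:3D-fair-cube-input-840}.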
\begin{table}[b!]
\renewcommand{\arraystretch}{1.0}
\caption{\small Equivalent input used by 2-step prism LBM when the input of Palabos LBM solvers is a cube with $L = 840$ on a Haswell node.}
\centering
{\scriptsize
\begin{tabularx}{0.75\textwidth}{c | c | c | c | c | c | c | c | c |c |c |c}
\thickhline
Cores & 1 &2 &4 &6 &8 &10 &12 &14 &20 &24 &28\\
\hline
$lx$ (height) &840 &1680 &3360 &5040 &3360 &8400 &5040 &11760 &8400 &10080 &11760 \\
$ly$ (width) &840 &840 &420 &420 &420 &420 &420 &420 &420 &420 &420 \\
$lz$ (length) &840 &420 &420 &280 &420 &168 &280 &120 &168 &140 &120 \\
\thickhline
\end{tabularx}
}
\label{tbl:3D-fair-cube-input-840}
\end{table}
Fig.\ref{fig:3D-strong-comb-cube} shows the strong scalability of three LBM algorithms on three types of compute nodes. The inputs of the Palabos LBM solvers are cubes with edge size $L$ ranging from small to large.
Tab.\ref{tbl:3D-fair-cube-input-840} gives an example of the equivalent input used by 3D memory-aware LBM
when Palabos LBM solvers use a cube with $L = 840$ on a Haswell node.
We observe that the 2-step prism LBM scales efficiently and always achieves the best performance in all cases.
(1) When using the equivalent input of cubes,
for small-scale cubes ($L=112, 192, 272$) in Figs.\ref{fig:3D-strong-comb-haswell-112}, \ref{fig:3D-strong-comb-skx-192}, and \ref{fig:3D-strong-comb-knl-272},
the 3D memory-aware LBM (green legend) is faster than the second-fastest Palabos (Fuse Prism) (orange legend) by up to 89.2\%, 84.6\%, and 38.8\% \added[id=fu]{on the Haswell, Skylake, and KNL node, respectively.
The missing L3 cache on KNL prevents a speedup similar to the other two CPUs.}
In Figs.\ref{fig:3D-strong-comb-haswell-448}, \ref{fig:3D-strong-comb-skx-576}, and \ref{fig:3D-strong-comb-knl-476}, for the middle-scale cubes ($L=448, 576, 476$),
it is still faster than Palabos (Fuse Prism) by up to 37.9\%, 64.2\%, and 28.8\% on the three CPU nodes, respectively.
\added[id=fu]{Due to the unbalanced numbers of processes assigned along the three axes,
we observe that the performance of Palabos Fuse and Fuse Prism drops at some core counts.}
In Figs.\ref{fig:3D-strong-comb-haswell-840}, \ref{fig:3D-strong-comb-skx-960}, and \ref{fig:3D-strong-comb-knl-680}, for the large-scale cubes ($L=840, 960, 680$),
it is still faster than Palabos (Fuse Prism) by up to 34.2\%, 34.2\%, and 31.8\%, respectively.
(2) When using the identical cube input, although our 3D memory-aware LBM has larger Y-Z layer sizes,
it is still faster than Palabos (Fuse Prism), albeit with smaller speedups than before, i.e., by up to 21.1\%, 54.7\%, and 30.1\% on the three CPU nodes, respectively.
\added[id=fu]{The smaller speedup suggests, as future work, partitioning a 3D domain along all three axes to exploit closer memory page alignment with smaller Y-Z layer sizes.}
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/bridges_comb_dim_112.pdf}
\caption{\small Haswell $L = 112$.}
\label{fig:3D-strong-comb-haswell-112}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/bridges_comb_dim_448.pdf}
\caption{\small Haswell $L = 448$.}
\label{fig:3D-strong-comb-haswell-448}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/bridges_comb_dim_840.pdf}
\caption{\small Haswell $L = 840$.}
\label{fig:3D-strong-comb-haswell-840}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/skx/stampede2_skx_comb_dim_192.pdf}
\caption{\small Skylake $L = 192$.}
\label{fig:3D-strong-comb-skx-192}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/skx/stampede2_skx_comb_dim_576.pdf}
\caption{\small Skylake $L = 576$.}
\label{fig:3D-strong-comb-skx-576}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/skx/stampede2_skx_comb_dim_960.pdf}
\caption{\small Skylake $L = 960$.}
\label{fig:3D-strong-comb-skx-960}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/knl/stampede2_knl_comb_dim_272.pdf}
\caption{\small KNL $L=272$.}
\label{fig:3D-strong-comb-knl-272}%
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/knl/stampede2_knl_comb_dim_476.pdf}
\caption{\small KNL $L=476$.}
\label{fig:3D-strong-comb-knl-476}%
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/knl/stampede2_knl_comb_dim_680.pdf}
\caption{\small KNL $L=680$.}
\label{fig:3D-strong-comb-knl-680}%
\end{subfigure}
\caption{\small Strong scalability performance on three types of compute nodes.
``2-step prism eqv'' = parallel 3D memory-aware LBM taking the equivalent input of cubes.}
\label{fig:3D-strong-comb-cube}%
\end{figure}
\section{Experimental Evaluation}
\label{sec:3D-LBM-exp}
In this section, we first present the \added[]{experimental setup} and validations on our 3D memory-aware LBM.
Then we evaluate its sequential and parallel performance.
\subsection{Experiment Setup and Verification}
\label{subsec:3D-setup}
The details of our experimental hardware platforms are provided in Table~\ref{tbl:arch}.
To evaluate the performance of our new algorithms,
we use the 3D lid-driven cavity flow \added[]{simulation} as
\added[]{an example}.
The 3D cavity has a dimension of $lz \times ly \times lx$, and its top lid moves with a constant velocity $v$.
Our 3D memory-aware LBM algorithms have been implemented as C++ template functions, \added[]{which are then added to the Palabos framework.}
\added[]{For verification, we construct a $cavity$ with the same procedure, and then separately execute four algorithms on it, i.e., the Palabos solvers $fuse()$ and $fuse\_prism()$ for $N$ time steps,
and our memory-aware algorithms $two\_step\_prism()$ and $two\_step\_prism\_omp()$ for $N/2$ time steps.}
\added[]{Then, we compute the velocity norm of each cell
and write it to four separate logs.
Finally, we verify that our algorithms produce the same result} as Palabos
\added[]{for guaranteeing} \added[]{software correctness.}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.0}
\caption{\small Details of our experimental platforms.}
\centerline{
{\scriptsize
\begin{tabularx}{0.883\textwidth}{|c | c | c | c|}
\thickhline
& {\it Bridges at PSC} & \multicolumn{2}{c|}{\it Stampede2 at TACC} \\
\thickhline
Microarchitecture & {\it Haswell'14} & {\it Skylake'17} & {\it Knights Landing'16}\\
\hline
Intel CPU product code & Xeon E5-2695v3 & Xeon Platinum 8160 & Xeon Phi 7250\\
Total \# Cores/node & 28 on 2 sockets & 48 on 2 sockets & 68 on 1 socket\\
Clock rate (GHz) & 2.1$\sim$3.3 & 2.1 nominal(1.4$\sim$3.7) & 1.4\\
L1 cache/core & 32KB & 32KB & 32KB \\
L2 cache/core & 256KB & 1MB & 1MB per 2-core tile\\
L3 cache/socket & 35MB & 33MB (Non-inclusive) & 16GB MCDRAM\\
DDR4 Memory(GB)/node & 128 (2133 MHz) & 192 (2166 MHz) & 96 (2166 MHz) \\
\hline
Compiler & icc/19.5 & \multicolumn{2}{c|}{icc/18.0.2}\\
AVX extension& AVX2 & \multicolumn{2}{c|}{AVX512}\\
\thickhline
\end{tabularx}
}
}
\label{tbl:arch}
\end{table}
\subsection{Performance of Sequential 3D Memory-aware LBM}
\label{subsec:3D-seq}
The first set of experiments with 3D cavity flows compares the sequential performance of four different LBM algorithms: the Fuse Swap LBM (with / without prism traversal) and the Two-step Memory-aware LBM (with / without prism traversal).
For simplicity, we abbreviate them as fuse LBM, fuse prism LBM, 2-step LBM, and 2-step prism LBM, respectively.
The problem inputs are 3D cubes with edge sizes $L = 64 \sim 896$.
\added[id=fu]{Every algorithm with a prism stride configuration is executed five times, and the average MFLUPS (millions of fluid lattice node updates per second) is calculated}.
\added[id=fu, comment={``we use computer time rather than human time to
search a space of code variations for a fixed problem"}]{For the ``prism" algorithms, different prism strides (ranging from 8, 16, 32, ..., to 448)
are tested,
and we select the best performance achieved.}
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/bridges_seq_cube.pdf}
\caption{\small Haswell.}
\label{fig:3D-seq-haswell}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/skx/stampede2_skx_seq_cube.pdf}
\caption{\small Skylake.}
\label{fig:3D-seq-skx}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/knl/stampede2_knl_seq_cube.pdf}
\caption{\small Knights Landing.}
\label{fig:3D-seq-knl}
\end{subfigure}
\caption{\small Sequential performance using four LBM algorithms on three types of CPUs.}
\label{fig:3D-seq}
\end{figure}
Fig.\ref{fig:3D-seq} shows the sequential performance on three types of CPUs.
For small edge sizes (e.g., $L = 64, 128$), the 2-step LBM is the fastest.
But when $L\geq256$, the 2-step prism LBM performs best
and is up to 18.8\% and 15.5\% faster than the second-fastest Palabos (Fuse Prism LBM solver) on Haswell and Skylake, respectively.
Since KNL does not have an L3 cache,
the 2-step prism LBM is only 1.15\% faster there than Palabos (Fuse Prism LBM solver).
We observe that the performance of the algorithms without prism traversal starts to drop when $L\geq384$.
Since \added[]{the} swap algorithm streams to half of its neighbors on its own layer and the layer below,
it touches $23.9\,\mathrm{MB/layer} \times 2\,\mathrm{layers} = 47.8\,\mathrm{MB}$ (\added[]{when $L=384$), which exceeds the L3 cache size} (35 MB per socket on Haswell).
Thus we need to exploit spatial locality by adding prism traversal.
Consequently, on Haswell and Skylake,
the fuse LBM is improved by up to 71.7\% and 58.2\%, respectively,
and the 2-step LBM by up to 28.6\% and 50.4\%, respectively.
When only merging two steps is added,
the 2-step LBM is faster than Palabos (Fuse) by up to 53.3\% on Haswell and 20.5\% on Skylake.
Hence, we conclude that both prism traversal and merging two steps significantly increase cache reuse on large domains.
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/remora/640_NUMA_USED.pdf}
\caption{\small $L=640$.}
\label{fig:3D-seq-haswell-used-640}
\end{subfigure}
\begin{subfigure}[t]{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/remora/768_NUMA_USED.pdf}
\caption{\small $L=768$.}
\label{fig:3D-seq-haswell-used-768}
\end{subfigure}
\begin{subfigure}[t]{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-exp/haswell/remora/896_NUMA_USED.pdf}
\caption{\small $L=896$.}
\label{fig:3D-seq-haswell-used-896}
\end{subfigure}
\caption{Memory usage on two sockets of a Haswell node.}
\label{fig:3D-seq-numa-haswell}
\end{figure}
In Fig.\ref{fig:3D-seq}, we observe that the performance of all algorithms starts to drop when $L \geq 768$ on Haswell and $L = 896$ on Skylake.
To find the reason,
we use Remora~\cite{rosales2015remora} to monitor the memory usage on each socket of the Haswell node.
As $L$ increases from 640 to 896, the memory usage on socket 1 (red area) in Fig.\ref{fig:3D-seq-haswell-used-640}$\sim$\ref{fig:3D-seq-haswell-used-896} grows from 2.4 GB to 63.9 GB.
When the memory usage exceeds the 64 GB DRAM capacity per socket on \added[]{the Haswell node, foreign NUMA memory accesses are involved, and thus the sequential performance degrades.} Similar results also occur on the Skylake node.
However, because the KNL node has only one socket, the performance on KNL does not drop.
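A back-of-the-envelope estimate makes the spill plausible. Assuming a single copy of 19 double-precision populations per cell and ignoring any auxiliary per-cell data (a simplification of the real Palabos cell layout), the distribution storage alone is:

```cpp
#include <cstdint>

// Rough lower bound on the particle-distribution storage of an L^3 D3Q19
// domain with a single copy of distributions (19 doubles per cell).
std::uint64_t domainBytes(std::uint64_t L) {
    return L * L * L * 19ull * sizeof(double);
}
```

For $L=896$ this lower bound is already about 109 GB, well beyond the 64 GB DRAM of one Haswell socket, so the allocation necessarily spans both NUMA domains.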
\input{narrative/3D-exp-omp}
\section{Baseline 3D LBM Algorithm}
\label{sec:3D-fundamental}
The baseline 3D LBM algorithm in this paper, called \textit{Fuse Swap LBM} and shown in Alg.\ref{alg:3D-fuse}, combines three features: single-copy distribution, the swap algorithm, and loop fusion.
We choose the swap algorithm~\cite{LBM-swap} since it is simpler than the other single-copy distribution methods
and can efficiently use simple index arithmetic to access neighbors in a matrix-based memory organization.
The swap algorithm replaces the copy operations between a cell and its neighbors in the streaming kernel with a value swap, so it is in-place and does not require a second copy.
But when combining it with loop fusion,
we must guarantee that the populations of the neighbors involved in the swap are already in a post-collision state to maintain thread safety~\cite{latt2007technical}.
The work-around is to adjust the traversal order of the simulation domain with a predefined order of discrete cell velocities~\cite{latt2007technical}.
Thus each cell can stream its post-collision data by swapping values with the half of its neighbors pointed to by the ``red'' arrows (directions $1 \sim 9$ for D3Q19 in Fig.\ref{fig:3D-swap-stream}),
if those neighbors are already post-collision and have ``reverted'' their distributions.
We define this operation as ``$swap\_stream$''.
The ``\textbf{$revert$}'' operation in Fig.\ref{fig:3D-collide-revert} lets a cell locally swap its post-collision distributions to opposite directions.
To make the Fuse Swap LBM more efficient, \added[]
{Palabos pre-processes and post-processes the boundary cells on the bounding box at Lines 2 and 7, respectively, so that it can remove the boundary-checking operation in the inner bulk domain.
Thus Alg.\ref{alg:3D-fuse} is divided into three stages in every time step as follows.}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-swap-stream.pdf}
\caption{\small $swap\_stream$}
\label{fig:3D-swap-stream}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-collide-revert.pdf}
\caption{\small $revert$}
\label{fig:3D-collide-revert}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/3D-alg/3D-fuse.pdf}
\caption{\small Three stages computation.}
\label{fig:3D-fuse}
\end{subfigure}
\caption{Two operations and three stages computation used in sequential 3D Fuse Swap LBM.}
\label{fig:3D-swap-stream-revert}
\end{figure}
\begin{algorithm}[h!]
\caption{3D Fuse Swap LBM}
\label{alg:3D-fuse}
\scriptsize{
\begin{algorithmic} [1]
\For {iT = 0; iT $<$ N; ++iT}
\State Stage I: $collide$ and $revert$ on the bounding box, i.e., 6 surfaces of cuboid (1,1,1) to ($lx,ly,lz$)
\Statex // Stage II: bulk domain computation
\For {iX = 2; iX $\leq$ $lx-1$; ++iX}
\For {iY = 2; iY $\leq$ $ly-1$; ++iY}
\For {iZ = 2; iZ $\leq$ $lz-1$; ++iZ}
\State $collide$ \& $swap\_stream$ on (iX, iY, iZ) to half of its neighbors
\EndFor
\EndFor
\EndFor
\State Stage III: $boundary\_swap\_stream$ on the bounding box
\EndFor
\end{algorithmic}
}
\end{algorithm}
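As a simplified sketch of the two operations underlying Alg.\ref{alg:3D-fuse}, the following C++ fragment illustrates $revert$ and a single $swap\_stream$ exchange on a cell's population array. It assumes, hypothetically, that opposite directions are stored at indices $i$ and $i+9$ for $i = 1..9$; this mirrors but need not match the exact D3Q19 indexing in Palabos.

```cpp
#include <utility>

// D3Q19: population 0 is the rest population; we assume (for this sketch)
// that directions i and i+9 (i = 1..9) are opposite pairs.
constexpr int kHalf = 9;

// revert: locally swap post-collision populations to opposite directions.
void revert(double* f) {
    for (int i = 1; i <= kHalf; ++i)
        std::swap(f[i], f[i + kHalf]);
}

// swap_stream between a cell and one already-reverted neighbor along
// direction i: a single std::swap transfers both cells' post-collision
// values, in place, with no second lattice copy.
void swapWithNeighbor(double* fCell, double* fNbr, int i) {
    std::swap(fCell[i], fNbr[i + kHalf]);
}
```

This is why the traversal order matters: the neighbor must already have collided and reverted before the swap delivers valid data.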
\section{The 3D Memory-aware LBM Algorithm}
\label{sec:3D-mem-aware}
\input{narrative/3D-LBM-mem-aware}
\input{narrative/3D-LBM-mem-aware-omp}
\section{Introduction}
\label{sec:intro-motivation}
Computational Fluid Dynamics (CFD) simulations
have revolutionized the design process in various scientific, engineering, industrial, and medical fields.
The current Reynolds averaged Navier-Stokes (RANS) methods can solve steady viscous transonic and supersonic flows, but are not able to reliably predict turbulent separated flows \cite{witherden2017future}.
The lattice Boltzmann method (LBM) is a young and evolving approach to solving these problems in the CFD community \cite{coreixas2019comprehensive}.
It originates from a mesoscale description of the fluid (based on the Boltzmann equation), and directly incorporates physical terms
to represent complex physical phenomena, such as multi-phase flows, reactive and suspension flows, etc.
Besides, many {\it collision models} have been developed for LBM to improve its stability to the second order of numerical accuracy when simulating high-Reynolds-number flows~\cite{coreixas2019comprehensive}.
\added[]{However, it is challenging to achieve high performance for LBM algorithms, since LBM has large data storage costs and is highly memory-bound on current architectures \cite{succi2019towards}.}
Building on our prior work \cite{fu2018designing} on merging multiple collision-streaming cycles (or time steps) in 2D,
this study aims to extend the memory-awareness idea to parallel 3D LBM to optimize data reuse.
Although moving from 2D to 3D might seem straightforward,
it is significantly more difficult to design an efficient 3D memory-aware LBM algorithm.
In this paper, we target the following three main challenges.
(1) As geometries change from 2D to 3D, the required data storage increases from $O(N^2)$ to $O(N^3)$,
and the data dependencies of the lattice model become much more complicated.
Single-copy distribution methods exist to reduce the data storage cost by half, but they require following a particular traversal order.
Can we combine the best single-copy distribution method with our idea of merging multiple collision-streaming cycles to design a 3D memory-aware LBM with higher performance?
(2) If the combination is possible, since normal 3D tiling~\cite{rivera2000tiling} does not apply to this case, how can we additionally exploit spatial locality?
(3) When designing the parallel 3D memory-aware LBM,
a non-trivial interaction occurs at the boundaries between threads:
how can we guarantee thread safety and avoid race conditions?
Although some existing works use wavefront parallelism to exploit temporal locality, they insert frequent layer-wise synchronizations among threads at every time step \cite{liu2017accelerating,wellein2009efficient}. In this paper, we aim to reduce the synchronization cost among parallel threads.
To the best of our knowledge, this paper makes the following contributions.
First, we design both sequential and parallel 3D memory-aware LBM
algorithms that combine five features: single-copy distribution, loop fusion (single sweep), swap algorithm, prism traversal, and merging two collision-streaming cycles.
Second, we present a parallelization method to keep the thread safety on the intersection layers among threads and reduce the synchronization cost in parallel.
Finally, two groups of experiments are conducted on three different manycore architectures, followed by performance analysis.
The first group of sequential experiments (i.e., using a single CPU core) shows that our memory-aware LBM outperforms the state-of-the-art \added[]{Palabos (Fuse Swap Prism LBM solver)}\cite{Palabos} by up to 19\% on a Haswell CPU and 15\% on a Skylake CPU.
The second group evaluates the performance of the parallel algorithms.
The experimental results show that our parallel 3D memory-aware LBM outperforms \added[]{Palabos} by up to 89\% on a Haswell node with 28 cores,
85\% on a Skylake node with 48 cores,
and 39\% on a Knights Landing node with 68 cores.
\section{Related Work}
\label{sec:LBM-relate}
Existing research on designing efficient LBM algorithms mainly focuses on optimizing memory accesses within one time step of LBM due to its iterative nature.
For instance, a few LBM algorithms (e.g., swap~\cite{LBM-swap,valero2017reducing}, AA~\cite{LBM-AA}, shift~\cite{LBM-shift}, and
esoteric twist~\cite{LBM-esoteric-twist},
etc.) retain a single copy of the particle distribution data \added[]{(i.e., ``single-copy distribution'')},
and optimize the memory access pattern in the LBM streaming kernel, but each of the algorithms needs to follow a set of constraints
\added[id=fu, comment={Now We have more space to insert all these contents into the paper.}]{(e.g., swap requires a predefined order of discrete cell velocities~\cite{latt2007technical}, AA requires distinguishing between even and odd time steps, shift requires extra storage~\cite{latt2007technical}, esoteric twist requires only one version of the LB kernel~\cite{wittmann2013comparison}, etc.)}.
\cite{vardhan2019moment} uses a moment-based representation with an extra pseudo distribution domain to further reduce the storage cost.
Some works hide the inter-process communication cost on multicore accelerators \cite{crimi2013early},
and achieve large-scale parallelization on HPC systems \cite{randles2013performance} and GPU \cite{LBM-AA}.
\added[id=fu]{\cite{zeiser2008introducing} introduces a cache oblivious blocking 3D LBM algorithm,
but it has an irregular parallelism scheme due to its recursive algorithm design.}
In summary, the above methods focus on optimizations within one time step.
\added[]{Differently, our 3D memory-aware LBM aims to adopt the efficient single-copy distribution scheme, and
design new methodologies to merge two collision-streaming cycles to explore both temporal and spatial data locality at the same time for achieving higher performance.}
Another category of works accelerates LBM via
wavefront parallelism,
which generally groups many threads to successively compute on the same spatial domain.
\cite{liu2017accelerating} presents a shared-memory wavefront 2D LBM together with loop fusion, loop bump, loop skewing, loop tiling, and semaphore operations.
But due to the high synchronization cost incurred by the many implicit barriers in wavefront parallelism,
its parallel performance achieves only a 10\% speedup on average.
\cite{habich2009enabling} presents a shared-memory wavefront 3D LBM with two-copy distributions,
and does not use spatial locality techniques such as loop fusion and loop blocking.
\cite{wellein2009efficient} presents a shared-memory wavefront 3D Jacobi approach together with spatial blocking.
\added[]{It uses two-copy distributions and has simpler 6-neighbor dependencies (rather than the 19 or 27 neighbors in 3D LBM).}
\added[id=fu]{\cite{malas2015multicore} combines wavefront parallelism with diamond tiling.}
By contrast, our 3D memory-aware LBM does not use wavefront parallelism, \added[]{but judiciously} uses only three lightweight synchronization barriers every two collision-streaming cycles.
\added[]{In addition,} we partition the simulation domain and assign a local sub-domain to every thread,
rather than having all threads work on the same sub-domain as in wavefront parallelism.
\added[]{In each sub-domain, a thread in our algorithm computes multiple time steps at once,
rather than one thread computing one time step at a time as in wavefront parallelism.}
In addition, \added[]{each of our threads also utilizes the prism technique to optimize spatial locality.}
This strategy particularly favors new manycore architectures, which tend to have increasingly large cache sizes.
\added[]{Modern parallel software packages that support LBM can be classified into two categories based upon their underlying data structures.}
One category adopts matrix-based memory alignment at the cell level (e.g., Palabos~\cite{latt2020palabos}, OpenLB~\cite{heuveline2007openlb}, HemeLB~\cite{Hemelb-paper}, HemoCell~\cite{Hemocell-paper}).
Since neighbors can be easily found through simple index arithmetic in this case, they are \added[]{more suitable for} simulations with dense geometries.
The other category adopts adjacency-list data structures \added[]{(e.g.,} Musubi~\cite{Musubi}, waLBerla~\cite{waLBerla-paper}, HARVEY~\cite{randles2013performance}).
They are \added[]{often used for} simulating \added[]{domains} with sparse and irregular geometries, \added[]{but their} cells require
\added[]{additional} memory for pointers, doubling the memory \added[]{consumption} in the worst case.
\added[id=fu]{In this study, we choose the widely-used and efficient matrix-based data structure in the LBM community, and select the state-of-the-art Palabos library as the baseline,
since Palabos
provides a broad modeling framework, supports applications with complex physics, and shows high computational performance.}
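The index arithmetic that makes matrix-based layouts attractive is straightforward. The C++ sketch below assumes a dense layout along Z first, then Y, then X, as described earlier; the exact strides in any given library are an assumption here:

```cpp
// Linear index of cell (iX, iY, iZ) in a dense lx x ly x lz box whose
// memory is laid out along Z first, then Y, then X.
long cellIndex(int iX, int iY, int iZ, int ly, int lz) {
    return ((long)iX * ly + iY) * (long)lz + iZ;
}

// A neighbor is reached by plain offset arithmetic -- no pointer chasing,
// unlike adjacency-list layouts.
long neighborIndex(long idx, int dX, int dY, int dZ, int ly, int lz) {
    return idx + ((long)dX * ly + dY) * (long)lz + dZ;
}
```

This is why matrix-based codes pay no per-cell pointer overhead for dense geometries, at the cost of storing every cell of the bounding box even when the fluid region is sparse.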
\added[id=fu]{\cite{perepelkina2018lrnla}
designs a locally recursive non-locally asynchronous (LRnLA) conefold LBM algorithm,
which uses recursive Z-curve arrays for data storage, and recursively subdivides the space-time dependency graph into polytopes to update lattice nodes.
By contrast, our work
uses a more directly accessible matrix-based data storage and has a regular memory access pattern.
Besides, our prism traversal
can operate on the lattice nodes independently or be integrated with merging two time steps,
while \cite{perepelkina2018lrnla} operates on the dependency graph.}
\section{Introduction}
Safety is critical for a multitude of modern robotic systems: from autonomous vehicles, to medical and assistive robots, to aerospace systems. When deployed in the real world, these systems face sources of uncertainty such as imperfect perception, approximate models of the world and the system, and unexpected disturbances. In order to achieve the high degrees of safety necessary for these robots to be deployed at scale, it is essential that controllers can not only guarantee safe behavior, but also provide robustness to these uncertainties.
In the field of control theory, safety is often defined as the forward invariance of a ``safe set'' \cite{ames2016control}. In this view, a closed-loop system is considered safe if all trajectories starting inside the safe set will remain in this set for all time. Several tools exist for generating controllers which can guarantee this forward-invariance property, including Control Barrier Functions (CBFs) \cite{ames_control_2019}, reachability-based controllers \cite{bansal2017hamilton}, and state-constrained Model-Predictive Controller (MPC) approaches \cite{hewing2020learning}.
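For intuition, in the simplest control-affine setting with a scalar input, enforcing the standard CBF inequality $L_f h + L_g h\, u + \alpha h \geq 0$ on top of a nominal controller reduces to a closed-form projection. The C++ sketch below uses names of our own choosing and is not the discrete-time stochastic controller developed in this paper:

```cpp
#include <algorithm>

// One-dimensional CBF safety filter: project a nominal input uNom onto
// the half-space  Lfh + Lgh*u + alpha*h >= 0  (the closed-form solution
// of the usual CBF quadratic program when the input is scalar).
double cbfFilter(double uNom, double Lfh, double Lgh, double h, double alpha) {
    if (Lgh == 0.0) return uNom;              // constraint does not involve u
    double uBound = -(Lfh + alpha * h) / Lgh; // boundary of the safe half-space
    return (Lgh > 0.0) ? std::max(uNom, uBound)
                       : std::min(uNom, uBound);
}
```

When the nominal input already satisfies the constraint, it passes through unchanged; otherwise it is clipped to the constraint boundary, which is the minimally invasive correction.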
Considerable advancements have been made in guaranteeing safety or stability in the presence of bounded uncertainties \cite{zhou1998essentials, blanchini2008set, aubin2011viability, sontag2008input,kolathaya2018input,alan2021safe}. Yet less attention has been paid to the case of unbounded uncertainties, where the aforementioned methods generally do not apply.
Obtaining robust safety in the case of unbounded disturbances is particularly important when considering systems subject to stochastic disturbances, since these disturbances are often modeled as continuous random variables with unbounded support (e.g., zero-mean, additive Gaussian noise); for such systems, it is impossible to give an absolute bound on the disturbance magnitude. Existing methods for unbounded, random disturbances fall into two categories. The first is to impose step-wise chance constraints on a given safety criterion (e.g., a state constraint in MPC \cite{hewing2020learning} or CBF-based controllers \cite{ahmadi_risk-averse_2022}), which in turn provide one-step safety guarantees. The other class of approaches \cite{kushner1967stochastic, prajna2004stochastic, santoyo_verification_2019, clark_control_2019, steinhardt2012finite} use Lyapunov or barrier function techniques to provide bounds on the safety probabilities for trajectories over a fixed time horizon; existing approaches, however, often assume the presence of a stabilizing controller, or model the system in continuous-time (i.e., assume the controller has, in effect, infinite bandwidth).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/quadruped_result_hero.pdf}
\caption{Safety of a simulated quadrupedal robot locomoting on a narrow path for a variety of controllers. \textbf{(Top Left)} The safe region that the quadruped is allowed to traverse. \textbf{(Bottom Left)} A system diagram depicting the states of the quadruped $\lmat x, y, \theta \end{bmatrix}^\top$. \textbf{(Top Right)} 50 trajectories for 3 controllers: one without any knowledge of safety ($\mb{k}_{\textrm{nom}}$), one with a standard safety filter \eqref{eq:dtcbfop}, and finally our method which accounts for stochasticity \eqref{eq:jed}. \textbf{(Bottom Right)} Plots of $h(\mb{x})$, a scalar value representing safety. The system is safe (i.e., in the green safe region) if $h(\mb{x}) \geq 0 $. }
\label{fig:hero_fig}
\vspace{-1.1cm}
\end{figure}
In order to best represent the uncertainty that might appear from sources such as discrete-time perception errors or sampled-data modeling errors, we focus our work on generating probabilistic bounds of safety for discrete-time (DT) stochastic systems. While MPC state constraints are generally enforced in discrete time, CBFs, normally applied in continuous time, have a discrete-time counterpart (DTCBFs) that were first introduced in \cite{agrawal2017discrete} and have gained popularity due to their compatibility with planners based on MPC \cite{zeng2021safety,liu2022iterative,wills2004barrier}, reinforcement learning \cite{cheng2019end}, and Markov decision processes \cite{ahmadi2019safe}.
In a stochastic setting, martingale-based techniques have been leveraged to establish safety guarantees \cite{santoyo_verification_2019,steinhardt2012finite}, yet these works have limited utility when analyzing the safety of discrete-time CBF-based controllers.
In particular, the ``c-martingale'' condition used in \cite{steinhardt2012finite} does not admit a multiplicative scaling of the barrier function, and therefore, at best, provides a weak worst-case safety bound for CBF-based controllers that grows linearly in time. The work of \cite{santoyo_verification_2019} (which builds upon \cite{kushner1967stochastic}, as does this paper) is largely focused on offline control synthesis to achieve a desired safety bound (as opposed to the online, optimization-based control studied in this work). Also, the method proposed in \cite{santoyo_verification_2019} can only generate discrete-time controllers for affine barriers, which severely limits its applicability to general barrier functions. Both papers also depend on sum-of-squares (SoS) programming \cite{papachristodoulou2005tutorial} for control synthesis/system verification, thereby requiring an offline step that scales poorly with the state dimension. The goal of this paper is to extend the results of \cite{kushner1967stochastic} in a different direction, and thereby enable the synthesis of online controllers that can be realized on robotic systems.
The main contribution of this paper is to apply martingale-based probability bounds in the context of discrete-time CBFs to guarantee robust safety under stochastic uncertainty. To this end, we leverage the bounds originally presented in the seminal work by Kushner \cite{kushner1967stochastic}.
Our first key contribution is the translation of these results from a Lyapunov setting to a CBF one. To this end, we present a new proof of the results in \cite{kushner1967stochastic} which we believe to be more complete and intuitive and which relates to the existing results of Input-to-State Safety (ISSf) for systems with bounded uncertainties \cite{kolathaya2018input}.
Furthermore, we present a method (based on Jensen's inequality) to account for the effects of process noise on a DTCBF-based controller. Finally, we apply this method to a variety of systems in simulation to analyze the tightness of our bound and demonstrate its utility. These experiments range from simple examples that illustrate the core mathematics---a single and double integrator and a pendulum---to a high fidelity simulation of a quadrupedal robot locomoting along a narrow path with the uncertainty representing the gap between the simplified and full-order dynamics models.
\section{Background}
In this section we provide a review of safety for discrete-time nonlinear systems via control barrier functions (CBFs), and review tools from probability theory useful for studying systems with stochastic disturbances.
\subsection{Safety of Discrete-time Systems}
Consider a discrete-time (DT) nonlinear system with dynamics given by:
\begin{align}
\mb{x}_{k+1} = \mb{F}(\mb{x}_k, \mb{u}_k), \quad \forall k \in \mathbb{N}, \label{eq:dt_dyn}
\end{align}
with state $\mb{x}_k \in \mathbb{R}^n$, input $\mb{u}_k \in \mathbb{R}^m$, and continuous dynamics $\mb{F}: \mathbb{R}^{n} \times \mathbb{R}^{m} \to \mathbb{R}^n$. A continuous state-feedback controller $\mb{k}:\mathbb{R}^n\to\mathbb{R}^m$ yields the DT closed-loop system:
\begin{align}
\mb{x}_{k+1} = \mb{F}(\mb{x}_k, \mb{k}(\mb{x}_k)), \quad \forall k \in \mathbb{N}. \label{eq:dt_autonomous}
\end{align}
We formalize the notion of safety for systems of this form using the concept of forward invariance:
\begin{definition}[Forward Invariance \& Safety \cite{blanchini2008set}]
A set $\mathcal{C}\subset \mathbb{R}^n $ is \textit{forward invariant} for the system \eqref{eq:dt_autonomous} if $\mb{x}_0 \in \mathcal{C}$ implies that $\mb{x}_k \in \mathcal{C}$ for all $k \in \mathbb{N}$. In this case, we call the system \eqref{eq:dt_autonomous} \textit{safe} with respect to the set $\mathcal{C}$.
\end{definition}
Discrete-time barrier functions (DTBFs) are a tool for guaranteeing the safety of discrete-time systems. Consider a set $\mathcal{C} \triangleq \left \{ \mb{x} \in \mathbb{R}^n \mid h(\mb{x}) \geq 0 \right \}$ expressed as the 0-superlevel set of a continuous function $h:\mathbb{R}^n\to\mathbb{R}$. We refer to such a function $h$ as a DTBF\footnote{ The state constraint $\mb{x}_k \in \mathcal{C}$, when expressed as $h(\mb{x}_k) \geq 0 $, is the special case of a DTBF with $\alpha = 0 $. } if it satisfies the following properties:
\begin{definition}[Discrete-Time Barrier Function (DTBF) \cite{agrawal2017discrete}]
Let $\mathcal{C}\subset \mathbb{R}^n $ be the $0$-superlevel set of a continuous function $h:\mathbb{R}^n \to \mathbb{R}$. The function $h$ is a \textit{discrete-time barrier function} (DTBF) for \eqref{eq:dt_autonomous} on $\mathcal{C}$ if there exists an $\alpha \in [0, 1] $ such that for all $\mb{x} \in \mathbb{R}^n$, we have that:
\begin{align}
h(\mb{F}(\mb{x}, \mb{k}(\mb{x}))) \geq \alpha h(\mb{x}). \label{eq:dtbf_constraint}
\end{align}
\end{definition}
\noindent This inequality mimics that of discrete-time Lyapunov functions \cite{bof2018lyapunov}, and similarly regulates the evolution of $h$ based on its previous value. DTBFs serve as a certificate of forward invariance as captured in the following theorem:
\begin{theorem}[\cite{agrawal2017discrete}]
Let $\mathcal{C}\subset \mathbb{R}^n $ be the $0$-superlevel set of a continuous function $h:\mathbb{R}^n \to \mathbb{R}$. If $h$ is a DTBF for \eqref{eq:dt_autonomous} on $\mathcal{C}$, then the system \eqref{eq:dt_autonomous} is safe with respect to the set $\mathcal{C}$.
\end{theorem}
\noindent Intuitively, the value of $h(\mb{x}_k)$ can only decay as fast as the geometric sequence $\alpha^kh(\mb{x}_0)$, which is lower-bounded by 0, thus ensuring the safety (i.e., forward invariance) of $\mathcal{C}$.
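This geometric decay can be checked numerically. The sketch below uses an illustrative scalar contraction and barrier (these particular choices of system, barrier, and rate are assumptions for illustration, not taken from the development above) and verifies that $h(\mb{x}_k)$ stays above the envelope $\alpha^k h(\mb{x}_0)$, and hence above zero:

```python
# Minimal sketch of the geometric lower bound h(x_k) >= alpha^k h(x_0) implied
# by a DTBF. The system, barrier, and rate below are illustrative assumptions.

def F_cl(x):
    """Closed-loop map x_{k+1} = 0.8 * x_k (contracts toward the origin)."""
    return 0.8 * x

def h(x):
    """Barrier whose 0-superlevel set is C = {x : |x| <= 1}."""
    return 1.0 - x * x

alpha = 0.5  # DTBF rate; h(F_cl(x)) >= alpha * h(x) holds along this trajectory

def trajectory(x0, K):
    xs = [x0]
    for _ in range(K):
        xs.append(F_cl(xs[-1]))
    return xs

x0, K = 0.9, 20
xs = trajectory(x0, K)
hs = [h(x) for x in xs]
# h(x_k) never drops below the geometric envelope alpha^k * h(x_0) >= 0.
envelope = [alpha ** k * hs[0] for k in range(K + 1)]
```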
Discrete-time control barrier functions (DTCBFs) provide a tool for constructively synthesizing controllers that yield closed-loop systems that possess a DTBF:
\begin{definition}[Discrete-Time Control Barrier Function (DTCBF) \cite{agrawal2017discrete}]
Let $\mathcal{C}\subset \mathbb{R}^n $ be the $0$-superlevel set of a continuous function $h:\mathbb{R}^n \to \mathbb{R}$. The function $h$ is a \textit{discrete-time control barrier function} (DTCBF) for \eqref{eq:dt_dyn} on $\mathcal{C}$ if there exists an $\alpha \in [0, 1] $ such that for each $\mb{x} \in \mathbb{R}^n$, there exists a $\mb{u}\in\mathbb{R}^m$ such that:
\begin{align}
h(\mb{F}(\mb{x}, \mb{u})) \geq \alpha h(\mb{x}). \label{eq:dtcbf_constraint}
\end{align}
\end{definition}
Given a DTCBF $h$ for \eqref{eq:dt_dyn} and a corresponding $\alpha\in[0,1]$, we define the point-wise set of control values:
\begin{equation}
\mathscr{K}_{\rm CBF}(\mb{x}) = \left\{ \mb{u}\in\mathbb{R}^m \mid h(\mb{F}(\mb{x},\mb{u})) \geq \alpha h(\mb{x})\right\}.
\end{equation}
This yields the following result:
\begin{theorem}[\cite{agrawal_constructive_2022}] \label{thm:dtcbf}
Let $\mathcal{C}\subset \mathbb{R}^n $ be the $0$-superlevel set of a continuous function $h:\mathbb{R}^n \to \mathbb{R}$. If $h$ is a DTCBF for \eqref{eq:dt_dyn} on $\mathcal{C}$, then the set $\mathscr{K}_{\rm CBF}(\mb{x})$ is non-empty for all $\mb{x}\in\mathbb{R}^n$, and for any continuous state-feedback controller $\mb{k}$ with $\mb{k}(\mb{x})\in \mathscr{K}_{\rm CBF}(\mb{x})$ for all $\mb{x}\in\mathbb{R}^n$, the function $h$ is a DTBF for \eqref{eq:dt_autonomous} on $\mathcal{C}$.
\end{theorem}
Given a continuous nominal controller $\mb{k}_{\rm nom}:\mathbb{R}^n\times \mathbb{N}\to\mathbb{R}^m$ and a DTCBF $h$ for \eqref{eq:dt_dyn} on $\mathcal{C}$, a controller $\mb{k}$ satisfying $\mb{k}(\mb{x},k)\in \mathscr{K}_{\rm CBF}(\mb{x})$ for all $\mb{x}\in\mathbb{R}^n$ and $k \in \mathbb{N}$ can be specified via the following optimization problem:
\begin{align}
\label{eq:dtcbfop}
\mb{k}(\mb{x},k) = \argmin_{\mb{u}\in\mathbb{R}^m}&\quad \Vert \mb{u}-\mb{k}_{\rm nom}(\mb{x},k) \Vert^2 \tag{DTCBF-OP}\\ \textrm{s.t.}& \quad h(\mb{F}(\mb{x},\mb{u})) \geq \alpha h(\mb{x}).\nonumber
\end{align}
We note that unlike the affine inequality constraint that arises with continuous-time CBFs \cite{ames_control_2019}, the DTCBF inequality constraint \eqref{eq:dtcbf_constraint} is not necessarily convex with respect to the input, preventing it from being integrated into a convex optimization-based controller. To solve this issue, it is often assumed that the function $h\circ\mb{F}:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$ is concave with respect to its second argument \cite{agrawal2017discrete, ahmadi2019safe, zeng2021safety}. This assumption was shown to be well motivated for concave $h$ \cite{taylor_safety_2022}.
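For an affine barrier the constraint is linear in the input, and \eqref{eq:dtcbfop} reduces to a clamp. The following sketch filters a nominal input that pushes toward the safe-set boundary; the scalar single-integrator model, barrier, and gains are illustrative assumptions:

```python
# Sketch of the DTCBF-OP safety filter for a scalar single integrator
# x_{k+1} = x_k + dt * u_k with affine barrier h(x) = 1 - x (safe set: x <= 1).
# With an affine h, the constraint h(F(x,u)) >= alpha * h(x) is linear in u,
# so the one-constraint QP has a closed form. All values are illustrative.

def h(x):
    return 1.0 - x

def safety_filter(x, u_nom, alpha=0.9, dt=0.1):
    # Constraint: 1 - x - dt*u >= alpha*(1 - x)  <=>  u <= (1 - alpha)*(1 - x)/dt
    u_max = (1.0 - alpha) * (1.0 - x) / dt
    return min(u_nom, u_max)  # closest point to u_nom in the feasible half-line

def simulate(x0, K, u_nom=2.0, alpha=0.9, dt=0.1):
    xs = [x0]
    for _ in range(K):
        u = safety_filter(xs[-1], u_nom, alpha, dt)
        xs.append(xs[-1] + dt * u)
    return xs

# The nominal input drives the state toward x = 1; the filter keeps h >= 0.
xs = simulate(x0=0.0, K=100)
```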
\subsection{Stochastic Preliminaries}
We now review tools from probability theory that will allow us to utilize information about the distribution of a stochastic disturbance signal in constructing a notion of stochastic safety and corresponding safety-critical controllers. We choose to provide this background material at the level necessary to understand our later constructions of stochastic safety and safety-critical controllers, but refer readers to \cite{grimmett2020probability} for a precise measure-theoretic presentation of the following concepts.
The key tool underlying our construction of a notion of stochastic safety is a nonnegative supermartingale, a specific type of expectation-governed random process:
\begin{definition}
Let $\mb{x}_k$ be a sequence of random variables that take values in $\mathbb{R}^n$, $W:\mathbb{R}^n\times\mathbb{N}\to\mathbb{R}$, and suppose that $\mathbb{E}\big[\lvert W(\mb{x}_k,k) \rvert\big] <\infty$ for $k\in \mathbb{N}$. The process $W_k\triangleq W(\mb{x}_k,k)$ is a supermartingale if:
\begin{equation}
\label{eq:supermartingale}
\mathbb{E}[ W_{k+1} \mid \mb{x}_{0:k}] \leq W_k~\textrm{almost~surely~for~all~}k\in\mathbb{N},
\end{equation}
where $\mb{x}_{0:k}$ indicates the random variables $\left\{\mb{x}_0, \mb{x}_1, \ldots, \mb{x}_k\right\}$. If, additionally, $W_k\geq 0$ for all $k\in\mathbb{N}$, $W_k$ is a nonnegative supermartingale. If the process is non-decreasing in expectation, the process $W_k$ is a submartingale. If the inequality \eqref{eq:supermartingale} holds with equality, the process $W_k$ is a martingale.
\end{definition}
An important result from martingale theory that we will use to develop probabilistic safety guarantees is \textit{Ville's inequality}, which allows us to bound the probability that a nonnegative supermartingale will rise above a certain value:
\begin{theorem}[Ville's Inequality \cite{ville1939etude}]
Let $W_k$ be a nonnegative supermartingale. Then for all $\lambda\in\mathbb{R}_{>0}$,
\begin{align}
\P \left\{ \sup_{k\in \mathbb{N}} W_k > \lambda \right\} \leq \frac{\mathbb{E}[W_0]}{\lambda}.
\label{eq:ville}
\end{align}
\end{theorem}
Intuitively, Ville's inequality can be compared with Markov's inequality for nonnegative random variables; since the process $W_k$ is nonincreasing in expectation, Ville's inequality allows us to control the probability the process instead moves upward above $\lambda$.
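Ville's inequality can be sanity-checked by Monte Carlo on a toy nonnegative martingale (every martingale is also a supermartingale); the multiplicative process and all parameters below are illustrative assumptions:

```python
import random

# Monte Carlo check of Ville's inequality on the toy nonnegative martingale
# W_{k+1} = W_k * Z_k with Z_k in {0.5, 1.5} equiprobably (so E[Z_k] = 1).
# The process and the parameters below are illustrative assumptions.

def max_over_horizon(K, rng):
    W, W_max = 1.0, 1.0
    for _ in range(K):
        W *= 0.5 if rng.random() < 0.5 else 1.5
        W_max = max(W_max, W)
    return W_max

rng = random.Random(0)
K, lam, trials = 50, 4.0, 20000
hits = sum(max_over_horizon(K, rng) > lam for _ in range(trials))
p_hat = hits / trials          # empirical P{ sup_k W_k > lambda }
ville_bound = 1.0 / lam        # E[W_0] / lambda with W_0 = 1
```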
Lastly, as we will see when synthesizing safety-critical controllers in the presence of stochastic disturbances, we will need to enforce conditions on the expectation of a DTCBF. In doing so, we will need to relate the expectation of the DTCBF $h(\mb{x}_{k+1})$ to the expectation of the state $\mb{x}_{k+1}$. This will be achieved using Jensen's inequality:
\begin{theorem}[Jensen's Inequality \cite{liao2018sharpening}]
\label{thm:jensen}
Consider a continuous function $h: \mathbb{R}^n \to \mathbb{R}$ and a random variable $\mb{x}$ that takes values in $\mathbb{R}^n$ with $\E[\Vert\mb{x}\Vert] < \infty$. We have that:
\begin{align}
\begin{cases}
\textrm{if $h$ is convex, }& \textrm{then } \E[h(\mb{x})] \geq h(\E[\mb{x}]) ,\\
\textrm{if $h$ is concave, } &\textrm{then } \E[h(\mb{x})] \leq h(\E[\mb{x}]).
\end{cases} \label{prop:jensens}
\end{align}
\end{theorem}
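The gap in Jensen's inequality is easy to exhibit numerically. For the concave barrier $h(x) = 1-x^2$ and zero-mean noise $d$, one has $\E[h(x+d)] = h(x) - \mathrm{Var}(d) \leq h(\E[x+d])$ exactly; the barrier and noise model below are illustrative assumptions:

```python
import random

# Numeric illustration of Jensen's inequality for a concave barrier
# h(x) = 1 - x^2 with zero-mean Gaussian noise d:
#   E[h(x + d)] = h(x) - Var(d) <= h(E[x + d]).
# The barrier and noise model are illustrative assumptions.

def h(x):
    return 1.0 - x * x

rng = random.Random(1)
x, sigma, n = 0.3, 0.2, 200000
samples = [x + rng.gauss(0.0, sigma) for _ in range(n)]
mean_of_h = sum(h(s) for s in samples) / n   # estimates E[h(x + d)]
h_of_mean = h(sum(samples) / n)              # estimates h(E[x + d]) = h(x)
jensen_gap = h_of_mean - mean_of_h           # should be close to sigma^2
```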
\section{Safety of Discrete-Time Stochastic Systems}\label{sec:main_thm}
In this section we provide one of our main results in the form of a bound on the probability that a system with stochastic disturbances will exit a given superlevel set of a DTBF over a finite time horizon.
Consider the following modification of the DT system \eqref{eq:dt_dyn}:
\begin{align}
\mb{x}_{k+1} = \mb{F}(\mb{x}_k, \mb{u}_k) + \mathbf{d}_k, \quad \forall k \in \mathbb{N}, \label{eq:dt_dyn_dist}
\end{align}
with $\mb{d}_k$ taking values in $\mathbb{R}^n$, and a closed-loop system:
\begin{align}
\mb{x}_{k+1} = \mb{F}(\mb{x}_k, \mb{k}(\mb{x}_k)) + \mb{d}_k, \quad \forall k \in \mathbb{N}.\label{eq:dt_autonomous_dist}
\end{align}
We assume that $\mb{x}_0$ is known and the disturbances $\mb{d}_k$ are a sequence of independent and identically distributed (with distribution $\mathcal{D}$) random variables\footnote{This implies the dynamics define a Markov process, i.e. $\E[h(\mb{F}(\mb{x}_k,\mb{u}_k) + \mb{d}_k)\mid\mb{x}_{0:k}] = \E[h(\mb{F}(\mb{x}_k,\mb{u}_k) + \mb{d}_k)\mid\mb{x}_k],$ since the state $\mb{x}_{k+1}$ at time $k+1$ only depends on the state $\mb{x}_k$, input $\mb{u}_k$, and disturbance $\mb{d}_k$ at time $k$.} with (potentially unbounded) support on $\mathbb{R}^n$, generating the random process $\mb{x}_{1:k}$. To study the safety of this system, we will use the following definition:
\begin{definition}[$K$-Step Exit Probability]
Let $h:\mathbb{R}^n\to\mathbb{R}$ be a continuous function. For any $K\in\mathbb{N}$, $\gamma\in\mathbb{R}_{\geq 0}$, and initial condition $\mb{x}_0\in\mathbb{R}^n$, the $K$-step exit probability of the closed-loop system \eqref{eq:dt_autonomous_dist} is given by:
\begin{align}
P_u(K,\gamma,\mb{x}_0) = \mathbb{P}\left\{ \min_{k \in \{0, \dots, K\}} h(\mb{x}_k) < - \gamma \right\}.
\end{align}
\end{definition}
\noindent This quantity describes the probability that the system will leave the $-\gamma$ superlevel set of $h$ within $K$ steps. This probability is directly related to the robust safety concept of Input-to-State Safety (ISSf) \cite{kolathaya2018input}, which reasons about the superlevel set of $h$ that is rendered safe in the presence of bounded disturbances. For the remainder of this work, we will omit the dependence of $P_u$ on $K$, $\gamma$, and $\mb{x}_0$ for notational simplicity.
\begin{remark}
\textup{The finite time aspect of $K$-step exit probabilities is critical since systems exposed to unbounded disturbances
will exit a bounded set with probability $P_u = 1$ over an infinite horizon \cite{steinhardt2012finite, chern_safe_2021}. Intuitively, this is because a sufficiently large sample will eventually be drawn from the tail of the distribution that forces the system out in a single step.}
\end{remark}
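This remark can be illustrated empirically: even a contractive linear system driven by Gaussian noise eventually leaves any bounded set, so its empirical exit fraction grows toward one as the horizon increases. The system and all parameters below are illustrative assumptions:

```python
import random

# Sketch of the remark: with unbounded (Gaussian) noise, the contractive system
# x_{k+1} = 0.9 x_k + d_k still leaves the bounded set {|x| <= 2} eventually,
# so only finite-horizon exit probabilities are meaningful. Parameters are
# illustrative assumptions.

def exit_fractions(horizons, trials, level=2.0, a=0.9, sigma=0.5, seed=2):
    rng = random.Random(seed)
    counts = {K: 0 for K in horizons}
    K_max = max(horizons)
    for _ in range(trials):
        x, exited_at = 0.0, None
        for k in range(1, K_max + 1):
            x = a * x + rng.gauss(0.0, sigma)
            if abs(x) > level:
                exited_at = k
                break
        for K in horizons:
            if exited_at is not None and exited_at <= K:
                counts[K] += 1
    return {K: counts[K] / trials for K in horizons}

# Empirical exit fractions grow with the horizon K.
fracs = exit_fractions(horizons=(10, 50, 250), trials=5000)
```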
Given this definition, we now provide one of our main results relating DTBFs to $K$-step exit probabilities. We note that this result is a reframing of the stochastic invariance theorem in \cite{kushner1967stochastic, santoyo_verification_2019}. Our reframing features three key components. First, we develop our results using the standard formulation of DTBFs covered in the background. Second, we produce a probability bound not only for $\mathcal{C}$ (defined as the 0-superlevel set of $h$, such that $\gamma = 0$), but for all non-positive superlevel sets of $h$ ($\gamma \geq 0$), a stochastic variant of ISSf \cite{kolathaya2018input}. Third, we present a complete proof of our result, with the goal of illuminating how to leverage tools from martingale theory to reason about the safety of discrete-time stochastic systems.
\begin{theorem}\label{thm:kushner_main}
Let $h:\mathbb{R}^n \to \mathbb{R}$ be a continuous, upper-bounded function with upper bound $M\in\mathbb{R}_{>0}$. Suppose there exists an $\alpha\in (0,1)$ and a\footnote{The original presentation of Theorem \ref{thm:kushner_main} in \cite{kushner1967stochastic} considers variables $\delta_k$ for $k \in \{0, \dots, K\}$, which are known \textit{a priori}. In most practical applications, one assumes a lower bound that holds for all $\delta_k$, motivating our use of a constant $\delta$. Moreover, the use of a constant $\delta$ significantly clarifies the proof.} $\delta \leq M(1-\alpha)$ such that the closed-loop system \eqref{eq:dt_autonomous_dist} satisfies:
\begin{align}
\mathbb{E}[~h(\mb{F}(\mb{x}, \mb{k}(\mb{x})) + \mb{d}) \mid \mb{x}~] \geq \alpha h(\mb{x})+ \delta, \label{eq:kushner_constraint}
\end{align}
for all $\mb{x}\in \mathbb{R}^n$, with $\mb{d}\sim\mathcal{D}$. For any $K \in \mathbb{N}$ and $\gamma \in\mathbb{R}_{\geq 0}$, if $\delta < -\gamma(1 - \alpha)$, we have that:
\begin{align}
\label{eq:probup}
P_u & \leq \left( \frac{M - h(\mb{x}_0 )}{M + \gamma } \right)\alpha^K + \frac{M (1 - \alpha) - \delta }{M + \gamma}\sum_{i =1}^K\alpha^{i-1}.
\end{align}
Alternatively if $\delta \geq -\gamma (1 - \alpha) $, then:
\begin{align}
\label{eq:problo}
P_u\leq 1 - \frac{h(\mb{x}_0) + \gamma }{M+\gamma}\left( \frac{M\alpha +\gamma + \delta}{M+ \gamma} \right)^K.
\end{align}
\end{theorem}
\begin{remark}
\textup{The upper bound $\delta \leq M(1-\alpha)$ is relatively non-restrictive: not only is $\delta$ typically negative, but the bound must hold so that, in expectation, $h(\mb{x}_{k+1})$ cannot rise above the upper bound $M$ on $h$. The switching condition between \eqref{eq:probup} and \eqref{eq:problo} of $\delta = -\gamma(1-\alpha)$ corresponds to whether, in expectation, the one-step evolution of the system remains in the set $\mathcal{C}_\gamma = \{ \mb{x} \in \mathbb{R}^n \mid h(\mb{x}) \geq - \gamma \}$ when it begins on the boundary of $\mathcal{C}_\gamma$.}
\end{remark}
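The two branches of the bound are straightforward to evaluate numerically. The sketch below implements \eqref{eq:probup} and \eqref{eq:problo} exactly as stated, switching on the sign of $\delta + \gamma(1-\alpha)$; the numeric values used in the example call are illustrative assumptions:

```python
# Sketch implementing the two K-step exit-probability bounds of the theorem,
# switching on whether delta < -gamma * (1 - alpha). The example values used
# below (M, h0, alpha, delta, gamma, K) are illustrative assumptions.

def exit_bound(M, h0, alpha, delta, gamma, K):
    """Evaluate the theorem's upper bound on P_u."""
    assert 0.0 < alpha < 1.0 and delta <= M * (1.0 - alpha) and gamma >= 0.0
    if delta < -gamma * (1.0 - alpha):
        geo = sum(alpha ** (i - 1) for i in range(1, K + 1))
        return ((M - h0) / (M + gamma)) * alpha ** K \
            + (M * (1.0 - alpha) - delta) / (M + gamma) * geo
    return 1.0 - (h0 + gamma) / (M + gamma) \
        * ((M * alpha + gamma + delta) / (M + gamma)) ** K

# Example: h bounded above by M = 1, starting at h(x_0) = 0.5, second branch.
p1 = exit_bound(M=1.0, h0=0.5, alpha=0.9, delta=0.0, gamma=0.0, K=10)
```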
To make our argument clear at a high level, we begin with a short proof sketch before proceeding in detail.
\textit{Proof sketch: } The key tool in proving Theorem \ref{thm:kushner_main} is Ville's inequality \eqref{eq:ville}. Since $h(\mb{x}_k)$, in general, is not a super- or submartingale, we will first construct a nonnegative supermartingale, $W_k \triangleq W(\mb{x}_k, k)$, by scaling and shifting $h(\mb{x}_k)$. We can then apply Ville's inequality \eqref{eq:ville} to bound the probability of $W_k$ going above any $\lambda > 0$. Next we find a particular value of $\lambda$, denoted $\lambda^*$, such that:
\begin{equation}
\max_{k \in \{0, \ldots, K\}} W_k \leq \lambda^* \implies \min_{k \in \{0, \ldots, K\}} h(\mb{x}_k) \geq -\gamma.
\end{equation}
Intuitively, this means that any sequence $W_k$ that remains below $\lambda^*$ ensures that the corresponding sequence $h(\mb{x}_k)$ remains (safe) above $-\gamma$. This allows us to bound the $K$-step exit probability $P_u$ of our original process $h(\mb{x}_k)$ with the probability that $W_k$ will rise above $\lambda^*$:
\begin{align}
P_u \leq \mathbb{P}\left\{\max_{k \in \{0, \ldots, K\}} W_k > \lambda^*\right\} \leq \frac{\mathbb{E}[W_0]}{\lambda^*} = \frac{W_0}{\lambda^*},
\end{align}
where the last equality will follow as it is assumed $\mb{x}_0$ is known \textit{a priori}. Particular choices of $W$ and $\lambda^*$ will yield the bounds stated in the theorem, completing the proof.
\subsection{Proof: Constructing a Nonnegative Supermartingale}
We will begin by constructing a nonnegative supermartingale, allowing us to use Ville's inequality. To construct this supermartingale, we first note that by rearranging terms in the inequality in \eqref{eq:kushner_constraint}, we can see the process $M - h(\mb{x}_k)$ resembles a supermartingale:
\begin{align}
\E[M - h(\mb{x}_{k+1})\mid \mb{x}_k] &\leq \alpha (M - h(\mb{x}_k)) + M(1-\alpha) -\delta,\nonumber\\
& \triangleq \alpha(M - h(\mb{x}_k)) + \varphi, \label{eq:phidef}
\end{align}
but with a scaling $\alpha$ and additive term $\varphi \triangleq M(1-\alpha) - \delta$ that makes $\mathbb{E}\left[M - h(\mb{x}_{k+1}) \mid \mb{x}_k\right] \nleq M - h(\mb{x}_k)$ in general. To remove the effects of $\alpha$ and $\varphi$, consider the function $W: \mathbb{R}^n \times \mathbb{N} \to \mathbb{R}$ defined as:
\begin{align}
W(\mb{x}_k, k) & \triangleq \underbrace{(M - h(\mb{x}_k))\theta^k}_\textrm{negate and scale} -\underbrace{\varphi\sum_{i=1}^{k} \theta^{i}}_{\textrm{cancel $\varphi$}} + \underbrace{\varphi \sum_{i=1}^{K} \theta^{i}}_{\textrm{ensure $W \geq 0$}},
\label{eq:W_expanded}
\end{align}
where $\theta \in [1, \infty)$ will be used to cancel the effect of $\alpha$, but is left as a free variable that we will later use to tighten our bound on $P_u$. Denoting $W_k \triangleq W(\mb{x}_k,k)$, we now verify $W_k$ is a nonnegative supermartingale. We first show that $W_k \geq 0$ for all $k \in \{0, \dots, K\}$. Combining the two sums in \eqref{eq:W_expanded} yields:
\begin{align}
\label{eq:compactWk}
W_k = (M - h(\mb{x}_k))\theta^k + \varphi \sum_{i=k+1}^K \theta^i,
\end{align}
which is nonnegative as $h(\mb{x}) \leq M$ for all $\mb{x} \in \mathbb{R}^n$, $\theta \geq 1$, and $\varphi \geq 0$ since $\delta \leq M(1-\alpha)$ by assumption. We now show that $W_k$ satisfies the supermartingale inequality \eqref{eq:supermartingale}
\begin{align}
\textcolor{black}{\mathbb{E}}&\textcolor{black}{\left[W_{k+1} \mid \mb{x}_{0:k} \right] = \E[W_{k+1} \mid \mb{x}_k ],} \label{eq:markov}\\ &= (M-\mathbb{E}[h(\mb{x}_{k+1})\mid \mb{x}_k])\theta^{k+1} + \varphi \sum_{i=k+2}^K \theta^i, \label{eq:wkp1def}\\
&\leq (M - \alpha h(\mb{x}_k) - \delta)\theta^{k+1} + \varphi \sum_{i=k+2}^K \theta^i,\label{eq:w_barrier_cond}\\
&= \alpha \theta (M - h(\mb{x}_k))\theta^k + \theta^{k+1}\underbrace{((1-\alpha) M - \delta)}_{=\varphi} + \varphi \sum_{i=k+2}^K \theta^i,\nonumber\\
&= \underbrace{\alpha \theta}_{\text{req.} \leq 1} (M - h(\mb{x}_k)) \theta^k+ \varphi \sum_{i=k+1}^K \theta^i \leq W_k \label{eq:w_theta_const},
\end{align}
where \eqref{eq:markov} is due to the Markovian nature of system \eqref{eq:dt_autonomous_dist}, \eqref{eq:wkp1def} comes from using \eqref{eq:compactWk} to write $W_{k+1}$, \eqref{eq:w_barrier_cond} follows from \eqref{eq:kushner_constraint}, and \eqref{eq:w_theta_const} follows from the preceding line using the definition of $\varphi$ and assuming the further requirement that $\theta \leq \frac{1}{\alpha}$. Thus, we have shown that $W_k$ is a nonnegative supermartingale.
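The construction of $W_k$ can be checked numerically when the disturbance distribution admits exact conditional expectations. The sketch below uses a two-point disturbance so $\E[\,\cdot\mid\mb{x}_k]$ is a finite average, and verifies both nonnegativity and the supermartingale inequality over a state grid; the system, barrier, and constants are illustrative assumptions chosen to satisfy the hypotheses:

```python
# Numeric check of the supermartingale construction W_k for the toy system
# x_{k+1} = 0.5 x_k + d_k with d_k in {-0.1, +0.1} equiprobably and barrier
# h(x) = 1 - x^2 (so M = 1). The constants alpha, delta, theta below satisfy
# the theorem's hypotheses; all values are illustrative assumptions.

M, alpha, delta, K = 1.0, 0.5, 0.4, 10
varphi = M * (1.0 - alpha) - delta   # = 0.1 >= 0, as required
theta = 1.0 / alpha                  # largest admissible scaling (theta = 2)

def h(x):
    return 1.0 - x * x

def W(x, k):
    # W_k = (M - h(x_k)) theta^k + varphi * sum_{i=k+1}^K theta^i
    return (M - h(x)) * theta ** k \
        + varphi * sum(theta ** i for i in range(k + 1, K + 1))

def E_W_next(x, k):
    # Exact conditional expectation over the two-point disturbance.
    return 0.5 * (W(0.5 * x - 0.1, k + 1) + W(0.5 * x + 0.1, k + 1))

grid = [i / 10.0 for i in range(-20, 21)]   # states x in [-2, 2]
violations = [(x, k) for x in grid for k in range(K)
              if E_W_next(x, k) > W(x, k) + 1e-9]
```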
\subsection{Proof: Bounding the Exit Probability via Ville's Inequality}
Since $W_k$ is a nonnegative supermartingale, we can apply Ville's inequality to establish:
\begin{align}
\label{eq:villeB}
\P \left\{ \max_{k \in \{ 0, \dots, K\} } W_k > \lambda \right\} \leq \frac{\mathbb{E}[W_0] }{\lambda} = \frac{W_0}{\lambda}.
\end{align}
for all $\lambda\in\mathbb{R}_{>0}$.
To relate this bound to the $K$-step exit probability $P_u$, we seek a value of $\lambda$, denoted $\lambda^*$, such that:
\begin{equation}
\max_{k \in \{0, \ldots, K\}} W_k \leq \lambda^* \implies \min_{k \in \{0, \ldots, K\}} h(\mb{x}_k) \geq -\gamma.
\end{equation}
In short, we will choose a value of $\lambda^*$ such that all trajectories of $W_k$ that remain below $\lambda^*$ must also have $h_k \geq -\gamma$. To this end, we use the geometric series identity\footnote{At $\theta =1 $, the fraction $\frac{1 - \theta^k}{1 - \theta}$ is not well defined. However, the proof can be carried out using the summation notation. In this case $\lambda^* = M + \gamma$, and \eqref{eq:villeB} yields $P_u \leq 1 - \frac{h(\mb{x}_0) + \gamma - \varphi K }{M + \gamma}$. } $\sum_{i=1}^k \theta^{i-1} =\frac{1 - \theta^k}{1 - \theta}$ to rewrite $W_k$ as:
\begin{align}
\label{eqn:wkgeomid}
W_k &= (M - h(\mb{x}_k))\theta^k + \varphi \theta \frac{\theta^K - \theta^k}{\theta-1}.
\end{align}
Let us define:
\begin{align}
\lambda_k = \left( \gamma + M - \frac{\varphi \theta}{\theta -1}\right)\theta^{k} + \frac{\varphi \theta}{\theta-1} \theta^K > 0,
\end{align}
which, intuitively, applies the same time-varying scaling and shift to a constant, $-\gamma$, that was applied to $h(\mb{x}_k)$ to yield $W_k$ \eqref{eqn:wkgeomid}. Let us choose:
\begin{align}
\lambda^* \triangleq \min_{k \in \{0, \ldots, K\}} \lambda_k.
\end{align}
Since we assume $\max_{k \in \{0, \ldots, K\}} W_k \leq \lambda^*,$ we can write, for all $k \in \{0, \ldots, K\}$:
\begin{align}
0 &\geq W_k - \lambda^* \geq W_k - \lambda_k = (-\gamma - h_k) \theta^k.
\end{align}
Since $\theta > 1,$ this implies that $-\gamma - h_k \leq 0$ for all $k \in \{0, \ldots, K\}$, and thus $\min_{k \in \{0, \ldots, K\}} h(\mb{x}_k) \geq -\gamma,$ as needed.
\subsection{Proof: Choosing $\theta$ to Minimize the Ville's Bound}
Since our supermartingale $W_k$ includes a free parameter $\theta \in (1, \frac{1}{\alpha}]$, we will choose the value of $\theta$ in this interval which provides the tightest bound on $P_u$.
\textbf{Case 1: } Consider the first case where $\delta < -\gamma(1 - \alpha)$, implying $\varphi > (M + \gamma) (1 - \alpha) $. In this case $\frac{1}{\alpha} < \frac{M+\gamma}{M + \gamma - \varphi}$ and thus all of the allowable choices of $\theta \in (1, \frac{1}{\alpha}]$ are such that $\theta < \frac{M+ \gamma}{M + \gamma - \varphi}$. Denoting $k^*$ such that $\lambda^* = \lambda_{k^*}$, we have that:
\begin{align}
\lambda^* &= \underbrace{\left( \gamma + M - \frac{\varphi \theta}{\theta -1}\right)}_{\leq 0 }\theta^{k^*} + \frac{\varphi \theta}{\theta -1} \theta^{K}. \label{eq:case1}
\end{align}
\noindent Thus, we know $\min_{k \in \{0, \dots, K\}} \lambda_k $ occurs at $k^* = K$ and so:
\begin{align}
P_u & \leq \frac{W_0}{\lambda^*} = \frac{M - h(\mb{x}_0) + \frac{\varphi\theta}{\theta -1}\left( \theta^K - 1\right) }{(M + \gamma)\theta^K}.
\end{align}
Since this bound is a decreasing function of $\theta$ (as shown in Lemma \ref{lm:decreasing} in Appendix \ref{apdx:kushner_lemmas}), we choose the largest allowable value $\theta^* = \frac{1}{\alpha}$ to achieve the bound:
\begin{align}
P_u & \leq \frac{W_0}{\lambda^* } = \frac{M - h(\mb{x}_0) + \frac{\varphi }{1 - \alpha}\left( \alpha^{-K} -1 \right) }{ (M + \gamma)\alpha^{-K} }, \\
& = \left( \frac{M - h(\mb{x}_0) }{M + \gamma }\right) \alpha^K + \frac{M (1-\alpha) - \delta}{M + \gamma}\sum_{i=1}^K \alpha^{i-1},
\end{align}
where we again use the geometric series identity.
\textbf{Case 2: } Now consider the second case where $\delta \geq -\gamma(1 - \alpha)$, so $\varphi \leq (M + \gamma) (1- \alpha) $, which implies that the set $[\frac{M+ \gamma}{M + \gamma - \varphi}, \frac{1}{\alpha}]$ is nonempty. Choosing a value of $\theta$ in this set ensures that:
\begin{align}
\lambda^* &= \underbrace{\left( \gamma + M - \frac{\varphi \theta}{\theta -1}\right)\theta^{k^*}}_{\geq 0 } + \frac{\varphi \theta}{\theta -1} \theta^{K}.
\end{align}
\noindent Thus $\min_{k \in \{0, \dots, K\}} \lambda_k$ occurs at $k^* = 0 $ and:
\begin{align}
P_u & \leq \frac{W_0}{\lambda^* } = \frac{(M - h(\mb{x}_0)) + \frac{\varphi\theta}{\theta -1}\left( \theta^K - 1\right) }{(M + \gamma) + \frac{\varphi \theta}{\theta - 1}\left( \theta^K - 1\right)},\\
& = 1 - \frac{h(\mb{x}_0) + \gamma }{ M + \gamma + \frac{\varphi \theta }{\theta - 1}\left( \theta^K -1 \right) }.
\end{align}
Since this bound
is increasing in $\theta$ (as shown in Lemma \ref{lm:increasing} in Appendix \ref{apdx:kushner_lemmas}), we choose
$\theta^* = \frac{M+ \gamma}{M + \gamma - \varphi}$ to achieve the bound:
\begin{align}
P_u \leq 1 - \left( \frac{h(\mb{x}_0) + \gamma}{M + \gamma}\right)\left( \frac{M\alpha + \gamma + \delta}{M+ \gamma}\right)^K.
\end{align}
If, alternatively, we choose $\theta \in \left(1, \frac{M + \gamma }{M + \gamma - \varphi }\right] $, then the inequality in \eqref{eq:case1} holds, $k^* = K$, and the bound is decreasing in $\theta$ as in Case 1. Evaluating this bound for the minimizing value $\theta^* = \frac{M + \gamma}{M + \gamma - \varphi }$ again yields:
\begin{align}
P_u &\leq \frac{M - h(\mb{x}_0) + (M + \gamma) ( \theta^K - 1) }{(M + \gamma) \theta^K},\\
& = 1 - \left(\frac{h(\mb{x}_0) + \gamma}{M + \gamma}\right)\left( \frac{M\alpha + \gamma + \delta}{M + \gamma }\right)^K.
\end{align}
\hfill $\blacksquare$
\section{Practical Considerations for Enforcing Stochastic DTCBFs}
Theorem \ref{thm:kushner_main} allows us to reason about the finite-time safety of systems governed by DTBFs. To utilize the results of this theorem in a control setting, we aim to use DTCBFs to develop control methods which enforce the expectation condition:
\begin{align}
\E[h(\mb{F}(\mb{x}_k, \mb{u}_k)+ \mb{d}_k) \mid \mb{x}_k] & \geq \alpha h(\mb{x}_k). \label{eq:stochastic_dtcbf_constraint}
\end{align}
Like the \ref{eq:dtcbfop} controller, we seek to enforce this constraint using an optimization-based controller that enforces safety while achieving pointwise minimal deviation from a nominal controller $\mb{k}_\textrm{nom}$ in the form of an \underline{E}xpectation-based \underline{D}TCBF \eqref{eq:dtcbf_op} Controller:
\begin{align}
\mb{k}_{\textrm{ED}}(\mb{x}_k) = \argmin_{\mb{u} \in \mathbb{R}^m } \quad & \Vert \mb{u} - \mb{k}_{\textrm{nom}}(\mb{x}_k,k) \Vert^2 \label{eq:dtcbf_op} \tag{ED}\\
\textrm{s.t. } \quad & \E [h(\mb{F}(\mb{x}_k, \mb{u})+ \mb{d}_k) \mid \mb{x}_k ] \geq \alpha h(\mb{x}_k). \nonumber
\end{align}
The expectation in \eqref{eq:dtcbf_op} adds complexity that is not generally considered in the application of deterministic DTCBFs. More commonly, CBF-based controllers solve ``certainty-equivalent'' optimization programs, such as the following \underline{C}ertainty-\underline{E}quivalent \underline{D}TCBF \eqref{eq:CE_dtcbf} controller, which replaces the expected barrier value $\E[h(\mathbf{x}_{k+1})\mid\mb{x}_k]$ with the barrier evaluated at the expected next state, $h(\E[\mb{x}_{k+1}\mid\mb{x}_k])$:
\begin{align}
\mb{k}_\text{CED}(\mb{x}_k) = \argmin_{\mb{u} \in \mathbb{R}^m } \quad & \Vert \mb{u} - \mb{k}_{\textrm{nom}}(\mb{x}_k,k) \Vert^2 \label{eq:CE_dtcbf} \tag{CED}\\
\textrm{s.t. } \quad & h(\mb{F}(\mb{x}_k, \mb{u})+ \E[\mb{d}_k]) \geq \alpha h(\mb{x}_k). \nonumber
\end{align}
Here we use that $\E[\mb{F}(\mb{x}_k,\mb{u}_k)\mid\mb{x}_k] = \mb{F}(\mb{x}_k,\mb{u}_k)$ and $\E[\mb{d}_k\mid\mb{x}_k] = \E[\mb{d}_k]$. This constraint is often easier to evaluate than \eqref{eq:stochastic_dtcbf_constraint} since it allows control actions to be selected with respect to the expected disturbance $\E[\mb{d}_k]$ without needing to model the disturbance distribution $\mathcal{D}$. If the disturbance is zero-mean, then this form of the constraint is implicitly enforced by DTCBF controllers such as those presented in \cite{agrawal2017discrete, zeng2021safety}. However, when replacing \ref{eq:dtcbf_op} with \ref{eq:CE_dtcbf}, it is important to consider the effect of Jensen's inequality in Theorem \ref{thm:jensen}.
If the ``certainty-equivalent'' constraint in \ref{eq:CE_dtcbf} is strictly concave\footnote{Written in the standard form $\alpha h(\mb{x}_k) - h(\mb{x}_k + \mb{u}) \leq 0$, the constraint $h(\mb{x}_k + \mb{u}) \geq \alpha h(\mb{x}_k)$ is concave in $\mb{u}$ when $h$ is convex and convex in $\mb{u}$ when $h$ is concave. }, then we can apply the results of Theorem \ref{thm:kushner_main} directly, since Jensen's inequality tightens the constraint and ensures satisfaction of the expectation condition \eqref{eq:kushner_constraint}. Unfortunately, such a controller requires solving a non-convex optimization program, which can be impractical. If, instead, the constraint is convex, then \ref{eq:CE_dtcbf} is a convex program, but it does not necessarily enforce the expectation condition \eqref{eq:kushner_constraint} in Theorem \ref{thm:kushner_main} due to the gap introduced by Jensen's inequality.
In order to apply the results of Theorem \ref{thm:kushner_main} to controllers of the form \eqref{eq:CE_dtcbf} with convex constraints, we must first provide a bound on the gap introduced by Jensen's inequality. In particular, for any concave function $h: \mathbb{R}^n \to \mathbb{R} $ and random variable $\mb{d} \sim \mathcal{D}$, we seek to determine a value $\psi \in \mathbb{R}_{\geq 0}$ such that, for all $\mb{x} \in \mathbb{R}^n$ and $\mb{u}\in\mathbb{R}^m$:
\begin{align}
\E[h(\mb{F}(\mb{x}, \mb{u}) +\mb{d}) \mid \mb{x}] \geq h(\mb{F}(\mb{x}, \mb{u}) + \E[\mb{d}]) - \psi, \label{eq:jensen_gap}
\end{align}
\noindent thus quantifying the gap introduced by Jensen's inequality.
A large body of work has studied methods for finding the smallest possible $\psi$ that satisfies \eqref{eq:jensen_gap}. Here we adapt a result in \cite{becker2012variance} to achieve a relatively loose, but straightforward bound:
\begin{lemma}\label{lm:jensen_gap}
Consider a twice-continuously differentiable, concave function $h: \mathbb{R}^n\to \mathbb{R}$ with $\sup_{\mb{x} \in \mathbb{R}^n} \Vert \nabla^2 h(\mb{x})\Vert_2 \leq \lambda_{\max} $ for some $\lambda_{\max}\in\mathbb{R}_{\geq0}$, and a random variable $\mb{x}$ that takes values in $\mathbb{R}^n$ with $\E[\Vert \mb{x} \Vert] < \infty$ and $\Vert \textup{cov}(\mb{x}) \Vert < \infty$. Then we have that:
\begin{align}
\E[h(\mb{x})] \geq h(\E[\mb{x}]) - \frac{\lambda_{\max}}{2} \textup{tr}(\textup{cov}(\mb{x})).
\end{align}
\end{lemma}
\noindent The proof is included in Appendix \ref{pf:jensen_gap}. We note that although this value of $\psi= \frac{\lambda_{\textup{max}}}{2}\textrm{tr}(\textrm{cov}(\mb{x}))$ is easy to interpret, tighter bounds exist which have less restrictive assumptions than a globally bounded Hessian \cite{liao2018sharpening}. We also note that one could use sampling-based methods to approximately satisfy the constraint \eqref{eq:jensen_gap} by estimating $\psi$ empirically.
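To make the empirical route concrete, the following sketch (an illustration of ours, not part of the analysis above) estimates the Jensen gap by Monte Carlo for the scalar barrier $h(x) = 1 - x^2$ with a Gaussian disturbance; for a quadratic barrier the second-order Taylor expansion is exact, so the Lemma's bound $\psi = \frac{\lambda_{\max}}{2}\textrm{tr}(\textrm{cov}(d)) = \sigma^2$ should be recovered up to sampling error.

```python
import random

def h(x):
    # Concave barrier with constant curvature: |h''| = 2, so lambda_max = 2.
    return 1.0 - x * x

def jensen_gap_mc(x, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of h(x + E[d]) - E[h(x + d)] for d ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    mean_h = sum(h(x + rng.gauss(0.0, sigma)) for _ in range(n)) / n
    return h(x) - mean_h        # E[d] = 0, so h(x + E[d]) = h(x)

sigma = 0.1
psi_bound = 0.5 * 2.0 * sigma**2     # Lemma bound: (lambda_max / 2) tr(cov(d))
gap_mc = jensen_gap_mc(x=0.3, sigma=sigma)
```

For non-quadratic concave barriers the empirical gap will generally fall below the Hessian-based bound, which is what makes the sampling route less conservative.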
Next we present a controller which combines the mean-based control of the ``certainty equivalent'' \eqref{eq:CE_dtcbf} while also accounting for Jensen's inequality. This \underline{J}ensen-\underline{E}nhanced \underline{D}TCBF Controller (JED) includes an additional control parameter $c_\textup{J} \geq 0 $ to account for Jensen's inequality:
\begin{align}
\mb{k}_\textup{JED}(\mb{x}_k) = \argmin_{\mb{u} \in \mathbb{R}^m } \quad & \Vert \mb{u} - \mb{k}_{\textrm{nom}}(\mb{x}_k,k) \Vert^2 \label{eq:jed} \tag{JED}\\
\textrm{s.t. } \quad & h(\mb{F}(\mb{x}_k, \mb{u})+ \E[\mb{d}_k]) - c_\textup{J} \geq \alpha h(\mb{x}_k). \nonumber
\end{align}
Given this controller and a method for bounding $\psi$, we can now apply Theorem \ref{thm:kushner_main} while accounting for (or analyzing) the effects of Jensen's inequality on the \eqref{eq:jed} controller:
\begin{theorem}\label{thm:kushner_jensen}
Consider the system \eqref{eq:dt_autonomous_dist} and let $h:\mathbb{R}^n \to \mathbb{R}$ be a twice-continuously differentiable, concave function such that $\sup_{\mb{x} \in \mathbb{R}^n} h(\mb{x}) \leq M$ for $M\in\mathbb{R}_{>0}$ and $\sup_{\mb{x} \in \mathbb{R}^n} \Vert \nabla^2 h(\mb{x}) \Vert_2 \leq \lambda_{\max} $ for $\lambda_{\max}\in\mathbb{R}_{\geq0}$. Suppose there exists an $\alpha \in (0,1)$ and a $c_\textup{J} \in [0, \frac{\lambda_\textup{max}}{2}\textup{tr(cov}(\mb{d}))+ M(1-\alpha) ]$ such that:
\begin{align}
h(\mb{F}(\mb{x}, \mb{k}(\mb{x})) + \mathbb{E}[\mb{d}] ) - c_\textup{J} \geq \alpha h(\mb{x}), \label{eq:jensen_dtcbf}
\end{align}
for all $\mb{x} \in \mathbb{R}^n$ with $\mb{d}\sim\mathcal{D}$. Then we have that:
\begin{equation}
\mathbb{E}[~h(\mb{F}(\mb{x}, \mb{k}(\mb{x})) + \mb{d}) \mid \mb{x}~] \geq \alpha h(\mb{x})+ \delta,
\end{equation}
for all $\mb{x}\in\mathbb{R}^n$ with $\mb{d}\sim\mathcal{D}$ and $\delta = c_\textup{J} - \frac{\lambda_\textup{max}}{2}\textup{tr(cov}(\mb{d}))$.
\end{theorem}
\begin{proof}
Given $\mb{x} \in \mathbb{R}^n $, Lemma \ref{lm:jensen_gap} ensures that:
\begin{align}
0 &\leq h(\mb{F}(\mb{x}, \mb{k}(\mb{x})) + \mathbb{E}[\mb{d}] ) - c_\textup{J} - \alpha h(\mb{x})\\
& \leq \E[h(\mb{F}(\mb{x}, \mb{k}(\mb{x})) + \mb{d} )\mid\mb{x}] + \psi - c_\textup{J} -\alpha h(\mb{x})
\end{align}
where $\psi = \frac{\lambda_\textup{max}}{2}\textup{tr(cov}(\mb{d})) $. Letting $\delta = c_\textup{J} -\frac{\lambda_\textup{max}}{2}\textup{tr(cov}(\mb{d}))$ yields the desired result.
\end{proof}
\section{Practical Examples}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/1D_sys_smaller.pdf}
\caption{ The dashed lines represent the theoretical probability bounds for the system as in Theorem \ref{thm:kushner_main}. The solid lines represent the Monte Carlo (MC) estimated $P_u$ across 500 experiments. }
\label{fig:steinhardt_comparison}
\vspace{-1cm}
\end{figure}
In this section we consider a variety of simulation examples that highlight the key features of our approach.
\subsection{Linear 1D System}
Here we analyze our bounds by considering the case of unbounded i.i.d. disturbances $d_k \sim \mathcal{N}(0,1)$ for the one-dimensional system ($x, u \in \mathbb{R}$) and safe set:
\begin{align}
x_{k+1} = x_k + 2 + u_k + \sigma d_k \textrm{, } \;\mathcal{C} = \{ x \mid 1-x^2 \geq 0 \}.
\end{align}
The Jensen gap for this system and DTCBF is bounded by $\psi= \sigma^2$. For simulation, we employ the \ref{eq:jed} controller with $c_\textup{J}= \sigma^2$, $\alpha =1 - \sigma^2$, and nominal controller $\mb{k}_{\textrm{nom}}(\mb{x}_k, k) = 0 $. Figure \ref{fig:steinhardt_comparison} shows the results of 500 one-second-long trials run for a range of $\sigma \in [0, 0.2] $ and also displays how the bound on $P_u$ decreases as $\gamma$ increases.
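For this scalar example the \ref{eq:jed} program can be solved in closed form, since the constraint reduces to an interval in $u$. The sketch below (our illustrative reconstruction; the specific values of $\sigma$ and horizon are ours) simulates the closed loop and compares the Monte Carlo estimate of $P_u$ against the bound of Theorem \ref{thm:kushner_main} with $M = 1$, $\gamma = 0$, and $\delta = c_\textup{J} - \psi = 0$:

```python
import math, random

def jed_1d(x, sigma):
    """Closed-form (JED) control for x+ = x + 2 + u + sigma*d with h(x) = 1 - x^2.
    The constraint h(x + 2 + u) - c_J >= alpha*h(x), with c_J = sigma^2,
    alpha = 1 - sigma^2 and E[d] = 0, reads (x + 2 + u)^2 <= R; the feasible
    set is an interval, onto which we project the nominal input u_nom = 0."""
    alpha, c_j = 1.0 - sigma**2, sigma**2
    R = 1.0 - c_j - alpha * (1.0 - x * x)   # here R = (1 - sigma^2) * x^2 >= 0
    if R < 0.0:
        return -2.0 - x                     # infeasible: maximize the barrier
    lo, hi = -2.0 - x - math.sqrt(R), -2.0 - x + math.sqrt(R)
    return min(max(0.0, lo), hi)            # clamp u_nom = 0 into [lo, hi]

def unsafe_fraction(sigma, K, trials=500, seed=0):
    """Monte Carlo estimate of P_u = P(exists k <= K with h(x_k) < 0)."""
    rng = random.Random(seed)
    unsafe = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(K):
            x = x + 2.0 + jed_1d(x, sigma) + sigma * rng.gauss(0.0, 1.0)
            if 1.0 - x * x < 0.0:
                unsafe += 1
                break
    return unsafe / trials

sigma, K = 0.05, 100
# Theorem bound with M = sup h = 1, gamma = 0, delta = c_J - psi = 0:
bound = 1.0 - (1.0 - sigma**2) ** K         # h(x_0) = h(0) = 1
p_u_mc = unsafe_fraction(sigma, K)
```

As in the figure, the empirical unsafe fraction sits well below the theoretical bound, reflecting the conservatism of the martingale argument.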
\subsection{Simple Pendulum}
Next we consider an inverted pendulum about its upright equilibrium point with the DT dynamics:
\begin{align}
\lmat
\theta_{k+1}\\
\dot{\theta}_{k+1}
\end{bmatrix}
=
\lmat
\theta_k + \Delta t \dot{\theta}_k\\
\dot{\theta}_k + \Delta t \sin(\theta_k)
\end{bmatrix}
+
\lmat
0\\
\Delta t \mb{u}
\end{bmatrix}
+ \mb{d}_k,
\end{align}
\noindent with time step $\Delta t = 0.01 $ sec, i.i.d. disturbances $\mb{d}_k \sim \mathcal{N}(\mb{0}_2,\textrm{Diag}([0.005^2, 0.025^2])) $, and safe set\footnote{Diag$: \mathbb{R}^n \to \mathbb{R}^{n\times n}$ generates a square diagonal matrix with its argument along the main diagonal.}:
\begin{align}
\mathcal{C} = \bigg \{ \mb{x} \in \mathbb{R}^n ~\bigg|~ \underbrace{1 - \frac{6^2}{\pi^2} \mb{x}^\top \lmat 1 & 3^{-\frac{1}{2}} \\ 3^{-\frac{1}{2}} & 1 \end{bmatrix} \mb{x}}_{h_\textrm{pend}(\mb{x})} \geq 0 \bigg\}
\end{align}
\noindent which is constructed using the continuous-time Lyapunov equation as in \cite{taylor_safety_2022} and for which $\vert \theta \vert \leq \pi/6$ for all $\mb{x} \in \mathcal{C}$. Figure \ref{fig:pendulum} shows the results of 500 one-second-long trials for each $\mb{x}_0 \in \mathcal{C}$ using the \ref{eq:jed} controller with parameters $\alpha = 1 - \psi$, $c_\textup{J} = \psi $, where $\psi = \frac{\lambda_\textrm{max}}{2}\textrm{tr(cov(} \mb{d}_k)) $.
This figure highlights the influence of $\mb{x}_0$ and shows how the bound on $P_u$ increases as $h(\mb{x}_0)$ decreases.
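The Jensen-gap constant for this example can be reproduced directly, since the barrier's Hessian is constant: $\nabla^2 h_\textrm{pend} = -\frac{72}{\pi^2} P$ with $P$ the matrix in the definition of $\mathcal{C}$, so $\lambda_{\max} = \frac{72}{\pi^2}\,\lambda_{\max}(P)$ and $\lambda_{\max}(P) = 1 + 3^{-1/2}$. A short script of ours, assuming the covariance stated above:

```python
import math

# Barrier h_pend(x) = 1 - (36/pi^2) x^T P x with P = [[1, 1/sqrt(3)], [1/sqrt(3), 1]].
c = 36.0 / math.pi**2
lam_max_P = 1.0 + 3.0 ** -0.5         # eigenvalues of P are 1 +/- 1/sqrt(3)
lam_max = 2.0 * c * lam_max_P         # ||grad^2 h||_2 = (72/pi^2) lambda_max(P)
tr_cov = 0.005**2 + 0.025**2          # Diag([0.005^2, 0.025^2]) disturbance
psi = 0.5 * lam_max * tr_cov          # Lemma 1 Jensen-gap bound
```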
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/pendulum_combined.pdf}
\caption{\textbf{(Top Left)} System diagram of the inverted pendulum. \textbf{(Top Right)} 500 one-second-long example trajectories starting at $\mb{x}_0 = 0 $. \textbf{(Bottom Left)} Monte Carlo estimates of $P_u$ for $\gamma = 0 $ using 500 one-second-long trials for each initial condition represented by a black dot. \textbf{(Bottom Right)} Our (conservative) theoretical bounds on $P_u$ from Theorem \ref{thm:kushner_main}. }
\label{fig:pendulum}
\vspace{-1cm}
\end{figure}
\subsection{Double Integrator}
\begin{figure}[t]
\centering
\includegraphics[width=0.97\linewidth]{figures/lqg.pdf}
\caption{Simulation results for the double integrator over $500$ trials. \textbf{(Top left):} Planar ($x,y$) trajectories for the approximated \ref{eq:dtcbf_op} controller, with the safe set (a unit square) plotted in green. \textbf{(Top right):} Planar ($x,y$) trajectories for a \ref{eq:CE_dtcbf} controller. \textbf{(Bottom left):} The barrier values $h(\mb{x}_k)$ for both controllers, with the max and min values shaded. \textbf{(Bottom right):} Percent of trajectories that have remained safe over time. We also plot our (conservative) bound \eqref{eq:problo} on the unsafe probability $P_u$.}
\label{fig:double-integrator}
\vspace{-0.6cm}
\end{figure}
We also consider the problem of controlling a planar system with unit-mass double-integrator dynamics to remain inside a convex polytope (in particular, a unit square centered at the origin). Using Heun's method, the
dynamics are given by:
\begin{align}
\mb{x}_{k+1} &= \left[\begin{array}{cc}\mb{I}_2 & \Delta t \; \mb{I}_2 \\ \mb{0}_2 & \mb{I}_2 \end{array}\right] \mb{x}_k + \left[\begin{array}{c}\frac{\Delta t^2}{2}\mb{I}_2\\ \Delta t \mb{I}_2\end{array}\right] \mb{u}_k + \mb{d}_k,\\
&\triangleq \mb{A} \mb{x}_k + \mb{B} \mb{u}_k + \mb{d}_k, \label{eq:linear-gaussian}
\end{align}
where $\Delta t$ is the integration time step and $\mb{d}_k \sim \mathcal{N}(\mb{0}_4, \mb{Q})$ is a zero-mean Gaussian process noise added to the dynamics.
Here we use $\Delta t = 0.05$ sec, and $\mb{Q} = \mb{B} \mb{B}^T$, which corresponds to applying a disturbance force $\mb{f}_k \sim \mathcal{N}(0, \mb{I}_2)$ to the system at each timestep.
To keep the system inside a convex polytope, we seek to enforce the affine inequalities $\mb{C} \mb{x} \leq \mb{w}$ for $\mb{C} \in \mathbb{R}^{n_c \times n}, \mb{w} \in \mathbb{R}^{n_c}.$ Thus, we define our barrier $h(\mb{x}) = -\max(\mb{C}\mb{x} - \mb{w})$, where $\max(\cdot)$ denotes the largest element of its vector argument, so that $h(\mb{x}) \geq 0$ if and only if the constraint $\mb{C} \mb{x} \leq \mb{w}$ holds. Implementing the
\ref{eq:dtcbf_op} controller for this system is non-trivial, since the expectation of $h(\mathbf{x})$ for a Gaussian-distributed $\mb{x}$ does not have a closed form. Similarly, implementing the \ref{eq:jed} controller to account for Jensen's inequality is non-trivial since $h$ is not twice continuously differentiable. We instead choose to enforce a conservative approximation of the barrier condition \eqref{eq:stochastic_dtcbf_constraint} using the \textit{log-sum-exp} function. As we show in Appendix \ref{apdx:polytope}, this approximation yields an analytic upper bound (derived using the moment-generating function of Gaussian r.v.s) on $\E[h(\mb{x}_{k+1})]$ which can be imposed via a convex constraint.
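To convey the flavour of that approximation (this is our own illustrative sketch, not the derivation of Appendix \ref{apdx:polytope}): since $-\max(\mb{z}) \geq -\frac{1}{t}\log\sum_i e^{t z_i}$ for any $t > 0$, Jensen's inequality applied to the concave logarithm together with the Gaussian moment-generating function yields the analytic lower bound $\E[h(\mb{x})] \geq -\frac{1}{t}\log \sum_i \exp ( t(\mb{c}_i^\top \boldsymbol{\mu} - w_i) + \frac{t^2}{2}\mb{c}_i^\top \mb{Q} \mb{c}_i )$ for $\mb{x} \sim \mathcal{N}(\boldsymbol{\mu}, \mb{Q})$:

```python
import math, random

def lse_lower_bound(mu, q_diag, C, w, t=20.0):
    """Analytic lower bound on E[-max(C x - w)] for x ~ N(mu, Q), Q diagonal:
    E[h] >= -(1/t) log sum_i exp(t (c_i^T mu - w_i) + t^2 c_i^T Q c_i / 2)."""
    total = 0.0
    for ci, wi in zip(C, w):
        m = sum(a * b for a, b in zip(ci, mu)) - wi
        s2 = sum(a * a * q for a, q in zip(ci, q_diag))
        total += math.exp(t * m + 0.5 * t * t * s2)
    return -math.log(total) / t

# Unit square centered at the origin: |x| <= 0.5, |y| <= 0.5.
C = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
w = [0.5, 0.5, 0.5, 0.5]
mu, q_diag = (0.0, 0.0), (0.05**2, 0.05**2)

bound = lse_lower_bound(mu, q_diag, C, w)

# Monte Carlo reference value of E[h(x)]:
rng = random.Random(0)
n, mc = 100_000, 0.0
for _ in range(n):
    x = [m + rng.gauss(0.0, math.sqrt(q)) for m, q in zip(mu, q_diag)]
    mc += -max(sum(a * b for a, b in zip(ci, x)) - wi for ci, wi in zip(C, w))
mc /= n
```

Increasing $t$ tightens the log-sum-exp approximation of the max but inflates the variance terms, so $t$ trades off the two sources of conservatism.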
Figure \ref{fig:double-integrator} plots the results of 500 simulated trajectories for the double integrator system using the proposed \ref{eq:dtcbf_op} controller, and the certainty equivalent \ref{eq:CE_dtcbf} controller that neglects the presence of process noise. Both controllers have a nominal controller $\mb{k}_\text{nom}(\mb{x}) = \left[50, 0\right]$ which seeks to drive the system into the right wall. All trajectories start from the origin. We note the proposed controller is indeed more conservative than the \ref{eq:CE_dtcbf} controller, yielding both fewer and smaller violations of the safe set. In the bottom right, we also plot our bound as a function of the time horizon, which we note is quite conservative compared to our Monte Carlo estimate of the safety probability, motivating future work.
\subsection{Quadruped}
Finally, we consider the problem of controlling a simulated quadrupedal robot locomoting along a narrow path. The simulation is based on a Unitree A1 robot as shown in Figure \ref{fig:hero_fig} which has 18 degrees of freedom and 12 actuators. An ID-QP controller designed using concepts in \cite{buchli2009inverse} and implemented at 1kHz is used to track stable walking gaits with variable planar velocities and angular rate using the motion primitive framework presented in \cite{Ubellacker2021}. We simulate the entire quadruped's dynamics at 1kHz, but follow a similar methodology to \cite{molnar_model-free_2022} and consider the following simplified discrete-time single-integrator system for DTCBF-based control:
\begin{align}
\mb{x}_{k+1}
= \mb{x}_k
+ \Delta t\lmat \cos\theta & - \sin \theta & 0 \\ \sin \theta & \cos\theta & 0 \\ 0 & 0 & 1\end{bmatrix} \lmat v^x_k \\ v^y_k \\ \theta_k \end{bmatrix} + \mb{d}_k. \label{eq:reduced_order_quad}
\end{align}
Here $\mb{x}_k = \lmat x, & y, & \theta \end{bmatrix}^\top $. In order to represent the error caused by uncertain terrain, zero-mean Gaussian disturbances are added to the quadruped's $(x,y)$ body position and velocity with variances of $2.25\times 10^{-6}$ and $0.01$, respectively. This random noise, along with the dynamics mismatch between the full-order quadrupedal dynamics and \eqref{eq:reduced_order_quad}, is modeled as an i.i.d. random process $\mb{d}_k$.
The quadruped is commanded to stand and then traverse a 7 meter path that is 1 meter wide, with the safe set
$ \mathcal{C} = \{ \mathbf{x} \in \mathbb{R}^n \mid 0.5^2 - y^2 \geq 0 \} $. For this simulation, three controllers are compared: a simple nominal controller $\mb{k}_\textrm{nom}(\mb{x}) = \lmat 0.2, & 0, & -\theta \end{bmatrix}^\top $ with no understanding of safety, the \ref{eq:dtcbfop} controller with $\alpha = 0.99$, and our proposed \ref{eq:jed} controller with $\alpha =0.99$ and $c_\textup{J} = \psi$ using the mean and covariance estimates, $\E[\mb{d}_k] \approx \lmat -0.0132, & -0.0034, & -0.0002 \end{bmatrix}^\top$ and $\textrm{tr(cov}(\mb{d}_k)) \approx \psi = 0.000548$, which were generated using 15 minutes of
walking data controlled by $\mb{k}_\textrm{nom}$.
The results of 50 trials for each controller can be seen in Figure \ref{fig:hero_fig}. As expected, $\mb{k}_\textrm{nom}$ generated the largest safety violations, while \ref{eq:jed} produced the smallest and fewest safety violations.
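The moment estimates above were obtained from residuals of the reduced-order model; the sketch below illustrates that procedure on synthetic data (all shapes and numbers here are hypothetical, and we write the third commanded entry as a yaw rate $\omega$):

```python
import math, random

def reduced_model_step(x, v, dt=0.01):
    """One disturbance-free step of the reduced-order model; x = (x, y, theta),
    v = (vx, vy, omega) with body-frame velocities and a yaw rate."""
    c, s = math.cos(x[2]), math.sin(x[2])
    return (x[0] + dt * (c * v[0] - s * v[1]),
            x[1] + dt * (s * v[0] + c * v[1]),
            x[2] + dt * v[2])

def estimate_disturbance_moments(states, commands, dt=0.01):
    """Residuals d_k = x_{k+1} - F(x_k, u_k) give the sample mean and tr(cov)."""
    resid = [tuple(a - b for a, b in zip(x1, reduced_model_step(x0, v, dt)))
             for x0, x1, v in zip(states, states[1:], commands)]
    n = len(resid)
    mean = tuple(sum(r[i] for r in resid) / n for i in range(3))
    tr_cov = sum(sum((r[i] - mean[i]) ** 2 for r in resid) / (n - 1)
                 for i in range(3))
    return mean, tr_cov

# Synthetic stand-in for the logged walking data (all numbers hypothetical):
rng = random.Random(0)
true_mean, true_std = (-0.01, -0.003, 0.0), 0.02
states, commands = [(0.0, 0.0, 0.0)], []
for _ in range(5000):
    v = (0.2, 0.0, rng.uniform(-0.1, 0.1))
    commands.append(v)
    pred = reduced_model_step(states[-1], v)
    states.append(tuple(p + m + rng.gauss(0.0, true_std)
                        for p, m in zip(pred, true_mean)))

d_mean, d_trcov = estimate_disturbance_moments(states, commands)
```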
\section{Conclusion}
\label{sec:conclusion}
In this work, we developed a bound for the finite-time safety of stochastic discrete-time systems using discrete-time control barrier functions. Additionally, we presented a method for practically implementing convex optimization-based controllers which satisfy this bound by accounting for or analyzing the effect of Jensen's inequality. We presented several examples which demonstrate the efficacy of our bound and our proposed \ref{eq:dtcbf_op} and \ref{eq:jed} controllers.
This paper offers a large variety of directions for future work. In particular, in our practical examples, we find the safety bound presented here is often quite conservative in practice. One way forward would be to find other supermartingale transformations of the process $h(\mb{x}_k)$ (perhaps programmatically, as in \cite{steinhardt2012finite}) that can yield tighter bounds than those in Theorem \ref{thm:kushner_main}. Another potential avenue may consider alternative martingale inequalities to Ville's inequality used in this work. Another important open question is how to incorporate state uncertainty into our framework. This would allow us to reason about the safety of CBF-based controllers that operate in tandem with state estimators such as Kalman filters or SLAM pipelines. Similarly, our methods may have interesting applications in handling the dynamics errors introduced in sampled-data control, which can perhaps be modeled as a random variable or learned using a distribution-generating framework such as state-dependent Gaussian processes or Bayesian neural networks. Finally, we assume that the disturbance distribution $\mathcal{D}$ is known exactly, \textit{a priori}; it would be interesting to consider a ``distributionally robust'' variant of the stochastic barrier condition \eqref{eq:stochastic_dtcbf_constraint} that can provide safety guarantees for a class of disturbance distributions.
\section*{Acknowledgments}
The authors would like to thank Alexander De Capone and Victor Dorobantu
for their invaluable discussion and Joel Tropp for his course on Probability. The authors would also like to thank Wyatt Ubellacker for generously providing his fantastic quadruped simulation environment.
\subsection{Motivating Case: DTCBFs with Bounded Disturbances}
\section{Introduction}
\label{sec:intro}
The paradigm of \emph{inflation}~\cite{Guth81,Linde82,AS82} emerged in
the early Eighties as a way of resolving a number of outstanding puzzles
in cosmology, by postulating that the Universe underwent a phase of
accelerated expansion. Inflationary models predict that the Universe is
spatially flat, and that the quantum zero-point fluctuations of the
space-time metric produce a nearly scale invariant spectrum of density
perturbations that are responsible for the formation of cosmic structures
and the generation of a primordial cosmic gravitational wave background
(CGWB). Observations of the Cosmic Microwave Background (CMB), most
recently with WMAP, have provided a confirmation of the first two
predictions~\cite{WMAP}; the generation of primordial gravitational waves
is still to be verified. This test is important for both cosmology and
fundamental physics. In fact, the actual detailed implementation of an
inflationary model requires the introduction of additional fields that
are not part of the already experimentally well tested standard model
of particle physics and may produce effects at energy scales well beyond
those probed by particle physics experiments. The observation
of a CGWB, either directly with gravitational wave instruments or
indirectly via its effect on the CMB, provides a unique way of measuring
the physical parameters of the models and an opportunity for testing new
ideas in fundamental physics and cosmology.
Inflation predicts a quasi-scale invariant CGWB between $\sim 10^{-16}$~Hz
and $\sim $1~GHz whose spectrum $h_0^2 \, \Omega_\mathrm{gw}(f)$ (the
fractional energy density in gravitational waves, normalised to the
critical density, per unit logarithmic frequency interval) does not
exceed $10^{-15}$ at any one frequency~\cite{Turner97}. Third generation
ground-based km-scale laser interferometers are expected to achieve a
sensitivity $h_0^2 \, \Omega_\mathrm{gw}(f) \sim 10^{-11}$
in the frequency range
$\approx 10$~Hz - a few $\times 100$~Hz (cf~\cite{CT02} for a
recent review). As the characteristic amplitude $h_\mathrm{c}$ on a
bandwidth $\Delta f$ produced by a stochastic background is
\begin{equation}
h_\mathrm{c}(f) \approx 4 \times 10^{-30} \, \left( \frac{h_0^2 \,
\Omega_\mathrm{gw}}{10^{-16}} \right)^{1/2} \left( \frac{f}{1 \,
\mathrm{Hz}}\right)^{-3/2} \left( \frac{\Delta f}{10^{-7} \, \mathrm{Hz}}
\right)^{1/2} \,,
\end{equation}
there is an obvious advantage in observing at lower frequencies.
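As a quick numerical illustration of this scaling (the script and the frequency choices are ours):

```python
def h_c(omega_h2, f_hz, df_hz):
    """Characteristic amplitude over a bandwidth df, per the scaling above."""
    return (4e-30 * (omega_h2 / 1e-16) ** 0.5
            * f_hz ** -1.5 * (df_hz / 1e-7) ** 0.5)

# The inflationary ceiling h0^2 Omega_gw ~ 1e-15 in the BBO band vs. the
# ground-based band (frequencies chosen for illustration):
hc_bbo = h_c(1e-15, 0.1, 1e-7)
hc_ground = h_c(1e-15, 100.0, 1e-7)
```

The $f^{-3/2}$ scaling gives the 0.1 Hz band an amplitude advantage of several orders of magnitude over the ground-based band at fixed $\Omega_\mathrm{gw}$.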
Unfortunately, the Laser Interferometer Space Antenna
(LISA)~\cite{LISA_ppa} will not offer an opportunity to improve (much)
beyond the sensitivity of ground-based detectors because of the
instrument's limitations -- only one interferometer, preventing
cross-correlation experiments -- and the intensity of astrophysical
foregrounds in the mHz frequency band~\cite{HBW90,FP03,UV01}, where LISA
achieves optimal sensitivity. It is currently accepted that a LISA
follow-up mission aimed at the lowest possible frequency band not
compromised by astrophysical foregrounds, $0.1\,\mathrm{Hz} - 1\,\mathrm{Hz}$,
represents the best opportunity to directly study inflation. As a result, a
new mission concept has recently emerged: the Big-Bang-Observer (BBO),
which is presently being investigated by NASA~\cite{bbo}. This consists
of a constellation of four interferometers in a Heliocentric orbit at 1
AU from the Sun. By making the arm length of the BBO interferometers
$\approx 100$ times shorter than that of LISA, the centre of the
observational window is shifted to several $\times 0.1$ Hz; improved technology for
lasers, optics and drag-free systems will make it possible to achieve a sensitivity
$h_0^2 \, \Omega_\mathrm{gw}(f) \lower.5ex\hbox{\ltsima} 10^{-16}$. A similar mission, although
consisting of only one interferometer, has been proposed in Japan:
DECIGO~\cite{decigo}.
Gravitational waves produced during inflation will also have an indirect
effect on the structure of the CMB, most importantly by affecting
its polarisation~\cite{SZ97}.
The investigation of the signature of gravitational waves has been
one of the drivers in the design of Planck~\cite{planck}, an ESA mission
currently scheduled for launch in 2007; moreover vigorous efforts are
underway to design and develop more ambitious instruments, such as
CMBPol~\cite{cmbpol}, in order to carry out highly sensitive searches.
The programme to test the prediction of the generation of a gravitational
wave stochastic background during inflation therefore relies on substantial sensitivity
improvements for missions in either the gravitational wave or the microwave band
(cf {\em e.g.}~\cite{Cooray05,SigCoo05}).
In this paper we investigate how the direct observation of primordial
gravitational waves by BBO can constrain the parameter space of inflationary
models and what are the implications for the design of a mission. We also explore
how such information compare with and complement those that can be gained with
future CMB data. The paper is organised as follows: in Section~\ref{sec:model} we review
single-field slow-roll inflation, the spectrum $\Omega_\mathrm{gw}(f)$ of the cosmic
gravitational wave background that is generated in this epoch
and show that $\Omega_\mathrm{gw}(f)$
can be characterised by only two unknown parameters; in
Section~\ref{sec:results} we discuss the region of the parameter space
that can be probed by the Big-Bang-Observer mission, and how this
depends on different technological choices for the mission; we also
compare and contrast these results with what one might be able to
achieve with future CMB observations, with missions such as Planck and CMBPol;
Section~\ref{sec:conclusions} contains our conclusions and pointers to
future work.
\section{Single-field slow roll inflation}
\label{sec:model}
In this section we briefly review a class of inflationary models where
the period of accelerating cosmological expansion is described by a
single dynamical parameter, the inflation field (see e.g. \cite{LiLy00}) and derive an expression for the spectrum of primordial gravitational waves as a function of the model parameters.
Such analysis can be generalised to multi-field inflationary models, cf.
e.g.~\cite{Lid97}. Throughout the paper we adopt geometrical units in which
$c = G = 1$.
The dynamics of a homogeneous and isotropic scalar field $\phi$ in a
cosmological background described by the Friedmann-Robertson-Walker
metric is determined by the equation of motion
\begin{equation}
\ddot{\phi} + 3H \, \dot{\phi} + V^{\prime}(\phi) = 0 \,,
\end{equation}
where $a$ is the scale factor, $H = \dot{a} / a$ the expansion rate and
$V(\phi)$ the scalar field potential; in the previous equation dots
refer to time derivatives and primes to derivatives with respect to
$\phi$. The evolution of $a$ is encoded into the Friedmann equation,
\begin{equation}
H^2 = \frac{8\pi}{3 \, m^2_{pl}} \left[ \frac{\dot{\phi}^2}{2} + V(\phi)
\right] \,,
\end{equation}
where $m_{pl} \sim 10^{19}$~GeV is the Planck mass. Inflation is a
period of accelerated expansion where $\ddot{a} / a \, > \, 0$ which
implies that the \textit{slow-roll parameters},
\begin{eqnarray}
\epsilon & = & \frac{m^2_{pl}}{16 \pi} \left( \frac{V^{\prime}}{V}
\right)^2\,,
\label{epsilon}\\
\eta & = & \frac{m^2_{pl}}{8 \pi} \left( \frac{V^{\prime\prime}}{V}
\right)\,,
\label{eta}
\end{eqnarray}
must be less than 1.
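As a standard worked example, for the quadratic potential $V(\phi) = \frac{1}{2} m^2 \phi^2$ one has $V^{\prime}/V = 2/\phi$ and $V^{\prime\prime}/V = 2/\phi^2$, so that
\begin{equation}
\epsilon = \eta = \frac{m^2_{pl}}{4 \pi \phi^2} \,,
\end{equation}
and the slow-roll conditions $\epsilon,\, \eta < 1$ require $\phi > m_{pl} / (2 \sqrt{\pi})$.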
Inflation generates two types of metric perturbations: (i) \textsl{scalar
or curvature perturbations}, coupled to the energy momentum tensor of
the matter fields, that constitute the seeds for structure formation and
for the observed anisotropy of the CMB and (ii) \textsl{tensor or
gravitational wave perturbations} that, at first order, do not couple
with the matter fields. Tensor perturbations are responsible for a CGWB.
In the slow-roll regime ($\epsilon \,, \eta < 1$), the power spectra of
curvature and tensor perturbations are given by
\begin{eqnarray}
\Delta^2_{{\cal R}} & = & \left[ \frac{H}{\dot{\phi}} \left(
\frac{H}{2\pi} \right) \right]^2_{k=aH} \,,
\label{Deltar}\\
\Delta^2_T & = & \frac{16}{\pi} \left( \frac{H}{m_{pl}} \right)^2_{k=aH} \,,
\label{Deltat}
\end{eqnarray}
where $\Delta^2_{{\cal R}}$ and $\Delta^2_T$ are functions of the
comoving wavenumber $k$ evaluated when a given mode crosses the causal
horizon $(k = aH)$. The spectral slopes of the scalar and tensor
perturbations are then given by
\begin{eqnarray}
n_s -1 & = & \frac{\rmd \ln \Delta_{{\cal R}}^2} {\rmd \ln \, k} \,,
\label{ns}\\
n_T & = & \frac{\rmd \ln \Delta_{T}^2} {\rmd \ln \, k} \,;
\label{nt}
\end{eqnarray}
$n_s$ and $n_T$ can also be written in terms of the slow-roll parameters
$\epsilon$ and $\eta$ as
\begin{eqnarray}
n_s & = & 1 - 6 \epsilon + 2 \eta \,,
\label{newns}\\
n_T & = & -2 \epsilon \,.
\label{newnt}
\end{eqnarray}
For single field slow-roll inflationary models the full set of metric
perturbations is described in terms of the quantities $\Delta_{{\cal
R}}$, $\Delta_T$, $n_s$ and $n_T$, which are however not independent.
Using Equations~(\ref{epsilon})-(\ref{Deltat}),(\ref{newns})
and~(\ref{newnt}) one finds the consistency relation
\begin{equation}
n_T = -\frac{r}{8} \,,
\label{consist}
\end{equation}
where
\begin{equation}
r = \frac{\Delta^2_T}{\Delta^2_{{\cal R}}}
\label{ratio}
\end{equation}
is the so-called tensor-to-scalar ratio.
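The consistency relation follows directly from the slow-roll equations: using $3 H \dot{\phi} \simeq -V^{\prime}$ and $H^2 \simeq 8 \pi V / (3 m^2_{pl})$ in Equations~(\ref{Deltar}) and~(\ref{Deltat}),
\begin{equation}
r = \frac{\Delta^2_T}{\Delta^2_{{\cal R}}}
= \frac{64 \pi}{m^2_{pl}} \, \frac{\dot{\phi}^2}{H^2}
= \frac{m^2_{pl}}{\pi} \left( \frac{V^{\prime}}{V} \right)^2
= 16 \, \epsilon \,,
\end{equation}
so that $n_T = -2 \epsilon = -r/8$.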
The spectrum of a cosmological
gravitational wave stochastic background is defined as
\begin{equation}
\Omega_\mathrm{gw}(f) = \frac{1}{\rho_c} \frac{\rmd
\rho_\mathrm{gw}}{\rmd \ln f} \,,
\end{equation}
where $\rho_\mathrm{gw}$ is the gravitational waves energy density,
$f = k / 2 \pi$ is the physical frequency and $\rho_c = 3 H^2_0 / 8 \pi$
is the critical energy density today. $H_0$ is the Hubble parameter and
$h_0 \equiv H_0/100 \, \mathrm{km} \, \mathrm{sec}^{-1} \,
\mathrm{Mpc}^{-1}$, so that $h_0^2 \Omega_\mathrm{gw}(f)$ is
independent of the value of the Hubble constant.
For the class of single-field, slow-roll inflationary models considered
here, the spectrum of a CGWB is given by~\cite{Lid97}
\begin{equation}
\Omega_\mathrm{gw}(f) = \frac{1}{24} \Delta^2_T \frac{1}{z_\mathrm{eq}}
\left( \frac{f}{f_0} \right)^{n_T}
\label{spectrum}
\end{equation}
where $z_\mathrm{eq} \approx 2.4 \times 10^4$ is the redshift of
matter-radiation equality and $f_0$ a reference frequency. In order to
be consistent with the recent analysis carried out by the WMAP team, in
this paper we choose $f_0 = 3.1 \times 10^{-17}$~Hz, corresponding
to a wavenumber $k_0 = 0.002 \; \mbox{Mpc}^{-1}$. Using the
Equations~(\ref{consist}) and~(\ref{ratio}), the
spectrum~(\ref{spectrum}) can be written as~\cite{Turner97}
\begin{equation}
\Omega_\mathrm{gw}(f) = \Omega_0 \, r \, A \exp \left[ {\cal N}(f) \,
n_\mathrm{gw}(f) \right] \,,
\label{newspectrum}
\end{equation}
where
\begin{eqnarray}
{\cal N} & \simeq & 28.8 + \ln \left( \frac{f}{10^{-4} \mbox{Hz}}
\right) \,, \\
n_\mathrm{gw} & = & -\frac{r}{8} \left\{ 1 + \frac{\mathcal{N}}{2} \,
\left[ (n_s-1) + \frac{r}{8} \right] \right\} \,,
\end{eqnarray}
and $\Omega_0 = 5.1 \times 10^{-15}$. In Equation~(\ref{newspectrum}) the parameter $A$
accounts for the power spectrum normalisation with respect to the COBE
results: this parameter is currently constrained by the
measurements of CMB anisotropy to $A \sim 0.7 - 1.1$~\cite{Sper03}. Moreover,
since the GW spectrum is extrapolated over a wide range of scales, in
Equation~(\ref{newspectrum}) we have included the first order correction
for the running of the tensor spectral slope. Notice that
Equation~(\ref{newspectrum}) is valid provided that
\begin{equation}
\left| (n_s-1) + \frac{r}{8} \right| \, \ll \, \frac{2}{\mathrm{max} \,
\cal{N}} \,.
\label{constraint}
\end{equation}
For $n_s = 1$ and $r \ll 1$, Equation~(\ref{newspectrum}) gives
$\Omega_\mathrm{gw}(f) \approx 3.7 \times 10^{-17} \, (r/10^{-2})$,
where we have set $A = 0.7$. For single-field inflationary models
$\Omega_\mathrm{gw}(f)$ is therefore described by two ``primordial
parameters'', $n_s$ and $r$, and one parameter $A$ which encodes the
effects due to the late cosmological evolution, such as the nature of
the dark energy component. In this paper we set $A = 0.7$ and consider
the gravitational wave spectrum $\Omega_\mathrm{gw}(f)$ as described by
two unknown parameters, $n_s$ and $r$, that need to be determined by
observations.
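As a numerical cross-check of the estimates above (an illustrative script of ours, with $\Omega_0$, $A$ and $f_0$ taken from the text):

```python
import math

F0 = 3.1e-17            # reference frequency in Hz (k_0 = 0.002 Mpc^-1)
OMEGA0, A = 5.1e-15, 0.7

def omega_gw(f, ns, r):
    """Evaluate the spectrum above, including the running correction."""
    N = 28.8 + math.log(f / 1e-4)
    n_gw = -(r / 8.0) * (1.0 + 0.5 * N * ((ns - 1.0) + r / 8.0))
    return OMEGA0 * r * A * math.exp(N * n_gw)

w_f0 = omega_gw(F0, ns=1.0, r=0.01)    # matches the quoted order of magnitude
w_1hz = omega_gw(1.0, ns=1.0, r=0.01)  # mildly suppressed by the running
```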
\section{Testing inflationary models with the Big-Bang-Observer}
\label{sec:results}
The Big-Bang-Observer is presently envisaged as a constellation of four
3-arm space-based interferometers on the same Heliocentric orbit at the
vertices of an equilateral triangle, with two interferometers co-located
and rotated by $180^\circ$ at one of the vertices. The arm length of the
interferometers is about $5 \times 10^{4}$ km (a hundredth of the LISA
arm length) corresponding to a peak sensitivity at $\sim 1$~Hz.
Different parameters have been suggested for the instrument, which in
turn correspond to different sensitivities; following~\cite{bbo} we
consider three possible choices, that we summarise in
Table~\ref{tab:bbo-param}; we refer to the corresponding mission concepts as
``BBO-lite'', ``BBO-standard'' and ``BBO-grand''. In this Section we explore the
region of the parameter space $(n_s, r)$ that can be probed with an
instrument of the BBO class and how it depends on the instrumental
parameters; we also compare the sensitivity of a gravitational wave
mission with the information that can be obtained indirectly from CMB
observations using WMAP, Planck~\cite{planck} and CMBPol~\cite{cmbpol}.
Gravitational wave searches for stochastic backgrounds are optimally
carried out by cross-correlating the data sets recorded at different
instruments, which allows one to disentangle the common stochastic
contribution of a CGWB from the (supposedly uncorrelated) contribution
from the instrumental noise~\cite{AR99}. The signal-to-noise ratio can be
efficiently built up only when the separation of the two instruments is smaller
than (half of) the typical wavelength of the waves (in the BBO case $\lambda
\approx 10^{11}$~cm), and therefore only the co-located instruments
can be used in the BBO mission to carry out highly sensitive searches of
stochastic signals. The other interferometers of the constellation allow
one to accurately identify individual sources and subtract any contaminating
radiation from the data streams. Assuming that the noise of the
instruments is uncorrelated, stationary and Gaussian, the optimal
signal-to-noise ratio $\mathrm{S/N}$ that can be achieved
is~\cite{UV01}
\begin{eqnarray}
\mathrm{S/N} & \approx & \frac{3 H_0^2}{10 \pi^2} \sqrt{T} \left[
\int_{-\infty}^{\infty} \rmd f \frac{\gamma^2(f)
\Omega_{\mathrm{gw}}^2(f)}{f^6 S_h^{(1)}(f)\, S_h^{(2)}(f)}
\right]^{1/2} \,,
\nonumber\\
& \approx & 3 \, \left( \frac{h_0^2\Omega_\mathrm{gw}}{10^{-15}}
\right) \, \left[ \left( \frac{\Delta f}{1 \, \mathrm{Hz}} \right) \,
\left( \frac{T}{10^8 \, \mathrm{s}} \right) \, \right]^{1/2} \left(
\frac{f}{1 \, \mathrm{Hz}} \right)^{-3} \, \left( \frac{S_h}{10^{-48} \,
\mathrm{Hz}^{-1}} \right)^{-1} \,,
\label{eqn:snr}
\end{eqnarray}
where $S_h^{(1, 2)}$ is the power spectral density of the detector noise
-- in the remainder of the paper we assume the instruments to have
identical sensitivity and therefore set $S_h^{(1)}(f) = S_h^{(2)}(f) =
S_h(f)$ -- $T$ is the integration time, $\Delta f$ is the effective
bandwidth over which the signal-to-noise ratio is accumulated and
$\gamma(f)$ is the overlap reduction function~\cite{UV01}. In
Table~\ref{tab:bbo-param} we report the frequency at which the noise of
BBO reaches the minimum and the corresponding value of $S_h$, depending
on the choice of the instrumental parameters.
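The second, approximate form of Equation~(\ref{eqn:snr}) is straightforward to evaluate numerically; the sketch below simply implements the quoted scaling relation (parameter names are ours):

```python
def snr_approx(omega_gw, delta_f_hz, T_s, f_hz, S_h):
    """Approximate S/N of a cross-correlation search, second line of
    Eq. (snr): S/N ~ 3 (h0^2 Omega_gw / 1e-15)
    [(df/1 Hz)(T/1e8 s)]^(1/2) (f/1 Hz)^(-3) (S_h/1e-48 Hz^-1)^(-1)."""
    return (3.0 * (omega_gw / 1e-15)
            * ((delta_f_hz / 1.0) * (T_s / 1e8)) ** 0.5
            * (f_hz / 1.0) ** -3
            * (S_h / 1e-48) ** -1)

# Reference point: all ratios equal to one gives S/N ~ 3.
print(snr_approx(1e-15, 1.0, 1e8, 1.0, 1e-48))  # -> 3.0
```

The linear dependence on $\Omega_\mathrm{gw}$ and the steep $f^{-3}$ penalty at higher frequencies are the two features that drive the detectability estimates of the next paragraphs.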
\begin{table}
\caption{\label{tab:bbo-param} Possible instrumental parameters of the
proposed Big-Bang-Observer mission~\cite{bbo}: laser power $P_{La}$ and
wavelength $\lambda$, optical efficiency $\epsilon$, arm length $L$, mirror
diameter $D$ and the ratio of the BBO acceleration noise to that of LISA, $\eta$.
Using these parameters it is straightforward to derive the noise spectral
density $S_h(f)$ from~\cite{generator}: accordingly we report the
frequency $f_\mathrm{*}$ at which the noise reaches its minimum and the
relevant value $S_\mathrm{*} = S_h (f_\mathrm{*})$.}
\begin{indented}
\lineup
\item[]\begin{tabular}{l|cccccccc}
\br
Configuration & $P_{La}$ & $\lambda$ & $\epsilon$ & L & $D$ & $\eta$ &
$f_\mathrm{*}$ & $S_\mathrm{*}^{1/2}$\\
& (W) & ($\mu$m) & & (km) & (m) & & Hz & $\mathrm{Hz}^{-1/2}$\\
\mr
BBO-lite & 100 & 1.06 & 0.3 & $2 \times 10^4$ & 3 & 0.1 & 1.3 & $5.5
\times 10^{-24}$\\
BBO-standard & 300 & 0.5 & 0.3 & $5 \times 10^4$ & 3.5 & 0.01 & 0.6& $7.9 \times
10^{-25}$\\
BBO-grand & 500 & 0.5 & 0.5 & $2 \times 10^4$ & 4 & 0.001 & 0.7 & $3.3
\times 10^{-25}$\\
\br
\end{tabular}
\end{indented}
\end{table}
We have computed the signal-to-noise ratio, Equation~(\ref{eqn:snr}),
generated by a single-field inflationary spectrum
$\Omega_{\mathrm{gw}}(f; n_s, r)$, Equation~(\ref{spectrum}) for the
three BBO configurations reported in Table~\ref{tab:bbo-param}. The
parameters of the signal model have been chosen in the range $0.8 \le
n_s \le 1.2$ and $0\le r \le 1$ and satisfy the constraint given by
Eq.~(\ref{constraint}). We have assumed an effective integration
time of 3 years and the noise spectral density has been derived using
the Sensitivity Curve Generator for Space-borne Gravitational Wave
Observatories~\cite{generator} with the parameters reported in
Table~\ref{tab:bbo-param}. Figure~\ref{fig:snr} summarises the results
and compares them with the current upper limits on $n_s$ and $r$ which
have been inferred from the analysis of the WMAP
data~\cite{Kinneyetal04}. The first interesting result is that the
BBO-lite configuration would not be able to improve our understanding of
standard inflation beyond what is already known; in fact the sensitivity
of BBO-lite is broadly comparable to the limit currently set by WMAP.
This has an immediate implication on the technology programme that will
lead to a BBO-like mission: the parameters reported in
Table~\ref{tab:bbo-param} for BBO-lite are simply too conservative and
would not allow us to achieve the mission science goal.
\begin{figure}[htbp]
\vspace{3pt}
\begin{center}
\includegraphics[height=12cm]{bbo.eps}
\end{center}
\caption{The sensitivity of the Big-Bang-Observer mission to a cosmic
gravitational wave background generated by a single field slow-roll
inflationary model. The plot shows the region in the parameter space
$r$ and $n_s$ (see Section 2) that can be detected (corresponding to a
false alarm probability of $1\%$ and false dismissal rate of
$10\%$). The magenta, yellow and red regions correspond to the limits
obtained in a 3 yr long observation with BBO-lite, BBO-standard, and BBO-grand,
respectively. The solid line corresponds to the present best upper limit
set by WMAP observations~\protect{\cite{Kinneyetal04}}.}
\label{fig:snr}
\end{figure}
On the other hand the BBO-standard configuration is able to probe the
entire range of $n_s$ and to reach values of the tensor-to-scalar ratio
$r \approx 5 \times 10^{-3}$ for a 1\% false alarm and 10\% false
dismissal rate; by adopting the BBO-grand configuration it would be
possible to do even better and reach $r \approx 5 \times 10^{-4}$. Notice
that for $r \ll 1$, the minimum value of the tensor-to-scalar ratio
$r_\mathrm{min}$ that can be observed scales as $r_\mathrm{min} \sim
1/S_h$, every other parameter being equal. Not surprisingly, a dedicated
mission such as BBO would improve our ability to probe the range of
unknown parameters by (roughly) three orders of magnitude with respect to
current limits. However, CMB experiments such as Planck (2007) and, in the
more distant future, CMBPol will also be in a position to search for the
signature of a CGWB and it is worth comparing the sensitivity that can
be achieved by means of indirect observations with the BBO results.
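Since $r_\mathrm{min}\sim 1/S_h$ at fixed frequency and observation time, the gain of BBO-grand over BBO-standard can be roughly cross-checked against the noise floors of Table~\ref{tab:bbo-param} (a crude sketch: it ignores the shift of $f_*$ and the overlap reduction function, so only order-of-magnitude agreement is expected):

```python
# Scale the minimum detectable tensor-to-scalar ratio using r_min ~ 1/S_h
# (other parameters fixed). Inputs: sqrt(S_*) for BBO-standard and
# BBO-grand from the Table, and the quoted r_min = 5e-3 for BBO-standard.
sqrt_Sh_standard = 7.9e-25   # Hz^-1/2
sqrt_Sh_grand = 3.3e-25      # Hz^-1/2
r_min_standard = 5e-3

r_min_grand = r_min_standard * (sqrt_Sh_grand / sqrt_Sh_standard) ** 2
print(r_min_grand)  # -> ~8.7e-4, within a factor of two of the quoted 5e-4
```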
In order to make this comparison, we have determined the theoretical
confidence intervals on the parameters $n_s$ and $r$ by computing the
corresponding Fisher information matrix for Planck and CMBPol,
including both the polarisation and the temperature anisotropy CMB
spectra. In more detail, we have assumed the best-fit cosmological
model consistent with WMAP data~\cite{Sper03} and we have
marginalised with respect to the ionisation optical depth in order to take
into account its effect on the B-mode polarisation. For Planck, we have
assumed an average pixel sensitivity of $11.6\, \mu$K and $24.3\, \mu$K
for the temperature and polarisation anisotropies respectively,
while for CMBPol the corresponding noise levels are reduced by a factor 40.
Figure~\ref{fig:comp} summarises the results: we show the regions in
the two-dimensional parameter space $(n_s, r)$ corresponding to the
68\% and 95\% confidence level for the null hypothesis (i.e.~no CGWB)
for Planck and CMBPol and compare them with the limits of BBO-standard and
BBO-grand observations, respectively (those reported in
Figure~\ref{fig:snr}).
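For a two-parameter forecast, the Fisher-matrix machinery used here reduces to inverting $F_{ij}$ and reading the marginalised errors off the diagonal of $F^{-1}$; a generic sketch follows (the matrix entries are placeholders, not the Planck or CMBPol values, which require the full CMB spectra):

```python
def marginalised_errors_2x2(F):
    """1-sigma marginalised errors for a 2-parameter Fisher matrix F:
    sigma_i = sqrt((F^-1)_{ii}), computed via explicit 2x2 inversion."""
    (a, b), (c, d) = F
    det = a * d - b * c
    inv_diag = (d / det, a / det)
    return tuple(x ** 0.5 for x in inv_diag)

# Placeholder Fisher matrix (illustrative only): uncorrelated parameters
# with curvatures 4 and 25 give marginalised errors 0.5 and 0.2.
print(marginalised_errors_2x2([[4.0, 0.0], [0.0, 25.0]]))  # -> (0.5, 0.2)
```

A correlation between $n_s$ and $r$ (off-diagonal entries) inflates both marginalised errors, which is why the confidence regions in Figure~\ref{fig:comp} are elongated.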
\begin{figure}
\begin{center}
\mbox{
\scalebox{0.4}{\rotatebox{360}{\includegraphics{planck.eps}}}
\scalebox{0.4}{\rotatebox{360}{\includegraphics{cmbpol.eps}}}
}
\end{center}
\caption{\label{fig:comp}The sensitivity of Planck and CMBPol to
indirect observations of a cosmic gravitational wave background produced
during inflation. The plots show the region of the parameter space
$r$ and $n_s$ corresponding to the 68\% and 95\% confidence level
upper-limit to a CGWB (red and yellow areas, respectively). The left
plot corresponds to Planck observations and the line refers to the detection
limit obtained with the BBO-standard configuration, cf.\ Figure~\ref{fig:snr}.
The plot on the right corresponds to CMBPol observations and the line
refers to the detection limit obtained with the BBO-grand configuration,
cf.\ Figure~\ref{fig:snr}.}
\end{figure}
One important caveat is that the results that we have presented so far,
both for direct and indirect observations, are computed assuming that
the only factor limiting the sensitivity of the instruments is the
intrinsic noise of the detectors, whereas other effects could actually
provide the limitation. Astrophysical foregrounds and radiation from
individual GW sources can limit the sensitivity of BBO. Stochastic
foregrounds are produced by the incoherent superposition of radiation
from large populations of astrophysical sources. Foregrounds are
particularly dangerous, because they provide a \emph{fundamental}
sensitivity limit for the mission~\cite{UV01}. In the BBO band, the
strongest contributions, according to our present astrophysical
understanding, come from rotating neutron stars and supernovae generated
by population III objects~\cite{Buonannoetal04}. Foregrounds from rotating neutron stars
should not be a serious limitation, as their contribution to the
spectrum is $\Omega_\mathrm{gw} \lower.5ex\hbox{\ltsima} 10^{-22}$. On the other hand
supernovae from population III objects could be a very serious obstacle to
achieving high sensitivity and might overwhelm the signal produced by
inflation. In fact they could produce a foreground with intensity
$h_0^2\Omega_\mathrm{gw} \sim 10^{-18}$ at $f \sim 1$~Hz. For comparison this
is equivalent to a CGWB with $r \sim 10^{-3}$. Even assuming that no
foreground is sufficiently strong to compete with the signal from
inflation, deterministic signals, primarily from binary neutron stars up to
high redshift, will be present in the data set and need to be identified
and removed to a high degree of precision in order not to introduce spurious effects.
On the other hand the sensitivity of CMB experiments to primordial
gravitational waves strongly depends on the distinctive signature
produced by a CGWB on the B-mode of the CMB polarisation. Indeed the
B-mode polarisation is a particularly sensitive probe of primordial tensor
perturbations, since it does not receive contributions from primordial
density perturbations. However, gravitational lensing by cosmological
structure also generates a B-mode component in the CMB polarisation
~\cite{ZS98} and such a foreground cannot be fully subtracted. The lensing
contamination poses a fundamental limit on the sensitivity to a B-mode
component due to primordial gravitational waves, corresponding to a
lower limit on the tensor-to-scalar ratio $r$ of about $6 \times
10^{-4}$~\cite{KS02,KCK02}.
\section{Conclusions}
\label{sec:conclusions}
The direct detection of a cosmological gravitational wave stochastic
background produced during inflation is of great importance for the
understanding of early Universe cosmology and shall provide a direct
test of one of the fundamental, and not yet probed, predictions of
inflationary theories. In this paper we have explored the sensitivity of
the Big-Bang-Observer mission to backgrounds generated by slow-roll,
single field inflationary models and compared it with indirect
limits that future CMB missions, such as Planck and CMBPol are
expected to set. Our analysis shows that the mild technological improvements
considered for the BBO-lite configuration would not meet the science goals of a dedicated
gravitational wave interferometric mission; on the other hand the ambitious
choices of the instrumental parameters for the standard and grand configuration
of BBO would allow us to achieve a sensitivity $h_0^2\Omega_\mathrm{gw} \sim 10^{-19}$
in the frequency band $0.1\,\mathrm{Hz} - 1\,\mathrm{Hz}$. This value is broadly
comparable with what could be achieved by one of the inflationary probes for CMB
observations, such as CMBPol that are currently being discussed.
It is however important to stress that throughout this paper we have assumed
that the effect of foreground emission from unresolved sources and/or lensing would
have a negligible impact on the sensitivity of the missions. This hypothesis is
useful to gain an insight into the ultimate performance of the experiments, but its
range of validity needs to be carefully investigated. For direct gravitational wave
observations it is clear that at some point astrophysical foregrounds will provide the
fundamental sensitivity limit. The level at which this occurs, and the
consequences for our ability to test predictions, need to be parametrised as a
function of our (still poor) knowledge of the relevant astrophysical scenarios.
\section*{References}
\section{Introduction}
On the landscape of theoretical and observational modern cosmology,
the most revolutionary finding is believed to be the current cosmic
accelerated expansion. Recent experiments indicate that this
expansion must be due to some enigmatic force with astonishing
anti-gravitational effects, known as dark energy. There are many
proposals to explain its ambiguous nature. The $f(R)$ gravity is one
of such proposals established by replacing geometric part of the
Einstein-Hilbert action with this generic function depending on the
Ricci scalar $R$. The fourth order non-linear field equations of
this gravity keep triggering researchers to evaluate exact solution.
The study of exact solutions under assorted scenarios is extensively
used to explore different cosmic aspects that unveil sophisticated
picture of cosmic evolution. Sharif and Shamir~\cite{I1} constructed
vacuum as well as non-vacuum exact solutions of Bianchi I and V
universe models in $f(R)$ gravity and also investigated physical
behavior of these solutions. Guti$\acute{e}$rrez-Pi$\tilde{n}$eres
and L$\acute{o}$pez-Monsalvo \cite{I2} evaluated exact vacuum
solution for static axially symmetric spacetime in the same gravity
and found that solution corresponds to naked singularity. Sharif and
Zubair \cite{11} considered interaction of matter with geometry to
formulate some exact solutions of Bianchi I model. Gao and Shen
\cite{I3} found a new method to formulate exact solutions of static
spherically symmetric metric. They also analyzed some general
properties of solutions like event horizon, singularity and deficit
angle in Jordan and Einstein frames.
Noether symmetry approach is considered to be the most appreciable
technique which explores not only exact solutions but also evaluates
conserved quantities relative to symmetry generators associated with
dynamical system. Capozziello et al. \cite{20} formulated exact
solution of static spherically symmetric metric for $f(R)$ power-law
model. The same authors \cite{I4} generalized this work for
non-static spherically symmetric spacetime and also discussed
possible solutions for axially symmetric model. Vakili \cite{22}
studied the scalar field scenario of flat FRW model through this
approach and discussed current cosmic phase via effective equation
of state parameter corresponding to quintessence phase. Momeni et
al. \cite{a3} investigated the existence of Noether symmetry for
isotropic universe model in mimetic $f(R)$ as well as $f(R,T)$
gravity theories ($T$ denotes trace of energy-momentum tensor).
Sharif and his collaborators \cite{I5} investigated cosmic evolution
as well as current cosmic expansion through Noether symmetry
approach.
Our universe always brings eye-opening questions for cosmologists
regarding its surprising and mysterious nature. The existence of
hypothetical geometries is considered as the most debatable issue
which leads to wormhole geometry. A wormhole (WH) structure is
defined through a hypothetical bridge or tunnel which allows a
smooth connection among different regions only if there exists
exotic matter (matter with negative energy density). The existence
of a physically viable WH is questioned due to the presence of
enough amount of exotic matter. Consequently, there is only one way
to have a realistic WH model, i.e., the presence of exotic matter
must be minimized. Besides the existence of such astrophysical
configurations, the most crucial problem is stability analysis which
defines their behavior against perturbations as well as enhances
physical characterization. A singularity-free configuration
identifies a stable state which successfully prevents the WH from
collapsing, while a WH can also exist for quite a long time even if it
is unstable due to very slow decay. The evolution of an unstable system
can lead to many phenomena of interest from structure formation to
supernova explosions. To explore WH existence, different approaches
have been proposed such as modified theories of gravity, non-minimal
curvature-matter coupling, scalar field models, etc.~\cite{I7}.
The study of WH solutions has been of great interest in modified
theories of gravity. Lobo and Oliveira \cite{I6} considered a constant
shape function and different fluids to explore WH solutions in $f(R)$
gravity. Jamil et al. \cite{I10} formulated viable WH solutions for
$f(R)$ power-law model and also considered particular shape function
in the background of non-commutative geometry. Bahamonde et al.
\cite{I8} constructed a cosmological WH threaded by perfect fluid
approaching the FRW universe in the same gravity. Mazharimousavi and
Halilsoy \cite{I9} found a near-throat WH solution of $f(R)$ model
admitting polynomial expansion and also satisfying necessary WH
conditions for both vacuum as well as non-vacuum cases. Sharif and
Fatima \cite{I11} discussed static spherically symmetric WH in
galactic halo region as well as investigated non-static conformal WH
in $f(\mathcal{G})$ gravity ($\mathcal{G}$ represents the Gauss-Bonnet
term). Noether symmetry approach elegantly explores the WH geometry
by formulating exact solutions. Bahamonde et al. \cite{I12} obtained
exact solutions of red-shift as well as shape functions through this
approach and analyzed their geometric behavior graphically in
scalar-tensor theory incorporating non-minimal coupling with torsion
scalar.
In this paper, we study WH geometry threaded by perfect fluid via
Noether symmetry approach in $f(R)$ gravity. The format of the paper
is as follows. Section \textbf{2} explores basic review of $f(R)$
gravity. In section \textbf{3}, we construct point-like Lagrangian
which is used in section \textbf{4} to evaluate WH solutions for
both constant as well as variable red-shift functions. Section
\textbf{5} investigates stability of the constructed WH solutions.
In the last section, we present final remarks.
\section{Basics of $f(R)$ Gravity}
We consider a minimally coupled action of $f(R)$ gravity given by
\begin{equation}\label{1}
\mathcal{I}=\int
d^4x\sqrt{-g}[\frac{f(R)}{2\kappa^2}+\mathcal{L}_m],
\end{equation}
where $g$ denotes the determinant of the metric tensor $g_{\mu\nu}$,
$f(R)$ describes a coupling-free function while $\mathcal{L}_m$
denotes Lagrangian density of matter. The metric variation of action
(\ref{1}) leads to
\begin{equation}\label{2}
f_RR_{\mu\nu}-\frac{1}{2}fg_{\mu\nu}-\nabla_\mu\nabla_\nu
f_R+g_{\mu\nu}\Box f_R=\kappa^2T^{(m)}_{\mu\nu},\quad
T^{(m)}_{\mu\nu}=g_{\mu\nu}\mathcal{L}_m-2\frac{\partial\mathcal{L}_m}{\partial
g^{\mu\nu}}.
\end{equation}
Here, $f_R$ shows the derivative of generic function $f$ with
respect to $R$, $\nabla_\mu$ represents covariant derivative,
$\Box=\nabla_{\mu}\nabla^{\mu}$ and $T^{(m)}_{\mu\nu}$ denotes
energy-momentum tensor. The equivalent form of Eq.(\ref{2}) is
\begin{equation}\label{3}
G_{\mu\nu}=\frac{1}{f_R}(T^{(m)}_{\mu\nu}+T^{(c)}_{\mu\nu})=T^{eff}_{\mu\nu},
\end{equation}
where $G_{\mu\nu},~T^{(c)}_{\mu\nu}$ and $T^{eff}_{\mu\nu}$ identify
Einstein, curvature and effective energy-momentum tensors,
respectively. The curvature terms relative to generic function
define $T^{(c)}_{\mu\nu}$ as
\begin{equation}\label{4}
T^{(c)}_{\mu\nu}=\frac{f-Rf_R}{2}g_{\mu\nu}+\nabla_\mu\nabla_\nu
f_R-\Box f_R g_{\mu\nu}.
\end{equation}
The energy-momentum tensor corresponding to perfect fluid is
\begin{equation*}
T^{(m)}_{\mu\nu}=(\rho_m(r)+p_m(r))u_\mu u_\nu+p_m(r)g_{\mu\nu},
\end{equation*}
where $\rho_m$ and $p_m$ characterize the energy density and pressure,
respectively, whereas $u_\mu$ denotes the four-velocity of the fluid,
$u_\mu=(-e^{\frac{a(r)}{2}},0,0,0)$.
The static spherically symmetric spacetime is \cite{I13}
\begin{equation}\label{6}
ds^2=-e^{a(r)}dt^2+e^{b(r)}dr^2+M(r)(d\theta^2+\sin^2\theta
d\phi^2),
\end{equation}
where $a,~b$ and $M$ are arbitrary functions depending on radial
coordinate $r$. The geodesic deviation equation determines that
$M(r)=r^2,~\sin r,~\sinh r$ for $\mathcal{K}=0,1,-1$ ($\mathcal{K}$
denotes curvature parameter) under the limiting behavior
$M(r)\rightarrow0$ as $r\rightarrow0$, respectively \cite{I18}. In
case of $M(r)=r^2$, the spherical symmetry defines Morris-Thorne WH
where $a(r)$ is recognized as red-shift function identifying
gravitational red-shift while $e^{b(r)}$ explores the geometry of WH
for $e^b=\left(1-\frac{h(r)}{r}\right)^{-1}$, $h(r)$ is known as
shape function. In order to locate the throat of a WH, the radial
coordinate must follow non-monotonic behavior such that it decreases from a
maximum to a minimum value $r_0$ identifying the WH throat at $h(r_0)=r_0$
and then it starts increasing from $r_0$ to infinity. To have a WH
solution at the throat, the condition $h'(r_0)<1$ is imposed, where
prime denotes derivative with respect to $r$. The flaring-out
condition is the fundamental property of WH which demands
$\frac{h(r)-rh'(r)}{h(r)^2}>0$. For the existence of a traversable WH,
the surface should be free from horizons, the red-shift function
must be finite everywhere and $1-h(r)/r>0$. To formulate the field
equations for the action (\ref{1}), we choose $\mathcal{L}_m=p_m(r)$
\cite{I14} and use Eqs.(\ref{2})-(\ref{6}), it follows that
\begin{eqnarray}\nonumber
&&\frac{e^a}{4e^bM^2}(-4M''M+2b'M'M+M'^2+4Me^b)=\frac{1}
{f_R}\left[\frac{e^{-b}(Rf_R-f)}{2}\right.\\\label{7a}&&
-\left.f_{R}'\left(\frac{a'e^a}{2e^b}\right)
+e^{a-b}f_{R}''+e^{a-b}f_{R}'\left(\frac{a'-b'}{2}+\frac{M'}{M}\right)
+e^a\rho_m\right],
\\\nonumber&&-\frac{1}{4M^2}(M'^2+2a'M'M-4Me^b)=\frac{1}{f_R}
\left[\frac{e^b(f-Rf_R)}{2}-\frac{b'f_R'}{2}\right.\\\label{8a}
&&-\left.f_{R}'\left(\frac{a'-b'}{2}+\frac{M'}{M}\right)+e^bp_m\right],
\\\nonumber&&\frac{1}{4Me^b}(M'M(a'-b')+2M''M+M^2a'^2
-M^2a'b'-M'^2+2M^2a'')\\\nonumber&&=\frac{1}{f_R}\left[Mp_m+\frac{M'f_R'}{2e^bM}
+\frac{M(Rf_R-f)}{2}-\frac{f_R''}{Me^b}-\frac{f_R'}{Me^b}
\left(\frac{a'-b'}{2}+\frac{M'}{M}\right)\right].
\end{eqnarray}
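The throat and flaring-out conditions stated above are easy to verify numerically for a candidate shape function; the sketch below uses $h(r)=r_0^2/r$ (our illustrative choice, not a shape function adopted in this paper):

```python
def check_throat(h, dh, r0, r_far=10.0):
    """Verify the Morris-Thorne conditions for a shape function h(r):
    h(r0) = r0 (throat), h'(r0) < 1, flaring out (h - r h')/h^2 > 0
    at the throat, and 1 - h(r)/r > 0 away from it (no horizon)."""
    assert abs(h(r0) - r0) < 1e-12                      # throat location
    assert dh(r0) < 1.0                                 # throat condition
    assert (h(r0) - r0 * dh(r0)) / h(r0) ** 2 > 0.0     # flaring out
    assert 1.0 - h(r_far) / r_far > 0.0                 # horizon-free
    return True

r0 = 1.0
h = lambda r: r0 ** 2 / r        # illustrative shape function
dh = lambda r: -r0 ** 2 / r ** 2
print(check_throat(h, dh, r0))   # -> True
```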
The energy conditions provide a significant way to analyze physical
existence of some cosmological geometries. For WH geometry, the
violation of these conditions ensures the existence of a realistic
WH. To define energy conditions, Raychaudhari equations are
considered to be the most fundamental ingredients given as
\begin{eqnarray}\label{A}
\frac{d\theta}{d\tau}=-\frac{1}{3}\theta^2-\sigma_{\mu\nu}\sigma^{\mu\nu}
+\Theta_{\mu\nu}\Theta^{\mu\nu}-R_{\mu\nu}l^\mu l^\nu,\\\label{B}
\frac{d\theta}{d\tau}=-\frac{1}{2}\theta^2-\sigma_{\mu\nu}\sigma^{\mu\nu}
+\Theta_{\mu\nu}\Theta^{\mu\nu}-R_{\mu\nu}k^\mu k^\nu,
\end{eqnarray}
where $\theta,~l^\mu,~k^\mu,~\sigma$ and $\Theta$ represent
expansion scalar, timelike vector, null vector, shear and rotation
tensors. The first equation is defined for timelike congruence while
the second is for null congruence. Attractive gravity demands
$R_{\mu\nu}l^\mu l^\nu\geq0$ and $R_{\mu\nu}k^\mu k^\nu\geq0$. For the
Einstein-Hilbert action, these conditions split into null (NEC)
($\rho_{m}+p_{m}\geq0$), weak (WEC)
($\rho_{m}\geq0,~\rho_{m}+p_{m}\geq0$), strong (SEC)
($\rho_{m}+p_{m}\geq0,~\rho_{m}+3p_{m}\geq0$) and dominant (DEC)
($\rho_{m}\geq0,~\rho_{m}\pm p_{m}\geq0$) energy conditions
\cite{I15}. Since the Raychaudhuri equations are purely geometric, the
condition $T^{(m)}_{\mu\nu}k^\mu k^\nu\geq0$ can be
replaced with $T^{eff}_{\mu\nu}k^\mu k^\nu\geq0$. Thus, the energy
conditions in $f(R)$ gravity turn out to be \cite{I16}
\begin{eqnarray*}\nonumber
\textbf{NEC}:\quad&&\rho_{eff}+p_{eff}\geq0,\\\nonumber
\textbf{WEC}:\quad&&\rho_{eff}\geq0,\quad\rho_{eff}+p_{eff}\geq0,\\\nonumber
\textbf{SEC}:\quad&&\rho_{eff}+p_{eff}\geq0,\quad\rho_{eff}+3p_{eff}\geq0,\\\nonumber
\textbf{DEC}:\quad&&\rho_{eff}\geq0,\quad\rho_{eff}\pm p_{eff}\geq0.
\end{eqnarray*}
Solving Eqs.(\ref{7a}) and (\ref{8a}), we obtain
\begin{eqnarray}\nonumber
p_m&=&-\frac{f}{2}+e^{-b}f_{R}'\left(\frac{a'}{2}+\frac{M'}{M}\right)
-\frac{f_R}{4e^bM^2}\left(2M'^2-4M''M-a'^2M^2\right.
\\\label{7}&+&\left.a'b'M^2+2b'M'M-2M^2a''\right),
\\\nonumber\rho_m&=&\frac{f_R}{4e^bM^2}\left(M^2a'^2-M^2a'b'
+2a'M'M+2M^2a''\right)+e^{-b}f_{R}''+e^{-b}f_{R}'\\\label{8}&\times&
\left(\frac{-b'}{2}+\frac{M'}{M}\right)+\frac{f}{2}.
\end{eqnarray}
In $f(R)$ gravity, NEC relative to the effective energy-momentum
tensor for (\ref{6}) yields
\begin{equation}\label{10}
\rho_{eff}+p_{eff}=\frac{1}{2e^{b}}\left(\frac{M'^2}{M^2}+\frac{a'M'}{M}
+\frac{b'M'}{M}-\frac{2M''}{M}\right).
\end{equation}
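For the Morris-Thorne case ($M=r^2$, $e^{b}=(1-h/r)^{-1}$) with a constant red-shift function, a short computation collapses Equation~(\ref{10}) to $\rho_{eff}+p_{eff}=(rh'-h)/r^3$, which is negative at the throat whenever $h'(r_0)<1$. The sketch below checks this reduction numerically (the shape function and evaluation point are our illustrative choices):

```python
import math

def nec_lhs(h, r, eps=1e-6):
    """rho_eff + p_eff from Eq. (10) for M = r^2, a' = 0 and
    e^b = (1 - h(r)/r)^(-1); b'(r) is taken by central difference."""
    b = lambda x: -math.log(1.0 - h(x) / x)
    bp = (b(r + eps) - b(r - eps)) / (2.0 * eps)
    M, Mp, Mpp = r ** 2, 2.0 * r, 2.0
    e_minus_b = 1.0 - h(r) / r
    return 0.5 * e_minus_b * (Mp ** 2 / M ** 2 + bp * Mp / M - 2.0 * Mpp / M)

# Illustrative shape function h(r) = r0^2 / r with throat at r0 = 1.
h = lambda r: 1.0 / r
r = 2.0
closed_form = (r * (-1.0 / r ** 2) - h(r)) / r ** 3   # (r h' - h)/r^3
print(nec_lhs(h, r), closed_form)  # both ~ -0.125: NEC violated
```

The negative sign confirms the standard conclusion that the effective matter threading such a WH violates the NEC.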
\section{Point-like Lagrangian}
In this section, we construct point-like Lagrangian corresponding to
the action (\ref{1}) via Lagrange multiplier approach. In this
regard, we consider following form of gravitational action
\cite{aop1}
\begin{equation}\label{C}
\mathcal{I}=\int\sqrt{-g}[f(R)-\lambda(R-\bar{R})]dr,
\end{equation}
where
\begin{eqnarray}\label{c2}
\sqrt{-g}&=&e^{\frac{a}{2}}e^{\frac{b}{2}}M,\quad\lambda=f_R,\\\nonumber
\bar{R}&=&\frac{1}{e^b}\left(-\frac{a'^2}{2}+\frac{a'b'}{2}-\frac{a'M'}{M}
-\frac{2M''}{M}+\frac{b'M'}{M}+\frac{M'^2}{2M^2}-a''+\frac{2e^b}{M}\right).
\end{eqnarray}
The dynamical constraint $\lambda$ is obtained by varying the action
(\ref{C}) with respect to $R$. In order to determine $p_m$, we
consider the Bianchi identity ($\nabla_{\mu}T^{\mu\nu}=0$) whose radial
component gives
\begin{equation}\label{N1}
\frac{dp_m}{dr}+\frac{a'(r)}{2}\left(p_m+\rho_m\right)=0.
\end{equation}
Solving this differential equation with $p_m=\omega\rho_m$, it
follows that
\begin{equation}\label{c1}
\rho_m=\rho_0a^{-\frac{(1+\omega)}{2\omega}},\quad
p_m=\omega\rho_m=\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}},
\end{equation}
where $\omega$ represents the equation of state parameter. Inserting
Eqs.(\ref{c2}) and (\ref{c1}) in (\ref{C}), we obtain
\begin{eqnarray}\nonumber
\mathcal{I}&=&\int
e^{\frac{a-b}{2}}M\left[f(R)-Rf_R+\frac{f_R}{e^b}\left(-\frac{a'^2}{2}
+\frac{a'b'}{2}-\frac{a'M'}{M}-\frac{2M''}{M}+\frac{b'M'}{M}
\right.\right.\\\label{C1}&+&\left.\left.\frac{M'^2}{2M^2}-a''
+\frac{2e^b}{M}\right)+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}\right]dr.
\end{eqnarray}
Eliminating the second order derivatives from the above action via
integration by parts and using the definition of the Lagrangian
density, we obtain the point-like Lagrangian as
\begin{eqnarray}\nonumber
&&\mathcal{L}(r,a,b,M,R,a',b',M',R')=e^{\frac{a}{2}}e^{\frac{b}{2}}M\left(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}+\frac{2f_R}{M}\right)
\\\label{11}&&+\frac{e^{\frac{a}{2}}M}{e^{\frac{b}{2}}}\left\{f_R\left(\frac{M'^2}
{2M^2}+\frac{a'M'}{M}\right)+f_{RR}\left(a'R'+\frac{2M'R'}{M}\right)\right\}.
\end{eqnarray}
For static spherically symmetric spacetime, the Euler-Lagrange
equation and Hamiltonian of the dynamical system or energy function
associated with point-like Lagrangian are defined as
\begin{eqnarray}\nonumber
&&\frac{\partial\mathcal{L}}{\partial
q^i}-\frac{dp_i}{dr}=0,\quad\mathcal{H}=\sum_iq'^{i}p_i-\mathcal{L},
\end{eqnarray}
where $q^i$ are generalized coordinates and
$p_i=\frac{\partial\mathcal{L}}{\partial{q'^i}}$ represents
conjugate momenta. The variation of Lagrangian with respect to
configuration space leads to
\begin{eqnarray*}
&&e^b\left(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}-(1+\omega)\rho_0
a^{-\frac{(1+3\omega)}{2\omega}}+\frac{2f_R}{M}\right)+\left(\frac{M'^2}
{2M^2}+\frac{b'M'}{M}\right.\\\nonumber&&-\left.\frac{2M''}
{M}\right)f_R+f_{RR}\left(b'R'-2R''
-\frac{2M'R'}{M}\right)-2R'^2f_{RRR}=0,
\\\nonumber&&e^b\left(f-Rf_R+\omega\rho_0a^{-\frac{(1+\omega)}
{2\omega}}+\frac{2f_R}{M}\right)-f_R\left(\frac{M'^2}
{2M^2}+\frac{a'M'}{M}\right)-f_{RR}\left(a'R'\right.
\\\nonumber&&+\left.\frac{2M'R'}{M}\right)=0,
\\\nonumber&&e^b\left(f-Rf_R+\omega\rho_0a^{-\frac{(1+\omega)}
{2\omega}}+\frac{2f_R}{M}\right)
+f_R\left(-\frac{a'^2}{2}+\frac{a'b'}{2}-\frac{a'M'}{2M}
-\frac{M''}{M}-a''\right.\\\nonumber&&+\left.\frac{b'M'}
{2M}+\frac{M'^2}{2M^2}\right)+f_{RR}
\left(b'R'-a'R'-2R''-\frac{M'R'}{M}\right)-2R'^2f_{RRR}=0,
\\\nonumber&&\left[e^b\left(\frac{2}{M}-R\right)
-\frac{a'^2}{2}+\frac{a'b'}{2}-\frac{a'M'}{M}
-\frac{2M''}{M}+\frac{b'M'}{M}+\frac{M'^2}{2M^2}-a''\right]f_{RR}=0.
\end{eqnarray*}
The energy function and variation of Lagrangian relative to shape
function yield
\begin{equation}\label{12}
e^b=\frac{\frac{f_R{M}'}{{M}}\left(\frac{{M}'}{2{M}^2}+{a}'{M}'\right)
+R'f_{RR}({a}'{M}+2{M}')}{f-Rf_R
+\omega\rho_0{a}^{-\frac{(1+\omega)}{2\omega}}+\frac{2f_R}{{M}}}.
\end{equation}
\section{Noether Symmetry Approach}
The physical characteristics of a dynamical system can be identified
by constructing the associated Lagrangian which successfully
describes energy content and the existence of possible symmetries of
the system. In this regard, Noether symmetry approach provides an
interesting way to construct new cosmological models and geometries
in modified theories of gravity. According to the well-known Noether
theorem, a group generator yields an associated conserved quantity if the
point-like Lagrangian remains invariant under a continuous group. In
order to investigate the presence of Noether symmetry and relative
conserved quantity of static spherically symmetric metric, we
consider a vector field \cite{aop2}
\begin{eqnarray}\label{13}
K&=&\tau(r,q^i)\frac{\partial}{\partial
r}+\zeta^i(r,q^i)\frac{\partial}{\partial q^i},
\end{eqnarray}
where $r$ behaves as an affine parameter while $\tau$ and $\zeta^i$
are unknown coefficients of the vector field $K$.
The presence of a Noether symmetry is assured only if the point-like
Lagrangian satisfies the invariance condition and the vector field
is found to be unique on tangent space. Consequently, the vector
field acts as a symmetry generator generating associated conserved
quantity. In this case, the invariance condition is defined as
\begin{equation}\label{14}
K^{[1]}\mathcal{L}+(D\tau)\mathcal{L}=DB(r,q^i),
\end{equation}
where $B$ denotes boundary term of the extended symmetry, $K^{[1]}$
describes first order prolongation and $D$ represents total
derivative given by
\begin{equation}\label{15}
K^{[1]}=K+(D\zeta^i-{q'}^iD\tau)\frac{\partial}{\partial
{q'}^i},\quad D=\frac{\partial}{\partial
r}+{q'}^{i}\frac{\partial}{\partial q^i}.
\end{equation}
Noether symmetries coming from the invariance condition (\ref{14})
identify the associated conserved quantities through the first integral.
If the Lagrangian remains invariant under translation in time and
position, then the first integral identifies energy and linear
momentum conservation while rotationally symmetric Lagrangian yields
conservation of angular momentum \cite{13}. For invariance condition
(\ref{14}), the first integral is defined as
\begin{equation}\label{16}
\Sigma=B-\tau\mathcal{L}-(\zeta^i-{q'}^i\tau)
\frac{\partial\mathcal{L}}{\partial {q'}^i}.
\end{equation}
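Equation~(\ref{16}) can be illustrated with a toy one-dimensional Lagrangian (our example, not part of the paper): for $\mathcal{L}=\frac{1}{2}q'^2-\frac{1}{2}q^2$ with $\tau=1$, $\zeta=0$, $B=0$, the first integral $\Sigma=-\mathcal{L}+q'\,\partial\mathcal{L}/\partial q'=\frac{1}{2}q'^2+\frac{1}{2}q^2$ is the conserved energy, and its constancy along a solution can be checked directly:

```python
import math

def sigma(q, qp):
    """First integral of Eq. (16) with tau = 1, zeta = 0, B = 0 for the
    toy Lagrangian L = q'^2/2 - q^2/2:
    Sigma = -L + q' * dL/dq' = q'^2/2 + q^2/2 (the Hamiltonian)."""
    L = 0.5 * qp ** 2 - 0.5 * q ** 2
    return -L + qp * qp

# q(t) = cos(t) solves the Euler-Lagrange equation q'' = -q, so Sigma
# should take the same value (1/2) at every t along the trajectory.
values = [sigma(math.cos(t), -math.sin(t)) for t in (0.0, 0.7, 1.9)]
print(values)  # -> [0.5, 0.5, 0.5] up to rounding
```

In the wormhole problem the same construction, applied to the Lagrangian (\ref{11}) with the symmetry generators found below, yields the conserved quantities used to integrate the field equations.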
For configuration space $Q=\{a,b,M,R\}$, the vector field $K$ and
first order prolongation $K^{[1]}$ take the following form
\begin{eqnarray}\nonumber
K&=&\tau\frac{\partial}{\partial r}+\alpha\frac{\partial}{\partial
a}+\beta\frac{\partial}{\partial b}+\gamma\frac{\partial}{\partial
M}+\delta\frac{\partial}{\partial R},\quad
K^{[1]}=\tau\frac{\partial}{\partial
r}+\alpha\frac{\partial}{\partial a}+\beta\frac{\partial}{\partial
b}\\\label{17}&+&\gamma\frac{\partial}{\partial
M}+\delta\frac{\partial}{\partial R}+\alpha'\frac{\partial}{\partial
a'}+\beta'\frac{\partial}{\partial
b'}+\gamma'\frac{\partial}{\partial
M'}+\delta'\frac{\partial}{\partial R'},
\end{eqnarray}
where the radial derivatives of the unknown coefficients of the
vector field are defined as
\begin{eqnarray}\label{18}
\sigma'_{_j}&=&D\sigma_{_j}-{q'}^{j}D\tau,\quad j=1,...,4.
\end{eqnarray}
Here $\sigma_1,~\sigma_2,~\sigma_3$ and $\sigma_4$ correspond to
$\alpha,~\beta,~\gamma$ and $\delta$, respectively. Inserting
Eqs.(\ref{11}), (\ref{17}) and (\ref{18}) in (\ref{14}) and
comparing the coefficients of $a'^2,~a'b'M',~a'M'^2$ and $a'R'^2$,
we obtain
\begin{equation}\label{19}
\tau,_{_a}f_R=0,\quad\tau,_{_b}f_R=0,\quad\tau,_{_M}f_R=0,\quad\tau,_{_R}f_{RR}=0.
\end{equation}
These equations imply that either $f_R=0$ or $\tau$ depends on $r$
only. The first choice leads to a trivial solution; therefore, we
consider $f_R\neq0$ and compare the remaining coefficients, which
yield the following system of equations
\begin{eqnarray}\label{20}
&&B,_{_b}=0,\quad\tau,_{_a}=0,\quad\tau,_{_b}=0,\quad\tau,_{_M}=0,\quad\tau,_{_R}=0,
\\\label{21}&&e^{\frac{a}{2}}(\gamma,_{_r}f_R+M
\delta,_{_r}f_{RR})=e^{\frac{b}{2}}B,_{_a},\\\label{22a}&&e^{\frac{a}{2}}
(\alpha,_{_r}M+2\gamma,_{_r})f_{RR}=e^{\frac{b}{2}}B,_{_R},\\\label{22}&&e^{\frac{a}{2}}
(\alpha,_{_r}f_R+\gamma,_{_r}M^{-1}f_R+2\delta,_{_r}f_{RR})=e^{\frac{b}{2}}B,_{_M},
\\\label{23}&&\gamma,_{_a}f_R+M\delta,_{_a}f_{RR}=0,\\\label{24}&&\gamma,_{_a}
f_R+M\delta,_{_a}f_{RR}=0,\\\label{25}&&\alpha,_{_b}f_R+\gamma,_{_b}M^{-1}
f_R+2\delta,_{_b}f_{RR}=0,\\\label{26}&&M\alpha,_{_b}f_{RR}+2\gamma,_{_b}f_{RR}=0,
\\\label{27}&&M\alpha,_{_R}f_{RR}+2\gamma,_{_R}f_{RR}=0,\\\label{28}&&f_R
(\alpha-\beta-2\gamma
M^{-1}+4M\alpha,_{_M}+4\gamma,_{_M}-2\tau,_{_r})+f_{RR}(2\delta+8M\delta,_{_M})=0,
\\\label{29}&&f_R(\alpha-\beta+2\alpha,_{_a}-2\tau,_{_r}
+2\gamma,_{_M}+2\gamma,_{_a})+f_{RR}(2\delta+2M\delta,_{_M}+4\delta,_{_a})=0,
\\\nonumber&&f_R(\alpha,_{_R}+\gamma,_{_R}M^{-1})+f_{RR}(\alpha-\beta+M\alpha,_{_M}
+2\gamma,_{_M}-2\tau,_{_r}+2\delta,_{_R})+2\delta\\\label{30}&&\times
f_{RRR}=0,\\\nonumber&&2\gamma,_{_R}f_R+f_{RR}(M\alpha-M\beta+2\gamma
+2M\alpha,_{_a}-2M\tau,_{_r}+4\gamma,_{_a}+2M\delta,_{_R})+2M\\\label{31}&&\times\delta
f_{RRR}=0,\\\nonumber&&e^{\frac{a}{2}}e^{\frac{b}{2}}M\{\frac{1}{2}(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}+\frac{2f_R}{M})(\alpha+\beta+\tau,_{_r})
-\frac{1}{2}\alpha(1+\omega)\rho_0\\\nonumber&&\times
a^{-\frac{(1+3\omega)}{2\omega}}+\delta
M(2M^{-1}-R)f_{RR}\}+e^{\frac{a}{2}}e^{-\frac{b}{2}}\gamma(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}})\\\label{32}&&=B,_{_r}.
\end{eqnarray}
In order to solve this system, we consider $M(r)=r^2$ and take
$B,_{_a}=B,_{_M}=B,_{_R}=0$, so that Eqs.(\ref{20})-(\ref{27}) give
\begin{equation*}
\alpha=Y_2(a,r),\quad\gamma=Y_1(r),\quad\delta=Y_3(r,R).
\end{equation*}
Inserting these values in Eqs.(\ref{28})-(\ref{31}), we obtain
\begin{equation*}
Y_1(r)=0,\quad Y_2(a,r)=c_2,\quad
Y_3(r,R)=\frac{c_1f_R}{f_{RR}},\quad\beta=2c_1+c_2-2\tau,_{_r},
\end{equation*}
where $c_1$ and $c_2$ are arbitrary constants. For these solutions,
the coefficients of symmetry generator turn out to be
\begin{equation}
\alpha=c_2,\quad\beta=2c_1+c_2,\quad\gamma=0,
\quad\delta=\frac{c_1f_R}{f_{RR}},\quad\tau=c_0.
\end{equation}
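These coefficients can be checked directly against the determining
equations. For instance, since $\alpha$ and $\tau$ are constants,
$\gamma=0$ and $\delta$ is independent of $M$, Eq.(\ref{28}) reduces
to
\begin{equation*}
f_R(\alpha-\beta)+2\delta
f_{RR}=f_R(c_2-2c_1-c_2)+2c_1f_R=-2c_1f_R+2c_1f_R=0.
\end{equation*}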
Substituting these coefficients in Eq.(\ref{32}), we formulate the
boundary term and the explicit form of $f(R)$ as follows
\begin{eqnarray*}
f(R)&=&-\frac{1}{2(c_1+c_2)}\left[-(1+\omega)\rho_0a^{-\frac{(1
+3\omega)}{2\omega}}+2\omega(c_1+c_2)\rho_0a^{-\frac{(1+\omega)}
{2\omega}}\right.\\\nonumber
&-&\left.6c_4e^{\frac{-a-b}{2}}\right],\quad B=c_3+c_4r^3.
\end{eqnarray*}
The coefficients of symmetry generator, boundary term and solution
of $f(R)$ satisfy the system of Eqs.(\ref{20})-(\ref{31}) for
$c_1=0$. Thus, the symmetry generator and the corresponding first
integral take the form
\begin{eqnarray*}
K&=&c_0\frac{\partial}{\partial r}+c_2\frac{\partial}{\partial
a}+c_2\frac{\partial}{\partial
b},\\\nonumber\Sigma&=&c_3+c_4r^3-c_0\left[e^{\frac{a}{2}}
e^{\frac{b}{2}}r^2(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}+2f_Rr^{-2})
\right.\\\nonumber&+&\left.\frac{e^{\frac{a}{2}}r^2}
{e^{\frac{b}{2}}}\{f_R(2r^{-2}+2a'r^{-1})
+f_{RR}(a'R'+4R'r^{-1})\}\right]\\\nonumber&-&c_2
e^{\frac{a-b}{2}}(R'r^2f_{RR}+2rf_R).
\end{eqnarray*}
The verification of Eq.(\ref{32}) yields
\begin{equation}\label{33}
b(r)=\int\frac{8c_6r^2+a''r^2+4a'r+a'^2r^2-4c_7}{r(4+a'r)}dr+c_5,
\end{equation}
where $c_i$'s $(i=3,...,8)$ are arbitrary constants and this solution
satisfies Eq.(\ref{32}) for $\omega=1,1/3,-1/3,-1$. To discuss the
physical features and geometry of the WH via the shape function, we
take the red-shift function as $a(r)=k$ and $a(r)=-\frac{k}{r},~k>0$,
where $k$ is a constant \cite{I17}. In the following, we solve the
integral for both choices of the red-shift function.
\subsubsection*{Case I: $a(r)=k$}
We first consider the red-shift function to be constant and evaluate
$b(r)$ as
\begin{equation}\label{34}
b(r)=c_6r^2-c_7\ln r+c_5.
\end{equation}
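Indeed, for $a'=a''=0$ the integrand in Eq.(\ref{33}) becomes
elementary and the result can be verified directly:
\begin{equation*}
b(r)=\int\frac{8c_6r^2-4c_7}{4r}dr+c_5=\int\left(2c_6r
-\frac{c_7}{r}\right)dr+c_5=c_6r^2-c_7\ln r+c_5.
\end{equation*}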
Consequently, the shape function turns out to be
\begin{equation}\label{35}
h(r)=r(1-e^{-b(r)})=r(1-c_7re^{-c_6r^2-c_5}).
\end{equation}
In this case, the explicit form of $f(R)$ reduces to
\begin{eqnarray}\nonumber
f(R)=-\frac{1}{2c_2}\left[-(1+\omega)\rho_0
k^{-\frac{(1+3\omega)}{2\omega}}+2\omega
c_2\rho_0k^{-\frac{(1+\omega)}{2\omega}}
-6c_4\sqrt{c_7r}e^{\frac{-c_6r^2-c_5-k}{2}}\right].\\\label{A1}
\end{eqnarray}
The $f(R)$ theory of gravity is one of the competitive candidates
among modified theories of gravity as it naturally unifies two
expansion phases of the universe, i.e., inflation at early times and
cosmic acceleration at the current epoch. The higher-order curvature
terms with positive powers dominate in the early universe, leading to
the inflationary stage, while the terms with negative powers of the
curvature serve as a gravitational alternative for dark energy that
acts as a possible source to speed up cosmic expansion \cite{aop1a}.
Although the ghost-free $f(R)$ theory is very interesting and useful
as it passes solar system tests, it also suffers from instabilities.
For instance, the theory with a $\frac{1}{R}$ term may develop an
instability \cite{aop22a}, whereas adding an $R^2$ term to this
specific form of the $f(R)$ model easily eliminates it \cite{aop9a}.
Therefore, viable $f(R)$ models are required to satisfy the stability
constraints $f_R(R)>0,~f_{RR}(R)>0,~R>R_0$, where $R_0$ is the
current Ricci scalar \cite{aop25a}.
In Figure \textbf{1}, both plots indicate that the constructed
$f(R)$ model (\ref{A1}) preserves the stability conditions. Figure
\textbf{2} shows the graphical analysis of shape function. The upper
left plot represents positive behavior of $h(r)$ while the upper
right indicates that the shape function admits asymptotic behavior.
The lower left plot locates the WH throat at $r_0=4.4$ and the
corresponding right plot identifies that
$\frac{dh(r_0)}{dr}=0.9427<1$. To discuss physical existence of WH,
we insert constant red-shift function and Eq.(\ref{34}) in
(\ref{10}) yielding
\begin{equation*}
\rho_{eff}+p_{eff}=\frac{rh'(r)-h(r)}{r^3}<0,
\end{equation*}
which satisfies the flaring-out condition. Consequently, the NEC is
violated in this case, $\rho_{eff}+p_{eff}<0$, which assures the
presence of repulsive gravity leading to a traversable WH. In order to
study the realistic existence of traversable WH, we analyze the
behavior of NEC and WEC in Figure \textbf{3}. Both plots indicate
that energy density and pressure recover energy bounds as
$\rho_m\geq0$ and $\rho_m+p_m\geq0$ implying physically acceptable
traversable WH.
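Explicitly, inserting the shape function (\ref{35}) into the above
expression for $\rho_{eff}+p_{eff}$ gives
\begin{equation*}
\rho_{eff}+p_{eff}=\frac{rh'(r)-h(r)}{r^3}
=\frac{c_7(2c_6r^2-1)e^{-c_6r^2-c_5}}{r},
\end{equation*}
which is negative for $r>1/\sqrt{2c_6}$ when $c_7<0$; in particular,
it is negative at the throat $r_0=4.4$ for the chosen values
$c_6=0.1$ and $c_7=-0.25$.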
\begin{figure}\centering{\epsfig{file=f1.eps,
width=0.4\linewidth}\epsfig{file=f2.eps,
width=0.4\linewidth}\caption{Plots of stability conditions of $f(R)$
model versus $r$ for $c_{_2}=5$, $c_{_4}=0.01$, $c_{_5}=-0.35$,
$c_{_6}=0.1$, $c_{_7}=-0.25$, $\rho_0=1$ and $k=0.5$.}}
\end{figure}
\begin{figure}\centering{
\epsfig{file=1.eps, width=0.4\linewidth}\epsfig{file=2.eps, width=0.4\linewidth}\\
\epsfig{file=3.eps, width=0.4\linewidth}\epsfig{file=4.eps,
width=0.4\linewidth} \caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $c_{_5}=-0.35$,
$c_{_6}=0.1$ and $c_{_7}=-0.25$.}}
\end{figure}
\begin{figure}\centering{
\epsfig{file=ec1.eps, width=0.5\linewidth}\epsfig{file=ec2.eps,
width=0.5\linewidth}\caption{Plots of $\rho_m$ and $\rho_m+p_m$
versus $r$.}}
\end{figure}
\subsubsection*{Case II: $a(r)=-k/r$}
In this case, we choose the red-shift function to be $r$-dependent,
leading to
\begin{eqnarray}\nonumber
a(r)&=&-\frac{k}{r},\quad b(r)=\frac{1}{8r}(4c_6r^2(2r-k)-32c_8r\ln
r+(32r-8c_7r+c_6kr^2)\\\label{36}&\times&\ln(4r+k)-8k/c_8)+c_5,\quad
k>0.
\end{eqnarray}
For this solution of $a(r)$ and $b(r)$, the generic function takes
the form
\begin{eqnarray}\nonumber
f(R)&=&-\frac{1}{2c_2}\left[-(1+\omega)\rho_0\left(-\frac{k}
{r}\right)^{-\frac{(1+3\omega)}{2\omega}}+2\omega
c_2\rho_0\left(-\frac{k}{r}\right)^{-\frac{(1+\omega)}{2\omega}}-6c_4
\right.\\\label{A2}&\times&\left.\sqrt{c_8r^4(4r+k)^{-4+c_7-\frac{k^2c_6}
{8}}}e^{\frac{-(c_6r^2-\frac{c_6kr}{2}-\frac{k}{c_8r})-c_5+k}{2}}\right].
\end{eqnarray}
The corresponding shape function becomes
\begin{eqnarray}\label{37}
h(r)=r(1-c_8r^4(4r+k)^{-4+c_7-\frac{k^2c_6}{8}}e^{-(c_6r^2-\frac{c_6kr}{2}
-\frac{k}{c_8r})-c_5}).
\end{eqnarray}
\begin{figure}\centering{\epsfig{file=a.eps,
width=0.55\linewidth}\epsfig{file=b.eps,
width=0.45\linewidth}\caption{Stability conditions of $f(R)$ versus
$r$ for $c_{_2}=5$, $c_{_4}=0.01$, $c_{_5}=-0.35$, $c_{_6}=0.1$,
$c_{_7}=-0.25$ and $k=0.5$.}}
\end{figure}
\begin{figure}\centering{
\epsfig{file=5.eps, width=0.4\linewidth}\epsfig{file=6.eps, width=0.4\linewidth}\\
\epsfig{file=7.eps, width=0.4\linewidth}\epsfig{file=8.eps,
width=0.4\linewidth} \caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $c_{_5}=-4$,
$c_{_6}=0.1$, $c_{_8}=-1$ and $k=0.25$.}}
\end{figure}
\begin{figure}\centering\epsfig{file=c.eps,
width=0.5\linewidth}\epsfig{file=d.eps,
width=0.5\linewidth}\\\epsfig{file=e.eps,
width=0.5\linewidth}\caption{Plots of $\rho_m$, $\rho_m+p_m$ and
$\rho_{eff}+p_{eff}$ versus $r$.}
\end{figure}
Figure \textbf{4} shows that the model (\ref{A2}) follows the
stability conditions for $-0.08<\omega<0$ whereas Figure \textbf{5}
represents the graphical behavior of the shape function. In the upper
panel, the left plot preserves the positivity of $h(r)$ while the
right plot ensures the asymptotically flat geometry of the WH. In the
lower panel, the left plot detects the WH throat at $r_0=5.878$
whereas the right plot indicates that $\frac{dh(r_0)}{dr}=0.1673<1$.
For Eqs.(\ref{10}) and (\ref{36}), we obtain
\begin{equation*}
\rho_{eff}+p_{eff}=\frac{k}{r^2(r-h(r))}+\frac{rh'(r)-h(r)}{r^3}.
\end{equation*}
To investigate the presence of realistic traversable WH, we
establish the graphical behavior of NEC and WEC corresponding to
perfect fluid as well as NEC relative to effective energy-momentum
tensor. Figure \textbf{6} indicates that $\rho_m+p_m\geq0$,
$\rho_m\geq0$ and $\rho_{eff}+p_{eff}<0$ for $-1<\omega<1$. Thus,
the physical existence of WH is assured in this case.
\subsection{Power-law $f(R)$ Model}
Here, we construct a WH solution with symmetry generator and
corresponding conserved quantity for $f(R)$ power-law model, i.e.,
$f(R)= f_0R^n,~n\neq0,1$. For this purpose, we solve
Eqs.(\ref{20})-(\ref{27}) leading to
\begin{equation*}
\alpha=Y_3(a,r),\quad\gamma=Y_1(r),\quad\delta=Y_2(r,R).
\end{equation*}
Inserting this solution into Eqs.(\ref{28})-(\ref{31}), we obtain
\begin{equation*}
Y_1(r)=0,\quad Y_3(a,r)=d_2,\quad
Y_2(r,R)=d_1R,\quad\beta=2(n-1)d_1+d_2-2\tau,_{_r},
\end{equation*}
where $d_1$ and $d_2$ represent arbitrary constants. For these
values, the coefficients of symmetry generator turn out to be
\begin{equation}
\alpha=d_2,\quad\beta=2(n-1)d_1+d_2-2\tau,_{_r},\quad\gamma=0,\quad\delta=d_1R.
\end{equation}
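As a consistency check, note that
$\alpha-\beta-2\tau,_{_r}=-2(n-1)d_1$ while $\gamma=0$ and
$\delta,_{_M}=0$, so that inserting $f_R=nf_0R^{n-1}$ and
$f_{RR}=n(n-1)f_0R^{n-2}$ into Eq.(\ref{28}) gives
\begin{equation*}
-2(n-1)d_1f_R+2d_1Rf_{RR}=-2(n-1)d_1nf_0R^{n-1}
+2(n-1)d_1nf_0R^{n-1}=0.
\end{equation*}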
Substituting these coefficients in Eq.(\ref{32}) and assuming
$B=d_0$ and $\tau=\tau_0$, it follows that
\begin{eqnarray}\nonumber
b(r)&=&\int\frac{8d_3r^2+2a''r^2+4a'r+a'^2r^2-4d_4}{r(4+a'r)}dr
\\\label{40}&-&\ln\left[-d_1+4\int\frac{e^{\int\frac{8r^2
+2a''r^2+4a'r+a'^2r^2-4}{r(4+a'r)}dr}}{r(4+a'r)}dr\right].
\end{eqnarray}
The resulting coefficients of the symmetry generator verify the
system (\ref{20})-(\ref{31}) for $d_2=-2(n-1)d_1$. Under this
condition, the symmetry generator and associated first integral take the form
\begin{eqnarray*}
K&=&\tau_0\frac{\partial}{\partial
r}-2(n-1)d_1\frac{\partial}{\partial a}+d_1\frac{\partial}{\partial
R},\\\nonumber\Sigma&=&d_0-\tau_0\left[e^{\frac{a}{2}}e^{\frac{b}{2}}r^2(f-Rf_R
+\omega\rho_0a^{-\frac{(1+\omega)}{2\omega}}+2f_Rr^{-2})+\frac{e^{\frac{a}{2}}r^2}
{e^{\frac{b}{2}}}\right.\\\nonumber&\times&\left.\{f_R(2r^{-2}+2a'r^{-1})
+f_{RR}(a'R'+4R'r^{-1})\}\right]-2d_1(1-n)e^{\frac{a-b}{2}}(R'r^2\\\nonumber
&\times&f_{RR}+2rf_R)-d_1Rf_{RR}e^{\frac{a-b}{2}}(a'r^2+4r).
\end{eqnarray*}
Now, we solve the integral (\ref{40}) for constant and variable
forms of red-shift function and study WH geometry via shape
function.
\subsubsection*{Case I: $a(r)=k$}
For constant red-shift function, the integral (\ref{40}) reduces to
\begin{equation}\label{41}
b(r)=d_3r^2-d_4\ln r-\ln\left(\frac{-d_1r+e^{r^2}}{r}\right).
\end{equation}
This satisfies Eq.(\ref{32}) for
$\omega=1,\frac{1}{3},-\frac{1}{3},-1$ and
\begin{equation}\label{42}
\rho_0=-\frac{f_0e^{\frac{3\omega\ln d_1+4n\omega\ln2+\ln
d_1}{2\omega}}}{\omega d_1-(1+\omega)},\quad\omega\neq0.
\end{equation}
In this case, the shape function yields
\begin{equation}\label{43}
h(r)=r\left[1-d_4r\left(\frac{-d_1r+e^{r^2}}{r}\right)e^{-d_3r^2}\right].
\end{equation}
\begin{figure}
\centering{\epsfig{file=9.eps,
width=0.35\linewidth}\epsfig{file=10.eps,
width=0.35\linewidth}\\\epsfig{file=11.eps,
width=0.35\linewidth}\epsfig{file=12.eps,
width=0.35\linewidth}\caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $d_{_2}=16$,
$d_{_3}=1.001$, $d_{_4}=-0.2$ and $n=\frac{1}{2}$.}}
\end{figure}
We analyze the WH geometry via the shape function for
$n=\frac{1}{2},~2$ and $4$. In the upper panel, the left and right
plots of Figure \textbf{7} show that $h(r)$ remains positive and
asymptotically flat for $n=\frac{1}{2}$. The lower left plot
identifies the WH throat at $r_0=5.101$ and the right plot satisfies
the condition $h'(r_0)=0.17<1$. In Figures \textbf{8} and \textbf{9},
the shape function preserves its positivity condition and also admits
an asymptotically flat geometry for both $n=2$ and $n=4$. The WH
throat is located at $r_0=0.23$ and $r_0=2.052$ for $n=2$ and $n=4$,
respectively. The derivative condition is also satisfied at the
throat, i.e., $h'(r_0)=0.89<1$ and $h'(r_0)=-0.49<1$. The NEC
relative to the effective energy-momentum tensor verifies
$\rho_{eff}+p_{eff}<0$ while Figure \textbf{10} identifies
$\rho_m\geq0$ and $\rho_m+p_m\geq0$ for $n=0.5$. In the cases $n=2$
and $n=4$, the energy density and pressure corresponding to perfect
fluid evolve in the same way.
\begin{figure}\centering{\epsfig{file=13.eps,
width=0.35\linewidth}\epsfig{file=14.eps,
width=0.35\linewidth}\\\epsfig{file=15.eps,
width=0.35\linewidth}\epsfig{file=16.eps,
width=0.35\linewidth}\caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $d_{_2}=-200$,
$d_{_3}=1.001$, $d_{_4}=0.2$ and $n=2$.}}
\end{figure}
\begin{figure}\centering{\epsfig{file=17.eps,
width=0.35\linewidth}\epsfig{file=18.eps,
width=0.35\linewidth}\\\epsfig{file=19.eps,
width=0.35\linewidth}\epsfig{file=20.eps,
width=0.35\linewidth}\caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $d_{_2}=-200$,
$d_{_3}=1.001$, $d_{_4}=0.2$ and $n=4$.}}
\end{figure}
\begin{figure}\centering{\epsfig{file=ec3.eps,
width=0.65\linewidth}\epsfig{file=ec4.eps,
width=0.45\linewidth}\caption{Plots of $\rho_m$ and $\rho_m+p_m$
versus $r$ for $n=0.5$.}}
\end{figure}
\subsubsection*{Case II: $a(r)=-k/r$}
Here we consider the red-shift function to be $r$-dependent and solve
the integral (\ref{40}), which implies
\begin{eqnarray}\nonumber
b(r)&=&r^2-\frac{rd_1(1-n)}{2}+\frac{d_1^2(1-n)^2\ln(d_1(1-n)+4r)}{8}
+(d_1(1-n))^2\\\nonumber&\times&\left\{-\frac{1}{rd_1(1-n)}
+\frac{4\ln(d_1(1-n)+4r)}{(d_1(1-n))^2}-\frac{4\ln
r}{(d_1(1-n))^2}\right\}-\ln((1-n)\\\nonumber&\times&d_1+4r)
-\ln\left[4\int\frac{1}{4r+d_1(1-n)}\left(r^{-4}(d_1(1-n)
+4r)^{3+\frac{d_1^2(1-n)^2}{8}}\right.\right.\\\nonumber
&\times&\left.\left.e^{r^2-\frac{rd_1(1-n)}{2}+\frac{d_1(1-n)}{r}}
\right)dr-d_1\right].
\end{eqnarray}
This solution satisfies Eq.(\ref{32}) for $\omega=-1$. The shape
function of WH takes the form
\begin{eqnarray*}
&&\frac{h(r)}{r}=\left(1-r^{4}(d_1(1-n)+4r)^{-3-\frac{d_1^2(1-n)^2}{8}}
e^{-r^2+\frac{rd_1(1-n)}{2}-\frac{d_1(1-n)}{r}}\left[\int\{4r+d_1\right.
\right.\\\nonumber&&\times\left.\left.(1-n)\}^{-1}\left(r^{-4}(d_1(1-n)
+4r)^{3+\frac{d_1^2(1-n)^2}{8}}e^{r^2-\frac{rd_1(1-n)}{2}+\frac{d_1(1-n)}{r}}
\right)dr-d_1\right]\right).
\end{eqnarray*}
When the red-shift function is not constant ($a'(r)\neq0$), the
geometry of the WH cannot be analyzed for the $f(R)$ power-law model
due to the complicated forms of $b(r)$ and $h(r)$.
\subsection{Exponential Model}
In this section, we consider another example of a viable $f(R)$
model, i.e., the exponential model, to realize the existence of a
realistic traversable WH. The simplest version of this model is
proposed as \cite{exp1}
\begin{equation}\label{exp1}
f(R)=R-2\Lambda(1-e^{-\frac{R}{R_0}}),
\end{equation}
where $\Lambda$ denotes the cosmological constant while $R_0$ defines
the curvature parameter. If $R\gg R_0$, the model recovers the
standard $\Lambda$CDM (cosmological constant cold dark matter) model.
To formulate the WH solution, we first solve the system of
Eqs.(\ref{20})-(\ref{32}) for the model (\ref{exp1}), which leads to
the following coefficients of the symmetry generator and boundary term
\begin{eqnarray*}
\alpha&=&0,\quad\beta=\frac{4\Lambda\chi_1}{R_0},\quad\gamma=0,\quad
\delta=\chi_1(R_0e^{\frac{R}{R_0}}-2\Lambda),\quad\tau=\tau_0,\\\nonumber
B&=&\frac{2e^{\frac{a+b}{2}}\Lambda\chi_1}{R_0^2}\left[-\frac{2r^3R_0\Lambda}{3}\left(1-
e^{-\frac{R}{R_0}}-\frac{2R}{R_0}\right)+\frac{r^3R_0(1-RR_0)}{3}+4r\right.
\\\nonumber&\times&\left.(R_0-2\Lambda e^{-\frac{R}{R_0}})\right]+\chi_2,
\end{eqnarray*}
where $\chi_1$ and $\chi_2$ represent arbitrary constants. These
solutions satisfy the system for $\omega=\rho_0=-1$ and the
following constraint
\begin{equation}\label{exp2}
e^{\frac{R}{R_0}}r^2R_0^2-2r^2R_0\Lambda+4r^2R\Lambda-24\Lambda=0.
\end{equation}
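It is worth noting that, for the model (\ref{exp1}),
$f_R=1-\frac{2\Lambda}{R_0}e^{-\frac{R}{R_0}}$ and
$f_{RR}=\frac{2\Lambda}{R_0^{2}}e^{-\frac{R}{R_0}}$, so that
\begin{equation*}
\delta=\chi_1\left(R_0e^{\frac{R}{R_0}}-2\Lambda\right)
=\frac{2\Lambda\chi_1}{R_0}\,\frac{f_R}{f_{RR}},
\end{equation*}
mirroring the structure $\delta\propto f_R/f_{RR}$ obtained for the
constructed $f(R)$ model.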
Now we determine the radial metric coefficient of (\ref{6}) using
this constraint together with Eq.(\ref{12}) for both constant and
variable forms of the red-shift function, and study the WH geometry
via the shape function.
\subsubsection*{Case I: $a(r)=k$}
\begin{figure}\centering{\epsfig{file=exp1.eps,
width=0.45\linewidth}\epsfig{file=exp21.eps,
width=0.45\linewidth}\\\epsfig{file=exp3.eps,
width=0.45\linewidth}\epsfig{file=exp4.eps,
width=0.45\linewidth}\caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for $\chi_{_4}=-200$,
$R_0=-0.95=\Lambda$ and $k=0.005$.}}
\end{figure}
\begin{figure}\centering{\epsfig{file=exp5.eps,
width=0.45\linewidth}\epsfig{file=exp6.eps,
width=0.45\linewidth}\caption{Plots of $\rho_m$ and $\rho_m+p_m$
versus $r$.}}
\end{figure}
In this case, we obtain
\begin{eqnarray}\nonumber
e^{b(r)}&=&-4\left(-2R_0r^2+(R_0r^2+12r^4)e^{\frac{12+R_0r^2}{2R_0r^2}}
-48r^4\chi_4\right.\\\nonumber
&+&\left.24(1-\chi_4)\right)\left\{r^2\left((5r^4R_0^2-2r^4R_0
-4R_0r^2)e^{\frac{12+R_0r^2}{2R_0r^2}}\right.\right.\\\label{41e}
&-&\left.\left.6r^4R_0^2+48r^2\chi_4-48r^2-120R_0r^2\chi_4
+104R_0r^2-96+96\chi_4\right)\right\}^{-1}.
\end{eqnarray}
From this expression, we formulate the shape function through
$h(r)=r[1-e^{-b(r)}]$ and analyze the WH geometry graphically. In
Figure \textbf{11}, the upper panel indicates that the shape function
is positively increasing while the corresponding geometry is found to
be asymptotically flat as $h(r)/r\rightarrow0$ when
$r\rightarrow\infty$. In the lower panel, the left plot indicates
that the WH throat exists at $r_0=0.05$ and also preserves the
condition $h(0.05)=0.05$, while the right plot shows that
$h'(r_0)=-0.007<1$. Since the red-shift function is constant, the
traversable nature of the constructed WH solution is preserved by the
violation of the effective NEC, i.e., $p_{eff}+\rho_{eff}<0$. Figure
\textbf{12} verifies the criteria for a physically viable WH as
$\rho_m>0$ and $p_m+\rho_m>0$.
\subsubsection*{Case II: $a(r)=-k/r$}
Using Eqs.(\ref{12}) and (\ref{exp2}), it follows that
\begin{eqnarray}\nonumber
e^{b(r)}&=&-\left[4\left(24+48kr^2-2R_0r^2-4kr^4R_0
-12(r+4)kr^2\chi_4\right.\right.\\\nonumber
&-&\left.\left.24\chi_4(1+2r^4)\right)+(2kr^4R_0+3r^3k+R_0r^2
+12r^4)e^{\frac{12+R_0r^2}{2R_0r^2}}\right]\\\nonumber
&\times&\left\{r^2\left((-2r^4R_0+5r^4R_0^2-4R_0r^2)
e^{\frac{12+R_0r^2}{2R_0r^2}}-(6r^2R_0+104)R_0r^2\right.\right.\\\nonumber
&-&\left.\left.48(r^2+2)-(120R_0r^2-48r^2+96)\chi_4\right)\right\}^{-1}.
\end{eqnarray}
Inserting the above expression in $h(r)=r[1-e^{-b(r)}]$, we construct
the WH solution relative to a variable but finite red-shift function
whose graphical interpretation is given in Figure \textbf{13}. The
plots of the upper and lower panels indicate that the constructed WH
follows an asymptotically flat geometry whose throat is located at
$r_0=0.01$ with $h'(0.01)=-0.001<1$. In order to analyze the presence
of repulsive gravitational effects at the throat, we study the
behavior of the effective NEC in Figure \textbf{14}, which ensures
that the sum of $p_{eff}$ and $\rho_{eff}$ remains negative. Thus,
the constructed WH is found to be traversable. Both plots of Figure
\textbf{15} show that the WH is physically viable as the NEC and WEC
corresponding to ordinary matter are preserved.
\begin{figure}\centering{\epsfig{file=expv1.eps,
width=0.45\linewidth}\epsfig{file=expv21.eps,
width=0.4\linewidth}\\\epsfig{file=expv3.eps,
width=0.45\linewidth}\epsfig{file=expv4.eps,
width=0.45\linewidth}\caption{Plots of $h(r),~\frac{h(r)}
{r},~h(r)-r$ and $\frac{dh(r)}{dr}$ versus $r$ for
$\chi_{_4}=-0.20$, $R_0=-0.95=\Lambda$ and $k=2$.}}
\end{figure}
\begin{figure}\centering{\epsfig{file=expv7.eps,
width=0.45\linewidth}\caption{Evolution of $\rho_{eff}+p_{eff}$
versus $r$.}}
\end{figure}\begin{figure}\centering{\epsfig{file=expv5.eps,
width=0.45\linewidth}\epsfig{file=expv6.eps,
width=0.45\linewidth}\caption{Plots of $\rho_m$ and $\rho_m+p_m$
versus $r$.}}
\end{figure}
\section{Stability Analysis}
Here we discuss the stability of WH solutions relative to both
constant as well as variable red-shift function via
Tolman-Oppenheimer-Volkov (TOV) equation. For isotropic fluid
distribution, the radial component of Bianchi identity
($\nabla_{\mu}T^{\mu\nu}=0$) defines TOV equation as
\begin{equation}\label{N1}
\frac{dp_m}{dr}+\frac{a'(r)}{2}\left(p_m+\rho_m\right)=0.
\end{equation}
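Eq.(\ref{N1}) follows from the radial component of
$\nabla_{\mu}T^{\mu\nu}=0$ for the metric (\ref{6}): the angular
contributions cancel for isotropic pressure and the only surviving
Christoffel term is $\Gamma^{0}{}_{01}=\frac{a'}{2}$, so that
\begin{equation*}
\frac{dp_m}{dr}+\Gamma^{0}{}_{01}\left(\rho_m+p_m\right)
=\frac{dp_m}{dr}+\frac{a'(r)}{2}\left(\rho_m+p_m\right)=0.
\end{equation*}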
The conservation of the energy-momentum tensor relative to the
higher-order curvature terms leads to
\begin{eqnarray}\label{N2}
T^{'(c)}_{11}+\frac{a'}{2}\left(T^{(c)}_{00}+T^{(c)}_{11}\right)
-\frac{M'}{M}\left(f_R''-\frac{f_R'}{e^{b(r)}}\left\{\frac{b'}{2}
+\frac{M'}{2M}\right\}\right)=0.
\end{eqnarray}
Combining Eq.(\ref{N1}) and (\ref{N2}), it follows that
\begin{eqnarray}\label{N3}
p'_{(eff)}+\frac{a'(r)}{2}\left(p_{eff}+\rho_{eff}\right)
-\frac{M'}{M}\left(f_R''-\frac{f_R'}{e^{b(r)}}\left\{\frac{b'}{2}
+\frac{M'}{2M}\right\}\right)=0,
\end{eqnarray}
where $p_{eff}=p_m+T^{(c)}_{11}$ and
$\rho_{eff}=\rho_m+T^{(c)}_{00}$. This equation determines the fate
of the WH as it can be expressed as a combination of the hydrostatic
force $\mathcal{F}_h$ and the gravitational force $\mathcal{F}_g$.
Using Eq.(\ref{N3}), these forces take the following form
\begin{eqnarray*}
\mathcal{F}_h&=&p'_{(eff)}=\frac{d}{dr}(p_m+T^{(c)}_{11}),\\\nonumber
\quad\mathcal{F}_g&=&\frac{\mathcal{M}_{eff}e^{\frac{a-b}{2}}}{r^2}
\left(p_{eff}+\rho_{eff}\right)-\frac{M'}{M}\left(f_R''-\frac{f_R'}
{e^{b(r)}}\left\{\frac{b'}{2}+\frac{M'}{2M}\right\}\right),
\end{eqnarray*}
where $\mathcal{M}_{eff}=\frac{a'r^2e^{\frac{b-a}{2}}}{2}$ denotes
effective gravitational mass. The null effect
($\mathcal{F}_h+\mathcal{F}_g=0$) of these dynamical forces leads to
a stable state of the WH.
\begin{figure}\centering{\epsfig{file=tov1.eps,
width=0.45\linewidth}\epsfig{file=tov2.eps,
width=0.45\linewidth}\caption{Plots of $\mathcal{F}_g$ (green) and
$\mathcal{F}_h$ (red) versus $r$ for $a(r)=k$ (left) and $a(r)=-k/r$
(right) for $c_{_2}=5$, $c_{_4}=0.01$, $c_{_5}=-0.35$, $c_{_6}=0.1$,
$c_{_7}=-0.25$, $\rho_0=-0.01$ and $k=0.5$.}}
\end{figure}
\begin{figure}\centering\epsfig{file=tovn1.eps,
width=0.4\linewidth}\epsfig{file=tovn2.eps,
width=0.4\linewidth}\\\epsfig{file=tovn3.eps,
width=0.4\linewidth}\caption{Plots of $\mathcal{F}_g$ (green) and
$\mathcal{F}_h$ (red) versus $r$ for $a(r)=k$, $d_{_2}=-2.2$,
$d_{_3}=1.001$, $d_{_4}=0.05$, $f_0=1$ and $\mathcal{M}_{eff}=2$.}
\end{figure}
In Figures \textbf{16}-\textbf{18}, we analyze the stability of WH
solutions constructed with the help of a new $f(R)$ model as well as
power-law and exponential forms of generic function $f(R)$. In
Figure \textbf{16}, the left plot represents the stability of WH
solution (\ref{35}) relative to constant red-shift function and
$f(R)$ model (\ref{A1}). The effects of the gravitational and
hydrostatic forces appear to be the same but in opposite directions,
canceling each other. Thus, the considered WH is found to be stable due
to null effect of these forces. For variable red-shift function, the
equilibrium state of WH solution (\ref{37}) is analyzed in the right
plot of Figure \textbf{16}. Initially, the WH geometry seems to be
unstable but it gradually attains an equilibrium state due to the
equal but opposite effects of the hydrostatic and gravitational forces. Figure
\textbf{17} determines the existence of stable WH for $n=0.5$, $n=2$
and $n=4$ with constant red-shift function. For $n=0.5$ and $n=4$,
the system remains unstable as $\mathcal{F}_g+\mathcal{F}_h\neq0$
whereas the constructed WH attains a stable state for $n=2$. In
Figure \textbf{18}, the WH solutions gradually attain equilibrium
state corresponding to both forms of red-shift function.
\begin{figure}\centering\epsfig{file=exp7.eps,
width=0.45\linewidth}\epsfig{file=expv8.eps,
width=0.45\linewidth}\caption{Plots of $\mathcal{F}_g$ (green) and
$\mathcal{F}_h$ (red) versus $r$ for $a(r)=k$ (left), $k=0.005$,
$\mathcal{M}_{eff}=-2$ and $a(r)=-k/r$ (right), $\chi_{_4}=-0.2$,
$R_{_0}=-0.95$, $k=2$ and $\mathcal{M}_{eff}=2$.}
\end{figure}
\section{Final Remarks}
In general relativity, the physical existence of a static
traversable WH demands the violation of NEC by the energy-momentum
tensor. This violation confirms the presence of exotic matter, which
should be minimized to have a physically viable WH. In the case of
$f(R)$ gravity, the energy-momentum tensor threading the WH satisfies the NEC and
WEC whereas the existence of exotic matter is assured by the
effective energy-momentum tensor which violates NEC. In this paper,
we have discussed the presence of static traversable WH via Noether
symmetry approach in $f(R)$ gravity. For this purpose, we have
considered perfect fluid distribution and studied possible existence
of realistic WH solutions for generic as well as $f(R)$ power-law
model. We have solved the over-determined system coming from the
invariance condition and found the symmetry generator, the associated
conserved quantity, and exact solutions of $f(R)$ and $b(r)$ for the
static spherically symmetric
metric. For these solutions, we have studied WH geometry and also
investigated stable state of WH solutions via modified TOV equation
for the red-shift function when $a(r)=k,~-k/r$.
In the case of a constant red-shift function, we have obtained a
viable $f(R)$ model and the shape function satisfies all the required
properties, i.e., $h(r)>0$, the WH geometry is found to be
asymptotically flat and $\frac{dh(r)}{dr}<1$ at $r=r_0$. The
violation of the NEC (using the effective energy-momentum tensor)
assures the presence of the repulsive nature of gravity while the
existence of ordinary matter is supported by the verification of the
NEC and WEC relative to perfect fluid. When $a'\neq0$, the $f(R)$
model preserves the stability conditions for $-0.08<\omega<0$ and the
shape function has preserved all conditions of a traversable WH while
$\rho_{eff}+p_{eff}<0$, $\rho_m+p_m\geq0$ and $\rho_m\geq0$,
minimizing the presence of exotic matter due to the presence of
repulsive gravity. These energy bounds confirm the
presence of a realistic WH solution threaded by $T^{(m)}_{\mu\nu}$.
Consequently, we have found a physically viable WH solution for
$a'\neq0$. For both forms of red-shift function, the constructed WH
solutions attain an equilibrium state as
$\mathcal{F}_g+\mathcal{F}_h=0$.
We have also formulated symmetry generator, corresponding first
integral and WH solutions for $f(R)$ power-law model. When
$a'(r)=0$, we have established graphical analysis of traversable WH
conditions for $n=1/2,~n=2$ and $n=4$. In this case, the shape
function is found to preserve all conditions and
$\rho_{eff}+p_{eff}<0$ assures the violation of NEC identifying the
existence of exotic matter at the throat. The consistent behavior of
$\rho_m\geq0$ and $\rho_m+p_m\geq0$ indicates that the constructed
traversable WH is supported by ordinary matter. The stability
analysis of these realistic traversable WHs identifies that the WH
geometry would be stable only for $n=2$. For $a'\neq0$, we have
found a complicated form of the shape function. For exponential
$f(R)$ model, the WH geometry is discussed near the throat. The
shape of WH is found to be asymptotically flat for both constant as
well as variable forms of the red-shift function. The violation of
effective NEC and verification of NEC as well as WEC of ordinary
matter assure the presence of realistic traversable WH solutions.
The total effect of gravitational and hydrostatic forces identifies
equilibrium state of WHs in both cases.
The WH solutions are found in $f(R)$ gravity which is equivalent to
Brans-Dicke theory under a particular conformal transformation.
Coule \cite{aop3} established static unrealistic WH solutions in
Einstein frame of $f(R)$ theory. Nandi et al. \cite{aop6} examined
the possibility of static WH solutions in the background of both
Jordan and Einstein frames of Brans-Dicke theory. They found that
the non-traversable WH exists in the former frame whereas in the
latter frame, WH solutions do not exist at all unless energy
conditions are violated by hand. Furey and de Benedictis \cite{aop5}
discussed geometry of the WH solutions near the throat while
Bronnikov and Starobinsky \cite{aop4} claimed that the existence of
throat can be preserved under a conformal transformation. In
general, the back transformation from the Jordan to the Einstein
frame does not guarantee physical solutions. It has even been widely
demonstrated that passing from one frame to the other can completely
change the physical meaning as well as the stability of the
solutions \cite{aop36a}. Bahamonde et al. \cite{aop2a} observed the
presence of big-rip (type I) singularity in the Einstein frame of
$f(R)$ gravity while along back mapping, the universe evolution is
found to be singularity free.
In this paper, we have explored the existence of realistic and
stable traversable WH solutions in the Jordan frame representation
of $f(R)$ theory. It is worth mentioning here that the WH geometry
is discussed at the throat in case of standard power-law and
constructed $f(R)$ models whereas in case of exponential model, we
have analyzed the WH geometry near the throat. The presence of
repulsive gravity due to higher order curvature terms leads to
traversable WHs while the existence of ordinary matter confirms the
realistic nature of these traversable WH solutions in each case. For
$f(R)$ power-law model, the WH solutions are stable only for $n=2$
while stability is preserved for both exponential as well as
constructed $f(R)$ models. It would be interesting to analyze the
presence of these configurations in the Einstein frame where
contribution of scalar field may enhance the traversable nature as
it introduces anti-gravitational effects. On the other hand, the
back mapping of these frames may or may not ensure the presence of
stable as well as realistic traversable wormholes.
\vspace{0.25cm}
{\bf Acknowledgment}
\vspace{0.25cm}
This work has been supported by the \emph{Pakistan Academy of
Sciences Project}.
\section{Introduction}\label{sec:intro}
\tase\ are thermonuclear explosions of carbon and oxygen white dwarfs \citep[C+O WDs;][]{Nugent2011nat}.
They are the main source of iron-peak elements in the universe and crucial for measuring extragalactic distances, leading to the discovery of the accelerated cosmological expansion and dark energy \citep{Riess1998aj, perlmutter1999apj}. Despite their fundamental importance, the explosion mechanisms and progenitor systems of \tase\ remain a matter of extensive debate \citep{Maoz2014araa}. Understanding the origins of \tase, particularly of the ``normal'' events comprising $\sim$ 70\% of their population \citep{Blondin2012aj}, will not only clarify the endstates of stellar evolution but will be essential for improving cosmological distance measurements \citep[e.g.,][]{Wang2013sci, Zhang2021mnras}.
There is a broad consensus that \tase\ explode as a result of mass transfer in binary progenitor systems.
However, uncertainty remains about whether the binary companion involved in normal \tas\ explosions is an evolved non-degenerate star \citep[``single-degenerate scenario'';][]{Whelan&Iben1973apj} or another WD \citep[``double-degenerate scenario'';][]{Iben&Tutukov1984apjs}.
In the latter case, it is unclear whether the explosion would be triggered during WD-WD accretion \citep{Guillochon2010apj, Pakmor2013apj}, or in a complete merger \citep{Pakmor2012apj} or head-on collision of the two WDs \citep{Kushnir2013apj}.
The ``core-degenerate scenario'' is a third hypothesis where \tase\ result from mergers of WDs with the cores of asymptotic giant branch stars \citep{Aznar2015mnras}.
The mechanisms responsible for triggering normal \tas\ explosions are also unclear.
Normal \tase\ have long been theorized to be ignited by nuclear burning in the core of a WD when accretion or merger causes its mass to reach the critical Chandrasekhar limit \citep[$\sim$ 1.4 \msol;][]{Mazzali2007sci}.
Alternatively, recent theoretical studies have suggested that the detonation of a thin helium layer on the surface of a sub-Chandrasekhar-mass WD can subsequently ignite carbon in the core, producing normal \tase\ via a ``helium-shell double-detonation'' \citep[He-shell DDet;][]{Polin2019apj, Townsley2019apj, Shen2021apjl}.
One scenario that has been thought to result in a He-shell DDet is the detonation of He-rich material on the WD surface during a double-degenerate accretion process, called ``dynamically-driven double-degenerate double-detonation'' (or \d6s), recently supported by the identification of hyper-velocity Galactic WDs interpreted to be survivors of the scenario \citep{Shen2018apj, Bauer2021apj}.
Multiple explosion and progenitor channels may ultimately contribute to the observed population of \tase.
In particular, the normal events consist
of two spectroscopically distinct subtypes \citep{Parrent2014apss}: ``Core-Normal/Normal-Velocity'' (CN/NV); and ``Broad-Line/High-Velocity'' (BL/HV).
Events from the two subtypes are nearly indistinguishable in their light curves, with similar peak brightness and decline rate, but differ in their observed spectroscopic features \citep{Branch2006pasp} and ejecta velocities \citep{Wang2009apj}.
Different explosion mechanisms---such as
Chandrasekhar- and sub-Chandrasekhar-mass
explosions \citep[e.g.,][]{Polin2019apj, Li2021apj}---have been suggested
to explain the differences between
the two subtypes.
Alternatively, unified origins for the observed spectroscopic diversity in normal events have also been proposed, usually involving an asymmetric explosion mechanism \citep[e.g.,][]{Maeda2010natur}.
Early (e.g., $\lesssim$ 5 days post-explosion) light curves of \tase\ can shed light on their origins by providing critical constraints on the binary companion, circumstellar material (CSM) from accretion or merger, and the distribution of elements in the outer ejecta.
Theoretical models have predicted that the collision between the SN ejecta and a binary companion \citep{Kasen2010apj} or CSM \citep{Piro&Morozova2016apj} can shock heat the ejecta, producing blue excess emission.
Multiple explosion processes, including sub-sonic mixing \citep{Reinecke2002aa} and detonation of surface helium \citep{Polin2019apj, Maeda2018apj}, have also been predicted to lead to over-densities of radioactive iron-peak (Fe-peak) elements, including \ni56, \fe52, and \chr48,
in the shallow layers of the ejecta, leading to excess emission and short-lived color evolution associated with Fe spectroscopic features.
Such color and light curve features within $\sim$ 5 days have been reported in many \tase\ \citep{Jiang2017nat, De2019apj, Hosseinzadeh2017apj, Marion2016apj, Miller2018apj, Dimitriadis2019apj, Bulla2020apj, Jiang2018apj, Stritzinger2018apj, Ni2022natas, Deckers2022mnras, Hosseinzadeh2022arxiv}, though there have been recent debates about their interpretation in some normal events \citep[e.g.,][]{Sand2018apj, Shappee2018apj, Ashall2022arxiv}.
However, for the vast majority of \tase\ observed between 1 and 5 days, their light curves match simple power-law profiles in this phase \citep{Bloom2012apj, Foley2012apj, Olling2015nat, Cartier2017mnras, Holmbo2019aa, Yao2019apj, Moon2021apj}.
Such power-law evolution is consistent with an origin that both (1) has a small non-degenerate or WD companion and (2) leads to a \ni56\ distribution in the ejecta that is largely centrally concentrated and
monotonically declining towards the surface.
Another way to critically constrain the explosion mechanism and progenitor system is
to investigate spectral features of \tase\ from
the so-called ``nebular phase" of
$\gtrsim$ 200 days since $B$-band maximum.
Differences in the Doppler shifts of [\feii] and [\niii] emission lines observed in normal \tase\ have been attributed to the viewing angle effects of asymmetric explosion mechanisms \citep{Maeda2010apj, Maeda2010natur, Li2021apj}.
Meanwhile, strong [\caii] emission has been associated with incomplete nuclear burning in the core of sub-Chandrasekhar-mass explosions \citep{Polin2021apj}.
For the progenitor, the presence of H$\alpha$ and \hei\ emission by stripped/ablated H and He from a non-degenerate companion
has been predicted by several recent studies as evidence supporting single-degeneracy \citep{Mattila2005aap, Botyanszki2018apj, Dessart2020aa}.
Such H$\alpha$ emission has been observed in the nebular-phase spectra of a few peculiar events \citep[e.g.,][]{Kollmeier2019mnras},
indicating that they may be from single-degenerate progenitors.
However, a systematic search
for H and He emission in the nebular-phase spectra of 110 \tase\
has failed to find such emission in most of them ($\gtrsim$ 90\%),
disfavouring the single-degenerate scenario as the primary contributor to the \tas\ population \citep{Tucker2020mnras}.
[\oi] emission has also been detected in the nebular-phase spectra
of two peculiar events and interpreted to be evidence for
the presence of swept-up unburned O from a double-degenerate merger \citep{Kromer2013apj, Taubenberger2013apj}.
The identification of such [\oi] emission
has yet to be made for normal events.
SN~2018aoz\ is a recent normal \tas\ detected 1.0 hours after its estimated epoch of first light\footnote{First light refers to the epoch when photons first emerge from the ejecta, which may follow the explosion by a few hours to days in \tase\ depending on the photon diffusion process \citep{Piro&Nakar2013apj, Piro&Nakar2014apj}. In SN~2018aoz, the epoch of explosion is estimated to be MJD 58205.6 $\pm$ 0.7 based on the observed evolution of photospheric velocity (Paper I).} (MJD~58206.00), the earliest detection of a \tas\ to date \citep[][Paper I hereafter]{Ni2022natas}.
Photometric and spectroscopic observations were obtained
over the ensuing period of $\sim$ 450 days,
including light curves of the first 12 hours from the very low brightness of $-$10.5 absolute AB magnitude.
This data set provides the unique opportunity to study
the entire evolution of a normal \tas\
from 1 hour after first light to the nebular phase.
In Paper I, we presented the discovery of two new infant-phase features
of \tas\ evolution during the first 1.0--12.4 hours:
a brief $B$-band plateau---which disappears after $\sim$ 0.5 days---and
simultaneous excess emission in the $V$ and $i$ bands.
The subsequent evolution of SN~2018aoz\ until $\sim$ 110 days is consistent with that of typical normal \tase, with a power-law light curve rise, peak $B$-band absolute magnitude of $-$19.32 mag and \dm15\ of 1.12 mag.
The two infant-phase features result in a rapid reddening of the \bv\ color, which has been associated with line-blanket absorption by an over-density of Fe-peak elements in the outer 1\% of the SN-ejected mass (Paper I). This has important implications for the normal \tas\ explosion mechanism, as such an ejecta composition is primarily predicted by asymmetric Chandrasekhar-mass explosions and He-shell DDets.
Although SN~2018aoz\ has provided critical information
on the distribution of surface Fe-peak elements, its evolution to the nebular phase has yet to be explored, and additional insights into its origin can be gained by (1) placing constraints on the nature of its companion star, (2) examining the physical implications of a range of possible power sources for the infant-phase excess emission, and (3) assessing its precise subtype among normal \tase.
In this paper, we present new photometric and spectroscopic
observations of the nebular phase of SN~2018aoz\ in Section~\ref{sec:obs}, as well as detailed modelling
and interpretation of key features to understand its origin
and evolution as follows.
In Section~\ref{sec:lc}, we describe the evolution of the light curves and spectra of SN~2018aoz, including comparisons of them to those of other \tase\ in order to establish its spectroscopic subtype.
We assess the range of companion stars that are compatible with the luminosity of the observed early light curve in Section~\ref{sec:kasan}.
Sections~\ref{sec:early}, \ref{sec:nebea}, and \ref{sec:hedd} describe our modelling of the infant-phase excess emission, analyses of the nebular-phase observations,
and comparisons to the predictions of He-shell DDet simulations, respectively.
In Section~\ref{sec:orig},
we discuss the implications of our results for the progenitor system and explosion mechanism of SN~2018aoz, the nature of its infant-phase excess emission, and the origins of normal \tase.
We summarize our results and conclude in Section~\ref{sec:conc}.
\section{Observations and Data} \label{sec:obs}
SN~2018aoz\ was identified by both the KMTNet Supernova Program \citep[KSP;][]{Moon2016spie, Afsariardchi2019apj, Moon2021apj} and Distance Less Than 40 Mpc Survey \citep[DLT40;][]{Tartaglia2018}.
The earliest detection of the SN with signal-to-noise (S/N) $>$ 3 was made by KSP in the $B$ band at 00h54m on 29 March 2018 Universal Time (UT), or MJD 58206.0378.
DLT40 detected the source 1.1 days later in the $r$ band and reported the discovery of SN~2018aoz\ at 07h25m on 2 April 2018 UT \citep{Sand2018atel}.
The first spectrum obtained by the Las Cumbres Observatory \citep{Brown2013pasp} at 09h25m on 2 April 2018 UT subsequently classified the source as a \tas\ \citep{Hosseinzadeh2018tnscr}.
The discovery triggered an extensive campaign of ground- and space-based photometric observations as well as spectroscopic follow-up, obtaining observations in UV to NIR wavebands.
The early observations of SN~2018aoz\ obtained until $\sim$ 110 days since first light were presented in Paper I.
Here, we present additional KSP photometry continuing from $>$ 250 days since first light, covering the nebular phase (Section~\ref{sec:neblc}), as well as new nebular-phase spectroscopy of the SN (Section~\ref{sec:nebspec}).
\begin{deluxetable*}{cccc}
\tabletypesize{\footnotesize}
\tablecolumns{4}
\tablewidth{0.99\textwidth}
\tablecaption{Nebular-phase magnitudes of SN~2018aoz}
\tablehead{
\colhead{Time [MJD]} & \colhead{Band} & \colhead{Magnitude$\rm ^a$ [mag]} & \colhead{Error [mag]}
}
\startdata
58471.68652 & $B$ & 19.195 & 0.065 \\
58471.68830 & $B$ & 19.267 & 0.061 \\
58471.68979 & $V$ & 19.562 & 0.128 \\
58471.69130 & $i$ & 19.430 & 0.177 \\
58472.68728 & $B$ & 19.302 & 0.074 \\
58472.68862 & $V$ & 19.374 & 0.092 \\
58472.69015 & $i$ & 19.585 & 0.285 \\
58473.68083 & $B$ & 19.346 & 0.082 \\
58473.68233 & $V$ & 19.554 & 0.096 \\
58473.68382 & $i$ & 19.420 & 0.177 \\
\enddata
\tablenotetext{{\rm a}}{The $BV$-band magnitudes are in the Vega system,
while the $i$-band magnitudes are in the AB system (see text).}
\tablecomments{Sample of the observed magnitudes of SN~2018aoz\ during its nebular phase.
The entire observed magnitudes of SN~2018aoz\ are available in the electronic edition.}
\end{deluxetable*}
\label{tab:neblc}
\subsection{Nebular-Phase Photometry} \label{sec:neblc}
We used the three 1.6m telescopes of the Korea Microlensing Telescope Network \citep[KMTNet;][]{Kim2016jkas} in Chile, South Africa, and Australia to conduct photometric observations of SN~2018aoz\ during its nebular phase, $>$ 200 days since $B$-band maximum.
Each telescope of the network is equipped with an identical wide-field CCD camera with 4 square degree field-of-view and multiple filters in the visible band.
Between 2018 December and 2019 June, we conducted high-cadence monitoring of a 2\degr$\times$ 2\degr\ field containing the source, obtaining $\sim$ 500 images of the field with 60-s exposure times at a mean cadence of $\sim$ 9 hours in each of the $BVI$ bands.
The $B$, $V$, and $I$ bands were observed nearly simultaneously at each epoch with a time difference of $\sim$ 2 minutes between adjacent filters.
The typical limiting magnitude for a point source in these images is 21$-$22 mag at a S/N of 3.
Note that the source was not observed between July and November due to its proximity to the Sun.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{NebLCFig210602.pdf}
\caption{The dereddened $BVi$-band light curves of SN~2018aoz\ (colored circles) relative to the epoch of $B$-band maximum light in rest frame covering its nebular phase are compared to those of SN~2011fe \citep[dashed lines;][]{Munari2013newa,Tsvetkov2013coska} that have been scaled so that they match the \mb\ and \dm15\ values of SN~2018aoz.
The errorbars represent the 1-$\sigma$ uncertainty level in this figure and all of the following. The vertical grey lines mark the four epochs with nebular-phase spectroscopy (see Table~\ref{tab:nebspec} and Figure~\ref{fig:nebspec}).
\label{fig:neblc}}
\end{figure}
We performed point-spread function (PSF) photometry of SN~2018aoz\ using the SuperNova Analysis Package (SNAP\footnote{\url{https://github.com/niyuanqi/SNAP}}), a custom python-based pipeline for supernova photometry and analysis.
A local PSF was obtained by fitting a Moffat function \citep{Moffat1969aap, Trujillo2001mnras} to nearby reference stars and simultaneously fitting sky background emission with a first-order polynomial function.
The fluxes of SN~2018aoz\ in the $B$ and $V$ bands were obtained by fitting the local PSF near the source location.
Paper I reported the presence of a faint background source $\sim$ 0\farcs8 north-west of the position of SN~2018aoz\ with apparent magnitudes of 24.90$\,\pm\,$0.27, 24.02$\,\pm\,$0.20, and 22.39$\,\pm\,$0.08 mag in the $BVi$ bands, respectively, that mainly affects the $i$ band.
Therefore, we measure the $i$-band SN flux in the nebular phase by using a Kron aperture containing both sources and subtracting the known flux of the background source from the combined flux in the aperture.
Since the brightness of the background source is significantly fainter than that of the 1-$\sigma$ noise level in $B$- and $V$-band images ($\lesssim$ 23.4 mag) and the SN at any epoch ($<$ 22.0 mag for $B$ band and $<$ 22.1 mag for $V$ band), it is incapable of meaningfully affecting the PSF photometry of the SN in those bands.
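The PSF photometry described above can be sketched with a circular Moffat fit. This is an illustrative reconstruction, not the actual SNAP implementation: the star stamp is synthetic, and the first-order polynomial background is simplified to a constant sky term.

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat_2d(xy, amp, x0, y0, alpha, beta, sky):
    # Circular Moffat profile plus a constant sky level; the pipeline
    # described in the text fits a first-order polynomial background,
    # simplified here to a constant for brevity.
    x, y = xy
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp * (1.0 + r2 / alpha ** 2) ** (-beta) + sky

# Synthetic 21x21-pixel stamp of a reference star (illustrative values).
yy, xx = np.mgrid[0:21, 0:21].astype(float)
img = moffat_2d((xx, yy), 500.0, 10.3, 9.8, 3.0, 2.5, 20.0)

# Fit the local PSF model to the stamp from a rough initial guess.
p0 = [400.0, 10.0, 10.0, 2.5, 2.0, 0.0]
popt, _ = curve_fit(moffat_2d, (xx.ravel(), yy.ravel()), img.ravel(), p0=p0)

# Seeing FWHM implied by the fitted Moffat parameters.
fwhm = 2.0 * popt[3] * np.sqrt(2.0 ** (1.0 / popt[4]) - 1.0)
```

The fitted centroid and amplitude then give the source position and flux; in practice the fit is repeated over several nearby reference stars to constrain the PSF shape before measuring the SN itself.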
Photometric flux calibration was performed against 6--9 standard reference stars within 10\arcmin\ of the source from the AAVSO Photometric All-Sky Survey\footnote{\url{https://www.aavso.org/apass}} database whose apparent magnitudes are in the range of 15--16 mag. The observations in the $BVI$ KMTNet filters were calibrated against reference stars in the nearest AAVSO filters (Johnson~$BV$, and Sloan $i'$; or $BVi$).
For the AAVSO reference stars, their KSP $BVI$ instrumental magnitudes were transformed to standard $BVi$ filters using the equations from \citet{Park2017apj}.
For the SN, since its nebular-phase spectra are significantly different from the AAVSO standard stars used to derive the \citet{Park2017apj} equations, we applied linearly interpolated spectrophotometric corrections \citep[S-corrections;][]{Stritzinger2002aj}.
These are photometric corrections between instrument and standard filters derived by performing synthetic photometry on spectra obtained at approximately the same epoch.
The calibrated and S-corrected nebular-phase photometry is presented in Table~\ref{tab:neblc} and shown in Figure~\ref{fig:neblc}.
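The synthetic-photometry step behind an S-correction can be sketched as follows. The spectrum and the two filter responses here are toy shapes (not the real KMTNet/AAVSO curves), and zero points are dropped since they cancel in the difference when both magnitudes are tied to the same reference system.

```python
import numpy as np

def synth_mag(wave, flux, resp_wave, resp):
    # Photon-weighted synthetic magnitude through a filter response,
    # up to a zero point (which cancels in the S-correction).
    s = np.interp(wave, resp_wave, resp, left=0.0, right=0.0)
    dw = wave[1] - wave[0]                  # uniform grid assumed
    num = np.sum(flux * s * wave) * dw
    den = np.sum(s * wave) * dw
    return -2.5 * np.log10(num / den)

# Toy SN-like spectrum (declining with wavelength) and two slightly
# offset V-like responses standing in for instrument/standard filters.
wave = np.linspace(4000.0, 7000.0, 3001)
flux = 1e-15 * (wave / 5500.0) ** -2.0      # erg/s/cm^2/A, toy slope
inst_w = np.linspace(4900.0, 6100.0, 121)
inst_r = np.exp(-0.5 * ((inst_w - 5500.0) / 300.0) ** 2)
std_w = inst_w + 50.0                       # standard filter shifted redward
std_r = inst_r.copy()

# S-correction: added to the instrumental magnitude to reach the
# standard system; here the redward shift on a declining spectrum
# makes the correction small and positive.
s_corr = synth_mag(wave, flux, std_w, std_r) - synth_mag(wave, flux, inst_w, inst_r)
```

In the actual procedure, the corrections are evaluated on the observed SN spectra taken near each photometric epoch and linearly interpolated in time between spectra.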
\subsection{Nebular-Phase Spectroscopy} \label{sec:nebspec}
We obtained four low-resolution nebular-phase optical spectra of SN~2018aoz\ at 259.4, 277.3, 296.4, and 382.5 days since $B$-band maximum with a combination of the Gemini Multi-Object Spectrograph \citep[GMOS;][]{Hook2004} on Gemini-South, the Low Resolution Imaging Spectrometer \citep[LRIS;][]{Oke1995} on Keck, and the Low Dispersion Survey Spectrograph-3 \citep[LDSS-3;][]{Allington1994} on Magellan-Clay.
The spectroscopic observations are summarized in Table~\ref{tab:nebspec}.
\begin{deluxetable*}{lccccc}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablewidth{0.99\textwidth}
\tablecaption{Nebular-phase spectroscopy of SN~2018aoz}
\tablehead{
\colhead{Date (UT)} & \colhead{Phase} & \colhead{Telescope} & \colhead{Instrument} & \colhead{R} & \colhead{Wavelength [\AA]}
}
\startdata
2018 December 30.28 & $+$259.4 & Gemini S & GMOS & 1690 & 4050--10000\\
2019 January 17.31 & $+$277.3 & Gemini S & GMOS & 1690 & 5000--10000\\
2019 February 5.53 & $+$296.4 & Keck & LRIS & 2000 & 3200--10000 \\
2019 May 3.17 & $+$382.5 & Magellan-Clay & LDSS-3 & 860 & 4250--10000 \\
\enddata
\tablecomments{Phase is observer frame days since $B$-band maximum light (MJD 58221.41).}
\end{deluxetable*}
\label{tab:nebspec}
The spectrum from the Magellan Telescope was reduced using standard tasks within IRAF. Bias and flat-field corrections were performed on the two-dimensional frames, one-dimensional spectra were extracted, and wavelength calibration was performed using calibration lamps taken immediately after target exposures. Flux calibration and telluric corrections were performed with a set of custom IDL scripts \citep{Matheson2008,Blondin2012aj} using spectrophotometric standards observed on the same night. The GMOS spectra were reduced in a similar manner, but using the custom \texttt{gmos} suite of IRAF tasks. Initial flux calibration for GMOS spectra was performed using the IRAF tasks \texttt{standard} and \texttt{calibrate}, and final scaling was performed based on matching to the observed $V$-band photometry from the same epochs. The Keck-LRIS spectrum was reduced using LPipe, a fully-automated IDL pipeline for the LRIS \citep{Perley2019}.
The reduced and dereddened nebular-phase spectra are shown in Figure~\ref{fig:nebspec}.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{NebLogFig200915.pdf}
\caption{The dereddened spectra of SN~2018aoz\ obtained from four epochs during the nebular phase as labelled on the right side of the figure in days since $B$-band maximum are shown. The spectra are vertically offset for display clarity. The vertical shaded colored regions show the locations of the broad emission features of [\feiii] (red), [\coiii] (green), as well as [\feii] and [\niii] (blue) that are visible.
While the [\feiii] and [\coiii] features are produced by a blend of several broad emission lines, the [\feii] and [\niii] features are thought to be primarily produced by single transitions of [\feii]~$\lambda$7155~\AA\ and [\niii]~$\lambda$7378~\AA\ (vertical solid lines), respectively \citep{Maeda2010apj}.
The dotted vertical lines show the expected locations of narrow emission lines associated with non-degenerate companions and circumstellar material (CSM) in \tase: H$\alpha$, \hei, and [\oi]. None of these narrow emission lines are detected.
\label{fig:nebspec}}
\end{figure}
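The Doppler shifts of the [\feii]~$\lambda$7155 and [\niii]~$\lambda$7378 lines referenced in the caption above are obtained by comparing a measured centroid with the laboratory wavelength after removing the host redshift. A minimal sketch, using a hypothetical observed centroid:

```python
C_KMS = 299792.458  # speed of light, km/s
Z_HOST = 0.0058     # NGC 3923 redshift adopted in Section 2.3

def line_velocity(lam_obs, lam_rest, z_host=Z_HOST):
    # Correct the observed centroid to the host rest frame, then
    # convert the residual shift to a (non-relativistic) velocity;
    # positive values correspond to redshifted (receding) emission.
    lam_rf = lam_obs / (1.0 + z_host)
    return C_KMS * (lam_rf - lam_rest) / lam_rest

# Hypothetical [Ni II] 7378 A centroid observed at 7430 A:
v_ni = line_velocity(7430.0, 7378.0)
```

In practice the centroid itself comes from fitting the broad, blended emission feature, which dominates the uncertainty of the inferred velocity.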
\subsection{Host Galaxy, Distance, and Reddening}\label{sec:extn}
SN~2018aoz\ is located at (RA, decl.) = ($\rm 11^h51^m01^s.80$, $-28\degr44\arcmin38\farcs5$) (J2000), in the halo of its host galaxy NGC~3923 (Paper I).
We adopt the host galaxy redshift of $z$ = 0.0058, distance modulus (DM) of 31.75 $\pm$ 0.08~mag
based on normal \tas\ template fitting,
and extinction correction of $E(B-V)\sim$ 0.09 mag, consistent with the observed Na~I~D lines in the spectrum of SN~2018aoz\ as well as the expected Galactic extinction towards the source (Paper I).
The extinction towards the source is also confirmed by fitting the observed color evolution of SN~2018aoz\ during the Lira law phase as detailed in Appendix~\ref{apx:color}.
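The adopted distance modulus and reddening translate directly into a luminosity distance and band extinctions. A quick numerical sketch, where $R_V$ = 3.1 is an assumed standard Milky Way value rather than a quantity quoted in the text:

```python
DM = 31.75    # distance modulus, mag (Section 2.3)
EBV = 0.09    # adopted E(B-V), mag
R_V = 3.1     # assumed standard total-to-selective extinction ratio

# Luminosity distance implied by the distance modulus.
d_pc = 10.0 ** ((DM + 5.0) / 5.0)       # parsec
d_mpc = d_pc / 1.0e6                    # ~22.4 Mpc

# Band extinctions for a standard reddening law (A_B - A_V = E(B-V)).
A_V = R_V * EBV
A_B = (R_V + 1.0) * EBV

def absolute_mag(m_app, a_band):
    # Dereddened absolute magnitude from an apparent magnitude.
    return m_app - a_band - DM
```

These conversions underlie all dereddened magnitudes and luminosities quoted in the following sections.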
\section{Early Evolution and Classification}\label{sec:lc}
\subsection{Early Light Curves and the Characteristics of the Infant-Phase Excess Emission}\label{sec:gaus}
\begin{figure}[t!]
\epsscale{\scl}
\plotone{PowerLawGaussianRed200915.pdf}
\caption{(Left) The dereddened early $BVi$-band (top to bottom) forced photometry light curves (circles) of SN~2018aoz\ up to 40\% of maximum light in rest frame are compared to the best-fit power-law $+$ Gaussian (dashed curves) and its power-law component alone (solid curves).
The inset zooms in on the infant phase ($\lesssim$ 1 day).
(Right) The $\sigma$-scaled residual of each data point for the best-fit power-law $+$ Gaussian (open circles) and its power-law component alone (closed circles) are shown over the same time interval as the left panel.
\label{fig:gaus}}
\end{figure}
The infant-phase light curves of SN~2018aoz\ contain the lowest-luminosity signals detected from an early \tas\ to date, reaching a depth of $-$10.5 absolute AB magnitude.
In Paper I, we reported that the dominant source of its early luminosity appears to follow a power-law evolution.
The observed $BVi$-band light curves over 1--7 days since first light (or up to $\sim$ 40\% of peak brightness) follow $L_{\nu} \sim t^{\alpha_{\nu}}$, consistent with the majority of other \tase\ that have been observed in these phases \citep{Nugent2011nat,Foley2012apj, Olling2015nat, Cartier2017mnras, Holmbo2019aa, Dimitriadis2019apj, Miller2020apj, Moon2021apj}.
The measured power-law indices for SN~2018aoz, $\alpha_{(B,V,i)}$ = (2.24, 1.99, 2.26), are also close to the Type Ia population average \citep[$\alpha$ = 2.01;][]{Miller2020apj}.
In principle, a power-law rise is expected for SNe powered by a smooth, centrally-concentrated \ni56\ distribution with a power-law-like tail towards the ejecta surface, where $\alpha$ depends on the steepness of the tail \citep{Piro&Nakar2014apj}.
However, in addition to this component, we also found evidence for excess emission over the power-law in the $V$ and $i$ bands during the first 0--1 days since first light.
This infant-phase excess emission is present during the same epochs as the $B$-band plateau, which has been attributed to line-blanket absorption by an over-density of Fe-peak elements near the ejecta surface.
While, in Paper I, we highlighted that excess radioactive heating by those same Fe-peak elements is one possible explanation for the excess emission, a range of other possible explanations and their implications remain to be thoroughly explored.
Here, we characterize the properties and statistical significance of the infant-phase excess emission in SN~2018aoz.
Figure~\ref{fig:gaus} (left panels) shows the results of fitting the early light curves of SN~2018aoz\ during 0--7 days with a power-law $+$ excess emission, where the infant-phase excess emission is modelled by a Gaussian in each of the $V$ and $i$ bands.
(Note that the $B$-band light curve during 0--1 days is excluded from the fit since it is affected by $B$-band suppression.)
The $V$- and $i$-band infant-phase light curves share the same Gaussian central epoch, $\mu$, and width, $\sigma$,
but each Gaussian is scaled independently.
In each of the $BVi$ bands, the power-law component has the form $L_{\nu} \propto (t - t_{\rm PL})^{\alpha_{\nu}}$, where the onset of the power-law $t_{\rm PL}$ is shared among the bands while the power-law indices $\alpha_{\nu}$ and scalings are independent parameters in each band.
The best-fit power-law $+$ excess emission (dashed curves in Figure~\ref{fig:gaus}) is obtained with $\mu$ = 0.25 days since first light and $\sigma$ = 0.17 days for the Gaussian component, and $t_{\rm PL}$ = 0.19 days since first light and $\alpha_{(B,V,i)}$ = (2.1, 1.8, 2.1) for the power-law component (represented by the solid curves), which appears to adequately fit the observed early light curves (minus the excluded $B$-band light curve during 0--1 days).
The reduced $\chi$-squared statistic (\chisqr) of 4.0 for this fit is significantly better than the one obtained by fitting a pure power-law to the same light curves (\chisqr\ = 9.2; Paper I), indicating that the $Vi$-band excess emission component is required to explain the observed light curves.
The statistical significance of the $Vi$-band excess emission
is displayed in Figure~\ref{fig:gaus} (right panels), showing the $\sigma$-scaled residual of the best-fit power-law $+$ excess emission (open circles) compared to that of the power-law component alone (closed circles).
The residuals of the power-law component appear to be dominated by the data points from the infant phase.
Note that this is consistent with the \chisqr\ analysis of the power-law fitting in Paper I, where the \chisqr\ error from fitting a pure power-law (\chisqr\ = 9.2) was found to come predominantly from the infant-phase data points (with $\Delta$\chisqr\ = 6.0) rather than from all subsequent data points (with $\Delta$\chisqr\ = 3.2).
Meanwhile, the power-law $+$ excess emission model significantly reduces the residuals of the $Vi$-band data points from the infant phase, which now provide similar residuals as the data points from later phases.
Thus, the early light curves of SN~2018aoz\ appear to require the distinct excess emission component peaked between $\sim$ 0.08 and 0.42 days since first light.
During this phase, excess emission is the dominant component of the SN light curve, emitting a total of $\sim$ 2.4 $\times$ 10$^{-9}$ ergs~cm$^{-2}$ into the $V$ and $i$ bands along the line of sight (or $\sim$ 1.4 $\times$ 10$^{44}$ ergs, assuming spherically symmetric emission).
In Section~\ref{sec:early}, we examine potential mechanisms that can produce the observed excess emission.
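The light-curve model described above (a power-law rise from $t_{\rm PL}$ plus a Gaussian excess) can be sketched for a single band with scipy. This fits synthetic data built from the best-fit shape parameters quoted in the text, with arbitrary amplitudes, rather than the actual photometry; the fluence-to-energy conversion at the end reproduces the quoted isotropic estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl_plus_gauss(t, A, t_pl, alpha, G, mu, sigma):
    # Power-law rise from t_pl plus a Gaussian excess-emission bump.
    dt = np.clip(t - t_pl, 0.0, None)
    return A * dt ** alpha + G * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic single-band light curve using the quoted shape parameters
# (t_PL = 0.19 d, alpha_V ~ 1.8, mu = 0.25 d, sigma = 0.17 d).
t = np.linspace(0.05, 7.0, 300)
L = pl_plus_gauss(t, 1.0, 0.19, 1.8, 0.6, 0.25, 0.17)

# Recover the parameters from a rough initial guess.
popt, _ = curve_fit(pl_plus_gauss, t, L, p0=[0.8, 0.1, 2.0, 0.4, 0.3, 0.2])

# Isotropic energy implied by the quoted V+i fluence of 2.4e-9 erg/cm^2
# at the adopted distance modulus of 31.75 mag (~22.4 Mpc).
d_cm = 10.0 ** ((31.75 + 5.0) / 5.0) * 3.0857e18
E_iso = 4.0 * np.pi * d_cm ** 2 * 2.4e-9
```

In the joint fit of Section 3.1, $t_{\rm PL}$, $\mu$, and $\sigma$ are shared across bands while the per-band indices and amplitudes vary; the single-band version above keeps the structure of the model without that bookkeeping.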
\subsection{Color Evolution}\label{sec:color}
\begin{figure}[t!]
\epsscale{\scl}
\plotone{SNoptcolors200915.pdf}
\caption{The observed (non-dereddened) optical colors of SN~2018aoz\ (black circles) in \bv\ (top) and \vi\ (middle) aligned with its $i$-band light curve (bottom). The data are binned over 0.3 day intervals.
The vertical dotted lines mark the epochs of $-$14.4, $-$4.6, 10.4, and 26.0 days since peak where the optical colors undergo notable phase transitions in their evolution \citep{Moon2021apj}. The dashed line is the Lira law from \citet{Burns2014apj}. Note that a zoomed-in plot of the un-binned early color evolution focused on the early phases before $\sim -$8 days is shown in Figure~\ref{fig:shockmod}.
\label{fig:optcolor}}
\end{figure}
Figure~\ref{fig:optcolor} presents high-cadence KMTNet color curves of SN~2018aoz\ in \bv\ (top) and \vi\ (middle) aligned with its $i$-band light curve (bottom).
The observations, which are nearly simultaneous among different filters,
were linearly interpolated to the union of the two sets of epochs for each pair of adjacent filters during subtraction.
The four vertical dotted lines in the figure mark four epochs, $-$14.4, $-$4.6, 10.4, and 26.0 days since $B$-band maximum, where the colors undergo notable phase transitions in their evolution.
The \bv\ color evolution of SN~2018aoz\ prior to the first color transition epoch, corresponding to the infant phase, was discussed extensively in Paper I. The simultaneous plateau in the $B$-band and rapid rise in the $V$- and $i$-band light curves at these early times lead to an abrupt redward evolution wherein the \bv\ color changes by 1.5 mag between 1.0 and 12.4 hours after first light.
We refer to this redward color evolution as the ``natal red bump'' (NRB), hereafter,
while the ``NRB phase'' refers to the epochs ($\sim$ 1.0--12.4 hours) where the NRB is observed.
The NRB is also identifiable in the \vi\ color,
though with a smaller color change of 0.23 mag between 2.8 and 12.2 hours.
During the NRB phase, the average \bv\ color is $\sim$ 1.7 mag redder than the average \vi\ color, consistent with the presence of Fe absorption lines that selectively suppress the $B$ band.
The entire color evolution after the first color transition epoch is largely consistent with those of other normal \tase, and is best described in relation to the $i$-band light curve \citep{Moon2016spie} as detailed in Appendix~\ref{apx:color}.
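The interpolation-to-union-of-epochs step used to build the color curves can be sketched as follows. The toy light curves are linear and share a constant intrinsic color, so the recovered color is exactly flat; the data are illustrative, not the real photometry.

```python
import numpy as np

def color_curve(t1, m1, t2, m2):
    # Interpolate each band onto the union of the two epoch sets,
    # restricted to the overlap where interpolation is valid, and
    # return the epochs and the m1 - m2 color.
    t = np.union1d(t1, t2)
    lo, hi = max(t1.min(), t2.min()), min(t1.max(), t2.max())
    t = t[(t >= lo) & (t <= hi)]
    return t, np.interp(t, t1, m1) - np.interp(t, t2, m2)

# Toy example: linearly declining B and V sampled at offset epochs
# with a constant intrinsic B - V of 0.5 mag.
tb = np.array([0.0, 1.0, 2.0, 3.0])
mb = 18.0 - 0.6 * tb
tv = np.array([0.1, 1.1, 2.1, 2.9])
mv = 17.5 - 0.6 * tv
t, bv = color_curve(tb, mb, tv, mv)
```

For the nearly simultaneous KMTNet observations, the epoch offsets between adjacent filters are only $\sim$ 2 minutes, so the interpolation error in the colors is negligible compared with the photometric uncertainties.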
\begin{figure}[t!]
\epsscale{\scl}
\plotone{SNuvcolors210204.pdf}
\caption{The observed (non-dereddened) UV-optical colors of SN~2018aoz\ (black open circles) in $UVW1-V$ (top) and $UVW2-V$ (bottom) compared to those of SN~2017cbv \citep[red circles;][]{Hosseinzadeh2017apj}, SN~2011fe \citep[blue circles;][]{Brown2012apj}, the NUV-red/blue groups of normal \tase\ \citep[colored shaded areas,][]{Milne2013apj}, and the super-Chandrasekhar-mass \tase~2012dn and 2011aa \citep[green and blue squares, respectively;][]{Brown2014apj}. The red triangles represent the color of SN~2017cbv during its early excess emission.
\label{fig:uvcolor}}
\end{figure}
Figure~\ref{fig:uvcolor} presents the Swift UV-optical color curves of SN~2018aoz\ compared to those of other \tase.
The near-peak UV-optical colors of normal \tase\ have been grouped into two categories \citep{Milne2013apj}: ``NUV-red'' \citep[e.g., SN~2017cbv;][]{Hosseinzadeh2017apj} and ``NUV-blue'' \citep[e.g., SN~2011fe;][]{Brown2012apj}.
\citet{Brown2018at} initially reported that SN~2018aoz\ displayed blue UV-optical colors that are similar to \tase\ with super-Chandrasekhar ejecta masses \citep{Brown2014apj}.
Indeed, prior to peak, the colors are bluer than SN~2011fe, which is one of the bluest events in the NUV-blue group \citep{Brown2017apj}.
However, the subsequent evolution shows that, while lying on its blue edge, SN~2018aoz\ overall appears to follow the NUV-blue group.
In particular, the observed colors near peak are not as extreme as those of the super-Chandrasekhar-mass events---for instance, SNe 2012dn and 2011aa \citep[Figure~\ref{fig:uvcolor};][]{Brown2014apj}.
\subsection{Classification}\label{sec:class}
\begin{figure}[t!]
\epsscale{\scl}
\plotone{ClassSpecFig200915.pdf}
\caption{The dereddened spectrum of SN~2018aoz\ (black solid line; Paper I) taken 1.9 days before $B$-band maximum is
compared to spectra of \tase\ of different subtypes obtained at comparable epochs:
SN~1994D \citep{Meikle1996mnras} and
2011fe \citep{Parrent2012apj}
(Core-Normal subtype; blue);
SN~1981B \citep{Branch1983apj} and SN~2002dj \citep{Pignata2008mnras} (Broad-Line subtype; cyan);
SN~1992A \citep{Kirshner1993apj} (intermediate type; orange).
Observed absorption features of \caii, \fex, \sii, and \siii\ are labelled at the top of the panel.
\label{fig:class-spec}}
\end{figure}
We classify SN~2018aoz\ as a normal \tas\ that is intermediate
between the CN/NV and BL/HV subtypes based on its spectral properties as follows.
(Note that the SN light curves also support this classification as detailed in Appendix~\ref{apx:class}.)
Figure~\ref{fig:class-spec} compares the spectrum of SN~2018aoz\ taken 1.9 days before $B$-band maximum (Paper I) to spectra of normal \tase\ from the CN and BL subtypes of \citet{Branch2006pasp}: SNe~1994D \citep[CN subtype;][]{Meikle1996mnras},
2011fe \citep[CN subtype;][]{Parrent2012apj},
1981B \citep[BL subtype;][]{Branch1983apj}, 2002dj \citep[BL subtype;][]{Pignata2008mnras} and 1992A \citep[intermediate between CN and BL;][]{Kirshner1993apj} from a similar phase.
The spectrum of SN~2018aoz\ is consistent with
those of the other normal \tase\ overall, whereas the detailed shapes of key absorption features seem to be intermediate between CN and BL events.
Sharp \fex\ absorption features seen in the spectrum of SN~2018aoz\ are typical of CN events (e.g., SNe~1994D and 2011fe; blue spectra);
however, the \caii\ and \siii\ absorption features of SN~2018aoz\ are relatively strong, which is a step in the direction of typical BL events such as SNe~1981B and 2002dj (cyan spectra).
SN~1992A (orange spectrum), classified as marginally BL while bordering CN \citep{Branch2006pasp},
is the closest spectroscopic analogue to SN~2018aoz\ with nearly identical features
in the figure, suggesting that SN~2018aoz\ is also intermediate between CN and BL.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{ClassWangFig200915.pdf}
\caption{The observed velocity evolution of the \siii\ spectral feature of SN~2018aoz\ (orange circles; Paper I) is compared to the average velocity evolution for NV \tase\ \citep[blue solid curve with shaded 1-$\sigma$ error region;][]{Wang2009apj} as well as that of 91bg-like (green dashed curve) and 91T-like (red dashed curve) events in rest frame. SNe~2011fe \citep[blue triangles;][]{Pereira2013aa} and 2002dj \citep[cyan triangles;][]{Pignata2008mnras} are examples of NV and HV events, respectively, with similar \dm15\ as SN~2018aoz.
\label{fig:class-si}}
\end{figure}
\begin{figure}[t!]
\epsscale{\scl}
\plotone{ClassBlondinFig220513.pdf}
\caption{Comparison of pseudo-equivalent widths of \siii\ lines (Top) and \siii\ velocity (Bottom) of SN~2018aoz\ (orange star) with those of other \tase\ \citep{Blondin2012aj}. The colored symbols represent events from the four main subtypes of \tase: CN/NV (blue circles), BL/HV (cyan squares), Cool/91bg-like (green diamonds), and Shallow-Silicon/91T-like (red triangles). Note that CN/NV and BL/HV are both subsets of normal \tase, while Cool/91bg-like and Shallow-Silicon/91T-like are considered peculiar.
\label{fig:class-blondin}}
\end{figure}
Figure~\ref{fig:class-si} compares the evolution of the velocity
of the \siii~$\lambda$6355~\AA\ feature (``\siii\ velocity'', hereafter)
of SN~2018aoz\ (Paper I) to what is expected for the NV and HV subtypes of \tase.
Note that the NV events (e.g., SN~2011fe; blue triangles) are characterized by
near-peak \siii\ velocities of about $(10.6\pm0.4)\times 10^{3}$~km~s$^{-1}$,
while the HV events (e.g., SN~2002dj; cyan triangles) have higher near-peak \siii\
velocities in the range of $\sim (11.8-17.0) \times 10^{3}$ km~s$^{-1}$ \citep[vertical cyan interval;][]{Wang2009apj}.
The NV and HV subtypes largely overlap with CN and BL, respectively \citep{Parrent2014apss}.
The \siii\ velocity evolution of SN~2018aoz\ (orange circles)
during early ($<-$5 days since $B$-band maximum) and late ($>$ 15 days)
evolutionary phases appears to follow the NV subtype (blue curve with shaded area).
Around the peak between $\sim$ $-$5 and 15 days, however, its velocity becomes
significantly higher than the NV population and approaches those of HV events.
The peak \siii\ velocity of $(11.4\pm0.1)\times 10^{3}$~km~s$^{-1}$ in SN~2018aoz\
is about 2-$\sigma$ higher than the NV population average and
near the lower boundary of the HV subtype.
The expected \siii\ velocity evolutions of the 91bg-like (green dashed curve)
and 91T-like (red dashed curve) events, the two most common peculiar types of \tase,
are clearly different from that of SN~2018aoz\ in Figure~\ref{fig:class-si} during late ($\gtrsim$ 15 days since $B$-band maximum) evolutionary phases.
Thus, the \siii\ velocity evolution of SN~2018aoz\ also supports its intermediate
nature between NV/CN and HV/BL,
while it is clearly incompatible with those of the prototypical peculiar subtypes.
The intermediate nature of SN~2018aoz\ between the normal subtypes of CN and BL is confirmed by the pseudo-equivalent widths (pEWs) of \siii\ lines from its spectrum taken 1.9 days prior to $B$-band maximum.
We measure pEWs of 20.22~\AA\ and 106.4~\AA\ for the \siii~5972~\AA\ and 6355~\AA\ lines, respectively, using the method of \citet{Branch2006pasp}.
Figure~\ref{fig:class-blondin} compares the peak \siii\ pEWs and \siii\ velocity of SN~2018aoz\ to those of a sample of \tase\ \citep{Blondin2012aj} from the CN/NV (blue circles) and BL/HV (cyan squares) subtypes, as well as the peculiar 91T-like (or ``Shallow-Silicon''; red triangles) and 91bg-like (or ``Cool''; green diamonds) subtypes.
The parameters of SN~2018aoz\ are located at the boundary between the CN/NV and BL/HV subtypes of normal \tase\ in both panels.
The \siii\ pEWs of SN~2018aoz\ (top panel) are consistent with BL events with \siii~6355~\AA\ pEW $>$ 105~\AA\ \citep{Blondin2012aj}, while the \siii\ velocity of SN~2018aoz\ (bottom panel) is consistent with NV events with \siii\ velocity $< 11.8\times 10^{3}$~km~s$^{-1}$, leading to the intermediate classification between the BL (/HV) and NV (/CN) subtypes.
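The two quantitative criteria above can be summarized in a minimal sketch. This snippet is our illustration, not the paper's pipeline (the function name and two-label structure are ours); the thresholds are the \siii~6355~\AA\ pEW boundary of \citet{Blondin2012aj} and the near-peak \siii\ velocity boundary of \citet{Wang2009apj} quoted in the text:

```python
# Minimal sketch (ours, not the paper's code) of the two subtype criteria
# quoted in the text, applied to a normal SN Ia.
PEW_6355_BL_MIN = 105.0   # Angstrom; BL boundary (Blondin et al. 2012)
V_SI_HV_MIN = 11.8e3      # km/s; HV boundary (Wang et al. 2009)

def branch_wang_class(pew_6355, v_si):
    """Return (Branch pEW-based label, Wang velocity-based label)."""
    pew_label = "BL" if pew_6355 > PEW_6355_BL_MIN else "CN"
    vel_label = "HV" if v_si >= V_SI_HV_MIN else "NV"
    return pew_label, vel_label

# SN 2018aoz: pEW(Si II 6355) = 106.4 A, near-peak v(Si II) = 11.4e3 km/s
print(branch_wang_class(106.4, 11.4e3))  # → ('BL', 'NV')
```

The mixed labels returned for SN~2018aoz\ express its intermediate classification: BL-like in pEW but NV-like in velocity.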
\section{Early Light Curve Constraints on the Companion}\label{sec:kasan}
Early observations of \tase\ have been used to search for excess emission due to ejecta collision with companions \citep[e.g.,][]{Bloom2012apj, Olling2015nat, Marion2016apj, Hosseinzadeh2017apj, Hosseinzadeh2022arxiv, Moon2021apj}.
With early light curves starting from the low brightness of $-$10.5 absolute AB magnitude, the observations of SN~2018aoz\ probe, for the first time, the luminosities expected not only for non-degenerate companions but also for WD companions. They therefore provide a unique opportunity to search for such emission and place strict constraints on the nature of the companion star.
Here, we compare the light curves of SN~2018aoz\ with the analytic ejecta-companion interaction model of \citet[][``K10'' hereafter]{Kasen2010apj} that has been widely adopted for this type of analysis.
The luminosity ($\Gamma$) and effective temperature of the interaction emission in the model depend on the size of the companion (related to the binary separation distance in Roche overflow), as well as the opacity, mass, and kinetic energy of the ejecta.
When observed with a viewing angle $\theta$, the luminosity is $\Gamma\times S(\theta)$, where
\begin{equation}
S(\theta)\simeq 0.982 \times \exp{[-(\theta/99.7)^2]} + 0.018
\label{eq:kasangle}
\end{equation}
\noindent
describes the angle dependence of the observed luminosity \citep{Olling2015nat}.
Note that the emission is strongest when the progenitor system is observed from the side of the companion star (0\degr; $S=1$) and it is weakest from the side of the progenitor star (180\degr; $S = 0.056$).
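As a numerical sanity check of Equation~(\ref{eq:kasangle}), the short standalone Python snippet below (our illustration, not part of the analysis) evaluates the angle factor at the two extreme viewing angles:

```python
import math

def s_factor(theta_deg):
    """Viewing-angle factor S(theta) of the observed ejecta-companion
    interaction luminosity (Olling et al. 2015); theta_deg in degrees."""
    return 0.982 * math.exp(-(theta_deg / 99.7) ** 2) + 0.018

print(s_factor(0.0))    # ≈ 1.000: brightest, seen from the companion side
print(s_factor(180.0))  # ≈ 0.056: faintest, seen from the opposite side
```

These two limits reproduce the $S = 1$ and $S = 0.056$ values quoted in the text.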
\subsection{Comparison to Fiducial Models}\label{sec:kasfid}
Figure~\ref{fig:kasmcVi} (left panels) compares the early $Vri$-band light curves
of SN~2018aoz\ (black filled circles)
during 0--3 days since first light with what is predicted by the K10 model
for three cases of non-degenerate binary companions at $\theta$ = 0\degr\
in Roche overflow: 1$\,$\msol\ red giant (1RG; red solid curve), 6$\,$\msol\
main sequence subgiant (6MS; blue solid curve),
and 2$\,$\msol\ main sequence subgiant (2MS; indigo solid curve).
In the K10 model, we adopt the electron scattering opacity of $\kappa$ = 0.2~cm$^2$~g$^{-1}$ for H-poor \tas\ ejecta and the ejecta mass and kinetic energy of 0.80~\msol\ and 0.63 $\times$ 10$^{51}$~ergs, respectively, for SN~2018aoz\ (Paper I).
For all three cases, the predicted emission is brighter than the observed luminosity, disallowing those configurations for the progenitor system under the K10 model.
The $B$-band light curve during 0--1 days was excluded from our comparisons because it is affected by $B$-band suppression while the K10 model assumes a pure blackbody spectral energy distribution.
Note that the values of ejecta mass and kinetic energy we adopted
are the lower limits of the ranges---$\sim$ 0.8--1.0~\msol\ and $\sim$ (0.6--0.8) $\times$ 10$^{51}$ ergs, respectively---that have
been considered for SN~2018aoz\ (Paper I).
Since larger ejecta mass and kinetic energy both lead to brighter emission
in the K10 model, the constraints provided in Figure~\ref{fig:kasmcVi} against the companion are conservative with respect to ejecta mass and kinetic energy.
(While $B$-band light curves have usually been used in the search for ejecta-companion interaction emission, we show in Appendix~\ref{apx:kasBsup} that model comparisons with the suppressed $B$-band light curve in the infant phase over-constrain the companion in the case of SN~2018aoz.)
The K10 model is based on the assumption of local thermodynamic equilibrium (LTE)
between the shock-heated ejecta and its radiated emission.
According to \citet[][``KS15'' hereafter]{Kutsuna2015pasj},
the matter-radiation coupling may not be strong enough to reach LTE due to the low
gas density in the ejecta-companion interaction, indicating that the K10 model may over-estimate
the emission temperature (and luminosity).
Figure~\ref{fig:kasmcVi} (left panels) also compares the observed light curves with the predictions of the two cases of companions from KS15:
1RG (red dotted curve) and 1MS (1~\msol\ main sequence subgiant companion; magenta dotted curve),
both at $\theta$ = 0\degr\ and in Roche overflow.
While the 1RG case clearly over-predicts the observed emission at $\theta$ = 0\degr,
the case of 1MS is at a very similar brightness to what
is observed during 0--0.5 days.
We note, however, that KS15 excludes free-free emission
and Compton scattering---two processes known to accelerate
equilibrium \citep{Weaver1976apjs, Katz2010apj}---in their estimation of
the strength of matter-radiation coupling, likely leading to
under-prediction of emission temperature and luminosity.
Furthermore, no underlying radioactive SN emission is included
in the luminosity calculations by KS15 (and also by K10).
Therefore, it is highly likely that the $\theta$ = 0\degr\ 1MS case is also disallowed
given the close similarity between its prediction and the observed brightness,
though it is difficult to precisely quantify
the effects of excluding the two radiation processes
and the underlying SN emission in the predicted luminosities.
For the predicted luminosities of ejecta-companion
interaction alone, the KS15 and K10 models may therefore be regarded as providing lower and upper bounds, respectively.
\subsection{Companion Constraints from Generalized Modeling}\label{sec:kasgen}
\begin{figure*}[t!]
\plotone{KasenExpViRegFig210204.pdf}
\caption{(Left) The dereddened early $Vri$-band (from top to bottom) light curves of SN~2018aoz\ (black circles) within 3 days after first light in rest frame are compared to ejecta-companion interaction models with 0\degr\ viewing angle. The models are of 2MS (indigo solid curves), 6MS (blue solid curves), and 1RG (red solid curves) companions from \citet{Kasen2010apj} as well as 1MS (magenta dotted curves) and 1RG (red dotted curves) companions from \citet{Kutsuna2015pasj}. The black inverted arrows are 3-$\sigma$ detection limits. (Right) The parameter space of separation distances and viewing angles of possible progenitor systems is shown. The vertical dot-dashed lines divide the x-axis (binary separation distance) into WD and He star, Main Sequence and Subgiant, and Red Giant regimes.
The parameters in the shaded area underneath the solid and dashed black curves are ruled out at 84.1$\%$ and 97.7$\%$ confidence levels, respectively, by the early light curves of SN~2018aoz\ and the model predictions of \citet{Kasen2010apj}.
The magenta, indigo, blue, and red stars at the bottom of the panel show the parameters for the correspondingly colored models in the left panels.
The green shaded region shows the best-fit separation distances obtained by fitting power-law $+$ \citet{Kasen2010apj} ejecta-companion interaction models for a set of viewing angles between 0--180\degr (see Section~\ref{sec:kasfit}).
\label{fig:kasmcVi}}
\end{figure*}
We generalize our analysis using the K10 model to allow for ejecta-companion
interactions from all possible viewing angles between 0\degr\ and 180\degr\
and binary separation distances in the range of 10$^{9}$--10$^{14}$~cm,
following the methods of \citet{Moon2021apj}.
The range of separation distances corresponds to those of companions
as small as WDs and as large as red supergiants at the Roche limit.
The right panel in Figure~\ref{fig:kasmcVi} shows the extent of this parameter space, where the separation distances are divided into the regimes of WD and He star, Main sequence and Subgiant (MS), and Red Giant (RG)
with two vertical dot-dashed lines approximating the lower bounds
for the MS \citep{Boetticher2017aa} and late-phase RG cases \citep{Seeds1984}.
By comparing the models represented by pairs of these parameters (i.e., viewing angle and separation distance) with the observed luminosities and pre-detection upper limits, we obtain the solid and dashed curves in the figure,
representing the lower limits of acceptable viewing angles as a function of separation distance (i.e., the area under the curve is ruled out) for the 84.1$\%$ and 97.7$\%$
confidence levels, respectively.\footnote{Note that 84.1$\%$ and 97.7$\%$ correspond to the 1- and 2-$\sigma$ levels, respectively, of a Gaussian distribution in one direction.}
The confidence levels account for photometry errors as well as those of the model
parameters, including redshift, explosion epoch, ejecta mass,
and ejecta kinetic energy,
estimated using bootstrapping under the assumption of Gaussian error distributions.
Note that there are additional systematic uncertainties in the model comparison, as mentioned in Section~\ref{sec:kasfid} above, that are not included in our analysis:
those associated with (1) the adoption of lower limits for the ejecta mass and kinetic energy of SN~2018aoz\ and (2) the exclusion of the radioactive SN emission.
However, both of these uncertainties only allow for stronger
constraints against the companion (see Section~\ref{sec:kasfid}).
Based on the comparison in Figure~\ref{fig:kasmcVi} (right panel),
a low-mass ($\lesssim$ few solar masses) main sequence star or subgiant at a high ($\gtrsim$ 80\degr) viewing angle, a He star, or a WD are the most likely binary companions of SN~2018aoz.
\emph{Note that these results are independent of whether ejecta-companion interaction emission has really been detected in SN~2018aoz.}
Separation distances from $\sim$ 5 $\times$ 10$^{11}$ to $\sim$ 10$^{14}$~cm, corresponding to companions larger than 2MS, are disallowed (at 84.1\% confidence level) for most ($\gtrsim$ 80\% of) viewing angles because the expected luminosity from their ejecta-companion interaction emission would exceed the observed
luminosity of SN~2018aoz\ in the first three days for $\theta <$ 140--175\degr.
Thus, under the K10 model, if SN~2018aoz\ had a large main sequence or red giant companion, it would need to have been located within a small range of viewing angles behind the SN.
The ejecta-companion interaction luminosity can be significantly lower than the K10 model predicts if LTE is not reached, as mentioned above, with KS15 providing a lower bound.
However, if we adopt the KS15 model for the 1RG case, the luminosity of 1RG (red dotted curve in Figure~\ref{fig:kasmcVi}, left panels) would be similar to that of 6MS in the K10 model (blue solid curve), for which $\sim$ 90\% of viewing angles are still ruled out (Figure~\ref{fig:kasmcVi}, right panel).
Therefore, the presence of a red giant companion is very unlikely even if the LTE assumption of K10 is not satisfied.
Although there is a small region in the upper-right corner (i.e., large separation and viewing angle)
of Figure~\ref{fig:kasmcVi} that is not directly ruled out by the comparison, the corresponding separation distances imply short-lived companions (e.g., red supergiants), which are very unlikely to be found in the halo region of an elliptical galaxy---where SN~2018aoz\ is located---due to the lack of recent star formation (see Section~\ref{sec:prog}).
\section{Infant-Phase Excess Emission Modelling}\label{sec:early}
SN~2018aoz\ shows significant excess emission over the power-law rise during 0--1 days since first light (Section~\ref{sec:gaus}).
An over-density of \ni56\ near the ejecta surface can produce excess thermal emission in this phase (Paper I),
but other possibilities---such as ejecta shock interaction---and their subsequent implications for the progenitor system remain unexplored.
We examine the origin of the infant-phase excess emission by fitting the early light curves of SN~2018aoz\
using a model combining the underlying SN emission (which is represented by a power-law; see Section~\ref{sec:gaus}) and excess emission.
We compare the fits obtained using models of four conceivable mechanisms for the excess emission: surface \ni56\ heating (Section~\ref{sec:nipl}), ejecta-companion interaction (Section~\ref{sec:kasfit}), ejecta-CSM interaction (Section~\ref{sec:csm}), and shock breakout (Section~\ref{sec:sbo}).
Note that the characteristic ejecta velocity of SN~2018aoz, estimated using its observed peak \siii\ velocity of 11400~km~s$^{-1}$ (Paper I), broadly constrains the possible sources of infant-phase emission from ejecta shock interactions to be within $\lesssim$ 10$^{14}$~cm of the progenitor, which includes only the binary companion,
nearby CSM, and the shock-heated progenitor surface.
For all of the four excess emission mechanisms, we adopt blackbody spectral energy distributions, because they are based on thermal processes.
We fit the light curves of SN~2018aoz\ during 0--8 days, but exclude the $B$-band light curve during 0--1 days since it is affected by $B$-band suppression and incompatible with a pure blackbody process (Paper I).
The results obtained using the four models are compared below,
followed by detailed descriptions of each model and the fitting process in the subsequent subsections.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{EarlyModelFig210204.pdf}
\caption{(Left) The dereddened \bv\ (top) and \vi\ (bottom) colors of SN~2018aoz\ in rest frame (circles) are compared with what is expected from power-law (PL) $+$ three models of early excess emission in \tase: (1) surface \ni56\ heating (blue dotted curves; Section~\ref{sec:nipl}), (2) ejecta-companion interaction (green dashed curves; Section~\ref{sec:kasfit}),
and (3) ejecta-CSM interaction (red dot-dashed curves; Section~\ref{sec:csm}).
The vertical grey line marks the epoch when the first spectrum was taken (4.4 days since first light or $-$11.0 days since peak). (Right) The dereddened $BVi$-band (from top to bottom) light curves of SN~2018aoz\ in rest frame are compared with those predicted by the same models from the left panels. The inverted arrows are detection limits at a S/N of 3.
\label{fig:shockmod}}
\end{figure}
Figure~\ref{fig:shockmod} compares the dereddened colors,
$(B-V)_0$ (top left panel) and $(V-i)_0$ (bottom left panel),
and $BVi$ light curves (right panels) of SN~2018aoz\
with the best-fit model predictions:
blue dotted curves for surface \ni56\ heating,
green dashed curves for ejecta-companion interaction,
and red dot-dashed curves for ejecta-CSM interaction.
(Shock breakout is not shown because it is too faint
to be compared for any reasonable set of model input parameters; see Section~\ref{sec:sbo}.)
The fit quality is not significantly different for the three best-fits, which have similar \chisqr\ values of 3.4, 3.5, and 3.2, respectively.
As seen in the figure, all three models appear to reproduce the observed $Vi$-band light curves
of SN~2018aoz\ as well as the \vi\ color curve similarly well; however, these blackbody excess emission models
all over-predict the $B$-band luminosity by $\sim$ 0.5--1.0 mag in 0.1--0.5 days,
leading to bluer \bv\ color than observed during the period.
Note that the lower infant-phase $B$-band luminosity compared to the $V$ and $i$ bands in SN~2018aoz, which is incompatible with pure blackbody emission,
has been attributed to $B$-band suppression caused by surface Fe-peak elements (Paper I).
As detailed in the following subsections, the best-fit parameters of the surface \ni56\ heating, ejecta-companion interaction, and ejecta-CSM interaction models are all compatible with viable physical processes that can produce the observed infant-phase excess emission in SN~2018aoz.
\subsection{Radioactive Heating by Excess Surface \ni56}\label{sec:nipl}
We first fit the observed early light curves of SN~2018aoz\ using the combination of power-law emission (for the underlying SN emission) and the emission from a \ni56 shell distribution (for surface \ni56\ heating).
For the power-law component, we use the power-law described in Section~\ref{sec:gaus} with onset $t_{\rm PL}$ and indices $\alpha_{(B,V,i)}$.
For the infant-phase \ni56\ shell emission, we developed the following model in three steps:
\begin{enumerate}
\item We adopt the luminosity calculation for \ni56-powered SNe from \citet[][PN14 hereafter]{Piro&Nakar2014apj} based on \ni56\ decay and photon diffusion. In the model, the SN luminosity is determined by the \ni56\ distribution and the ``diffusion depth'', defined as the deepest layer in the ejecta that is visible via photon diffusion.
For the evolution of the diffusion depth, we adopt the following equation from Paper I (based on Equation 1 in PN14) describing the fractional mass of the ejecta ($\Delta M/M_{\rm ej}$) in the layers above the diffusion depth at $t-t_0$ days since explosion:
\begin{equation}
\frac{\Delta M}{M_{\rm ej}} \approx 1.3
\left(\frac{t-t_0}{\tau_m}\right)^{1.76}
\label{eq:mdiff}
\end{equation}
\begin{equation}
\tau_m = \left( \frac{\kappa}{13.8\, c}\right)^{1/2} \left( \frac{6\,M_{\rm ej}^3}{5 \,E_{\rm ej}}\right)^{1/4}
\label{eq:taum}
\end{equation}
where $\tau_m$ is the geometric mean of the diffusion and expansion timescales \citep{Arnett1982apj, Moon2021apj} related to the ejecta mass, ejecta kinetic energy, and opacity ($M_{\rm ej}$, $E_{\rm ej}$, and $\kappa$, respectively).
We use the value of $\tau_m$ = 9.51 $\pm$ 0.26 days measured from the bolometric light curve of SN~2018aoz\ (Paper I).
Note that the explosion epoch, $t_0$, can be different than the onset of the power-law component of the light curve, $t_{\rm PL}$, due to the possibility of a few-hours to days delay \citep[or ``dark phase'';][]{Piro&Nakar2013apj, Piro&Nakar2014apj} before the diffusion depth reaches the underlying main distribution of centrally-concentrated \ni56\ in the ejecta, which is responsible for the power-law rise (Section~\ref{sec:gaus}).
\item The \ni56\ distribution in PN14 (described by a logistic function; see Equation 11 therein) is replaced by the \ni56\ shell distribution with the following functional form:
\begin{equation}
X_{56}(t) = \begin{cases}
X_s, & t - t_0 < t_s\\
0\ , & t - t_0 > t_s
\label{eq:nishell}
\end{cases}
\end{equation}
where $X_{56}(t)$ is the mass fraction of \ni56\ at the diffusion depth at $t - t_0$ days since explosion and $t_s$ is the time when the diffusion depth reaches the inner radius of the shell. (Note that the time coordinate $t$ is related to the radial mass coordinate of the diffusion depth by Equation~\ref{eq:mdiff}).
In the fitting below, we represent the distribution using two physical parameters, $t_s$ and $M_s$, where $M_s = X_s \Delta M(t_s)$ is equal to the total mass of \ni56\ above the diffusion depth at time $t_s$ in the ejecta.
\item The \ni56\ shell emission, originating from radioactive heating of the high-density SN ejecta in infant phases, is assumed to have a blackbody spectral energy distribution (i.e., we assume that the emission is fully thermalized and that gamma-rays are fully trapped) in order to fit the multi-band SN light curves.
We estimate the blackbody temperature, or ``color temperature'' ($T_c$), of the \ni56\ shell emission using the following equation \citep[based on Equation 12 from][]{Piro&Nakar2013apj}:
\begin{equation}
T_c^4 = \frac{L \tau_s}{4\pi \sigma_{SB} r_{ph}^2}\ \ , \ \ \tau_s = \tau_c \left(\frac{r_{ph}}{r_c}\right)^2
\label{eq:colorT}
\end{equation}
where $L$ is the luminosity of the \ni56\ shell emission; $r_{ph}$ is the radius of the photosphere; and $\tau_s$ is a parameter combining the radius, $r_c$ (or ``color depth''), in the ejecta where the \ni56\ radioactive emission is thermalized and the optical depth, $\tau_c$, at the color depth.
We estimate $r_{ph}$ based on a polytropic ($n=3$) ejecta profile expected for an exploding WD undergoing homologous expansion \citep{Piro&Nakar2013apj}, and assume $\tau_s$ is roughly constant over the $\sim$ 1-day infant phase.
Note that $\tau_s$ is expected to be close to unity since both $\tau_c$ and $r_{ph}/r_{c}$ are typically not much larger than unity \citep{Piro&Nakar2013apj}, so this assumption can at most contribute an error of order unity.
\end{enumerate}
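The relations in steps 1 and 3 above can be cross-checked numerically. The following standalone Python sketch is our illustration, not the fitting code itself; the luminosity and photospheric radius fed to the color-temperature helper are arbitrary example values, not fit results. It evaluates Equations~(\ref{eq:taum}), (\ref{eq:mdiff}), and (\ref{eq:colorT}) using the SN~2018aoz\ parameters quoted in the text:

```python
import math

MSUN = 1.989e33        # g
C_LIGHT = 2.998e10     # cm/s
SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
DAY = 86400.0          # s

def tau_m(kappa, m_ej, e_ej):
    """Geometric-mean diffusion/expansion timescale, in days.
    kappa in cm^2/g, m_ej in g, e_ej in erg."""
    return (math.sqrt(kappa / (13.8 * C_LIGHT))
            * (6.0 * m_ej**3 / (5.0 * e_ej)) ** 0.25) / DAY

def frac_above_diffusion_depth(t_days, tau_m_days):
    """Fractional ejecta mass above the diffusion depth at t days
    since explosion."""
    return 1.3 * (t_days / tau_m_days) ** 1.76

def color_temperature(lum, tau_s, r_ph):
    """Blackbody color temperature T_c of the Ni56-shell emission.
    lum in erg/s, r_ph in cm."""
    return (lum * tau_s / (4.0 * math.pi * SIGMA_SB * r_ph**2)) ** 0.25

# Arnett-model parameters of SN 2018aoz (Paper I); photospheric-phase
# opacity approximated as kappa ~ 0.1 cm^2/g:
print(tau_m(0.1, 0.80 * MSUN, 0.63e51))        # ~9.5 d (measured: 9.51)
# Diffusion depth 0.47 days after explosion (t_s - t_0 of the best fit),
# using the measured tau_m:
print(frac_above_diffusion_depth(0.47, 9.51))  # ~0.0065: outer ~0.65%
# Illustrative (assumed) L = 1e40 erg/s, tau_s = 18.2, r_ph = 1e14 cm:
print(color_temperature(1e40, 18.2, 1e14))     # ~1.3e4 K
```

The first two numbers recover the measured $\tau_m$ and the ``outer 0.65\%'' shell depth quoted for the best fit below, confirming the internal consistency of the adopted relations.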
Fitting the $BVi$-band light curves of SN~2018aoz\ up to 8 days since first light, excluding the $B$-band light curve during 0--1 days, we obtain the best-fit power-law $+$ surface \ni56\ heating model with \chisqr\ = 3.4.
The parameters are $t_0$ = $-$0.17 days, $t_s$ = 0.30 days, $t_{\rm PL}$ = 0.38 days, all since the epoch of first light (MJD~58206.00) in rest frame, $M_s$ = 8.3 $\times$ 10$^{-4}$~\msol, $\tau_s$ = 18.2, and $\alpha_{B,V,i}$ = (2.03, 1.74, 2.08).
The best-fit (blue dotted curves in Figure~\ref{fig:shockmod}) appears to provide an excellent match to the observed infant-phase excess emission of SN~2018aoz\ in the $V$ and $i$ bands with an 8.3 $\times$ 10$^{-4}$~\msol\ shell of excess \ni56\ in the outer 0.65\% of the SN-ejected mass.
If this is the origin of the infant-phase excess emission, then the difference between the best-fit $t_0$ and $t_{\rm PL}$ parameters indicates the presence of a $\sim$ 0.55-day dark phase in SN~2018aoz, similar in length to the one reported in the normal \tas~2011fe \citep[$\sim$ 0.5 days;][]{Piro&Nakar2014apj}.
We also note that the best-fit indices $\alpha_{B,V,i}$ are slightly lower than those obtained in Section~\ref{sec:gaus}, though they are still consistent with the $\alpha\sim$ 2 expectation for power-law rise that has been found in other normal \tase.
The best-fit mass and location of surface \ni56\ obtained above, 8.3 $\times$ 10$^{-4}$~\msol\ of \ni56\ in the outer 0.65\% of the SN-ejected mass, are larger and deeper in the ejecta, respectively, than 1.8 $\times$ 10$^{-4}$~\msol\ of \ni56\ in the outer 0.31\% of the SN-ejected mass obtained in Paper I by fitting the infant-phase excess emission of SN~2018aoz\ with a purely \ni56-powered blackbody model (as opposed to the power-law $+$ surface \ni56\ model).
These numbers are broadly comparable with the location and quantity of Fe-peak elements required to explain the $B$-band suppression associated with the NRB, $\sim$ 10$^{-3}$~\msol\ in the outer $\sim$ 1\% of the SN ejecta (Paper I).
However, radiative transfer simulations that account for both line formation and incomplete gamma-ray trapping are required to determine if any single distribution of Fe-peak elements can reproduce both the infant-phase excess emission and NRB features in SN~2018aoz\ simultaneously.
We discuss this in the context of thin-shell He-shell DDet simulations in Section~\ref{sec:hedd}, below.
\subsection{Ejecta Interaction with the Companion}\label{sec:kasfit}
We model the
infant-phase emission of SN~2018aoz\ with a combination
of radioactive SN emission and ejecta-companion interaction emission---the former
with a power-law (Section~\ref{sec:gaus}) and the latter with the K10 model (Section~\ref{sec:kasan}).
For the K10 model, we use the electron scattering opacity of $\kappa$ = 0.2~cm$^2$~g$^{-1}$ for H-poor \tas\ ejecta following Section~\ref{sec:kasan}.
Fitting the observed $BVi$ light curves during 0--8 days, excluding the $B$-band light curve during 0--1 days, we obtain the green shaded region in Figure~\ref{fig:kasmcVi} (right panel) showing
the distribution of the best-fit companion separation distances ($a$) for viewing angles ($\theta$) between 0\degr\ and 180\degr.
The upper and lower boundaries of the region were obtained using two cases of relatively small and large ejecta masses and kinetic energies, respectively, derived by modelling the light curves of SN~2018aoz\ (Paper I) as follows:
(1) $M_{\rm ej}$ = 0.80~\msol\ and $E_{\rm ej}$ = 0.63 $\times$ 10$^{51}$~ergs based on the \citet{Arnett1982apj} model; and (2) $M_{\rm ej}$ = 1.05~\msol\ based on He-shell DDet simulations, corresponding to $E_{\rm ej}$ = 0.82 $\times$ 10$^{51}$~ergs for the characteristic ejecta velocity of 11400 km~s$^{-1}$.
\begin{deluxetable*}{ccll}
\tabletypesize{\footnotesize}
\tablecolumns{4}
\tablewidth{0.99\textwidth}
\tablecaption{Ejecta-companion interaction model fit parameters}
\tablehead{
\colhead{$M_{\rm ej}$ and $E_{\rm ej}$} & \colhead{Viewing angle} & \colhead{Fit parameters} & \colhead{\chisqr}
}
\startdata
0.80~\msol\ and 0.63 $\times$ 10$^{51}$ ergs{$\rm ^a$} & $\theta$ = 0\degr\ & $a$ = 1.0 $\times$ 10$^{10}$~cm & 3.48 (lowest) \\
& & $t_{0}$ = $-$0.01 days & \\
& & $t_{PL}$ = 0.27 days & \\
& & $\alpha_{B, V, i}$ = (2.07, 1.77, 2.10) & \\
\hline
& $\theta$ = 180\degr & $a$ = 1.5 $\times$ 10$^{12}$~cm & 3.54 \\
& & $t_{0}$ = $-$0.23 days & \\
& & $t_{PL}$ = 0.37 days & \\
& & $\alpha_{B, V, i}$ = (2.05, 1.75, 2.08) & \\
\hline
1.05~\msol\ and 0.82 $\times$ 10$^{51}$ ergs{$\rm ^b$} & $\theta$ = 0\degr\ & $a$ = 6.8 $\times$ 10$^{9}$~cm & 3.47 (lowest) \\
& & $t_{0}$ = 0.00 days & \\
& & $t_{PL}$ = 0.25 days & \\
& & $\alpha_{B, V, i}$ = (2.08, 1.78, 2.11) & \\
\hline
& $\theta$ = 180\degr & $a$ = 8.8 $\times$ 10$^{11}$~cm & 3.52 \\
& & $t_{0}$ = $-$0.19 days & \\
& & $t_{PL}$ = 0.37 days & \\
& & $\alpha_{B, V, i}$ = (2.04, 1.74, 2.08) & \\
\enddata
\tablenotetext{{\rm a}}{From applying the \citet{Arnett1982apj} model to the light curves of SN~2018aoz, as typically done for radioactively-powered SNe \citep[e.g.,][]{Li2019apj, Drout2016apj}, approximating the \ni56-dominated opacity in the photospheric phase as $\kappa\sim$ 0.1~cm$^2$~g$^{-1}$ (Paper I).}
\tablenotetext{{\rm b}}{From He-shell DDet simulations (Paper I).}
\tablecomments{$t_0$ and $t_{\rm PL}$ are in days since the epoch of first light (MJD~58206.00) in rest frame.}
\label{tab:kas}
\end{deluxetable*}
Table~\ref{tab:kas} shows the range of fit parameters obtained using different $M_{\rm ej}$, $E_{\rm ej}$, and $\theta$.
The best-fit model light and color curves with $\theta$ = 0\degr, which are nearly identical for the two cases of $M_{\rm ej}$ and $E_{\rm ej}$ (green dashed curves in Figure~\ref{fig:shockmod}), provide a very similar goodness of fit to those of SN~2018aoz\ as the best-fit surface \ni56\ heating model (Section~\ref{sec:nipl}).
The differences between the onsets of the K10 and power-law components in the models (= $t_0$ and $t_{\rm PL}$, respectively) range from $\sim$ 0.3 days for the lowest-\chisqr\ case of $\theta$ = 0\degr\ to $\sim$ 0.6 days for the case of $\theta$ = 180\degr.
These differences are consistent with $t_{PL} - t_0$ of 0.54 days obtained with the surface \ni56\ heating model, pointing to an approximately half-day post-explosion delay (or dark phase) in SN~2018aoz\ for the diffusion of the radioactive SN emission responsible for the power-law rise.
As seen in the table, the change in \chisqr\ between $\theta$ = 0\degr\ and 180\degr\ is less than 2\%, indicating that the goodness of fit of the ejecta-companion interaction model does not change significantly with separation distance ($a$) ranging from (0.7--1.0) $\times$ 10$^{10}$~cm for $\theta$ = 0\degr\ to (0.9--1.5) $\times$ 10$^{12}$~cm for $\theta$ = 180\degr.
These separation distances correspond to two types of companions that appear to be nearly equally compatible with the observed infant-phase excess emission of SN~2018aoz\ under the K10 model: (1) a low-mass ($\lesssim$ few solar mass) main sequence star or subgiant at $\gtrsim$ 80\degr\ viewing angle; or (2) a WD or He-star at $\lesssim$ 80\degr\ viewing angle.
\subsection{Ejecta Interaction with Circumstellar Material}\label{sec:csm}
The interaction between the SN ejecta and CSM near the progenitor can produce excess emission with properties dependent on the mass and spatial distribution of the CSM.
We model the early light curves of SN~2018aoz\ as a combination of power-law (for the underlying SN emission) and ejecta-CSM interaction emission (for the infant-phase excess emission), adopting the model of \citet[][P15, hereafter]{Piro2015apj} for the latter.
Here, we describe the CSM model and geometries (Section~\ref{sec:csmmod}) considered, and then discuss the results in the context of both H-poor (Section~\ref{sec:merger}) and H-rich CSM (Section~\ref{sec:accrete}).
\subsubsection{Model Description}\label{sec:csmmod}
The observed interaction emission is largely determined by properties of the outermost CSM layer \citep{Piro2015apj, Nakar2014apj}, represented as a uniform-density and spherically-symmetric envelope with mass $M_{\rm env}$ and radius $R_{\rm env}$ in the P15 model.
The luminosity ($L_{\rm CSM}$) is provided by the following equation determined by $M_{\rm env}$, $R_{\rm env}$, ejecta mass ($M_{\rm ej}$), ejecta kinetic energy ($E_{\rm ej}$), and opacity ($\kappa$):
\begin{equation}
L_{\rm CSM}(t) = \frac{t_{\rm env} E_{\rm env}}{t_p^2} \exp{\left[- \frac{t(t+2t_{\rm env})}{2t_p^2}\right]}
\label{eq:Lcsm}
\end{equation}
where $t_{\rm env} \propto E_{\rm ej}^{-0.5} M_{\rm ej}^{0.35} M_{\rm env}^{0.15} R_{\rm env}$ is the envelope expansion timescale post-explosion, $E_{\rm env} \propto E_{\rm ej} M_{\rm ej}^{0.7} M_{\rm env}^{-0.7}$ is the total energy transferred from the ejecta to the envelope, $t_p \propto \kappa^{0.5} E_{\rm ej}^{-0.25} M_{\rm ej}^{0.17} M_{\rm env}^{0.57}$ is the emission peak epoch, and $t$ is time in seconds since the explosion epoch ($t_0$).
Adopting a blackbody for the spectral energy distribution of the interaction emission, the blackbody temperature follows
\begin{equation}
T_{\rm CSM}(t) = \left[ \frac{L_{\rm CSM}(t)}{4\pi \sigma_{\rm SB} (R_{\rm env} + \varv_{\rm env} t)^2} \right]^{1/4}
\label{eq:Tcsm}
\end{equation}
where $\varv_{\rm env} \propto E_{\rm ej}^{-0.5} M_{\rm ej}^{0.35} M_{\rm env}^{0.15}$ is the envelope expansion velocity post-explosion.
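The P15 evolution in Equations~\ref{eq:Lcsm} and \ref{eq:Tcsm} can be evaluated directly once the timescale and energy normalizations are fixed; since only the proportionalities of $t_{\rm env}$, $E_{\rm env}$, $t_p$, and $\varv_{\rm env}$ are quoted here, the following minimal sketch takes these quantities as inputs rather than deriving them:

```python
import math

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def csm_interaction_lightcurve(t, t_env, E_env, t_p, R_env, v_env):
    """P15 ejecta-CSM interaction luminosity [erg/s] and blackbody
    temperature [K].

    t, t_env, t_p in seconds; E_env in erg; R_env in cm; v_env in cm/s.
    The normalizations of t_env, E_env, t_p, and v_env are inputs here
    because only their proportionalities are quoted in the text.
    """
    L = (t_env * E_env / t_p**2) * math.exp(-t * (t + 2.0 * t_env) / (2.0 * t_p**2))
    T = (L / (4.0 * math.pi * SIGMA_SB * (R_env + v_env * t)**2)) ** 0.25
    return L, T
```

At $t = 0$ the luminosity reduces to $t_{\rm env}E_{\rm env}/t_p^2$, and both $L_{\rm CSM}$ and $T_{\rm CSM}$ decline monotonically thereafter.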
We also consider ejecta interaction with CSM distributed in an equatorially-concentrated disk or torus as follows.
Such CSM may divert the flow of SN ejecta away from the equatorial plane, obscuring the ejecta-CSM interaction from viewing angles ($\theta$) above and below the equatorial plane ($\theta$ = 0\degr).
Note that similar obscuration is expected for ejecta-companion interaction due to the diverted flow of SN ejecta around the companion \citep{Kasen2010apj}, resulting in attenuated brightness of the interaction as described by Equation~\ref{eq:kasangle} for viewing angles $\theta$ away from the binary axis towards the companion ($\theta$ = 0\degr\ in Equation~\ref{eq:kasangle}).
We approximate the attenuation of ejecta-CSM interaction brightness for a viewing angle $\theta$ above or below the equatorial plane as similar to that of ejecta-companion interaction for the same angle $\theta$ away from the binary axis towards the companion for a distant observer, assuming similar flow of SN ejecta away from the interaction region.
The brightness of ejecta interaction with equatorially-concentrated CSM would thus be $L_{\rm CSM}\times S(\theta)$ for $\theta$ ranging from the equatorial plane (0\degr) to the poles (90\degr), using $S(\theta)$ from Equation~\ref{eq:kasangle}.
$S(0\degr)$ = 1.0 means the brightness along the equatorial plane is identical to the case of spherically symmetric CSM, $L_{\rm CSM}$, while $\theta$ = 90\degr\ provides the minimum observed brightness of $L_{\rm CSM}\times 0.45$.
\subsubsection{Circumstellar Material from a WD or He-star Companion}\label{sec:merger}
\begin{deluxetable*}{cclcl}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablewidth{0.99\textwidth}
\tablecaption{Ejecta-CSM interaction model fit parameters for H-poor CSM ($\kappa$ = 0.2~cm$^2$~g$^{-1}$)}
\tablehead{
\colhead{$M_{\rm ej}$ and $E_{\rm ej}${$\rm ^a$}} & \colhead{Viewing angle} & \colhead{Fit parameters} & \colhead{\chisqr} & \colhead{CSM properties}
}
\startdata
0.80~\msol\ and 0.63 $\times$ 10$^{51}$ ergs & $\theta$ = 0\degr & $M_{\rm env}$ = 2.0 $\times$ 10$^{-3}$~\msol\ & 3.29 & $\rho_{\rm env}$ = 22~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 3.5 $\times$ 10$^{9}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0046~\msol\ \\
& & $t_{0}$ = $-$0.04 days & \\
& & $t_{PL}$ = 0.19 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.81, 2.14) & \\
\hline
& $\theta$ = 90\degr & $M_{\rm env}$ = 1.7 $\times$ 10$^{-3}$~\msol\ & 3.29 & $\rho_{\rm env}$ = 0.47~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 1.2 $\times$ 10$^{10}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0065~\msol\ \\
& & $t_{0}$ = $-$0.06 days & \\
& & $t_{PL}$ = 0.19 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.82, 2.14) & \\
\hline
1.05~\msol\ and 0.82 $\times$ 10$^{51}$ ergs & $\theta$ = 0\degr & $M_{\rm env}$ = 2.1 $\times$ 10$^{-3}$~\msol\ & 3.29 & $\rho_{\rm env}$ = 34~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 3.1 $\times$ 10$^{9}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0045~\msol\ \\
& & $t_{0}$ = $-$0.03 days & \\
& & $t_{PL}$ = 0.19 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.81, 2.14) & \\
\hline
& $\theta$ = 90\degr & $M_{\rm env}$ = 1.7 $\times$ 10$^{-3}$~\msol\ & 3.29 & $\rho_{\rm env}$ = 0.61~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 1.1 $\times$ 10$^{10}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0065~\msol\ \\
& & $t_{0}$ = $-$0.06 days & \\
& & $t_{PL}$ = 0.20 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.82, 2.14) & \\
\enddata
\tablenotetext{{\rm a}}{The two cases of $M_{\rm ej}$ and $E_{\rm ej}$ are the same as the ones used for ejecta-companion interaction in Table~\ref{tab:kas}.}
\tablecomments{$t_0$ and $t_{\rm PL}$ are in days since the epoch of first light (MJD~58206.00) in rest frame.}
\end{deluxetable*}
\label{tab:csm}
We primarily consider the case of H-poor CSM originating from a WD or He-star companion, using the electron scattering opacity of $\kappa$ = 0.2~cm$^2$~g$^{-1}$, since those are the most likely companions for SN~2018aoz\ based on the constraints derived from the early light curves (Section~\ref{sec:kasan}) and nebular-phase spectra (Section~\ref{sec:neb}).
In this case, the SN explosion could occur after the merger of the binary or during an earlier stage of binary mass transfer \citep{Shen2015apj}.
The distribution of CSM initially after the merger and during earlier stages of mass transfer is expected to be equatorially-concentrated,
rather than spherically symmetric as assumed in P15,
though the distribution can evolve towards spherical symmetry on a timescale of hours after the merger \citep{Guillochon2010apj, Pakmor2013apj, Schwab2012mnras}.
Table~\ref{tab:csm} presents the best-fit parameters obtained by
fitting the early light curves of SN~2018aoz\ during 0--8 days since first light, excluding the $B$-band light curve during 0--1 days, using two extreme cases of viewing angles, $\theta$ = 0\degr\ (equal to spherically symmetric case) and $\theta$ = 90\degr, and two cases of relatively small and large ejecta masses and kinetic energies for SN~2018aoz\ following Section~\ref{sec:kasfit}.
Note that the reduced $\chi^2$ statistics of the four cases are nearly identical (\chisqr\ $\sim$ 3.3), indicating that the goodness of fit is very similar for the different cases.
Figure~\ref{fig:shockmod} compares the light and color curves of the best-fit ejecta-CSM interaction model obtained in the case of $M_{\rm ej}$ = 0.80~\msol, $E_{\rm ej}$ = 0.63 $\times$ 10$^{51}$~ergs, and $\theta$ = 0\degr\ (red dot-dashed curves) to those of SN~2018aoz\ and the two other best-fit models of surface \ni56\ heating (Section~\ref{sec:nipl}) and ejecta-companion interaction (Section~\ref{sec:kasfit}), where the goodness of fit is very similar for the three models.
As seen in the table, the difference of $t_{PL} - t_0 \sim$ 0.22--0.26 days obtained for the ejecta-CSM interaction model is near the lower extreme of the range obtained for the surface \ni56\ heating (0.54 days) and ejecta-companion interaction (0.28--0.60 days) models, consistent with there being a delay (or dark phase) of $\lesssim$ 1 day between the explosion and the onset of power-law rise in SN~2018aoz\ (Paper I).
We examine whether the CSM mass, $M_{\rm env}$, required to fit the observed infant-phase excess emission
is compatible with the expectations of CSM after a merger (or ``post-merger CSM'').
The total CSM mass ($M_{\rm CSM}$) is not necessarily equal to $M_{\rm env}$
since the envelope represents only the outermost layer of CSM near $R_{\rm env}$ that dominates the ejecta-CSM interaction emission.
In general, $M_{\rm CSM}\gtrsim M_{\rm env}$, where $M_{\rm CSM}$ = $M_{\rm env}$ holds for entirely uniform-density CSM and $M_{\rm CSM}/M_{\rm env}$ increases with the central concentration of the CSM density distribution.
Adopting a $\rho \propto r^{-3}$ density distribution expected for post-merger CSM \citep{Piro&Morozova2016apj}, we obtain the following equation for $M_{\rm CSM}$ in terms of $M_{\rm env}$ and $R_{\rm env}$:
\begin{equation}
M_{\rm CSM} \simeq 4\pi R_{\rm env}^3 \rho_{\rm env} \log{(R_{\rm env}/R_*)}
\label{eq:Mcsm}
\end{equation}
where $\rho_{\rm env} = 3 M_{\rm env} / 4\pi R_{\rm env}^3$ is the CSM density in the outermost layer (= envelope density) and $R_*$ is the progenitor radius.
$R_*$ is taken to be $\sim$ 6 $\times$ 10$^8$~cm, the expected shock breakout radius of SN~2018aoz\ (Section~\ref{sec:sbo}).
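As a consistency check, Equation~\ref{eq:Mcsm} with $\rho_{\rm env} = 3 M_{\rm env} / 4\pi R_{\rm env}^3$ reduces to $M_{\rm CSM} \simeq 3 M_{\rm env} \log(R_{\rm env}/R_*)$; reading the logarithm as base 10 reproduces the tabulated $\rho_{\rm env}$ and $M_{\rm CSM}$ values (a natural logarithm would give masses $\sim$ 2.3 times larger). A minimal sketch:

```python
import math

MSUN = 1.989e33  # solar mass [g]

def csm_mass(M_env_msun, R_env, R_star=6e8):
    """Total CSM mass for a rho ~ r^-3 profile (Equation 5), with
    rho_env = 3 M_env / (4 pi R_env^3) and the logarithm taken as
    base 10, which matches the tabulated values.  R_env, R_star in cm.
    Returns (rho_env [g/cm^3], M_CSM [Msun])."""
    rho_env = 3.0 * M_env_msun * MSUN / (4.0 * math.pi * R_env**3)
    M_csm = 4.0 * math.pi * R_env**3 * rho_env * math.log10(R_env / R_star)
    return rho_env, M_csm / MSUN

# First row of the H-poor CSM table: M_env = 2.0e-3 Msun, R_env = 3.5e9 cm
rho, m = csm_mass(2.0e-3, 3.5e9)
# rho ~ 22 g/cm^3 and m ~ 0.0046 Msun, as tabulated
```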
Table~\ref{tab:csm} column 5 provides the derived CSM properties of $\rho_{\rm env}$
and $M_{\rm CSM}$ that would be implied by the fit parameters using Equation~\ref{eq:Mcsm}.
Overall, these properties appear to be incompatible with the theoretical expectations for post-merger CSM.
In simulations of violent mergers, the post-merger CSM mass can be $\sim$ 0.1--0.7\,\msol\ depending on the companion mass \citep{Dan2014mnras}, much larger than the $M_{\rm CSM}\lesssim$ 0.007~\msol\ required to fit the observed infant-phase excess emission.
The post-merger CSM radius is also expected to expand on short timescales, beginning from $\sim$ 10$^{10}$~cm during the merger and expanding to $\sim$ 10$^{11}$~cm in only a few hours after the merger \citep{Piro&Morozova2016apj}, becoming less compatible with the fitted CSM radii of $R_{\rm env} \lesssim$ 10$^{10}$~cm on the timescale of the infant-phase excess emission.
Thus, the emission is not likely to be from post-merger CSM.
We instead consider CSM of smaller mass and radius expected in ``pre-merger'' stages of binary mass transfer, before the WD or He-star companion is disrupted, for the origin of the infant-phase excess emission.
For example, in simulations of He-shell DDets from WD-WD mergers, $\lesssim$ 0.1~\msol\ of CSM, distributed in a torus around the progenitor star, is expected to be present at the time of explosion, which occurs before the merger is completed.
The outermost layers of the torus are located at $\gtrsim$~10$^9$~cm where the CSM density is expected to be $\lesssim$~10$^3$~g~cm$^{-3}$ \citep{Guillochon2010apj, Pakmor2013apj}.
These pre-merger CSM properties are comparable to $R_{\rm env}$ = (3.1--3.5) $\times$ 10$^{9}$~cm and $\rho_{\rm env}$ = 22--34~g~cm$^{-3}$ obtained
for the cases with $\theta$ = 0\degr\ viewing angle (see Table~\ref{tab:csm}).
If the pre-merger CSM density distribution is similar to the post-merger case ($\rho \propto r^{-3}$), then the corresponding total CSM mass of $M_{\rm CSM}\sim$ 0.005~\msol\ is relatively small compared to the CSM masses expected in He-shell DDet simulations \citep[$\sim$ 0.05--0.10~\msol;][]{Guillochon2010apj}.
However, since $M_{\rm CSM}$ can be larger for steeper pre-merger CSM density distributions,
ejecta interaction with pre-merger CSM remains possible for the origin of the observed infant-phase excess emission in SN~2018aoz.
\subsubsection{Circumstellar Material from a Main Sequence or Subgiant Companion}\label{sec:accrete}
\begin{deluxetable*}{cclcl}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablewidth{0.99\textwidth}
\tablecaption{Ejecta-CSM interaction model fit parameters for solar-composition CSM ($\kappa$ = 0.34~cm$^2$~g$^{-1}$)}
\tablehead{
\colhead{$M_{\rm ej}$ and $E_{\rm ej}${$\rm ^a$}} & \colhead{Viewing angle} & \colhead{Fit parameters} & \colhead{\chisqr} & \colhead{CSM properties}
}
\startdata
0.80~\msol\ and 0.63 $\times$ 10$^{51}$ ergs & $\theta$ = 0\degr\ & $M_{\rm env}$ = 1.3 $\times$ 10$^{-3}$~\msol\ & 3.30 & $\rho_{\rm env}$ = 5.3~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 4.9 $\times$ 10$^{9}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0035~\msol\ \\
& & $t_{0}$ = $-$0.03 days & \\
& & $t_{PL}$ = 0.19 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.81, 2.14) & \\
\hline
& $\theta$ = 90\degr\ & $M_{\rm env}$ = 1.1 $\times$ 10$^{-3}$~\msol\ & 3.28 & $\rho_{\rm env}$ = 0.19~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 1.4 $\times$ 10$^{10}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0045~\msol\ \\
& & $t_{0}$ = $-$0.05 days & \\
& & $t_{PL}$ = 0.20 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.82, 2.14) & \\
\hline
1.05~\msol\ and 0.82 $\times$ 10$^{51}$ ergs & $\theta$ = 0\degr\ & $M_{\rm env}$ = 1.4 $\times$ 10$^{-3}$~\msol\ & 3.30 & $\rho_{\rm env}$ = 7.8~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 4.4 $\times$ 10$^{9}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0035~\msol\ \\
& & $t_{0}$ = $-$0.03 days & \\
& & $t_{PL}$ = 0.19 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.81, 2.14) & \\
\hline
& $\theta$ = 90\degr\ & $M_{\rm env}$ = 1.2 $\times$ 10$^{-3}$~\msol\ & 3.28 & $\rho_{\rm env}$ = 0.33~g~cm$^{-3}$ \\
& & $R_{\rm env}$ = 1.2 $\times$ 10$^{10}$~cm & & $M_{\rm CSM}$ ($\rho \propto r^{-3}$) = 0.0046~\msol\ \\
& & $t_{0}$ = $-$0.05 days & \\
& & $t_{PL}$ = 0.20 days & \\
& & $\alpha_{B, V, i}$ = (2.12, 1.82, 2.14) & \\
\enddata
\tablenotetext{{\rm a}}{The two cases of $M_{\rm ej}$ and $E_{\rm ej}$ are the same as the ones used for ejecta-companion interaction in Table~\ref{tab:kas}.}
\tablecomments{$t_0$ and $t_{\rm PL}$ are in days since the epoch of first light (MJD~58206.00) in rest frame.}
\end{deluxetable*}
\label{tab:csmaccrete}
For the less likely case of a few solar mass main-sequence or subgiant companion in SN~2018aoz\ (Section~\ref{sec:kasan}), we briefly consider the possibility of ejecta interaction with solar composition CSM from such companions, adopting the electron scattering opacity of $\kappa$ = 0.34~cm$^2$~g$^{-1}$.
Since those companions are mainly expected to trigger \tase\ via accretion \citep{Maoz2014araa}, the CSM is likely to be an equatorially-concentrated disk or torus.
Table~\ref{tab:csmaccrete} shows the best-fit parameters obtained by
fitting the early light curves of SN~2018aoz\ using the aforementioned two extreme cases of viewing angles for equatorially-concentrated CSM, $\theta$ = 0\degr\ for equatorial viewing angle and 90\degr\ for polar viewing angle, and two cases of relatively small and large ejecta masses and kinetic energies for SN~2018aoz.
For uniform-density CSM, the total CSM mass is expected to be $M_{\rm CSM}$ = $M_{\rm env}$, in the range 0.0011--0.0014~\msol, while the total CSM mass expected for the relatively steep CSM density distribution of $\rho \propto r^{-3}$ is in the range 0.0035--0.0046~\msol\ (Table~\ref{tab:csmaccrete} column 5 based on Equation~\ref{eq:Mcsm}).
Overall, if SN~2018aoz\ was triggered by accretion from a few solar mass main-sequence or subgiant companion, then the observed infant-phase excess emission can be produced by ejecta interaction with a $\sim$ 0.001--0.005~\msol\ accretion disk near $\sim$ (4--14) $\times$ 10$^{9}$~cm.
\subsection{Search for Shock Breakout}\label{sec:sbo}
Shock breakout is expected to occur shortly after a SN explosion
when the outgoing shockwave breaks through the surface of the progenitor star \citep{Piro2010apj, Nakar2010apj}.
Early observations of \tase\ have been used to search for evidence of shock breakout based on the expected thermal emission from the shock-heated envelope.
However, since the luminosity of the shock breakout emission scales with the radius of the progenitor star, this emission has not yet been observed in \tase\ due to the small size of WDs.
The non-detection of shock breakout emission in early \tase\ has been used to constrain the radius of the SN progenitor, e.g., in the case of SN~2011fe to be $\lesssim$ 0.02~\rsol\ \citep{Bloom2012apj}.
We investigate the origin of the infant-phase excess emission by comparing the observed early light curves of SN~2018aoz\ with what is expected from shock breakout emission.
Adopting the model of \citet{Piro2010apj}, which assumes an approximately spherically-symmetric explosion and a radial shock acceleration law,
the shock breakout emission luminosity ($L_{\rm SBO}$) and temperature ($T_{\rm SBO}$) are determined by
the ejecta mass ($M_{\rm ej}$) and the radius of the progenitor star at the time of shock breakout ($R_{\rm SBO}$) as follows:
\begin{align}
L_{\rm SBO}(t) &= 7^{-4/3} \times 2 \times 10^{40} (g_9/K_{13})^{-0.41} V_9^{1.9} \rho_6^{0.36} R_{8.5}^{0.83} t_4^{-0.16}\ {\rm erg~s^{-1}}
\label{eq:Lsbo}\\
T_{\rm SBO}(t) &= 7^{-1/3} \times 2 \times 10^{4} (g_9/K_{13})^{-0.058} V_9^{0.030} \rho_6^{0.0058} R_{8.5}^{0.11} t_4^{-0.44}\ {\rm K}
\label{eq:Tsbo}
\end{align}
where the first terms are correction factors to fix the improper scalings \citep{Bloom2012apj}, $g_9 \propto M_{\rm ej}/R_{\rm SBO}^2$ represents surface gravity, $K_{13} = K/(10^{13}\ {\rm cgs})$ represents the non-relativistic degenerate equation of state constant ($K$) with $\mu_e \sim 2$ for C+O WDs, $V_9 \sim 0.6$ and $\rho_6 \sim 2 g_9^{0.11}$ represent the shock velocity and density, respectively, $R_{8.5} = R_{\rm SBO}/(10^{8.5}\ {\rm cm})$, and $t_4 = (t-t_0)/(10^4\ {\rm s})$ represents time since the epoch of explosion ($t_0$).
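Equations~\ref{eq:Lsbo} and \ref{eq:Tsbo} can be evaluated as functions of $t_4$ once $g_9$, $K_{13}$, and $R_{8.5}$ are specified; since only the proportionality of $g_9$ is quoted, the sketch below treats these scaled quantities as inputs, with the fiducial $V_9 = 0.6$ and $\rho_6 = 2 g_9^{0.11}$ from the text:

```python
def shock_breakout(t4, g9, R85, K13=1.0, V9=0.6):
    """L_SBO [erg/s] and T_SBO [K] from the corrected Piro (2010) scalings.

    t4 = (t - t0)/1e4 s; g9 = surface gravity in units of 1e9 cgs;
    R85 = R_SBO / 10^8.5 cm; K13 = degenerate EoS constant / 1e13 cgs.
    The normalizations of g9 and K13 are inputs, not derived here.
    """
    rho6 = 2.0 * g9**0.11  # fiducial shock density scaling from the text
    L = 7.0**(-4.0 / 3.0) * 2e40 * (g9 / K13)**(-0.41) * V9**1.9 \
        * rho6**0.36 * R85**0.83 * t4**(-0.16)
    T = 7.0**(-1.0 / 3.0) * 2e4 * (g9 / K13)**(-0.058) * V9**0.030 \
        * rho6**0.0058 * R85**0.11 * t4**(-0.44)
    return L, T
```

The $R_{8.5}^{0.83}$ dependence of $L_{\rm SBO}$ is what ties the fitted excess brightness to the breakout radius.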
We fit the early light curves of SN~2018aoz\ during 0--8 days since first light (excluding the $B$-band light curve during 0--1 days) as a combination of power-law (for the underlying SN emission) and shock breakout emission (for the infant-phase excess emission).
The best-fit shock breakout radius is $R_{\rm SBO}$ = (3.5--3.7) $\times$ 10$^9$~cm, or 0.050--0.053~\rsol, where the lower and upper limits of the range represent the results obtained using two cases of relatively small and large values of $M_{\rm ej}$ (= 0.80 and 1.05~\msol), respectively, for SN~2018aoz\ following the methods of Sections~\ref{sec:kasfit} and \ref{sec:csm}.
This range of fitted $R_{\rm SBO}$ is larger than the $R_{\rm SBO}$ that can be reasonably expected for the progenitor of SN~2018aoz, indicating that shock breakout is unlikely to be the origin of the observed infant-phase excess emission.
For a typical C+O WD with mass $\sim$ 1.0~\msol, the shock breakout radius after possible expansion due to a deflagration phase is expected to be $\sim$ 6~$\times$~10$^{8}$~cm \citep{Piro2010apj}.
While explosion asymmetry can lead to a factor of $\lesssim$ 2 difference in the inferred $R_{\rm SBO}$, corresponding to the expected range of angular variation of shock breakout luminosity for a compact progenitor \citep{Afsariardchi2018apj}, an unreasonably intense deflagration phase would still be required to achieve a radius as large as the fitted $R_{\rm SBO}$ = (3.5--3.7)~$\times$~10$^{9}$~cm.
Moreover, explosion mechanisms dominated by deflagration typically leave substantial amounts of unburnt carbon \citep{Nomoto1984apj}, while no C spectral features are seen in SN~2018aoz\ (Section~\ref{sec:carbon}).
For the case of a He-shell DDet origin, which lacks a deflagration phase, the radius of the best-fit He-shell DDet model progenitor for SN~2018aoz---a 1.05~\msol\ C+O WD with a 0.01~\msol\ He-shell---is only 5.14~$\times$~10$^8$~cm (Paper I).
In this case, while the He-shell can generally be expected to undergo some shock-driven expansion during the He-shell DDet process, the fitted $R_{\rm SBO}$ of (3.5--3.7)~$\times$~10$^{9}$~cm is unrealistic for the best-fit He-shell DDet model progenitor due to the extremely small He-shell mass.
\section{Nebular-Phase Evolution: Constraints on the Progenitor System and Explosion Mechanism}\label{sec:nebea}
\subsection{Nebular-Phase Light Curves}\label{sec:neblcev}
Figure~\ref{fig:neblc} compares the evolution of SN~2018aoz\ light curves in $BVi$ (blue, green, and red circles) bands from the beginning to the nebular phase with those of normal \tas~2011fe\footnote[1]{The $I$-band magnitudes of SN~2011fe were converted to $i$ band by subtracting $-2.5\log_{10}(3631~{\rm Jy} /2416~{\rm Jy})$.} \citep[dashed lines;][]{Munari2013newa,Tsvetkov2013coska} that have been scaled to match
the peak absolute magnitude, \mb\ = $-$19.32 mag, and post-peak decline rate, \dm15 = 1.12 mag,
of SN~2018aoz.
The light curves of SNe~2018aoz and 2011fe show a good agreement overall throughout their evolution, especially in the $B$-band,
confirming that the evolution of SN~2018aoz\ in the nebular phase continues to match those of normal \tase.
The nebular-phase light curves of SN~2018aoz\ decline linearly at rates of 0.0131 $\pm$ 0.0001, 0.0129 $\pm$ 0.0001, and 0.0083 $\pm$ 0.0003 mag~day$^{-1}$ in the $B$, $V$, and $i$ bands, respectively, with a $BVi$-averaged decline rate of 0.0127 mag~day$^{-1}$.
For comparison, we measure the light curve decline rates of SN~2011fe
during the nebular phase to be
0.0134, 0.0138, and 0.0099 mag~day$^{-1}$
in the $B$, $V$, and $i$ bands, respectively.
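For context (a comparison not made explicitly in the text), these rates can be set against the decline expected for a light curve tracking \co56\ decay with complete gamma-ray trapping, using the \co56\ mean lifetime of 111.3 days:

```python
import math

TAU_CO56 = 111.3  # 56Co mean lifetime [days]

# Decline rate [mag/day] for full gamma-ray trapping: 2.5 log10(e) / tau
full_trap_rate = 2.5 * math.log10(math.e) / TAU_CO56  # ~0.0098 mag/day
```

The $BVi$-averaged 0.0127 mag~day$^{-1}$ of SN~2018aoz\ is steeper than this full-trapping limit, as is typical for \tase\ whose ejecta only partially trap gamma-rays at nebular epochs.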
\subsection{Nebular-Phase Spectra}\label{sec:nebspecev}
Figure~\ref{fig:nebspec} shows the identification of nebular-phase [\feiii]~$\lambda$4658~\AA\ and [\coiii]~$\lambda$5888~\AA\ features in SN~2018aoz\ (red and green vertical regions, respectively), whose flux ratio is associated with the evolution of \ni56\ $\rightarrow$ \co56\ $\rightarrow$ \fer56\ radioactive decay in \tase\ \citep{Kuchner1994apj}, as well as that of a double-peaked feature near 7290~\AA\ (blue vertical region).
The nebular-phase 7290~\AA\ feature of \tase\ can be from the [\caii] $\lambda$7291, 7323~\AA\ doublet, [\feii] $\lambda$7155~\AA, [\niii] $\lambda$7378~\AA, or some combination thereof \citep{Polin2021apj, Flors2020mnras}.
The double-peaked 7290~\AA\ feature observed in SN~2018aoz, as well as most normal events \citep[e.g., SN~2011fe;][]{Mazzali2015mnras}, is most likely dominated by [\feii] and [\niii] emission since the [\caii] feature would not be resolved as a doublet at typical \tas\ velocities \citep{Polin2021apj}.
For each of the four nebular-phase spectra of SN~2018aoz, Table~\ref{tab:nebflux} provides the measured fluxes of [\feiii], [\coiii], and the 7290~\AA\ feature ([\feii] $+$ [\niii]), as well as the ratios between the fluxes of the 7290~\AA\ feature to those of [\feiii]~$\lambda$4658~\AA, called ``7290~\AA/[\feiii]'' hereafter.
To obtain uncertainties in the fluxes, we estimated noise levels by smoothing each spectrum with a second-order Savitzky--Golay filter with a width of 150~\AA, which is $\lesssim$ 1/4 of the feature widths.
The average 7290~\AA/[\feiii] ratio of 0.149 $\pm$ 0.007 between 120 and 320 days since peak for SN~2018aoz\ is near the lower extreme of what has been found in normal \tase\ in the range of $\sim$ 0.1--1.0 \citep{Polin2021apj}.
\begin{deluxetable*}{ccccc}
\tabletypesize{\footnotesize}
\tablecolumns{4}
\tablewidth{0.99\textwidth}
\tablecaption{Nebular-phase broad emission line fluxes}
\tablehead{
\colhead{Phase$\rm ^a$} & \colhead{$[$\feiii$]$} & \colhead{$[$\coiii$]$} &
\colhead{[\feii] $+$ [\niii]} &
\colhead{7290~\AA/[\feiii]$\rm ^b$}\\
\colhead{} & \colhead{$\lambda$~4658~\AA} & \colhead{$\lambda$~5888~\AA} &
\colhead{$\lambda$~7155~\AA, 7378~\AA} &
\colhead{}\\
\colhead{} & \colhead{($10^{-14}$ erg s$^{-1}$ cm$^{-2}$)} & \colhead{($10^{-14}$ erg s$^{-1}$ cm$^{-2}$)} & \colhead{($10^{-14}$ erg s$^{-1}$ cm$^{-2}$)} & \colhead{}
}
\startdata
$+$259.4 & 10.934 $\pm$ 0.010 & 1.347 $\pm$ 0.003 & 1.710 $\pm$ 0.003 & 0.1564 $\pm$ 0.0003 \\
$+$277.3 & N/A & 1.029 $\pm$ 0.003 & 1.443 $\pm$ 0.002 & N/A \\
$+$296.4 & 7.629 $\pm$ 0.009 & 0.632 $\pm$ 0.003 & 1.086 $\pm$ 0.002 & 0.1424 $\pm$ 0.0003 \\
$+$382.5 & 2.658 $\pm$ 0.005 & 0.245 $\pm$ 0.002 & 0.580 $\pm$ 0.002 & 0.2181 $\pm$ 0.0008 \\
\enddata
\tablenotetext{{\rm a}}{Phases are measured in observer frame days since $B$-band maximum.}
\tablenotetext{{\rm b}}{The ratio of the flux of the 7290~\AA\ feature to that of [\feiii] \citep{Polin2021apj}}
\end{deluxetable*}
\label{tab:nebflux}
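The quoted average 7290~\AA/[\feiii] ratio follows directly from Table~\ref{tab:nebflux}: within the 120--320 day window, only the $+$259.4 and $+$296.4 day epochs have [\feiii] measurements (the $+$277.3 day epoch lacks one, and $+$382.5 days falls outside the window). A quick check:

```python
# Fluxes from the table (10^-14 erg/s/cm^2) at +259.4 d and +296.4 d,
# the two epochs inside the quoted window with [Fe III] measurements
feiii = [10.934, 7.629]   # [Fe III] 4658 A
blend = [1.710, 1.086]    # [Fe II] + [Ni II] 7290 A feature

ratios = [b / f for b, f in zip(blend, feiii)]   # 0.1564, 0.1424
mean_ratio = sum(ratios) / len(ratios)           # ~0.149
```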
Figure~\ref{fig:nebspec} also shows the expected positions of
narrow emission lines of H, He, and O near H$\alpha$ $\lambda$6563~\AA, \hei~$\lambda$5875, 6678~\AA, and [\oi]~$\lambda$6300, 6364~\AA, respectively.
In \tase, these low-velocity lines may be produced by swept-up material from the companion \citep[e.g.,][]{Kollmeier2019mnras} or CSM, including disrupted companion material following a violent merger \citep{Kromer2013apj, Mazzali2022mnras, Tucker2022apj}.
All of the lines appear to be absent in SN~2018aoz, which argues against the presence of a substantial amount of swept-up material (see below).
By injecting synthetic H and He emission lines with FWHM = 1000~km~s$^{-1}$ and Doppler shifts of up to $\pm$1000~km~s$^{-1}$ from the rest wavelength into the observed nebular spectra, following the methods of \citet{Sand2018apj, Sand2019apj}, we obtain 3$\sigma$ flux upper limits for the H$\alpha$ and \hei\ lines.
We do the same for [\oi], but using FWHM = 2000~km~s$^{-1}$ and up to $\pm$2000~km~s$^{-1}$ Doppler shifts that can be expected for [\oi] \citep{Taubenberger2013apj}.
Table~\ref{tab:neblims} presents the measured upper limits and their corresponding luminosities.
\begin{deluxetable*}{cccccc}
\tabletypesize{\footnotesize}
\tablecolumns{6}
\tablewidth{0.99\textwidth}
\tablecaption{Nebular-phase emission line flux and luminosity limits}
\tablehead{
\colhead{Line} & \colhead{Phase$\rm ^a$} & \colhead{Flux Limit} & \colhead{Luminosity Limit} & \colhead{Mass Limit$\rm ^b$} & \colhead{Mass Limit$\rm ^c$} \\
\colhead{} & \colhead{} & \colhead{($10^{-17}$ erg s$^{-1}$ cm$^{-2}$)} & \colhead{($10^{36}$ erg s$^{-1}$)} & \colhead{($10^{-4}$ \msol)} & \colhead{($10^{-4}$ \msol)}
}
\startdata
H$\alpha$~$\lambda$6563~\AA\ & $+$259.4 & 5.4 & 3.2 & 4 & 10--16 \\
& $+$277.3 & 5.0 & 3.0 & 4 & 12--18 \\
& $+$296.4 & 6.3 & 3.8 & 5 & 16--24 \\
& $+$382.5 & 7.7 & 4.6 & 9 & 100--130\\
\hline
\hei~$\lambda$5875~\AA\ & $+$259.4 & 10.0 & 6.0 & 25 & \\
& $+$277.3 & 7.8 & 4.7 & 25 & \\
& $+$296.4 & 9.0 & 5.4 & 33 & \\
& $+$382.5 & 17.6 & 10.5 & 104 & \\
\hline
\hei~$\lambda$6678~\AA\ & $+$259.4 & 5.4 & 3.2 & 18 & \\
& $+$277.3 & 6.7 & 4.0 & 25 & \\
& $+$296.4 & 10.5 & 6.3 & 43 & \\
& $+$382.5 & 7.7 & 4.6 & 55 & \\
\hline
$[$\oi$]$~$\lambda$6300~\AA\ & $+$259.4 & 16.9 & 10.1 & & \\
& $+$277.3 & 17.6 & 10.5 & & \\
& $+$296.4 & 21.6 & 13.0 & & \\
& $+$382.5 & 25.8 & 15.5 & & \\
\hline
$[$\oi$]$~$\lambda$6364~\AA\ & $+$259.4 & 16.9 & 10.1 & & \\
& $+$277.3 & 21.1 & 12.6 & & \\
& $+$296.4 & 21.6 & 13.0 & & \\
& $+$382.5 & 25.8 & 15.5 & & \\
\enddata
\tablenotetext{{\rm a}}{Phases are measured in observer frame days since $B$-band maximum.}
\tablecomments{All implanted lines have peak fluxes corresponding to three times the local rms with a FWHM = 1000 km s$^{-1}$, except for the \hei~$\lambda$5875~\AA\ line, where we used a peak flux of four times the local rms. We infer upper limits on the mass of the emitting elements based on the luminosity limits and the model predictions of $\rm ^b$\citet{Botyanszki2018apj} and $\rm ^c$\citet{Dessart2020aa}.}
\end{deluxetable*}
\label{tab:neblims}
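The luminosity limits in Table~\ref{tab:neblims} follow from the flux limits via $L = 4\pi d^2 F$; the tabulated flux/luminosity pairs imply a luminosity distance of $d \approx 22$~Mpc (adopted elsewhere in the paper, and treated here as an assumed input):

```python
import math

MPC = 3.086e24  # cm per Mpc

def line_luminosity(flux_cgs, d_mpc=22.3):
    """L = 4 pi d^2 F, in erg/s.  The default distance is the value
    implied by the tabulated pairs, not quoted in this section."""
    return 4.0 * math.pi * (d_mpc * MPC)**2 * flux_cgs

L_halpha = line_luminosity(5.4e-17)  # ~3.2e36 erg/s (Halpha, +259.4 d)
```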
\subsection{Constraints on Non-Degenerate Companion and Circumstellar Material}\label{sec:neb}
We now constrain the presence of a non-degenerate companion, or of CSM from the violent merger case of double degeneracy, in the progenitor system of SN~2018aoz\ based on the absence of the emission lines predicted from their unburned swept-up material in the observed nebular-phase spectra.
The ejecta of \tase\ are expected to strip/ablate $\sim$ 0.1~\msol\ of H- or He-rich materials from a non-degenerate companion \citep[e.g.,][]{Botyanszki2018apj}, while in the case of a violent merger, $\sim$ 0.1--0.7~\msol\ of H-poor CSM composed of O or He can be expected depending on the mass and composition of the companion WD \citep{Dan2014mnras}.
Multiple spectral synthesis studies have shown that even trace amounts ($\sim 10^{-3}$\,\msol) of low-velocity H will lead to observable nebular-phase H$\alpha$ emission \citep{Mattila2005aap, Botyanszki2018apj, Dessart2020aa}.
For H-poor material, some recent studies \citep{Dessart2020aa} find that $\lesssim$ 0.2~\msol\ of O or He could be hidden by metal line-blanketing while other studies find that even very small amounts, $\sim$ 0.05~\msol\ of O \citep{Mazzali2022mnras} or $\sim$ 10$^{-3}$~\msol\ of He \citep{Botyanszki2018apj}, would be observable.
In columns 5 and 6 of Table~\ref{tab:neblims}, we use the models of \citet{Botyanszki2018apj} and \citet{Dessart2020aa}, respectively,
to obtain upper limits on the masses of low-velocity H and He based on the observed upper limits on the luminosities of their lines.
The range of values in column 6 corresponds to the range of mass upper limits derived for the set of delayed-detonation and sub-Chandrasekhar-mass explosion models from \citet{Dessart2020aa}.
As seen in the columns, even the most conservative upper limits permit less than 1.3 $\times$ 10$^{-2}$~\msol\ of H and 1.0 $\times$ 10$^{-2}$~\msol\ of He, making the presence of even a trace amount of H extremely unlikely according to both models, while \citet{Botyanszki2018apj} would also exclude the presence of significant He.
Overall, the nebular-phase spectra of SN~2018aoz\ disfavour the presence of a non-degenerate companion and, to a lesser extent, the large mass of H-poor CSM expected from a violent merger.
The presence of a main sequence or red giant companion is especially disfavoured by the H constraints based on both \citet{Botyanszki2018apj} and \citet{Dessart2020aa}, while the He constraints based on \citet{Botyanszki2018apj} would even disfavour a naked He-star companion.
\subsection{Constraints on Sub-Chandrasekhar-Mass and Asymmetric Explosion Mechanisms}\label{sec:nebex}
Nebular-phase spectra of \tase\ can also offer constraints on the explosion mechanism and geometry.
In particular, the strength of [\caii] emission near 7290~\AA\ can be linked to the mass of the progenitor WD.
Pure detonations of low-mass WDs are expected to undergo incomplete burning due to their lower density \citep{Polin2021apj}, leading to the production of Ca mixed with other Fe-peak elements, which then cool efficiently via [\caii] in the nebular phase \citep{Polin2021apj, Hoeflich2021apj}.
As explained above, the double-peaked 7290~\AA\ features seen in SN~2018aoz\ and most normal \tase\ (e.g., SN~2011fe) are likely dominated by [\feii] and [\niii], indicating weak [\caii] emission.
This may imply the explosion of a relatively massive WD \citep[$\gtrsim$ 1.2~\msol; based on comparisons to the He-shell DDet models of][see Section~\ref{sec:heddneb}]{Polin2021apj}.
For instance, a 1.26~\msol\ WD explosion can produce weak and double-peaked emission near 7290~\AA\ as seen in SN~2011fe \citep{Mazzali2015mnras}.
In this case, however, reconciling the relatively large total mass with the estimated ejecta mass of SN~2018aoz\ based on 1-D modelling of its fast-rising light curves (0.8--1.05~\msol; Paper I) may rely on explosion asymmetry and viewing angle effects.
Such an explosion asymmetry may leave imprints on other lines in the nebular phase.
In particular, the [\feii]~$\lambda$7155~\AA\ and [\niii]~$\lambda$7378~\AA\ emission features seen in normal \tase\ are expected to be Doppler shifted as a result of the motion of the ejecta core in an asymmetric explosion mechanism \citep{Maeda2010apj, Maeda2010natur, Li2021apj}.
Note that these two emission features are thought to be primarily produced by single line transitions, whereas other emission features of Fe-peak elements, including [\feiii] and [\coiii], are produced by a blend of several lines with wavelength separations smaller than their typical line widths \citep{Maeda2010apj, Flors2020mnras}.
Thus, shifts in the central wavelengths of [\feiii] and [\coiii] are not solely attributable to Doppler velocity.
As seen in Figure~\ref{fig:nebspec}, both the [\feii] and [\niii] emission features in the nebular-phase spectra of SN~2018aoz\ are blueshifted, corresponding to average velocities of $-$2240 $\pm$ 290 and $-$1900 $\pm$ 530 km~s$^{-1}$, respectively.
These are among the most blueshifted velocities reported for those features in \tase\ \citep{Li2021apj, Maeda2010natur}.
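The blueshift velocities quoted above follow from the standard non-relativistic Doppler relation applied to the measured line centroids. A minimal sketch (the observed centroid wavelength below is hypothetical, chosen only to reproduce a shift of roughly the measured size):

```python
C_KMS = 299_792.458  # speed of light (km/s)

def doppler_velocity(lam_obs, lam_rest):
    """Line-of-sight velocity from a line centroid; negative = blueshift."""
    return C_KMS * (lam_obs - lam_rest) / lam_rest

# Hypothetical centroid: the [Fe II] 7155 A line observed near 7101.5 A,
# giving a blueshift of roughly -2240 km/s
v_fe = doppler_velocity(7101.5, 7155.0)
```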
For both asymmetric Chandrasekhar-mass explosion mechanisms \citep{Maeda2010apj} and double-detonations \citep{Boos2021apj}, the observed velocities of Fe and Ni in SN~2018aoz\ point to a viewing angle where the primary component of the ejecta core is approaching the observer.
However, we note that asymmetric explosions only provide faster-rising light curves from viewing angles where the ejecta core is \emph{receding} from the observer \citep[see][]{Shen2021apj, Boos2021apj} since those directions provide higher ejecta velocities and lower densities, which leads to shorter diffusion times.
Thus, asymmetric effects alone are unable to reconcile the fast-rising SN light curves with a relatively large total ejecta mass.
We propose three possible scenarios below to explain the absence of [\caii] emission and blueshift of [\feii] and [\niii] emission in the nebular phase of SN~2018aoz\ together with its short rise time.
\begin{enumerate}
\item SN~2018aoz\ may originate from the explosion of a relatively high-mass ($\gtrsim$ 1.2~\msol) WD, where complete nuclear burning in the core results in no nebular-phase [\caii] emission.
The explosion may be moderately asymmetric, resulting in blueshifted [\feii] and [\niii] lines in the nebular phase from a viewing angle where the ejecta core is approaching the observer.
In this case, the fast-rising light curves of SN~2018aoz\ can only be explained by the presence of a preceding dark phase \citep{Piro&Nakar2013apj}.
However, this scenario is disfavoured for two reasons. First, we found no evidence for a long ($>$ 1 day) dark phase in SN~2018aoz\ (Paper I) based on its observed ejecta velocity evolution \citep{Piro&Nakar2014apj}.
Second, for an asymmetric Chandrasekhar-mass explosion, such a viewing angle would be less compatible with the presence of surface Fe-peak elements, which is primarily predicted in the direction where the ejecta core is receding from the observer \citep[e.g.,][]{Maeda2010apjCh}.
\item Alternatively, the explosion of a high-mass WD can be compatible with the faster light-curve rise if the ejecta core is receding from the observer. In this case, the blueshifted [\feii] and [\niii] lines in the nebular phase may require those lines to be optically thick, causing the receding part of the ejecta to be shielded by the approaching part.
Note that Fe-peak elements have been suggested to act as an ``Fe-curtain'', blocking the radiation from obstructed regions \citep{Leonard2007apj, Dessart2020aa}.
\item SN~2018aoz\ may originate from the asymmetric explosion of a lower-mass WD.
Since nuclear burning is more complete in the densest regions of the ejecta, which is near the off-center point of carbon ignition, Ca production increases towards the low-density opposing direction \citep[e.g.,][]{Boos2021apj}.
For a highly asymmetric explosion, there may be limited overlap between the distributions of Ca and Fe-peak elements in the core, resulting in weak [\caii] emission in the nebular phase \citep{Polin2021apj, Hoeflich2021apj}.
For a moderately asymmetric explosion where the core is approaching the observer, shielding of the most Ca-rich regions by parts of the intervening core may cause blueshifted [\feii] and [\niii] to dominate the 7290~\AA\ feature if their lines are optically thick.
\end{enumerate}
\section{Comparison to He-shell Double-Detonation Models}\label{sec:hedd}
The $B$-band plateau and rapid redward \bv\ color evolution of SN~2018aoz\ within the first $\sim$ 1 day post-explosion have been attributed to the presence of an over-density of Fe-peak elements in the outer 1\% of the SN-ejected mass (Paper I).
In addition, the relatively short 15.3-day rise-time of SN~2018aoz\ among normal \tase\ indicates that SN~2018aoz\ either (1) was a spherically symmetric explosion with a total ejecta mass of $\sim$0.8--1.0 \msol, which is significantly smaller than the Chandrasekhar mass of $\sim$ 1.4~\msol, or (2) was an asymmetric explosion.
Among the proposed explanations for the distribution of Fe-peak elements in the outer ejecta of SN~2018aoz---off-center deflagration, gravitationally confined detonation, and He-shell DDet---only He-shell DDet is compatible with a sub-Chandrasekhar total ejecta mass \citep{Kromer2010apj, Woosley2011apj, Polin2019apj}.
Below we examine the compatibility between the other observed properties of SN~2018aoz\ and a set of 1-D He-shell DDet model predictions.
\subsection{He-shell Double-Detonation Simulations}\label{sec:heddsim}
For our comparisons, we primarily use the set of thin He-shell DDet models from Paper I with core C+O WD and He-shell masses ranging in 1.00--1.10~\msol\ and 0.01--0.012~\msol, respectively, created following the methods of \citet{Polin2019apj}.
The modelling process involves two stages. First, we perform hydrodynamic simulations with full nucleosynthesis using Castro, a compressible hydrodynamics code built on the AMReX framework \citep{Almgren2010apj, Zingale2018jphc}. Then, after the SN ejecta has reached homologous expansion, we perform radiative transport calculations with Sedona \citep{Kasen2006bapj} to produce synthetic light curves and spectra of our models.
The only way our methods differ from the \citet{Polin2019apj} study is that we begin our radiative transport simulations earlier than the previously published models (beginning at 0.1 days instead of 0.25 days) in order to model the natal epochs observed in SN~2018aoz.
In Paper I, we found that the model with a 1.05 M$_\odot$ WD $+$ 0.01 M$_\odot$ He-shell provided the best-fit to the early (0--8 days since first light) \bv\ color and near-peak $BVi$ luminosity evolution of SN~2018aoz. Here, we provide an expanded comparison between these models and both the infant-phase and near-peak properties of the SN. We supplement these with comparisons to the predictions for the nebular-phase emission line ratios of the sub-Chandrasekhar-mass He-shell DDet models from \citet{Polin2021apj}.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{DDetPeak220303.pdf}
\caption{(Left) The dereddened spectrum of SN~2018aoz\ (black and grey curves) from 1.9 days before $B$-band maximum in the rest frame ($\sim$ 13.9 days since explosion) is compared to the outcomes of
He-shell DDet simulations (colored curves) from the nearest post-explosion phase for various WD and He-shell masses as labelled (WD mass $+$ He-shell mass) in \msol.
For clear comparison, the simulated spectra have been smoothed by box-car convolution, resulting in effective spectral resolutions of $R$ = 300.
(Top-right) The dereddened pre-peak $B$-band light curve of SN~2018aoz\ (black circles) is compared to the predictions of the models from the left panel (colored curves).
(Bottom-right) Adaptation of Figure 11 from \citet{Polin2019apj} with the addition of SN~2018aoz\ and the models from the left panel.
The peak $B$-band absolute magnitude and ejecta velocity of SN~2018aoz\ (orange star) and the models (colored stars) are compared to those of other \tase\ \citep[grey circles,][]{Zheng2018apj} and the set of He-shell DDet models with 0.01~\msol\ He-shells (black dashed line) from \citet{Polin2019apj}.
\label{fig:ddetlate}}
\end{figure}
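The box-car smoothing to an effective resolution of $R$ = 300 described in the caption can be sketched as follows; the kernel-width choice and edge handling here are assumptions for illustration, not the exact procedure used to prepare the figure.

```python
import numpy as np

def boxcar_smooth(flux, lam, R_target):
    """Smooth a spectrum to an effective resolution R = lam/dlam with a
    moving-average (box-car) kernel.

    The kernel width is set so that its wavelength extent matches lam/R at
    the median wavelength of the spectrum (an illustrative choice).
    """
    dlam_pix = np.median(np.diff(lam))  # wavelength step per pixel
    lam_mid = np.median(lam)
    width = max(1, int(round(lam_mid / (R_target * dlam_pix))))
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode="same")
```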
To give context for the comparison of these model predictions to the observations of SN~2018aoz\ presented in the subsections below, we first summarize general trends observed in this suite of models:
\begin{enumerate}
\item For a fixed He-shell mass, larger WD masses lead to larger ejecta velocities as well as brighter peak luminosities (Figure~\ref{fig:ddetlate}, bottom-right panel)---a relationship first identified by \citet{Polin2019apj}.
\item For a fixed WD mass, larger He-shell masses also lead to larger ejecta velocities, while the difference in luminosities between He-shell masses is less significant.
\item The luminosity and duration of the infant-phase excess emission from surface Fe-peak elements increase for larger He-shell masses and decrease for larger WD masses (Figure~\ref{fig:ddetearly}, right panels).
\item The durations of the NRB phases for both of the NRBs in \bv\ and \vi\ tend to increase for larger He-shell masses and decrease for larger WD masses, while the maximum colors attained during the NRB phases decrease slightly for larger WD masses (Figure~\ref{fig:ddetearly}, left panels).
\item The maximum NRB color is usually attained earlier in the \vi\ NRB than in the \bv\ NRB, and both follow a few ($\sim$ 1--3) days after the peak of the infant-phase excess emission (Figure~\ref{fig:ddetearly}).
\end{enumerate}
\begin{figure}[t!]
\epsscale{\scl}
\plotone{DDetEarly211219.pdf}
\caption{The early observations of SN~2018aoz\ (black circles), including the dereddened color (left panels) and light (right panels) curves in rest frame, are compared to the outcomes of the He-shell DDet simulations from Figure~\ref{fig:ddetlate} (same-colored curves).
The inverted arrow (top-right) is a detection limit at an S/N of 2.
\label{fig:ddetearly}}
\end{figure}
\subsection{Comparison to Infant-Phase Evolution}\label{sec:heddearly}
Figure~\ref{fig:ddetearly} compares the observed early color
and light curves (filled black circles) of SN~2018aoz\ with those (colored curves)
predicted by the He-shell DDet models during the first 5 days.
The modelled \bv\ and \vi\ color evolution (left panels) is very sensitive to the adopted masses of the WD and He shell.
As noted in Paper I, the 1.05~\msol\ WD + 0.010~\msol\ He-shell model (magenta curves) provides the best-fit to the early \bv\ color evolution of SN~2018aoz, including the timing and magnitude of the rapidly evolving NRB-phase color.
All of the models, on the other hand,
poorly fit the observed \vi\ color, significantly over-predicting the amount of reddening observed.
This discrepancy could be due to either (1) a line effect, such as differences in the modelled and observed strength of \caii\ features in the $I$-band around 8000~\AA\ due to Ca produced by the initial He-shell detonation \citep{Polin2021apj}, or (2) a continuum effect, such as differences in the color temperature that are influenced by the radioactive heating rate (Section~\ref{sec:thin-shell}).
In addition, none of the current suite of models can fit the early light curves entirely during the first 5 days (right panels), although some can reproduce the observed light curve evolution of the SN at various phases.
From 0.5 days onward, the cyan and magenta models (1.10 and 1.05~\msol\ WDs with 0.011~\msol\ He-shells) provide the best-fits, although both models significantly under-predict the observed luminosity over the earliest $\lesssim$ 0.5 days.
Notably, adding just 1.1 $\times$ 10$^{-3}$~\msol\ of He to the best-fit model is sufficient to match the $\lesssim$ 0.5 day luminosity, as shown by the brown curves (1.05~\msol\ WD + 0.0111~\msol\ He-shell), demonstrating the high sensitivity of the model predictions to the He-shell mass.
However, this model provides a worse fit than the magenta model to the subsequent light curves during 0.5--5 days, the timing and duration of the early color evolution (left panels), and the near-peak observations of SN~2018aoz\ (see Section~\ref{sec:heddpeak} below).
The implications of these discrepancies are discussed in Section~\ref{sec:heddexp} below.
\subsection{Comparison to Maximum-Light Properties}\label{sec:heddpeak}
Figure~\ref{fig:ddetlate} compares the observations of SN~2018aoz\ to the modelled
(1) near-peak spectra (left panel);
(2) pre-peak $B$-band light curves (top-right panel); and
(3) peak $B$-band luminosities and ejecta velocities measured by the peak \siii\ velocities of the He-shell DDet models (bottom-right panel) along with those of other \tase\ and previously published He-shell DDet models from \citet{Polin2019apj}.
All of the models predict clear absorption features in the vicinity of the observed \siii, \sii, \fex, and \caii\ features in SN~2018aoz, as labelled at the top of the left panel.
The models with 1.05~\msol\ WD mass provide the best match to the observed spectral features overall.
As seen in the top-right panel, the models with 1.05~\msol\ and 1.10~\msol\ WD mass both fit the pre-peak $B$-band light curve of SN~2018aoz\ better than the models with smaller WD masses, which predict relatively under-luminous light curves.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{PolinPlot211227.pdf}
\caption{Adaptations of Figure 12 from \citet[][Top]{Polin2019apj} and Figure 9 from \citet[][Bottom]{Polin2021apj} with the addition of SN~2018aoz.
The observed peak $B$-band absolute magnitude, $M_B$ (peak), peak \siii\ velocity, $v_{\rm peak}$ (\siii), peak $B_{\rm max} - V_{\rm max}$ color, and nebular-phase 7290\AA/[\feiii] line ratio of SN~2018aoz\ (colored star) are compared to those of other \tase\ \citep[colored circles;][]{Zheng2018apj, Polin2021apj} and He-shell DDet models (colored squares) with 0.01~\msol\ He-shell and 0.9--1.2~\msol\ WD masses \citep{Polin2019apj}.
The dashed lines represent the peak brightness and velocity relationship predicted by the He-shell DDet models with 0.01~\msol\ He-shells.
The SNe with large $B_{\rm max} - V_{\rm max}$ (red-colored circles; top) and large 7290\AA/[\feiii] (blue-colored circles; bottom) scattered near the dashed lines have been suggested to be \tase\ from He-shell DDets \citep{Polin2019apj, Polin2021apj}.
\label{fig:ddetpolin}}
\end{figure}
In the bottom-right panel, the cyan and red models provide the best matches to the observed peak $B$-band luminosity and ejecta velocity of SN~2018aoz\ (orange star), respectively, while the magenta model with 1.05~\msol\ WD $+$ 0.010~\msol\ He-shell provides the closest match when both features are simultaneously considered. However, we note that there is some separation between SN~2018aoz\ and the models.
In particular, \citet{Polin2019apj} identified two broad populations of SNe within this plot: an apparent clustering at $v_{\rm peak}$~(\siii) $\sim$ 11,000~km~s$^{-1}$ and high peak magnitudes, and a non-clustered population with a tail extending to higher velocities.
The former is mainly composed of CN and 91T-like \tase, while the latter is composed of BL and 91bg-like events (see Figure~\ref{fig:class-parrent}, bottom panel).
As noted by \citet{Polin2019apj}, the He-shell DDet models exhibit a peak brightness and velocity relationship (black dashed line) that generally follows the BL/91bg-like tail.
Several key predicted features of He-shell DDet are also prevalent among SNe from the BL/91bg-like tail,
including sub-Chandrasekhar inferred ejecta masses \citep{Scalzo2019mnras} and lack of C spectral features \citep{Maguire2014mnras}, consistent with a He-shell DDet origin for them \citep{Polin2019apj}.
For SN~2018aoz, the measured values of $M_B$~(peak) = $-$19.319 $\pm$ 0.009 mag and $v_{\rm peak}$~(\siii) = (11.43 $\pm$ 0.12) $\times$ 10$^3$ km~s$^{-1}$ place it close to the boundary between the CN/91T-like cluster and the BL/91bg-like tail populations, consistent with its intermediate nature between CN and BL (Section~\ref{sec:class}), though it is more similar to events from the CN/91T-like cluster overall.
While SN~2018aoz\ could simply be an edge case between these two populations, in some other maximum-light features there is even less agreement between SN~2018aoz\ and the He-shell DDet models.
In Figure~\ref{fig:ddetpolin} (top panel), we add SN~2018aoz\ to Figure 12 from \citet{Polin2019apj}, which plots peak $B$-band magnitude, \siii\ velocity, and \bv\ ``color'', $B_{\rm max} - V_{\rm max}$, for a population of \tase\ (circles) and He-shell DDet models (squares).
\citet{Polin2019apj} noted that most objects from the BL/91bg-like tail population in this plot of peak $B$-band magnitude versus peak \siii\ velocity exhibit red $B_{\rm max} - V_{\rm max}$ consistent with the models, further suggesting a common He-shell DDet origin.
In contrast, SN~2018aoz\ with a relatively blue $B_{\rm max} - V_{\rm max}$ of $-$0.093 $\pm$ 0.013 is more consistent with the clustered CN events.
\subsection{Comparison to Nebular-Phase Properties}\label{sec:heddneb}
The nebular-phase flux ratio of the 7290~\AA\ emission feature to [\feiii] $\lambda$~4658~\AA\ (``7290~\AA/[\feiii]''; Section~\ref{sec:nebspecev})
from $\sim$ 120--320 days since peak has also been suggested to distinguish He-shell DDet events from other normal \tase\ \citep{Polin2021apj}.
He-shell DDet models typically produce substantially more Ca than what is predicted in Chandrasekhar-mass explosions as a result of incomplete nuclear burning in the core of sub-Chandrasekhar-mass WDs, which may be observed as strong [\caii] emission near 7290\AA\ in the optically-thin nebular phase (Section~\ref{sec:nebex}).
In the bottom panel of Figure~\ref{fig:ddetpolin} \citep[adapted from][Figure 9]{Polin2021apj}, we compare SN~2018aoz\ to the same population of \tase\ and set of He-shell DDet models as in the top panel with the color scale now representing their nebular-phase 7290\AA/[\feiii] ratios.
As noted by \citet{Polin2021apj}, SNe from the BL/91bg-like tail show stronger contributions from [\caii], leading to larger 7290\AA/[\feiii] ratios consistent with the He-shell DDet model predictions, while those from the CN/91T-like cluster have smaller ratios.
For SN~2018aoz, its relatively small 7290\AA/[\feiii] ratio of 0.149 $\pm$ 0.007 once again identifies it with the SNe from the CN-subtype cluster, which are inconsistent with the He-shell DDet models.
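The quoted 7290~\AA/[\feiii] ratio and its uncertainty follow from standard first-order error propagation for a quotient of two measured line fluxes. A minimal sketch with hypothetical flux values (arbitrary units, chosen only to yield a ratio near the measured 0.149):

```python
def flux_ratio(f_num, df_num, f_den, df_den):
    """Ratio of two line fluxes with first-order error propagation,
    assuming uncorrelated measurement uncertainties."""
    r = f_num / f_den
    dr = r * ((df_num / f_num) ** 2 + (df_den / f_den) ** 2) ** 0.5
    return r, dr

# Hypothetical fluxes: 7290 A feature and [Fe III] 4658 A
r, dr = flux_ratio(1.49, 0.05, 10.0, 0.30)
```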
\subsection{Search for Carbon}\label{sec:carbon}
Another key prediction of He-shell DDet models is efficient carbon burning, which leaves $\lesssim 10^{-5}\,$\msol\ of unburnt carbon in the SN ejecta \citep{Polin2019apj} in contrast to the substantial amount ($\sim 0.03\,$\msol) typically predicted by some other explosion models such as pure deflagration \citep{Nomoto1984apj} and pulsating delayed detonation \citep{Hoeflich1995apj}.
We search for the \cii~$\lambda$6580~\AA\ absorption feature near \siii~$\lambda$6355~\AA\ that has been used to examine the presence of carbon in \tas\ ejecta \citep{Parrent2011apj, Blondin2012aj, Maguire2014mnras}.
As detailed below, although \cii~$\lambda$6580~\AA\ is expected to be visible if the carbon mass fraction in the photosphere is greater than $\sim 0.005$ \citep{Heringer2019apj}, we detect no such feature in SN~2018aoz\ throughout its evolution. This indicates that the carbon mass fraction is below this value in most layers of the ejecta of SN~2018aoz, compatible with the He-shell DDet prediction.
We note, however, that the absence of unburnt carbon is also possible for some non-DDet explosion models \citep[e.g., pulsating delayed detonations with very slow deflagration speeds;][]{Hoeflich1995apj}.
\begin{figure}[t!]
\epsscale{\scl}
\plotone{DDetSpecPre211227.pdf}
\caption{(Left) The dereddened early spectra of SN~2018aoz\ (black) are compared to the predictions of the best-fit He-shell DDet model (1.05~\msol\ WD + 0.010~\msol\ He-shell; magenta curves) in rest frame, as labeled in days since $B$-band maximum.
For clear comparison, the model spectra have been filtered by box-car convolution, resulting in effective spectral resolutions of $R$ = 300.
(Right) Same as the left panel, but showing unfiltered model spectra zoomed in on the vicinity of the observed \siii\ absorption feature.
The observed (black) spectra are translated downwards (grey) by subtracting a constant value for better comparison with the best-fit He-shell DDet model predictions (magenta).
The approximate minima of the observed \siii\ features and the expected relative positions of the \cii\ absorption feature are shown with black and grey dashed lines, respectively.
Note that \cii\ is not visible in any of the spectra.
\label{fig:ddetspec}}
\end{figure}
Figure~\ref{fig:ddetspec} compares the predicted spectral evolution (magenta curves) of the best-fit He-shell DDet model (1.05~\msol\ WD + 0.010~\msol\ He-shell) to the observed spectra of SN~2018aoz\ until approximately $B$-band maximum.
We find an absence of the \cii~$\lambda$6580~\AA\ feature in all the spectra starting from as early as 4.4 days since first light ($-$11.0 days since $B$-band maximum),
consistent with a lack of unburnt carbon throughout most layers of the ejecta.
However, the lack of earlier spectroscopic observations before 4.4 days probing
carbon in the fast-expanding outer ejecta potentially allows
a substantial amount of unburnt carbon to be
hidden in the outer $\sim$ 30\% of the ejected mass (Equation~\ref{eq:mdiff}).
Note also that \cii\ in \tase\ has been detected as early as $-$15 days
since $B$-band maximum \citep{Parrent2011apj}, earlier than our first spectroscopic observations.
In some cases, the \cii\ feature fades to become undetectable long before $B$-band maximum \citep{Brown2019apj}; however,
in NUV-blue events \citep[e.g., SN~2011fe;][]{Pereira2013aa}, the \cii~$\lambda$6580~\AA\ feature is almost always visible until roughly $B$-band maximum \citep[see][and references therein]{Milne2013apj}.
Since SN~2018aoz\ is an extremely NUV-blue event (Section~\ref{sec:color}), the absence of \cii\ from $-$11 days since $B$-band maximum in the source appears to be an exceptional case.
\subsection{Summary of Comparison to He-Shell Double-Detonation Models}\label{sec:thin-shell}
Our 1-D simulations of He-shell DDets with thin He-shells appear to be capable of reproducing the rapid \bv\ color evolution of the NRB phase in SN~2018aoz, as well as its overall light curves and spectral features, including the absence of C spectral features, with the 1.05~\msol\ WD + 0.010~\msol\ model providing the best-fit.
However, \citet{Polin2019apj,Polin2021apj} also propose three other key observables that can be used to distinguish explosions caused by thin He-shell DDets: (i) early excess emission, (ii) a peak velocity-magnitude-color relationship, and (iii) the strength of nebular-phase [\caii]. Observations of SN~2018aoz\ show important discrepancies when compared to the 1-D models in all three metrics.
First, the current suite of He-shell DDet models cannot entirely reproduce the infant-phase features of SN~2018aoz\ (Section~\ref{sec:heddearly}).
The 1.05~\msol\ WD + 0.010~\msol\ He-shell model is the
only He-shell DDet model that can match the early ($\lesssim$ 5 days since first light) \bv\ color evolution of SN~2018aoz\ associated with surface Fe-peak elements; however, this model under-predicts the observed luminosity of the infant-phase excess emission and produces early ($\lesssim$ 4 days) \vi\ colors that are redder than observed.
While models with very slightly increased He-shell masses can produce more surface radioactive heating to match the observed infant-phase luminosity, such models also produce delayed reddening that is incompatible with the observed NRB.
Second, although SN~2018aoz\ exhibits properties in common with both CN and BL subtypes of Type Ia SNe (Section~\ref{sec:class}), its observed near-peak features appear to be more compatible overall with the bulk of CN events as opposed to the swath of BL/91bg-like events that show similar properties to the He-shell DDet models (Section~\ref{sec:heddpeak}).
In particular, the ashes of the He-shell detonation are expected to redden the near-peak SN spectrum, leading to a red $B_{\rm max} - V_{\rm max}$ color, which is absent in SN~2018aoz.
Third, the absence of [\caii] emission near 7290~\AA\ in the optically-thin nebular phase is inconsistent with the predictions of 1-D sub-Chandrasekhar-mass explosion models \citep{Polin2021apj, Mazzali2015mnras}.
Compared to the He-shell DDet models of \citet{Polin2021apj} and other Type Ia SNe, the nebular-phase 7290~\AA/[\feiii] flux ratio observed in SN~2018aoz\ is much lower than what is predicted in the models as well as what is observed in the BL/91bg-like events suspected to be from He-shell DDet (Section~\ref{sec:heddneb}).
Thus, the observed properties of SN~2018aoz\ appear less compatible with the model predictions overall and show a closer resemblance to SNe that are not suspected of being thin He-shell DDets than SNe that are.
This disfavours the He-shell DDet explosion mechanism for the origin of SN~2018aoz, or at least requires modifications to the standard scenario of thin He-shell DDets as described by \citet{Polin2019apj,Polin2021apj}.
We discuss the remaining possibilities for a He-shell DDet origin of SN~2018aoz\ in Section~\ref{sec:heddexp} below, along with modifications that could help to ameliorate the above-mentioned model discrepancies.
\section{The Nature of SN~2018aoz and Implications for the Origins of Type Ia Supernovae}\label{sec:orig}
\subsection{Nature of the Companion Star}\label{sec:prog}
Our analyses of the early light curves (Section~\ref{sec:kasan}) and nebular-phase spectra (Section~\ref{sec:neb}) of SN~2018aoz\ indicate that the binary companion of its progenitor is most likely to be a secondary WD.
First, our analysis of the early light curves disfavours binary companions larger than low-mass (few solar mass) main sequence stars based on the absence of their ejecta-companion interaction emission, leaving low-mass main sequence stars at large viewing angles ($\gtrsim$ 80\degr), naked He-stars, and WDs as the most likely possibilities for the companion.
Note that all three possibilities have been predicted to be involved in \tas\ explosions \citep{Maoz2014araa}.
Second, our modelling of the nebular-phase spectra of SN~2018aoz\ further disfavours low-mass main sequence stars and naked He-stars as follows.
In the single-degenerate scenario, the SN ejecta is expected to strip/ablate $\sim$ 0.1--0.5~\msol\ of H-/He-rich material from the companion \citep{Dessart2020aa}, and most models predict that this leads to H emission in the nebular phase \citep{Mattila2005aap, Botyanszki2018apj, Dessart2020aa} while one model also predicts He emission \citep{Botyanszki2018apj}.
Our modelling of the nebular-phase spectra permits $\lesssim$ 10$^{-2}$~\msol\ of each element, disfavouring the single-degenerate scenario for SN~2018aoz, consistent with what has been found in 94\% of \tase\ \citep{Tucker2020mnras}.
We note, however, that disagreements between current model predictions for the emission from early ejecta-companion interactions \citep{Kasen2010apj, Kutsuna2015pasj} and from stripped/ablated materials in the nebular phase \citep{Botyanszki2018apj, Dessart2020aa} precludes a definitive determination of the companion nature based on these analyses for now.
Although we cannot rule out a single-degenerate progenitor for SN~2018aoz, we can almost completely rule out the case for a red giant companion, as predicted in the classical single-degenerate scenario.
Such a companion likely requires an extreme viewing angle in order to hide the ejecta-companion interaction in the early phase and emission from stripped/ablated H in the nebular phase, as supported by multiple models \citep{Kasen2010apj, Kutsuna2015pasj, Mattila2005aap, Botyanszki2018apj, Dessart2020aa}.
The He-star channel is also less favourable for SN~2018aoz\ due to the short ($\lesssim$ 0.2 Gyr) delay-time of the channel after star formation \citep{Wang2010aa}.
This delay-time is difficult to reconcile with the immediate environment of the SN due to the lack of recent star formation therein.
Metal abundance ratios $\gtrsim$ 40\arcsec\ from the center of \ngc\ reflect an old ($\sim$ 8--14 Gyr) stellar age in those regions \citep{Kim2012apj}, while recent star formation in the halo of \ngc\ where
SN~2018aoz\ was found is even less feasible.
The lack of recent star formation at the SN location is also supported by the lack of local dust extinction \citep{Sakurai2013eps}, as evidenced by the absence of \nai\ doublet lines at the host galaxy redshift (Paper I).
\subsection{Origin of the Infant-Phase Excess Emission}\label{sec:orgex}
We have shown that three mechanisms are capable of producing the observed infant-phase excess emission in SN~2018aoz\ (Section~\ref{sec:early}): (1) radioactive heating by surface Fe-peak elements; (2) ejecta interaction with the binary companion; or (3) ejecta interaction with CSM.
Since the presence of surface Fe-peak elements is also required to explain the observed $B$-band suppression, it is likely that at least \emph{some} of the emission is due to radioactive heating by those same Fe-peak elements.
However, the luminosity produced will depend sensitively on both the specific nucleosynthetic products in the outer ejecta (which will vary with explosion mechanism) as well as the degree of gamma-ray trapping. Indeed, as shown in Section~\ref{sec:heddearly}, it is possible for physical models to produce the level of $B$-band suppression needed to explain the NRB, while under-predicting the infant-phase luminosity.
Thus, ejecta interactions with the binary companion and/or CSM may also contribute to---and potentially dominate---the luminosity at early times.
Within this context, we note that
both the ejecta-companion and ejecta-CSM interaction cases for the origin of the infant-phase excess emission in SN~2018aoz\
are compatible with its favoured double-degenerate progenitor (Section~\ref{sec:prog}).
In the case of ejecta-companion interaction, the observed infant-phase excess emission requires a small binary companion size, consistent with either a WD, He-star, or low-mass (few solar mass) main sequence star (Section~\ref{sec:kasfit}).
In the case of ejecta-CSM interaction, CSM with small mass and radius is required, consistent with what is expected from the companion accretion process (Section~\ref{sec:csm}).
The mass of CSM needed for the observed infant-phase excess emission ($\gtrsim$ 10$^{-3}$~\msol) and our strongest constraints on swept-up H from the nebular-phase spectra ($\lesssim$ 4 $\times$ 10$^{-4}$~\msol; Table~\ref{tab:nebflux}) further
require the fractional mass of H in the total CSM mass to be $\lesssim$ 50\%.
These mass requirements are compatible with H-poor CSM originating from the accretion process of either a WD or He-star companion.
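The H-fraction bound above follows from simple division of the two quoted mass limits:

```python
M_CSM_min = 1e-3   # minimum CSM mass for the infant-phase excess emission (Msun)
M_H_max = 4e-4     # nebular-phase upper limit on swept-up H (Msun)

# Maximum H mass fraction of the CSM: 4e-4 / 1e-3 = 0.4,
# within the <~50% bound quoted in the text
f_H_max = M_H_max / M_CSM_min
```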
\subsection{Implications for the Asymmetric Chandrasekhar-Mass Explosion Mechanism}\label{sec:chexp}
One possible origin of surface Fe-peak elements associated with the observed $B$-band suppression in SN~2018aoz\ is subsonic mixing in an asymmetric Chandrasekhar-mass explosion (Paper I), which has long been theorized to produce normal \tase\ and their observed properties.
In particular, the relationship between the observed light curves of \tase\ and \dm15\ \citep[i.e., the Phillips relation;][]{Phillips1999aj}, as well as the residual differences in their peak luminosities, peak colors, and nebular-phase [\feii] and [\niii] line shifts, have been found to be attributable to viewing angle effects and the details of the deflagration-to-detonation transition in the model \citep{Kasen2009nat, Maeda2010natur, Maeda2010apj, Maeda2011mnras}.
A Chandrasekhar-mass explosion is also the main scenario that is thought to produce \tase\ with weak [\caii] emission in the nebular phase \citep{Polin2021apj, Mazzali2015mnras} such as SN~2018aoz, since complete nuclear burning in the core of high-mass WDs produces little Ca.
However, as explained in Section~\ref{sec:nebex}, both the short rise-time and presence of surface Fe-peak elements in SN~2018aoz\ point to a viewing angle where the ejecta core is receding from the observer under the asymmetric Chandrasekhar-mass explosion scenario.
Reconciling the receding motion of the ejecta core with the observed blueshifts of nebular-phase [\feii] and [\niii] in SN~2018aoz\ may thus require those lines to remain optically thick until $\sim$ 380 days post-peak.
Between low-mass (few solar mass) main sequence and WD companions for the progenitor system of SN~2018aoz\ (Section~\ref{sec:prog}), the former is more compatible with the standard Chandrasekhar-mass explosion model, though it would require modifications to models that predict material will be stripped/ablated from the companion and visible at late times \citep{Dessart2020aa, Botyanszki2018apj}.
In contrast, if the companion is a WD, then this scenario faces a number of constraints due to the accretion process between WDs often being dynamically unstable \citep{Shen2015apj} and tending to result in either a He-shell DDet or a violent merger.
The former (= He-shell DDet) leads to a different explosion mechanism as discussed below (Section~\ref{sec:heddexp}), while the latter (= violent merger) is disfavoured by the absence of unburnt O and He signatures in the nebular phase (Section~\ref{sec:neb}).
To avoid dynamically evolving towards He-shell DDet or violent merger, the case of a Chandrasekhar-mass explosion for SN~2018aoz\ under double-degeneracy may require a relatively massive and rare primary WD that is
already near the Chandrasekhar mass ($\sim$ 1.4~\msol) at the start of accretion,
significantly more massive than most WDs, whose masses lie in the range of
0.5--0.8~\msol\ \citep{Tremblay2016mnras}.
Alternatively, violent merger is still possible if the nebular-phase O and He emission lines are hidden by metal line-blanket absorption.
In this case, the observed infant-phase excess emission would need to be from surface radioactive heating since a pre-merger explosion is required for both (1) ejecta-companion interactions to occur and (2) ejecta-CSM interaction properties to be compatible with the infant-phase observations (Section~\ref{sec:csm}).
\subsection{Implications for the He-shell Double-Detonation Explosion Mechanism}\label{sec:heddexp}
He-shell DDet is another explosion mechanism that naturally explains the presence of surface Fe-peak elements associated with the observed $B$-band suppression in SN~2018aoz\ (Paper I).
1-D simulations of thin He-shell DDets (Section~\ref{sec:hedd}) are able to reproduce the rapid \bv\ color evolution of the NRB phase in SN~2018aoz, as well as its overall spectroscopic features and light curves, with the 1.05~\msol\ WD + 0.010~\msol\ model providing the best-fit.
Thus, a He-shell DDet origin for SN~2018aoz\ would confirm the predictions of recent theoretical models indicating that detonations of He-shells as thin as 0.01~\msol\ can successfully trigger \tase, including normal events.
In addition to the presence of surface Fe-peak elements, SN~2018aoz\ also exhibits a number of other features that may be explained by a He-shell DDet origin.
This includes (1) the absence of C spectral features (Section~\ref{sec:carbon}),
(2) the short observed rise time,
which can be explained by a sub-Chandrasekhar ejecta mass (Paper I),
and (3) the small inferred companion size (Section~\ref{sec:prog}),
as the typical progenitor channels for He-shell DDets involve accretion from a He-star, He-WD, or He/CO hybrid companion \citep{Shen2014apj, Guillochon2010apj}.
However, as detailed in Section~\ref{sec:thin-shell}, SN~2018aoz\ \emph{fails} several additional diagnostic criteria that are used to recognize this explosion mechanism, including (i) the details of the early excess emission, (ii) the colors at maximum light, and (iii) the strength of the nebular-phase [\caii] feature.
Thus, for a He-shell DDet to remain a viable option for the origin of SN~2018aoz, a physical scenario that differs from standard thin He-shell DDet models would be required.
The first two discrepancies may be attributable to differences in the ashes of the initial He-shell detonation between the models and SN~2018aoz, which would impact both the radioactive heating rate in the infant phase and spectroscopic features before maximum light.
Different nucleosynthetic yields may be possible if the evolutionary path leads to a He-shell with different initial conditions (e.g., composition, density) from those adopted by the models.
The under-prediction of early luminosity by the best-fit He-shell DDet model may also indicate that an additional source of luminosity beyond radioactive heating is required at early times (e.g. ejecta interaction with the companion or CSM).
In contrast, production of nebular-phase [\caii] emission in sub-Chandrasekhar-mass explosions primarily depends on the total mass of the progenitor WD and the relative distribution of Ca and radioactive Fe-peak elements in the ejecta core.
Recent multi-dimensional He-shell DDet simulations have found that the explosion mechanism is inherently non-spherically symmetric \citep[e.g.,][]{Boos2021apj}, and its viewing angle effects have been suggested to explain the different ejecta velocities of \tase\ from the CN and BL subtypes as well as their differences in peak colors and nebular-phase [\feii] line shifts \citep{Boos2021apj, Li2019apj}.
The asymmetric explosion can also shift the distributions of Ca and Fe-peak elements.
As detailed in Section~\ref{sec:nebex}, nuclear burning in an asymmetric sub-Chandrasekhar-mass explosion is more complete near the ignition point of central carbon, which results in the distribution of Ca being offset towards the opposite side of the ejecta core.
Thus, a viewing angle where the Ca-rich region is shielded by the core may result in [\caii] being hidden if [\feii] and [\niii] lines remain optically thick in the nebular phase.
Weak [\caii] emission may also result from a sub-Chandrasekhar-mass explosion with a
higher total mass \citep[e.g., 1.26~\msol;][]{Mazzali2015mnras}.
However, as with the case of a Chandrasekhar-mass explosion (Section~\ref{sec:chexp}), reconciling the short rise time of SN~2018aoz\ with the high ejecta mass in this case may still require optically thick [\feii] and [\niii] lines.
More detailed multi-dimensional modelling is necessary to ascertain if such effects can explain the observations of SN~2018aoz.
\subsection{The \d6s Scenario}\label{sec:d6exp}
One specific He-shell DDet scenario that may yield initial conditions that vary from the hydrostatic models of \citet{Polin2019apj,Polin2021apj} and also arises from the favoured double-degenerate progenitor of SN~2018aoz\ is a ``dynamically-driven double-degenerate double-detonation'', or D$^{\wedge}$6 \citep{Shen2018apj}, scenario.
D$^{\wedge}$6 is a proposed origin for \tase\ wherein dynamic (unstable) accretion during the coalescence of a double-degenerate binary composed of two WDs leads to a \tas\ triggered by He-shell DDet.
While detailed models would be necessary to assess the overall consistency of \d6s\ with observations of SN~2018aoz,
motivated by the possible requirement of additional emission sources beyond radioactive heating at early times (Section~\ref{sec:heddexp}), we show below that \d6s\ also naturally provides infant-phase emission at the level observed in SN~2018aoz\ via ejecta interactions with CSM and/or the companion.
First, due to the dynamical nature of the accretion, a torus of CSM is expected to be present around the primary WD at the time of explosion \citep[e.g.,][]{Guillochon2010apj,Pakmor2013apj}.
As detailed in Section~\ref{sec:csm}, the small mass and radius ($\lesssim$ 0.007~\msol\ and $\lesssim$ 10$^{10}$~cm) of CSM required to fit the observed infant-phase excess emission is compatible with CSM properties predicted in hydrodynamic simulations of this accretion process.
Note that the fitted CSM properties were obtained by assuming an ejecta mass of 1.05~\msol, which is favoured by He-shell DDet models (Section~\ref{sec:heddsim}).
Second, models have shown that when all nuclear reactions are considered \citep{Shen&Moore2014apj} the He-shell DDet of the primary can occur during the early phases of dynamical accretion, before the companion WD has been fully disrupted \citep{Pakmor2013apj}. Thus, for the D$^{\wedge}$6 scenario, ejecta interaction with the companion should occur.
Dynamically unstable mass transfer between two WDs is expected for mass ratios $\gtrsim$ 0.2 \citep{Shen2018apj}, corresponding to companion masses of $\gtrsim$ 0.2 M$_\odot$ for a 1.05 M$_\odot$ primary.
Adopting the temperature of $\sim$ 3.0$\times$ 10$^4$~K for a tidally-heated He-WD in Roche overflow and the corresponding mass-radius relationship \citep{Panei2007mnras}, the expected separation for a 0.2 M$_\odot$ He-WD is $\sim$ 1.2 $\times$ 10$^{10}$~cm while larger companion masses lead to smaller separation distances.
These separations overlap with the lower end of the binary separation distances (6.8 $\times$ 10$^9$~cm; Section~\ref{sec:kasfit}) that can fit the observed infant-phase excess emission for an assumed ejecta mass of 1.05~\msol.
In addition, rapid mass loss during dynamical accretion is expected to both widen the binary and inflate the donor WD \citep{Kremer2015apj}, indicating that both higher mass He-WDs and He/CO hybrids could also provide non-negligible contribution to the infant-phase excess emission of SN~2018aoz\ in the D$^{\wedge}$6 scenario.
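As a rough cross-check of these numbers (our own sketch, not part of the original analysis), the separation at which a donor of mass $M_2$ just fills its Roche lobe around an $M_1 = 1.05$~\msol\ primary can be estimated with the Eggleton (1983) approximation for the Roche-lobe radius. The donor radius of $3 \times 10^9$~cm used below is an assumed illustrative value standing in for the \citet{Panei2007mnras} mass-radius relation, which is not reproduced here; the function names are ours.

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) fit for R_L / a, with q = M_donor / M_accretor."""
    q23 = q ** (2.0 / 3.0)
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q13))

def overflow_separation(m_donor, m_accretor, r_donor_cm):
    """Separation (cm) at which the donor just fills its Roche lobe."""
    return r_donor_cm / roche_lobe_fraction(m_donor / m_accretor)

# assumed radius for a tidally heated 0.2 Msun He-WD (illustrative value)
a = overflow_separation(0.2, 1.05, 3.0e9)
```

With this assumed radius the estimate lands near the quoted $\sim$ 1.2 $\times$ 10$^{10}$~cm; since $R_L/a$ grows with $q$, more massive donors of similar radius overflow at smaller separations, consistent with the statement above.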
We note that if either ejecta-companion or ejecta-CSM interaction are the origin of the observed infant-phase excess emission under \d6s, two distinct physical processes would be required to
produce the infant-phase features of SN~2018aoz: line-blanket absorption by surface Fe-peak elements produced in the He-shell DDet; and shock interaction from the ejecta colliding with either the companion or CSM.
While these processes are naturally predicted together in the D$^{\wedge}$6 scenario at the low luminosity level probed by the infant-phase observations of SN~2018aoz, we emphasize that there are currently no theoretical models that consider the observational outcomes of both processes simultaneously.
\subsection{Implications for the Explosion Mechanisms of Normal Type Ia SNe}\label{sec:tase}
The exact explosion mechanism of SN~2018aoz\ remains uncertain---as neither asymmetric Chandrasekhar mass explosion nor He-shell DDet models are currently capable of explaining all of the observations.
However, whatever its nature, the explosion mechanism of SN~2018aoz\ appears to produce a \tas\ with normal properties after the infant phase, indicating that it is a potentially prevalent explosion mechanism among \tase.
As shown in Section~\ref{sec:class}, SN~2018aoz\ is intermediate between
the CN and BL subtypes of normal \tase---corresponding to 38\% and 30\% of the entire \tas\ population \citep{Blondin2012aj}, respectively---and
shares spectroscopic similarities with both groups.
Thus, if the reported infant-phase features first identified in SN~2018aoz\ are found among spectroscopically similar SNe,
then an explosion mechanism capable of producing normal \tase\ with surface Fe-peak elements may be responsible for up to 68\% of \tase\ from these two normal subtypes.
\section{Summary and Conclusion}\label{sec:conc}
The observations of SN~2018aoz\ starting from the infant phase ($<$ 1 day since first light) and continuing to the late nebular phase ($\gtrsim$ 200 days since peak) have provided one of the most extensive sets of clues for understanding the origin and evolution of a \tas.
We summarize our main results and conclusions as follows.
\begin{itemize}
\item The near-peak light curves and spectroscopic features of SN~2018aoz\
show that it is intermediate between the CN/NV and BL/HV subtypes of normal \tase, manifesting its nature as a normal event.
The evolution of its \bv\ and \vi\ colors after the infant
phase is also consistent with that of other normal \tase,
while the infant-phase color evolution
is revealed for the first time, showing the rapid reddening of both colors over the first $\sim$ 0.5 days (or ``NRB'').
SN~2018aoz\ belongs to the NUV-blue group of normal \tase\ based on its UV-optical
colors, with some of the bluest UV-optical colors reported in the group prior to $B$-band maximum.
No C spectral features are detected throughout the SN evolution beginning from the first spectrum $\sim$ 4.4 days since first light, which is exceptional among NUV-blue events while similar to typical BL events.
\item The early $BVi$-band light curves of SN~2018aoz\ during 0--7 days consist of
three components wherein two infant-phase features are embedded in an underlying
power-law component that rises overall during the period.
The two infant-phase features are
(1) $B$-band plateau during $\sim$ 0--1 days (Paper I) and
(2) excess emission during 0.08--0.42 days, together resulting in the NRB color evolution.
\item The $B$-band plateau feature has been attributed to $B$-band suppression by surface Fe-peak elements (Paper I), while we find that three mechanisms can contribute to the observed infant-phase excess emission: (1) radioactive heating by the surface Fe-peak elements; (2) ejecta shock interaction with the binary companion; and (3) ejecta shock interaction with CSM.
\item Shock breakout is unlikely to be a significant contributor to the infant-phase excess emission.
\item A small companion---such as a WD, He-star, or low-mass (few solar mass)
main sequence star---is required to attribute the infant-phase excess
emission to ejecta-companion interaction,
and the absence of H and He emission lines throughout
the nebular phase favours the WD companion.
The presence of a red giant companion is particularly incompatible with the observed luminosity over the first few days, while the environment of the SN in the halo of the NGC~3923 elliptical galaxy argues against short delay-time companions, including He-stars as well as high-mass giants.
\item Attributing the infant-phase excess emission to ejecta-CSM interaction requires a CSM distribution with small total mass ($\lesssim$ 0.007~\msol) and radius ($\lesssim$ 10$^{10}$~cm) at the time of explosion,
more consistent with what is expected during the binary accretion process
than after a violent merger.
The presence of CSM from a violent merger is further disfavoured by the absence of He and O lines in the nebular phase.
\item The absence of nebular-phase [\caii] emission and the observed blueshifts of [\feii] and [\niii] are not well explained by explosions of either high- or low-mass WDs. Both cases may require [\feii] and [\niii] lines to remain optically thick until $\sim$ 380 days since peak in addition to explosion asymmetry.
\item Our 1-D thin He-shell DDet simulations are capable of explaining the observed \bv\ NRB color evolution associated with the $B$-band suppression by surface Fe-peak elements, overall evolution of optical luminosity and spectra, and absence of C spectral features in SN~2018aoz. However, the model that best matches the observed \bv\ color evolution of the SN fails to reproduce its infant-phase excess emission and early \vi\ color. In addition, in a number of observed properties that have been suggested to identify the explosion mechanism, including $B_{\rm max} - V_{\rm max}$ and nebular-phase [\caii]/[\feiii] line ratio, SN~2018aoz\ is more similar to the bulk of CN \tase, as opposed to the population of BL/91bg-like SNe that closely resemble the He-shell DDet models. Modifications to the standard thin He-shell DDet scenario (e.g., explosion asymmetry) may ameliorate some of these discrepancies.
\item Both asymmetric Chandrasekhar-mass explosion and the \d6s\ scenario accommodate the presence of surface Fe-peak elements and the observed infant-phase excess emission in SN~2018aoz. However, neither model is currently capable of explaining all of the observations.
\item The normal Type Ia nature of SN~2018aoz\ and its spectroscopic similarity with a significant fraction of the \tas\ population indicates that SN~2018aoz\ shares a common origin with at least some fraction of normal events, assuming that the reported infant-phase features first identified in SN~2018aoz\ are found among spectroscopically similar SNe.
\end{itemize}
Our analyses highlight the importance of (1) deep, high-cadence survey observations that are capable of probing the low-luminosity signals of \tase\ in their earliest phases and (2) follow-up observations of light curves and spectra over the entire evolution of the SN until the nebular phase.
As the only \tas\ to date with sufficiently early (= $\sim$ 0--0.5 days) and deep (= $\sim -$10 to $-$12 absolute AB magnitudes) multi-band observations to detect the infant-phase $B$-band suppression and excess emission, SN~2018aoz\ can provide an important point of reference for future efforts to model crucial physical processes in the infancy of Type Ia SN explosions.
\acknowledgments
This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia.
This research is also based on observations obtained at the international Gemini-S Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
The Gemini-S observations were obtained under the K-GMT Science Program (PID: GS-2018A-Q-117 and GS-2018B-Q-121) of KASI and acquired through the Gemini Observatory Archive at NSF’s NOIRLab.
This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
This work makes use of observations from the Las Cumbres Observatory (LCO) global telescope network. The LCO team is supported by NSF grants AST-1911225 and AST-1911151, and NASA Swift grant 80NSSC19K1639.
The Swift observations were triggered through the Swift GI program 80NSSC19K0316. SOUSA is supported by NASA's Astrophysics Data Analysis Program through grant NNX13AF35G.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The Computational HEP program in The Department of Energy's Science Office of High Energy Physics provided simulation resources through Grant \#KA2401022. This research used resources of the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
D.-S.M., M.R.D., and C.D.M. are supported by Discovery Grants from the Natural Sciences and Engineering Research Council of Canada.
D.-S.M. was supported in part by a Leading Edge Fund from the Canadian Foundation for Innovation (project No. 30951).
M.R.D. was supported in part by the Canada Research Chairs Program, the Canadian Institute for Advanced Research (CIFAR), and the Dunlap Institute at the University of Toronto.
D.J.S. acknowledges support by NSF grants AST-1821987, 1821967, 1908972 and from the Heising-Simons Foundation under grant \#2020-1864.
S.G.-G. acknowledges support by FCT under Project CRISP PTDC/FIS-AST-31546 and Project UIDB/00099/2020.
S.C.K., Y.L., and H.S.P. acknowledge support by KASI under the R\&D program (Project No. 2022-1-868-04) supervised by the Ministry of Science and ICT.
H.S.P. was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT, Ministry of Science and ICT; No. NRF-2019R1F1A1058228).
P.J.B. acknowledges support from the Swift GI program 80NSSC19K0316.
S.V., Y.D., and K.A.B. acknowledge support by NSF grants AST-1813176 and AST-2008108.
C.M. acknowledges support by NSF grant AST-1313484.
R.L.B. acknowledges support by NASA through Hubble Fellowship grant \#51386.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.
A.G.-Y.'s research is supported by the EU via ERC grant No. 725161, the ISF GW excellence center, an IMOS space infrastructure grant and BSF/Transformative and GIF grants, as well as the André Deloro Institute for Advanced Research in Space and Optics, the Schwartz/Reisman Collaborative Science Program and the Norman E. Alexander Family M Foundation ULTRASAT Data Center Fund, Minerva and Yeda-Sela; A.G.-Y. is the incumbent of The Arlyn Imberman Professorial Chair.
L.G. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovaci\'on (MCIN), the Agencia Estatal de Investigaci\'on (AEI) 10.13039/501100011033, and the European Social Fund (ESF) "Investing in your future" under the 2019 Ram\'on y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, and from Centro Superior de Investigaciones Cient\'ificas (CSIC) under the PIE project 20215AT016.
G.P. acknowledges support by ANID -- Millennium Science Initiative -- ICN12\_009 and by FONDECYT Regular 1201793.
J.A. is supported by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the 2nd Call of ``Science and Society'' Action Always strive for excellence -- ``Theodoros Papazoglou'' (Project Number: 01431).
\vspace{5mm}
\software{SNooPy \citep{Burns2011aj}, Castro \citep{Almgren2010apj, Zingale2018jphc}, Sedona \citep{Kasen2006bapj}, SNAP (\url{https://github.com/niyuanqi/SNAP}), IRAF}
Complex network architectures and dynamical processes
taking place on them play a central role in current research
\cite{Havlin2010,Newman2010,Estrada2011}.
Since the 1960s,
mathematical studies of networks
were focused on model systems
such as the
Erd{\H o}s-R\'{e}nyi (ER) network
\cite{Erdos1959,Erdos1960,Erdos1961},
which exhibits a Poisson degree distribution
of the form
$\pi(k|c) = e^{-c} c^k /k!$,
where $c$ is the mean degree
\cite{Bollobas2001}.
In an ER network of $N$ nodes,
each pair of nodes is connected with probability $p$,
where $p=c/(N-1)$.
In fact, ER networks
form a maximum entropy ensemble under the constraint
that the mean degree is fixed
\cite{Bauer2002,Bogacz2006,Bianconi2008,Bianconi2009}.
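As a quick numerical illustration (our own sketch, not part of the analysis below), the Poisson form of the ER degree distribution is easy to verify directly; all function names are ours.

```python
import math
import random

def erdos_renyi_degrees(n, c, rng):
    """Degree sequence of a G(n, p) graph with p = c/(n-1)."""
    p = c / (n - 1)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:       # connect each pair with probability p
                deg[i] += 1
                deg[j] += 1
    return deg

def poisson_pmf(k, c):
    """pi(k|c) = e^{-c} c^k / k!"""
    return math.exp(-c) * c ** k / math.factorial(k)

rng = random.Random(1)
deg = erdos_renyi_degrees(2000, 4.0, rng)
empirical = [deg.count(k) / len(deg) for k in range(12)]
reference = [poisson_pmf(k, 4.0) for k in range(12)]
```

For $N = 2000$ and $c = 4$ the empirical degree frequencies agree with $\pi(k|c)$ to within sampling noise.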
In the 1990s, the growing availability of data on large biological, social and
technological networks revolutionized the field.
Motivated by the observation that the World Wide Web
\cite{Albert1999} and
scientific citation networks
\cite{Redner1998}
exhibit power-law degree distributions,
Barab\'asi and Albert (BA) introduced a simple model
that captures the essential growth dynamics of such networks
\cite{Barabasi1999,Albert2002}.
A key feature of the BA model is the
preferential attachment mechanism, namely, the tendency of new nodes to
attach preferentially to high degree nodes.
Using mean-field equations and computer simulations
it was shown that the combination of growth and preferential attachment leads to the
emergence of scale-free networks with power-law degree distributions
\cite{Barabasi1999}.
This result was later confirmed and generalized using a more rigorous
formulation based on the master equation
\cite{Krapivsky2000,Dorogovtsev2000}.
It was subsequently found that a large variety of empirical
networks exhibit such scale-free structures,
which are remarkably different from ER networks
\cite{Albert2002,Barabasi2009}.
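The growth-plus-preferential-attachment dynamics described above can be sketched as follows (our own minimal implementation, using the standard trick that sampling uniformly from a list in which each node appears once per attached link selects nodes with probability proportional to their degree):

```python
import random

def ba_degrees(n, m, rng):
    """Grow a Barabasi-Albert network with n nodes, m links per new node,
    and return the final degree sequence."""
    degree = [0] * n
    stubs = []                     # each node appears here once per attached link
    targets = list(range(m))       # seed: the first new node links to nodes 0..m-1
    for source in range(m, n):
        for t in targets:
            degree[t] += 1
            degree[source] += 1
            stubs.extend([t, source])
        # preferential attachment: uniform sampling from `stubs` picks a node
        # with probability proportional to its degree
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(stubs))
        targets = list(chosen)
    return degree

rng = random.Random(2)
deg = ba_degrees(5000, 2, rng)
```

The resulting degree sequence is heavy tailed: the largest degree lies far above the mean degree $\simeq 2m$, in contrast to the narrow Poisson distribution of an ER network of the same mean degree.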
In many of these networks the growth phase is not likely to proceed indefinitely.
Moreover, networks may be exposed to node deletion processes due to
node failures, attacks and epidemics, which may eventually halt the expansion
phase and induce the contraction and eventual collapse of the network.
Since network growth is a kinetic nonequilibrium process,
it is not reversible, namely, the contraction process
is not the same as the growth process played backwards in time.
A particularly interesting example of the contraction phase can be seen in
the field of social networks. Such networks may lose users due to loss of
interest, concerns about privacy or due to their migration to other social networks
\cite{Torok2017,Lorincz2019}.
Another example of great practical importance is the cascading failure
of power-grids
\cite{Daqing2014,Schafer2018}.
Infectious processes such as epidemics that spread in a network
\cite{Satorras2001,Satorras2015}
lead to the
contraction of the subnetwork of uninfected nodes and may thus be considered
as network contraction processes.
Similarly, network immunization schemes
\cite{Satorras2002}
also belong to the class of network contraction
processes because they induce the contraction of the subnetwork of
susceptible nodes.
Three generic scenarios of network contraction were identified:
the scenario of random node deletion that describes the
random, inadvertent failure of nodes, the
scenario of preferential node deletion that describes intentional attacks that
are more likely to focus on highly connected nodes
and the scenario of propagating node deletion that describes viral and
infectious processes that spread like epidemics.
It was found that scale-free networks are resilient to attacks targeting
random nodes, but are vulnerable to attacks that target high degree nodes or hubs.
Using the framework of percolation theory,
it was shown that when the number of deleted nodes
exceeds some threshold, the network breaks down into disconnected components
\cite{Albert2000,Cohen2000,Cohen2001,Braunstein2016,Zdeborova2016}.
However, the evolution of the network structure
throughout the contraction phase was not addressed.
In a recent paper we analyzed the
structural evolution of networks during the contraction process
\cite{Tishby2019}.
To this end we derived a master equation for
the time dependence of the degree distribution
during network contraction via the random deletion, preferential
deletion and the propagating deletion scenarios.
Using the relative entropy and the degree-degree correlation function
we showed that the ER graph structure,
which exhibits a Poisson degree distribution, is an asymptotic structure for
these network collapse scenarios, in analogy to the way in which the
scale-free structure is an asymptotic solution for the
preferential attachment growth scenario.
In this paper we use the relative entropy to provide a rigorous proof
that the ER structure is an attractive solution for the three contraction
scenarios. This means that the ER structure is a universal asymptotic
structure for contracting networks.
For simplicity, we consider initial networks drawn from configuration model network ensembles
that exhibit a desired degree distribution $P_0(k)$ and no degree-degree correlations.
During the contraction process the degree distribution of the network evolves.
We denote the degree distribution at time $t$ by $P_t(k)$ and its mean
degree by $\langle K \rangle_t$.
We use the relative entropy
$S_t=S[P_t(k) || \pi(k|\langle K \rangle_t)]$
as a distance measure between
the degree distribution $P_t(k)$ of the contracting
network and the corresponding Poisson distribution
$\pi(k|\langle K \rangle_t)$ with the same mean degree $\langle K \rangle_t$.
Using this measure we obtain rigorous results for
the convergence of the degree distribution of contracting
networks towards a Poisson distribution.
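A minimal sketch (ours, with the relative entropy measured in nats) of how $S_t$ can be evaluated from a sampled degree sequence:

```python
import math
from collections import Counter

def relative_entropy_to_poisson(degrees):
    """Kullback-Leibler divergence S[P || pi(k|<K>)] of the empirical
    degree distribution P(k) from the Poisson distribution with the
    same mean degree (natural logarithm)."""
    n = len(degrees)
    mean = sum(degrees) / n
    s = 0.0
    for k, count in Counter(degrees).items():
        p = count / n
        pois = math.exp(-mean) * mean ** k / math.factorial(k)
        s += p * math.log(p / pois)
    return s
```

Since $\pi(k|c) > 0$ for every $k \ge 0$, the divergence is always finite, and it vanishes only when $P(k)$ is itself Poisson. For a random regular graph, where all degrees equal $c$, it evaluates to $-\ln \pi(c|c)$.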
To this end, we derive an equation for the time derivative $dS_t/dt$
of the relative entropy during network contraction.
This equation can be expressed in the
form $dS_t/dt = \Delta_{\rm A}(t) + \Delta_{\rm B}(t)$.
We show that $\Delta_{\rm A}(t) < 0$ for any degree distribution.
We also show that $\Delta_{\rm B}(t) < 0$
for degree distributions whose tails decay more slowly
than the tail of the Poisson distribution with the same mean degree.
This condition is generically satisfied by the heavy-tail distributions that emerge
from network growth processes.
In contrast, in networks that exhibit narrow degree distributions
the $\Delta_{\rm B}(t)$ term turns out to be small and has little
effect on the convergence, which is dominated by $\Delta_{\rm A}(t)$.
This implies that the relative entropy
decreases monotonically during the contraction process.
Since the relative entropy satisfies $S_t \ge 0$ for any degree
distribution $P_t(k)$, while equality is obtained only for
$P_t(k) = \pi(k|\langle K \rangle_t)$
we conclude that the degree distributions of contracting networks
converge towards a Poisson distribution.
This conclusion is corroborated by the fact that the relative
entropy provides an upper bound for the total variation distance,
which is a standard measure of the difference between
probability distributions.
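Explicitly, this bound is Pinsker's inequality which, with the relative entropy measured in nats and the total variation distance defined as half of the $\ell_1$ distance, reads

```latex
\left\| P_t - \pi(\cdot|\langle K \rangle_t) \right\|_{\rm TV}
\equiv \frac{1}{2} \sum_{k=0}^{\infty}
\left| P_t(k) - \pi(k|\langle K \rangle_t) \right|
\le \sqrt{\frac{S_t}{2}} ,
```

so $S_t \rightarrow 0$ implies convergence of $P_t(k)$ to the Poisson distribution in total variation.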
We demonstrate the convergence for configuration model networks with a
degenerate degree distribution (random regular graphs), exponential
degree distribution and power-law degree distribution (scale-free networks).
The paper is organized as follows.
In Sec. II we present the three generic network contraction scenarios
studied in this paper.
In Sec. III we present the master equation
and show that the Poisson distribution is a solution of
the master equation for the three contraction scenarios.
In Sec. IV we present the relative entropy and express it in
terms of the Shannon entropy and the cross-entropy.
In Sec. V we present rigorous results showing that the
relative entropy decays to zero in any of the three contraction scenarios.
In Sec. VI we present analytical results and computer simulations for
the contraction of configuration model networks with a degenerate degree
distribution (random regular graphs), an exponential degree distribution
and a power-law degree distribution (scale-free networks).
The results are discussed in Sec. VII and summarized in Sec. VIII.
\section{Network contraction processes}
We consider network contraction processes in which at each time
step a single node is deleted together with its links.
The initial network consists of $N_0$ nodes, so at time $t$ the
network size is reduced to $N_t = N_0 - t$ nodes.
The deletion of a node of degree $k$, whose neighbors are of
degrees $k'_i$, $i=1,2,\dots,k$,
eliminates the deleted node from the degree sequence
and reduces the degrees of its neighbors to
$k'_i-1$, $i=1,2,\dots,k$.
The node deleted at each time step is selected randomly.
However, the probability of a node to be selected for deletion may depend on
its degree, according to the specific network contraction scenario.
Here we focus on three generic scenarios of network contraction:
the scenario of random node deletion that describes the
random, inadvertent failure of nodes, the
scenario of preferential node deletion that describes intentional attacks that
are more likely to focus on highly connected nodes,
and the scenario of propagating node deletion that describes cascading failures and
infectious processes that spread throughout the network.
In the random deletion scenario, at each time step a random
node is selected for deletion.
In this scenario each one of the nodes in the network
at time $t$ has the same probability to be
selected for deletion, regardless of its degree.
Since at time $t$ there are $N_t$ nodes in the network,
the probability of each one of them to be selected for deletion is $1/N_t$.
In the preferential deletion scenario
the probability of a node to be selected for deletion at time $t$
is proportional to its degree at that specific time.
This means that the probability of a given node of degree $k$ to be
deleted at time $t$ is $k/[N_t \langle K \rangle_t]$.
This is equivalent to selecting a random edge in the network and
randomly choosing for deletion one of the two nodes at its ends.
In the propagating deletion scenario at each time
step the node to be deleted is randomly selected among the
neighbors of the node deleted in the previous time step.
If the node deleted in the previous time step does
not have any yet-undeleted neighbors, we pick a random node,
randomly select one of its neighbors for deletion and continue
the process from there.
Here we focus on the contraction of undirected networks of initial size $N_0$, which are drawn from a
configuration model network ensemble
with a given initial degree distribution $P_0(k)$ and no degree-degree correlations.
The degree distribution is bounded from above and below
such that $k_{\rm min} \le k \le k_{\rm max}$.
For example, the commonly used choice of $k_{\rm min}=1$
eliminates the possibility of isolated nodes in the network.
Choosing $k_{\rm min}=2$ also eliminates the leaf nodes.
Controlling the upper bound is important in the case of
fat-tail degree distributions such as power-law degree distributions.
The configuration model network ensemble is a maximum entropy ensemble
under the condition that the degree distribution $P(k)$ is imposed
\cite{Molloy1995,Molloy1998,Newman2001,Annibale2009,Roberts2011,Coolen2017}.
In such uncorrelated networks the deletion
of a node at time $t$ does not induce correlations between the remaining $N_t-1$ nodes.
Thus, upon deletion of a node from a configuration model
network of size $N_t$, the resulting network remains a
configuration model network with a suitably adjusted degree distribution $P_{t+1}(k)$.
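The three deletion rules can be illustrated by a direct simulation.
The sketch below is a minimal illustration of ours (it is not the simulation
code used for the results reported in this paper; the adjacency-set graph
representation and all function names are our own choices). It contracts an
Erd{\H{o}}s-R{\'e}nyi graph under each of the three scenarios and records the
mean degree of the surviving network.

```python
import random

def er_graph(n, c, seed=0):
    """Erdos-Renyi graph with n nodes and mean degree c, stored as adjacency sets."""
    rng = random.Random(seed)
    p = c / (n - 1)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def contract(adj, steps, scenario, seed=1):
    """Delete `steps` nodes from adj (in place) according to the given scenario."""
    rng = random.Random(seed)
    frontier = set()  # yet-undeleted neighbors of the previously deleted node
    for _ in range(steps):
        nodes = sorted(adj)
        if scenario == "random":
            v = rng.choice(nodes)
        elif scenario == "preferential":
            # selection probability proportional to the current degree
            v = rng.choices(nodes, weights=[len(adj[u]) + 1e-12 for u in nodes])[0]
        else:  # propagating
            if not frontier:
                u = rng.choice([w for w in nodes if adj[w]] or nodes)
                frontier = set(adj[u]) or {u}
            v = rng.choice(sorted(frontier))
        frontier = set(adj[v])  # neighbors of v, all still alive after this step
        for u in adj[v]:        # secondary effect: each neighbor loses one link
            adj[u].discard(v)
        del adj[v]              # primary effect: the deleted node leaves the network
    return adj

N0, c0, T = 400, 6.0, 200
mean_deg = {}
for sc in ("random", "preferential", "propagating"):
    g = contract(er_graph(N0, c0), T, sc)
    mean_deg[sc] = sum(len(s) for s in g.values()) / len(g)
```

With these parameters, deleting half of the nodes under random or propagating
deletion leaves a mean degree close to $3$, while preferential deletion, which
removes high degree nodes more often, drives it close to $2$, in line with the
analysis of the following sections.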
\section{The master equation and its Poisson solution}
Consider an ensemble of
networks of size $N_0$
and degree distribution $P_0(k)$, with
mean degree $\langle K \rangle_0$.
At each time step a single node is deleted from the network.
In addition to the primary effect of the loss of the deleted node,
the damage to the network also includes a secondary effect as
each neighbor of the deleted node loses one link.
An intrinsic property of the secondary effect is that it is
always of a preferential nature.
This is due to the fact that the probability of a node of degree $k'$
to be a neighbor of the deleted node is proportional to $k'$.
The number of nodes
in the network at time $t$ is
$N_t = N_0 - t$.
The number of nodes of degree $k$ at time $t$ is denoted by $N_t(k)$,
where
$\sum_{k} N_t(k) = N_t$.
The time dependent degree distribution is given by
\begin{equation}
P_t(k) = \frac{N_t(k)}{N_t}.
\label{eq:P_t}
\end{equation}
\noindent
The mean degree and the second moment of the degree distribution at time $t$ are denoted by
$\langle K \rangle_t$ and $\langle K^2 \rangle_t$, respectively.
The master equation
\cite{vanKampen2007,Gardiner2004}
for the temporal evolution of
the degree distribution $P_t(k)$
during network contraction processes was derived in Ref.
\cite{Tishby2019}.
To demonstrate the derivation of the master equation
we consider below the relatively simple case of random node deletion.
The time dependence of $N_t(k)$
depends on the primary effect, given by the probability that the node selected
for deletion is of degree $k$, as well as on the secondary effect of node deletion
on neighboring nodes of degrees $k$ and $k+1$.
In random node deletion the probability that the node
selected for deletion at time $t$ is of degree $k$ is
given by $N_t(k)/N_t$.
Thus, the rate at which $N_t(k)$ decreases due to the primary effect of
the deletion of nodes of degree $k$
is given by
\begin{equation}
R_t(k \rightarrow \varnothing) = \frac{N_t(k)}{N_t},
\label{eq:Rk}
\end{equation}
\noindent
where $\varnothing$ represents the empty set.
If the node deleted at time $t$ is of degree $k'$,
it affects $k'$ adjacent nodes, which lose one link each.
The probability of each one of these $k'$ nodes
to be of degree $k$ is given by
$k N_t(k)/[ N_t \langle K\rangle_t ]$.
We denote by $W_t(k \rightarrow k-1)$ the expectation value of
the number of nodes of degree $k$ that lose a link at time $t$ and
are reduced to degree $k-1$.
Summing up over all possible values of $k'$,
we find that the secondary effect of random node deletion on nodes of
degree $k$ amounts to
\begin{equation}
W_t(k \rightarrow k-1) = \frac{kN_t(k)}{N_t}.
\label{eq:Wk}
\end{equation}
\noindent
Similarly, the secondary effect on nodes of degree $k+1$
amounts to
\begin{equation}
W_t(k+1 \rightarrow k) = \frac{ (k+1)N_t(k+1)}{N_t}.
\label{eq:Wk+1}
\end{equation}
\noindent
The time evolution of $N_t(k)$ can be expressed in terms
of the forward difference
\begin{equation}
\Delta_t N_t(k) = N_{t+1}(k) - N_t(k).
\end{equation}
\noindent
Combining the primary and the
secondary effects on the time dependence of $N_t(k)$
we obtain
\begin{equation}
\Delta_t N_t(k) =
- R_t(k \rightarrow \varnothing) + \left[ W_t(k+1 \rightarrow k) - W_t(k \rightarrow k-1) \right].
\label{eq:RWW}
\end{equation}
\noindent
Since nodes are discrete entities the process of node deletion
is intrinsically discrete. Therefore, the replacement of the forward difference
$\Delta_t N_t(k)$
by a time derivative of the form
$d N_t(k)/dt$ involves an approximation.
The error associated with this approximation was evaluated in
Ref.
\cite{Tishby2019}.
It was shown that except for the limit of extremely narrow
degree distributions the error is of order $1/N_t^2$,
which quickly vanishes in the large network limit.
This means that the replacement of the forward difference by
a time derivative has little effect on the results,
while offering a clear technical advantage.
Inserting the expressions for $R_t(k \rightarrow \varnothing)$, $W_t(k \rightarrow k-1)$ and
$W_t(k+1 \rightarrow k)$ from Eqs.
(\ref{eq:Rk}), (\ref{eq:Wk}) and (\ref{eq:Wk+1}), respectively
into Eq. (\ref{eq:RWW})
and replacing $\Delta_t N_t(k)$ by $d N_t(k)/dt$
we obtain
\begin{equation}
\frac{d}{dt} N_t(k) =
\frac{(k+1)[ N_t(k+1) - N_t(k) ]}{N_t}.
\label{eq:DeltaNtk}
\end{equation}
\noindent
The derivation of the master equation is completed by taking the
time derivative of Eq. (\ref{eq:P_t}), which is given by
\begin{equation}
\frac{d}{dt} P_t(k) = \frac{1}{N_t} \frac{d}{dt} N_t(k) - \frac{N_t(k)}{N_t^2} \frac{d}{dt} N_t.
\label{eq:dPt_Nt}
\end{equation}
\noindent
Inserting the time derivative of $N_t(k)$ from Eq. (\ref{eq:DeltaNtk})
into Eq. (\ref{eq:dPt_Nt})
and using the fact that
$d N_t/dt=-1$,
we obtain the master equation for the random deletion scenario,
which is given by
\begin{equation}
\frac{d}{dt} P_t(k)=
\frac{1}{N_t}
\left[ (k+1)P_t(k+1) - k P_t(k) \right].
\label{eq:dP(t)/dtRC0}
\end{equation}
\noindent
The derivation of the master equations for the preferential deletion
and the propagating deletion scenarios can be performed along similar lines
\cite{Tishby2019}.
Interestingly, the resulting master equations for these three network contraction scenarios
can be written in a unified manner, in the form
\begin{equation}
\frac{d}{dt} P_t(k)
=
F_{\rm A}(t) + F_{\rm B}(t),
\label{eq:dP/dt}
\end{equation}
\noindent
where
\begin{equation}
F_{\rm A}(t)
=
\frac{A_t}{N_t}
\left[ (k+1) P_t(k+1) - k P_t(k) \right]
\label{eq:dP/dtA}
\end{equation}
\noindent
accounts for the secondary effect on the neighbors of the deleted node,
which lose one link each,
while
\begin{equation}
F_{\rm B}(t)
=
-
\frac{B_t(k)}{N_t}
P_t(k)
\label{eq:dP/dtB}
\end{equation}
\noindent
accounts for the primary effect, namely, the loss of the deleted node
\cite{Tishby2019}.
The coefficients $A_t$ and $B_t(k)$ are given by
\begin{equation}
A_t =
\left\{
\begin{array}{ll}
1 & {\rm \ \ \ \ \ random \ deletion} \\
\frac{\langle K^2 \rangle_t }{\langle K \rangle_t^2}
& {\rm \ \ \ \ \ preferential \ deletion} \\
\frac{\langle K^2 \rangle_t - 2 \langle K \rangle_t}{\langle K \rangle_t^2} &
{\rm \ \ \ \ \ propagating \ deletion,}
\end{array}
\right.
\label{eq:A_t}
\end{equation}
\noindent
and
\begin{equation}
B_t(k) =
\left\{
\begin{array}{ll}
0 & {\rm \ \ \ \ \ random \ deletion} \\
\frac{k - \langle K \rangle_t}{\langle K \rangle_t}
& {\rm \ \ \ \ \ preferential \ deletion} \\
\frac{k - \langle K \rangle_t}{\langle K \rangle_t}
& {\rm \ \ \ \ \ propagating \ deletion}.
\end{array}
\right.
\label{eq:B_t(k)}
\end{equation}
\noindent
The master equation consists of a set of coupled ordinary differential equations
for $P_t(k)$, $k=0,1,2,\dots,k_{\rm max}$,
or in other words it is a partial difference-differential equation.
In order to calculate the time evolution of the degree distribution $P_t(k)$ during
the contraction process one solves the master equation using direct numerical integration
\cite{Butcher2003},
starting from the initial network that consists of $N_0$ nodes whose degree distribution
is $P_0(k)$. For any finite network the degree distribution is bounded from above
by an upper bound denoted by $k_{\rm max}$, which satisfies
the condition $k_{\rm max} \le N_0-1$. Since the contraction process can only
delete edges from the remaining nodes and cannot increase the degree of any
node, the upper cutoff $k_{\rm max}$ is maintained throughout the contraction
process.
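The direct numerical integration mentioned above can be sketched as follows.
This is a minimal forward-Euler scheme of our own (one time step per deleted
node; it is not necessarily the integrator used for the results reported in
this paper). It advances Eq. (\ref{eq:dP/dt}) with the coefficients of
Eqs. (\ref{eq:A_t}) and (\ref{eq:B_t(k)}).

```python
import math

def master_step(P, N, scenario):
    """One forward-Euler step of the master equation, with dt = 1 (one deleted node)."""
    kmax = len(P) - 1
    K1 = sum(k * p for k, p in enumerate(P))      # <K>_t
    K2 = sum(k * k * p for k, p in enumerate(P))  # <K^2>_t
    if scenario == "random":
        A, B = 1.0, (lambda k: 0.0)
    elif scenario == "preferential":
        A, B = K2 / K1 ** 2, (lambda k: (k - K1) / K1)
    else:  # propagating
        A, B = (K2 - 2 * K1) / K1 ** 2, (lambda k: (k - K1) / K1)
    new = []
    for k in range(kmax + 1):
        up = P[k + 1] if k < kmax else 0.0        # upper cutoff k_max is maintained
        FA = (A / N) * ((k + 1) * up - k * P[k])  # trickle-down term F_A(t)
        FB = -(B(k) / N) * P[k]                   # redistribution term F_B(t)
        new.append(P[k] + FA + FB)
    return new

# start from a Poisson (ER) degree distribution with mean c0 and delete half the nodes
N0, c0, kmax = 10000, 8.0, 40
P = [math.exp(-c0) * c0 ** k / math.factorial(k) for k in range(kmax + 1)]
for t in range(N0 // 2):
    P = master_step(P, N0 - t, "random")
mean_deg = sum(k * p for k, p in enumerate(P))
norm = sum(P)
```

Since the trickle-down term telescopes, the scheme conserves the normalization
of $P_t(k)$ exactly, and for random deletion the mean degree follows
$c_t = c_0 N_t/N_0$, i.e. $c_t = 4$ after deleting half of the nodes.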
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm]{fig1.eps}
\caption{
(Color online)
Illustration of the time dependence of the degree distribution
$P_t(k)$
during network contraction processes,
described by the master equation (\ref{eq:dP/dt}).
(a) In the trickle-down term $F_{\rm A}(t)$, given by Eq. (\ref{eq:dP/dtA}),
the probability flows downwards step by step from degree $k+1$ to $k$
and from $k$ to $k-1$. This way high degree nodes become less probable
and low degree nodes become more probable as the contraction process evolves.
(b) In the redistribution term $F_{\rm B}(t)$, given by Eq. (\ref{eq:dP/dtB}),
for values of $k$ above the mean degree $\langle K \rangle_t$ the probability
$P_t(k)$ decreases at a rate proportional to $k-\langle K \rangle_t$,
while for values of $k$ below $\langle K \rangle_t$ the probability $P_t(k)$
increases at a rate proportional to $\langle K \rangle_t - k$.
Here the flow of probability is non-local in the $k$ axis, namely, probability
is lost at high degrees and instantaneously emerges at low degrees.
}
\label{fig:1}
\end{center}
\end{figure}
The $F_{\rm A}(t)$ term of the master equation, given by Eq. (\ref{eq:dP/dtA}),
is referred to as the trickle-down term
\cite{TrickleDown}.
This term represents
the step by step downwards flow of probability from high to low degrees.
This process is illustrated in Fig. 1(a).
The coefficient $A_t$ of the trickle-down term depends on the network
contraction scenario according to Eq. (\ref{eq:A_t}).
In the case of random node deletion $A_t=1$,
because the probability of a node to be selected for
deletion does not depend on its degree.
In the case of preferential node deletion $A_t$ is proportional to $\langle K^2 \rangle_t$
because the probability of a node to be deleted is proportional
to its degree $k$ while the magnitude of the secondary effect is also proportional to $k$.
The $F_{\rm B}(t)$ term of the master equation, given by Eq. (\ref{eq:dP/dtB}),
is referred to as the redistribution term.
As can be seen in Eq. (\ref{eq:B_t(k)}),
this term vanishes in the random deletion scenario.
However, in the preferential and propagating deletion scenarios
the redistribution term
is negative for
$k > \langle K \rangle_t$
and positive for
$k < \langle K \rangle_t$.
Thus the redistribution term decreases the probabilities
$P_t(k)$ for values of $k$ that are above the mean degree
and increases them for values of $k$ that are below the mean degree,
as illustrated in Fig. 1(b).
The size of the redistribution term is proportional to the absolute value
$|k - \langle K \rangle_t|$,
which means that nodes of degrees that are much higher
or much lower than $\langle K \rangle_t$ are most strongly affected
by this term.
Consider an ER network of $N_t$ nodes with mean degree
$c_t$.
Its degree distribution follows
a Poisson distribution of the form
\begin{equation}
\pi(k|c_t) = \frac{ e^{-c_t} c_t^k }{k!}.
\label{eq:poisson}
\end{equation}
\noindent
The second moment of this degree distribution is equal to
$c_t(c_t + 1)$.
To examine the contraction process of ER networks
we start from an initial network of $N_0$ nodes
whose degree distribution follows a Poisson distribution $\pi(k|c_0)$,
where $c_0$ is the mean degree of the initial network.
Inserting
$\pi(k|c_t)$
into the master equation (\ref{eq:dP/dt}) we find that
the time derivative on the left hand side is given by
\begin{equation}
\frac{d}{dt} \pi(k|c_t) =
- \frac{d c_t}{dt}
\left( 1 - \frac{k}{c_t} \right) \pi(k|c_t).
\label{eq:dpi/dt1}
\end{equation}
\noindent
On the other hand,
inserting $\pi(k|c_t)$ on the right hand side of Eq. (\ref{eq:dP/dt}),
we obtain
\begin{equation}
\frac{d}{dt} \pi(k|c_t) =
\frac{A_t}{N_t} (c_t - k) \pi(k|c_t)
- \frac{B_t(k)}{N_t} \pi(k|c_t).
\label{eq:dpi/dt2}
\end{equation}
\noindent
For $\pi(k|c_t)$ to be a solution of Eq. (\ref{eq:dP/dt}),
the right hand sides of Eqs. (\ref{eq:dpi/dt1}) and (\ref{eq:dpi/dt2})
must coincide.
In the case of random deletion this implies that
\begin{equation}
\frac{1}{c_t} \frac{d c_t}{dt} = - \frac{1}{N_t}.
\end{equation}
\noindent
Integrating both sides from $t'=0$ to $t$, we obtain
the solution
$c_t = c_0 N_t/N_0$.
Repeating the analysis presented above for the
cases of preferential deletion and propagating deletion
it is found that $\pi(k|c_t)$
solves the master equation (\ref{eq:dP/dt})
for the three network contraction scenarios,
while
the mean degree $c_t$ decreases linearly in time
according to
\begin{equation}
c_t = c_0 - R t,
\label{eq:c_linear}
\end{equation}
\noindent
where
the rate $R$
depends on the network contraction scenario,
and is given by
\begin{equation}
R =
\left\{
\begin{array}{ll}
\frac{c_0}{N_0} & {\rm \ \ \ \ \ random \ deletion} \\
\frac{c_0+2}{N_0} & {\rm \ \ \ \ \ preferential \ deletion} \\
\frac{c_0}{N_0} & {\rm \ \ \ \ \ propagating \ deletion}.
\end{array}
\right.
\label{eq:c_t}
\end{equation}
\noindent
This means that an ER network exposed to
any one of the three contraction scenarios
remains an ER network at all times,
with a mean degree that decreases according to Eq.
(\ref{eq:c_linear}).
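This can be checked numerically. The sketch below (a consistency check of
ours, not part of the original analysis) uses the fact that the linear decay
law implies $dc_t/dt = -(c_0+2)/N_0 = -(c_t+2)/N_t$ for preferential deletion,
and verifies that the right hand sides of Eqs. (\ref{eq:dpi/dt1}) and
(\ref{eq:dpi/dt2}) coincide term by term.

```python
import math

def poisson(k, c):
    return math.exp(-c) * c ** k / math.factorial(k)

N_t, c = 1000.0, 3.0        # network size and mean degree at time t
A = (c * (c + 1)) / c ** 2  # A_t = <K^2>_t / <K>_t^2 for a Poisson distribution
dc_dt = -(c + 2) / N_t      # linear decay law for preferential deletion

diffs = []
for k in range(30):
    pi_k = poisson(k, c)
    # left hand side: d pi(k|c_t)/dt evaluated along the Poisson solution
    lhs = -dc_dt * (1 - k / c) * pi_k
    # right hand side: master equation with preferential-deletion coefficients
    rhs = (A * (c - k) / N_t - (k - c) / (c * N_t)) * pi_k
    diffs.append(abs(lhs - rhs))
max_diff = max(diffs)
```

The two sides agree to machine precision for all degrees $k$, confirming that
the Poisson distribution with a linearly decreasing mean solves the master
equation in this scenario.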
\section{The relative entropy}
In order to establish that networks exposed to these contraction scenarios actually converge
towards the ER structure, it remains to show that the Poisson solution is attractive.
To quantify the convergence of $P_t(k)$,
whose mean degree is $\langle K \rangle_t$,
towards a Poisson distribution,
we use the relative entropy
(also referred to as the Kullback-Leibler divergence),
defined by
\cite{Kullback1951}
\begin{equation}
S_t = S[P_t(k) || \pi(k|\langle K \rangle_t)] =
\sum_{k=0}^{\infty} P_t(k)
\ln \left[ \frac{P_t(k)}{\pi(k|\langle K \rangle_t)} \right],
\label{eq:S}
\end{equation}
\noindent
where
$\pi(k|\langle K \rangle_t)$
is the Poisson distribution,
given by Eq. (\ref{eq:poisson}),
with the same mean degree
as $P_t(k)$, namely, $\langle K \rangle_t$.
The relative entropy $S_t$ is a distance measure between
the whole degree distribution $P_t(k)$ and the reference distribution $\pi(k|\langle K \rangle_t)$.
It also quantifies the added information associated with constraining the degree
distribution $P_t(k)$ rather than only the mean degree $\langle K \rangle_t$,
as nicely shown in Refs.
\cite{Annibale2009,Roberts2011,Coolen2017}.
The Poisson distribution is a proper reference distribution
for the relative entropy because it satisfies
$\pi(k|\langle K \rangle_t) > 0$ for all the non-negative integer values of $k$.
Using the log-sum inequality
\cite{Csiszar2004},
one can show that the
relative entropy is always non-negative and satisfies
$S_t=0$ if and only if $P_t(k) = \pi(k|\langle K \rangle_t)$
\cite{Kullback1969,Cover2006}.
Therefore, $S_t$ can be used as
a measure of the distance between a given network and
the corresponding ER network with the same mean degree.
The relative entropy $S[P(k) || \pi(k|c)]$ of a degree distribution
$P(k)$ with mean degree $\langle K \rangle$ with respect to a
Poisson distribution $\pi(k|c)$ with mean degree $c$
can be decomposed in the form
\begin{equation}
S[P(k) || \pi(k|c)] =
- S[P(k)]
+ C[P(k) || \pi(k|c)],
\label{eq:SPpic}
\end{equation}
\noindent
where
\begin{equation}
S[P(k)] = - \sum_{k=0}^{\infty} P(k) \ln [ P(k) ]
\label{eq:Shannon}
\end{equation}
\noindent
is the Shannon entropy
\cite{Shannon1948}
of $P(k)$,
while
\begin{equation}
C[P(k) || \pi(k|c)] =
- \sum_{k=0}^{\infty} P(k) \ln [ \pi(k|c) ],
\label{eq:CPp}
\end{equation}
\noindent
is the cross-entropy
\cite{Shore1980}
between $P(k)$ and $\pi(k|c)$.
The Poisson distribution $\pi(k|c)$ satisfies
\begin{equation}
\ln [ \pi(k|c) ]= -c + k \ln (c) - \ln (k!).
\label{eq:lnpi}
\end{equation}
\noindent
Inserting $\ln [\pi(k|c)]$ from Eq. (\ref{eq:lnpi}) into Eq. (\ref{eq:CPp})
and using Eq. (\ref{eq:SPpic}), we obtain
\begin{equation}
S[P(k) || \pi(k|c)] = \sum_{k=0}^{\infty} P(k) \ln [ P(k) ]
+ c - \langle K \rangle \ln (c)
+ \sum_{k=0}^{\infty} \ln(k!) P(k).
\label{eq:Sk}
\end{equation}
\noindent
Eq. (\ref{eq:Sk}) provides the relative entropy of any degree distribution
$P(k)$ whose mean degree is $\langle K \rangle$, with respect to a Poisson distribution
with mean degree $c$.
In order to find the value of $c$ for which
$S[P(k) || \pi(k|c)]$
is minimal
we differentiate $S[P(k) || \pi(k|c)]$ with respect to $c$ and solve the equation
\begin{equation}
\frac{d}{dc} S[P(k) || \pi(k|c)] = 1 - \frac{\langle K \rangle}{c} = 0.
\end{equation}
\noindent
We find that $S[P(k) || \pi(k|c)]$ is minimized when the condition
$c = \langle K \rangle$
is satisfied.
This implies that for any degree distribution $P(k)$ with mean degree $\langle K \rangle$,
the closest Poisson distribution $\pi(k|c)$,
in terms of the relative entropy,
is the Poisson distribution with mean degree
$c=\langle K \rangle$.
Using the result discussed above, one can express the relative entropy
$S[P(k) || \pi(k|c)]$ in the form
\begin{equation}
S[P(k) || \pi(k|c)] =
S[P(k) || \pi(k|\langle K \rangle)]
+
\delta S(c,\langle K \rangle),
\end{equation}
\noindent
where
$S[P(k) || \pi(k|\langle K \rangle)]$
is the relative entropy of $P(k)$ with respect to a Poisson distribution
whose mean is $\langle K \rangle$,
and
\begin{equation}
\delta S(c,\langle K \rangle) = \langle K \rangle \left[ \left( \frac{c}{\langle K \rangle} - 1 \right)
-
\ln \left( \frac{c}{\langle K \rangle} \right) \right]
\label{eq:DeltaS}
\end{equation}
\noindent
is the added entropy due to the difference between $c$ and $\langle K \rangle$.
Note that $\delta S(c,\langle K \rangle) \ge 0$ for any choice of $\langle K \rangle > 0$ and $c > 0$,
while $\delta S(c,\langle K \rangle) = 0$ only in the case that $c = \langle K \rangle$.
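This decomposition is easy to verify numerically. The sketch below (our own
check; the trial distribution and variable names are illustrative) evaluates
$S[P(k) || \pi(k|c)]$ directly from the definition, using exact values of
$\ln(k!)$, and compares it with
$S[P(k) || \pi(k|\langle K \rangle)] + \delta S(c,\langle K \rangle)$.

```python
import math

def rel_entropy_poisson(P, c):
    """S[P(k) || pi(k|c)] evaluated directly from the definition, with exact ln(k!)."""
    S = 0.0
    for k, p in enumerate(P):
        if p > 0:
            ln_pi = -c + k * math.log(c) - math.lgamma(k + 1)  # ln pi(k|c)
            S += p * (math.log(p) - ln_pi)
    return S

# trial degree distribution: truncated exponential with k_min = 1
raw = [0.0] + [math.exp(-k / 3.0) for k in range(1, 31)]
Z = sum(raw)
P = [x / Z for x in raw]
K = sum(k * p for k, p in enumerate(P))     # mean degree <K>

c = 7.5                                     # an arbitrary reference mean degree
lhs = rel_entropy_poisson(P, c)
dS = K * ((c / K - 1.0) - math.log(c / K))  # added entropy delta S(c, <K>)
rhs = rel_entropy_poisson(P, K) + dS
```

Both $\delta S(c,\langle K \rangle) > 0$ for $c \ne \langle K \rangle$ and the
equality of the two evaluations hold to machine precision.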
Going back to Eq. (\ref{eq:SPpic}), the
relative entropy
$S[P(k) || \pi(k|\langle K \rangle)]$
can be expressed in the form
\begin{equation}
S[P(k) || \pi(k|\langle K \rangle)] =
- S[P(k)]
+ C[P(k) || \pi(k|\langle K \rangle)] ,
\end{equation}
\noindent
where $S[P(k)]$ is given by
Eq. (\ref{eq:Shannon})
and
\begin{equation}
C[P(k) || \pi(k|\langle K \rangle)] =
\langle K \rangle - \langle K \rangle \ln (\langle K \rangle)
+ \sum_{k=0}^{\infty} \ln(k!) P(k).
\label{eq:Sk2}
\end{equation}
\noindent
To evaluate the last term in Eq. (\ref{eq:Sk2}) we recall that
$\ln (0!) = \ln(1!) =0$,
while the $k=2$ term is $\ln (2) P(2)$.
For $k \ge 3$ we use the Stirling approximation
\cite{Olver2010}
\begin{equation}
\ln (k!) = \left( k + \frac{1}{2} \right) \ln (k) - k + \frac{1}{2} \ln (2 \pi).
\label{eq:Stirling}
\end{equation}
\noindent
Inserting $\ln (k!)$ for $k \ge 3$ from Eq. (\ref{eq:Stirling}) into Eq. (\ref{eq:Sk2}) and
rearranging terms, we obtain
\begin{eqnarray}
C[P(k) || \pi(k|\langle K \rangle)] &=&
- \langle K \rangle \ln (\langle K \rangle)
+ \sum_{k=2}^{\infty} \left( k + \frac{1}{2} \right) \ln (k) P(k)
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
-\frac{1}{2} \ln (2 \pi) P(0)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right] P(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right] P(2),
\label{eq:Skk}
\end{eqnarray}
\noindent
where the terms involving $P(0)$, $P(1)$ and $P(2)$
result from the adjustment of the summation due to the fact that
Eq. (\ref{eq:Stirling}) is used only for $k \ge 3$.
Note that in the case of distributions in which $k_{\rm min} \ge 1$, one
assigns $P(k) = 0$ for $0 \le k \le k_{\rm min}-1$.
Using Eq. (\ref{eq:Skk}), the relative entropy of the degree distribution $P_t(k)$ of a contracting network
with respect to the corresponding Poisson distribution $\pi_t(k|\langle K \rangle_t)$ with the same mean
degree $\langle K \rangle_t$,
is given by
\begin{eqnarray}
S_t &=&
\sum_{k=0}^{\infty} P_t(k) \ln [ P_t(k) ]
- \langle K \rangle_t \ln (\langle K \rangle_t)
+ \sum_{k=2}^{\infty} \left( k + \frac{1}{2} \right) \ln (k) P_t(k)
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
-\frac{1}{2} \ln (2 \pi) P_t(0)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right] P_t(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right] P_t(2).
\label{eq:Skk7}
\end{eqnarray}
\noindent
Eq. (\ref{eq:Skk7}) is used in order to evaluate the relative entropy
during the contraction process, where $P_t(k)$ is obtained either from
numerical integration of the master equation or from computer simulations.
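The accuracy of Eq. (\ref{eq:Skk7}) can be gauged against a direct evaluation
of Eq. (\ref{eq:S}) in which $\ln(k!)$ is computed exactly. In the sketch
below (our own check; \texttt{math.lgamma} supplies
$\ln \Gamma(k+1) = \ln(k!)$) the two evaluations agree up to the small
Stirling error accumulated at $k \ge 3$.

```python
import math

L = 0.5 * math.log(2.0 * math.pi)

def S_direct(P):
    """Relative entropy with respect to Poisson of the same mean, exact ln(k!)."""
    K = sum(k * p for k, p in enumerate(P))
    return sum(p * (math.log(p) + K - k * math.log(K) + math.lgamma(k + 1))
               for k, p in enumerate(P) if p > 0)

def S_stirling(P):
    """Relative entropy using the Stirling approximation for k >= 3."""
    K = sum(k * p for k, p in enumerate(P))
    S = sum(p * math.log(p) for p in P if p > 0) - K * math.log(K)
    S += sum((k + 0.5) * math.log(k) * P[k] for k in range(2, len(P)))
    # boundary corrections for k = 0, 1, 2, where Stirling is not applied
    S += L - L * P[0] + (1.0 - L) * P[1] + (2.0 - 1.5 * math.log(2.0) - L) * P[2]
    return S

# exponential degree distribution with k_min = 1 (decays more slowly than Poisson)
raw = [0.0] + [math.exp(-k / 4.0) for k in range(1, 41)]
Z = sum(raw)
P = [x / Z for x in raw]
```

Since the Stirling formula without the $1/(12k)$ correction slightly
underestimates $\ln(k!)$, the Stirling-based value lies slightly below the
exact one, with a difference of order $10^{-2}$ for this trial distribution.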
\section{Convergence of the relative entropy}
In each of the network contraction scenarios,
the degree distribution $P_t(k)$ evolves in time according to
the master equation
[Eq. (\ref{eq:dP/dt})].
As a result, the relative entropy $S_t$ of the network also evolves as the
network contracts.
The time derivative of $S_t$ is given by
\begin{equation}
\frac{d}{dt} S_t =
\sum_{k=0}^{\infty}
\ln \left[ \frac{P_t (k)}{\pi(k|\langle K \rangle_t)} \right]
\frac{d}{dt}P_t(k)
+
\sum_{k=0}^{\infty}
\frac{d}{dt} P_t(k)
-
\sum_{k=0}^{\infty}
\frac{P_t(k)}{\pi(k|\langle K \rangle_t)}
\frac{d}{dt} \pi(k|\langle K \rangle_t).
\label{eq:ds/dt_full}
\end{equation}
\noindent
Exchanging the order of the summation and the derivative in
the second term on the right hand side of Eq.
(\ref{eq:ds/dt_full}),
we obtain
\begin{equation}
\sum_{k=0}^{\infty}
\frac{d}{dt} P_t(k) =
\frac{d}{dt} \left[ \sum_{k=0}^{\infty} P_t(k) \right] =0.
\end{equation}
\noindent
Inserting the derivative $d \pi(k|\langle K \rangle_t)/dt$
from Eq. (\ref{eq:dpi/dt1})
into the third term on the right hand side of
Eq. (\ref{eq:ds/dt_full}),
we obtain
\begin{equation}
\sum_{k=0}^{\infty}
\frac{P_t(k)}{\pi(k|\langle K \rangle_t)}
\frac{d}{dt} \pi(k|\langle K \rangle_t) =
-\frac{d \langle K \rangle_t}{dt}
\sum_{k=0}^{\infty}
\left( 1-\frac{k}{\langle K \rangle_t} \right) P_t(k) = 0.
\end{equation}
\noindent
Since the second and third terms in
Eq. (\ref{eq:ds/dt_full}) vanish,
the time derivative of the relative entropy is simply given by
\begin{equation}
\frac{d}{dt} S_t
=
\sum_{k=0}^{\infty}
\ln \left[ \frac{P_t(k)}{\pi(k|\langle K \rangle_t)} \right]
\frac{d}{dt} P_t(k).
\label{eq:ds/dt}
\end{equation}
\noindent
This is a general equation that applies to any network contraction scenario
in which the Poisson distribution $\pi(k|\langle K \rangle_t)$ is a solution.
The relative entropy satisfies $S_t \ge 0$ for any degree distribution $P_t(k)$.
It vanishes if and only if $P_t(k) = \pi(k|\langle K \rangle_t)$.
Therefore, in order to prove the convergence of the degree distribution $P_t(k)$
towards a Poisson distribution in a given network contraction scenario,
one needs to show that for this scenario
$dS_t/dt < 0$.
To this end, we use Eq. (\ref{eq:ds/dt}), where
we replace the derivative
$dP_t/dt$ by the right hand side of the master equation,
Eq. (\ref{eq:dP/dt}).
For the analysis below it is convenient to
express the time evolution of the relative
entropy, given by Eq. (\ref{eq:ds/dt}),
in the form
\begin{equation}
\frac{d}{dt} S_t = \Delta_{\rm A}(t) + \Delta_{\rm B}(t),
\label{eq:dSAB}
\end{equation}
\noindent
where $\Delta_{\rm A}(t)$ emanates from the $F_{\rm A}(t)$ term
(trickle-down term) of the master equation and
$\Delta_{\rm B}(t)$ emanates from the $F_{\rm B}(t)$ term
(redistribution term).
The contribution of the trickle-down term
to $dS_t/dt$ is given by
\begin{equation}
\Delta_{\rm A}(t) =
\frac{A_t}{N_t}
\sum_{k=0}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k|\langle K \rangle_t)}\right]
\left[ (k+1) P_t(k+1) - k P_t(k) \right],
\label{eq:DeltaA1}
\end{equation}
\noindent
where $A_t$ is given by Eq. (\ref{eq:A_t}),
and the contribution of the redistribution term is given by
\begin{equation}
\Delta_{\rm B}(t) =
- \frac{B}{N_t}
\sum_{k=0}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k|\langle K \rangle_t)}\right]
\left( \frac{k}{\langle K \rangle_t} - 1 \right)
P_t(k),
\label{eq:DB1}
\end{equation}
\noindent
where
\begin{equation}
B =
\left\{
\begin{array}{ll}
0 & {\rm \ \ \ \ \ random \ deletion} \\
1
& {\rm \ \ \ \ \ preferential \ deletion} \\
1
& {\rm \ \ \ \ \ propagating \ deletion}.
\end{array}
\right.
\end{equation}
\noindent
In order to show that the degree distribution of the contracting
network converges towards a Poisson distribution,
one needs to show that during the
contraction process
$\Delta_{\rm A}(t) + \Delta_{\rm B}(t) < 0$.
Below we consider each one of these terms separately.
We show that in all the three network contraction scenarios
and for any initial degree distribution $P_0(k)$,
the trickle-down term satisfies
$\Delta_{\rm A}(t) < 0$
at all times during the contraction process.
For the redistribution term $\Delta_{\rm B}(t)$
we obtain a necessary and sufficient condition on
the instantaneous degree distribution $P_t(k)$
under which
$\Delta_{\rm B}(t) < 0$.
The condition essentially states that
$\Delta_{\rm B}(t) < 0$
for any degree distribution whose tail decays
more slowly than the tail of the Poisson distribution,
which decays super-exponentially.
This condition is generically satisfied by empirical networks,
which are formed via growth processes.
The degree distributions of such networks typically exhibit fat tails,
which decay much more slowly than Poisson.
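Before presenting the proofs, the combined inequality
$\Delta_{\rm A}(t) + \Delta_{\rm B}(t) < 0$ can be probed numerically.
The sketch below (our own illustration, using a simple forward-Euler
integrator with one step per deleted node) integrates the master equation for
preferential deletion, starting from an exponential degree distribution with
$k_{\rm min}=2$, and records $S_t$ along the way.

```python
import math

def rel_entropy(P):
    """S_t: distance of P from the Poisson distribution with the same mean."""
    K = sum(k * p for k, p in enumerate(P))
    return sum(p * (math.log(p) + K - k * math.log(K) + math.lgamma(k + 1))
               for k, p in enumerate(P) if p > 0)

def preferential_step(P, N):
    """Forward-Euler step of the master equation for preferential deletion, dt = 1."""
    kmax = len(P) - 1
    K1 = sum(k * p for k, p in enumerate(P))
    K2 = sum(k * k * p for k, p in enumerate(P))
    A = K2 / K1 ** 2
    new = []
    for k in range(kmax + 1):
        up = P[k + 1] if k < kmax else 0.0
        FA = (A / N) * ((k + 1) * up - k * P[k])   # trickle-down term F_A(t)
        FB = -((k - K1) / (K1 * N)) * P[k]         # redistribution term F_B(t)
        new.append(P[k] + FA + FB)
    return new

# initial exponential degree distribution with k_min = 2
N0, kmax = 20000, 30
raw = [0.0, 0.0] + [math.exp(-k / 4.0) for k in range(2, kmax + 1)]
Z = sum(raw)
P = [x / Z for x in raw]
S_values = [rel_entropy(P)]
for t in range(N0 // 2):                           # delete half of the nodes
    P = preferential_step(P, N0 - t)
    S_values.append(rel_entropy(P))
```

In this run $S_t$ decreases monotonically and loses most of its initial value
by the time half of the nodes have been deleted, consistent with the
convergence towards a Poisson degree distribution.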
\subsection{Convergence due to the trickle-down term}
To gain more insight into the structure of the $\Delta_{\rm A}(t)$ term,
given by Eq. (\ref{eq:DeltaA1}), it is useful to express it in the form
\begin{equation}
\Delta_{\rm A}(t) =
\frac{A_t}{N_t}
\left\{
\sum_{k=0}^{\infty}
\ln \left[
\frac{P_t(k)}{\pi(k|\langle K \rangle_t)}
\right]
(k+1) P_t(k+1)
-
\sum_{k=1}^{\infty}
\ln \left[
\frac{P_t(k)}{\pi(k|\langle K \rangle_t)}
\right]
k P_t(k)
\right\}.
\label{eq:DeltaA1b}
\end{equation}
\noindent
Taking a factor of $\langle K \rangle_t$ out of the curly brackets
and multiplying the numerators and denominators in the arguments of
the logarithmic functions by $k/\langle K \rangle_t$ (for $k \ge 1$), we obtain
\begin{eqnarray}
\Delta_{\rm A}(t) &=&
\frac{A_t}{N_t}
\left\{\langle K \rangle_t + \ln[P_t(0)] \right\} P_t(1)
\\
&+&
\frac{A_t \langle K \rangle_t}{N_t}
\left\{
\sum_{k=1}^{\infty}
\ln \left[
\frac{\widetilde P_t(k)}{\pi(k-1|\langle K \rangle_t)}
\right]
\widetilde P_t(k+1)
-
\sum_{k=1}^{\infty}
\ln \left[
\frac{\widetilde P_t(k)}{\pi(k-1|\langle K \rangle_t)}
\right]
\widetilde P_t(k)
\right\},
\nonumber
\label{eq:DeltaA1c}
\end{eqnarray}
\noindent
where
\begin{equation}
\widetilde P_t(k) = \frac{ k }{\langle K \rangle_t} P_t(k),
\label{eq:tP_t(k)}
\end{equation}
\noindent
is the degree distribution of nodes selected via a random edge in a random
network with degree distribution $P_t(k)$.
Similarly, the distribution
\begin{equation}
\pi(k-1|\langle K \rangle_t) = \frac{k}{\langle K \rangle_t} \pi(k|\langle K \rangle_t)
\label{eq:pi(k-1)}
\end{equation}
\noindent
can be interpreted as the degree distribution of nodes selected via a random edge
in an ER network with a Poisson degree distribution of the form
$\pi(k|\langle K \rangle_t)$.
Rewriting $\widetilde P_t(k+1)$
in the form $[\widetilde P_t(k+1)/\widetilde P_t(k)] \widetilde P_t(k)$,
one can express the
$\Delta_{\rm A}(t)$ term as a covariance of the form
\begin{eqnarray}
\Delta_{\rm A}(t) &=&
\frac{A_t}{N_t}
\left\{ \langle K \rangle_t P_t(1) + \ln[P_t(0)] P_t(1)
- \frac{P_t(1)}{\langle K \rangle_t}
S[\widetilde P_t(k) || \pi(k-1|\langle K \rangle_t)]
\right.
\\
&+&
\left.
\widetilde {\mathbb E}_t \left[
\frac{\widetilde P_t(k+1)}{\widetilde P_t(k)}
\ln \left(
\frac{\widetilde P_t(k)}{\pi(k-1|\langle K \rangle_t)}
\right)
\right]
-
\widetilde {\mathbb E}_t \left[
\frac{\widetilde P_t(k+1)}{\widetilde P_t(k)}
\right]
\widetilde {\mathbb E}_t \left[
\ln \left(
\frac{\widetilde P_t(k)}{\pi(k-1|\langle K \rangle_t)}
\right)
\right]
\right\},
\nonumber
\label{eq:DeltaA1d}
\end{eqnarray}
\noindent
where
$\widetilde {\mathbb E}_t[f(k)] = \sum_k f(k) \widetilde P_t(k)$.
In particular,
\begin{equation}
\widetilde {\mathbb E}_t \left[
\frac{\widetilde P_t(k+1)}{\widetilde P_t(k)}
\right]
=
\sum_{k=1}^{\infty}
\left( \frac{\widetilde P_t(k+1)}{\widetilde P_t(k)} \right)
\widetilde P_t(k) = 1 - \frac{P_t(1)}{\langle K \rangle_t}.
\end{equation}
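The size-biased distribution $\widetilde P_t(k)$, the shift identity of
Eq. (\ref{eq:pi(k-1)}) and the expectation value above are all easy to verify
numerically; the sketch below is a check of ours for a trial distribution.

```python
import math

def poisson(k, c):
    return math.exp(-c) * c ** k / math.factorial(k)

# a trial degree distribution P(k) on k = 0..30 (normalized exponential)
raw = [math.exp(-k / 3.0) for k in range(31)]
Z = sum(raw)
P = [x / Z for x in raw]
K = sum(k * p for k, p in enumerate(P))

# size-biased distribution: degree of a node reached by following a random edge
Pt = [k * p / K for k, p in enumerate(P)]
norm = sum(Pt)

# E[Pt(k+1)/Pt(k)] under Pt, summed over k >= 1
expectation = sum((Pt[k + 1] / Pt[k]) * Pt[k] for k in range(1, len(Pt) - 1))
identity = 1.0 - P[1] / K

# shift identity: pi(k-1|c) = (k/c) pi(k|c)
shift_err = max(abs(poisson(k - 1, K) - (k / K) * poisson(k, K))
                for k in range(1, 20))
```

All three relations hold to machine precision: $\widetilde P_t(k)$ is
normalized, the expectation value equals
$1 - P_t(1)/\langle K \rangle_t$, and the shifted Poisson identity is exact.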
\noindent
For the covariance to be negative,
in domains in which
$\widetilde P_t(k)$
is an increasing function [namely, $\widetilde P_t(k+1) > \widetilde P_t(k)$],
it should be lower than the corresponding
Poisson distribution
[namely, $\widetilde P_t(k) < \pi(k-1|\langle K \rangle_t)$],
while in domains in which
$\widetilde P_t(k)$
is a decreasing function
it should be higher than the corresponding Poisson distribution.
In order to prove that $\Delta_{\rm A}(t) < 0$
for any degree distribution $P_t(k)$ at all stages of the contraction process
we rewrite Eq. (\ref{eq:DeltaA1}) in
the form
\begin{equation}
\Delta_{\rm A}(t) = \Delta_{\rm A}^{\rm P}(t) - \Delta_{\rm A}^{\pi}(t),
\label{eq:DeltaSplit}
\end{equation}
\noindent
where
\begin{equation}
\Delta_{\rm A}^{\rm P}(t) =
\frac{A_t}{N_t}
\sum_{k=0}^{\infty}
\ln \left[ P_{t}\left(k\right) \right]
\left[ (k+1) P_t(k+1) - k P_t(k) \right],
\label{eq:DAP}
\end{equation}
\noindent
and
\begin{equation}
\Delta_{\rm A}^{\pi}(t) =
\frac{A_t}{N_t}
\sum_{k=0}^{\infty}
\ln \left[\pi \left(k| \langle K \rangle_t \right)\right]
\left[ (k+1) P_t(k+1) - k P_t(k) \right].
\label{eq:DApi}
\end{equation}
\noindent
Separating the sum in Eq. (\ref{eq:DAP})
into two sums and replacing $k+1$ by $k$ in the first sum, we obtain
\begin{equation}
\Delta_{\rm A}^{\rm P}(t) =
\frac{A_t}{N_t}
\left\{
\sum_{k=1}^{\infty}
\ln \left[ P_{t}\left(k-1\right) \right]
k P_t(k)
-
\sum_{k=1}^{\infty}
\ln \left[ P_{t}\left(k\right) \right]
k P_t(k)
\right\}.
\label{eq:DAPs}
\end{equation}
\noindent
Expressing the degree distribution $P_t(k)$ in terms of
$\widetilde P_t(k)$,
\begin{eqnarray}
\Delta_{\rm A}^{\rm P}(t) &=&
\frac{A_t \langle K \rangle_t}{N_t}
\left\{
\sum_{k=1}^{\infty}
\ln \left[ P_{t}\left(k-1\right) \right]
\widetilde P_t(k)
-\sum_{k=1}^{\infty}
\ln \left[ \widetilde P_{t}\left(k\right) \right]
\widetilde P_t(k)
\right\}
\nonumber \\
&+&
\frac{A_t}{N_t}
\sum_{k=1}^{\infty} \ln \left( \frac{k}{\langle K \rangle_t} \right) k P_t(k)
.
\label{eq:DAP2}
\end{eqnarray}
\noindent
Combining the first two terms
in Eq. (\ref{eq:DAP2})
and splitting the last term, we obtain
\begin{equation}
\Delta_{\rm A}^{\rm P} (t) =
- \frac{ A_t \langle K \rangle_t }{N_t}
\sum_{k=1}^{\infty}
\widetilde P_t(k) \ln \left[ \frac{ \widetilde P_t(k) }{ P_t(k-1) } \right]
+ \frac{A_t}{N_t}
\langle K \ln (K) \rangle_t
- \frac{A_t}{N_t} \langle K \rangle_t \ln (\langle K \rangle_t).
\label{eq:DAP3}
\end{equation}
In order to evaluate $\Delta_{\rm A}^{\pi}$ we insert
\begin{equation}
\ln [\pi(k| \langle K \rangle_t)] = - \langle K \rangle_t + k \ln (\langle K \rangle_t) - \ln (k!)
\end{equation}
into Eq. (\ref{eq:DApi})
and obtain
\begin{equation}
\Delta_{\rm A}^{\pi}(t) =
\frac{A_t}{N_t}
\sum_{k=0}^{\infty}
[- \langle K \rangle_t + k \ln (\langle K \rangle_t) - \ln(k!)]
\left[ (k+1) P_t(k+1) - k P_t(k) \right].
\end{equation}
\noindent
Carrying out the summation and
using the identity
\begin{equation}
\ln (k!) = \ln [(k+1)!] - \ln (k+1),
\end{equation}
\noindent
we obtain
\begin{equation}
\Delta_{\rm A}^{\pi}(t) =
\frac{A_t}{N_t}
\langle K \ln (K) \rangle_t
-
\frac{A_t}{N_t}
\langle K \rangle_t \ln (\langle K \rangle_t).
\label{eq:DApi2}
\end{equation}
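The summation leading from Eq. (\ref{eq:DApi}) to Eq. (\ref{eq:DApi2}) can be verified numerically for an arbitrary normalized degree distribution; the sketch below uses an illustrative truncated test distribution:

```python
import math

# arbitrary truncated test distribution (illustrative choice)
K_MAX = 80
raw = [(k + 1) * 0.8 ** k for k in range(K_MAX + 1)]
Z = sum(raw)
P = [x / Z for x in raw] + [0.0]        # P[K_MAX + 1] = 0 closes the telescoping sum
mean = sum(k * P[k] for k in range(K_MAX + 1))

def log_pi(k):
    """ln pi(k|<K>), the log of the Poisson pmf with the same mean degree."""
    return -mean + k * math.log(mean) - math.lgamma(k + 1)

# left-hand side: the sum of Eq. (eq:DApi) without the prefactor A_t/N_t
lhs = sum(log_pi(k) * ((k + 1) * P[k + 1] - k * P[k]) for k in range(K_MAX + 1))
# right-hand side: <K ln K> - <K> ln <K>, as in Eq. (eq:DApi2)
rhs = sum(k * math.log(k) * P[k] for k in range(1, K_MAX + 1)) - mean * math.log(mean)
print(lhs, rhs)
```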
\noindent
Inserting the results for $\Delta_{\rm A}^{\rm P}$ and
$\Delta_{\rm A}^{\pi}$,
from Eqs. (\ref{eq:DAP3}) and (\ref{eq:DApi2}), respectively,
into Eq. (\ref{eq:DeltaSplit}),
we obtain
\begin{equation}
\Delta_{\rm A}(t) =
- \frac{A_t \langle K \rangle_t }{N_t}
S[ \widetilde P_t(k) || P_t(k-1) ]
\end{equation}
\noindent
where
\begin{equation}
S[ \widetilde P_t(k) || P_t(k-1) ] =
\sum_{k=1}^{\infty}
\widetilde P_t(k) \ln \left[ \frac{ \widetilde P_t(k) }{ P_t(k-1) } \right]
\label{eq:Skkm1}
\end{equation}
\noindent
is the relative entropy of $\widetilde P_t(k)$ with respect to
$P_t(k-1)$.
Note that Eq. (\ref{eq:Skkm1}) is valid only if
$P_t(k-1) > 0$ for all values of $k$ for which
$\widetilde P_t(k) > 0$.
This means that the degree distribution should not have any gaps,
namely, values of $k'$ for which $P_t(k') =0$ while $P_t(k) > 0$
for any $k > k'$.
In practice, even if there are such gaps in the initial degree distribution
$P_0(k)$, they are quickly filled up due to the trickle-down term $F_{\rm A}(t)$ of the
master equation, given by Eq. (\ref{eq:dP/dtA}).
Since the relative entropy is non-negative, we find that
$\Delta_{\rm A}(t) < 0$ for any degree distribution $P_t(k)$ that differs from $\pi(k|\langle K \rangle_t)$.
Moreover, since the Poisson distribution is the only distribution for which
$S[ \widetilde P_t(k) || P_t(k-1) ] = 0$,
this process can converge only to the Poisson distribution.
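The Poisson fixed point can be verified directly: for $P_t(k) = \pi(k|\lambda)$ one has $k \, \pi(k|\lambda)/\lambda = \pi(k-1|\lambda)$, so the relative entropy of Eq. (\ref{eq:Skkm1}) vanishes. A numerical sketch ($\lambda = 7$ is an arbitrary choice):

```python
import math

# for a Poisson degree distribution, preferential selection followed by the
# shift k -> k-1 reproduces the same Poisson distribution, so the relative
# entropy S[P~(k) || P(k-1)] of Eq. (eq:Skkm1) vanishes
lam = 7.0
S = 0.0
for k in range(1, 60):
    log_p = -lam + k * math.log(lam) - math.lgamma(k + 1)       # ln pi(k|lam)
    log_pt = math.log(k) - math.log(lam) + log_p                # ln [k pi(k|lam) / lam]
    log_pm1 = -lam + (k - 1) * math.log(lam) - math.lgamma(k)   # ln pi(k-1|lam)
    S += math.exp(log_pt) * (log_pt - log_pm1)
print(S)
```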
In the random deletion scenario, only the $\Delta_A(t)$ term contributes to the
time evolution of $S_t$, while the $\Delta_B(t)$ term vanishes.
This means that in the random deletion scenario
the distance between $P_t(k)$ and the corresponding Poisson
distribution $\pi(k|\langle K \rangle_t)$ with the same mean degree
$\langle K \rangle_t$ decreases monotonically
at any stage during the contraction process.
In the preferential deletion and the propagating deletion scenarios
the convergence also depends on the $\Delta_{\rm B}(t)$ term,
which is considered below.
\subsection{Convergence due to the redistribution term}
In order to gain insight into the $\Delta_{\rm B}(t)$ term,
we rewrite Eq. (\ref{eq:DB1})
in the form
\begin{equation}
\Delta_{\rm B}(t) =
- \frac{B}{N_t}
\left\{
\sum_{k=1}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi\left(k| \langle K \rangle_t \right)}\right]
\frac{k}{ \langle K \rangle_t }
P_t(k)
-
\sum_{k=0}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k| \langle K \rangle_t )}\right]
P_t(k)
\right\}.
\label{eq:DB2}
\end{equation}
\noindent
Taking the factor of $1/\langle K \rangle_t$ out of the curly brackets, we obtain
\begin{equation}
\Delta_{\rm B}(t) =
- \frac{B}{\langle K \rangle_t N_t}
\left\{
\sum_{k=1}^{\infty}
k
\ln \left[\frac{P_{t}\left(k\right)}{\pi\left(k| \langle K \rangle_t \right)}\right]
P_t(k)
-
\langle K \rangle_t
\sum_{k=0}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k| \langle K \rangle_t )}\right]
P_t(k)
\right\}.
\label{eq:DB3}
\end{equation}
\noindent
The expression in the curly brackets is, in fact,
equal to the covariance
between $k$ and $\ln[P_t(k)/\pi(k|\langle K \rangle_t)]$ under the
distribution $P_t(k)$, namely
\begin{equation}
\Delta_{\rm B}(t) =
- \frac{B}{\langle K \rangle_t N_t}
\left\{
\left\langle
k
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k| \langle K \rangle_t )}\right]
\right\rangle
-
\langle K \rangle_t
\left\langle
\ln \left[\frac{P_{t}\left(k\right)}{\pi(k| \langle K \rangle_t )}\right]
\right\rangle
\right\}.
\label{eq:DB4}
\end{equation}
\noindent
Therefore, in the case of distributions for which the correlation between
$k$ and $\ln[P_t(k)/\pi(k|\langle K \rangle_t)]$ is positive, the term in the
curly brackets is positive and $\Delta_{\rm B}(t) < 0$.
In this case the $\Delta_{\rm B}(t)$ term contributes to the convergence of
$P_t(k)$ towards a Poisson distribution.
Such positive correlation essentially implies that for large values of $k$,
$P_t(k)$ tends to be larger than $\pi(k|\langle K \rangle_t)$, namely,
it has a heavier tail than the Poisson distribution
with the same mean value.
Since network growth processes generically lead to fat-tailed distributions
such as the power-law distributions of scale-free networks, it is expected
that most empirical networks will exhibit a positive correlation between
$k$ and $\ln[P_t(k)/\pi(k|\langle K \rangle_t)]$.
In those cases in which the correlation between
$k$ and $\ln[P_t(k)/\pi(k|\langle K \rangle_t)]$ is negative, the term in the
curly brackets is negative and $\Delta_{\rm B}(t)>0$.
In this case the $\Delta_{\rm B}(t)$ term works against the convergence
of $P_t(k)$ towards a Poisson distribution.
However, comparing the coefficients of $\Delta_{\rm A}(t)$ and $\Delta_{\rm B}(t)$
one finds that the coefficient of $\Delta_{\rm A}(t)$ is effectively larger by a factor
of $\langle K^2 \rangle/\langle K \rangle$ than the coefficient of $\Delta_{\rm B}(t)$.
Therefore, it is expected that the $\Delta_{\rm A}(t)$ term will be dominant and
induce the convergence of $P_t(k)$ towards Poisson even in those cases
in which $\Delta_{\rm B}(t)>0$.
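The sign of the covariance in Eq. (\ref{eq:DB4}) can be probed numerically. The sketch below uses a geometric degree distribution (an illustrative heavy-tailed choice, not one of the scenarios studied here) and confirms that its heavier-than-Poisson tail yields a positive covariance, and hence $\Delta_{\rm B}(t) < 0$:

```python
import math

# geometric degree distribution with mean 5 -- an illustrative choice whose
# tail is heavier than that of the Poisson distribution with the same mean
mean = 5.0
q = mean / (mean + 1.0)
K_MAX = 300
P = [(1 - q) * q ** k for k in range(K_MAX + 1)]

def log_pi(k):  # ln pi(k|<K>)
    return -mean + k * math.log(mean) - math.lgamma(k + 1)

f = [math.log(P[k]) - log_pi(k) for k in range(K_MAX + 1)]
E_k = sum(k * P[k] for k in range(K_MAX + 1))
E_f = sum(P[k] * f[k] for k in range(K_MAX + 1))      # = S[P || pi], positive
E_kf = sum(k * P[k] * f[k] for k in range(K_MAX + 1))
cov = E_kf - E_k * E_f   # the expression in the curly brackets of Eq. (eq:DB4)
print(cov)
```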
To gain more insight into the sign of $\Delta_{\rm B}(t)$
from a different perspective, we use
Eqs. (\ref{eq:tP_t(k)}) and (\ref{eq:pi(k-1)})
to express $\Delta_{\rm B}(t)$
of Eq. (\ref{eq:DB2})
in the form
\begin{equation}
\Delta_{\rm B}(t) =
-
\frac{B}{N_t}
\sum_{k=1}^{\infty}
\ln \left[\frac{\widetilde P_{t}\left(k\right)}{\pi\left(k-1| \langle K \rangle_t \right)}\right]
\widetilde P_t(k)
+
\frac{B}{N_t}
\sum_{k=0}^{\infty}
\ln \left[\frac{P_{t}\left(k\right)}{\pi \left(k| \langle K \rangle_t \right)}\right]
P_t(k).
\label{eq:DeltaB}
\end{equation}
\noindent
The first sum in Eq. (\ref{eq:DeltaB}) is the
relative entropy of the degree distribution
$\widetilde P_t(k)$ with respect to the shifted Poisson
distribution $\pi(k-1| \langle K \rangle_t)$.
This is essentially a distance measure between the
degree distribution of nodes selected preferentially in
a network whose degree distribution is $P_t(k)$ and
the degree distribution of nodes selected preferentially
from the corresponding Poisson distribution with the
same mean degree.
The second term in Eq. (\ref{eq:DeltaB}) is the relative entropy of the
degree distribution $P_t(k)$ with respect to the
Poisson distribution $\pi(k|\langle K \rangle_t)$,
which is essentially a distance measure between $P_t(k)$ and $\pi(k|\langle K \rangle_t)$.
Thus, Eq. (\ref{eq:DeltaB}) can be written in the form
\begin{equation}
\Delta_{\rm B}(t) =
- \frac{B}{N_t}
\left\{ S[\widetilde P_t(k) || \pi(k-1|\langle K \rangle_t)] - S[P_t(k) || \pi(k | \langle K \rangle_t)]
\right\}.
\label{eq:DeltaB3}
\end{equation}
\noindent
If the distance between the degree distributions obtained under preferential selection
exceeds the distance between the degree distributions obtained under random selection, then
$\Delta_{\rm B}(t) < 0$, while in the opposite case $\Delta_{\rm B}(t) > 0$.
There is an important distinction between the two terms in Eq. (\ref{eq:DeltaB3}).
The second term is the relative entropy of $P_t(k)$ with respect to the
Poisson distribution $\pi(k| \langle K \rangle_t)$
with the same mean degree $\langle K \rangle_t$.
In contrast, the first term is the relative entropy of $\widetilde P_t(k)$ with
respect to the Poisson distribution $\pi(k-1|\langle K \rangle_t)$.
The mean degree of
$\widetilde P_t(k)$ is
\begin{equation}
\langle \widetilde K \rangle_t = \frac{ \langle K^2 \rangle_t }{ \langle K \rangle_t },
\end{equation}
\noindent
while the mean degree of $\pi(k-1| \langle K \rangle_t)$ is $\langle K \rangle_t + 1$.
Therefore, Eq. (\ref{eq:DeltaB3}) can be written in the form
\begin{eqnarray}
\Delta_{\rm B}(t)
&=&
- \frac{B}{N_t}
\left\{ S[\widetilde P_t(k) || \pi(k-1|\langle \widetilde K \rangle_t - 1)] - S[P_t(k) || \pi(k | \langle K \rangle_t)]
\right\}
\nonumber \\
&-& \frac{B}{N_t}
\delta S(\langle \widetilde K \rangle_t, \langle K \rangle_t + 1),
\label{eq:DeltaB4}
\end{eqnarray}
\noindent
where
$\delta S(\langle \widetilde K \rangle_t, \langle K \rangle_t+1)$
is given by Eq. (\ref{eq:DeltaS}).
This implies that $\Delta_{\rm B}(t) < 0$ as long as
\begin{equation}
S[\widetilde P_t(k) || \pi(k-1|\langle \widetilde K \rangle_t - 1)] > S[P_t(k) || \pi(k | \langle K \rangle_t)]
- \delta S(\langle \widetilde K \rangle_t, \langle K \rangle_t + 1).
\end{equation}
\noindent
Since
$\delta S(\langle \widetilde K \rangle_t, \langle K \rangle_t+1)$
is always positive and its value increases as $P(k)$ becomes broader,
this condition is expected to be satisfied for any degree distribution
that exhibits a heavy tail.
In our experience, degree distributions for which $\Delta_{\rm B} > 0$
are very special, and are usually constructed specifically for this purpose.
In those cases, $\Delta_{\rm A}$, which is always negative, as proven above,
is much larger in absolute value than $\Delta_{\rm B}$.
\section{Contraction of networks with given initial degree distributions}
Here we apply the framework presented above
to three examples of configuration model networks,
with a degenerate degree distribution (also known as random regular graphs),
an exponential degree distribution and
a power-law degree distribution
(scale-free networks).
\subsection{Random regular graphs}
A random regular graph (RRG) is a configuration model network in which all
the nodes are of the same degree, $k=c_0$, namely
\begin{equation}
P_0(k) = \delta_{k,c_0},
\label{eq:deg}
\end{equation}
\noindent
where $c_0$ is an integer.
Here we consider the case of
$c_0 \ge 3$, in which the giant component encompasses the whole network.
In order to leave room for contraction into a non-trivial degree distribution,
we choose RRGs with $c_0 \gg 1$. Since in node deletion processes
the degrees of nodes in the network can only decrease and never increase,
it is clear that the range of
degrees of the contracted network will be limited to $0 \le k \le c_0$.
This means that in the case that the initial network is an RRG the tail of
the degree distribution of the contracted network will be truncated
above $k=c_0$. Thus, the convergence towards Poisson is expected to
be relatively slow.
To evaluate the relative entropy of the initial RRG network with respect to
the corresponding Poisson distribution
we insert the degenerate distribution of Eq. (\ref{eq:deg})
into Eq. (\ref{eq:S}).
We obtain the initial relative entropy
\begin{equation}
S_0 = \ln \left[ \frac{1}{\pi(c_0|c_0)} \right].
\label{eq:Srrg}
\end{equation}
\noindent
Inserting the Poisson degree distribution
into Eq. (\ref{eq:Srrg}) we obtain
\begin{equation}
S_0 = c_0 - c_0 \ln (c_0) + \ln (c_0!).
\end{equation}
\noindent
Using the Stirling approximation to evaluate $\ln (c_0!)$,
we obtain
\begin{equation}
S_0 \simeq
\frac{1}{2} \ln (c_0) + \frac{1}{2} \ln (2 \pi).
\label{eq:S0rrg}
\end{equation}
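Eqs. (\ref{eq:Srrg}) and (\ref{eq:S0rrg}) are easy to check numerically; for $c_0 = 10$ the exact value is $S_0 \simeq 2.08$, and the Stirling approximation agrees with it to within about $1/(12 c_0)$:

```python
import math

c0 = 10
# exact initial relative entropy of an RRG, S_0 = -ln pi(c0|c0), Eq. (eq:Srrg)
S0_exact = c0 - c0 * math.log(c0) + math.lgamma(c0 + 1)
# Stirling approximation, Eq. (eq:S0rrg)
S0_stirling = 0.5 * math.log(c0) + 0.5 * math.log(2 * math.pi)
print(S0_exact, S0_stirling)
```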
Below we analyze the convergence of a configuration model network
with a degenerate degree distribution towards an ER graph structure upon
contraction.
In particular, we calculate the time-dependent degree distribution
$P_t(k)$ during contraction and examine its convergence towards $\pi(k|\langle K \rangle_t)$.
To this end we perform direct numerical integration of the
master equation (\ref{eq:dP/dt}) and computer simulations,
starting from a configuration model network
with a degree distribution
given by Eq. (\ref{eq:deg})
and evaluate the time-dependent relative entropy $S_t$.
In Fig. \ref{fig:2} we present the relative entropy $S_t$ as a function of time
(represented by $N_t/N_0 = 1 - t/N_0$)
for a random regular graph of size $N_0=10^4$ with
a degenerate degree distribution in which all the nodes are of degree $c_0=10$,
that contracts via: (a) random node deletion; (b) preferential node deletion; and (c) propagating node deletion.
The results obtained from numerical integration of the master equation (solid lines)
are in excellent agreement with the results obtained from computer simulations,
namely, direct simulations of contracting networks (circles).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution $P_t(k)$ of the contracting network
converges towards a Poisson distribution.
The decay rate of $S_t$ is comparable in all the three scenarios.
This implies that for extremely narrow degree distributions, such as the degenerate
distribution, the preferential and the propagating deletion scenarios do not exhibit
faster convergence than the random deletion scenario.
\begin{figure}
\begin{center}
\includegraphics[width=6.0cm]{fig2.eps}
\caption{
(Color online)
The relative entropy $S_t$ as a function of time
for a random regular graph
of initial size $N_0=10^4$ and initial degree $c_0=10$
that contracts via random deletion (a), preferential deletion (b) and propagating
deletion (c), obtained from numerical integration of the master equation (solid lines).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution of the contracting network
converges towards a Poisson distribution.
The master equation results
are in excellent agreement with the results obtained from computer simulations (circles).
Also, the initial value $S_0 \simeq 2.08$ is in perfect agreement with
the result obtained from Eq. (\ref{eq:S0rrg}).
}
\label{fig:2}
\end{center}
\end{figure}
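The decay of $S_t$ seen in Fig. \ref{fig:2} can be reproduced with a minimal Monte Carlo sketch. The stub-pairing construction below (in which occasional self-loops and multi-edges are tolerated), as well as the network size and the fraction of deleted nodes, are illustrative choices rather than the exact simulation protocol used here:

```python
import math
import random
from collections import Counter

random.seed(1)
N0, c0 = 2000, 10

# configuration-model pairing of stubs to build an RRG-like network
stubs = [v for v in range(N0) for _ in range(c0)]
random.shuffle(stubs)
adj = {v: [] for v in range(N0)}
for i in range(0, len(stubs), 2):
    a, b = stubs[i], stubs[i + 1]
    adj[a].append(b)
    adj[b].append(a)

def rel_entropy(adj):
    """KL divergence of the empirical degree distribution with respect to
    the Poisson distribution with the same mean degree."""
    degs = [len(nb) for nb in adj.values()]
    n = len(degs)
    mean = sum(degs) / n
    s = 0.0
    for k, c in Counter(degs).items():
        p = c / n
        log_pi = -mean + k * math.log(mean) - math.lgamma(k + 1)
        s += p * (math.log(p) - log_pi)
    return s

s0 = rel_entropy(adj)
# random deletion scenario: remove half the nodes uniformly at random
for v in random.sample(list(adj), N0 // 2):
    for u in adj[v]:
        if u != v and u in adj:
            adj[u].remove(v)   # one copy per edge, so multi-edges are handled
    del adj[v]
s_half = rel_entropy(adj)
print(s0, s_half)
```

The initial value reproduces $S_0 \simeq 2.08$ exactly (all degrees equal $c_0$), and after half the nodes are deleted the relative entropy has dropped by more than an order of magnitude.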
In Fig. \ref{fig:3}(a) we present the degree distribution $P_0(k)$ of a random regular graph (solid line)
of size $N_0=10^4$ with a degenerate degree
distribution in which all the nodes are of degree $c_0=10$.
The corresponding Poisson distribution
with the same mean degree
$\langle K \rangle_0=c_0$
is also shown (dashed line).
Clearly, it is highly dissimilar to the degenerate distribution.
The random regular graph undergoes a network contraction process
via the random node deletion scenario.
In Fig. \ref{fig:3}(b) we present the degree distribution $P_t(k)$ of the contracted network
at time $t=8000$, where the contracted network size is $N_t=2000$.
The results obtained from the numerical integration of the master equation (solid line)
are in excellent agreement with the results of computer simulations (circles).
Both have converged very well towards the
corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree $\langle K \rangle_t$ (dashed line).
\begin{figure}
\begin{center}
\includegraphics[width=13.4cm]{fig3.eps}
\caption{
(Color online)
(a) The degree distribution $P_0(k)$ of a random regular graph (solid line)
in which all the nodes are of degree $c_0=10$.
The circles represent the degree sequence of a single network
instance of $N_0=10^4$
nodes, which was used in the computer simulations.
The corresponding Poisson distribution
with the same mean degree is also shown (dashed line).
The network contracts via random node deletion.
(b) The degree distribution $P_t(k)$ of the contracted network
at time $t=8000$, when the network size is reduced to $N_t=2000$.
The results obtained from numerical integration of the master equation (solid line)
are in excellent agreement with the results obtained
from computer simulations (circles).
Both have converged very well towards the
corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree $\langle K \rangle_t$ (dashed line).
}
\label{fig:3}
\end{center}
\end{figure}
\subsection{Configuration model networks with exponential degree distributions}
Consider a configuration model network with an exponential degree
distribution of the form $P_0(k) \sim e^{- \alpha k}$,
where $k \ge k_{\rm min}$
and $k_{\rm min}$ is the lower cutoff of the initial degree distribution.
It is convenient to parametrize the degree distribution
using the mean degree $\langle K \rangle_0$,
in the form
\begin{equation}
P_0(k) =
\left\{
\begin{array}{ll}
0 & \ \ \ \ \ k < k_{\rm min} \\
D \left( \frac{\langle K \rangle_0 - k_{\rm min}}{\langle K \rangle_0 - k_{\rm min} + 1} \right)^k
& \ \ \ \ \ k \ge k_{\rm min},
\end{array}
\right.
\label{eq:exp}
\end{equation}
\noindent
where $D$ is the normalization constant, given by
\begin{equation}
D =
\frac{1}{(\langle K \rangle_0 - k_{\rm min})+1}
\left( \frac{ \langle K \rangle_0 - k_{\rm min} }
{ \langle K \rangle_0 - k_{\rm min} + 1 } \right)^{- k_{\rm min}}.
\end{equation}
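With this parametrization, the normalization and the mean degree of Eq. (\ref{eq:exp}) can be confirmed by direct summation of the geometric series (the truncation point below is an arbitrary numerical cutoff):

```python
# exponential degree distribution of Eq. (eq:exp) with <K>_0 = 20, k_min = 10
k_min, K0 = 10, 20.0
q = (K0 - k_min) / (K0 - k_min + 1.0)     # decay factor of the distribution
D = q ** (-k_min) / (K0 - k_min + 1.0)    # normalization constant

P = {k: D * q ** k for k in range(k_min, 600)}   # 600 is a numerical cutoff
norm = sum(P.values())
mean = sum(k * p for k, p in P.items())
print(norm, mean)
```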
\noindent
Below we evaluate the relative entropy of an initial network
with an exponential degree distribution with respect to
the corresponding Poisson distribution.
Inserting the exponential degree distribution of Eq. (\ref{eq:exp})
into Eq. (\ref{eq:Shannon})
and carrying out the summation, we obtain the Shannon entropy
\begin{eqnarray}
S[P_0(k)]
&=& -
\sum_{k=k_{\rm min}}^{\infty} P_0(k) \ln [ P_0(k) ]
\nonumber \\
&=&
- (\langle K \rangle_0 - k_{\rm min}) \ln (\langle K \rangle_0 - k_{\rm min})
\nonumber \\
&+& (\langle K \rangle_0 - k_{\rm min}+1) \ln (\langle K \rangle_0 - k_{\rm min}+1).
\label{eq:S0exp}
\end{eqnarray}
\noindent
In order to calculate the cross-entropy
$C[P_0(k) || \pi(k|\langle K \rangle_0)]$,
we insert the exponential
distribution $P_0(k)$
of Eq. (\ref{eq:exp})
into Eq. (\ref{eq:Skk}).
We obtain
\begin{eqnarray}
C[P_0(k) || \pi(k|\langle K \rangle_0)] &=&
- \langle K \rangle_0 \ln(\langle K \rangle_0)
+ \sum_{k=k_{\rm min}}^{\infty} \left( k + \frac{1}{2} \right) \ln (k)
\left[ D \left( \frac{\langle K \rangle_0-k_{\rm min}}{\langle K \rangle_0 - k_{\rm min}+1} \right)^k \right]
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
-\frac{1}{2} \ln (2 \pi)
P_0(0)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right]
P_0(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right]
P_0(2).
\label{eq:Skke7}
\end{eqnarray}
\noindent
Carrying out the summation, we obtain
\begin{eqnarray}
C[P_0(k) || \pi(k|\langle K \rangle_0)] &=&
- \langle K \rangle_0 \ln (\langle K \rangle_0)
\nonumber \\
&-& \frac{1}{2(\langle K \rangle_0 - k_{\rm min} + 1)}
\left[ 2 \frac{\partial}{\partial \gamma}
\Phi
\left. \left( \frac{ \langle K \rangle_0-k_{\rm min} }
{ \langle K \rangle_0-k_{\rm min}+1},\gamma,k_{\rm min} \right) \right|_{\gamma=-1}
\right.
\nonumber \\
&+&
\left.
\frac{\partial}{\partial \gamma}
\Phi
\left. \left( \frac{ \langle K \rangle_0-k_{\rm min} }
{ \langle K \rangle_0-k_{\rm min}+1},\gamma,k_{\rm min} \right) \right|_{\gamma=0} \right]
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
-\frac{1}{2} \ln (2 \pi)
P_0(0)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right]
P_0(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right]
P_0(2),
\label{eq:Skke8}
\end{eqnarray}
\noindent
where $\Phi(x,\gamma,k)$ is the Lerch transcendent
\cite{Olver2010}.
The relative entropy takes the form
$S_0 = -S[P_0(k)] + C[P_0(k)||\pi(k|\langle K \rangle_0)]$,
where $S[P_0(k)]$ is given by Eq. (\ref{eq:S0exp})
and $C[P_0(k)||\pi(k|\langle K \rangle_0)]$
is given by Eq. (\ref{eq:Skke8}).
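Since the support starts at $k_{\rm min}=10$, where the Stirling error is negligible, $S_0$ can equally well be obtained by direct summation; for the parameters of Fig. \ref{fig:4} ($\langle K \rangle_0 = 20$, $k_{\rm min}=10$) this should reproduce the quoted value $S_0 \simeq 1.32$:

```python
import math

k_min, K0 = 10, 20.0
q = (K0 - k_min) / (K0 - k_min + 1.0)     # decay factor of Eq. (eq:exp)
D = q ** (-k_min) / (K0 - k_min + 1.0)    # normalization constant

# direct summation of S_0 = sum_k P_0(k) ln[P_0(k) / pi(k|<K>_0)]
S0 = 0.0
for k in range(k_min, 800):               # 800 is an arbitrary numerical cutoff
    p = D * q ** k
    log_pi = -K0 + k * math.log(K0) - math.lgamma(k + 1)
    S0 += p * (math.log(p) - log_pi)
print(S0)
```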
Below we analyze the convergence of a configuration model network
with an exponential degree distribution towards an ER graph structure upon
contraction.
In particular, we calculate the time-dependent degree distribution
$P_t(k)$ during contraction and examine its convergence towards $\pi(k|\langle K \rangle_t)$.
To this end we perform direct numerical integration of the
master equation (\ref{eq:dP/dt}) and computer simulations,
starting from a configuration model network
with a degree distribution
given by Eq. (\ref{eq:exp})
and evaluate the time-dependent relative entropy $S_t$.
\begin{figure}
\begin{center}
\includegraphics[width=5.8cm]{fig4.eps}
\caption{
(Color online)
The relative entropy $S_t$ as a function of time
for a configuration model network
of initial size $N_0=10^4$ and mean degree $\langle K \rangle_0=20$
with an exponential degree distribution in which $k_{\rm min}=10$,
that contracts via random deletion (a), preferential deletion (b) and propagating
deletion (c), obtained from numerical integration of the master equation (solid lines).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution of the contracting network
converges towards a Poisson distribution.
The convergence is dramatically faster in the preferential and the propagating deletion
scenarios compared to the random deletion scenario.
The master equation results
are in very good agreement with the results obtained from computer simulations (circles).
Also, the initial value $S_0 \simeq 1.32$ is in perfect agreement with
the result obtained from Eqs. (\ref{eq:S0exp}) and (\ref{eq:Skke8}).
}
\label{fig:4}
\end{center}
\end{figure}
In Fig. \ref{fig:4} we present the
relative entropy $S_t$ as a function of time
for a configuration model network
of initial size $N_0=10^4$ and initial mean degree $\langle K \rangle_0=20$
with an exponential degree distribution
that contracts via random deletion (a), preferential deletion (b) and propagating
deletion (c), obtained from numerical integration of the master equation (solid lines).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution of the contracting network
converges towards a Poisson distribution.
The convergence is dramatically faster in the preferential and the propagating deletion
scenarios compared to the random deletion scenario.
The master equation results
are in very good agreement with the results obtained from computer simulations (circles).
\begin{figure}
\begin{center}
\includegraphics[width=13.4cm]{fig5.eps}
\caption{
(Color online)
(a) The degree distribution $P_0(k)$ of a configuration model network
with mean degree $\langle K \rangle_0=20$
and an exponential degree distribution,
given by Eq. (\ref{eq:exp}) with $k_{\rm min}=10$ (solid line).
The circles represent the degree sequence of the $N_0=10^4$ nodes
in a single realization of the initial network, which was used in the computer simulation.
The corresponding Poisson distribution
with the same mean degree is also shown (dashed line).
The network contracts via the preferential node deletion scenario.
(b) The degree distribution $P_t(k)$ of the contracted network
at time $t=7000$, when the network size is reduced to $N_t=3000$,
obtained from numerical integration of the master equation (solid line).
The master equation results
are in excellent agreement with the results
obtained from computer simulations (circles).
The corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree is also shown (dashed line).
The master equation results and the computer simulation results
are in very good agreement with the
corresponding Poisson distribution with the same mean degree.
}
\label{fig:5}
\end{center}
\end{figure}
In Fig. \ref{fig:5}(a) we present the degree distribution $P_0(k)$ of a configuration model network
of size $N_0=10^4$ and an exponential degree distribution with mean degree $\langle K \rangle_0=20$ (solid line).
The corresponding Poisson distribution
with the same mean degree is also shown (dashed line).
The network contracts via preferential node deletion.
In Fig. \ref{fig:5}(b) we present the degree distribution $P_t(k)$ of the contracted network
at time $t=7000$, when the network size is reduced to $N_t=3000$,
obtained from numerical integration of the master equation (solid line)
and from computer simulations (circles).
The corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree is also shown (dashed line).
The master equation results, the computer simulation results and the
corresponding Poisson distribution are found to be in very good agreement
with each other.
In Fig. \ref{fig:6} we present the
time derivative of the relative entropy, $dS_t/dt=\Delta_{\rm A}(t)+\Delta_{\rm B}(t)$,
as a function of time,
for a configuration model network
of initial size $N_0=10^4$ and exponential degree distribution with mean degree $\langle K \rangle_0=20$
that contracts via preferential node deletion,
obtained from numerical integration of the master equation (solid lines).
The terms $\Delta_{\rm A}(t)$ (dashed line) and $\Delta_{\rm B}(t)$ (dotted line),
which sum up to the derivative $dS_t/dt$, are also shown.
As expected, both $\Delta_{\rm A}(t)$ and $\Delta_{\rm B}(t)$
are negative at all times during the contraction process.
\begin{figure}
\begin{center}
\includegraphics[width=6.4cm]{fig6.eps}
\caption{
(Color online)
The time derivative of the relative entropy, $dS_t/dt=\Delta_{\rm A}(t)+\Delta_{\rm B}(t)$,
as a function of time,
for a configuration model network
of initial size $N_0=10^4$ and exponential degree distribution with mean degree $\langle K \rangle_0=20$
and $k_{\rm min}=10$,
that contracts via preferential node deletion, obtained from numerical integration of the master equation (solid lines).
The terms $\Delta_{\rm A}(t)$ (dashed line) and $\Delta_{\rm B}(t)$ (dotted line),
which sum up to the derivative $dS_t/dt$, are also shown.
Note that both $\Delta_{\rm A}(t)$ and $\Delta_{\rm B}(t)$
are negative at all times during the contraction process.
}
\label{fig:6}
\end{center}
\end{figure}
\subsection{Configuration model networks with power-law degree distributions}
Consider a configuration model network with a power-law degree distribution
of the form
$P_0(k) \sim k^{-\gamma}$,
where
$1 \le k_{\rm min} \le k \le k_{\rm max}$.
Here we focus on the case of $\gamma > 2$, in which the
mean degree,
$\langle K \rangle_0$,
is bounded even for
$k_{\rm max} \rightarrow \infty$.
Power-law distributions do not exhibit a typical scale, and networks with such
degree distributions are therefore referred to as scale-free networks.
The normalized degree distribution is given by
\begin{equation}
P_0(k) =
\left\{
\begin{array}{ll}
0 & \ \ \ \ \ k<k_{\rm min} \\
D \ {k^{-\gamma}} & \ \ \ \ \ k_{\rm min} \le k \le k_{\rm max} \\
0 & \ \ \ \ \ k > k_{\rm max},
\end{array}
\right.
\label{eq:PLnorm}
\end{equation}
\noindent
where $D$ is the normalization constant, given by
\begin{equation}
D = D(\gamma,k_{\rm min},k_{\rm max}) = \frac{ 1 }{ \zeta(\gamma,k_{\rm min}) - \zeta(\gamma,k_{\rm max}+1) },
\label{eq:PLnormA}
\end{equation}
\noindent
and $\zeta(\gamma,k)$ is the Hurwitz zeta function
\cite{Olver2010}.
For $2 < \gamma \le 3$ the mean degree is bounded while the second moment,
$\langle K^2 \rangle$, diverges in the limit of
$k_{\rm max} \rightarrow \infty$.
For $\gamma > 3$ both moments are bounded.
The mean degree is given by
\begin{equation}
\langle K \rangle_0 =
\frac{ \zeta(\gamma-1,k_{\rm min}) - \zeta(\gamma-1,k_{\rm max}+1) }
{ \zeta(\gamma,k_{\rm min}) - \zeta(\gamma,k_{\rm max}+1) }.
\label{eq:Kmsf}
\end{equation}
\noindent
The second moment of the degree distribution, when finite, is
\begin{equation}
\langle K^2 \rangle_0 =
\frac{ \zeta(\gamma-2,k_{\rm min}) - \zeta(\gamma-2,k_{\rm max}+1) }
{ \zeta(\gamma,k_{\rm min}) - \zeta(\gamma,k_{\rm max}+1) }.
\label{eq:K2msf}
\end{equation}
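Since $k_{\rm max}$ is finite, the Hurwitz zeta expressions in Eqs. (\ref{eq:Kmsf}) and (\ref{eq:K2msf}) reduce to finite sums that can be evaluated directly. The sketch below uses the parameters of Fig. \ref{fig:7} and also confirms the heavy-tail condition $\langle K^2 \rangle_0 / \langle K \rangle_0 > \langle K \rangle_0 + 1$, which is relevant to the sign of $\Delta_{\rm B}(t)$ discussed above:

```python
# power-law degree distribution of Eq. (eq:PLnorm), parameters of Fig. 7
gamma, k_min, k_max = 2.65, 10, 100
D = 1.0 / sum(k ** (-gamma) for k in range(k_min, k_max + 1))
P = {k: D * k ** (-gamma) for k in range(k_min, k_max + 1)}

mean = sum(k * p for k, p in P.items())        # <K>_0, Eq. (eq:Kmsf)
mean2 = sum(k * k * p for k, p in P.items())   # <K^2>_0, Eq. (eq:K2msf)
print(mean, mean2 / mean)
```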
Below we evaluate the relative entropy of an initial network
with a power law degree distribution with respect to
the corresponding Poisson distribution.
In order to calculate the Shannon entropy
$S[P_0(k)]$ we insert the power-law distribution of Eq. (\ref{eq:PLnorm})
into Eq. (\ref{eq:Shannon}).
We obtain
\begin{equation}
S[P_0(k)] =
-
\sum_{k=k_{\rm min}}^{\infty}
P_0(k) \ln [ P_0(k) ] =
-
\ln (D) + \gamma \sum_{k=k_{\rm min}}^{\infty}
D k^{- \gamma} \ln (k).
\label{eq:PLPsf}
\end{equation}
\noindent
Since $\ln (1) = 0$, the summation in Eq. (\ref{eq:PLPsf})
effectively starts from the larger of $k=2$ and $k_{\rm min}$,
denoted by $\overline k_{\rm min}=\max \{2,k_{\rm min} \}$.
We thus obtain
\begin{equation}
S[P_0(k)] =
-\ln (D) +
\gamma \sum_{ k= \overline k_{\rm min} }^{\infty}
D k^{- \gamma} \ln (k).
\label{eq:PLPsfi}
\end{equation}
\noindent
Carrying out the summation, we obtain
\begin{equation}
S[P_0(k)] =
- \ln (D)
+ \gamma D \left[ \zeta'(\gamma,k_{\rm max}+1) - \zeta'(\gamma,\overline k_{\rm min}) \right],
\label{eq:PLPsfis}
\end{equation}
\noindent
where
$\zeta'(\gamma,k) = \partial \zeta(\gamma,k)/\partial \gamma$.
In order to calculate the cross-entropy
$C[P_0(k) || \pi(k|\langle K \rangle_0)]$,
we insert the power-law
distribution $P_0(k)$ into Eq. (\ref{eq:Skk}).
We obtain
\begin{eqnarray}
C[P_0(k) || \pi(k|\langle K \rangle_0)] &=&
- \langle K \rangle_0 \ln (\langle K \rangle_0)
+ \sum_{k = \overline k_{\rm min}}^{\infty} \left( k + \frac{1}{2} \right) \ln (k) D k^{-\gamma}
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right] P_0(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right] P_0(2).
\label{eq:Skk17}
\end{eqnarray}
\noindent
Carrying out the summation, we obtain
\begin{eqnarray}
C[P_0(k) || \pi(k|\langle K \rangle_0)] &=&
- \langle K \rangle_0 \ln (\langle K \rangle_0)
+
D \left[ \zeta'(\gamma-1,\overline k_{\rm min}) - \zeta'(\gamma-1,k_{\rm max}+1) \right]
\nonumber \\
&+& \frac{D}{2}
\left[ \zeta'(\gamma, k_{\rm min}) - \zeta'(\gamma,k_{\rm max}+1) \right]
\nonumber \\
&+& \frac{1}{2} \ln (2 \pi)
+ \left[ 1 - \frac{1}{2} \ln (2 \pi) \right]
P_0(1)
\nonumber \\
&+& \left[ 2 - \frac{3}{2} \ln (2) - \frac{1}{2} \ln (2 \pi) \right]
P_0(2).
\label{eq:Skk8}
\end{eqnarray}
\noindent
The relative entropy
of the initial network with a power-law degree distribution given by Eq. (\ref{eq:PLnorm})
takes the form
$S_0 = -S[P_0(k)] + C[P_0(k)||\pi(k|\langle K \rangle_0)]$,
where $S[P_0(k)]$ is given by Eq. (\ref{eq:PLPsfis})
and $C[P_0(k)||\pi(k|\langle K \rangle_0)]$
is given by Eq. (\ref{eq:Skk8}).
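As in the exponential case, the support starts at $k_{\rm min}=10$ and the Stirling error is negligible, so $S_0$ can be evaluated by direct summation; for the parameters of Fig. \ref{fig:7} this should reproduce the quoted value $S_0 \simeq 2.59$:

```python
import math

# power-law degree distribution of Eq. (eq:PLnorm), parameters of Fig. 7
gamma, k_min, k_max = 2.65, 10, 100
D = 1.0 / sum(k ** (-gamma) for k in range(k_min, k_max + 1))
P = {k: D * k ** (-gamma) for k in range(k_min, k_max + 1)}
K0 = sum(k * p for k, p in P.items())   # mean degree <K>_0

# direct summation of S_0 = sum_k P_0(k) ln[P_0(k) / pi(k|<K>_0)]
S0 = sum(p * (math.log(p) - (-K0 + k * math.log(K0) - math.lgamma(k + 1)))
         for k, p in P.items())
print(S0)
```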
Below we analyze the convergence of a configuration model network
with a power-law degree distribution towards an ER graph structure upon
contraction.
In particular, we calculate the time-dependent degree distribution
$P_t(k)$ during contraction and examine its convergence towards $\pi(k|\langle K \rangle_t)$.
To this end we perform direct numerical integration of the
master equation (\ref{eq:dP/dt}) and computer simulations,
starting from a configuration model network
with a degree distribution
given by Eq. (\ref{eq:PLnorm})
and evaluate the time-dependent relative entropy $S_t$.
\begin{figure}
\begin{center}
\includegraphics[width=5.8cm]{fig7.eps}
\caption{
(Color online)
The relative entropy $S_t$ as a function of time
for a configuration model network
with a power-law degree distribution
of initial size $N_0=10^4$ and mean degree $\langle K \rangle_0=20$,
where $k_{\rm min}=10$, $k_{\rm max}=100$ and $\gamma=2.65$,
that contracts via random deletion (a), preferential deletion (b) and propagating
deletion (c), obtained from numerical integration of the master equation (solid lines).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution of the contracting network
converges towards a Poisson distribution.
The convergence is dramatically faster in the preferential and the propagating deletion
scenarios than in the random deletion scenario.
The master equation results
are in very good agreement with the results obtained from computer simulations (circles).
Also, the initial value $S_0 \simeq 2.59$ is in perfect agreement with
the result obtained from Eqs. (\ref{eq:PLPsfis}) and (\ref{eq:Skk8}).
}
\label{fig:7}
\end{center}
\end{figure}
In Fig. \ref{fig:7} we present the
relative entropy $S_t$ as a function of time
for a configuration model network
with a power-law degree distribution,
of initial size $N_0=10^4$ and initial mean degree $\langle K \rangle_0=20$,
where $k_{\rm min}=10$, $k_{\rm max}=100$ and $\gamma=2.65$,
that contracts via random deletion (a), preferential deletion (b) and propagating
deletion (c), obtained from numerical integration of the master equation (solid lines).
In all three cases the relative entropy quickly decays,
which implies that the degree distribution of the contracting network
converges towards a Poisson distribution.
The convergence is dramatically faster in the preferential and the propagating deletion
scenarios than in the random deletion scenario.
The master equation results
are in very good agreement with the results obtained from computer simulations (circles).
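The decay of $S_t$ under random deletion can also be reproduced with a short simulation. The sketch below uses smaller, hypothetical parameters (ours, chosen for speed, not those of Fig. \ref{fig:7}): it builds a configuration-model-style multigraph by random stub matching, deletes nodes uniformly at random together with their links, and compares the relative entropy of the degree distribution before and after contraction.

```python
import math, random

random.seed(7)  # illustrative run; any seed shows the same trend

# Build a configuration-model-style multigraph: draw degrees from a power
# law (k_min=3, k_max=30, gamma=2.5 -- smaller than in the text, purely to
# keep the simulation fast), then pair the stubs at random.
N0, K_MIN, K_MAX, GAMMA = 2000, 3, 30, 2.5
weights = [k ** (-GAMMA) for k in range(K_MIN, K_MAX + 1)]
degrees = random.choices(range(K_MIN, K_MAX + 1), weights=weights, k=N0)
stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
if len(stubs) % 2:
    stubs.pop()  # drop one stub so all stubs can be paired
random.shuffle(stubs)
adj = {v: [] for v in range(N0)}
for u, v in zip(stubs[::2], stubs[1::2]):
    adj[u].append(v)
    adj[v].append(u)

def kl_to_poisson(adj):
    """Relative entropy S = sum_k P(k) ln[ P(k) / pi(k|<K>) ]."""
    degs = [len(nb) for nb in adj.values()]
    mean = sum(degs) / len(degs)
    P = {}
    for d in degs:
        P[d] = P.get(d, 0) + 1 / len(degs)
    def pois(k):
        return math.exp(-mean + k * math.log(mean) - math.lgamma(k + 1))
    return sum(p * math.log(p / pois(k)) for k, p in P.items())

S_initial = kl_to_poisson(adj)
# Random deletion: remove uniformly chosen nodes together with their edges.
while len(adj) > N0 // 4:
    v = random.choice(list(adj))
    for u in adj.pop(v):
        if u in adj:
            adj[u] = [w for w in adj[u] if w != v]
S_final = kl_to_poisson(adj)
print(f"S_0 = {S_initial:.3f} -> S_t = {S_final:.3f}")
```

Even this crude simulation shows the contracted degree distribution moving much closer to the Poisson distribution with the same mean degree.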
In Fig. \ref{fig:8}(a) we present the degree distribution $P_0(k)$ of a configuration model network
of size $N_0=10^4$ and a power-law degree distribution with mean degree $\langle K \rangle_0=20$ (solid line).
The corresponding Poisson distribution
with the same mean degree is also shown (dashed line).
The network contracts via propagating node deletion.
In Fig. \ref{fig:8}(b) we present the degree distribution $P_t(k)$ of the contracted network
at $t=7000$, when the network size is reduced to $N_t=3000$,
obtained from numerical integration of the master equation (solid line)
and from computer simulations (circles).
The corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree is also shown (dashed line).
The master equation results, the computer simulation results and the
corresponding Poisson distribution are found to be in very good agreement
with each other.
\begin{figure}
\begin{center}
\includegraphics[width=13.4cm]{fig8.eps}
\caption{
(Color online)
(a) The degree distribution $P_0(k)$ of a configuration model network
with a power-law degree distribution,
given by Eq. (\ref{eq:PLnorm}), and mean degree
$\langle K \rangle_0=20$ (solid line),
where $k_{\rm min}=10$, $k_{\rm max}=100$ and $\gamma=2.65$,
is shown on a log-log scale.
The circles represent the degree sequence of the $N_0=10^4$ nodes
in a single realization of the initial network, which was used in the computer simulation.
The corresponding Poisson distribution
with the same mean degree is also shown (dashed line).
The network contracts via the propagating node deletion scenario.
(b) The degree distribution $P_t(k)$ of the contracted network
at time $t=7000$, when the network size is reduced to $N_t=3000$,
obtained from numerical integration of the master equation
is shown on a linear scale (solid line).
The master equation results
are in excellent agreement with the results
obtained from computer simulations (circles).
The corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$
with the same mean degree is also shown (dashed line).
The master equation results and the computer simulation results
are in very good agreement with the
corresponding Poisson distribution with the same mean degree.
}
\label{fig:8}
\end{center}
\end{figure}
In Fig. \ref{fig:9} we present the
time derivative of the relative entropy, $dS_t/dt$
as a function of time,
for a configuration model network
of initial size $N_0=10^4$ and a power-law degree distribution with mean degree $\langle K \rangle_0=20$
that contracts via propagating node
deletion, obtained from numerical integration of the master equation (solid lines).
The terms $\Delta_{\rm A}(t)$ (dashed line) and $\Delta_{\rm B}(t)$ (dotted line),
which sum up to the derivative $dS_t/dt$, are also shown.
As expected, both $\Delta_{\rm A}(t)$ and $\Delta_{\rm B}(t)$
are negative at all times during the contraction process.
\begin{figure}
\begin{center}
\includegraphics[width=6.4cm]{fig9.eps}
\caption{
(Color online)
The time derivative of the relative entropy $dS_t/dt$
as a function of time,
for a configuration model network
of initial size $N_0=10^4$ and a power-law degree distribution with mean degree $\langle K \rangle_0=20$,
where $k_{\rm min}=10$, $k_{\rm max}=100$ and $\gamma=2.65$,
that contracts via propagating node
deletion, obtained from numerical integration of the master equation (solid lines).
The terms $\Delta_{\rm A}(t)$ (dashed line) and $\Delta_{\rm B}(t)$ (dotted line),
which sum up to the derivative $dS_t/dt$, are also shown.
Note that both $\Delta_{\rm A}(t)$ and $\Delta_{\rm B}(t)$
are negative at all times during the contraction process.
}
\label{fig:9}
\end{center}
\end{figure}
\section{Discussion}
In Ref.
\cite{Tishby2019}
we used direct numerical integration of the master equation and computer simulations
to show that the degree distributions of
contracting networks converge towards the Poisson distribution.
To this end, we used the relative entropy as a distance measure between
the degree distribution $P_t(k)$ of the contracting network and the
corresponding Poisson distribution $\pi(k|\langle K \rangle_t)$, and showed that this
distance decreases as the network contracts.
A computer simulation of network contraction provides results for
a single instance of the initial network and a single stochastic path of
the contraction process. In order to obtain statistically significant results
for a given ensemble of initial networks and given network contraction scenario
one needs to combine the results of a large number of independent runs.
The direct numerical integration of the master equation is advantageous in the sense
that a single run of the numerical integration process provides results
for a whole ensemble of initial networks.
However, a given network ensemble represents a single point in the
high dimensional parameter space of possible network ensembles.
Therefore, in order to explore the general properties of network contraction
processes one needs to repeatedly apply the direct integration of the master equation
to a large sample of distinct network ensembles.
Our aim in this paper was to obtain rigorous analytical results for the
convergence of contracting networks towards the ER
network ensemble. To this end we devised a rigorous argument, which is based
on the master equation that describes the temporal evolution of the
degree distribution $P_t(k)$ and the relative entropy $S_t$.
Such an argument is advantageous over the direct numerical integration of
the master equation or
computer simulations in the sense that it is universally applicable to all
possible degree distributions.
The relative entropy $S[P(k) || Q(k)]$ of a distribution $P(k)$ with
respect to a distribution $Q(k)$ is a special case of the R\'enyi divergence
$S_{\alpha}[P(k) || Q(k)]$, with $\alpha=1$
\cite{Renyi1961}.
The choice of $\alpha=1$ is advantageous in the sense that it has
a natural information theoretic interpretation
\cite{Annibale2009,Roberts2011}.
The relative entropy is an asymmetric
distance measure, or quasi-distance
\cite{Deza2016}.
Interestingly, the relative entropy
is related to other distance measures between discrete probability distributions.
For example, the total variation distance between probability distributions
$P(k)$ and $Q(k)$ is given by
$T[ P(k), Q(k)] = \sum_{k} | P(k) - Q(k) |$,
namely, the sum of the differences (in absolute value) between the
probabilities assigned to all values of $k$ by the two distributions.
Clearly, for any two distributions $P(k)$ and $Q(k)$, the total variation distance
satisfies
$0 \le T[ P(k), Q(k)] \le 2$.
The relative entropy provides an additional upper bound on the total variation
distance via the Pinsker inequality, which takes the form
\cite{Pinsker1964,Kullback1966,Csiszar1967,Vajda1970}
\begin{equation}
T[ P(k), Q(k)] \le \sqrt{ 2 S[P(k) || Q(k)] }.
\end{equation}
\noindent
This relation implies that whenever the relative entropy between
$P(k)$ and $Q(k)$ vanishes, so does the total variation distance
between them, meaning that the two distributions become identical
in the $L_1$ norm.
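Pinsker's inequality is easy to verify numerically. The following sketch checks it on randomly generated distribution pairs, in the form $T \le \sqrt{2\,S}$ appropriate for the unnormalized $L_1$ distance $T$ defined above (the prefactor under the square root depends on the chosen normalization of $T$).

```python
import math, random

random.seed(0)

def kl(P, Q):
    """Relative entropy S[P || Q] (natural logarithm)."""
    return sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

def total_variation(P, Q):
    """T[P, Q] = sum_k |P(k) - Q(k)|, the unnormalized L1 form used above."""
    return sum(abs(p - q) for p, q in zip(P, Q))

# Check the bound T <= sqrt(2 S) on randomly generated distribution pairs.
for _ in range(1000):
    n = random.randint(2, 10)
    P = [random.random() + 1e-12 for _ in range(n)]
    Q = [random.random() + 1e-12 for _ in range(n)]
    sP, sQ = sum(P), sum(Q)
    P = [x / sP for x in P]
    Q = [x / sQ for x in Q]
    bound = math.sqrt(max(2 * kl(P, Q), 0.0))
    assert total_variation(P, Q) <= bound + 1e-9
print("Pinsker bound verified on 1000 random distribution pairs")
```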
In this paper we focused on the case of configuration model networks, which exhibit
a given degree distribution and no degree-degree correlations. The
theoretical framework presented here may provide the foundations for
the study of network contraction processes in a much broader class of
complex networks, which exhibit degree-degree correlations as well
as other structural correlations.
This will require a more general formulation of the relative entropy, expressed
in terms of the joint degree distributions of pairs of adjacent nodes,
which take into account the correlations between their degrees.
The theoretical framework presented here may be relevant in the broad
context of neurodegeneration, which is the progressive loss
of structure and function of neurons in the brain.
Such processes occur in
normal aging
\cite{Morrison1997}
as well as in
a large number of incurable neurodegenerative
diseases such as Alzheimer, Parkinson, Huntington and Amyotrophic
Lateral Sclerosis, which result in a gradual loss of cognitive and
motoric functions
\cite{Heemels2016}.
These diseases differ in the specific brain regions or circuits
in which the degeneration occurs.
The characterization of the evolving structure
using the relative entropy
may provide useful insight into
the structural aspects of
the loss of neurons and synapses in
neurodegenerative processes
\cite{Arendt2015}.
It is worth mentioning that there is
another class of network dismantling processes that involve
optimized attacks, which maximize the damage to the network for a
minimal set of deleted nodes
\cite{Braunstein2016,Zdeborova2016}.
Such optimization is achieved by first decycling the network,
namely, by selectively deleting nodes that reside on cycles, thus
driving the giant component into a tree structure.
The branches of the tree are then trimmed such
that the giant component quickly disintegrates.
Clearly, these optimized dismantling processes do not
converge towards an ER structure.
\section{Summary}
In summary, we have analyzed the structural evolution of
complex networks undergoing contraction processes via
generic node deletion scenarios, namely,
random deletion, preferential deletion
and propagating deletion.
Focusing on configuration model networks
we have shown using a rigorous argument
that upon contraction the degree distributions of these
networks converge towards a
Poisson distribution.
In this analysis we used
the relative entropy $S_t=S[P_t(k) || \pi(k|\langle K \rangle_t)]$
of the degree distribution $P_t(k)$ of the contracting
network at time $t$ with respect to the corresponding Poisson distribution
$\pi(k|\langle K \rangle_t)$ with the same mean degree $\langle K \rangle_t$
as a distance measure between $P_t(k)$ and Poisson.
The relative entropy
is suitable as a distance measure since it
satisfies $S_t \ge 0$ for any degree
distribution $P_t(k)$, while equality is obtained only for
$P_t(k) = \pi(k|\langle K \rangle_t)$.
We derived an equation for the time evolution
of the relative entropy $S_t$
during network contraction
and expressed its time derivative $dS_t/dt$ as a sum of two terms, $\Delta_{\rm A}(t)$
and $\Delta_{\rm B}(t)$.
We have shown that the first term satisfies $\Delta_{\rm A}(t) < 0$
for any degree distribution $P_t(k)$.
This means that the $\Delta_{\rm A}(t)$ term always pushes the relative entropy down towards zero,
driving the convergence of $P_t(k)$ towards Poisson.
For the $\Delta_{\rm B}(t)$ term we provide a condition
that can be used for any given degree distribution $P_t(k)$ to determine
whether this term would accelerate the convergence to Poisson or slow it down.
The condition implies that for degree distributions $P_t(k)$ whose tail falls more slowly than
the tail of the corresponding Poisson distribution, the $\Delta_{\rm B}(t)$ term
would accelerate the convergence to Poisson, while in the case that the tail falls more
quickly than Poisson the $\Delta_{\rm B}(t)$ term would slow down the convergence.
We analyzed the convergence for configuration model networks with
degenerate degree distributions (random regular graphs), exponential
degree distributions and power-law degree distributions (scale-free networks)
and showed that the relative entropy
decreases monotonically to zero during the contraction process,
reflecting the
convergence of the degree distribution towards a Poisson distribution.
Since the contracting networks remain uncorrelated,
this means that their structures converge towards an
Erd{\H o}s-R\'enyi (ER) graph structure,
substantiating earlier results obtained using
direct integration of the master equation and computer simulations
\cite{Tishby2019}.
This work was supported by the Israel Science Foundation grant no.
1682/18.
\section{Introduction}
Abstract solvers are a method to formally analyse solving
algorithms. In this methodology, the states of a computation are represented as
nodes of a graph, the solving techniques as edges between such nodes, the
solving process as a path in the graph, and formal properties of the
algorithms are reduced to related graph properties. This framework has several advantages over traditional pseudo-code-based descriptions: it is based on formal, well-known, yet simple mathematical objects, namely graphs, which helps $(i)$ in comparing solving algorithms, by comparing their related graphs, $(ii)$ in combining techniques of different algorithms to design novel solving solutions, by mixing arcs in the related graphs, and $(iii)$ in stating and proving formal properties of the solving algorithms, by means of reachability within the related graphs. Abstract solvers have already proved to be a useful tool for formally describing, comparing and composing solving
techniques in various fields such as Propositional Satisfiability (SAT) and Satisfiability Modulo Theories (SMT)~\cite{nie06}, Quantified SAT~\cite{bro15}, Answer Set Programming~\cite{lier10,lier11,blm14}, and Constraint ASP~\cite{lie14}.
In ASP, this methodology even led to the development of a new ASP solver, {\sc sup}~\cite{lier10}; so far, however, abstract solvers have mainly been applied to ASP solvers for brave reasoning tasks where, given an input query and a knowledge base expressed in ASP, answers are witnessed by individual ASP solutions, i.e., stable models~\cite{bar03,eite-etal-97f,gelf-lifs-88,gelf-lifs-91,mare-trus-98sm,nie99}.
Cautious reasoning, in which answers must be witnessed by all stable models, has also been studied in depth in the ASP literature. This task has found a significant number of interesting applications as well, including consistent query answering~\cite{ArenasBC03,MannaRT13}, data integration~\cite{Eiter05}, multi-context systems~\cite{bre07}, and ontology-based reasoning~\cite{eit08}. Two well-known ASP solvers, i.e., {{\sc dlv}\xspace}~\cite{LeonePFEGPS06} and {{\sc clasp}\xspace}~\cite{GebserKS12}, have been extended for computing cautious consequences of ASP programs.
More recently, \citeN{AlvianoDR14} presented a unified, high-level view of such solving procedures, and designed
several algorithms for cautious reasoning in ASP, including those implemented in {{\sc dlv}\xspace} and {{\sc clasp}\xspace}, borrowed from the backbone computation of Boolean formulas~\cite{JanotaLM15}: all these techniques are implemented (and tested) on top of the ASP solver {{\sc wasp}\xspace}~\cite{alv15}.
In this paper we design, implement and test novel abstract solutions for cautious reasoning tasks in ASP. We show how to improve the current abstract solvers~\cite{bro15b} for cautious reasoning in ASP with further techniques borrowed from backbone computation in SAT, in order to design new solving algorithms.
In particular, we import a technique called ``chunk'', which generalizes over- and under-approximation by simultaneously testing a set of atoms for addition to the under-approximation, and core-based algorithms, which can be considered either a solution per se, or a way of pruning the set of atoms to be considered, given that they cannot guarantee completeness.
By doing so, we also formally show, through a uniform treatment, that the algorithms for solving cautious reasoning tasks in ASP are strongly related to those for computing backbones of Boolean formulas. Finally, we implement some of the new solutions in the ASP solver \textsc{wasp}: the results of a wide experimental analysis confirm that abstract solvers are a useful tool also for designing solving procedures, given that the performance of the related implementations is overall comparable to state-of-the-art solutions on the benchmark problems from past ASP Competitions.
The paper is structured as follows. Section~\ref{sec:prel} introduces
needed preliminaries, including a review in Section~\ref{sec:cm} of current
algorithms for cautious reasoning through the abstract solving methodology. Section~\ref{sec:backbone} shows
how the algorithms for computing backbones of Boolean formulas can be imported into ASP, to design new solving algorithms. It also contains a general theorem showing the relation between backbone computation in SAT and cautious reasoning in ASP. Section~\ref{sec:exp} then presents the results of the new solutions on devoted ASP benchmarks. The paper ends by discussing
related work in Section~\ref{sec:related}, and by drawing conclusions in Section~\ref{sec:concl}.
\section{Preliminaries}
\label{sec:prel}
In this section, we first recall basics on (ground) non-disjunctive answer set programming (ASP) and Boolean logic formulas in Conjunctive Normal Form (CNF).
Then, we introduce the abstract solvers framework and its methodology.
Finally, we recall existing abstract solvers for computing cautious consequences of ASP programs.
\subsection{Boolean Formulas and Answer Set Programs}
We define (ground) non-disjunctive ASP programs and CNF formulas so as to underline similarities, in order to make it easier in later sections to compare algorithms working on CNF formulas with those working on ASP programs.
\paragraph{Syntax.}
Let $\Sigma$ be a propositional signature.
An element $a\in \Sigma$ is called an \textit{atom} or a \emph{positive literal}. The negation of an atom $a$, in symbols $\logicalnot a$, is called a \emph{negative literal}.
Given a literal $l$, we define $|l|=a$, if $l=a$ or $l=\neg a$, for some $a\in\Sigma$.
For a set of atoms $X\subseteq \Sigma$, a \textit{literal relative to $X$} is a
literal $l$ such that $|l|\in X$, and
$\literalsof{X}$ is the set of all literals relative to
$X$.
We set $\bar{l}=a$, if $l=\neg a$, and $\bar{l}=\neg a$, if $l=a$.
A \emph{clause} is a finite set of literals (seen as a disjunction).
A \emph{CNF formula} is a finite set of clauses (seen as a conjunction).
Given a set of literals $M$, we denote by $M^+$ the set of positive literals of
$M$, by $M^-$ the set of negative literals of $M$, and by $\overline{M}$ the set $\{\bar{l}\mid l\in M\}$.
We say that $M$ is \emph{consistent} if it does not contain
both a literal and its negation.
A (non-disjunctive) \textit{rule} is a pair $(A,B)$, written
$A \aspimplication B$, where $B$ is a finite set of literals and
$A$ is an atom or the empty set.
We may write a rule as $A \aspimplication B^+,B^-$,
as an abbreviation for $A \aspimplication B^+ \setunion B^-$, and
$A \aspimplication l,B$ as an abbreviation for
$A \leftarrow \{l\} \setunion B$.
A \textit{program} is a finite set of rules.
Given a set of literals $M$, a program $\Pi$, and a CNF formula $\Phi$, we denote by $\atoms{M}$, $\atoms{\mathit{\Pi}}$, and $\atoms{\Phi}$ the set of
atoms occurring in $M$, $\mathit{\Pi}$, and $\Phi$, respectively.
It is important to emphasize here that the interpretation of negation is different in propositional formulas and in ASP programs. Indeed, in propositional formulas $\neg$ represents the classical negation, while in ASP programs it represents the \textit{negation by default}.
\paragraph{Semantics.}
An \emph{assignment} to a set $X$ of atoms is a total mapping
from $X$ to $\{\bot,\top\}$.
We identify a consistent
set $M$ of literals with an assignment to
$\atoms{M}$ such that
$a \in M$ iff $a$ is mapped to $\top$, and
$\logicalnot a \in M$ iff $a$ is mapped to
$\bot$.
A \emph{classical model} of a CNF formula $\Phi$ is an assignment $M$ to $\atoms{\Phi}$
such that for each clause $C \in \Phi$, $M \cap C \neq \emptyset$.
A \emph{classical model} of a program $\mathit{\Pi}$
is an assignment $M$ to $\atoms{\mathit{\Pi}}$ such that for
each rule $(A,B)$ $\in$ $\mathit{\Pi}$,
$A\cap M\neq \emptyset$ or $B\not\subseteq M$.
We denote by $M(\Phi)$ (resp., $M(\Pi)$) the set of all classical models of $\Phi$ (resp., $\Pi$).
The \emph{reduct} $\reduct{\mathit{\Pi}}{X}$ of a program $\mathit{\Pi}$
w.r.t. a set of atoms $X$ is obtained from $\mathit{\Pi}$ by deleting
each rule $A \aspimplication B^+,B^-$ such that
$X\cap\atoms{B^-}\neq\emptyset$
and replacing each remaining rule $A \aspimplication B^+,B^-$ with
$A \aspimplication B^+$.
An \emph{answer set} (or \textit{stable model}) of a program $\mathit{\Pi}$ is an assignment
$M$ to $\atoms{\mathit{\Pi}}$ such that
$M^+$ is minimal among the $M_0^+$ such that $M_0$
is a classical model of $\reduct{\mathit{\Pi}}{M^+}$.
We denote by $AS(\Pi)$ the set of all answer sets of $\Pi$.
Given a formula $\Phi$ and a program $\Pi$, we define
$\backboneof{\Phi} = \bigcap_{M\in M(\Phi)} M^+$%
and
$\cautiousof{\mathit{\Pi}} = \bigcap_{M\in AS(\Pi)} M^+$.
\begin{example}\label{ex:cautious}
Consider the following program $\mathit{\Pi} =\{
a \leftarrow \logicalnot b, \
b \leftarrow \logicalnot a, \
c \leftarrow a,\
c \leftarrow b\}$.
$\Pi$ has two answer sets, namely $A_1=\{\neg a, b, c\}$ and $A_2=\{a,\neg b, c\}$.
Hence, $A_1^+=\{b,c\}$ and $A_2^+=\{a,c\}$.
Therefore, $\cautiousof{\mathit{\Pi}} = \{b, c\} \cap \{a, c\} = \{c\}$.
Now, consider the following CNF formula $\Phi = \{ a\vee b, \neg a\vee c, \neg b \vee c\}$. $\Phi$ has three classical models, namely $M_1=\{\neg a, b, c\}$, $M_2=\{a,\neg b, c\}$, and $M_3=\{a,b,c\}$.
Hence, $M_1^+=\{b,c\}$, $M_2^+=\{a,c\}$, and $M_3^+=M_3$.
Therefore, $\backboneof{\Phi} = \{b, c\} \cap \{a, c\}\cap \{a,b,c\} = \{c\}$.
\end{example}
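The example can be verified by brute force. The sketch below (plain Python with naive enumeration; illustrative only, not how actual solvers work) computes the answer sets of $\mathit{\Pi}$ via the reduct-based definition, the classical models of $\Phi$, and intersects their positive parts.

```python
from itertools import combinations

ATOMS = ("a", "b", "c")

# Rules of Pi as (head, positive_body, negative_body) triples.
PI = [("a", (), ("b",)),
      ("b", (), ("a",)),
      ("c", ("a",), ()),
      ("c", ("b",), ())]

def least_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if set(body) <= M and head not in M:
                M.add(head)
                changed = True
    return M

def answer_sets(program, atoms):
    """X is stable iff X is the least model of the reduct of Pi w.r.t. X."""
    found = []
    for r in range(len(atoms) + 1):
        for X in map(set, combinations(atoms, r)):
            reduct = [(h, bp) for h, bp, bn in program if not (X & set(bn))]
            if least_model(reduct) == X:
                found.append(X)
    return found

AS = answer_sets(PI, ATOMS)
cautious = set(ATOMS).intersection(*AS) if AS else set(ATOMS)

# Clauses of Phi, with literals encoded as (atom, polarity) pairs.
PHI = [{("a", True), ("b", True)},
       {("a", False), ("c", True)},
       {("b", False), ("c", True)}]
models = [set(X) for r in range(len(ATOMS) + 1)
          for X in combinations(ATOMS, r)
          if all(any((v in X) == pol for v, pol in cl) for cl in PHI)]
backbone = set(ATOMS).intersection(*models)
print("answer sets:", sorted(map(sorted, AS)), "cautious:", sorted(cautious))
print("models:", sorted(map(sorted, models)), "backbone:", sorted(backbone))
```

Both computations yield $\{c\}$, matching the example.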
\subsection{Abstract Solvers for Solving CNF Formulas and ASP Programs}
\label{sec:as}
Now, we introduce the abstract solvers framework and the methodology employed later on, in Section~\ref{sec:cm} and Section~\ref{sec:backbone}, for computing cautious consequences of ASP programs. As mentioned in the introduction, abstract solvers are graphs that represent the status of the computation, and how it changes in response to the application of a technique in the search for a solution with certain properties, e.g., the satisfiability of a formula. Correspondingly, in the next paragraphs we first present the concept of a {\sl state}, which captures the possible stages of the computation in terms of assignments; then the {\sl transition rules} are introduced, showing how the state changes as a consequence of the application of a search technique if certain conditions are met. The last paragraph of this subsection introduces {\sl abstract solver graphs}, where the states are the possible nodes of the graph, while the transition rules define arcs among reachable nodes. \\
\paragraph{States.}
Given a set of atoms $X$, an \emph{action relative to $X$} is an element of the set $\mathcal{A}(X)=\{\mathit{over},\mathit{under}_\emptyset\}\cup \{\underapproxaction{\{a \}} \mid a \in X\}$.
For a set $X$ of atoms, a \emph{record} relative to~$X$ is a string $L$ from $\mathit{lit}(X)$ without repetitions.
A record $L$ is {\em consistent} if it does not contain both a literal and its negation.
We may view a record as the set containing all its elements stripped from their annotations.
For example, we may view $\neg a b$ as $\{\neg a,b\}$, and hence as the assignment that maps $a$ to $\bot$ and $b$ to $\top$.
Given a set $X$ of atoms, the set of \emph{states relative to $X$}, written
$\statesof{X}$, is the union of:
\begin{itemize}
\item[$(i)$] the set of \emph{core states relative to $X$}, that are all $\corestate{L} {O}{U}{A}{f}$ such that $L$ is a record relative to $X$;
$O$, $U$ $\subseteq X$; and
$A \in \mathcal{A}(X)$;
\item[$(ii)$] the set of \emph{control states relative to $X$}, that are all the $\controlstate{\mathit{Cont}}{L} {O}{U}{A}{f}$ where $O$, $U$ $\subseteq X$;
and \item[$(iii)$] the set of \emph{terminal states relative to $X$}, that are all $\terminalstate{W}$, where $W \subseteq X$.
\end{itemize}
Intuitively, these states represent computation steps of the algorithms that search for assignments with certain properties, in our case being backbone or cautious consequence.
The computation starts from a specific core state, called \textit{initial state}, depending on the specific algorithm (concrete examples are given later when presenting the techniques).
Other core states $\corestate{L} {O}{U}{A}{f}$
and the control states $\controlstate{\mathit{Cont}}{L} {O}{U}{A}{f}$ represent all the intermediate steps of the computation, where $L$ is the current state of the computation of a model;
$O$ is the current over-approximation of the solution; $U$ is the current under-approximation of the solution; and $A$ is the action currently carried out:
$\mathit{over}$ (resp. $\mathit{under}_\emptyset$ or $\underapproxaction{\{a\}}$) if over-approximation (resp. under-approximation) is being applied.
Intuitively, a core state represents the computation within a call to an ASP oracle, i.e., an ASP solver, while a control state controls the computation between different calls to ASP oracles, depending on over-approximation and under-approximation. The terminal states represent the end of the computation, i.e., the termination of the algorithm.
For instance, consider the following set of atoms $X=\{a,b,c\}$. Hence, $lit(X)=\{a,b,c,\neg a,$ $\neg b, \neg c\}$.
Therefore, $\neg a b_{\{a,b\},\emptyset,\mathit{over}}$ is an example of core state relative to $X$ where $a$ is assigned to false and $b$ to true, the over-approximation is the set $\{a,b\}$ while the under-approximation is empty, and the action executed is over. Other examples of core states are $\emptyset_{\{a\},\{b\},\mathit{under}_\emptyset}$ and
$\neg a\neg b\neg c_{\emptyset,\emptyset,\mathit{under}_{\{ a\}}}$.
Instead, $\mathit{Cont}(\{a,b\},\{a\})$, $\mathit{Cont}(\{a,b,c\},\emptyset)$, $\mathit{Cont}(\emptyset,\emptyset)$ are examples of control states relative to $X$, where e.g., in the first example the over-approximation is the set $\{a,b\}$ and the under-approximation is $\{a\}$.
$\mathit{Ok}(\{a,b,c\})$ and $\mathit{Ok}(\emptyset)$ are examples of terminal states relative to $X$, where the sets $\{a,b,c\}$ and $\emptyset$ are the respective solutions.
\paragraph{Transition Rules.}
\textit{Transition rules} are represented with the following structure:
$$
\begin{array}{llll}
ruleName & S & \Longrightarrow \ S' & \textrm{if}\left\{\ conditions\right.
\end{array}
$$
where,
$(i)$ $ruleName$ is the name of the rule;
$(ii)$ $S \Longrightarrow S'$ represents a transition from the starting state $S$ to the arriving state $S'$ (if the rule is applied);
and $(iii)$ $conditions$ is a set of conditions for the rule to be applicable.
We also consider a special transition rule, called $\mathit{Oracle}$, which starts from a state $L_{O,U,A}$ and arrives at a state $L'_{O,U,A}$, if $L=\emptyset$. In symbols:
$$
\begin{array}{llll}
\mathit{Oracle}
& \corestate {L} {O}{U} {A}{f}
& \Longrightarrow \ \corestate {L'} {O}{U} {A}{f}
& \textrm{if}\left\{\ L = \emptyset \right.
\end{array}
$$
Intuitively, the $\mathit{Oracle}$ rule represents an oracle call to an ASP [resp., SAT] solver, providing as result a set of literals $L'$ corresponding to the output of an ASP [resp., SAT] solver, i.e., $L'$ will correspond to an answer set of a logic program [resp., a classical model of a Boolean formula], if such an answer set [resp., classical model] exists, and to an inconsistent set of literals, otherwise. The transition rules in our paper are organized into $\mathit{Return}$ and $\mathit{Control}$ rules. Return rules deal with the outcome of an oracle call, or the application of a given technique, depending on the status of the set of literals $L$ returned, while Control rules start from a control state and direct the computation depending on the content of the over- and under-approximations.
\paragraph{Abstract Solver Graphs.}
Given a set of atoms $X$ and a set of transition rules $T$, we define an \textit{abstract solver graph} $G_{X,T}=\langle V_X, E_{T}\rangle$, where $(S,S')\in E_{T}$ if, and only if, a transition rule of the form $S \Longrightarrow S'$ can be applied.
We also denote the set of edges $E_{T}$ by the set of transition rules $T$.
We say that a state $S\in V_X$ is \textit{reachable from} a state $S'\in V_X$, if there is a path from $S'$ to $S$.
Every state reachable from the initial state is called \textit{reachable state}, and represents a possible state of a computation.
Each path starting from the initial state represents the description of possible search for a certain model.
We say that \emph{no cycle is reachable} if there is no
reachable state which is reachable from itself.
Finally, note that the transition rules $T$ and the set $X$ depend on the specific input program $\Pi$; thus, instead of writing $G_{X,T}$, we will write just $G_\Pi$.
\subsection{Naive Abstract Solvers for Computing Cautious Consequences}
\label{sec:cm}
In this section, we recall the abstract \textit{over-approximation}, \textit{under-approximation} and \textit{mixed} strategies
for computing cautious consequences of ASP programs.
\begin{definition}
Given a program $\Pi$ [resp., a CNF formula $\Phi$], we say that an abstract solver graph $G_\Pi$ [resp., $G_\Phi$] \textit{solves cautious reasoning} [resp., backbone computation], if
$(i)$ $G_\Pi$ [resp., $G_\Phi$] is finite and no cycle is reachable; and
$(ii)$ the unique terminal reachable state in $G_\Pi$ [resp., $G_\Phi$] is
$Ok(cautious(\Pi))$ [resp., $Ok(backbone(\Phi))$].
\end{definition}
In the following, without loss of generality, we only focus on the computation of cautious consequences for an ASP program $\Pi$.
\paragraph{General Structure.}
Given a program $\Pi$, the over-approximation is initially set to all atoms in the program, i.e., $O=\atoms{\Pi}$, while the under-approximation is initially empty, i.e., $U=\emptyset$. Note that $U\subseteq \mathit{cautious}(\Pi)\subseteq O$.
At each iteration, either an under-approximation or an over-approximation step is applied.
When they coincide, i.e., $U=O$,
the set of cautious consequences, i.e., $O$, has been found and the computation terminates.
This means that the state $\terminalstate{O}$ is a reachable state; hence, the full extent of states relative to $X$ comes into play.
The unique terminal state is $\terminalstate{W}$, where $W$ is the set of all cautious consequences of $\Pi$.
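The iterative scheme just described can be sketched as a loop around an answer set oracle; here we instantiate it with the over-approximation step (detailed in the following paragraph), where each answer set found shrinks $O$. The helper names are ours, and the oracle is a deliberately naive brute-force enumeration rather than real search.

```python
from itertools import combinations

# Program Pi of Example 1, as (head, positive_body, negative_body) triples.
PI = [("a", (), ("b",)), ("b", (), ("a",)),
      ("c", ("a",), ()), ("c", ("b",), ())]
ATOMS = ("a", "b", "c")

def least_model(rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if set(body) <= M and head not in M:
                M.add(head)
                changed = True
    return M

def oracle(program, atoms, O):
    """Answer set of program u {<- O}, i.e. an answer set X of the
    program falsifying at least one atom of O; None if there is none."""
    for r in range(len(atoms) + 1):
        for X in map(set, combinations(atoms, r)):
            reduct = [(h, bp) for h, bp, bn in program if not (X & set(bn))]
            if least_model(reduct) == X and not set(O) <= X:
                return X
    return None

O = set(ATOMS)          # over-approximation, initially atoms(Pi)
while True:
    M = oracle(PI, ATOMS, O)
    if M is None:       # no answer set falsifies O: O = cautious(Pi)
        break
    O &= M              # Find rule: O := O intersected with M+
print("cautious consequences:", sorted(O))
```

On the program of Example 1 the loop shrinks $O$ from $\{a,b,c\}$ down to $\{c\}$ and then terminates.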
\paragraph{Over-approximation.}
\begin{figure}[t] \footnotesize{
$$
\arraycolsep=3pt
\begin{array}{llll}
\multicolumn{4}{l}{\textrm{Return rules}}\\
\mathit{Fail}_{\mathit{over}}
& \corestate
{L}
{O}{U}
{\mathit{over}}{\top}
& \Longrightarrow
\controlstate
{\mathit{Cont}}
{L}
{O}{O}
{A}{\top}
& \textrm{if}\left\{
\begin{array}{l}
L\textrm{ is inconsistent}
\end{array}
\right.\\
\mathit{Find}
& \corestate
{L}
{O}{U}
{A}{f}
& \Longrightarrow
\controlstate
{\mathit{Cont}}
{L}
{O\setintersection L}{U}
{A}{\top}
& \textrm{if}\left\{
\begin{array}{l}
L\textrm{ is consistent and } L\neq\emptyset
\end{array}
\right.\\
\\
\multicolumn{4}{l}{\textrm{Control rules}}\\
\mathit{Terminal}
& \controlstate
{\mathit{Cont}}
{L}
{O}{U}
{A}{f}
& \Longrightarrow
\terminalstate{O}&
\textrm{if}\left\{
\begin{array}{l}
O=U
\end{array}
\right.\\
\mathit{OverApprox}
& \controlstate
{\mathit{Cont}}
{L}
{O}{U}
{A}{f}
& \Longrightarrow
\corestate
{\emptyset}
{O}{U}
{\mathit{over}}{\top}
& \textrm{if}\left\{
\begin{array}{l}
O\neq U
\end{array}
\right.\\
\end{array}
$$
}
\normalsize
\caption{The transition rules of $\mathit{ov}$.}
\label{fig:trover}
\end{figure}
Let
$\mathit{\Pi}_{O,U,\mathit{over}}$
$=$
$\mathit{\Pi}
\cup\{\leftarrow O\}$.
The initial state is
$\corestate{\emptyset}
{\atoms{\mathit{\Pi}}}{\emptyset}
{\mathit{over}}{\bot}$.
We call $\mathit{ov}$ the set of all the rules reported in
Figure~\ref{fig:trover}, that is
$\mathit{ov}=\{\mathit{Fail}_{\mathit{over}},\linebreak[1]\mathit{Find},\linebreak[1]\mathit{Terminal},\linebreak[1]\mathit{OverApprox}\}$.
Intuitively,
$\mathit{Fail}_{\mathit{over}}$ means that the oracle call did not find an answer set, so $O$ is the solution. If $\mathit{Find}$ is
triggered, instead, we move to a control state where $O$ is updated according to the answer set found: then, if
$O=U$, the solution is returned through
$\mathit{Terminal}$; otherwise the search is restarted
($L=\emptyset$) in an oracle state with $\mathit{OverApprox}$. For
any $\mathit{\Pi}$, the graph $\overstablegraph{\mathit{\Pi}}$ is
$(\statesof{\atoms{\mathit{\Pi}}},\{\mathit{Oracle}\}\cup\mathit{ov})$.
Thus, in $\overstablegraph{\mathit{\Pi}}$, the oracle is called to
find answer sets that reduce the over-approximation
$O$ in the $\mathit{over}$ action, unless no answer set exists. If an answer set $M$ is
found, then $M\cap\overline{O}\neq\emptyset$, as $\mathit{\Pi}_{O,U,\mathit{over}}$
$=$ $\mathit{\Pi}
\cup\{\leftarrow O\}$.
Indeed, assume by contradiction that $M\cap \overline{O}=\emptyset$; then $O\subseteq M$.
Hence, $M$ is not a model of the rule $(\emptyset, O)$: its body is satisfied ($O\subseteq M$) while its head is not ($M\cap \emptyset =\emptyset$).
Therefore, $M$ would not be a model of $\mathit{\Pi}_{O,U,\mathit{over}}$, against the assumption that $M$ is an answer set of $\mathit{\Pi}_{O,U,\mathit{over}}$.
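A procedural reading of $\overstablegraph{\mathit{\Pi}}$ can be sketched as follows. To keep the sketch self-contained, the oracle call on $\mathit{\Pi}\cup\{\leftarrow O\}$ is simulated by filtering a precomputed list of the answer sets of $\Pi$ (an answer set of the extended program is exactly an answer set of $\Pi$ not containing all of $O$); this illustrates the control flow of the graph, not the implementation of any actual solver.

```python
def cautious_over(all_answer_sets, atoms):
    """Over-approximation: O shrinks with every answer set found for
    Pi plus the constraint <- O; when that program is incoherent,
    O is the set of cautious consequences."""
    O = set(atoms)
    while True:
        # oracle on Pi plus {<- O}: an answer set not containing all of O
        M = next((M for M in all_answer_sets if not (O <= M)), None)
        if M is None:          # Fail_over: U := O, then Terminal
            return O
        O &= M                 # Find: O := O & M, then OverApprox restarts
```

On the running example (answer sets $\{a,c\}$ and $\{b,c\}$) the loop shrinks $O$ to $\{c\}$; on an incoherent program the first call fails and all atoms are returned, in accordance with the definition of cautious consequences.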
\paragraph{Under-approximation.}
Let
$\mathit{\Pi}_{O,U,\underapproxaction{\{a\}}}$
$=$ $\mathit{\Pi}\cup\{\leftarrow a\}$ and $\mathit{\Pi}_{O,U,\underapproxaction{\emptyset}}$ $=$ $\mathit{\Pi}$.
The initial state is
$\corestate{\emptyset}
{\atoms{\mathit{\Pi}}}{\emptyset}
{\mathit{under}_\emptyset}{\bot}$.
We call $\mathit{un}$ the set
$\{\mathit{Fail}_{\underapproxaction{}},\linebreak[1]\mathit{Find},\linebreak[1]\mathit{Terminal},\linebreak[1]\mathit{UnderApprox}\}$
containing the rules presented in Figure~\ref{fig:trunder} plus
$\mathit{Find}$ and $\mathit{Terminal}$ from Figure~\ref{fig:trover}. Intuitively,
$\mathit{Fail}_{\underapproxaction{}}$ updates the under-approximation in case the test on the atom $a$ failed (when $S=\emptyset$, i.e., on the initial call, it leaves the approximations unchanged), and leads to a control state, while $\mathit{UnderApprox}$ starts a new test if $\mathit{Find}$ is not applicable.
For any $\mathit{\Pi}$,
the graph $\understablegraph{\mathit{\Pi}}$ is
$(\statesof{\atoms{\mathit{\Pi}}},\{\mathit{Oracle}\}\cup\mathit{un})$.
In $\understablegraph{\mathit{\Pi}}$, again, a first oracle call takes place with
the action $\mathit{under}_\emptyset$, which provides the first over-approximation; subsequent calls use actions $\underapproxaction{\{a\}}$, where $a$ is the tested atom.
Figure~\ref{fig:example2-under} shows a possible path in $\understablegraph{\mathit{\Pi}}$ for the program $\Pi$ of Example~\ref{ex:cautious}.
For compactness, the syntax in which the path is presented is slightly different, with ``$\Longrightarrow$'' replaced by ``:'', and with the initial state not explicitly tagged.
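The under-approximation strategy admits an analogous sketch. As before, oracle calls on $\mathit{\Pi}\cup\{\leftarrow a\}$ are simulated on a precomputed list of the answer sets of $\Pi$; the candidate to test is picked deterministically here only to make the sketch reproducible, whereas the graph leaves the choice free.

```python
def cautious_under(all_answer_sets, atoms):
    """Under-approximation: each failed test on an atom a proves that
    a is a cautious consequence; each answer set found shrinks O."""
    O, U = set(atoms), set()
    M = next(iter(all_answer_sets), None)      # first call: under on Pi itself
    if M is None:
        return O                               # incoherent: every atom is cautious
    O &= M                                     # Find
    while O != U:                              # Terminal when O = U
        a = min(O - U)                         # UnderApprox: pick a candidate
        # oracle on Pi plus {<- a}: an answer set not containing a
        M = next((M for M in all_answer_sets if a not in M), None)
        if M is None:
            U |= {a}                           # Fail_under: a is cautious
        else:
            O &= M                             # Find
    return O
```

On the running example the trace mirrors Figure~\ref{fig:example2-under}: the test on $c$ fails, so $c$ enters $U$, and the test on $a$ yields the answer set $\{b,c\}$, which removes $a$ from $O$.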
\begin{figure}[t] \footnotesize{
$$
\arraycolsep=3pt
\begin{array}{llll}
\multicolumn{4}{l}{\textrm{Return rule}}\\
\mathit{Fail}_{\underapproxaction{}}
& \corestate
{L}
{O}{U}
{\underapproxaction{S}}{f}
& \Longrightarrow
\controlstate
{\mathit{Cont}}
{L}
{O}{U\cup S}
{\underapproxaction{S}}{f}
& \textrm{if}\left\{
\begin{array}{l}
L\textrm{ is inconsistent, and } S=\emptyset \mbox{ or } S=\{a\}
\end{array}
\right.\\
\\
\multicolumn{4}{l}{\textrm{Control rule}}\\
\mathit{UnderApprox}
& \controlstate
{\mathit{Cont}}
{L}
{O}{U}
{A}{f}
& \Longrightarrow
\corestate
{\emptyset}
{O}{U}
{\underapproxaction{\{a \}}}{\top}
& \textrm{if}\left\{
\begin{array}{l}
a\in O\setminus U
\end{array}
\right.\\
\end{array}
$$
}
\normalsize
\caption{The transition rules of $\mathit{un}$ that are not in
$\mathit{ov}$.}
\label{fig:trunder}
\end{figure}
\begin{figure}[t]
\footnotesize{
$$
\arraycolsep=3pt
\begin{array}{l|l}
\begin{array}{ll}
\multicolumn{2}{l}{ \mathit{\Pi}
= \mathit{\Pi}_{
\{a,b,c\},\emptyset,
\mathit{under}_\emptyset
}
= \left\{
\begin{array}{l}
a \leftarrow \logicalnot b\\
b \leftarrow \logicalnot a\\
c \leftarrow a\\
c \leftarrow b
\end{array}\right\} }\\
\\
\\
\\
\multicolumn{2}{l}{
\mathit{\Pi}_{
\{a,c\},\emptyset,
\underapproxaction{\{c\}}
}
= \mathit{\Pi}\cup\{
\leftarrow c
\} }\\
\\
\vspace{4pt}
\\
\multicolumn{2}{l}{
\mathit{\Pi}_{
\{a,c\},\{c\},
\underapproxaction{\{a\}}
}
= \mathit{\Pi}\cup\{
\leftarrow a
\} }\\
\\
\\
\end{array}
&
\begin{array}{ll}
&
\corestate
{\emptyset}
{\{a,b,c\}}{\emptyset}
{\mathit{under}_\emptyset}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{ac{\logicalnot b}}
{\{a,b,c\}}{\emptyset}
{\mathit{under}_\emptyset}{\top}
\\
\mathit{Find}:
&
\controlstate
{\mathit{Cont}}
{\decision{a}c{\logicalnot b}}
{\{a,c\}}{\emptyset}
{\mathit{under}_\emptyset}{\top}
\\
\\
\mathit{UnderApprox}:
&
\corestate
{\emptyset}
{\{a,c\}}{\emptyset}
{\underapproxaction{\{c\}}}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{\neg{c}\neg{a}bc}
{\{a,c\}}{\emptyset}
{\underapproxaction{\{c\}}}{\top}
\\
\mathit{Fail}_{\underapproxaction{}}:
&
\controlstate
{\mathit{Cont}}
{\neg{c}\neg{a}bc}
{\{a,c\}}{\{c\}}
{\underapproxaction{\{c\}}}{\top}
\\
\\
\mathit{UnderApprox}:
&
\corestate
{\emptyset}
{\{a,c\}}{\{c\}}
{\underapproxaction{\{a\}}}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{\neg{a}bc}
{\{a,c\}}{\{c\}}
{\underapproxaction{\{a\}}}{\top}
\\
\mathit{Find}:
&
\controlstate
{\mathit{Cont}}
{\neg{a}bc}
{\{c\}}{\{c\}}
{\underapproxaction{\{a\}}}{\top}
\\
\mathit{Terminal}:
&
\terminalstate{\{c\}}
\end{array}
\end{array}
$$
}
\caption{A path in $\understablegraph{\mathit{\Pi}}$.}
\label{fig:example2-under}
\end{figure}
\paragraph{Mixed strategy.}
An abstract mixed strategy can be obtained by defining $\mixedstablegraph{\mathit{\Pi}}$ as
$(\statesof{\atoms{\mathit{\Pi}}},$ $\{\mathit{Oracle}\}\cup\mathit{un}\cup\mathit{ov})$.
Therefore, it is possible to combine the techniques described by the graph for over-approximation with those of the graph for under-approximation, thereby enabling the design of new algorithms.
Here, we have two potential initial states, i.e., $\emptyset_{\mathit{atoms}(\Pi),\emptyset,A}$ with $A\in\{\mathit{over},\mathit{under}_\emptyset\}$, depending on whether over-approximation or under-approximation is applied first.
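One of the many interleavings allowed by $\mixedstablegraph{\mathit{\Pi}}$ can be sketched as a strict alternation of one over-approximation step and one under-approximation test; any other interleaving would be equally legal. As in the previous sketches, the oracle is simulated on a precomputed list of answer sets, so this is an illustration of the combined control flow, not of a concrete solver.

```python
def cautious_mixed(all_answer_sets, atoms):
    """Mixed strategy: alternate an over-approximation step
    (oracle on Pi plus {<- O}) and an under-approximation test
    (oracle on Pi plus {<- a}) until O = U."""
    O, U = set(atoms), set()
    turn_over = True
    while O != U:
        if turn_over:
            M = next((M for M in all_answer_sets if not (O <= M)), None)
            if M is None:
                return O              # Fail_over: U := O, Terminal
            O &= M                    # Find
        else:
            a = min(O - U)            # UnderApprox: test one candidate
            M = next((M for M in all_answer_sets if a not in M), None)
            if M is None:
                U |= {a}              # Fail_under: a is cautious
            else:
                O &= M                # Find
        turn_over = not turn_over
    return O
```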
\section{Advanced Abstract Solvers for Computing Cautious Consequences}
\label{sec:backbone}
In this section we import into ASP, by means of abstract solvers, further algorithms from \cite{JanotaLM15}.
First, we generalize the concepts of under- and over-approximation via chunks, which consider a set of atoms simultaneously.
Then, we model core-based algorithms.
Finally, we state a general theorem, which includes all previous results, that shows how the techniques presented can be combined to design new solving methods for finding cautious consequences of ASP programs, and states a strong analogy between algorithms for computing cautious consequences of ASP programs and those for backbones of CNF formulas.
The sets of states now also include states built from the following actions:
$\{\chunkapproxaction{N} \mid N\subseteq\atoms{\mathit{\Pi}}\}$, $\mathit{chunk}$, and
$\{\mathit{core}_N \mid N\subseteq lit(\atoms{\mathit{\Pi}})\}$.
\begin{figure}[t] \footnotesize{
$$
\arraycolsep=3pt
\begin{array}{llll}
\multicolumn{4}{l}{\textrm{Return rules}}\\
\mathit{Fail}_{\chunkapproxaction{}}
& \coreunderstate
{L}
{O}{U}
{\chunkapproxaction{N}}{f}
& \Longrightarrow
\controlunderstate
{\mathit{Cont}}
{L}
{O}{U\cup N}
{A}{f}
& \textrm{if}\left\{
\begin{array}{l}
L\textrm{ is inconsistent}
\end{array}
\right.\\
\\
\multicolumn{4}{l}{\textrm{Control rules}}\\
\mathit{Chunk}
& \controlunderstate
{\mathit{Cont}}
{L}
{O}{U}
{A}{f}
& \Longrightarrow
\coreunderstate
{\emptyset}
{O}{U}
{\chunkapproxaction{N}}{f}
& \textrm{if}\left\{
\begin{array}{l}
N\subseteq O\setminus U
\textrm{ and }
N\neq\emptyset
\end{array}
\right.\\
\end{array}
$$
}
\caption{The transition rules of $\mathit{ch}$ that are not in $\mathit{ov}$.}
\label{fig:classchunk}
\end{figure}
\normalsize
\subsection{Chunking}
\label{sec:ch}
In~\cite{JanotaLM15}, a more general technique for
under-approximation is presented, which allows testing multiple candidates at once (see Algorithm 5 in~\cite{JanotaLM15}).
We define $\mathit{ch}$ as the set
$\{\mathit{Fail}_{\chunkapproxaction{}},\linebreak[1]\mathit{Find},\linebreak[1]\mathit{Terminal},\linebreak[1]\mathit{Chunk}\}$
containing the rules presented in Figure~\ref{fig:classchunk} plus
$\mathit{Find}$ and $\mathit{Terminal}$ from Figure~\ref{fig:trover}. The newly
introduced rules in Figure~\ref{fig:classchunk} model the new technique. In
particular, $\mathit{Fail}_{\chunkapproxaction{}}$ updates the over- and
under-approximations accordingly in case the test
on the set $N$ fails (the ASP oracle call returned no answer set, thus all atoms in $N$ must be cautious consequences), and goes to a control state. Meanwhile,
$\mathit{Chunk}$ restarts a new ASP oracle call with a new (nonempty) set $N$
such that $N\subseteq O\setminus U$ in case the computation must continue (cf. the condition of this transition rule).
For any $\mathit{\Pi}$, the graph $\mathit{CS}_\mathit{\Pi}$ is
$(\statesof{\atoms{\mathit{\Pi}}},\{\mathit{Oracle}\}\cup\mathit{ch})$. The initial state is
$\corestate{\emptyset}
{\atoms{\mathit{\Pi}}}{\emptyset}
{\mathit{chunk}}{\top}$.
We define
$\mathit{\Pi}_{O,U,\chunkapproxaction{N}}$
as $\mathit{\Pi}\cup\{\leftarrow N\}$.
\begin{theorem}~\label{prop:chunkclass}
Let $\Pi$ be a program. Then, the graph $\mathit{CS}_\mathit{\Pi}$ solves cautious reasoning.
\end{theorem}
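The chunking strategy can be sketched procedurally as follows; the oracle on $\mathit{\Pi}\cup\{\leftarrow N\}$ is again simulated on a precomputed list of answer sets (an answer set of the extended program is an answer set of $\Pi$ not containing all of $N$), and the particular choice of the chunk $N$ is ours, since the graph only requires $N\subseteq O\setminus U$ and $N\neq\emptyset$.

```python
def cautious_chunk(all_answer_sets, atoms, chunk_size=2):
    """Chunking: test a set N of candidates at once; a failed oracle
    call proves that all atoms of N are cautious consequences."""
    O, U = set(atoms), set()
    M = next(iter(all_answer_sets), None)        # initial oracle call on Pi
    if M is None:
        return O                                 # incoherent: every atom is cautious
    O &= M                                       # Find
    while O != U:                                # Terminal when O = U
        N = set(sorted(O - U)[:chunk_size])      # Chunk: nonempty N in O \ U
        # oracle on Pi plus {<- N}: an answer set not containing all of N
        M = next((M for M in all_answer_sets if not (N <= M)), None)
        if M is None:
            U |= N                               # Fail_chunk: all of N is cautious
        else:
            O &= M                               # Find
    return O
```

With chunks of size 1 the sketch degenerates to the under-approximation strategy; on the modified running example (with $d\leftarrow c$, answer sets $\{a,c,d\}$ and $\{b,c,d\}$) it follows the path of Figure~\ref{fig:example-chunk}.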
\begin{figure}[t]
\footnotesize{
$$
\arraycolsep=3pt
\begin{array}{l|l}
\begin{array}{ll}
\multicolumn{2}{l}{ \mathit{\Pi}
= \mathit{\Pi}_{
\{a,b,c,d\},\emptyset,
\mathit{chunk}
}
= \left\{
\begin{array}{l}
a \leftarrow \logicalnot b\\
b \leftarrow \logicalnot a\\
c \leftarrow a\\
c \leftarrow b\\
d \leftarrow c
\end{array}\right\} }\\
\\
\\
\multicolumn{2}{l}{
\mathit{\Pi}_{
\{a,c,d\},\emptyset,
\chunkapproxaction{\{c,d\}}
}
= \mathit{\Pi}\cup\{
\leftarrow c, d\ \}}
\vspace{1cm}
\\
\\
\multicolumn{2}{l}{
\mathit{\Pi}_{
\{a,c,d\},\{c,d\},
\chunkapproxaction{\{a\}}
}
= \mathit{\Pi}\cup\{
\leftarrow a
\}}
\\
\\
\\
\end{array}
&
\begin{array}{ll}
& \corestate
{\emptyset}
{\{a,b,c,d\}}{\emptyset}
{\mathit{chunk}}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{ac{\logicalnot b}d}
{\{a,b,c,d\}}{\emptyset}
{\mathit{chunk}}{\top}
\\
\mathit{Find}:
&
\controlstate
{\mathit{Cont}}
{\decision{a}c{\logicalnot b}d}
{\{a,c,d\}}{\emptyset}
{\mathit{chunk}}{\top}
\\
\\
\mathit{Chunk}:
&
\corestate
{\emptyset}
{\{a,c,d\}}{\emptyset}
{\chunkapproxaction{\{c,d\}}}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{\neg{a}bcd\neg{d}}
{\{a,c,d\}}{\emptyset}
{\chunkapproxaction{\{c,d\}}}{\top}
\\
\mathit{Fail}_{\chunkapproxaction{}}:
&
\controlstate
{\mathit{Cont}}
{\neg{a}bcd\neg{d}}
{\{a,c,d\}}{\{c,d\}}
{\chunkapproxaction{\{c,d\}}}{\top}
\\
\\
\mathit{Chunk}:
&
\corestate
{\emptyset}
{\{a,c,d\}}{\{c,d\}}
{\chunkapproxaction{\{a\}}}{\top}
\\
\mathit{Oracle}\mbox{ :}
&
\corestate
{\neg{a}bcd}
{\{a,c,d\}}{\{c,d\}}
{\chunkapproxaction{\{a\}}}{\top}
\\
\mathit{Find}:
&
\controlstate
{\mathit{Cont}}
{\neg{a}bcd}
{\{c,d\}}{\{c,d\}}
{\chunkapproxaction{\{a\}}}{\top}
\\
\mathit{Terminal}:
&
\terminalstate{\{c,d\}}
\\
\\
\end{array}
\end{array}
$$
}
\caption{A path in $\chunkstablegraph{\mathit{\Pi}}$.}
\label{fig:example-chunk}
\end{figure}
In order to design a meaningful example of chunking, we slightly modify our running example by adding the rule $d \leftarrow c$. Figure~\ref{fig:example-chunk} shows a possible path in $\mathit{CS}_\mathit{\Pi}$ for the newly defined program.
\subsection{Designing New Abstract Solvers}
The composition of the techniques described in Sections~\ref{sec:cm} and~\ref{sec:ch} can be
readily applied to computing cautious consequences of a program, yet
it is not implemented in any solver. This highlights another important feature
of the abstract solvers methodology, i.e., its capability to design new solutions by combining techniques implemented in different solvers.
More generally, it is possible to mix the under-approximation, over-approximation, and chunking
techniques, and to apply them to computing
either cautious consequences or backbones. We next state a general
theorem that subsumes all the techniques previously described, showing a strong analogy between the algorithms for computing cautious consequences and those for backbones.
\begin{theorem}~\label{prop:general}
For any program $\mathit{\Pi}$, and for any set
$S\subseteq\{\mathit{un},\mathit{ov},\mathit{ch}\}$ such that
$S\neq\emptyset$,
the graph
$(\statesof{\atoms{\mathit{\Pi}}},$
$\{\mathit{Oracle_{ASP}}\}
\cup\bigcup_{x\in S}x
)$
solves cautious reasoning, and
the graph
$(\statesof{\atoms{\mathit{\Pi}}},$
$\{\mathit{Oracle_{SAT}}\}
\cup\bigcup_{x\in S}x
)$
solves backbone computation, where $\mathit{Oracle_{ASP}}$ and $\mathit{Oracle_{SAT}}$ represent an oracle call to an ASP solver and to a SAT solver, respectively.
\end{theorem}
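To make the analogy with backbones concrete, the same over-approximation loop can be run against a SAT oracle. The sketch below uses a brute-force model enumerator as the oracle and our own encoding of CNF formulas as sets of integer literals; it illustrates the correspondence stated in the theorem, not the implementation of any backbone extractor.

```python
from itertools import product

def models(clauses, n):
    """All models of a CNF over variables 1..n; a clause is a set of
    integers, with -v standing for the negation of variable v."""
    for bits in product([False, True], repeat=n):
        A = {i + 1: b for i, b in enumerate(bits)}
        if all(any(A[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield {v if b else -v for v, b in A.items()}

def backbone(clauses, n):
    """Backbone literals, computed by shrinking an over-approximation:
    start from the literals of one model and intersect with every
    further model, exactly as O is shrunk in the ASP setting."""
    ms = list(models(clauses, n))
    if not ms:
        return None                  # unsatisfiable formula: no backbone
    B = set(ms[0])                   # over-approximation of the backbone
    for M in ms[1:]:
        B &= M                       # each further model shrinks B
        if not B:
            break
    return B
```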
\subsection{Core-based Methods}
We now model core-based algorithms from~\cite{JanotaLM15} in terms of abstract solvers, in particular Algorithm 6, and apply it to the computation of cautious consequences of ASP programs.
First, note that
$\Pi_{O,U,\mathit{core}_{N}}$
is
$\Pi\cup\{\leftarrow\overline{l}\mid l\in N\}$,
and $\emptyset_{atoms(\Pi),\emptyset,\mathit{core}_{\overline{atoms(\Pi)}}}$ is the initial state.
Moreover, given a logic program $\Pi$, we say that a set $C\subseteq lit(atoms(\Pi))$ is \textit{a core} of $\Pi$, if $\Pi \cup \{\leftarrow \overline{l} | l\in C\}$ is incoherent.
It is important to emphasize here that this definition is in line with the one proposed by \citeN{AlvianoDJMP18}. In particular, unsatisfiable cores have two important properties:
\begin{itemize}
\item if $C$ is an unsatisfiable core of $\Pi$ then all of its supersets are also unsatisfiable cores of $\Pi$;
\item an atom $p \in atoms(\Pi)$ is a cautious consequence of $\Pi$ if and only if $\{\neg p\}$ is an unsatisfiable core (Proposition 4.1 of \cite{AlvianoDJMP18}).
\end{itemize}
Moreover, in general unsatisfiable cores are not guaranteed to be minimal, although several strategies can be used to obtain a minimal unsatisfiable core~\cite{LynceM04,DBLP:journals/tplp/AlvianoD16,AlvianoDJMP18}.
\begin{example}
Consider the program $\Pi$ of Example~\ref{ex:cautious} and let $N=\{\lnot a,\lnot b,\lnot c\}$. Then, $\{\lnot c\}$, $\{\lnot a,\lnot c\}$, $\{\lnot b,\lnot c\}$, $\{\lnot a,\lnot b\}$, and $\{\lnot a,\lnot b,\lnot c\}$ are all cores of $\Pi$.
\end{example}
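The definition and the two properties above can be checked mechanically on the running example. In the sketch below a core is represented, for brevity, by the set of atoms of its (negative) literals, and incoherence of $\Pi\cup\{\leftarrow a \mid a\in C\}$ is tested on a precomputed list of the answer sets of $\Pi$; this is an illustration of the definition, not a core extractor.

```python
def is_core(all_answer_sets, C):
    """C, read as the set of assumptions {not a : a in C}, is an
    unsatisfiable core iff Pi with the constraints <- a for each a in C
    is incoherent, i.e. iff every answer set of Pi intersects C."""
    return all(M & C for M in all_answer_sets)

# answer sets of the running example of Example ex:cautious
AS = [{'a', 'c'}, {'b', 'c'}]
```

For instance, `is_core(AS, {'c'})` holds, every superset of `{'c'}` is again a core, and the cautious consequences are exactly the atoms whose singleton negation is a core, in accordance with Proposition 4.1 of \cite{AlvianoDJMP18}.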
First, we consider a transition rule, called $\mathit{CoreOracle}$, which starts from a state $\emptyset_{O,U,\mathit{core}_N}$ and reaches a state $L'_{O,U,\mathit{core}_N}$. In symbols:
$$
\begin{array}{llll}
\mathit{CoreOracle}
& L_{O,U,\mathit{core}_N}
& \Longrightarrow \
L'_{O,U,\mathit{core}_N}
& \textrm{if}\left\{\ L =\emptyset \right.
\end{array}
$$
The $\mathit{CoreOracle}$ rule represents an oracle call that computes a set of literals $L'$: whenever $\mathit{\Pi}_{O,U,\mathit{core}_{N}}$ is incoherent, $L'$ is
an inconsistent set of literals such that the set $\widehat{L'}=\{\neg a\mid \{a,\neg a\}\subseteq L'\}$ is
a core of $\Pi_{O,U,\mathit{core}_N}$ and a subset of $N$; otherwise, $L'$ is an answer set of $\Pi_{O,U,\mathit{core}_N}$.
Then, we define $\mathit{in}$ as the set of rules of
Figure~\ref{fig:trcontrolrules}.
Therefore, we consider a graph
$\mathit{FS}_\mathit{\Pi}=
(V_\atoms{\mathit{\Pi}},
\{\mathit{CoreOracle}\}
\cup\mathit{in})$
which represents Algorithm 6 in~\cite{JanotaLM15}.
Here, we need to introduce two intermediate types of control states, $\mathit{Pre}_{N}$ and $\mathit{Eval}$. In particular, $\mathit{Pre}_{N}$
is reached in case of inconsistency,
where $N$ is the set of literals that may be used for the
potential upcoming $\mathit{core}$ action, while $\mathit{Eval}$ is reached in case of consistency.
From an outermost state of type $\mathit{Eval}$, a
new $\mathit{core}$ action is started with $\mathit{NewSet}$ whenever there is a gap between over- and under-approximation; otherwise, the $\mathit{Final}$ control rule leads to the terminal state.
$\mathit{Fail}^1_{\prelaction{}}$ and $\mathit{Fail}^2_{\prelaction{}}$ lead to the intermediate type of control state, $\mathit{Pre}_N$, which can either restart a
$\mathit{core}$ action with $\mathit{Continue}$, or proceed with the $\mathit{Main}$ rule. Figure~\ref{fig:example-core} shows a possible path in $\mathit{FS}_\Pi$ for the program $\Pi$ of Example~\ref{ex:cautious}.
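One pass of the core-based strategy $\mathit{FS}_\Pi$ admits the following compact procedural sketch. The assumption-based oracle is again simulated on a precomputed list of answer sets, cores are represented by the atoms of their negative literals, and core extraction is simplified to preferring a singleton core when one exists (real solvers extract cores from the failed search instead); per the theorem below, a pass may end in a $\mathit{Cont}$ state carrying only bounds.

```python
def core_pass(all_answer_sets, atoms):
    """One run of FS: returns ('Ok', O) with the exact cautious
    consequences, or ('Cont', O, U) with bounds U <= cautious(Pi) <= O."""
    O, U = set(atoms), set()
    N = set(O)                       # assumptions: "not a" for every candidate a
    while True:
        # CoreOracle: look for an answer set satisfying all assumptions
        M = next((M for M in all_answer_sets if not (M & N)), None)
        if M is not None:
            O &= M                   # Find_pre, then Eval(O & L, U)
            if O == U:
                return ('Ok', O)     # Final
            N = set(O - U)           # NewSet
            continue
        # incoherent: extract a core C within N; here we simply prefer a
        # singleton core (an atom true in every answer set) if one exists
        C = next(({x} for x in sorted(N)
                  if all(x in M for M in all_answer_sets)), set(N))
        if len(C) == 1:
            U |= C                   # Fail^1_pre: the core atom is cautious
        N -= C                       # Fail^1_pre / Fail^2_pre
        if not N:
            return ('Cont', O, U)    # Main, then Cont
```

On the running example the pass ends in $\mathit{Cont}(\{a,b,c\},\{c\})$, exactly as in Figure~\ref{fig:example-core}.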
\begin{figure}[t] \footnotesize{
$$
\arraycolsep=3pt
\begin{array}{llll}
\multicolumn{4}{l}{\textrm{Return rules}}\\
\mathit{Fail}^1_{pre}
& L_{O,U,\mathit{core}_N}
& \Longrightarrow
\mathit{Pre}_{N\setminus \{l\}}(O, U\cup\{\bar{l}\})
& \textrm{if}\left\{
\begin{array}{l}
L \mbox{ is inconsistent and } \widehat{L}\cap N =\{l\}
\end{array}
\right.\\
\mathit{Fail}^2_{pre}
& L_{O,U,\mathit{core}_N}
& \Longrightarrow
\mathit{Pre}_{N\setminus \widehat{L}}(O, U)
& \textrm{if}\left\{
\begin{array}{l}
L \mbox{ is inconsistent and } |\widehat{L}\cap N| > 1
\end{array}
\right.\\
\mathit{Find}_{\prelaction{}}
& L_{O,U,\mathit{core}_N}
& \Longrightarrow
\mathit{Eval}(O\cap L,U)
& \textrm{if}\left\{
\begin{array}{l}
\textrm{} L \mbox{ is consistent and } L\neq\emptyset
\end{array}
\right.\\
\\
\multicolumn{4}{l}{\textrm{Control rules}}\\
Main
& \mathit{Pre}_N(O,U)
& \Longrightarrow
\mathit{Cont}(O,U)
& \textrm{if}\left\{
\begin{array}{l}
N=\emptyset
\end{array}
\right.\\
Continue
& \mathit{Pre}_N(O,U)
& \Longrightarrow
\emptyset_{O,U,\mathit{core}_N}
& \textrm{if}\left\{
\begin{array}{l}
\textrm{} N\neq \emptyset
\end{array}
\right.\\
\mathit{NewSet}
& \mathit{Eval}(O,U)
& \Longrightarrow
\emptyset_{O,U,\mathit{core}_{\overline{O}} }
& \textrm{if}\left\{
\begin{array}{l}
O\neq U
\end{array}
\right.\\
\mathit{Final}
& \mathit{Eval}(O,U)
& \Longrightarrow
\mathit{Ok} (O)
& \textrm{if}\left\{
\begin{array}{l}
O= U
\end{array}
\right.\\
\end{array}
$$
}
\caption{The transition rules of $\mathit{in}$.}
\label{fig:trcontrolrules}
\end{figure}
\normalsize
\begin{theorem}
Let $\Pi$ be a program, and let $O$ and $U$ be two sets of atoms. Then,
$(i)$ the only reachable terminal states are either $Cont(O,U)$ or $Ok(O)$;
$(ii)$ if $Ok(O)$ is reachable in $FS_\Pi$,
then $FS_\Pi$ solves cautious reasoning;
$(iii)$ if $Cont(O,U)$ is reachable in $FS_\Pi$,
then $U\subseteq \mathit{cautious}(\Pi)\subseteq O$.
\end{theorem}
\begin{figure}[t]
\footnotesize{
$$
\arraycolsep=2pt
\begin{array}{l|l}
\begin{array}{ll}
\multicolumn{2}{l}{
\Pi_{
\{a,b,c\},\emptyset,
\mathit{core}_{\{\lnot a,\lnot b,\lnot c\}}}
= \left\{
\begin{array}{l}
a \leftarrow \logicalnot b\\
b \leftarrow \logicalnot a\\
c \leftarrow a\\
c \leftarrow b
\end{array}\right\}
\cup
\left\{
\begin{array}{l}
\leftarrow a\\
\leftarrow b\\
\leftarrow c
\end{array}\right\} }\\
\\
\\
\multicolumn{2}{l}{
\Pi_{
\{a,b,c\},\{c\},
\mathit{core}_{\{\lnot a,\lnot b\}}}
= \left\{
\begin{array}{l}
a \leftarrow \logicalnot b\\
b \leftarrow \logicalnot a\\
c \leftarrow a\\
c \leftarrow b
\end{array}\right\}
\cup
\left\{
\begin{array}{l}
\leftarrow a\\
\leftarrow b\\
\end{array}\right\}} \\
\end{array}
&
\begin{array}{ll}
& \emptyset_{\{a,b,c\},\emptyset,\mathit{core}_{\{\lnot a,\lnot b,\lnot c\}}}
\\
\mathit{CoreOracle}\mbox{ :}
&
c\lnot c_{\{a,b,c\},\emptyset,\mathit{core}_{\{\lnot a,\lnot b,\lnot c\}}}
\\
\mathit{Fail}^1_{pre}\mbox{ :}
&
\mathit{Pre}_{\{\lnot a,\lnot b\}}(\{a,b,c\},\{c\})
\\
\mathit{Continue}\mbox{ :}
&
\emptyset_{\{a,b,c\},\{c\},
\mathit{core}_{\{\lnot a,\lnot b\}}}
\\
\\
\mathit{CoreOracle}\mbox{ :}
&
ab\lnot a\lnot b_{\{a,b,c\},\{c\},
\mathit{core}_{\{\lnot a,\lnot b\}}}
\\
\mathit{Fail}^2_{pre}\mbox{ :}
&
\mathit{Pre}_\emptyset(\{a,b,c\},\{c\})
\\
\mathit{Main}\mbox{ :}
&
\mathit{Cont}(\{a,b,c\},\{c\})
\end{array}
\end{array}
$$
}
\caption{A path in $\mathit{FS}_\Pi$.}
\label{fig:example-core}
\end{figure}
Chunking and core-based methods can be combined using our methodology to abstract Algorithm 7 from~\cite{JanotaLM15}. Such a combination will be employed in the experiments.
\section{Experimental Analysis}
\label{sec:exp}
The abstract solvers reported in this paper have been used for implementing several algorithms in the ASP solver \textsc{wasp} \cite{alv15,DBLP:conf/lpnmr/AlvianoADLMR19}, resulting in the following new versions of \textsc{wasp}:
\begin{itemize}
\item \textsc{wasp-chunk-2}, i.e., \textsc{wasp} running the algorithm based on chunking, with the size of the chunk set to 2;
\item \textsc{wasp-chunk-20\%}, i.e., \textsc{wasp} running the algorithm based on chunking, with the size of the chunk set to the 20\% of the initial number of candidates, where the initial set of candidates is the whole set of atoms;
\item \textsc{wasp-cb}, i.e., \textsc{wasp} running the algorithm based on cores;
\item \textsc{wasp-cb-2}, i.e., \textsc{wasp} running the algorithm based on cores and chunking, with the size of the chunk set to 2;
\item \textsc{wasp-cb-20\%}, i.e., \textsc{wasp} running the algorithm based on cores and chunking, with the size of the chunk set to the 20\% of the initial number of candidates.
\end{itemize}
\paragraph{Benchmark selection.}
The performance of these versions of \textsc{wasp} was measured on the benchmarks considered in~\cite{AlvianoDJMP18}.
In particular, \cite{AlvianoDJMP18} includes \textit{(i)}
all the 193 instances from the latest ASP Competitions \cite{DBLP:journals/tplp/CalimeriIR14,DBLP:journals/ai/CalimeriGMR16,GebserMR17} involving non-ground queries;
\textit{(ii)} 115 instances of ASP Competitions classified as \textit{easy}, that is, those for which a stable model is found within 20 seconds of computation by mainstream ASP systems; and
\textit{(iii)} instances from abstract argumentation frameworks submitted to the 2nd International Competition on Computational Models of Argumentation.
In this paper, instances from \textit{(iii)} are not included since they are trivial for all tested solvers \cite{AlvianoDJMP18}.
\paragraph{Compared approaches.}
As a reference to the state of the art, we used \textsc{clasp} v. 3.3.3 \cite{GebserKS12}, which implements algorithm $\textsc{or}$ (i.e., \textit{over-approximation}), and the best performing algorithms implemented by \textsc{wasp} \cite{AlvianoDR14,AlvianoDJMP18}, namely $\textsc{or}$ (i.e., \textit{over-approximation}), $\textsc{ict}$ (i.e., \textit{under-approximation}), $\textsc{opt},$ and $\textsc{cm}$.
Algorithm $\textsc{opt}$ was presented in~\cite{AlvianoDJMP18}. The idea is as follows. Given a set of objective atoms $A$, the branching heuristic of the solver is forced to select $\neg p$ for $p \in A$, before any other unassigned literal.
In this way, the search is driven to falsify as many atoms in $A$ as possible.
When all atoms in $A$ are assigned, standard answer set search procedure is applied without further modifications to the branching heuristic.
Therefore, whenever an answer set is found, it is guaranteed to be minimal with respect to the set of objective atoms \cite{DBLP:journals/constraints/RosaGM10}.
When the current assignment to atoms in $A$ cannot be extended to an answer set, then the assignment of some atom in $A$ is flipped, and hence the procedure is repeated with a different assignment for the objective atoms.
For cautious reasoning, $A$ is initialized to the set of all candidates and updated whenever an answer set is found.
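The idea behind $\textsc{opt}$ can be sketched as follows; this is our illustration, not \textsc{wasp}'s implementation. The effect of the biased branching heuristic is modelled by picking an answer set with as few objective atoms as possible (cardinality minimality is used as a simple stand-in for the subset minimality the heuristic actually guarantees), and the oracle is simulated on a precomputed list of answer sets.

```python
def cautious_opt(all_answer_sets, atoms):
    """Sketch of opt: repeatedly look for an answer set falsifying as
    many remaining candidates as possible; the fixpoint A is exactly
    the set of cautious consequences."""
    if not all_answer_sets:
        return set(atoms)            # incoherent program: all atoms are cautious
    A = set(atoms)                   # objective atoms = remaining candidates
    while True:
        # what the biased heuristic yields: an answer set minimal w.r.t. A
        M = min(all_answer_sets, key=lambda M: len(M & A))
        if A <= M:                   # no answer set falsifies any candidate
            return A
        A &= M                       # update candidates with the answer set
```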
Algorithm $\textsc{cm}$ was also presented in~\cite{AlvianoDJMP18} and is based on the property that an atom is a cautious consequence of a given program if and only if the negation of the atom is an unsatisfiable core.
Hence, the algorithm searches for an answer set falsifying all candidates, with the aim of eliminating all remaining candidates at once. As soon as
no such answer set exists, the returned unsatisfiable core is either minimized to a singleton or used to discard candidates.
Note that all tested algorithms take advantage of the incremental interface of \textsc{clasp} and \textsc{wasp}, which is based on the concept of \textit{assumption literals}. The incremental interface allows the solver to reuse part of the computation among different calls, e.g., learned constraints and heuristic parameters.
The \textsc{dlv} solver is not considered here, since its performance on cautious reasoning was shown to be dominated by
the other approaches in earlier work~\cite{AlvianoDR14}.
\paragraph{Hardware configurations and limits.}
The experiments were run on computing nodes with Intel Xeon 2.4-GHz processors and 16 GB of memory.
Time and memory limits were set to 600 seconds and 15 GB,
respectively.
\begin{figure}[t]
\figrule
\begin{tikzpicture}[scale=0.85]
\pgfkeys{%
/pgf/number format/set thousands separator = {}}
\begin{axis}[
scale only axis
, font=\normalsize
, x label style = {at={(axis description cs:0.5,0.0)}}
, y label style = {at={(axis description cs:0.0,0.5)}}
, xlabel={Number of solved instances}
, ylabel={Per-instance time limit (s)}
, xmin=15, xmax=200
, ymin=0, ymax=620
, legend style={at={(0.16,0.96)},anchor=north, draw=none,fill=none}
, legend columns=1
, width=1\textwidth
, height=0.4\textwidth
, ytick={0,150,300,450,600}
, major tick length=2pt
]
\addplot [mark size=3pt, color=green, mark=star] [unbounded coords=jump] table[col sep=semicolon, y index=1] {./qa.csv};
\addlegendentry{\textsc{clasp}}
\addplot [mark size=3pt, color=blue, mark=x] [unbounded coords=jump] table[col sep=semicolon, y index=7] {./qa.csv};
\addlegendentry{\textsc{wasp-or}}
\addplot [mark size=3pt, color=blue, mark=square] [unbounded coords=jump] table[col sep=semicolon, y index=8] {./qa.csv};
\addlegendentry{\textsc{wasp-ict}}
\addplot [mark size=3pt, color=blue, mark=square*] [unbounded coords=jump] table[col sep=semicolon, y index=9] {./qa.csv};
\addlegendentry{\textsc{wasp-cm}}
\addplot [mark size=3pt, color=blue, mark=triangle*] [unbounded coords=jump] table[col sep=semicolon, y index=10] {./qa.csv};
\addlegendentry{\textsc{wasp-opt}}
\addplot [mark size=3pt, color=black, mark=o] [unbounded coords=jump] table[col sep=semicolon, y index=5] {./qa.csv};
\addlegendentry{\textsc{wasp-chunk-20\%}}
\addplot [mark size=3pt, color=black, mark=*] [unbounded coords=jump] table[col sep=semicolon, y index=6] {./qa.csv};
\addlegendentry{\textsc{wasp-chunk-2}}
\addplot [mark size=3pt, color=red, mark=triangle] [unbounded coords=jump] table[col sep=semicolon, y index=2] {./qa.csv};
\addlegendentry{\textsc{wasp-cb}}
\addplot [mark size=3pt, color=red, mark=o] [unbounded coords=jump] table[col sep=semicolon, y index=3] {./qa.csv};
\addlegendentry{\textsc{wasp-cb-20\%}}
\addplot [mark size=3pt, color=red, mark=*] [unbounded coords=jump] table[col sep=semicolon, y index=4] {./qa.csv};
\addlegendentry{\textsc{wasp-cb-2}}
\end{axis}
\end{tikzpicture}
\caption{Benchmark \textit{(i)}: Performance comparison on non-ground queries in ASP Competitions.}\label{fig:cactusi}
\figrule
\end{figure}
\subsection{Results}
Concerning benchmark \textit{(i)}, results are shown in the cactus plot of Figure~\ref{fig:cactusi}, where for each algorithm the number of solved instances in a given time is reported, producing an aggregated view of its overall performance.
As a first observation, \textsc{wasp} cannot match the performance of \textsc{clasp} on the execution of algorithm \textsc{or}: indeed, \textsc{clasp} solved 41 more instances than \textsc{wasp-or}.
However, this huge gap is completely filled by \textsc{wasp-cb-20\%}, which actually solves 13 more instances than \textsc{clasp}.
Indeed, \textsc{wasp-cb-20\%} is able to solve all instances, with an average running time of 56 seconds, and is comparable to the best performing algorithm, namely \textsc{wasp-opt}, which solves all instances with an average running time of 36 seconds.
Notably, even a small chunk size may have a huge impact on the performance of the algorithms: indeed, \textsc{wasp-cb} outperforms \textsc{wasp-cb-2}, solving 13 more instances.
Finally, we observe that \textsc{wasp-chunk-20\%} and \textsc{wasp-chunk-2} are not competitive with algorithms based on cores.
Concerning benchmark \textit{(ii)}, results are shown in the cactus plot of Figure~\ref{fig:cactusii}.
It is possible to observe that \textsc{clasp} is the best performing solver on this benchmark, solving 53 instances overall.
If we focus on \textsc{wasp}, the best performance is obtained by \textsc{wasp-chunk-2}, \textsc{wasp-or}, \textsc{wasp-cm}, and \textsc{wasp-chunk-20\%} which are able to solve 41, 41, 41, and 40 instances, respectively.
Moreover, \textsc{wasp-cb} cannot reach the same performance on this benchmark, solving only 25 instances.
We observe that the poor performance is due to the first oracle calls, which are expensive in terms of solving time.
This negative effect is mitigated by chunking since \textsc{wasp-cb-20\%} and \textsc{wasp-cb-2} solve 37 and 39 instances, respectively.
Finally, detailed results of benchmarks \textit{(i)} and \textit{(ii)} are shown in Table~\ref{tab:queryanswering}, where we report the 5 algorithms solving the largest number of instances. In particular, for each algorithm we report the number of solved instances and the cumulative solving time (for each timeout we added 600 seconds).
We also observe that \textsc{wasp-cb-20\%} is comparable with \textsc{clasp}, solving only 3 fewer instances.
\begin{figure}[t]
\figrule
\begin{tikzpicture}[scale=0.85]
\pgfkeys{%
/pgf/number format/set thousands separator = {}}
\begin{axis}[
scale only axis
, font=\normalsize
, x label style = {at={(axis description cs:0.5,0.0)}}
, y label style = {at={(axis description cs:0.0,0.5)}}
, xlabel={Number of solved instances}
, ylabel={Per-instance time limit (s)}
, xmin=15, xmax=55
, ymin=0, ymax=620
, legend style={at={(0.16,0.96)},anchor=north, draw=none,fill=none}
, legend columns=1
, width=1\textwidth
, height=0.4\textwidth
, ytick={0,150,300,450,600}
, major tick length=2pt
]
\addplot [mark size=3pt, color=green, mark=star] [unbounded coords=jump] table[col sep=semicolon, y index=1] {./easy.csv};
\addlegendentry{\textsc{clasp}}
\addplot [mark size=3pt, color=blue, mark=x] [unbounded coords=jump] table[col sep=semicolon, y index=7] {./easy.csv};
\addlegendentry{\textsc{wasp-or}}
\addplot [mark size=3pt, color=blue, mark=square] [unbounded coords=jump] table[col sep=semicolon, y index=8] {./easy.csv};
\addlegendentry{\textsc{wasp-ict}}
\addplot [mark size=3pt, color=blue, mark=square*] [unbounded coords=jump] table[col sep=semicolon, y index=9] {./easy.csv};
\addlegendentry{\textsc{wasp-cm}}
\addplot [mark size=3pt, color=blue, mark=triangle*] [unbounded coords=jump] table[col sep=semicolon, y index=10] {./easy.csv};
\addlegendentry{\textsc{wasp-opt}}
\addplot [mark size=3pt, color=black, mark=o] [unbounded coords=jump] table[col sep=semicolon, y index=5] {./easy.csv};
\addlegendentry{\textsc{wasp-chunk-20\%}}
\addplot [mark size=3pt, color=black, mark=*] [unbounded coords=jump] table[col sep=semicolon, y index=6] {./easy.csv};
\addlegendentry{\textsc{wasp-chunk-2}}
\addplot [mark size=3pt, color=red, mark=triangle] [unbounded coords=jump] table[col sep=semicolon, y index=2] {./easy.csv};
\addlegendentry{\textsc{wasp-cb}}
\addplot [mark size=3pt, color=red, mark=o] [unbounded coords=jump] table[col sep=semicolon, y index=3] {./easy.csv};
\addlegendentry{\textsc{wasp-cb-20\%}}
\addplot [mark size=3pt, color=red, mark=*] [unbounded coords=jump] table[col sep=semicolon, y index=4] {./easy.csv};
\addlegendentry{\textsc{wasp-cb-2}}
\end{axis}
\end{tikzpicture}
\caption{Benchmark \textit{(ii)}: Performance comparison on computation of cautious consequences for \emph{easy} instances of ASP Competitions.}\label{fig:cactusii}
\figrule
\end{figure}
\begin{table}[b!]
\caption{
Numbers of solved instances and cumulative running time (in seconds; each timeout adds 600 seconds) on instances from benchmarks \textit{(i)} and \textit{(ii)}.
}
\label{tab:queryanswering}
\centering
\tabcolsep=0.040cm
\begin{tabular}{lrrrrrrrrrrrrrrrr}
\toprule
& && \multicolumn{2}{c}{\textbf{\textsc{clasp}}} && \multicolumn{2}{c}{\textbf{\textsc{wasp-cm}}} && \multicolumn{2}{c}{\textbf{\textsc{wasp-opt}}} && \multicolumn{2}{c}{\textbf{\textsc{wasp-cb}}} && \multicolumn{2}{c}{\textbf{\textsc{wasp-cb-20\%}}} \\
\cmidrule{4-5} \cmidrule{7-8} \cmidrule{10-11} \cmidrule{13-14} \cmidrule{16-17}
\textbf{Benchmark} & \textbf{\#} && \textbf{sol.} & \textbf{sum t} && \textbf{sol.} & \textbf{sum t} && \textbf{sol.} & \textbf{sum t} && \textbf{sol.} & \textbf{sum t} && \textbf{sol.} & \textbf{sum t}\\
\cmidrule{1-17}
CQA-Q3 & 40 && 40 & 4354 && 40 & 1313 && 40 & 1276 && 40 & 1291 && 40 & 1303\\
CQA-Q6 & 40 && 40 & 8505 && 40 & 2149 && 40 & 1956 && 40 & 3544 && 40 & 1849\\
CQA-Q7 & 40 && 40 & 8929 && 40 & 1741 && 40 & 1681 && 40 & 1735 && 40 & 1724\\
MCSQ & 73 && 60 & 12701 && 65 & 11007 && 73 & 1995 && 60 & 15757 && 73 & 5924\\
\cmidrule{1-17}
GracefulGraphs & 1 && 1 & 51 && 1 & 45 && 1 & 32 && 1 & 44 && 1 & 57\\
GraphCol & 1 && 0 & 600 && 0 & 600 && 0 & 600 && 0 & 600 && 0 & 600\\
IncrSched & 6 && 5 & 857 && 2 & 2692 && 1 & 3016 && 1 & 3006 && 1 & 3004\\
KnightTour & 2 && 2 & 62 && 0 & 1200 && 0 & 1200 && 0 & 1200 && 0 & 1200\\
Labyrinth & 32 && 6 & 18377 && 0 & 19200 && 0 & 19200 && 0 & 19200 && 1 & 18912\\
NoMystery & 2 && 1 & 1091 && 1 & 694 && 0 & 1200 && 1 & 706 && 1 & 721\\
PPM & 15 && 15 & 264 && 15 & 81 && 15 & 76 && 15 & 113 && 15 & 76\\
QualSpatReas & 18 && 18 & 1019 && 17 & 4537 && 7 & 7406 && 7 & 7083 && 14 & 5707\\
Sokoban & 36 && 3 & 20529 && 3 & 20665 && 1 & 21102 && 1 & 21023 && 2 & 20918\\
VisitAll & 2 && 2 & 80 && 2 & 408 && 1 & 757 && 2 & 348 && 2 & 396\\
\cmidrule{1-17}
\textbf{Total} & \textbf{308} && \textbf{233} & \textbf{78584} && \textbf{226} & \textbf{66931} && \textbf{219} & \textbf{62097} && \textbf{208} & \textbf{76252} && \textbf{230} & \textbf{62993}\\
\bottomrule
\end{tabular}
\end{table}
\section{Related Work}
\label{sec:related}
The abstract solvers methodology for describing solving procedures was introduced for the {{\sc dpll}\xspace} procedure with learning in SAT solving, and for certain extensions implemented in SMT solvers \cite{nie06}. In ASP, \citeN{lier08} introduced and compared abstract solvers for {\sc smodels} and {{\sc cmodels}\xspace} on non-disjunctive programs; the framework was then extended in~\cite{lier10} by introducing transition rules that capture backjumping and learning techniques.
\citeN{lier11} presented a unifying perspective based on completion of solvers for non-disjunctive answer set solving.
\citeN{blm14} presented abstract solvers for disjunctive answer set solvers {{\sc cmodels}\xspace}, {{\sc gnt}\xspace} and {{\sc dlv}\xspace} implementing plain backtracking, and~\citeN{lie14} defined abstract frameworks for Constraint ASP solvers.
All these papers describe, in the abstract solvers methodology, ASP procedures for computing a single stable model.
In this paper we have, instead, focused on describing ASP procedures for cautious reasoning tasks, possibly employing some of the solutions presented in related papers as ASP oracle calls.
Our paper significantly extends the short technical communication \cite{bro15b} by $(i)$ designing more advanced solving techniques, like chunking and core-based algorithms, that lead to new solving solutions, $(ii)$ implementing and testing such new solutions, $(iii)$ adding further examples and a detailed related work, and $(iv)$ formally stating a strong analogy between backbones computation in SAT and cautious reasoning in ASP.
Concerning applications of the abstract solvers methodology outside ASP, the first has already been mentioned: the seminal paper~\cite{nie06} considers SMT problems with certain logics via a lazy approach~\cite{Sebastiani07}. Abstract
solvers have since been presented for the satisfiability of Quantified Boolean Formulas by~\citeN{bro15}, and for certain reasoning tasks in Abstract Argumentation under the preferred semantics~\cite{BrocheninLMWW18}. Finally, a number of papers, starting from the concept of modularity in answer set solving developed in~\cite{LierlerT13}, present abstract models of solvers for multi-logic systems~\cite{LierlerT14,LierlerT15,LierlerT16}.
A further, more general, contribution of our paper lies in its practical part, i.e., the implementation of the new solutions designed through abstract solvers. Indeed, while the abstract solvers methodology is nowadays widely used, the results presented in the papers mentioned above have rarely led to implementations. Exceptions are~\cite{nie06}, whose related {\sc Barcelogic} implementation won the SMT Competition 2005 on some logics; \cite{lier10}, where a proposed combination of {\sc smodels} and {\sc cmodels} techniques was implemented in the solver {\sc sup}, which achieved positive results at the ASP Competition 2011; and, more recently,~\cite{BrocheninLMWW18}, where the newly designed solution, obtained as a modification of the {\sc cegartix} solver, often performed better on the preferred semantics than the basic {\sc cegartix} solver, which was among the best solvers in the first ICCMA competition.
Finally, improved algorithms for computing cautious consequences of ASP programs have very recently been presented in~\cite{AlvianoDJMP18}: such algorithms could also be modeled through abstract solvers and combined with those presented in this paper.
\section{Conclusion}
\label{sec:concl}
In this paper we modeled through abstract solvers advanced techniques for solving cautious reasoning tasks in ASP. Such advanced techniques have been borrowed from the computation of backbones of propositional formulas. We have then designed new solving procedures and implemented them in {\sc wasp}, which already included the algorithms of~\cite{AlvianoDR14,AlvianoDJMP18}. Experiments on dedicated benchmarks have shown positive results for the newly proposed solutions. At the same time, our work has formally stated, through a uniform treatment, a strong analogy between the algorithms for computing backbones of propositional formulas and those for computing cautious consequences of ASP programs.
Finally, we remark that the algorithms presented in this paper are independent of the underlying solving strategies, and can be complemented with existing heuristics and optimization techniques~\cite{DBLP:conf/jelia/GiunchigliaMT02,DBLP:conf/cp/GiunchigliaMT03,giu08}.
\bibliographystyle{acmtrans}
\section{Introduction}
The Higgs model of electroweak symmetry (EWS) breaking is less than satisfying
because it offers no understanding of fermion masses and is plagued by a
technical hierarchy problem with respect to the Higgs mass. Technicolor models
\cite{TC} break EWS by the formation of fermion condensates in a strongly
interacting theory patterned after QCD. There are no fundamental scalars and
therefore no Higgs-mass hierarchy problem. It has been proposed that the
fermion (quark and lepton) masses could be generated in technicolor models by
extending the gauge sector so that the fermions and the technifermions are
unified above the EWS breaking scale. In such extended technicolor (ETC)
\cite{ETC} models, the hierarchy of fermion masses is generated by a hierarchy
of breaking scales of the unified gauge group. The problem of the origin of the
fermion masses is replaced by the problem of the origin of the ETC symmetry
breaking scales.
A number of proposals have been made for the origin of the ETC breaking
scales. The ETC symmetries may be broken by including Higgs scalars
\cite{ETChiggs} in appropriate representations of the ETC group. This approach,
however, is usually assumed to be a low energy approximation to an even higher
scale dynamics since it reintroduces the technical hierarchy problem that
technicolor is designed to solve. It has so far not pointed the way to an
understanding of fermion mass.
A more audacious explanation of the ETC breaking is that the ETC group(s) break
themselves by becoming strong at high scales and forming fermion condensates
which are not singlets under the ETC group. This is the tumbling mechanism
\cite{Tumbling}. It is appealing in its economy, but the desired symmetry
breaking patterns require placing the fermions and technifermions in unusual,
non-fundamental representations, chosen to achieve the desired breaking
pattern. Furthermore, tumbling models have so far relied on speculative
most-attractive-channel (MAC) analyses to determine the condensates that form
at each scale.
In this paper, we explore an alternative approach to the ETC symmetry breaking
scales which is purely dynamical (no fundamental scalars and no bare mass terms
in the lagrangian), which puts fermions only in the fundamental representation,
and which employs only QCD-like dynamics. It thus avoids the use of MAC
analyses as well as non-fundamental representations. Instead, the breaking
pattern (the pattern of quark and lepton masses) is arranged here by the choice
of groups into which new fermions are placed and the coupling strengths of
these groups. Time will tell whether this holds the key to a deeper
understanding of the quark and lepton masses.
Dynamical models with fermions in only the fundamental representation of the
gauge groups have also been proposed in refs \cite{Moose1, Moose2}. However,
they generate the light fermion masses by means of couplings to new fermions
with mass terms containing the observed mass structure and were intended to
demonstrate that flavor dynamics could be separated from
EW scale physics in ETC models.
The model presented has one doublet of technifermions and involves Pati-Salam
unification \cite{PS} at high scales. It gives a relatively small contribution
to the electroweak parameter $S$ \cite{PT,Burgess} and gives rise to no
pseudo Goldstone bosons at the technicolor scale. Within this model we are
able to dynamically generate three family-scales, flavor breaking within each
family, a large top mass, and light neutrinos. The dynamics responsible for
these features do not generate flavor changing neutral currents (FCNC).
FCNCs induced by CKM mixing, the origin of which we do not address here, can be
suppressed by small mixing angles or the familiar walking \cite{Walk} and
strong ETC \cite{Gap} solutions to the problem.
The model as presented contains global symmetries above the highest ETC
breaking scale (typically of order 1000TeV) that, when dynamically broken,
generate exactly massless, physical Goldstone bosons. They couple to ordinary
matter through ETC interactions or the standard model (SM) interactions of
their constituent fermions. These interactions are suppressed by the ETC scale
and are not visible in current laboratory experiments. Astrophysical
constraints \cite{redgiant} from stellar lifetimes do, however, rule out light
Goldstones with SM couplings. We anticipate that yet higher scale unifications
than those discussed here may generate masses for these Goldstones which are
above the astrophysical constraints.
ETC models that generate the large top mass tend to give rise to
contributions to the T \cite{PT,Burgess} parameter that are close to the
experimental bound. The T parameter may be reduced in top color assisted
technicolor models \cite{Hill} in which the top mass is generated by a close to
critical top self interaction. We show how an alternative model of top color
may be included simply in our ETC model. Unlike in the original top color
model, the isospin breaking that splits the top and bottom masses is the result
of chiral non-abelian color groups rather than a strong $U(1)$ gauge group.
In Section 2, we describe the basic QCD-like mechanism for breaking gauge
symmetries. We apply this dynamics in the case of a one-doublet model in
Section 3. We discuss both family symmetry breaking leading to different mass
scales for each of the three quark-lepton families, and flavor symmetry
breaking within each family. Phenomenological aspects of the model are also
discussed. In Section 4 we discuss how the model may be extended to include a
variation on top color assisted technicolor. In Section 5, we summarize the
work and present some conclusions.
\newpage
\section{Gauge Symmetry Breaking With QCD-Like Dynamics}
In this section, we describe our breaking mechanism using a simple model in
which an $SU(N)$ gauge group is broken to $i$ gauged subgroups and an $SU(j)$
global symmetry group using purely QCD-like dynamics. The driving force is an
additional $SU(M)$ gauge interaction which becomes strongly interacting at a
scale $\Lambda_M$. The model contains the essential dynamics used to break the
ETC symmetries in the following sections. There, the $SU(N)$ group will be the
ETC group, with quarks, leptons, and technifermions in its fundamental
representation. There will also be particles transforming according to the
fundamental representation of both the $SU(N)$ and $SU(M)$ groups, which will
play an active role in the ETC symmetry breaking. In this section, only the
latter particles will be included
for simplicity.
$\left. \right.$ \hspace{0.3in}\ifig\prtbdiag{}
{\epsfxsize12.8truecm\epsfbox{fig1.eps}} \vspace{-0.85cm}
\begin{center} Figure 1: A model of gauge symmetry breaking. \end{center}
In Fig. 1 we show the model in moose notation \cite{Moose1} with
\begin{equation} n_1 + n_2 +... +n_i + j = N. \end{equation}
A circled number $N$ corresponds to an $SU(N)$ gauge symmetry and directional
lines represent left-handed Weyl fermions that transform according to the
fundamental representation of the gauge groups they connect. A line leaving
(entering) a circle with a number $N$ inside represents a fermion transforming
under the $N$ ($\bar{N}$) representation of that group. Lines labelled by a
number $j$ that is not circled correspond to $j$ copies of the representation
of the gauge group and hence have a global symmetry $SU(j)\otimes U(1)$.
The fermion content of the model pictured in Fig 1 is therefore
\begin{equation} \begin{array}{ccccccc}
& SU(N) & SU(M) & SU(n_1) & SU(n_2) & .... & SU(n_i)\\
&&&&&&\\
a & N & \bar{M} & 0 & 0 & .... & 0\\
&&&&&&\\
b_1& 0 & M & n_1 & 0 & .... & 0\\
&&&&&&\\
b_2 & 0 & M & 0 & n_2 & .... & 0\\
&&&&&&\\
: & : & : & : & : & : & :\\
&&&&&&\\
b_i& 0 & M & 0 & 0 & .... & n_i\\
&&&&&&\\
c & 0 & M & 0 & 0 & .... & 0
\end{array} \end{equation}
\noindent where the index a runs over the $j$ flavors of the c-fermions. This
model is not anomaly free as shown but we shall assume that the additional
degrees of freedom required to make $SU(N)$ and the $SU(n_i)$ gauge groups
anomaly free do not transform under the $SU(M)$, which is anomaly free with the
constraint of Eq. (2.1). The $SU(M)$ will be the only strongly interacting
gauge group at its confinement scale $\Lambda_M$.
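The statement that $SU(M)$ is anomaly free under the constraint of Eq. (2.1) amounts to counting fundamentals against antifundamentals; a minimal sketch of that check, with illustrative group dimensions:

```python
# Counting check: the SU(M) anomaly cancels iff the number of fundamentals (M)
# equals the number of antifundamentals (M-bar). From the fermion content of
# Eq. (2.2): a gives N copies of M-bar; b_1..b_i give n_1+...+n_i copies of M;
# c gives j copies of M. Eq. (2.1), n_1+...+n_i+j = N, makes these counts equal.
def su_m_anomaly_free(n_list, j, N):
    fundamentals = sum(n_list) + j      # from the b_k and c fermions
    antifundamentals = N                # from the a fermions
    return fundamentals == antifundamentals

# Example with i = 3 subgroups: n = (3, 2, 4) and j = 1, so N must equal 10.
print(su_m_anomaly_free([3, 2, 4], 1, 10))  # True
```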
At this scale, the confining $SU(M)$ interaction leads to the formation of
the condensates
\begin{equation} <\bar{a}^{1..n_1} b_1> \neq 0, \hspace{.5cm} <\bar{a}^{n_1+1...n_1+n_2}
b_2> \neq 0, \hspace{.5cm} .... ,\hspace{.5cm} <\bar{a}^{n_1+n_2+...+n_i+1
...N} c> \neq 0. \end{equation}
\noindent With the other gauge interactions neglected, the global symmetry on
the fermions $a$, $b$ and $c$, would be
$SU(N)_L \otimes SU(N)_R$. The condensates break this symmetry in the usual
pattern
\begin{equation} SU(N)_L \otimes SU(N)_R \rightarrow SU(N)_V. \end{equation}
\noindent In the presence of the other gauge interactions, the gauged $SU(N)$
group is therefore broken to
\begin{equation} SU(n_1) \otimes SU(n_2) \otimes.... \otimes SU(n_i), \end{equation}
\noindent where the gauge field and gauge coupling for each group is a linear
combination of the fields and couplings of Fig. 1. We note that all $N^2-1$
Goldstone bosons associated with the broken symmetry are eaten by the $N^2-1$
gauge bosons that acquire a mass (of order $\Lambda_M$).
This symmetry breaking mechanism is of course reminiscent of technicolor
itself. Here, as there, the symmetry breaking is driven by an additional,
strongly coupled gauge interaction, and the breaking pattern is being imposed
by the choice of the $SU(n_i)$ gauge groups. In each case, this is to be
compared with the choice of scalar representation in the Higgs mechanism. For
ETC breaking, it can also be compared with the choice of fermion
representations in tumbling models.
\newpage
\section{One Doublet Technicolor}
As an example of ETC breaking using the above mechanism, we construct an ETC
model with a single doublet of technifermions, $U$ and $D$ \cite{TC,Moose2}
\begin{equation} Q_L = \left(\begin{array}{c} U \\ D \end{array} \right)_L, \hspace{0.2in}
Q_R = \left(\begin{array}{c} U \\ D \end{array} \right)_R .\end{equation}
The quarks and leptons must be unified in a single ETC multiplet with the
technifermion doublet. The simplest realization of this unification is a Pati
Salam \cite{PS} $SU(N+12)$ symmetry where the technicolor group is $SU(N)_{TC}$
and where the SM fermions and technidoublet form the multiplets
\begin{equation} \begin{array}{l}
{\cal U}_R = ( U, t,\nu_{\tau}, c, \nu_{\mu}, u, \nu_e )_R,\\
\\
\Psi_L = \left( \left( \begin{array}{c}U\\D\end{array} \right),
\left( \begin{array}{c}t \\ b\end{array} \right),
\left( \begin{array}{c} \nu_{\tau}\\ \tau\end{array} \right),
\left( \begin{array}{c} c \\ s \end{array} \right),
\left( \begin{array}{c} \nu_{\mu} \\ \mu \end{array} \right),
\left( \begin{array}{c} u \\ d \end{array} \right),
\left( \begin{array}{c} \nu_e \\ e \end{array} \right) \right)_L,\\
\\
{\cal D}_R = ( D, b,\tau, s, \mu, d, e)_R. \end{array} \end{equation}
\subsection{Family Structure}
\subsubsection{A Single Family Model}
To introduce the model we restrict attention to the technidoublet and the third
family quark and leptons only. The ETC group is then $SU(N+4)$. The model is
shown in moose notation in Fig 2.
$\left. \right.$ \hspace{1cm}
\ifig\prtbdiag{}
{\epsfxsize16.8truecm\epsfbox{fig2.eps}} \vspace{-1.85cm}
\begin{center} Figure 2: A one family, one technidoublet ETC model .
\end{center}
The $SU(M)$ gauge groups become strong in the order A and then X (at scales
$\Lambda_A$ and $\Lambda_X$ both of order a few TeV), triggering the breaking
of the ETC group to $SU(N)_{TC}$. Consider the highest of these two scales,
$\Lambda_A$. The fermions transforming under $SU(M)_A$ also transform according
to the fundamental representations of the gauged $SU(N+4)\otimes SU(N)\otimes
SU(3)$. The $SU(3)$ gauge group is present in order to leave an unbroken
$SU(3)$ subgroup of $SU(N+4)$, which will become QCD, acting on the third
family of quarks. The strong $SU(M)_A$ interactions form condensates
\vspace{0.3in}
\begin{equation} <\bar{a}^{1..N} b> \neq 0, \hspace{1cm} <\bar{a}^{ N+1..N+3} c> \neq 0,
\hspace{1cm} <\bar{a}^{N+4} d> \neq 0, \end{equation}
\noindent breaking the gauged $SU(N+4)\otimes SU(N)\otimes SU(3)$ symmetry to
$SU(N) \otimes SU(3)_{QCD}$. The multiplets in Eq. (3.2) are broken, with the
SU(3) subgroup corresponding to the $t$ and $b$ quarks with QCD interactions,
the singlet to the first family lepton doublet and the $SU(N)$ subgroup to the
unbroken technicolor gauge group. All $(N+4)^2-1$ Goldstone bosons generated
at this first stage of breaking are eaten by gauge bosons which acquire masses
of order the confinement scale.
The ETC gauge bosons corresponding to generators broken at $\Lambda_A$ acquire
masses of order $F_A$, the decay constant of the Goldstone bosons formed at
$\Lambda_A$ that are eaten by the gauge bosons ($F_A^2\simeq M
\Lambda_A^2/4\pi^2$). Below the technicolor scale, where the technifermions
condense, these gauge bosons will generate masses for the third family quarks
and leptons given by
\begin{equation} m_f \simeq {\langle \bar{Q}Q \rangle \over F_A^2}, \end{equation}
where we have assumed that the ETC coupling is perturbative and have used the
four fermion approximation for the ETC gauge boson. The ETC gauge boson's mass
is proportional to its coupling ($M_{ETC}^2 \simeq g^2F_A^2$) and hence the ETC
coupling cancels in the quark and lepton masses.
In this simple model the quarks and leptons are degenerate. We shall address
generating flavor breaking within each family in section 3.2.
To cancel anomalies in the model, the additional fermions, $e$, $f$, $g$ and
$h$, transforming under the $SU(M)_X$ gauge group have been introduced. The
$SU(M)_X$ group confines these new fermions to remove them from the physical
spectrum at low energies. We assume that this confinement scale, $\Lambda_X$,
lies between the technicolor scale and the $SU(M)_A$ confinement scale. At
the scale $\Lambda_X$ there is a global $SU(N+4)_L \otimes SU(N+4)_R$ symmetry
acting on the fermions transforming under $SU(M)_X$. The preferred vacuum
alignment is that no gauge interactions are broken at this extra breaking scale
so there are $(N+4)^2-1$ Goldstone bosons which are not eaten. The
Goldstone's that transform under the adjoint or fundamental representations of
technicolor or QCD acquire masses governed by the scale $\Lambda_X$. The
remaining two Goldstones are massless and we leave discussion of them to
section 3.7.
$\left. \right.$ \hspace{0.15in}
\ifig\prtbdiag{}
{\epsfxsize12.8truecm\epsfbox{fig3.eps}} \vspace{-0.65cm}
\begin{center} Figure 3: A one doublet ETC model with three family scales.
\end{center}
\subsubsection{Three Families}
The model can be generalized to include three families of quarks and leptons as
shown in Fig 3. The ETC symmetry $SU(N+12)$ is broken to $SU(N)_{TC} \otimes
SU(3)_{QCD}$ at three separate scales. There is a separate SU(M) group to
trigger the breaking at each scale. Each is assumed to become strongly
interacting in the order A (at a scale of order a few 100's of TeV), B (at a
scale of order a few 10's of TeV), and finally C (at a scale of order a few
TeV). At each scale the breaking pattern is the same as that discussed in the
one family model; at $\Lambda_A$ the ETC symmetry $SU(N+12)$ is broken to
$SU(N+8) \otimes SU(3)$.
This breaking pattern is then repeated by the groups B and C. At the scale
$\Lambda_B$, the SU(3) group containing the SU(3) subgroup of SU(N+12) broken
at the scale $\Lambda_A$ and an SU(3) subgroup of the SU(N+8) group break
together to a single SU(3) group, which at the lowest breaking scale finally
becomes QCD. The QCD interactions are finally shared by all quarks in the
model. The broken gauge bosons of the ETC group now divide into three sets:
those with masses of order $F_A$ connecting the first family of SM fermions to
more massive generations; those with masses of order $F_B$ connecting the
second family to more massive generations; and those with masses of order
$F_C$ connecting the third family to technifermions. This hierarchy of ETC
gauge bosons masses will generate the hierarchy of
quark and lepton family masses below the technicolor scale.
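The resulting $1/F^2$ scaling of the family masses can be illustrated numerically. The sketch below uses the mass formula of Section 3.2.1, $m \simeq (N/4\pi^2)\,\Sigma(0)^3/F^2$, with three purely illustrative breaking scales chosen within the ranges quoted above; these are not fitted values.

```python
import math

# Illustration of the family mass hierarchy: below the technicolor scale each
# family mass scales as m ~ (N/4pi^2) Sigma(0)^3 / F^2, where F is the decay
# constant of the ETC breaking that split that family off. The three scales
# below are illustrative choices only. All masses in GeV.
N = 2
sigma0 = 1111.0                       # dynamical technifermion mass, GeV
F = {"first": 250_000.0, "second": 25_000.0, "third": 2_500.0}  # GeV

masses = {fam: (N / (4 * math.pi**2)) * sigma0**3 / f**2 for fam, f in F.items()}
for fam, m in masses.items():
    print(fam, round(m, 4), "GeV")    # hierarchy spans MeV to tens of GeV
```

A factor of 10 between successive scales translates into a factor of 100 between successive family masses, which is roughly the pattern seen in the quark spectrum.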
Anomalies are again cancelled in the model by the fermions transforming under
the extra $SU(M)_X$ gauge group that confines these fermions between the
technicolor and lowest ETC scale. In the enlarged model there are 6 Goldstone
bosons that have no gauge interactions and are hence massless.
\subsection{Flavor Symmetry Breaking}
The model in Fig 3 has an $SU(8)$ flavor symmetry within each family, broken
only by the weak SM interactions. To generate the observed quark and lepton
masses we must introduce quark-lepton symmetry breaking interactions and
isospin symmetry breaking interactions for both the quarks and leptons. For
ease of understanding let us discuss a model of just the third family and the
technidoublet.
\vspace{-0.5cm}
\subsubsection{Isospin Breaking}
We shall break isospin degeneracy by making the ETC gauge group chiral
\cite{Chiral}.
We take it to be $SU(N+4)_L \otimes SU(N+4)_{{\cal U}_R}\otimes SU(N+4)_{{\cal
D}_R}$, as shown in the model in Fig 4. The one family model in Fig 2 is shown
by the full lines in Fig 4, with the additional sectors discussed in this
section shown as dashed lines.
$\left. \right.$ \hspace{-0.35in}
\ifig\prtbdiag{}
{\epsfxsize15truecm\epsfbox{fig4.eps}} \vspace{-1.55cm}
\begin{center} Figure 4: Isospin breaking in the model of the third family.
\end{center}
The $SU(M)_A$ gauge group forms condensates at $\Lambda_A$ and breaks the
$SU(N+4)_L$ ETC group to $SU(N)\otimes SU(3)$ as in the simplest model. The two
gauge groups $SU(M+1)_{D}$ and $SU(M+1)_E$ then become strong between the scale
$\Lambda_A$ and the technicolor scale (for the purposes of making estimates we
shall take $\Lambda_A \simeq \Lambda_D \simeq \Lambda_E $), breaking the chiral
ETC groups to the vector $SU(N)_{TC} \otimes SU(3)_{QCD}$. At each of these
breakings, all Goldstone modes are eaten by gauge bosons associated with broken
generators.
There are now three degrees of coupling freedom associated with the
interactions of the quarks and leptons: the $SU(N+4)_L$ coupling $g_L$; the
$SU(N+4)_{ {\cal U}_R}$ coupling $g_{{\cal U}_R}$; and the $SU(N+4)_{{\cal
D}_R}$ coupling $g_{{\cal D}_R}$. The couplings that enter into the quark and
lepton masses are these running couplings evaluated at the breaking scale of
the ETC interactions and they will in general break the isospin symmetry of
the model. The left and right handed ETC gauge bosons mix through loops of the
fermions transforming under $SU(M+1)_D$ and $SU(M+1)_E$ that have condensed at
$\Lambda_{D,E}$ as shown in Fig 5.
We shall use these extra degrees of freedom to generate the top-bottom mass
splitting. The two extra parameters will not be sufficient to explain
quark-lepton mass differences which we leave to the next section.
$\left. \right.$ \hspace{-0.35in}
\ifig\prtbdiag{}
{\epsfxsize7.8truecm\epsfbox{fig5.eps}} \vspace{-1.55cm}
\begin{center} Figure 5: Generation of third family fermion, $f$, mass from the
technifermion, $Q$, condensate. \end{center}
If we assume that the ETC gauge bosons coupling to the top have couplings,
$g_L$ and $g_{{\cal U}_R}$, of order one or greater (but less than $4\pi$ at
which the ETC gauge bosons become strongly coupled) these gauge bosons will
have masses of order $F_{A} \simeq F_D$ or larger. We may approximate them at
the technicolor scale by four fermion interactions.
The ETC couplings cancel as in Eq(3.4) and the top mass can be estimated to be
roughly
\begin{equation} m_t \simeq {N \over 4\pi^2} {\Sigma(0)^2 \over F_A^2} \Sigma(0) \end{equation}
where $\Sigma(p)$ is the dynamical technifermion mass. A simple Pagels-Stokar
\cite{Pagels} estimate, compatible with QCD, gives $v^2 \equiv (250GeV)^2
\simeq \frac{N}{4 \pi^2} \Sigma(0)^2$ and hence $\Sigma(0) \simeq 1TeV$ for $N
\simeq 2$. To generate $m_t$ in the $100+GeV$ range therefore requires $F_A
\simeq 800GeV$. Although $F_A$ must be close to the technicolor scale, the
scale $\Lambda_A \simeq 2\pi F_A/\sqrt{M}$ will be larger, as in QCD, and hence
there is some running space for the technicolor coupling to evolve from its
value at the breaking scale to its critical value at the technicolor scale. The
estimates above are clearly naive approximations to the full non-perturbative
technicolor dynamics and are not to be trusted to more than factors of two.
It is therefore not completely clear whether a $175GeV$ top mass may be
generated by perturbative ETC gauge bosons.
If the ETC coupling is raised close to its critical value (the value of the ETC
coupling at which the ETC interactions alone would break the chiral symmetry of
the quark and leptons) at the ETC breaking scale then the approximations above
are not valid and the ETC coupling will not cancel from the top mass. A
$175GeV$ top mass may be generated, though how close the ETC coupling must be
to its critical coupling is unclear. We shall assume that the ETC interactions
are perturbative in the discussions below leaving the possibility that they
might be strong and near-critical to section 4.
To generate a smaller bottom mass we take the coupling $g_{{\cal D}_R}$ to be
less than one. The ETC gauge bosons associated with $SU(N+4)_{{\cal D}_R}$
therefore acquire a mass $g_{{\cal D}_R} F_E$, which are light relative to
$F_{D,E} \simeq F_A \simeq \Sigma(0)$. Referring again to Fig 5, the bottom
mass is given approximately by
\begin{equation} m_b \simeq {N \over 4 \pi^2} \int dk^2 k^2 {\Sigma(k) \over k^2 +
\Sigma(k)^2} {g_{{\cal D}_R}^2 \over k^2 + g_{{\cal D}_R}^2 F_A^2} \end{equation}
where we have taken $F_E = F_A$ and set the external momentum to zero. With
$g_{{\cal D}_R}^2 F_A^2 < \Sigma(0)^2$, the integral can be estimated to give
roughly
\begin{equation} m_b \simeq {N \over 4 \pi^2} g_{{\cal D}_R}^2 \Sigma(0) \end{equation}
where we have again neglected interactions between the ETC gauge boson and the
technicolor gauge bosons. The bottom mass is thus suppressed relative to the
top mass by $g_{{\cal D}_R}^2$. The choice $g_{{\cal D}_R} \simeq 1/6$ gives a
realistic value for $m_b$ and leads to a mass of order 200-300GeV for the
$SU(N+4)_{{\cal D}_R}$ ETC gauge boson.
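The suppression of the bottom mass by $g_{{\cal D}_R}^2$ can likewise be checked numerically; again this is only a rough evaluation of Eq. (3.7) with the values quoted in the text.

```python
import math

# Order-of-magnitude check of the bottom-mass estimate, Eq. (3.7):
# m_b ~ (N/4pi^2) g_DR^2 Sigma(0), with the values quoted in the text.
N = 2
sigma0 = 1111.0          # GeV, dynamical technifermion mass from Sec. 3.2.1
g_DR = 1.0 / 6.0         # weak SU(N+4)_{D_R} coupling chosen in the text

m_b = (N / (4 * math.pi**2)) * g_DR**2 * sigma0
suppression = m_b / ((N / (4 * math.pi**2)) * sigma0)   # = g_DR^2 = 1/36

print(round(m_b, 2), "GeV")  # a few GeV, the right order for the bottom quark
```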
The technifermion mass splitting $\Delta\Sigma(p) \equiv \Sigma_U(p) -
\Sigma_D(p)$ can also be estimated perturbatively in the ETC interactions. The
main contribution in the model is from the isospin violating, massive gauge
bosons that transform under the adjoint representation of SU(N). The splitting
can be estimated to be roughly
\begin{equation} \Delta \Sigma \simeq {N \over 4\pi^2} {\Sigma(0)^3 \over F_A^2} \simeq
m_t . \end{equation}
\noindent We discuss the implications of this mass splitting for the $\Delta
\rho \equiv \alpha T$ parameter in section 3.5 below.
\vspace{4cm}
$\left. \right.$ \hspace{-0.35in}
\ifig\prtbdiag{}
{\epsfxsize15 truecm\epsfbox{fig6.eps}} \vspace{-1.55cm}
\begin{center} Figure 6: Quark lepton mass splitting in the model of the third
family. The fermion lines are labelled by their $U(1)$ hypercharges.
\end{center}
\subsubsection{Lepton Masses}
In the model in Fig 4, the lepton's interactions are only split from their
quark isospin partners by SM interactions. Although QCD interactions may be
enhanced if the ETC interactions are close to critical (a possibility we
discuss below) and hence could possibly explain the tau-bottom mass splitting
they can not explain why the tau neutrino is so light or massless. In order to
give a fully perturbative ETC model we shall generate the tau-bottom and tau
neutrino-top mass splittings by further ETC breaking dynamics at new scales.
The extra sectors are shown in Fig 6. The $SU(M)_F$ gauge group becomes
strongly interacting at the scale $\Lambda_F$ and breaks a single gauge color
from the $SU(N+4)_{{\cal U}_R}$ gauge group. The corresponding broken
eigenstate of the multiplet in (3.2) will become the neutrino with mass
\begin{equation} m_{\nu_\tau} \simeq {N \over 4\pi} {F_D^2 \over F_A^2}
{\Sigma(0)^3 \over F_F^2} \end{equation}
with $F_A \sim F_D$ and with a suitably high choice of $F_F$ ($\geq 100TeV$)
the tau neutrino mass may be suppressed below the experimental bound of roughly
$30MeV$.
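That a scale $F_F \geq 100TeV$ indeed pushes the tau-neutrino mass below the quoted bound can be verified with a rough evaluation of Eq. (3.9), taking $F_D \sim F_A$ as in the text:

```python
import math

# Order-of-magnitude check of the tau-neutrino mass estimate, Eq. (3.9):
# m_nu ~ (N/4pi) (F_D^2/F_A^2) Sigma(0)^3 / F_F^2, with F_D ~ F_A so the
# ratio is ~1. All masses in GeV; 30 MeV is the bound quoted in the text.
N = 2
sigma0 = 1111.0          # GeV
F_F = 100_000.0          # 100 TeV, the "suitably high" choice of the text

m_nu = (N / (4 * math.pi)) * sigma0**3 / F_F**2
print(round(m_nu * 1000, 1), "MeV")  # safely below the ~30 MeV bound
```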
The gauge group $SU(M)_G$ plays the same role for the tau lepton, suppressing
its mass relative to the bottom quark's by $F_E^2/F_G^2$, from which we learn
that $F_G $ must be of order a few TeV to reproduce the observed tau-bottom
mass splitting.
\subsection{The First And Second Families}
The lightest two families of quarks and leptons may be incorporated in the
model following the discussion in section 3.1.2 and will have mass scales set
by the higher two ETC breaking scales. The top-bottom mass splitting will feed
down to the lightest two family quarks, generating isospin breaking that could
explain the charm-strange mass splitting. The three right handed neutrinos
could all be broken from their ETC multiplet at the scale $\Lambda_F$. The
neutrino masses would then be suppressed relative to the charged lepton masses
by $(F_D/F_F)^2$. The single scale $\Lambda_F$ could thus serve to explain the
lightness of all three neutrinos. The quark-lepton mass splittings however can
probably not be generated from the third family in perturbative ETC models,
since the bottom and tau contributions to, for example, the strange and muon
masses are small in comparison with the feeddown from the technifermions' self
energies. If necessary extra breaking scales may be introduced to explain the
splittings using the dynamics discussed above. Similarly the ETC gauge groups
acting on the right handed up and down quarks may be broken at additional
scales providing the freedom to accommodate the up-down mass inversion. The
symmetry breaking patterns presented here are not capable of producing the CKM
mixing angles in the quark sector since the families correspond to distinct ETC
gauge eigenstates broken at different scales. We leave the generation of the
mixing angles for future work.
\subsection{U(1) Embedding}
Hypercharge may be embedded in the moose model of Fig 6 by assigning each
particle the $U(1)$ charge indicated on the fermionic lines. The final
hypercharge group is a subgroup of the $U(1)_R$ group of the quarks, leptons
and technifermions and the broken diagonal generators of the $SU(N+4)$ ETC
group. To achieve the correct breaking pattern the condensates formed by
$SU(M)_A$ must be invariant to $U(1)_Y$. Since the $SU(N+4)$ symmetry of the
fermions transforming as an $\bar{M_A}$ is explicitly broken their $U(1)$
charges must correspond to the relevant subgroup of their $SU(N+4)\otimes U(1)$
symmetry.
\subsection{Phenomenology}
Since there is only one technidoublet in the model there are no pseudo
Goldstone bosons generated at the technicolor scale. The single doublet will
also generate only a small
contribution to the $S$ parameter
\cite{PT,Burgess}, $S \sim 0.1N$, which we expect to be compatible with the
current experimental two standard deviation upper limit $S < 0.4$.
The isospin violating ETC interactions
will, of course, give rise to a contribution to the $\Delta \rho (= \alpha T)$
parameter.
The W and Z masses are generated by technifermion condensation, and
deviations in the
$\Delta \rho$ parameter arise from corrections to the relevant diagrams due to
exchange of isospin violating ETC gauge bosons. At first order in the ETC
interactions the largest contribution will be generated by the exchange of the
massive gauge bosons transforming under the adjoint of $SU(N)$ across the
technifermion loop. We estimate this ``direct" contribution \cite{Tom} to be
\begin{equation} \Delta \rho \simeq {v^2 \over 8 F_A^2}\end{equation}
which is of order a percent.
The isospin violation of the ETC interactions will also feed into the
technidoublet giving rise to mass splitting between the techniup and technidown
(estimated in Eq(3.8)). There is thus an ``indirect" contribution to the
$\Delta \rho$ parameter from loops of non-degenerate technifermions which is
second order in ETC gauge boson exchange. Roughly estimating the contribution
using the perturbative result for $\Delta \rho$ \cite{PT} and the estimate of
$\Delta \Sigma$ in Eq(3.8) we find
\begin{equation} \Delta \rho \simeq {N \Delta \Sigma^2 \over 12 \pi^2 v^2} \simeq { v^4
\over 3 F_A^4 }.\end{equation}
\noindent These estimates of $\Delta \rho$ are of course naive, ignoring the
effects of the strong technicolor dynamics between the technifermion loops and
neglecting a complete analysis of the many massive ETC gauge bosons. If they
are accurate they could be difficult to reconcile with the experimental
constraint $\Delta \rho {\
\lower-1.2pt\vbox{\hbox{\rlap{$<$}\lower5pt\vbox{\hbox{$\sim$}}}}\ } 0.3\%$. We
leave a more detailed computation of $\Delta \rho$ to a subsequent paper.
In any case, in section 4 below we present a variation of the model that will
not overly affect $\Delta \rho$.
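These order-of-magnitude statements can be checked with a few lines of arithmetic. In the sketch below, $v=246$ GeV is the electroweak vev, while $F_A \approx 1$ TeV is an illustrative assumption, since the text only fixes the lowest ETC scale to be of order $1TeV$; only the orders of magnitude are meaningful.

```python
# Rough numerical cross-check of the "direct" and "indirect" Delta rho
# estimates above.  F_A ~ 1 TeV is an illustrative assumption.
v = 246.0    # GeV, electroweak vev
F_A = 1000.0 # GeV, assumed lowest ETC decay constant

direct = v**2 / (8.0 * F_A**2)    # first order in ETC exchange
indirect = v**4 / (3.0 * F_A**4)  # second order in ETC exchange

print(f"direct   ~ {100 * direct:.2f}%")   # of order a percent
print(f"indirect ~ {100 * indirect:.2f}%") # subleading
```

With these inputs the direct contribution comes out near $0.8\%$ and the indirect one near $0.1\%$, consistent with the qualitative discussion above.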
The model will also give rise to corrections to the $Zb\bar{b}$ vertex. These
arise from both the exchange of the {\it sideways} gauge boson \cite{Zbb},
coupling technifermions to the bottom, across the $Zb{\bar b}$ vertex and from
mixing of the Z with the diagonal broken ETC generator \cite{Wu}. Each of these
contributions can be as large as a few percent for an ETC scale of order 1TeV
but have opposite signs. The magnitude and sign of the combined correction has
been shown to be compatible with the experimental measurement for some models
(the exact correction is dependent on $N$ and the relative sizes of $g_L$ and
$g_{{\cal U}_R}$).
As presented the model does not give rise to quark or lepton number changing
FCNCs since each family's quark and lepton number are conserved ETC charges in
the model. Of course the most stringent FCNC constraints on ETC models come
from $K^0 {\bar K}^0$ mixing through the CKM mixing angles which break quark
number within each family to a single subgroup. Since we have not addressed the
generation of these mixing angles in this paper we cannot address this
constraint. We note though that these FCNCs may be suppressed in several ways:
by small mixing angles in the up-type quark sector, or by a walking technicolor
theory or strong ETC interactions that enhance the ETC scales.
\subsection{Massless Goldstone Bosons}
Massless Goldstone bosons are generated in the model at the scale $\Lambda_X$
as discussed above. These Goldstones carry no charge under any of the gauge
groups in the model. However, their constituents are charged, so they can be
produced by gluon or photon fusion or in the decay of the $Z$ \cite{PGB}. They
can also be produced through the exchange of the heavy ETC gauge bosons. The
amplitude in each case is proportional to $1/F_X$ where $F_X \simeq 1TeV$, so
that the production rate is down by at least an order of magnitude relative to
the production of the Goldstones composed of technifermions that arise in a one
family technicolor model. The rate is below current laboratory limits. With the
Goldstones massless or very light, however, their production by the above
mechanisms is a major energy loss mechanism for stars \cite{redgiant}, and is
ruled out by stellar abundances.
The Goldstones are thus troublesome but may acquire masses from further
unifications above the scales discussed in the model so far. In the spontaneous
breaking at $\Lambda_X$, the Goldstone bosons complete an adjoint
representation of the unbroken $SU(N+12)$ vector global symmetry group (in
the three family model). If at some higher scale this group is gauged
(corresponding for example to gauging the full chiral symmetry group in Fig 3)
then all the Goldstones will acquire masses given by
\begin{equation} M_A^2 \simeq 4\pi F_{X}^4/\Lambda_{new}^2\end{equation}
which is potentially sufficient to ensure that the Goldstones will not be a
source of energy loss in stellar interiors.
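The size of the Goldstone mass estimate above can be illustrated numerically. In the sketch below, $F_X \simeq 1TeV$ follows the text, while the higher unification scale $\Lambda_{new} = 100TeV$ is a hypothetical choice made only for orientation.

```python
# Illustrative evaluation of M_A^2 ~ 4*pi*F_X^4 / Lambda_new^2.
# F_X ~ 1 TeV follows the text; Lambda_new = 100 TeV is an assumed
# higher unification scale, used only to set the scale of M_A.
import math

F_X = 1.0           # TeV
Lambda_new = 100.0  # TeV (assumed)

M_A = math.sqrt(4.0 * math.pi) * F_X**2 / Lambda_new  # TeV
print(f"M_A ~ {1000 * M_A:.0f} GeV")
```

For these inputs the Goldstones come out at a few tens of GeV, far above the scale relevant for stellar energy loss.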
\section{Strong ETC and Chiral Top Color}
The model presented so far appears capable of producing a 175GeV top mass
treating the ETC interactions perturbatively without contradicting other
experimental bounds. However, the contributions of the isospin violating ETC
gauge bosons to the $\Delta \rho$ parameter and to the $Zb{\bar b}$ vertex
are close to experimental limits. These contributions, which scale as $1/
M_{ETC}^2$, can be reduced by increasing the lowest ETC scale, but at the
expense of tuning the ETC coupling close to its critical value from below to
generate the large top mass. A near critical ETC interaction for the third
family would also enhance the QCD corrections to the third family quark masses
and potentially explain the bottom tau mass splitting without the need for the
extra ETC symmetry breaking scale $\Lambda_G$ discussed in section 3.2.2.
Finally increasing the lowest ETC scale would allow us to increase the scale
$\Lambda_X$ and hence generate larger masses for the Goldstones formed at that
scale.
Although near critical ETC interactions at a higher ETC scale may suppress the
direct contribution to $\Delta \rho$ the indirect contribution will remain
roughly the same size but may no longer be considered second order. This
follows from a gap equation analysis which suggests that the technifermion mass
splitting will remain of order $m_t$. Therefore if the large top mass is the
result of either perturbative or strongly interacting sideways ETC
interactions the contribution to the $\Delta \rho$ parameter may conflict with
the experimental limit.
Recently Hill \cite{Hill} has proposed that the large top mass may be generated
by a near critical top self interaction \cite{Topcond}. If the ETC gauge boson
with the large isospin violating coupling responsible for the top mass does
not couple to the technifermions then the isospin splitting will not feed back
into the technisector and hence the $\Delta \rho$ parameter as described above.
Hill generates the top self interaction by assuming that at ETC scales there is
a separate $SU(3)_C \otimes U(1)_Y$
gauge group acting on the third family that is near critical when broken to the
SM gauge groups.
$\left. \right.$ \hspace{-0.35in}
\ifig\prtbdiag{}
{\epsfxsize15 truecm\epsfbox{fig7.eps}} \vspace{-1.55cm}
\begin{center} Figure 7: Chiral top color in the model of the third family.
\end{center}
We can extend our model to include a top color interaction as shown in Fig 7.
The new $SU(M)_H$ group becomes strongly interacting at $\Lambda_H $ breaking
the $SU(N+3)_{\cal{U}_R}$ group, left after the right handed neutrino has
decoupled, to $SU(N) \otimes SU(3)$. The right handed SU(3) color group's
coupling will run independently of the technicolor coupling below this breaking
scale (we require that $\Lambda_H$ is large enough that there is enough running
time for the SU(3) and SU(N) groups' couplings to significantly diverge) and
this interaction of the top will be assumed to be near critical when broken to
the vector QCD subgroup at $\Lambda_D$. Unlike in Hill's model in which the
top bottom mass
splitting is the result of a strongly coupled $U(1)_Y$ gauge interaction (with
the associated problem of its coupling being close to its Landau pole), here
the isospin splitting is provided by chiral, asymptotically free, {\it
non-abelian} gauge groups.
\section{ Summary and Conclusions}
We have presented a one-doublet technicolor model in which the ETC gauge
symmetries are broken by purely QCD-like dynamics. All fermions transform only
under the fundamental representation of gauge groups. The model has chiral ETC
gauge groups, explicitly breaking custodial symmetry, and Pati-Salam
unification at high scales. Its main features are:
\begin{itemize}
{\item Three families of quarks and leptons are incorporated, with a hierarchy
of three family-symmetry breaking scales. Within the third family, the full
spectrum of masses can be accommodated. In particular, we argue that with a
third family ETC scale on the order of $1TeV$, it may be possible to
generate both the top and bottom masses through perturbative ETC interactions.
A light tau neutrino mass can be achieved by breaking the ETC group for right
handed isospin $+1/2$ fermions at a high scale. To place $m_{\nu_\tau}$ below
the current limit of roughly 30MeV, this scale must be above about 100TeV.}
{\item Since the model contains a single doublet of technifermions, no
pseudo-Goldstone bosons are formed at the electroweak scale and the $S$
parameter can be kept relatively small. The weak custodial isospin symmetry
breaking built into the model leads to a so called ``direct" contribution
\cite{Tom} to $\Delta \rho \equiv \alpha T$, which is first order in the ETC
interaction. Our naive estimate suggests that this contribution may be nearly
$1\%$ and hence possibly above the experimental limit. A more detailed analysis
of this contribution (and that to the $Z b {\bar b}$ vertex) will be given in a
succeeding paper. The ``indirect" contribution, arising from loops of
non-degenerate technifermions, is second order in ETC interactions and is small
relative to the direct contribution when the ETC interactions are
perturbative.}
{\item The model contains global symmetries at the ETC scales, whose
spontaneous breaking leads to massless Goldstone bosons. They can couple to
ordinary matter through SM interactions and are ruled out by stellar energy
loss constraints \cite{redgiant}. They can, however, be given
phenomenologically acceptable masses by further unifications above the ETC
scales, which break the global symmetries. }
{\item Some of the mass splittings within the first two families will be fed
down naturally from the third family. We have argued that the charm-strange
mass splitting may be a result of the top-bottom mass splitting. The
suppression of all three generations of neutrino masses may be explained by a
single ETC breaking scale. We have not discussed the origin of quark mixing
angles in this work though it will clearly be important to address this point
in the future.}
{\item We have also demonstrated that a large top quark mass can be generated
dynamically in technicolor by a near critical top color interaction without
the need for a strong U(1) interaction. This variant of the model is compatible
with the experimental value of $\Delta \rho$.}
\end{itemize}
The model presented here illustrates that ETC symmetries can be broken using
only QCD-like dynamics and fermions in fundamental representations. The
requisite number of quark-lepton and isospin symmetry violating parameters may
be introduced to accommodate the third family spectrum. It remains to be seen
whether this approach leads to an explanation of quark and lepton masses and
CKM mixing angles. \vspace{1in}
\noindent {\bf Acknowledgements}
The authors would like to thank Steve Hsu, Steve Selipsky, Andy Cohen, Sekhar
Chivukula, Ken Lane, Liz Simmons and Lisa Randall for useful comments and
discussion.
\newpage
Let $f:I\subset
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a convex function on the interval $I$ of real numbers and $a,b\in I$
with $a<b.$ The inequality
\begin{equation}
f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}f(x)dx\leq \frac{
f(a)+f(b)}{2} \label{hc}
\end{equation}
is known as Hermite-Hadamard's inequality for convex functions, \cite{JFY}.
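As a quick numerical illustration of the double inequality above (not part of the original argument), one can check it for the convex function $f(x)=x^2$ on $[a,b]=[0,1]$, approximating the integral by a midpoint rule:

```python
# Numerical illustration of the Hermite-Hadamard inequality for
# f(x) = x^2 on [0, 1]: f((a+b)/2) <= (1/(b-a)) * int f <= (f(a)+f(b))/2.
a, b = 0.0, 1.0
f = lambda x: x * x

n = 10_000  # midpoint-rule subintervals
mean_value = sum(f(a + (b - a) * (k + 0.5) / n) for k in range(n)) / n

left = f((a + b) / 2)        # 0.25
right = (f(a) + f(b)) / 2    # 0.5
assert left <= mean_value <= right
print(left, mean_value, right)  # 0.25 <= ~1/3 <= 0.5
```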
In \cite{G}, Toader defined $m-$convexity as the following.
\begin{definition}
\label{d1.1} The function $f:[0,b]\rightarrow
\mathbb{R}
,$ $b>0$, is said to be $m-$convex where $m\in \lbrack 0,1],$ if we have
\begin{equation*}
f(tx+m(1-t)y)\leq tf(x)+m(1-t)f(y)
\end{equation*}
for all $x,y\in \lbrack 0,b]$ and $t\in \lbrack 0,1].$ We say that $f$ is
$m-$concave if $\left( -f\right) $ is $m-$convex.
\end{definition}
In \cite{S}, Varo\v{s}anec defined the following class of functions.
$I$ and $J$ are intervals in $
\mathbb{R}
$, $\left( 0,1\right) \subseteq J$, and $h$ and $f$ are real
non-negative functions defined on $J$ and $I$, respectively.
\begin{definition}
\label{d1.6} Let $h:J\subseteq
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function, $h\neq 0.$ We say that $f:I\rightarrow
\mathbb{R}
$ is an $h-$convex function, or that $f$ belongs to the class $SX(h,I)$, if $
f$ is non-negative and for all $x,y\in I$, $\alpha \in \left( 0,1\right) $ we
have
\begin{equation}
f\left( \alpha x+(1-\alpha )y\right) \leq h(\alpha )f(x)+h(1-\alpha )f(y)
\label{D}
\end{equation}
\end{definition}
If inequality (\ref{D}) is reversed, then $f$ is said to be $h-$concave, i.e. $
f\in SV\left( h,I\right) $.
In \cite{SSY}, Sar\i kaya et al. proved the following variant of the Hadamard
inequality for $h-$convex functions.
\begin{theorem}
\label{t1.9} Let $f\in SX\left( h,I\right) ,$ $a,b\in I,$ with $a<b$ and $
f\in L_{1}\left( \left[ a,b\right] \right) .$ Then
\begin{equation}
\frac{1}{2h\left( \frac{1}{2}\right) }f\left( \frac{a+b}{2}\right) \leq
\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \left[ f\left( a\right)
+f\left( b\right) \right] \int_{0}^{1}h\left( \alpha \right) d\alpha .
\label{hh}
\end{equation}
\end{theorem}
In \cite{OAS}, \"{O}zdemir et al. defined $\left( h,m\right) -$convexity and
obtained Hermite-Hadamard-type inequalities as follows.
\begin{definition}
\label{d1.7} Let $h:J\subset
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function. We say that $f:\left[ 0,b\right] \rightarrow
\mathbb{R}
$ is a $\left( h,m\right) -$convex function, if $f$ is non-negative and for
all $x,y\in \left[ 0,b\right] ,m\in \lbrack 0,1]$ and $\alpha \in (0,1),$ we
have
\begin{equation*}
f(\alpha x+m(1-\alpha )y)\leq h(\alpha )f(x)+mh(1-\alpha )f(y).
\end{equation*}
If the inequality is reversed, then $f$ is said to be an $\left( h,m\right)
-$concave function on $[0,b].$
\end{definition}
\begin{theorem}
\label{t1.11} Let $f:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $\left( h,m\right) -$convex function with $m\in \left( 0,1\right] ,$
$t\in \left[ 0,1\right] .$ If \ $0\leq a<b<\infty $ and $f\in $ $L_{1}\left[
ma,b\right] ,$ then the following inequality holds
\begin{eqnarray*}
&&\frac{1}{m+1}\left[ \frac{1}{mb-a}\int_{a}^{mb}f\left( x\right) dx+\frac{1}{
b-ma}\int_{ma}^{b}f\left( x\right) dx\right] \\
&& \\
&\leq &\left[ f(a)+f(b)\right] \int_{0}^{1}h\left( t\right) dt.
\end{eqnarray*}
\end{theorem}
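The theorem above can be sanity-checked numerically. The sketch below uses the illustrative choices $f(x)=x^2$, $h(t)=t$, $m=1/2$, $a=1$, $b=4$; since $f$ is convex with $f(0)=0$, it is $\left( h,m\right)-$convex for this $h$ and any $m\in (0,1]$. These specific values are not taken from the text.

```python
# Numerical check of the (h,m)-convex Hermite-Hadamard-type bound:
# (1/(m+1)) * [ avg of f over [a, mb] + avg of f over [ma, b] ]
#   <= (f(a) + f(b)) * int_0^1 h(t) dt,
# for the illustrative choices f(x)=x^2, h(t)=t, m=1/2, a=1, b=4.
f = lambda x: x * x
m, a, b = 0.5, 1.0, 4.0

def avg(lo, hi, n=10_000):
    """Midpoint-rule average of f over [lo, hi]."""
    return sum(f(lo + (hi - lo) * (k + 0.5) / n) for k in range(n)) / n

lhs = (avg(a, m * b) + avg(m * a, b)) / (m + 1)
int_h = 0.5                      # exact integral of h(t) = t over [0, 1]
rhs = (f(a) + f(b)) * int_h      # = 8.5

assert lhs <= rhs
print(lhs, rhs)
```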
Let us consider a function $\varphi :[a,b]\rightarrow \lbrack a,b]$ where $
[a,b]\subset
\mathbb{R}
.$ In \cite{Y}, Youness defined the $\varphi -$convex functions as
follows:
\begin{definition}
\label{d1.8} A function $f:[a,b]\rightarrow
\mathbb{R}
$ is said to be $\varphi -$convex on $[a,b]$ if for every two points $x\in
\lbrack a,b],y\in \lbrack a,b]$ and $t\in \lbrack 0,1],$ the following
inequality holds
\begin{equation*}
f\left( t\varphi (x)+(1-t)\varphi (y)\right) \leq tf(\varphi
(x))+(1-t)f(\varphi (y)).
\end{equation*}
\end{definition}
In \cite{Z}, M.Z. Sarikaya defined $\varphi _{h}-$convex functions and
obtained the following inequalities for this class.
\begin{definition}
\label{d1.9} Let $I$ be an interval in $
\mathbb{R}
$ and $h:(0,1)\rightarrow (0,\infty )$ be a given function. We say that a
function $f:I\rightarrow \lbrack 0,\infty )$ is $\varphi _{h}-$convex if
\begin{equation}
f\left( t\varphi (x)+(1-t)\varphi (y)\right) \leq h(t)f(\varphi
(x))+h(1-t)f(\varphi (y)) \label{12}
\end{equation}
for all $x,y\in I$ and $t\in (0,1).$ If inequality (\ref{12}) is reversed,
then $f$ is said to be $\varphi _{h}-$concave.
\end{definition}
\begin{theorem}
\label{t1.13} Let $h:\left( 0,1\right) \rightarrow \left( 0,\infty \right) $
be a given function. If $f:I\rightarrow \lbrack 0,\infty )$ is Lebesgue
integrable and $\varphi _{h}-$convex for a continuous function $\varphi
:[a,b]\rightarrow \lbrack a,b],$ then the following inequality holds
\begin{eqnarray*}
&&\frac{1}{\varphi (b)-\varphi (a)}\int_{\varphi (a)}^{\varphi
(b)}f(x)f(\varphi (a)+\varphi (b)-x)dx \\
&& \\
&\leq &\left[ f^{2}\left( \varphi (a)\right) +f^{2}\left( \varphi (b)\right)
\right] \int_{0}^{1}h(t)h\left( 1-t\right) dt+2f\left( \varphi (a)\right)
f\left( \varphi (b)\right) \int_{0}^{1}h^{2}(t)dt.
\end{eqnarray*}
\end{theorem}
\begin{theorem}
\label{t1.14} Let $h:\left( 0,1\right) \rightarrow \left( 0,\infty \right) $
be a given function. If $f,g:I\rightarrow \lbrack 0,\infty )$ are Lebesgue
integrable and $\varphi _{h}-$convex for a continuous function $\varphi
:[a,b]\rightarrow \lbrack a,b],$ then the following inequality holds
\begin{eqnarray*}
&&\frac{1}{\varphi \left( b\right) -\varphi \left( a\right) }\int_{\varphi
(a)}^{\varphi (b)}f\left( x\right) g(x)dx \\
&\leq &M(a,b)\int_{0}^{1}h^{2}(t)dt+N(a,b)\int_{0}^{1}h(t)h(1-t)dt
\end{eqnarray*}
where
\begin{equation*}
M(a,b)=f\left( \varphi (a)\right) g\left( \varphi (a)\right) +f\left(
\varphi (b)\right) g\left( \varphi (b)\right)
\end{equation*}
and
\begin{equation*}
N(a,b)=f\left( \varphi (a)\right) g\left( \varphi (b)\right) +f\left(
\varphi (b)\right) g\left( \varphi (a)\right) .
\end{equation*}
\end{theorem}
The aim of this paper is to define a new class of convex function and then
establish new Hermite-Hadamard-type inequalities.
\section{MAIN RESULTS}
In the beginning we give a new definition, the $\varphi _{h,m}-$convex
function.
$I$ and $J$ are intervals in $
\mathbb{R}
$, $\left( 0,1\right) \subseteq J$, and $h$ and $f$ are real
non-negative functions defined on $J$ and $I$, respectively.
\begin{definition}
\label{d2.1} Let $h:J\subset
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function, $h\neq 0.$ We say that $f:\left[ 0,b\right]
\subseteq \left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ is a $\varphi _{h,m}-$convex function, if $f$ is non-negative and
satisfies the inequality
\begin{equation}
f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y)) \label{h}
\end{equation}
for all $x,y\in \left[ 0,b\right] ,t\in \left( 0,1\right) .$
\end{definition}
If the inequality (\ref{h}) is reversed, then $f$ is said to be a $\varphi
_{h,m}-$concave function on $\left[ 0,b\right] .$
Obviously, if we choose $h(t)=t$ and $m=1$ we have non-negative $\varphi
-$convex functions. If we choose $m=1,$ then we have $\varphi _{h}-$convex
functions. If we choose $m=1$ and $\varphi (x)=x$, the two definitions of $
\varphi _{h,m}-$convex and $h-$convex functions become identical.
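As a small sanity check of the defining inequality (not part of the paper), one can sample it for the illustrative choices $h(t)=t$, $m=1$, $\varphi(x)=x$ and $f(x)=x^2$, for which $\varphi _{h,m}-$convexity reduces to ordinary convexity:

```python
# Randomized sanity check of f(t*phi(x) + m*(1-t)*phi(y))
#   <= h(t)*f(phi(x)) + m*h(1-t)*f(phi(y))
# for the illustrative choices f(x)=x^2, h(t)=t, m=1, phi(x)=x.
import random

f = lambda x: x * x
h = lambda t: t
phi = lambda x: x
m = 1.0

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(0, 5), random.uniform(0, 5)
    t = random.uniform(0.001, 0.999)
    lhs = f(t * phi(x) + m * (1 - t) * phi(y))
    rhs = h(t) * f(phi(x)) + m * h(1 - t) * f(phi(y))
    assert lhs <= rhs + 1e-9  # tolerance for floating-point rounding
print("inequality holds on all sampled points")
```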
The following results were obtained for $\varphi _{h,m}-$convex functions.
\begin{proposition}
\label{prop 2.1} If $f,g$ are $\varphi _{h,m}-$convex functions and $\lambda
>0,$ then $f+g$ and $\lambda f$ are $\varphi _{h,m}-$convex functions.
\end{proposition}
\begin{proof}
From the definition of $\varphi _{h,m}-$convex functions we can write
\begin{equation*}
f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y))
\end{equation*}
and
\begin{equation*}
g(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)g(\varphi (x))+mh(1-t)g(\varphi
(y))
\end{equation*}
for all $x,y\in \left[ 0,b\right] ,m\in (0,1]$ and $t\in \left[ 0,1\right] .$
If we add the above inequalities we get
\begin{equation*}
\left( f+g\right) (t\varphi (x)+m(1-t)\varphi (y))\leq h(t)\left( f+g\right)
(\varphi (x))+mh(1-t)\left( f+g\right) (\varphi (y)).
\end{equation*}
We also have
\begin{equation*}
\lambda f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)\lambda f(\varphi
(x))+mh(1-t)\lambda f(\varphi (y)),
\end{equation*}
which completes the proof.
\end{proof}
\begin{proposition}
\label{prop 2.2} Let $h_{1},h_{2}:\left( 0,1\right) \rightarrow \left(
0,\infty \right) $ be functions such that $h_{2}\left( t\right) \leq
h_{1}\left( t\right) $ for all $t\in \left( 0,1\right) .$ If $f$ is $\varphi
_{h_{2},m}-$convex on $[0,b],$ then $f$
is $\varphi _{h_{1},m}-$convex on $[0,b].$
\end{proposition}
\begin{proof}
Since $f$ is $\varphi _{h_{2},m}-$convex on $[0,b],$ for all $x,y\in \left[
0,b\right] $ and $t\in (0,1),$ we have
\begin{eqnarray*}
f(t\varphi (x)+m(1-t)\varphi (y)) &\leq &h_{2}(t)f(\varphi
(x))+mh_{2}(1-t)f(\varphi (y)) \\
&\leq &h_{1}(t)f(\varphi (x))+mh_{1}(1-t)f(\varphi (y)),
\end{eqnarray*}
which completes the proof.
\end{proof}
\begin{theorem}
\label{t2.0} Let $f$ be a $\varphi _{h,m}-$convex function. Then i) if $
\varphi $ is linear, then $f\circ \varphi $ is $\left( h-m\right) -$convex
and ii) if $f$ is increasing and $\varphi $ is $m-$convex, then $f\circ
\varphi $ is $\left( h-m\right) -$convex.
\end{theorem}
\begin{proof}
i) From $\varphi _{h,m}-$convexity of $f$ and linearity of $\varphi ,$ we
have
\begin{eqnarray*}
f\circ \varphi \left[ tx+m(1-t)y\right] &=&f\left[ \varphi \left(
tx+m(1-t)y\right) \right] \\
&=&f\left[ t\varphi (x)+m(1-t)\varphi (y)\right] \\
&\leq &h(t)f\circ \varphi (x)+mh\left( 1-t\right) f\circ \varphi (y)
\end{eqnarray*}
which completes the proof for the first case.
ii) From $m-$convexity of $\varphi $, we have
\begin{equation*}
\varphi \left[ tx+m(1-t)y\right] \leq t\varphi (x)+m(1-t)\varphi (y).
\end{equation*}
Since $f$ is increasing we can write
\begin{eqnarray*}
f\circ \varphi \left[ tx+m(1-t)y\right] &\leq &f\left[ t\varphi
(x)+m(1-t)\varphi (y)\right] \\
&\leq &h(t)f\circ \varphi (x)+mh\left( 1-t\right) f\circ \varphi (y).
\end{eqnarray*}
This completes the proof for this case.
\end{proof}
\begin{theorem}
\label{t2.00} Let $h:J\subseteq
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function, $h\neq 0$ and $f:\left[ 0,b\right] \subseteq
\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be a $\varphi _{h,m}-$convex function with $m\in (0,1]$ and $t\in \left(
0,1\right) .$ Then for all $x,y\in \lbrack 0,b],$ the function $
g:[0,1]\rightarrow
\mathbb{R}
,$ $g(t)=f(t\varphi (x)+m(1-t)\varphi (y))$ is $\left( h-m\right) -$convex
on $\left[ 0,1\right] .$
\end{theorem}
\begin{proof}
Since $f$ is a $\varphi _{h,m}-$convex function, for $x,y\in \lbrack 0,b],$ $
\lambda _{1},\lambda _{2}\in \left( 0,1\right) $ with $\lambda _{1}+\lambda
_{2}=1$ and $t_{1},t_{2}\in \left( 0,1\right) $ we obtain
\begin{eqnarray*}
&&g\left( \lambda _{1}t_{1}+m\lambda _{2}t_{2}\right) \\
&=&f\left[ \left( \lambda _{1}t_{1}+m\lambda _{2}t_{2}\right) \varphi
(x)+m\left( 1-\lambda _{1}t_{1}-m\lambda _{2}t_{2}\right) \varphi (y)\right]
\\
&=&f\left[ \lambda _{1}\left( t_{1}\varphi (x)+m\left( 1-t_{1}\right)
\varphi (y)\right) +m\lambda _{2}\left( t_{2}\varphi (x)+m\left(
1-t_{2}\right) \varphi (y)\right) \right] \\
&\leq &h\left( \lambda _{1}\right) f\left( t_{1}\varphi (x)+m\left(
1-t_{1}\right) \varphi (y)\right) +mh\left( \lambda _{2}\right) f\left(
t_{2}\varphi (x)+m\left( 1-t_{2}\right) \varphi (y)\right) \\
&=&h\left( \lambda _{1}\right) g\left( t_{1}\right) +mh\left( \lambda
_{2}\right) g\left( t_{2}\right)
\end{eqnarray*}
which shows the $\left( h-m\right) -$convexity of $g.$
\end{proof}
\begin{theorem}
\label{t2.1} Let $h:J\subseteq
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function, $h\neq 0$ and $f:\left[ 0,b\right] \subseteq
\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be a $\varphi _{h,m}-$convex function with $m\in (0,1]$ and $t\in \left(
0,1\right) .$ If $f\in L_{1}\left[ \varphi (a),m\varphi (b)\right] ,$ $h\in
L_{1}\left[ 0,1\right] ,$ one has the following inequality:
\begin{eqnarray*}
&&\frac{1}{m\varphi (y)-\varphi (x)}\int_{\varphi (x)}^{m\varphi
(y)}f(u)f(\varphi (x)+m\varphi (y)-u)du \\
&& \\
&\leq &\left[ f^{2}\left( \varphi (x)\right) +m^{2}f^{2}\left( \varphi
(y)\right) \right]
\int_{0}^{1}h(t)h\left( 1-t\right) dt+f\left( \varphi (x)\right) f\left(
\varphi (y)\right) \left[ m+1\right] \int_{0}^{1}h^{2}(t)dt.
\end{eqnarray*}
\end{theorem}
\begin{proof}
Since $f$ is a $\varphi _{h,m}-$convex function, $t\in \left[ 0,1\right] $ and
$m\in (0,1],$ we have
\begin{equation*}
f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y))
\end{equation*}
and
\begin{equation*}
f((1-t)\varphi (x)+mt\varphi (y))\leq h(1-t)f(\varphi (x))+mh(t)f(\varphi
(y))
\end{equation*}
for all $x,y\in \lbrack 0,b].$
By multiplying these inequalities and integrating on $\left[ 0,1\right] $
with respect to $t$, we obtain
\begin{eqnarray*}
&&\int_{0}^{1}f(t\varphi (x)+m(1-t)\varphi (y))f((1-t)\varphi (x)+mt\varphi
(y))dt \\
&\leq &f^{2}\left( \varphi (x)\right) \int_{0}^{1}h(t)h(1-t)dt+mf\left(
\varphi (x)\right) f\left( \varphi (y)\right) \int_{0}^{1}h^{2}(t)dt \\
&&+mf\left( \varphi (x)\right) f\left( \varphi (y)\right)
\int_{0}^{1}h^{2}(1-t)dt+m^{2}f^{2}\left( \varphi (y)\right)
\int_{0}^{1}h(t)h(1-t)dt \\
&=&\left[ f^{2}\left( \varphi (x)\right) +m^{2}f^{2}\left( \varphi
(y)\right) \right] \int_{0}^{1}h(t)h(1-t)dt+f\left( \varphi (x)\right)
f\left( \varphi (y)\right) \left[ m+1\right] \int_{0}^{1}h^{2}(t)dt.
\end{eqnarray*}
If we change the variable $u=t\varphi (x)+m(1-t)\varphi (y),$ we obtain the
required inequality.
\end{proof}
\begin{remark}
\label{rem 2.0} In Theorem \ref{t2.1}, if we choose $m=1$, Theorem \ref{t2.1}
reduces to Theorem \ref{t1.13}.
\end{remark}
\begin{theorem}
\label{t2.2.} Under the assumptions of Theorem \ref{t2.1}, we have the
following inequality
\begin{equation*}
\frac{1}{m\varphi \left( y\right) -\varphi \left( x\right) }\int_{\varphi
(x)}^{m\varphi (y)}f\left( u\right) du\leq \left[ f\left( \varphi (x)\right)
+f\left( \varphi (y)\right) \right] \int_{0}^{1}h(t)dt.
\end{equation*}
\end{theorem}
\begin{proof}
By the definition of a $\varphi _{h,m}-$convex function we can write
\begin{equation*}
f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y)).
\end{equation*}
If we integrate the above inequality on $\left[ 0,1\right] $ with respect to
$t$ and change the variable $u=t\varphi (x)+m(1-t)\varphi (y),$ we obtain
the required inequality.
\end{proof}
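The inequality of the theorem above can be checked numerically for one concrete, illustrative configuration: $f(u)=u^2$, $h(t)=t$, $m=1/2$, $\varphi(x)=x$, with $\varphi(x)=1$ and $\varphi(y)=4$, so that $\varphi(x)<m\varphi(y)$. Since $f(u)=u^2$ is convex with $f(0)=0$, it is $\varphi _{h,m}-$convex for these $h$, $m$, $\varphi$; none of the specific values come from the paper.

```python
# Numerical check of (1/(m*phi(y)-phi(x))) * int_{phi(x)}^{m*phi(y)} f(u) du
#   <= (f(phi(x)) + f(phi(y))) * int_0^1 h(t) dt
# for f(u)=u^2, h(t)=t, m=1/2, phi = identity, phi(x)=1, phi(y)=4.
f = lambda u: u * u
h = lambda t: t
m = 0.5
phi_x, phi_y = 1.0, 4.0

lo, hi = phi_x, m * phi_y  # [1, 2], so lo < hi as required
n = 10_000
# midpoint-rule average of f over [lo, hi]
lhs = sum(f(lo + (hi - lo) * (k + 0.5) / n) for k in range(n)) / n

int_h = sum(h((k + 0.5) / n) for k in range(n)) / n  # = 1/2 for h(t)=t
rhs = (f(phi_x) + f(phi_y)) * int_h

assert lhs <= rhs
print(lhs, rhs)  # ~2.333 <= 8.5
```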
\begin{remark}
\label{rem 2.1} In Theorem \ref{t2.2.}, if we choose $m=1$ and $\varphi
:[a,b]\rightarrow \lbrack a,b],$ $\varphi (x)=x$, we obtain the inequality on
the right hand side of (\ref{hh}).
\end{remark}
\begin{theorem}
\label{t2.2} Under the assumptions of Theorem \ref{t2.1}, we have the
following inequality
\begin{eqnarray*}
&&\frac{1}{m+1}\left[ \frac{1}{\varphi \left( y\right) -m\varphi \left(
x\right) }\int_{m\varphi (x)}^{\varphi (y)}f\left( u\right) du+\frac{1}{
m\varphi \left( y\right) -\varphi \left( x\right) }\int_{\varphi
(x)}^{m\varphi (y)}f\left( u\right) du\right] \\
&\leq &\left[ f\left( \varphi (x)\right) +f\left( \varphi (y)\right) \right]
\int_{0}^{1}h(t)dt
\end{eqnarray*}
for all $0\leq m\varphi \left( x\right) \leq \varphi \left( x\right) \leq
m\varphi (y)<\varphi (y)<\infty .$
\end{theorem}
\begin{proof}
Since $f$ is a $\varphi _{h,m}-$convex function, we can write
\begin{eqnarray*}
f(t\varphi (x)+m(1-t)\varphi (y)) &\leq &h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y)), \\
&& \\
f((1-t)\varphi (x)+mt\varphi (y)) &\leq &h(1-t)f(\varphi (x))+mh(t)f(\varphi
(y)), \\
&& \\
f(t\varphi (y)+m(1-t)\varphi (x)) &\leq &h(t)f(\varphi (y))+mh(1-t)f(\varphi
(x)),
\end{eqnarray*}
and
\begin{equation*}
f((1-t)\varphi (y)+mt\varphi (x))\leq h(1-t)f(\varphi (y))+mh(t)f(\varphi
(x)).
\end{equation*}
By summing these inequalities and integrating on $\left[ 0,1\right] $ with
respect to $t$, we obtain
\begin{eqnarray*}
&&\int_{0}^{1}f(t\varphi (x)+m(1-t)\varphi (y))dt+\int_{0}^{1}f((1-t)\varphi
(x)+mt\varphi (y))dt \\
&&+\int_{0}^{1}f(t\varphi (y)+m(1-t)\varphi
(x))dt+\int_{0}^{1}f((1-t)\varphi (y)+mt\varphi (x))dt \\
&\leq &\left[ f\left( \varphi (x)\right) +f\left( \varphi (y)\right) \right]
\left( m+1\right) \left[ \int_{0}^{1}h(t)dt+\int_{0}^{1}h(1-t)dt\right] .
\end{eqnarray*}
It is easy to see that
\begin{eqnarray*}
\int_{0}^{1}f(t\varphi (x)+m(1-t)\varphi (y))dt
&=&\int_{0}^{1}f((1-t)\varphi (x)+mt\varphi (y))dt=\frac{1}{m\varphi \left(
y\right) -\varphi \left( x\right) }\int_{\varphi (x)}^{m\varphi (y)}f\left(
u\right) du, \\
\int_{0}^{1}f(t\varphi (y)+m(1-t)\varphi (x))dt
&=&\int_{0}^{1}f((1-t)\varphi (y)+mt\varphi (x))dt=\frac{1}{\varphi \left(
y\right) -m\varphi \left( x\right) }\int_{m\varphi (x)}^{\varphi (y)}f\left(
u\right) du
\end{eqnarray*}
and
\begin{equation*}
\int_{0}^{1}h(t)dt=\int_{0}^{1}h(1-t)dt.
\end{equation*}
Substituting these equalities into the above inequality gives the required
result.
\end{proof}
\begin{remark}
\label{rem 2.2} In Theorem \ref{t2.2}, if we choose $\varphi
:[a,b]\rightarrow \lbrack a,b],$ $\varphi (x)=x$, Theorem \ref{t2.2} reduces
to Theorem \ref{t1.11}.
\end{remark}
\begin{theorem}
\label{t2.3} Let $h:J\subseteq
\mathbb{R}
\rightarrow
\mathbb{R}
$ be a non-negative function, $h\neq 0$ and $f,g:\left[ 0,b\right] \subseteq
\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be $\varphi _{h,m}-$convex functions with $m\in (0,1]$. If $f$ and $g$ are
Lebesgue integrable, the following inequality holds:
\begin{eqnarray*}
&&\frac{1}{m\varphi \left( y\right) -\varphi \left( x\right) }\int_{\varphi
(x)}^{m\varphi (y)}f\left( u\right) g(u)du \\
&\leq &M(x,y)\int_{0}^{1}h^{2}(t)dt+mN(x,y)\int_{0}^{1}h(t)h(1-t)dt
\end{eqnarray*}
where
\begin{equation*}
M(x,y)=f\left( \varphi (x)\right) g\left( \varphi (x)\right) +m^{2}f\left(
\varphi (y)\right) g\left( \varphi (y)\right)
\end{equation*}
and
\begin{equation*}
N(x,y)=f\left( \varphi (x)\right) g\left( \varphi (y)\right) +f\left(
\varphi (y)\right) g\left( \varphi (x)\right) .
\end{equation*}
\end{equation*}
\end{theorem}
\begin{proof}
Since $f$ and $g$ are $\varphi _{h,m}-$convex functions, we can write
\begin{equation*}
f(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)f(\varphi (x))+mh(1-t)f(\varphi
(y))
\end{equation*}
and
\begin{equation*}
g(t\varphi (x)+m(1-t)\varphi (y))\leq h(t)g(\varphi (x))+mh(1-t)g(\varphi
(y)).
\end{equation*}
If we multiply the above inequalities and integrate on $\left[ 0,1\right] $
with respect to $t$, we obtain
\begin{eqnarray*}
&&\int_{0}^{1}f(t\varphi (x)+m(1-t)\varphi (y))g(t\varphi (x)+m(1-t)\varphi
(y))dt \\
&\leq &f\left( \varphi (x)\right) g\left( \varphi (x)\right)
\int_{0}^{1}h^{2}(t)dt+m^{2}f\left( \varphi (y)\right) g\left( \varphi
(y)\right) \int_{0}^{1}h^{2}(1-t)dt \\
&&+m\left[ f\left( \varphi (x)\right) g\left( \varphi (y)\right) +f\left(
\varphi (y)\right) g\left( \varphi (x)\right) \right] \int_{0}^{1}h\left(
t\right) h(1-t)dt.
\end{eqnarray*}
By the change of variable $u=t\varphi (x)+m(1-t)\varphi (y)$, we obtain the
required inequality.
\end{proof}
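The following sketch (our own illustration, not part of the paper) numerically checks the product inequality of Theorem \ref{t2.3} in the classical case $\varphi (x)=x$, $h(t)=t$, $m=1$, with the arbitrary choices $f(u)=u^{2}$ and $g(u)=u$ on $[x,y]=[0,1]$:

```python
# Illustrative numeric check of Theorem 2.3 with phi = id, h(t) = t, m = 1:
#   (1/(y-x)) * int_x^y f(u)g(u) du <= M * int_0^1 h^2 + N * int_0^1 h(t)h(1-t).
def midpoint_integral(func, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of func over [a, b]."""
    step = (b - a) / n
    return step * sum(func(a + (i + 0.5) * step) for i in range(n))

f = lambda u: u * u
g = lambda u: u
x, y = 0.0, 1.0
lhs = midpoint_integral(lambda u: f(u) * g(u), x, y) / (y - x)      # = 1/4
M = f(x) * g(x) + f(y) * g(y)                                      # = 1
N = f(x) * g(y) + f(y) * g(x)                                      # = 0
rhs = M * midpoint_integral(lambda t: t * t, 0.0, 1.0) \
    + N * midpoint_integral(lambda t: t * (1 - t), 0.0, 1.0)       # = 1/3
assert lhs <= rhs
```

Here $1/4\leq 1/3$, as the theorem predicts; the test functions are ours, chosen only for illustration.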
\begin{remark}
\label{rem 2.3} In Theorem \ref{t2.3}, if we choose $m=1$ Theorem \ref{t2.3}
reduces to Theorem \ref{t1.14}.
\end{remark}
\section{Background}
We will show that a diagonally dominant matrix $A$ can be
well-approximated by a product $\UU^{T} \DD \UU$ where $\UU$ is upper-triangular and sparse
and $\DD$ is diagonal.
By solving linear equations in each of these matrices, we can quickly solve a
system of linear equations in $A$.
We now review the notion of approximation that we require
along with some of its standard properties.
For symmetric matrices $A$ and $B$, we write
$A \succcurlyeq B$ if $A - B$ is positive semidefinite.
The ordering given by $\succcurlyeq$ is called the ``Loewner partial order''.
\ifthenelse{\boolean{@full}}{\begin{fact}\label{fact:orderInverse}
For $A$ and $B$ positive definite,
$A \succcurlyeq B$ if and only if $B^{-1} \succcurlyeq A^{-1}$.
\end{fact}
\begin{fact}\label{fact:orderCAC}
If $A \succcurlyeq B$ and $C$ is any matrix of compatible dimension,
then $C A C^{T} \succcurlyeq C B C^{T}$.
\end{fact}
}{}We say that $\AA$ is an $\epsilon$-approximation of $\BB$, written
$\AA \approx_{\epsilon} \BB $,
if
\[
e^{\epsilon} \BB \succcurlyeq \AA \succcurlyeq e^{-\epsilon} \BB.
\]
Observe that this relation is symmetric.
\ifthenelse{\boolean{@full}}{
Simple arithmetic yields the following fact about compositions of approximations.
\begin{fact}\label{frac:orderComposition}
If $\AA \approx_{\epsilon} \BB$
and $\BB \approx_{\delta } \CC$, then
$\AA \approx_{\epsilon + \delta} \CC$.
\end{fact}
}{}
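Concretely, for positive definite $\AA$ and $\BB$, the relation $\AA \approx_{\epsilon} \BB$ is equivalent to all generalized eigenvalues of the pair $(\AA,\BB)$ lying in $[e^{-\epsilon},e^{\epsilon}]$. The following sketch (our own illustration; the helper and example matrices are not from the paper) checks this numerically:

```python
import numpy as np

def is_eps_approx(A, B, eps):
    """Check A ~_eps B, i.e. e^eps B >= A >= e^-eps B in the Loewner order,
    for symmetric positive definite A and B.  Equivalent to all generalized
    eigenvalues of (A, B) lying in [e^-eps, e^eps]."""
    lam = np.linalg.eigvals(np.linalg.solve(B, A))
    lam = np.real(lam)  # B^{-1}A is similar to a symmetric matrix, so real
    return bool(np.all(lam <= np.exp(eps) + 1e-12)
                and np.all(lam >= np.exp(-eps) - 1e-12))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
assert is_eps_approx(A, A, 0.0)            # every matrix 0-approximates itself
assert is_eps_approx(1.1 * A, A, 0.1)      # scaling by 1.1 = e^{0.095...}
assert not is_eps_approx(2.0 * A, A, 0.1)  # scaling by 2 > e^{0.1}
```

This also makes the symmetry of the relation visible: inverting the eigenvalues swaps the roles of $\AA$ and $\BB$.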
\ifthenelse{\boolean{@full}}{
We say that $\xxtil$ is an $\epsilon$-approximate solution to the
system $\AA \xx = \bb$ if
\[
\norm{\xxtil - \AA^{-1} \bb}_{\AA} \leq \epsilon \norm{\AA^{-1} \bb}_{\AA},
\]
where
\[
\norm{\xx}_{\AA} = (\xx^{T} \AA \xx)^{1/2}.
\]}
{
We say that $\xxtil$ is an $\epsilon$-approximate solution to the
system $\AA \xx = \bb$ if
$\norm{\xxtil - \AA^{-1} \bb}_{\AA} \leq \epsilon \norm{\AA^{-1} \bb}_{\AA}$,
where $\norm{\xx}_{\AA} = (\xx^{T} \AA \xx)^{1/2}$.
}
This is the notion of approximate solution typically used when
analyzing preconditioned linear system solvers, and it is the notion assumed in the
works we reference that use these solvers as subroutines.
\ifthenelse{\boolean{@full}}{
\begin{fact}\label{fact:approxSolve}
If $\epsilon<1/2$, $\AA \approx_{\epsilon} \BB$ and
$\BB \xxtil = \bb$, then
$\xxtil$ is a $2 \sqrt{\epsilon}$ approximate solution to
$\AA \xx = \bb$.
\end{fact}
}{}So, if one can find a matrix $\BB$ that is a good approximation of
$\AA$ and such that one can quickly solve linear equations in $\BB$, then
one can quickly compute approximate solutions to systems of linear
equations in $\AA$.
Using methods such as \textit{iterative refinement}, one can use
multiple solves in $\BB$ and multiplies by $\AA$ to obtain
arbitrarily good approximations.
For example, if $\BB$ is a constant approximation of $\AA$, then
for every $\epsilon < 1$, one can obtain an $\epsilon$ approximate
solution of a linear system in $\AA$ by performing
$O (\log (\epsilon^{-1}))$ solves in $\BB$ and multiplies by $\AA$
(see, for example, \cite[Lemma 4.4]{PengS14}).
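The refinement loop can be sketched as follows (our own illustration: the matrices, the perturbation used to build the preconditioner $B$, and the iteration count are arbitrary, not from the paper). Each step solves one system in $B$ and multiplies once by $A$, and the error contracts by a constant factor per step:

```python
import numpy as np

# Iterative refinement: given B a constant-factor approximation of A,
# repeat x <- x + B^{-1}(b - A x).  The error contracts geometrically,
# so O(log(1/eps)) solves in B give an eps-approximate solution in A.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # a well-conditioned SPD test matrix
B = A + 0.1 * np.diag(np.diag(A))      # B within a constant factor of A
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

x = np.zeros(n)
for _ in range(60):
    x = x + np.linalg.solve(B, b - A @ x)   # one solve in B, one multiply by A
assert np.linalg.norm(x - x_true) <= 1e-8 * np.linalg.norm(x_true)
```

In the algorithms of this paper the dense solve in $B$ is of course replaced by the fast approximate solver being constructed.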
It is known that one can reduce the problem of solving systems of equations in SDD matrices
to either the special case of Laplacian matrices or SDDM matrices---the family of SDD matrices that
are nonsingular and have non-positive off diagonal entries (see, e.g. \cite{SpielmanTengLinsolve,CohenKMPPRX}).
We will usually consider SDDM matrices.
Every SDDM matrix $\AA$ can be uniquely written as a sum $\LL + \XX $ where $\LL$ is a Laplacian matrix
and $\XX$ is a nonnegative diagonal matrix.
The main properties of SDDM matrices that we exploit are that they are closed under Schur complements
and that they can be \textit{sparsified}.
The strongest known sparsifications come from the main result of \cite{BSS}, which implies the following.
\begin{theorem}\label{thm:BSS}
For every $n$-dimensional SDDM matrix $\AA $
and every $\epsilon \leq 1$, there is a SDDM
matrix $\BB $ having at most $10 n / \epsilon^{2}$ nonzero entries that is
an $\epsilon$-approximation of $\AA$. In particular, the number of non-zero
entries in $\BB$ above the diagonal is at most $4.1 n / \epsilon^{2}$.
\end{theorem}
\ifthenelse{\boolean{@full}}{
While the matrix $\BB$ guaranteed to exist by this theorem may be
found in polynomial time, this is not fast enough for the
algorithms we desire.
So, we only use Theorem~\ref{thm:BSS} to prove existence results.
We
later show how to replace it with faster algorithms, at some expense
in the quality of the sparsifiers we produce.
}{}
\section{Block Cholesky Factorization}\label{sec:cholesky}
Our algorithm uses block-Cholesky factorization to eliminate
a block of vertices all at once.
We now review how block-Cholesky factorization works.
To begin, we remind the reader that Cholesky factorization is the natural
way of performing Gaussian elimination
on a symmetric matrix:
by performing eliminations on rows and columns simultaneously, one preserves the
symmetry of the matrix.
The result of Cholesky factorization is a representation of a matrix $\MM$
in the form $\UU^{T} \UU$,
where $\UU$ is an upper-triangular matrix.
We remark that this is usually written as $\LL \LL^{T}$ where $\LL$ is lower-triangular.
We have chosen to write it in terms of upper-triangular matrices so as to avoid confusion with the
use of the letter $\LL$ for Laplacian matrices.
To produce matrices $\UU$ with 1s on their diagonals, and
to avoid the computation of square roots, one often instead forms a factorization
of the form $\UU^{T} \DD \UU$, where $\DD$ is a diagonal matrix.
Block-Cholesky factorization forms a factorization of this form, but with $\DD$
being a block-diagonal matrix.
To begin, we must choose a set of rows to be eliminated.
We will eliminate the same set of columns.
For consistency with the notation used in the description of multigrid algorithms,
we will let $F$ (for finer) be the set of rows to be eliminated.
We then let $C$ (for coarse) be the remaining set of rows.
In contrast with multigrid methods, we will have $\sizeof{F} < \sizeof{C}$.
By re-arranging rows and columns, we can write $\MM$
in block form:
\[
\MM
=
\left[
\begin{array}{cc}
\MM_{FF} &\MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{array}
\right].
\]
Elimination of the rows and columns in $F$ corresponds to writing
\begin{equation} \label{eqn:blockformula}
\MM
=
\begin{bmatrix}
\II & 0\\
\MM_{CF} \MM_{FF}^{-1} & \II
\end{bmatrix}
\begin{bmatrix}
\MM_{FF} & 0\\
0 & \MM_{CC} - \MM_{CF} \MM_{FF}^{-1} \MM_{FC}
\end{bmatrix}
\begin{bmatrix}
\II & \MM_{FF}^{-1} \MM_{FC } \\
0 & \II
\end{bmatrix}.
\end{equation}
Note that the left and right matrices are lower and upper triangular.
The matrix in the lower-right block of the middle matrix is the Schur
complement of $F$ in $\MM$.
We will refer to it often by the notation
\[
\schur{\MM}{F} \stackrel{\mathrm{def}}{=} \MM_{CC} - \MM_{CF} \MM_{FF}^{-1} \MM_{FC}.
\]
We remark that one can solve a linear system in $\schur{\MM}{F}$
by solving a system in $\MM$: one just needs to put zeros in the coordinates
corresponding to $F$ in the right-hand-side vector.
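This remark is easy to verify numerically. The sketch below (our own illustration, on a small path-Laplacian-plus-diagonal SDDM matrix) pads the right-hand side with zeros on $F$, solves in $\MM$, and compares the $C$-coordinates of the solution against a direct solve in $\schur{\MM}{F}$:

```python
import numpy as np

# Solving Schur(M, F) y = c via a padded solve in M: set b_F = 0, b_C = c,
# solve M x = b, and read off x_C.
M = np.array([[ 3.0, -1.0,  0.0,  0.0],
              [-1.0,  3.0, -1.0,  0.0],
              [ 0.0, -1.0,  3.0, -1.0],
              [ 0.0,  0.0, -1.0,  3.0]])
F = [0, 1]            # rows/columns to eliminate
C = [2, 3]            # remaining rows/columns
M_FF = M[np.ix_(F, F)]
M_FC = M[np.ix_(F, C)]
M_CF = M[np.ix_(C, F)]
M_CC = M[np.ix_(C, C)]
schur = M_CC - M_CF @ np.linalg.solve(M_FF, M_FC)   # Schur(M, F)

c = np.array([1.0, -2.0])
b = np.zeros(4)
b[C] = c                                            # zeros on F
x = np.linalg.solve(M, b)
assert np.allclose(x[C], np.linalg.solve(schur, c))
```

The matrix and index sets here are arbitrary; any SDDM matrix and any split $(F,C)$ would do.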
Recall that
\begin{equation}\label{eqn:blockLinverse}
\begin{bmatrix}
\II & 0\\
\MM_{CF} \MM_{FF}^{-1} & \II
\end{bmatrix}^{-1}
=
\begin{bmatrix}
\II & 0\\
-\MM_{CF} \MM_{FF}^{-1} & \II
\end{bmatrix}.
\end{equation}
So, if we can quickly multiply by this last matrix,
and if we can quickly solve linear systems in $\MM_{FF}$
and in the Schur complement, then we can quickly solve systems in $\MM$.
Algebraically, we exploit the following identity:
\begin{fact}\label{fact:blockInverse}
\begin{equation}\label{eqn:blockInverse}
\MM^{-1}
=
\left[
\begin{array}{cc}
\II & -\MM_{FF}^{-1} \MM_{FC}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\MM_{FF}^{-1} & 0 \\
0 & \schur{\MM}{F}^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
-\MM_{CF} \MM_{FF}^{-1} & \II
\end{array}
\right].
\end{equation}
\end{fact}
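The identity of Fact~\ref{fact:blockInverse} can be checked directly; the following sketch (our own, reusing the same small SDDM example) reconstructs $\MM^{-1}$ from the three factors:

```python
import numpy as np

# Numeric check of the three-factor expression for M^{-1} with F = {0,1},
# C = {2,3}, on a small SDDM matrix.
M = np.array([[ 3.0, -1.0,  0.0,  0.0],
              [-1.0,  3.0, -1.0,  0.0],
              [ 0.0, -1.0,  3.0, -1.0],
              [ 0.0,  0.0, -1.0,  3.0]])
F, C = [0, 1], [2, 3]
M_FF, M_FC = M[np.ix_(F, F)], M[np.ix_(F, C)]
M_CF, M_CC = M[np.ix_(C, F)], M[np.ix_(C, C)]
M_FF_inv = np.linalg.inv(M_FF)
schur_inv = np.linalg.inv(M_CC - M_CF @ M_FF_inv @ M_FC)

I2, Z = np.eye(2), np.zeros((2, 2))
upper  = np.block([[I2, -M_FF_inv @ M_FC], [Z, I2]])
middle = np.block([[M_FF_inv, Z], [Z, schur_inv]])
lower  = np.block([[I2, Z], [-M_CF @ M_FF_inv, I2]])
assert np.allclose(upper @ middle @ lower, np.linalg.inv(M))
```

Applying this factorization to a vector requires one solve in $\MM_{FF}$, one solve in the Schur complement, and two sparse multiplications, which is exactly the structure the algorithms below exploit.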
Our algorithms depend upon the following
important property of Schur complements of SDDM matrices.
\begin{fact}
If $\MM$ is a SDDM matrix and $F$ is a subset of its columns,
the Schur complement
$\schur{\MM}{F}$ is also a SDDM matrix.
\end{fact}
We now mention two other facts that we will use about the
$\preccurlyeq $ order and Schur complements.
\begin{fact}\label{fact:blockSubstitute}
If $\MM_{FF} \preccurlyeq \Mtil_{FF}$, then
\[
\begin{pmatrix}
\MM_{FF} &\MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{pmatrix}
\preccurlyeq
\begin{pmatrix}
\Mtil_{FF} &\MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{pmatrix}.
\]
\end{fact}
\begin{fact}[Lemma B.1. from~\cite{MillerP13}]
\label{fact:schurLoewner}
If $\MM$ and $\Mtil$ are positive semidefinite
matrices satisfying $\MM \preceq \Mtil$,
then
\[
\schur{\MM}{F} \preceq \schur{\Mtil}{F}.
\]
\end{fact}
The first idea that motivates our algorithms is that we can sparsify $\MM$
and $\schur{\MM}{F}$.
If $\MM$ is sparse, then we can quickly multiply vectors by $\MM_{FC}$.
However, to be able to quickly apply the factorization of $\MM^{-1}$ given
in Fact~\ref{fact:blockInverse}, we also need to be able to quickly
apply $\MM_{FF}^{-1}$.
If we can do that, then we can
quickly solve systems in $\MM$ by recursively solving
systems in $\schur{\MM}{F}$.
The easiest way to find an $F$ for which we could quickly apply
$\MM_{FF}^{-1}$ would be to choose $F$ to be a large independent set,
in which case $\MM_{FF}$ would be diagonal.
Such a set $F$ must exist as we can assume $\MM$ is sparse.
However, the independent set we are guaranteed to find by the sparsity of $\MM$ is not big enough:
if we repeatedly find large independent sets and then sparsify the resulting Schur complements,
the error that accumulates could become too big.
The second idea behind our algorithms is that we can find a large set $F$ for which
$\MM_{FF}$ is well-approximated by a diagonal matrix.
This will allow us to apply $\MM_{FF}^{-1}$ quickly.
In the next section, we show that a very good choice of $F$ always exists, and that
the use of such sets $F$ yields nearly-optimal algorithms for solving linear systems in $\MM$.
In order to make the entire algorithm efficient, we are still left with the problem of quickly
computing a sparsifier of the Schur complement.
In Section~\ref{sec:vertexSparsify}, we show how to quickly compute and use \textit{Spectral Vertex Sparsifiers},
which are sparsifiers of the Schur complement.
In particular, we do this by expressing the Schur complement as the sum of the Schur complements of two simpler matrices:
one with a diagonal $FF$ block, and the other with a better conditioned $FF$ block.
We handle the matrix with the diagonal block directly, and the matrix with the better conditioned block recursively.
\section{Introduction}
There have been incredible advances in the design of algorithms for
solving systems of linear equations in Laplacian and symmetric, diagonally dominant (SDD) matrices.
Cohen \textit{et al.} \cite{CohenKMPPRX} have recently designed algorithms that find $\epsilon$-approximate solutions to such
systems of equations in time $O (m \log^{1/2} n \log \epsilon^{-1})$,
where $n$ is the dimension of the matrix and $m$ is its number of nonzero entries.
Peng and Spielman \cite{PengS14}
recently discovered the first parallel algorithms that require only poly-logarithmic time and nearly-linear work.
In this paper, we prove that for every such matrix
there is an operator that approximately solves equations in this matrix
and that can be evaluated in linear work and depth $O(\log n (\log \log n)^{2})$.
These operators are analogous to the LU decompositions produced by Gaussian elimination:
they take longer to compute than to apply.
We present two fast parallel algorithms for finding solvers that are almost as fast.
One runs in nearly linear time and polylogarithmic depth (Theorem \ref{thm:black_box}).
The algorithm presented in Theorem \ref{thm:non_combin_result}
has preprocessing depth $n^{o (1)}$, but is more efficient in terms of work
and produces a solver whose work and depth are
within a logarithmic factor of the best one we can show exists.
A matrix $\AA$ is diagonally dominant if each of its diagonal entries is
at least the sum of the absolute values of the off-diagonal entries in
its row.
The most famous symmetric, diagonally dominant matrices
are the Laplacian matrices of graphs: those with non-positive off-diagonal
entries such that every diagonal entry is exactly equal to the sum of the absolute values
of the off-diagonal entries in its row.
Laplacian and SDD matrices arise in many applications, including the solution
of optimization problems such as maximum flow \cite{ChristianoEtAl,KelnerMillerPeng,LeeRS13,Madry13},
minimum cost flow \cite{daitch2008faster,lsMaxflow},
semi-supervised learning \cite{Zhu03},
and the solution of elliptic PDEs \cite{BomanHV04}.
Building on the work of Vaidya \cite{Vaidya},
Spielman and Teng \cite{SpielmanTengLinsolve} discovered that through
the use of two constructions in graph theory---sparsifiers and low stretch spanning trees---one
could design algorithms for solving such linear equations that run
in nearly-linear time.
Kelner \textit{et al.} \cite{KOSZ} construct an elementary algorithm
for solving SDD systems in nearly linear time that only makes use of
low stretch spanning trees.
In contrast, Peng and Spielman \cite{PengS14} design an algorithm that
only uses sparsifiers.
The present paper builds on their approach.
The parallel algorithm of Peng and Spielman \cite{PengS14}
approximates the inverse of a matrix by the sum and product
of a small number of sparse matrices.
The main bottleneck in their algorithm is that all of the matrices it produces
have the same dimension, and that the number of these matrices
depends on the condition number of the system to be solved.
This leads to each matrix having an average number of nonzero entries
per column that is proportional to the square of the logarithm of
the condition number, leading to work $O ((m + n \log^{3}\kappa) \log \epsilon^{-1})$.
Our result improves on the construction of Peng and
Spielman~\cite{PengS14} in a number of ways.
First, the depth and work of our new algorithms are independent
of the condition number of the matrix.
Second, the matrices in the product that approximates the inverse are
of geometrically decreasing sizes.
This leads to much faster algorithms.
That said,
our efficient algorithms for constructing solvers and spectral vertex sparsifiers
critically rely on their work.
We introduce sparsified Cholesky factorization
in Section \ref{sec:vertexReduce}, where we prove
that the inverse of every SDD matrix $\AA $
can be approximated by an operator that can be evaluated
in linear work and depth $O (\log^{2} n \log \log n)$.
By using this operator as a preconditioner, or by applying iterative refinement,
this leads to a solver that produces $\epsilon$-approximate solutions to systems
in $\AA$ in work $O (m \log \epsilon^{-1})$ and depth $O (\log^{2} n \log \log n \log \epsilon^{-1})$,
where $m$ is the number of nonzeros in $\AA$.
We begin by eliminating a block consisting of a constant fraction of the vertices.
The elimination of these vertices adds edges to the subgraph induced on the remaining vertices.
We use the work of \cite{BSS} to sparsify the modified subgraph
(Figure \ref{fig:applyChain}, Lemma \ref{lem:apply_chain} and Theorem \ref{thm:result_BSS}).
The choice of which vertices we eliminate is important.
We use a subset of vertices whose degrees in their induced subgraph are substantially
smaller than in the original graph (see Definition \ref{def:strongDD} and Lemma \ref{lem:subsetSimple}).
In Section \ref{sec:existence} we show how to convert this solver into
a sparse approximate inverse.
That is, we show that $\AA$ can be approximated by a product of the form $\UU^T \DD \UU$ where $\UU$
is an upper-triangular matrix with $O (n)$ nonzero entries and $\DD$ is
diagonal.
While we can construct this $\UU$ and $\DD$ in polynomial time,
we do not yet have a nearly linear time or low depth efficient parallel algorithm that does so.
\ifthenelse{\boolean{@full}}{We obtain our best existence result in Section \ref{sec:depth}
by reducing the depth of the parallel solvers by a logarithmic
factor.}{In the full version, we obtain our best existence result by reducing the depth of the parallel solvers by a logarithmic
factor. }The reduction comes from observing that the construction of Section \ref{sec:existence}
would have the desired depth if every vertex in $\AA$ and in the smaller
graphs produced had bounded degree.
While we can use sparsification to approximate an arbitrary graph by a sparse one,
the sparse one need not have bounded degree.
We overcome this problem by proving that the Laplacian of every graph
can be approximated by a Schur complement
of the Laplacian of a larger graph of bounded degree \ifthenelse{\boolean{@full}}{(Theorem \ref{thm:degreeReduction})}{}.
We then
turn to the problem of computing our solvers efficiently in parallel.
The first obstacle is that we must quickly compute an approximation
of a Schur complement of a set of vertices without actually constructing
the Schur complement, as it could be too large.
This is the problem we call \textit{Spectral Vertex Sparsification}.
It is analogous to the problem of vertex sparsification for cut and combinatorial flow problems
\cite{LeightonM10,Moitra13}:
given a subset of the vertices we must compute a graph on those vertices
that allows us to compute
approximations of electrical flows in the original graph between vertices in that subset.
In contrast with cut and combinatorial flow problems,
there is a graph that allows for this computation exactly on the subset of vertices,
and it is the Schur complement in the graph Laplacian.
In Section~\ref{sec:vertexSparsify}, we build on the techniques of \cite{PengS14} to give an efficient algorithm for
spectrally approximating Schur complements.
The other obstacle is that we need to compute sparsifications of graphs efficiently
in parallel.
We examine two ways of doing this in Section~\ref{sec:algo}.
The first, examined in Section \ref{ssec:blackBox}, is to use a black-box parallel algorithm for graph sparsification,
such as that of Koutis \cite{Koutis14}.
This gives us our algorithm of best total depth.
The second, examined in Section \ref{ssec:recursive}, employs a recursive
scheme in which we solve smaller linear systems to compute
probabilities with which we sample the edges, as in \cite{SpielmanS08:journal}.
Following \cite{cohen2014uniform}, these smaller linear systems
are obtained by crudely sub-sampling the original graph.
The resulting algorithm runs in depth $n^{o (1)}$, but produces
a faster solver.
We expect that further advances in graph sparsification
such as~\cite{Allen-ZhuLL15} will result in even better algorithms.
\section{Algorithmic Constructions}
\label{sec:algo}
In this section, we give two algorithms to compute vertex sparsifier
chains: the first uses existing spectral sparsifiers for
graphs, and the second does not. Although combining the two approaches
gives a better theoretical result, we do not present it, because we believe
better spectral sparsifier algorithms for graphs will appear soon,
and hybrid approaches may not be useful then.
\subsection{Black Box Construction}\label{ssec:blackBox}
The first construction relies on existing parallel spectral sparsifier
algorithms. For concreteness, we use the parallel spectral graph sparsification
algorithm given by Koutis~\cite{Koutis14}.
\ifthenelse{\boolean{@full}}{
\begin{theorem} \label{thm:sparsificaiton_result} Given any SDD
matrix $\MM$ with $n$ variables and $m$ non-zeros, there is an
algorithm $\textsc{BlackBoxSparsify}(\MM,\epsilon)$ that outputs a
SDD matrix $\BB$ with $O(n\log^{3}n/\epsilon^2)$ non-zeros such that
$\MM\approx_{\epsilon}\BB$ in $O(\log^{3}n\log\alpha/\epsilon^2)$ depth
and $O((m+n\log^{3}n/\epsilon^2)\log^{2}n/\epsilon^2)$ work, where
$\alpha =\frac{m}{n \log^{3}n/\epsilon^2}.$
\end{theorem}
}{}
\begin{figure}[ht]
\begin{algbox}
$(\MM^{(1)},\MM^{(2)},\cdots;F_1,F_2,\cdots)=\textsc{BlackBoxConstruct}(\MM^{(0)})$
\begin{enumerate}
\item Let $k=1$, $\MM^{(1)}\leftarrow\MM^{(0)}$ and $F_0$ be the set of all variables.
\item While $\MM^{(k)}$ has more than $100$ variables
\begin{enumerate}
\item $\MM^{(k)}\leftarrow\textsc{BlackBoxSparsify}(\MM^{(k)},1/(k \log^{2}({k+4})))$.
\item Find a subset $F_k$ of size $\Omega(n^{(k)})$ such that $\MM_{F_k F_k}^{(k)}$
is $4$-strongly diagonally dominant.
\item $\MM^{(k+1)}\leftarrow\textsc{ApproxSchur}(\MM^{(k)},(F_k,F_{k-1} \setminus F_{k}),4,1/(k \log^{2}({k+4})))$.
\item $k\leftarrow k+1$.
\end{enumerate}
\end{enumerate}
\end{algbox}
\caption{Pseudocode for Constructing Vertex Sparsifier
Chains Using Existing Spectral Sparsifiers}
\ifthenelse{\boolean{@full}}{}{\ }
\label{fig:blackBoxConstruct}
\end{figure}
In the $k^{th}$ step of the algorithm, we sparsify the graph and
compute an approximate Schur complement to accuracy $1/(k \log^{2}({k+4}))$;
this ensures that the cumulative error is upper bounded by $\sum_{k=1}^{\infty}1/(k \log^{2}({k+4}))$,
which is a constant.
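A quick numerical check (our own, not part of the paper) confirms that an error series of this form, with the $1/(k\log^{2}(k+4))$ accuracy used in the pseudocode, has partial sums bounded by a small constant:

```python
import math

# Partial sums of sum_{k>=1} 1/(k * log^2(k+4)) stay below a small constant,
# so the accumulated sparsification error over all iterations is O(1).
partial = sum(1.0 / (k * math.log(k + 4) ** 2) for k in range(1, 100_001))
assert partial < 1.3
```

The cutoff $10^{5}$ and the bound $1.3$ are arbitrary; the integral test shows the full series converges.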
\begin{theorem} \label{thm:black_box}Given
any SDD matrix $\MM^{(0)}$ with $n$ variables and $m$ non-zeros,
the algorithm $\textsc{BlackBoxConstruct}(\MM^{(0)})$ returns a vertex sparsifier
chain such that the linear operator $\WW$ corresponding to it satisfies
\ifthenelse{\boolean{@full}}{\[
\WW^{\dagger}\approx_{O(1)}\MM^{(0)}.
\]}{$\WW^{\dagger}\approx_{O(1)}\MM^{(0)}$.}
Also, we can evaluate $\WW b$ in $O(\log^{2}(n)\log\log n)$ depth
and $O(n\log^{3}n\log\log n)$ work for any vector $b$.
Furthermore, the algorithm $\textsc{BlackBoxConstruct}(\MM^{(0)})$
runs in $O(\log^{6}n\log^{4}\log n)$ depth and $O(m\log^{2}n + n \log^{5} n)$
work.
\end{theorem}
\begin{proof}
Let $n^{(k)}$ and $m^{(k)}$
be the number of vertices and non-zero entries in the matrix $\MM^{(k)}$.
Let $s(n)=n\log^{3}n$, which is the output size of $\textsc{BlackBoxSparsify}$,
and $\epsilon(k)=1/(k \log^{2}(k+4))$, which is the accuracy of the $k$-th sparsification and approximate Schur complement.
We first prove the correctness of the algorithm.
The termination condition ensures that $\MM^{(last)}$ has size $O(1)$;
steps $(2a)$ and $(2c)$ ensure $\MM^{(k+1)}\approx_{2 \epsilon(k)}SC(\MM^{(k)},F_k)$, and
step $(2b)$ ensures that $\MM_{F_k F_k}^{(k)}$ is $4$-strongly diagonally dominant.
Therefore, the chain $(\MM^{(1)},\cdots;F_1,\cdots)$ is a vertex sparsifier chain.
Since the cumulative error $\sum \epsilon(k)=O(1)$, Lemma \ref{lem:apply_chain}
shows that the resultant operator $\WW$ satisfies
\[
\WW^{\dagger}\approx_{O(1)}\MM^{(0)}.
\]
Now, we upper bound the cost of evaluating $\WW b$.
Lemma \ref{lem:subsetSimple} shows that $\left|F_k\right|=\Omega(n^{(k)})$
and hence a constant fraction of the variables is eliminated in each iteration.
Therefore, $n^{(k)}\leq c^{k-1}n$ for some constant $c<1$. Using this, Lemma
\ref{lem:apply_chain} shows the depth for evaluating $\WW b$ is
\[
O(\sum_{k=1}^{O(\log n)}\log(k)\log(n))=O(\log^{2}(n)\log\log n)
\]
and the work for evaluating $\WW b$ is
\[
O(\sum_{k=1}^{O(\log n)}\log(k)s(c^{k-1}n)/\epsilon(k)^2).
\]
Using $s(n)=n\log^{3}n$ and $\epsilon(k)=1/(k \log^{2}(k+4))$, the work for evaluating $\WW b$ is simply $O(s(n))$.
For the work and depth of the construction, Lemma \ref{lem:subsetSimple}
shows that it takes $O(m^{(k)})$ work and $O(\log n^{(k)})$ depth
to find $F_k$ and Theorem \ref{thm:aprox_schur} shows that $\textsc{ApproxSchur}$
takes $O(m^{(k)}k^{O(\log k)})$ work and $O(\log n^{(k)} \log k)$ depth.
Using $n^{(k)}\leq c^{k-1}n$ and $m^{(k)}=s(n^{(k)})/\epsilon(k)^2$, the total
work for this algorithm excluding $\textsc{BlackBoxSparsify}$ is
\[
\sum_{k=1}^{O(\log n)}O(s(c^{k-1}n)k^{O(\log k)}/\epsilon(k)^2)=O(s(n)).
\]
Hence, the total work for $\textsc{BlackBoxConstruct}$ is
\[
O(s(n))+O(m\log^{2}n)+\sum_{k=2}^{O(\log n)}O(s(n^{(k)})k^{O(\log k)}\log^{2}n^{(k)}/\epsilon(k)^2).
\]
Since $s(n^{(k)})$ is geometrically decreasing, the total work is $O(m\log^{2}n + n \log^{5}n)$.
We can bound the total depth similarly.
\end{proof}
\begin{remark} \ifthenelse{\boolean{@full}}{Given a sparsifier algorithm that takes
$d(m,n)$ depth and $w(m,n)/\epsilon^2$ work to find a sparsifier of size $s(n)/\epsilon^2$,
$\textsc{BlackBoxConstruct}$ roughly takes $O(\log^{2}n \log \log n)+O(d(m,n) \log n)$
depth and $O(w(m,n))$ work to construct a vertex sparsifier chain,
and such a chain has total depth $O(\log^{2}n\log\log n)$ and total
work $O(s(n))$.
Therefore}{In general}, the work for preprocessing is roughly linear in the
work needed to sparsify, and the work for solving is linear in the
size of the sparsifier. Hence, solving a Laplacian system is nearly as easy
as computing a sparsifier.\end{remark}
\subsection{Recursive Construction}\label{ssec:recursive}
We now give a recursive construction based on the idea that solvers
can be used to compute sampling probabilities~\cite{SpielmanS08:journal}.
We will describe the construction in phases, each containing $r$ iterations.
Each iteration decreases the number of vertices while maintaining the
density of the graph.
\ifthenelse{\boolean{@full}}{
We maintain
the density by the general sparsification technique introduced by~\cite{cohen2014uniform} as follows:
\begin{lemma}[\cite{cohen2014uniform}]
\label{lem:sparsify} Given $\mathcal{M}$ be a class of positive definite
$n\times n$ matrices. Let $\mathcal{M}(m)$ be the set of all $\BB^{T}\BB\in\mathcal{M}$
such that $\BB$ has $m$ rows. Assume that
\begin{enumerate}
\item For any $\BB^{T}\BB\in\mathcal{M}$ and non negative diagonal matrix
$\DD$, we have $\BB^{T}\DD\BB\in\mathcal{M}$.
\item For any matrix $\BB^{T}\BB\in\mathcal{M}$, we can check if every row
$b$ is in $im(\BB^{T})$ or not in depth $d_{chk}(m)$ and work $w_{chk}(m)$.
\item For any $\BB^{T}\BB\in\mathcal{M}(m)$, we can find an implicit representation
of a matrix $\WW$ such that $\WW\approx_{1}(\BB^{T}\BB)^{\dagger}$
in depth $d_{con}(m,n)$ and work $w_{con}(m,n)$ and for any vector
$b$, we can evaluate $\WW b$ in depth $d_{eval}(m,n)$ and work $w_{eval}(m,n)$.
\end{enumerate}
For any $k\geq1$, $1\geq\epsilon>0$ and matrix $\BB^{T}\BB\in\mathcal{M}(m)$,
the algorithm $\textsc{Sparsify}(\BB^{T}\BB,k,\epsilon)$ outputs
an explicit matrix $\CC^{T}\CC\in\mathcal{M}(O(kn\log n/\epsilon^{2}))$
with $\CC^{T}\CC\approx_{\epsilon}\BB^{T}\BB$.
Also, this algorithm
runs in $d_{con}\left(\frac{m}{k},n\right)+O(d_{eval}(m,n)+d_{chk}(m)+\log n)$
depth and $w_{con}\left(\frac{m}{k},n\right)+O(w_{eval}(m,n) \log n +w_{chk}(m)+m\log n)$
work.
\end{lemma}}{
We maintain the density via the general sparsification technique introduced by~\cite{cohen2014uniform};
this technique allows us to sparsify a graph with $m$ edges by solving Laplacians of $O(m/k)$ sampled graphs of size $O(k n \log n)$.
}
Each call to spectral vertex sparsification increases the edge density,
but the $\textsc{Sparsify}$ routine allows us to reduce the density
at a much faster rate. A higher reduction parameter $r$ in the algorithm $\textsc{RecursiveConstruct}_{r}$ allows us
to reduce the cost of these recursive sparsification steps.
\begin{figure}[ht]
\begin{algbox}
$(\MM^{(1)},\MM^{(2)},\cdots;F_1,F_2,\cdots)=\textsc{RecursiveConstruct}_{r}(\MM^{(0)})$
\begin{enumerate}
\item $\MM^{(1)}\leftarrow\textsc{Sparsify}(\MM^{(0)},2^{c_{2}r},1/4)$,
$k\leftarrow1$ and $F_0$ be the set of all variables.
\item While $\MM^{(k)}$ has more than $\Theta(1)^{r}$ vertices,
\begin{enumerate}
\item Find a subset $F_k$ of size $\Omega(n^{(k)})$ such that $\MM_{F_k F_k}$
is $4$-strongly diagonally dominant.
\item $\MM^{(k+1)}\leftarrow\textsc{ApproxSchur}(\MM^{(k)},(F_k,F_{k-1} \setminus F_{k}),4,(k+8)^{-2})$.
\item If $k+1\text{ mod }r=0$, Then
\begin{enumerate}
\item $\MM^{(k+1)}\leftarrow\textsc{Sparsify}(\MM^{(k+1)},2^{2c_{2}r\log^{2}(k+1)},(k+9)^{-2}).$
\end{enumerate}
\item $k\leftarrow k+1$.
\end{enumerate}
\end{enumerate}
\end{algbox}
\caption{Pseudocode for Recursively Constructing Vertex
Sparsifier Chains}
\label{fig:recursiveConstruct}
\end{figure}
\ifthenelse{\boolean{@full}}{
The following lemma proves that the algorithm $\textsc{RecursiveConstruct}_{r}$
produces a vertex sparsifier chain and the linear operator corresponding
to the vertex sparsifier can be evaluated efficiently.
\begin{lemma}
\label{lem:eliminiate_vertex}Let $r$ be a large enough constant.
There are universal constants $0<c_{1}<1$ and $c_{2}>0$ such that
for any SDD matrix $\MM^{(0)}$ with $n$ variables, the algorithm
$\textsc{RecursiveConstruct}_{r}(\MM^{(0)})$ returns a vertex sparsifier
chain $(\MM^{(1)},\MM^{(2)},\cdots;F_1,F_2,\cdots)$ satisfying
the following conditions
\begin{enumerate}
\item For all $k\geq1$, $n^{(k)}\leq c_{1}^{k-1}n$, where $n^{(k)}$ is
the number of variables in $\MM^{(k)}$.
\item Except in step 1, every intermediate matrix $\MM$ appearing
in the $k^{th}$ iteration has density
\[
\frac{m'}{n'\log n'}\leq2^{3c_{2}r\log^{2}k}
\]
for $k>1$, where $m'$ and $n'$ are the number of non-zeros and variables
of $\MM$.
\item For all $k\geq1$, $\MM_{F_k F_k}^{(k)}$ is $4$-strongly
diagonally dominant,
\item For all $k\geq1,$ $\MM^{(k+1)}\approx_{2(k+8)^{-2}}\schur{\MM^{(k)}}{F_k}$.
\end{enumerate}
Furthermore, the linear operator $\WW$ corresponding to the vertex
sparsifier chain satisfies
\[
\WW\approx_{1}\left(\MM^{(0)}\right)^{\dag}.
\]
Also, we can evaluate $\WW b$ in $O(\log^{2}n\log\log n)$ depth
and $2^{O(r\log^{2}r)}n\log n$ work for any vector $b$.\end{lemma}
\begin{proof}
For the assertion (1), we note that step (2a) ensures $\left|F_k\right|=\Omega(n^{(k)})$
and hence a constant fraction of the variables is eliminated in each iteration.
This proves $n^{(k)}\leq c_{1}^{k-1}n$ for some constant $c_{1}<1$.
For the assertion (2), Theorem \ref{thm:aprox_schur} shows that after the approximate Schur complement
\begin{eqnarray*}
m^{(k+1)} & = & O(m^{(k)}(k^{2}\log(k+8))^{O(\log(k+8))})\\
& \leq & 2^{O\left(\log^{2}(k+1)\right)}m^{(k)}.
\end{eqnarray*}
Hence, in each iteration the density increases by a factor of at most
$2^{c_{2}\log^{2}(k+1)}$ for some constant $c_{2}$. After the $\textsc{Sparsify}$
step in (2ci), we have
\[
\frac{m^{(sr)}}{n^{(sr)}\log n^{(sr)}}\leq2^{2c_{2}r\log^{2}(sr)}.
\]
Then, after $r$ iterations of $\textsc{ApproxSchur}$ and before
the sparsification of $\MM^{((s+1)r)}$, we have
\begin{eqnarray*}
\frac{m^{((s+1)r)}}{n^{((s+1)r)}\log n^{((s+1)r)}} & \leq & 2^{2c_{2}r\log^{2}(sr)}2^{c_{2}\log^{2}(sr+1)}\cdots2^{c_{2}\log^{2}((s+1)r)}\\
& \leq & 2^{3c_{2}r\log^{2}((s+1)r)}.
\end{eqnarray*}
This proves the assertion (2).
For the assertion (3), it follows from the construction of $F_k$ in step
(2a).
For the assertion (4), we note that in step (2b), we construct the approximate Schur complement
$\MM^{(k+1)}$ such that $\MM^{(k+1)}\approx_{(k+8)^{-2}}\schur{\MM^{(k)}}{F_k}$.
Therefore, we only need to check $\MM^{(sr)}$ for all $s$ because
$\MM^{(sr)}$ is modified at step (2ci) after the sparsification.
Note that Lemma \ref{lem:sparsify} guarantees that $\MM^{(k)}$ changes
only by a $(k+8)^{-2}$ factor. Hence, in total, we have $\MM^{(k+1)}\approx_{2(k+8)^{-2}}\schur{\MM^{(k)}}{F_k}$.
For the last claim, Lemma \ref{lem:apply_chain} shows that
\[
\WW\approx_{1/2+4\sum_{k}(k+8)^{-2}}\left(\MM^{(0)}\right)^{\dag}
\]
and we can evaluate $\WW b$ in $$O(\sum_{k}\log k\log n^{(k)})=O(\log^{2}n\log\log n)$$
depth and $$O(\sum_{k}2^{3c_{2}r\log^{2}k}n^{(k)}\log n^{(k)}\log k)=2^{O(r\log^{2}r)}n\log n$$
work.
\end{proof}
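As a concrete sanity check of the objects manipulated here (not part of the paper's algorithm), the following sketch computes an exact Schur complement of a small graph Laplacian by Gaussian elimination and verifies that the result is again SDD; the helper `schur_complement` is hypothetical.

```python
from fractions import Fraction

def schur_complement(M, F):
    """Schur complement of M onto the complement of F, via exact
    Gaussian elimination of the variables in F."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] for i in range(n)]
    for f in F:
        piv = A[f][f]
        for i in range(n):
            if i == f or A[i][f] == 0:
                continue
            factor = A[i][f] / piv
            for j in range(n):
                A[i][j] -= factor * A[f][j]
    keep = [i for i in range(n) if i not in F]
    return [[A[i][j] for j in keep] for i in keep]

# Laplacian of the unit-weight path 0-1-2-3, an SDD matrix.
L = [[1, -1, 0, 0],
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [0, 0, -1, 1]]

# Eliminating the two interior vertices leaves one edge of weight 1/3
# (three unit resistors in series), and the result is again SDD.
S = schur_complement(L, F=[1, 2])
```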
In the algorithm $\textsc{RecursiveConstruct}_{r}$, we call the $(sr+1)^{th}$
through the $((s+1)r)^{th}$ iterations the $s^{th}$ phase. At the end
of each phase, $\textsc{Sparsify}$ is called once. The previous
lemma showed that the density of the graph at the $k^{th}$ iteration
is less than $2^{3c_{2}r\log^{2}k}$. This explains our choice of
reduction factor $2^{2c_{2}r\log^{2}k}$ in the $\textsc{Sparsify}$
algorithm as follows:
\begin{lemma}
\label{lem:each_phase_blow_up}Let $n^{(k)}$ be the number of variables
of $\MM^{(k)}$. From the $(sr+1)^{th}$ to the $((s+1)r)^{th}$ iteration,
including the $\textsc{Sparsify}$ call at the end, the algorithm
takes
\[
2^{O(r\log^{2}(sr))}n^{(sr+1)}\log^2 n^{(sr+1)}
\]
work and
\[
O(r\log(sr)\log^2 n^{(sr)})+O(\log^{2}n^{(sr+1)}\log\log n^{(sr+1)})
\]
depth, plus the time to construct a vertex sparsifier chain for an
SDD matrix with $n^{((s+1)r)}$ variables and $2^{c_{2}r\log^{2}((s+1)r)}n^{((s+1)r)}\log n^{((s+1)r)}$
non-zeros.\end{lemma}
\begin{proof}
Let $m^{(k)}$ and $n^{(k)}$ be the numbers of non-zeros and variables
in $\MM^{(k)}$ before the $\textsc{Sparsify}$ call, if there is one.
Lemma \ref{lem:subsetSimple} and Theorem \ref{thm:aprox_schur} show
that the $k^{th}$ iteration takes $O(m^{(k)}+m^{(k+1)})$
work and $O(\log k\log n^{(k)})$ depth. Lemma \ref{lem:eliminiate_vertex}
shows that
\[
n^{(k)}\leq c_{1}^{k-1}n\text{ and }m^{(k)}\leq2^{3c_{2}r\log^{2}k}n^{(k)}\log n^{(k)}
\]
and hence, from the $(sr+1)^{th}$ to the $((s+1)r)^{th}$ iteration
(excluding the $\textsc{Sparsify}$ call at the end), the algorithm
takes
\begin{eqnarray*}
\sum_{k=sr+1}^{(s+1)r}O\left(m^{(k)}+m^{(k+1)}\right) & \leq & \sum_{k=sr+1}^{(s+1)r}2^{O(r\log^{2}k)}n^{(k)}\log n^{(k)}\\
& \leq & 2^{O(r\log^{2}(sr))}n^{(sr+1)}\log n^{(sr+1)}
\end{eqnarray*}
work and
\begin{eqnarray*}
\sum_{k=sr+1}^{(s+1)r}O\left(\log k\log n^{(k)}\right) & \leq & O(r\log(sr)\log n^{(sr)})
\end{eqnarray*}
depth.
Now, we bound the cost of the $\textsc{Sparsify}$ call. Let $m^{*}$
and $n^{*}$ be the numbers of non-zeros and variables in $\MM^{((s+1)r)}$
before the $\textsc{Sparsify}$ call. Lemma \ref{lem:sparsify} shows
that the $\textsc{Sparsify}$ call takes $d_{con}\left(m^{*}2^{-2c_{2}r\log^{2}((s+1)r)},n^{*}\right)+O(d_{eval}(m^{*},n^{*})+d_{chk}(m^{*})+\log n^{*})$
depth and $w_{con}\left(m^{*}2^{-2c_{2}r\log^{2}((s+1)r)},n^{*}\right)+O(w_{eval}(m^{*},n^{*}) \log n^{*}+w_{chk}(m^{*})+m^{*}\log n^{*})$
work.
For any SDD matrix $\BB^{T}\BB$, an edge $b\in im(\BB^{T})$ if and
only if the endpoints of the edge are in the same connected component
of the graph corresponding to $\BB^{T}\BB$. Halperin and Zwick \cite{halperin1996optimal}
show how to compute the connected components of a graph with $m$ edges
and $n$ vertices in $O(\log n)$ depth and $O(m+n)$ work in the
EREW PRAM model. Using this, we can check every edge in $O(\log n)$
depth and $O(m+n)$ work.
To construct an implicit approximate inverse for the sampled SDD matrix,
we can use $\textsc{RecursiveConstruct}_{r}$. Lemma \ref{lem:eliminiate_vertex}
showed that it takes $O(\log^{2}n^{*}\log\log n^{*})$ depth and $2^{O(r\log^{2}r)}n^{*}\log n^{*}$
work to apply the approximate inverse once.
Hence, the total running time from the $(sr+1)^{th}$ to the $((s+1)r)^{th}$
iteration including the $\textsc{Sparsify}$ call is the
time to construct the vertex sparsifier chain plus
\[
2^{O(r\log^{2}(sr))}n^{(sr+1)}\log^2 n^{(sr+1)}
\]
extra work and
\[
O(r\log(sr)\log n^{(sr)})+O(\log^{2}n^{(sr+1)}\log\log n^{(sr+1)})
\]
extra depth.
\end{proof}
Note that at the end of the $s^{th}$ phase, the time required to construct an extra vertex
sparsifier chain for the $\textsc{Sparsify}$ call is less than the
remaining cost after the $s^{th}$ phase. This is the reason why we
use $2^{2c_{2}r\log^{2}k}$ as the reduction factor for the $\textsc{Sparsify}$
call. The following lemma takes account of the recursive calls and
shows the total running time of the algorithm.
}{}
\begin{lemma}
\label{lem:recursive_con}
With high probability, the algorithm
$\textsc{RecursiveConstruct}_{r}(\MM^{(0)})$ returns a
vertex sparsifier chain such that the linear operator $\WW$ corresponding
to it satisfies
\ifthenelse{\boolean{@full}}{\[
\WW\approx_{1}\left(\MM^{(0)}\right)^{\dag}.
\]}{$\WW\approx_{1}\left(\MM^{(0)}\right)^{\dag}$.}
Assuming $r\log^{2}r=o(\log n)$, we can evaluate $\WW b$ in $O(\log^{2}n\log\log n)$ depth
and $2^{O(r\log^{2}r)}n\log n$ work. Also, the algorithm $\textsc{RecursiveConstruct}_{r}\left(\MM^{(0)}\right)$
takes $2^{O(\log n/r)}$ depth and $m\log n+2^{O(r\log^{2}r)}n\log n$
work.\end{lemma}
\begin{proof}
All of the results except the construction time are proved in
Lemma \ref{lem:eliminiate_vertex}.
To bound the construction time, we first consider the case where $\MM^{(0)}$
has only $2^{c_{2}r}n\log n$ non-zeros. In that case, the algorithm skips
step 1 because the matrix is already sparse. Lemma
\ref{lem:each_phase_blow_up} shows that during the $s^{th}$ phase,
the $\textsc{Sparsify}$ call requires us to construct an extra vertex
sparsifier chain for a matrix with $n^{((s+1)r)}$ variables and at
most $2^{c_{2}r\log^{2}((s+1)r)}n^{((s+1)r)}\log n^{((s+1)r)}$ non-zeros.
Also, we know that $\textsc{Sparsify}$ returns a matrix with
$n^{((s+1)r)}$ variables and $2^{2c_{2}r\log^{2}((s+1)r)}n^{((s+1)r)}\log n^{((s+1)r)}$
non-zeros. Hence, the cost of the remaining iterations (excluding the recursions
created afterward) is larger than the cost of constructing the extra
vertex sparsifier chain required in the $s^{th}$ phase.
Hence, considering this recursion factor, the running time of the
$s^{th}$ phase is multiplied by a factor of $2^{s}$.
Since there are $O(\log n/r)$ phases and $r\log^{2}r=o(\log n)$,
the total depth of the algorithm is
\begin{eqnarray*}
& & \sum_{s=1}^{O(\log n/r)}2^{s}\left(r\log(sr)\log^2 n^{(sr)}+\log^{2}n^{(sr)}\log\log n^{(sr)}\right)\\
& = & 2^{O(\log n/r)}O\left(r\log\log n\log^2 n^{(last)}+\log^{2}n^{(last)}\log\log n^{(last)}\right)\\
& = & 2^{O(\log n/r)}O\left(r^{2}\log\log n+r^{2}\log r\right)\\
& = & 2^{O(\log n/r)}r^{2}\log\log(n)\\
& = & 2^{O(\log n/r)}
\end{eqnarray*}
and the total work of the algorithm is
\begin{eqnarray*}
& & \sum_{s=1}^{O(\log n/r)}2^{s}\left(2^{O(r\log^{2}(sr))}n^{(sr+1)}\log^2 n^{(sr+1)}\right)\\
& = & 2^{O(r\log^{2}r)}n\log^2 n.
\end{eqnarray*}
For general $m$, during the first step, $\textsc{Sparsify}$, we
need to solve a certain SDD matrix with at most $m^{(0)}2^{-c_{2}r}$
non-zeros and $n^{(0)}$ variables. To solve that SDD matrix, we use
$\textsc{RecursiveConstruct}_{r}$ to construct a vertex sparsifier
chain and use the chain to solve $O(\log n)$ different right-hand
sides. Using $r\log^{2}r=o(\log n)$, the total depth of this
algorithm is
\[
O\left(\log_{2^{r}}\left(\frac{m}{n\log n}\right)2^{O(\log n/r)}\right)=\log\left(m\right)2^{O(\log n/r)}=2^{O(\log n/r)}.
\]
and the total work of the algorithm is
\begin{eqnarray*}
& & m\log n+2^{O(r\log^{2}r)}n\log^2 n\log_{2^{r}}\left(\frac{m}{n\log n}\right)\\
& = & m\log n+2^{O(r\log^{2}r)}n\log^2 n\log\left(\frac{m}{n\log n}\right).
\end{eqnarray*}
Note that the first term dominates if $\frac{m}{n}\geq2^{O(r\log^{2}r)}$,
and hence we can simplify the bound to
\[
m\log n+2^{O(r\log^{2}r)}n\log^2 n.
\]
\end{proof}
The following theorem follows from Lemma \ref{lem:recursive_con} by setting $r=\log\log\log n$.
\begin{theorem}
\label{thm:non_combin_result}
Given any SDD matrix $\MM$ with $n$ variables and $m$ non-zeros,
we can find an implicit block-Cholesky factorization for the matrix
$\MM$ in $O(m\log n+n\log^{2+o(1)}n)$ work and $O(n^{o(1)})$ depth
such that for any vector $b$, we can compute an $\epsilon$-approximate
solution to $\MM^{-1}b$ in $O((m+n\log^{1+o(1)}n)\log(1/\epsilon))$
work and $O(\log^{2}n\log\log n\log(1/\epsilon))$ depth.\end{theorem}
\section*{Acknowledgements}
We thank Michael Cohen for notifying us of several issues
in previous versions of this manuscript.
\input{paper.bbl}
\ifthenelse{\boolean{@full}}{
\begin{appendix}
\input{weightedExpander}
\end{appendix}
}{
\newpage
{\Large
This page is intentionally left (almost) blank.
The full version starts after this page.
}
}
\end{document}
\section{Some Related Work}
\label{sec:related}
Gaussian elimination solves systems of equations in a matrix $\AA$ by computing
lower and upper triangular matrices $\LL$ and $\UU$ so that $\AA = \LL \UU$.
Equations in $\AA$ may then be solved by solving equations in $\LL$ and $\UU$,
which takes time proportional to the number of nonzero entries in those matrices.
This becomes slow if $\LL$ or $\UU$ has many nonzero entries, which is often the case.
Cholesky factorization is the natural symmetrization of this process: it writes symmetric
matrices $\AA$ as a product $\LL \LL^{T}$.
Incomplete Cholesky factorizations \cite{ICC} instead approximate $\AA$ by a product of sparse matrices
$\LL \LL^{T}$ by strategically dropping some entries in the computation of Cholesky factors.
One can then use these approximations as preconditioners to compute highly accurate solutions to systems in $\AA$.
While this is a commonly used heuristic, there have been few general theoretical analyses of the performance of the resulting algorithms.
Interestingly, Meijerink and van der Vorst \cite{ICC} analyze the performance of this algorithm on SDD matrices
whose underlying graph is a regular grid.
SDD linear systems have been extensively studied in scientific computing
as they arise when solving elliptic partial differential equations.
Multigrid methods have proved very effective at solving the resulting systems.
Fedorenko~\cite{fedorenko1964speed} gave the first multigrid method for SDD systems
on regular square grids and proved that it is a nearly-linear time algorithm.
Multigrid methods have since been used to solve many types of linear systems
\cite{brandt1977multi,hackbusch1985multi},
and have been shown to solve special systems in
linear work and logarithmic depth \cite{nicolaides1978multigrid,hackbusch1982multi} under some smoothness assumptions.
Recently, Napov and Notay~\cite{napov2012algebraic} gave the first algebraic multigrid method
with a guaranteed convergence rate.
However, to the best of our knowledge, a worst-case nearly-linear work bound
has not been proved for any of these algorithms.
Our algorithm is motivated both by multigrid methods and incomplete Cholesky factorizations.
Both exploit the fact that elimination operations in SDD matrices result in SDD matrices.
That is, Schur complements of SDD matrices result in SDD matrices with fewer vertices.
However, where multigrid methods eliminate a large fraction of vertices at each level,
our algorithms eliminate a small but constant fraction.
The main novelty of our approach is that we sparsify the resulting Schur complement.
A heuristic approach to doing this was recently studied by
Krishnan, Fattal, and Szeliski \cite{krishnan2013efficient}.
\section{Existence of Linear Work and $O(\log n\log^2\log n)$ depth Solvers}\label{sec:depth}
The factorizations constructed in the previous section
can be evaluated in $O(\log^2{n})$ depth and $O(n)$ work.
One $O(\log n)$ factor comes from the
depth of the recursion and another $O(\log n)$ factor comes from
the depth of matrix-vector multiplication.
The reason that matrix-vector multiplication can take logarithmic depth
is that computing the sum of $k$ numbers takes $O(\log{k})$ depth.
Thus, if we can instead multiply by matrices with $k^{O (1)}$ nonzeros in each
row and column, for some small $k$,
we can reduce the depth of each matrix-vector multiplication to $O (\log k)$.
Although the number of non-zeros in each row of $\ZZ^{(k_{i})}_{F_{i}F_{i}} \MM_{F_{i}C_{i}}$
is bounded by $(80 (i+1)^{4})^{k_{i}+1}$, the number of non-zeros per column
can be high.
This is because although we picked $F_i$ to be of bounded degree, many of those vertices
can all be adjacent to the same few vertices in $C_i$.
For the factorization constructed in Section~\ref{sec:existence},
$k$ can be as large as $n$.
In this section, we reduce this degree to $\log^{O(1)}{n}$
by splitting high degree vertices.
This leads to a factorization that can be evaluated in linear work
and $O(\log n\log^{2} \log n)$ depth.
\subsection{Splitting High Degree Vertices}
While sparsification produces graphs with few edges, it does not guarantee
that every vertex has low degree.
We will approximate an arbitrary graph by one of bounded degree
by splitting each high degree vertex into many vertices.
The edges that were originally attached to that vertex will be partitioned
among the vertices into which it is split.
The vertices into which it is split will then be connected by a complete graph,
or an expander if the complete graph would have too high degree.
The resulting bounded-degree graph has more vertices.
To approximate the original graph, we take a Schur complement of the bounded-degree graph
with respect to the extra vertices.
We recall that one can solve a system of equations in a Schur complement of a matrix
by solving one equation in the original matrix\footnote{%
To solve a system $\schur{\MM}{S} \xx = \bb$, where $S$ is the last set of rows of $\MM$,
one need merely solve the system $\MM \xxhat = \bbhat$, where $\bbhat$ is the same as $\bb$
but has zeros appended for the coordinates in $S$.
The vector $\xx$ is then obtained by simply ignoring the coordinates of $\xxhat$ in $S$.
}.
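The identity in this footnote can be checked numerically. The following sketch (a hypothetical 3-variable SDDM example, not from the paper) solves a system in a Schur complement both directly and by padding the right-hand side with a zero:

```python
from fractions import Fraction

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination over the rationals."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][j] - f * A[c][j] for j in range(n + 1)]
    return [A[i][n] / A[i][i] for i in range(n)]

# A 3-variable SDDM matrix; S is the last coordinate, V the first two.
M = [[3, -1, -1],
     [-1, 3, -1],
     [-1, -1, 3]]
b = [1, 2]

# Schur complement onto V: M_VV - M_VS M_SS^{-1} M_SV.
Schur = [[M[i][j] - Fraction(M[i][2] * M[2][j], M[2][2]) for j in range(2)]
         for i in range(2)]

x_schur = solve(Schur, b)        # solve in the Schur complement directly
x_hat = solve(M, b + [0])        # pad b with a zero on S and solve in M
x_restricted = x_hat[:2]         # ignore the coordinate in S
```

Both routes return the same vector, reflecting the fact that the inverse of the Schur complement is the corresponding principal submatrix of the full inverse.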
We begin our analysis by examining what happens when we split an individual vertex.
\begin{lemma}\label{lem:splitStar}
Let $G$ be a weighted star graph with vertex set
$\setof{v_{1}, \dots , v_{n}, u}$
and edges connecting $u$ to each $v_{i}$ with weight $w_{i}$.
Let $\Ghat$ be a graph with vertex set
$\setof{v_{1}, \dots , v_{n}, u_{1}, \dots , u_{k}}$
in which the vertices $\setof{u_{1}, \dots , u_{k}}$
are connected by a complete graph of edges of weight
$W = \delta^{-1} \sum_{i} w_{i}$,
and each vertex $v_{i}$ is connected to exactly one vertex $u_{j}$,
again by an edge of weight $w_{i}$.
Let $U = \setof{u_{2}, \dots , u_{k}}$.
Then, $\schur{\Ghat}{U} \preccurlyeq G$,
and in $\schur{\Ghat}{U}$ the edge
between $u_{1}$ and $v_{i}$
has weight at least $w_{i} (1-2 \delta)$, for every $i$.
\end{lemma}
\begin{proof}
We will examine the Laplacian matrices of $G$ and $\Ghat$.
Define $w_{tot} = \sum_{i} w_{i}$, so $W = w_{tot} / \delta$.
Let $\bb$ be the vector of weights $w_{1}, \dots , w_{n}$,
and let $\BB$ be the diagonal matrix of $\bb$, so that
\[
\LL_{G} = \begin{pmatrix}
\BB & -\bb \\
-\bb^{T} & w_{tot}
\end{pmatrix}.
\]
Similarly, let $\CC$ be the adjacency matrix between
$v_{1}, \dots , v_{n}$ and $u_{1}, \dots , u_{k}$,
and let $\DD$ be the diagonal matrix whose
$j$th entry is the sum of the $w_{i}$ for which
$v_{i}$ is connected to $u_{j}$.
Then,
\[
\LL_{\Ghat} =
\begin{pmatrix}
\BB & -\CC \\
-\CC^{T} & \DD + W (k \II_{k} - \JJ_{k})
\end{pmatrix},
\]
where $\JJ_{k}$ is the $k \times k$ all ones matrix and $k \II_{k} -\JJ_{k}$
is the Laplacian of the complete graph on $k$ vertices.
To express the Schur complement,
let $\DD_{2}$ be the submatrix of $\DD$ obtained by excluding its
first row and column,
let $\CC_{2}$ be the submatrix of $\CC$ excluding its first column,
and let $\cc_{1}$ be the first column of $\CC$.
Let $\cc_{2} = \CC_{2} \bvec{1}$, so $\bb = \cc_{1} + \cc_{2}$.
We then have that $ \schur{\LL_{\Ghat}}{U}$ equals
\begin{align*}
& \begin{pmatrix}
\BB & -\cc_{1} \\
-\cc_{1}^{T} & \DD (1,1)
\end{pmatrix}
-
\begin{pmatrix}
-\CC_{2} \\
- W \bvec{1}^{T}
\end{pmatrix}
\left( \DD_{2} + W (k \II_{k-1} - \JJ_{k-1}) \right)^{-1}
\begin{pmatrix}
-\CC_{2}^{T} &
- W \bvec{1}
\end{pmatrix}
\\
& =
\begin{pmatrix}
\BB & -\cc_{1} \\
-\cc_{1}^{T} & \DD (1,1)
\end{pmatrix}
-
\begin{pmatrix}
\CC_{2} \\
W \bvec{1}^{T}
\end{pmatrix}
\left( \DD_{2} + W (k \II_{k-1} - \JJ_{k-1}) \right)^{-1}
\begin{pmatrix}
\CC_{2}^{T} &
W \bvec{1}
\end{pmatrix}
\end{align*}
To understand this expression,
we will show that it approaches $\LL_{G}$ as $\delta $ goes to zero.
We first note that
\[
(k \II_{k-1} - \JJ_{k-1})^{-1} = \frac{1}{k} (\II_{k-1} + \JJ_{k-1}),
\]
and so
\[
\CC_{2} (k \II_{k-1} - \JJ_{k-1})^{-1} \bvec{1} = \CC_{2} \bvec{1} = \cc_{2}.
\]
So, the last row and column of the Schur complement agrees with $\LL_{G}$ as $\delta$ goes to zero.
On the other hand, the upper-left block becomes
\[
\BB - \CC_{2} \left( \DD_{2} + W (k \II_{k-1} - \JJ_{k-1}) \right)^{-1} \CC_{2}^{T},
\]
which goes to $\BB$ as $\delta $ goes to zero.
To bound the discrepancy in terms of $\delta $, we recall that $\JJ = \bvec{1} \bvec{1}^{T}$,
and so we can use the Sherman-Morrison-Woodbury formula to
compute
\[
\left( \DD_{2} + W (k \II_{k-1} - \JJ_{k-1}) \right)^{-1}
=
\left( \DD_{2} + W k \II_{k-1} \right)^{-1}
+
\frac{\left( \DD_{2} + W k \II_{k-1} \right)^{-1} W \JJ_{k-1} \left( \DD_{2} + W k \II_{k-1} \right)^{-1}}
{1 - W \bvec{1}^{T} \left( \DD_{2} + W k \II_{k-1} \right)^{-1} \bvec{1}}.
\]
Note that $\DD_{2} + W k \II_{k-1}$ is a diagonal matrix.
As all entries of $\DD_{2}$ are less than $\delta W$,
every diagonal entry of its inverse is at least $(W k (1+\delta))^{-1}$.
So, we have the entry-wise inequality
\[
\frac{\left( \DD_{2} + W k \II_{k-1} \right)^{-1} W \JJ_{k-1} \left( \DD_{2} + W k \II_{k-1} \right)^{-1}}
{1 - W \bvec{1}^{T} \left( \DD_{2} + W k \II_{k-1} \right)^{-1} \bvec{1}}
\geq
\frac{
W \JJ_{k-1}/ (W k (1+\delta) )^{2}
}{1/k}
=
\frac{1}{W k (1+\delta)^{2}} \JJ_{k-1}.
\]
This tells us that, entry-wise,
\[
\left( \DD_{2} + W (k \II_{k-1} - \JJ_{k-1}) \right)^{-1}
\geq
(1 - 2 \delta) \frac{1}{W k}
\left( \II_{k-1} + \JJ_{k-1}\right)
=
(1 - 2 \delta)
(W ( k \II_{k-1} - \JJ_{k-1} ))^{-1}.
\]
The claimed bound on the entries in the row and column of the Schur complement corresponding to $u_{1}$
now follows
from the fact that they are obtained by multiplying this matrix inverse on either side by
$\CC_{2}$ and $W \bvec{1}$:
as these are non-negative matrices, the entry-wise inequality propagates to the product.
\end{proof}
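As a numeric sanity check of the lemma's weight bound (with $k=2$ clones and hypothetical values $w=(1,2,3,4)$, $\delta=1/10$, not taken from the paper), one can eliminate the clone $u_{2}$ by hand: eliminating a single vertex adds, for each pair of its neighbours $a,b$, an edge of weight $w(a,u_{2})\,w(b,u_{2})/\deg(u_{2})$.

```python
from fractions import Fraction

# Star with center u and leaves v1..v4; u is split into u1 (keeping v1, v2)
# and u2 (keeping v3, v4), with the clones joined by W = (sum of w_i) / delta.
w = [Fraction(1), Fraction(2), Fraction(3), Fraction(4)]
delta = Fraction(1, 10)
W = sum(w) / delta

# Weights of u1-v3 and u1-v4 in schur(Ghat, {u2}), obtained by eliminating u2.
deg_u2 = w[2] + w[3] + W
new_u1_v3 = w[2] * W / deg_u2
new_u1_v4 = w[3] * W / deg_u2
```

The surviving weights land between $w_{i}(1-2\delta)$ and $w_{i}$, as the lemma predicts.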
The following theorem states the approximation we obtain if we split all the vertices of high degree
and connect the clones of each vertex by expanders.
\begin{theorem}
\label{thm:degreeReduction}
For any graph $G=(V,E)$ with $n$ vertices,
$\varepsilon > 0$ and $t > 1/\varepsilon^{2}$,
there is a graph $\widetilde{G}=(V\cup S,\widetilde{E})$
of maximum degree $O (t)$
such that
\begin{equation}\label{eqn:thm:degreeReduction}
G \approx_{\varepsilon} \schur{\widetilde{G}}{S},
\end{equation}
$\sizeof{S} = O\left(n/(\varepsilon^{2}t)\right)$,
and $\sizeof{\widetilde{E}} \leq O(n/\varepsilon^{2} + n/(\varepsilon^{4} t))$.
\end{theorem}
\begin{proof}
We first sparsify $G$ using Theorem~\ref{thm:BSS},
obtaining $\Ghat $ with $O(n / \epsilon^{2})$ edges
such that $\Ghat \approx_{\varepsilon / 3} G$.
Let $U$ be the set of vertices in $\Ghat$ of degree more than $t$.
We will split each vertex in $U$ into many vertices.
For each $u \in U$, let $d_{u}$ be its degree in $\Ghat$.
We split $u$ into $\ceil{d_{u} / t}$ vertices, one of which we identify
with the original vertex $u$, and the rest of which we put in $S$.
We then partition the edges that were attached to $u$ among these
$\ceil{d_{u} / t}$ vertices, so that each is now attached to at most $t$
of these edges.
We then place a complete graph between all of the
vertices derived from $u$ in which every edge has weight equal to
the sum of the weights of edges attached to $u$, times $12/\varepsilon$.
That is, we apply the construction of Lemma~\ref{lem:splitStar} with
$\delta = \varepsilon / 12$.
Call the resulting graph $G'$.
If $\ceil{d_{u} / t} > t$, we replace that complete graph by
a weighted expander of degree $O (1/\varepsilon^{2})$ that is an $\varepsilon/3$
approximation of this weighted complete graph, as guaranteed to exist
by Lemma \ref{lem:existExpanders}.
The resulting graph is $\widetilde{G}$.
To show that \eqref{eqn:thm:degreeReduction} holds,
we first show that
\[
\Ghat \approx_{\varepsilon/3} \schur{G'}{S}.
\]
Lemma \ref{lem:splitStar} tells us that
$\schur{G'}{S} \preccurlyeq \Ghat.$
It also tells us that the graph looks
like $\Ghat$ except that
it can have some extra edges and that the edges attached to vertices we split can
have a slightly lower weight.
If an edge is attached to just one of the split vertices, its weight can be lower by a factor
of $2 \delta = \varepsilon / 6$.
However, some edges could be attached to two of the split vertices, in which case they could
have weight that is lower by a factor of $\varepsilon/3$.
This implies that
$(1-\varepsilon/3) \Ghat \preccurlyeq \schur{G'}{S}$.
To prove \eqref{eqn:thm:degreeReduction}, we now combine this with the factors
of $\varepsilon/3$ that we lose by sparsifying at the start
and by replacing with expanders at the end.
It is clear that every vertex in $\widetilde{G}$ has degree at most $t + O (1/\epsilon^{2})$.
To bound the number of edges in $\widetilde{G}$, we observe that the sum of the degrees
of vertices that are split is at most $O (n / \varepsilon^{2})$, and so the number
of extra vertices in $S$ is at most $O (n / \varepsilon^{2} t)$.
Our process of adding expanders at the end can create at most $O (1 / \varepsilon^{2})$ new edges
for each of these vertices,
giving a total of at most $O (n / \varepsilon^{4} t)$ new edges.
\end{proof}
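The edge-partitioning step of this construction is elementary; a minimal sketch (the helper `split_vertex` is hypothetical, and the expander placed between the clones is omitted):

```python
import math

def split_vertex(edges, t):
    """Partition the edge list of a degree-d vertex into ceil(d/t) groups
    of size at most t, one group per clone of the vertex (the first clone
    keeps the original vertex id; the rest go into S)."""
    k = math.ceil(len(edges) / t)
    return [edges[i * t:(i + 1) * t] for i in range(k)]

# A degree-10 vertex split with t = 4 yields 3 clones.
groups = split_vertex(list(range(10)), t=4)
```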
\begin{remark}\label{rem:thm:degreeReduction}
We do not presently know how to implement the exact construction from the above theorem in polynomial time,
because it relies on the nonconstructive proof of the existence of expanders from \cite{IF4}.
One can transform this into a polynomial time construction by instead using the explicit constructions
of Ramanujan graphs \cite{Margulis88,LPS} as described in Lemma \ref{lem:explicitExpanders}.
This would, however, add the requirement $t > 1/\epsilon^{6}$ to Theorem~\ref{thm:degreeReduction}.
While this would make Theorem~\ref{thm:degreeReduction} less appealing, it does not
alter the statement of Theorem \ref{thm:lowerDepth}.
\end{remark}
It remains to incorporate this degree reduction routine into
the solver construction.
Since our goal is to upper-bound the degree by $O(\log^{c}{n})$ for some constant $c$, we
can pick $t$ in Theorem~\ref{thm:degreeReduction}
so that $\epsilon^2 t \leq \log^{O (1)}n$.
This leads to a negligible increase in vertex count at each step.
So we can use a construction similar to Theorem~\ref{thm:UDU}
to obtain the lower depth solver algorithm.
\begin{theorem}\label{thm:lowerDepth}
For every $n$-dimensional SDDM matrix $\MM$ there is a linear operator
$\ZZ$ such that
\[
\ZZ \approx_{2} \MM^{-1}
\]
and matrix-vector multiplications in $\ZZ$ can be done in
linear work and $O(\log{n} \log^{2} \log{n})$ depth.
Furthermore, this operator can be obtained via a diagonal
$\DD$, an upper triangular matrix $\UU$ with $O(n)$ non-zero entries
and a set of vertices $\widehat{V}$ such that
\[
\MM\approx_{2}\schur{\UU^{T}\DD\UU}{\widehat{V}}.
\]
\end{theorem}
\begin{proof}
We will slightly modify the vertex sparsification chain from
Definition~\ref{def:vertexSparsifierChain}.
Once again, we utilize $\alpha_{i} = 4$ for all $i$
and $\epsilon_{i} = 1/(2 (i+2)^{2})$.
The main difference is that instead of using spectral sparsifiers
from Theorem~\ref{thm:BSS} directly, we use Theorem~\ref{thm:degreeReduction}
to control the degrees.
Specifically we invoke it with $\epsilon = \epsilon_i$
and $t_i = 200 \epsilon_i^{-2}$ on $\schur{\MM^{(i)}}{F_i}$
to obtain $\MM^{(i + 1)}$ and $S_{i + 1}$ s.t.
\[
\schur{\MM^{(i)}}{F_i} \approx_{\varepsilon_{i}}\schur{\MM^{(i + 1)}}{S_{i + 1}}.
\]
This leads to a slightly modified version of the vertex sparsifier chain.
We obtain a sequence of matrices $\MM^{(1)}, \MM^{(2)}, \ldots$ and subsets $S_i$
and $F_i$ s.t.
\begin{enumerate}
\item [a.] $\MM^{(1)} \approx_{\epsilon_0} \MM^{(0)}$,
\item [b.] $\schur{\MM^{(i + 1)}}{S_{i + 1}} \approx_{\epsilon_i} \schur{\MM^{(i)}}{F_i}$,
\item [c.] Each row and column of $\MM^{(i)}$ has at most $t$ non-zeros.
\item [d.] Each column and row of $\MM^{(i)}$ indexed by $F_i$ has at most $80 (i+1)^{4}$
nonzero entries (obtained by combining the bound on non-zeros from
Theorem~\ref{thm:degreeReduction} with Lemma~\ref{lem:subsetLowDeg}).
\item [e.] $\MM^{(i)}_{F_i F_i}$ is $4$-strongly diagonally dominant and
\item [f.] $\MM^{(d)}$ has size $O(1)$.
\end{enumerate}
This modified chain can be invoked in a way analogous to the vertex
sparsifier chain.
At each step we
\begin{enumerate}
\item Apply a recursively computed approximation to $(\MM^{(i)}_{F_i F_i})^{-1}$ on $\bb^{(i)}$
to obtain $\bbbar^{(i + 1)}$.
\item Pad $\bbbar^{(i + 1)}$ with zeros on $S_{i + 1}$ to obtain $\bb^{(i + 1)}$
\item Repeat on level $i + 1$
\item Restrict the solution $\xx^{(i + 1)}$ to obtain $\xxbar^{(i + 1)}$
\item Apply a recursively computed approximation to $(\MM^{(i)}_{F_i F_i})^{-1}$ to
$\xxbar^{(i + 1)}$ to obtain $\xx^{(i)}$.
\end{enumerate}
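Schematically, this solve phase is the recursion below (a sketch only; the operators applied in steps 1 and 5 are the same ones used when Schur complementing, as in Theorem~\ref{thm:UDU}):

```
Solve(i, b):
    if i = d: return the exact solution of M^(d) x = b    # O(1)-size base case
    y    <- ApproxInv(M^(i)_{F_i F_i}) applied to b       # step 1
    b'   <- y padded with zeros on S_{i+1}                # step 2
    x'   <- Solve(i + 1, b')                              # step 3
    xbar <- restriction of x'                             # step 4
    return ApproxInv(M^(i)_{F_i F_i}) applied to xbar     # step 5
```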
Let $n_{i}$ denote the dimension of $\MM^{(i)}$.
Since $t$ was set to $200 \epsilon_i^{-2}$, the increase in vertex count due to
$S_{i + 1}$ is bounded as follows:
\[
n_{i + 1} \leq n_i \left(1 - \frac{1}{80} \right) \left( 1 + \frac{1}{\epsilon^2 t} \right)
\leq n_i \left(1 - \frac{1}{400} \right)
\]
By induction this gives
\[
n_{i} \leq n \left(1 - \frac{1}{400} \right)^{i-1}.
\]
So the total work follows in a way analogous to Theorem~\ref{thm:result_BSS},
and it remains to bound depth.
The constant factor reduction in vertex count gives a bound
on chain length of $d = O(\log{n})$.
This in turn implies $t = O(\epsilon_i^{-2})= O(\log^{4}{n})$.
Therefore the depth of each matrix-vector multiplication by $\MM_{F_i C_i}$
is bounded by $O(\log\log{n})$.
Also, choosing $k_{i}$ as in Theorem~\ref{thm:UDU} gives that the number of non-zeros
in $\ZZ_{F_iF_i}^{(k_i)}$ is bounded by $(\log{n})^{O(\log\log{n})}$,
giving a depth of $O(\log^{2}\log{n})$ for each matrix-vector multiplication
involving $\ZZ_{F_iF_i}$.
The $O(\log{n})$ bound on $d$ then gives a bound on the total depth of $O (\log n \log^2 \log n)$.
This algorithm can also be viewed as a linear operator corresponding to a
$\UU^T \DD \UU$ factorization of a larger matrix.
We will construct the operators inductively.
Suppose we have $\DDhat^{(i + 1)}$, $\UU^{(i + 1)}$, and $\widehat{V}^{(i + 1)}$ such that
\[
\MM^{(i + 1)} \approx_{2 \sum_{i' = i + 1}^{d} \epsilon_{i'}} \schur{\left( \UU^{(i + 1)} \right)^{T} \DDhat^{(i + 1)} \UU^{(i + 1)} }{\widehat{V}^{(i + 1)}}.
\]
An argument similar to that in the proof of Lemma \ref{lem:uduApprox} gives
\[
\MM^{(i)}\approx_{\varepsilon_{i}}\left[\begin{array}{cc}
\II & 0\\
\UU_{F_{i} C_{i}}^{T} & \II
\end{array}\right]\left[\begin{array}{cc}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1} & 0\\
0 & \schur{\MM^{(i)}}{F_i}
\end{array}\right]\left[\begin{array}{cc}
\II & \UU_{F_{i} C_{i}}\\
0 & \II
\end{array}\right].
\]
Consider the entry $\schur{\MM^{(i)}}{F_i}$.
Combining condition b of the chain with the inductive
hypothesis and Fact~\ref{fact:schurLoewner} gives
\begin{align*}
\schur{\MM^{(i)}}{F_i}
& \approx_{\epsilon_i} \schur{\MM^{(i + 1)}}{S_{i + 1}}\\
& \approx_{\epsilon_i + 2\sum_{i' = i + 1}^{d} \epsilon_{i'}} \schur{
\schur{\left( \UU^{(i + 1)} \right)^{T} \DDhat^{(i + 1)} \UU^{(i + 1)} }{\widehat{V}^{(i + 1)}}}{S_{i + 1}}.
\end{align*}
Since the order by which we remove vertices when taking Schur complements
does not matter, we can set
\[
\widehat{V}^{(i)} = \widehat{V}^{(i + 1)} \cup S_{i + 1},
\]
to obtain
\[
\schur{\MM^{(i)}}{F_i}
\approx_{\epsilon_i + 2\sum_{i' = i + 1}^{d} \epsilon_{i'}}
\schur{\left( \UU^{(i + 1)} \right)^{T} \DDhat^{(i + 1)} \UU^{(i + 1)} }
{\widehat{V}^{(i)}}.
\]
Block-substituting this and using Fact~\ref{fact:blockSubstitute} then gives:
\[
\MM^{(i)}\approx_{2 \sum_{i' = i}^{d} \varepsilon_{i'}}\left[\begin{array}{cc}
\II & 0\\
\UU_{F_{i} C_{i}}^{T} & \II
\end{array}\right]\left[\begin{array}{cc}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1} & 0\\
0 & \schur{\left( \UU^{(i + 1)} \right)^{T} \DDhat^{(i + 1)} \UU^{(i + 1)} }{\widehat{V}^{(i)}}
\end{array}\right]\left[\begin{array}{cc}
\II & \UU_{F_{i} C_{i}}\\
0 & \II
\end{array}\right].
\]
We will show in Lemma~\ref{lem:schurRearrange} that the Schur
complement operation can be taken outside multiplications by $\UU^T$
and $\UU$.
This allows us to rearrange the right-hand side into:
\[
\schur{
\left[\begin{array}{ccc}
\II & 0 & 0\\
\UU_{F_{i} C_{i}}^{T} & \II & 0\\
0 & 0 & \II_{\widehat{V}^{(i)}}
\end{array}\right]
\left[\begin{array}{cc}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1} & 0\\
0 & (\UU^{(i + 1)})^{T} \DDhat^{(i + 1)} \UU^{(i + 1)}
\end{array}\right]
\left[\begin{array}{ccc}
\II & \UU_{F_{i} C_{i}} & 0\\
0 & \II & 0\\
0 & 0 & \II_{\widehat{V}^{(i)}} \\
\end{array}\right]
}
{\widehat{V}^{(i)}}.
\]
Hence choosing
\[
\DDhat^{(i)} =
\left[\begin{array}{cc}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1} & 0\\
0 & \DDhat^{(i + 1)}
\end{array}\right],
\]
and
\[
\UU^{(i)} =
\left[\begin{array}{cc}
\II & 0\\
0 & \UU^{(i + 1)}
\end{array}\right]
\left[\begin{array}{ccc}
\II & \UU_{F_{i} C_{i}} & 0\\
0 & \II & 0\\
0 & 0 & \II_{\widehat{V}^{(i)}}
\end{array}\right]
=
\begin{bmatrix}
\II & \UU_{F_{i} C_{i}} & 0\\
0 & \II & 0\\
0 & 0 & \UU^{(i+1)}
\end{bmatrix},
\]
gives $\MM^{(i)} \approx_{2 \sum_{i' = i}^{d} \epsilon_{i'} } \schur{\left( \UU^{(i)} \right)^{T} \DDhat^{(i)} \UU^{(i)} }{\widehat{V}^{(i)}}$, and the inductive hypothesis holds for $i$ as well.
We then finish the proof as in Lemma \ref{lem:uduApprox} by replacing $\DDhat^{(0)}$ with
a matrix $\DD$ whose diagonal blocks contain $\XX^{(i)}$ instead of $\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1}$.
\end{proof}
It remains to prove the lemma rearranging the order of taking Schur complements.
\begin{lemma}
\label{lem:schurRearrange}
Let $\PP$ be an arbitrary invertible matrix, and $\MM = \schur{\MMhat}{\widehat{V}}$.
Then
\[
\PP^T \MM \PP
= \schur{\left[\begin{array}{cc}
\PP & 0\\
0 & \II_{\widehat{V}}
\end{array}\right]^T
\MMhat
\left[\begin{array}{cc}
\PP & 0\\
0 & \II_{\widehat{V}}
\end{array}\right]
}{\widehat{V}}.
\]
\end{lemma}
\begin{proof}
Let the rows and columns of $\MM$ be indexed by $V$.
It suffices to show that the matrix
\[
\left(
\left[\begin{array}{cc}
\PP & 0\\
0 & \II_{\widehat{V}}
\end{array}\right]^T
\MMhat
\left[\begin{array}{cc}
\PP & 0\\
0 & \II_{\widehat{V}}
\end{array}\right] \right)^{-1}_{VV}
\]
is the same as $\left( \PP^T \MM \PP \right)^{-1}$.
This matrix can be written as:
\[
\left[\begin{array}{cc}
\PP^{-1} & 0\\
0 & \II_{\widehat{V}}
\end{array}\right]
\MMhat^{-1}
\left[\begin{array}{cc}
\PP^{-T} & 0\\
0 & \II_{\widehat{V}}
\end{array}\right].
\]
The top left block corresponding to $V$ gives
\[
\PP^{-1} \left(\MMhat^{-1} \right)_{VV} \PP^{-T}.
\]
The definition of Schur complements gives
$\MM^{-1} = \left(\MMhat^{-1} \right)_{VV}$,
which completes the proof.
\end{proof}
\section{Linear sized $U^{T} D U$ approximations}\label{sec:existence}
We now show that the vertex sparsifier chains of $\MM$ from the previous section
can be used to construct Cholesky factorizations of matrices that
are 2-approximations of $\MM$.
In particular, we prove that for every SDDM matrix $\MM$ of dimension $n$ there
exists a diagonal matrix $\DD$ and an upper-triangular matrix $\UU$
having $O (n)$ nonzero entries such that $\UU^{T} \DD \UU$
is a 2-approximation of $\MM$.
The obstacle to obtaining such a factorization is that it does not allow
us to apply $\ZZ^{(k_{i})}_{F_{i}F_{i}}$ to a vector through a sequence
of matrix-vector multiplications.
Rather, we must explicitly construct the matrices
$\ZZ^{(k_{i})}_{F_{i}F_{i}}$.
If we directly apply the construction suggested in the previous section,
these matrices could be dense and thereby result in a matrix $\UU$
with too many nonzero entries.
To get around this problem, we show that we can always find strongly
diagonally dominant subsets in which all the vertices have low degree.
This will ensure that all of the matrices $\ZZ^{(k_{i})}_{F_{i}F_{i}}$
are sparse.
\begin{lemma}
\label{lem:subsetLowDeg}
For every $n$-dimensional SDD matrix $\MM$ and every $\alpha \geq 0$,
there is an $\alpha$-strongly diagonally dominant subset of columns $F$
of size at least $\frac{n}{16(1+\alpha)}$
such that the number of nonzeros in every column indexed by $F$ is at most twice the average
number of nonzeros in the columns of $\MM$.
\end{lemma}
\begin{proof}
Discard every column of $\MM$ that has more than twice the average number of nonzeros
per column.
Then remove the corresponding rows.
The remaining matrix has dimension at least $n/2$.
Use Lemma~\ref{lem:subsetSimple}
to find an $\alpha$-strongly diagonally dominant subset of the columns of
this matrix.
\end{proof}
To obtain a $\UU^{T} \DD \UU$ factorization from a vertex sparsifier chain,
we employ the procedure in Figure~\ref{fig:UDU}.
\begin{figure}[h]
\begin{algbox}
$(\DD, \UU) = \textsc{Decompose}\left(\MM^{(1)}, \dots , \MM^{(d)}, F_{1}, \dots , F_{d-1}\right)$,
where each $\MM^{(i)}$ is a SDDM matrix.
\begin{enumerate}
\item let $k_{i}$ be the smallest odd integer greater than or equal to $\log_{\alpha_{i}/2} \epsilon_i^{-1}$.
\item For each $i < d$, write $\MM^{(i)} = \XX^{(i)} + \LL^{(i)}$
where $\XX^{(i)}$ is a positive diagonal matrix and $\LL^{(i)}$ is a Laplacian.
\item Let $\XX^{(d)} = \II_{C_{d-1}}$
and let $\UUhat$ be the upper-triangular Cholesky factor of $\MM^{(d)}$.
\item Let $\DD$ be the diagonal matrix with $\DD_{F_{i} F_{i}} = \XX^{(i)}_{F_{i} F_{i}}$,
for $1 \leq i < d$, and $\DD_{C_{d-1} C_{d-1}} = \II_{C_{d-1}}$.
\item Let $\UU$ be the upper-triangular matrix with $1s$ on the diagonal,
$\UU_{C_{d-1} C_{d-1}} = \UUhat$, and
$\UU_{F_{i} C_{i}} = \ZZ_{F_{i} F_{i}}^{(k_{i})} \MM^{(i)}_{F_{i} C_{i}}$,
for $1 \leq i < d$.
\end{enumerate}
\end{algbox}
\caption{Converting a vertex sparsifier chain into $\UU$ and $\DD$.}
\label{fig:UDU}
\end{figure}
\begin{lemma}\label{lem:uduApprox}
On input a vertex sparsifier chain of $\MM$
with parameters $\alpha_{i} \geq 4$ and $\epsilon_{i}>0$,
the algorithm \textsc{Decompose} produces matrices $\DD$ and $\UU$
such that
\[
\UU^{T} \DD \UU \approx_{\gamma} \MM ,
\]
where
\[
\gamma \leq 2 \sum_{i=0}^{d-1} \epsilon_{i} + 4 / \min_{i} \alpha_{i}.
\]
\end{lemma}
\begin{proof}
Consider the inverse of the operator $\WW = \WW^{(1)}$ realized by the algorithm
\textsc{ApplyChain},
and the operators $\WW^{(i)}$ that appear in the proof of Lemma~\ref{lem:apply_chain}.
We have
\[
\left( \WW^{(i)} \right)^{-1}
=
\left[
\begin{array}{cc}
\II & 0\\
\MM_{C_{i}F_{i}} \ZZ^{(k_{i})}_{F_{i}F_{i}} & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1} & 0 \\
0 & \left(\WW^{(i+1)} \right)^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & \ZZ^{(k_{i})}_{F_{i}F_{i}} \MM_{F_{i}C_{i}}\\
0 & \II
\end{array}
\right],
\]
and
\[
\left( \WW^{(d)} \right)^{-1}
=
\MM^{(d)}
= \UUhat^{T} \UUhat.
\]
After expanding and multiplying the matrices in this recursive factorization,
we obtain
\[
\left( \WW^{(1)} \right)^{-1}
=
\UU^{T}
\begin{bmatrix}
\left(\ZZ^{(k_{1})}_{F_{1}F_{1}} \right)^{-1} & \dots & 0 & 0\\
0 & \ddots & 0 & 0\\
0 & \ldots & \left(\ZZ^{(k_{d-1})}_{F_{d-1}F_{d-1}} \right)^{-1} & 0 \\
0 & \ldots & 0 & \II_{C_{d-1} C_{d-1}}
\end{bmatrix}
\UU.
\]
Moreover, we know that this latter matrix is a $2 \sum_{i=0}^{d-1} \epsilon_{i}$
approximation of $\MM$.
It remains to determine the impact of replacing the matrix in the middle of
this expression with $\DD$.
It suffices to examine how well each matrix
$\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1}$
is approximated by $\XX^{(i)}$.
From Lemma~\ref{lem:Xdom} we know that
\[
\XX^{(i)} \succcurlyeq (\alpha_{i} /2) \LL^{(i)}.
\]
Thus, we may use Lemma~\ref{lem:lightBlock2} with $\beta = \alpha_{i} /2$
to conclude that
\[
\XX^{(i)}
\approx_{4 / \alpha_{i}}
\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1}.
\]
This implies that replacing each of the matrices
$\left(\ZZ^{(k_{i})}_{F_{i}F_{i}} \right)^{-1}$
by $\XX^{(i)}$
increases the approximation factor by at most
$4 / \min_{i} \alpha_{i}$.
\end{proof}
Using this decomposition procedure in a way similar to
Theorem~\ref{thm:result_BSS}, but with subsets chosen
using Lemma~\ref{lem:subsetLowDeg} gives the linear
sized decomposition.
\begin{theorem}\label{thm:UDU}
For every $n$-dimensional SDDM matrix $\MM$ there exists a diagonal matrix $\DD$
and an upper triangular matrix $\UU$ with $O (n)$ nonzero entries so that
\[
\UU^{T} \DD \UU \approx_{2} \MM .
\]
Moreover, back and forward solves in $\UU$ can be performed
with linear work in depth $O (\log^{2} n)$.
\end{theorem}
\begin{proof}
We choose the same parameters as were used in the proof of
Theorem \ref{thm:result_BSS}: $\alpha_{i} = 4$ for all $i$
and $\epsilon_{i} = 1/(2 (i+2)^{2})$.
Theorem~\ref{thm:BSS} then guarantees that the average number of
nonzero entries in each column of $\MM^{(i)}$ is at most
$10 / \epsilon_{i-1}^{2} = 40 (i+1)^{4}$.
If we now apply Lemma~\ref{lem:subsetLowDeg} to find $4$-diagonally dominant
subsets $F_{i}$ of each $\MM^{(i)}$,
we find that each such subset contains at least a $1/80$ fraction of the columns
of its matrix and that each
column and row of $\MM^{(i)}$ indexed by $F_{i}$ has at most $80 (i+1)^{4}$
nonzero entries.
This implies that each row of
$\ZZ^{(k_{i})}_{F_{i}F_{i}} \MM_{F_{i}C_{i}}$
has at most
$(80 (i+1)^{4})^{k_{i}+1}$ nonzero entries.
Let $n_{i}$ denote the dimension of $\MM^{(i)}$.
By induction, we know that
\[
n_{i} \leq n \left(1 - \frac{1}{80} \right)^{i-1}.
\]
So, the total number of nonzero entries in $\UU$ is at most
\[
\sum_{i=1}^{d} n_{i} (80 (i+1)^{4})^{k_{i}+1}
\leq
n \sum_{i=1}^{d} \left(1 - \frac{1}{80} \right)^{i-1} (80 (i+1)^{4})^{k_{i}+1}.
\]
We will show that the term multiplying $n$ in this last expression is upper bounded by
a constant.
To see this, note that $k_{i} \leq 1 + \log_{\alpha_i / 2} (2 \epsilon_{i}^{-1} )\leq \nu \log (i+1)$
for some constant $\nu$.
So, there is some other constant $\mu$ for which
\[
(80 (i+1)^{4})^{k_{i}+1}
\leq
\exp (\mu \log^{2} (i+1)).
\]
This implies that the sum is at most
\[
\sum_{i \geq 1} \exp (\mu \log^{2} (i+1) - i / 80),
\]
which is bounded by a constant.
The claimed bound on the work to perform backwards and forwards substitution with $\UU$
is standard: these operations require work linear in the number of nonzero entries of $\UU$.
The bound on the depth follows from the fact that the substitutions can be performed blockwise,
take depth $O (\log n)$ for each block, and the number of blocks, $d$, is logarithmic in $n$.
\end{proof}
\section{A Polynomial Time Algorithm for Optimal Solver Chains}
\label{sec:vertexReduce}
Our algorithms will begin by eliminating a set of vertices $F$
that is $\alpha$-strongly diagonally dominant, a concept that we now define.
\begin{definition}
\label{def:strongDD}
A symmetric matrix $\MM$ is $\alpha$-strongly diagonally dominant
if for all $i$
\[
\MM_{ii} \geq (1+\alpha) \sum_{j \neq i} \left| \MM_{ij} \right|.
\]
We say that a subset $F$ of the rows of a matrix $\MM$
is $\alpha$-strongly diagonally dominant if $\MM_{FF}$
is an $\alpha$-strongly diagonally dominant matrix.
\end{definition}
We remark that $0$-strong diagonal dominance coincides with the
standard notion of weak diagonal dominance.
In particular, Laplacian matrices are $0$-strongly diagonally dominant.
It is easy to find an $\alpha$-strongly diagonally dominant subset
containing at least a $1/(8 (1+\alpha))$ fraction of the rows of an SDD
matrix: one need merely pick a random subset and then discard the rows
that do not satisfy the condition.
\ifthenelse{\boolean{@full}}{
Pseudocode for computing such a subset is given in Figure~\ref{fig:randF}.
\begin{figure}[h]
\begin{algbox} $F=\textsc{SDDSubset}(\MM , \alpha)$, where $\MM$
is an $n$-dimensional SDD matrix.
\begin{enumerate}
\item Let $F'$ be a uniform random subset of $\setof{1, \dots , n}$ of size
$\frac{n}{4(1+\alpha)}$.
\label{ln:randSubset}
\item Set \[
F=\left\{ i\in F'\text{ such that }\sum_{j \in F', j \neq i}\left|\MM_{ij}\right|\leq\frac{1}{1 + \alpha} \left|\MM_{ii}\right|\right\} .
\]
\item If $|F| < \frac{n}{8(1 + \alpha)}$, goto Step~\ref{ln:randSubset}.
\item Return $F$.
\end{enumerate}
\end{algbox}
\caption{Routine for Generating an $\alpha$-strongly diagonally dominant
subset $F$}
\label{fig:randF}
\end{figure}
}{}
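The rejection-sampling routine \textsc{SDDSubset} can be sketched in a few lines of Python. This is an illustrative translation of the pseudocode (not an optimized implementation); the demo matrix, a cycle Laplacian plus an extra diagonal, is our own choice.

```python
import numpy as np

def sdd_subset(M, alpha, rng):
    """Sample F' of size n/(4(1+alpha)); keep rows whose off-diagonal mass
    inside F' is at most |M_ii|/(1+alpha); retry if too few rows survive."""
    n = M.shape[0]
    size = max(1, int(n / (4.0 * (1.0 + alpha))))
    target = n / (8.0 * (1.0 + alpha))
    while True:
        Fp = rng.choice(n, size=size, replace=False)
        F = [i for i in Fp
             if sum(abs(M[i, j]) for j in Fp if j != i)
                <= abs(M[i, i]) / (1.0 + alpha)]
        if len(F) >= target:
            return np.array(sorted(F))

# Demo: a 40-vertex cycle Laplacian plus an extra diagonal (an SDDM matrix).
n = 40
M = 3.0 * np.eye(n)
for i in range(n):
    M[i, (i + 1) % n] = -1.0
    M[i, (i - 1) % n] = -1.0
F = sdd_subset(M, 4.0, np.random.default_rng(0))
```

By Lemma~\ref{lem:subsetSimple}, the loop terminates after a constant expected number of iterations, and the returned submatrix is $\alpha$-strongly diagonally dominant.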
\begin{lemma}
\label{lem:subsetSimple}
For every $n$-dimensional SDD matrix $\MM$ and every $\alpha \geq 0$,
\textsc{SDDSubset} computes an $\alpha$-strongly diagonally dominant subset $F$
of size at least $n/(8 (1+\alpha))$ in $O(m)$ expected work and $O(\log{n})$ expected depth,
where $m$ is the number of nonzero entries in $\MM$.
\end{lemma}
\begin{proof}
As $F$ is a subset of $F'$,
\[
\sum_{j \in F, j \neq i}\left|\MM_{ij}\right| \leq \sum_{j \in F', j \neq i}\left|\MM_{ij}\right|.
\]
So, when the algorithm does return a set $F$, it is guaranteed to be $\alpha$-strongly diagonally dominant.
We now show that the probability that the algorithm finishes in each iteration is at least $1/2$.
Let $A_{i}$ be the event that $i \in F'$ and that $i \not \in F$.
This only happens if $i \in F'$ and
\begin{equation}\label{eqn:subsetSimple}
\sum_{j \in F', j \not = i} \abs{\MM_{ij}}
>
\frac{1}{1+\alpha } \abs{\MM_{ii}}.
\end{equation}
The set $F$ is exactly the set of $i \in F'$ for which $A_{i}$ does not hold.
Given that $i \in F'$, the probability that each other $j \not = i$ is in $F'$ is
\[
\frac{1}{n-1} \left(\frac{n}{4 (1+\alpha)}-1 \right) .
\]
So,
\[
\expec{}{\sum_{j \in F', j \not = i } \abs{\MM_{ij}} \Big| i \in F'}
\leq
\frac{1}{n-1}
\left(\frac{n}{4 (1+\alpha)}-1 \right) \sum_{j \not = i} \abs{\MM_{ij}}
<
\frac{1}{4 (1+\alpha)} \sum_{j \not = i} \abs{\MM_{ij}}
\leq
\frac{1}{4 (1+\alpha)} \abs{\MM_{ii}},
\]
as $\MM$ is diagonally dominant.
So, Markov's inequality tells us that
\[
\prob{}{
\sum_{j \in F', j \not = i } \abs{\MM_{ij}}
>
\frac{1}{1+\alpha} \abs{\MM_{ii}}
\Big| i \in F'
}
< 1/4,
\]
and thus
\[
\prob{}{A_{i}} = \prob{}{i \in F'} \prob{}{i \not \in F | i \in F'}
<
\frac{1}{4 (1+\alpha)} \frac{1}{4}
=
\frac{1}{16 (1+\alpha)}.
\]
Again applying Markov's inequality allows us to conclude
\[
\prob{}{\sizeof{\setof{i : A_{i}}} \geq \frac{n}{8 (1+\alpha)}} < 1/2.
\]
So, with probability at least $1/2$, $\sizeof{F} \geq n / (8 (1+\alpha))$,
and the algorithm will pass the test in line 3.
Thus, the expected number of iterations made by the algorithm is at most $2$.
The claimed bounds on the expected work and depth of the algorithm follow.
\end{proof}
Strongly diagonally dominant subsets are useful because linear systems
involving them can be solved rapidly.
Given such a set $F$, we will construct
an operator $\ZZ_{FF}^{(k)}$ that approximates
$\MM_{FF}^{-1}$
and that can be applied quickly.
To motivate our construction,
observe that if $\MM_{FF} = \XX_{FF} + \LL_{FF}$ where $\XX_{FF}$ is a nonnegative diagonal matrix and
$\LL_{FF}$ is a Laplacian, then
\[
\MM_{FF}^{-1} = \XX_{FF}^{-1}
- \XX_{FF}^{-1} \LL_{FF} \XX_{FF}^{-1} +
\sum_{i \geq 2} (-1)^{i} \XX_{FF}^{-1} (\LL_{FF} \XX_{FF}^{-1})^{i}.
\]
We will approximate this series by its first few terms:
\begin{equation}
\ZZ_{FF}^{(k)} \stackrel{\mathrm{def}}{=} \sum_{i = 0}^{k} \XX_{FF}^{-1} \left(-\LL_{FF} \XX_{FF}^{-1} \right)^{i}.
\label{eqn:defZ}
\end{equation}
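As a sanity check of \eqref{eqn:defZ}, the following Python sketch (our own illustration; the splitting $\MM_{FF} = \XX_{FF} + \LL_{FF}$ is recovered from the off-diagonal entries) computes the truncated series and compares it to the true inverse on a strongly dominant block.

```python
import numpy as np

def z_series(M_FF, k):
    """Z^{(k)} = sum_{i=0}^{k} X^{-1} (-L X^{-1})^i, where M_FF = X + L,
    X is diagonal and L is the Laplacian built from the off-diagonals."""
    A = -(np.triu(M_FF, 1) + np.tril(M_FF, -1))  # nonnegative adjacency part
    L = np.diag(A.sum(axis=1)) - A               # Laplacian part
    X = M_FF - L                                 # positive diagonal remainder
    Xinv = np.diag(1.0 / np.diag(X))
    term = np.eye(M_FF.shape[0])                 # (-L X^{-1})^i, starting at I
    Z = np.zeros_like(M_FF)
    for _ in range(k + 1):
        Z += Xinv @ term
        term = term @ (-L @ Xinv)
    return Z

# A strongly diagonally dominant block: M_FF = 8 I + (path Laplacian).
M_FF = np.array([[ 9., -1.,  0.],
                 [-1., 10., -1.],
                 [ 0., -1.,  9.]])
```

The error decays geometrically in $k$ at a rate governed by the eigenvalues of $\XX_{FF}^{-1} \LL_{FF}$, matching the bound of Lemma~\ref{lem:lightBlock2}.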
\ifthenelse{\boolean{@full}}{
In the following lemmas, we show that using $\ZZ_{FF}$
in place of $\MM_{FF}^{-1}$ in \eqref{eqn:blockInverse}
provides a good approximation
of $\MM^{-1}$.
We begin by pointing out that
$\XX_{FF}$ is much greater than $\LL_{FF}$.
In particular, this implies that
all diagonal entries of $\XX_{FF}$ are positive, so that
$\XX_{FF}^{-1}$ actually exists.
\begin{lemma}\label{lem:Xdom}
Let $\MM$ be a SDDM matrix that is $\alpha$-strongly diagonally dominant.
Write $\MM = \XX + \LL$ where $\XX$ is a nonnegative diagonal matrix and
$\LL$ is a Laplacian.
Then,
\[
\XX \succcurlyeq \frac{\alpha}{2} \LL.
\]
\end{lemma}
\begin{proof}
Write $\LL = \YY - \AA$ where $\YY$ is diagonal and $\AA$ has zero diagonal.
As $\LL$ is diagonally dominant, so is $\YY + \AA$.
This implies that $\YY \succcurlyeq -\AA$, and so
$2 \YY \succcurlyeq \LL$.
As $\MM$ is $\alpha$-strongly diagonally dominant and the diagonal of $\MM$
is $\XX + \YY$,
\[
((\XX + \YY) \bvec{1})_{i} \geq (\alpha +1) (\AA \bvec{1})_{i}.
\]
As $\LL$ is a Laplacian, $\LL \bvec{1} = \bvec{0}$, which implies $\YY \bvec{1} = \AA \bvec{1}$ and
\[
(\XX \bvec{1})_{i} \geq \alpha (\AA \bvec{1})_{i} = \alpha (\YY \bvec{{1}})_{i}.
\]
As both $\XX$ and $\YY$ are diagonal,
this implies that
\[
\XX \succcurlyeq \alpha \YY \succcurlyeq \frac{\alpha}{2} \LL .
\]
\end{proof}
We now bound the quality of approximation of the power series \eqref{eqn:defZ}.
\begin{lemma}\label{lem:lightBlock2}
Let $\MM$ be a SDDM matrix and let $F$ be a set of columns
so that when we write $\MM_{FF} = \XX_{FF} + \LL_{FF}$
with $\XX_{FF}$ nonnegative diagonal and $\LL_{FF}$ a Laplacian,
we have
$\LL_{FF} \preccurlyeq \beta \XX_{FF}$.
Then, for odd $k$ and for $\ZZ_{FF}^{(k)}$ as defined in \eqref{eqn:defZ} we have:
\begin{equation}\label{eqn:lightBlock2}
\XX_{FF} + \LL_{FF}
\preceq (\ZZ_{FF}^{(k)})^{-1} \preceq
\XX_{FF} + \left( 1 + \delta \right) \LL_{FF},
\end{equation}
where
\[
\delta = \beta^{k} \frac{1+\beta}{1-\beta^{k+1}}.
\]
\end{lemma}
\begin{proof}
The left-hand inequality is equivalent to the statement that
all the eigenvalues of
$\ZZ^{(k)}_{FF} (\XX_{FF} + \LL_{FF})$ are at most $1$
(see \cite[Lemma 2.2]{SupportGraph} or
\cite[Proposition 3.3]{SpielmanTengLinsolve}).
To see that this is the case, expand
\begin{align*}
\ZZ_{FF}^{(k)} (\XX_{FF} + \LL_{FF})
& =
\left(\sum_{i=0}^{k} \XX_{FF}^{-1} (-\LL_{FF} \XX_{FF}^{-1})^{i} \right)
(\XX_{FF} + \LL_{FF})
\\
& =
\sum_{i=0}^{k} (-\XX_{FF}^{-1} \LL_{FF})^{i}
-
\sum_{i=1}^{k+1} (\XX_{FF}^{-1} \LL_{FF})^{i}
\\
& =
\II_{FF}
- (\XX_{FF}^{-1} \LL_{FF})^{k+1} .
\end{align*}
As all the eigenvalues of an even power of a matrix are nonnegative,
all of the eigenvalues of this last matrix are at most $1$.
Similarly, the other inequality is equivalent to the assertion
that all of the eigenvalues of
$\ZZ^{(k)}_{FF} (\XX_{FF} + (1+\delta ) \LL_{FF})$
are at least one.
Expanding this product yields
\begin{multline*}
\left(\sum_{i=0}^{k} \XX_{FF}^{-1} (-\LL_{FF} \XX_{FF}^{-1})^{i} \right)
(\XX_{FF} + (1+\delta ) \LL_{FF})
\\
=
\II_{FF}
- (\XX_{FF}^{-1} \LL_{FF})^{k+1}
+
\delta
\sum_{i=0}^{k} (-1)^{i} (\XX_{FF}^{-1} \LL_{FF})^{i+1}
\end{multline*}
The eigenvalues of this matrix are precisely the numbers
\begin{equation}\label{eqn:inLightBlock2}
1 - \lambda^{k+1} + \delta \sum_{i=0}^{k} (-1)^{i} \lambda^{i+1},
\end{equation}
where $\lambda$ ranges over the eigenvalues of
$\XX_{FF}^{-1} \LL_{FF}$.
The assumption $\LL_{FF} \preccurlyeq \beta \XX_{FF}$ implies
that the eigenvalues of $\XX_{FF}^{-1} \LL_{FF}$
are at most $\beta$, so
$0 \leq \lambda \leq \beta$.
We have chosen the value of $\delta$ precisely to guarantee that,
under this condition on $\lambda$, the value of \eqref{eqn:inLightBlock2}
is at least $1$.
\end{proof}
We remark that this power series is identical to
the Jacobi iteration for solving linear systems.
The following lemma allows us to extend the approximation of $\MM_{FF}$
by the inverse of $\ZZ_{FF}^{(k)}$ to the entire matrix $\MM$.
\begin{lemma}\label{lem:lightBlock3}
Under the conditions of Lemma \ref{lem:lightBlock2} and assuming that
$0 \leq \beta \leq 1/2$,
\[
\MM \preccurlyeq
\begin{pmatrix}
\left(\ZZ_{FF}^{(k)} \right)^{-1} & \MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{pmatrix}
\preccurlyeq
(1 + 2 \beta^{k}) \MM .
\]
\end{lemma}
\begin{proof}
The left-hand inequality follows immediately from
Fact \ref{fact:blockSubstitute} and
the left-hand side of \eqref{eqn:lightBlock2}.
To prove the right-hand inequality we apply
Fact \ref{fact:blockSubstitute}
and the right-hand side of \eqref{eqn:lightBlock2}
to conclude
\[
\begin{pmatrix}
\left(\ZZ_{FF}^{(k)} \right)^{-1} & \MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{pmatrix}
\preccurlyeq
\begin{pmatrix}
\MM_{FF} + \delta \LL_{FF} & \MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{pmatrix}
=
\MM +
\delta \begin{pmatrix}
\LL_{FF} & 0\\
0 & 0
\end{pmatrix}.
\]
Consider the (unique) decomposition of $\MM$ into
$\LL + \XX$, where $\LL$ is a graph Laplacian.
When viewed as graphs, $\LL_{FF}$ is a subgraph of $\LL$,
which means:
\[
\begin{pmatrix}
\LL_{FF} & 0\\
0 & 0
\end{pmatrix}
\preceq \LL \preceq \MM,
\]
by which we may conclude that
\[
\MM +
\delta \begin{pmatrix}
\LL_{FF} & 0\\
0 & 0
\end{pmatrix}
\preccurlyeq
\MM + \delta \MM .
\]
To finish the proof, recall that $\delta = \beta^{k} (1+\beta) / (1- \beta^{k+1})$
and observe that for $k \geq 1$ and $\beta \leq 1/2$, $ \delta \leq 2 \beta^{k}$.
\end{proof}
}{}
We now show that we can obtain a good approximation of $\MM^{-1}$
by replacing $\MM_{FF}^{-1}$ by $\ZZ_{FF}^{(k)}$ in the three places
in which it explicitly appears in \eqref{eqn:blockInverse},
but not in the Schur complement.
\begin{lemma}\label{lem:subZFF}
Let $\MM$ be a SDDM matrix and let $F$ be an $\alpha$-strongly diagonally
dominant set of columns for some $\alpha \geq 4$.
Then, for $k$ odd and $\ZZ^{(k)}$ as defined in \eqref{eqn:defZ},
\[
\left[
\begin{array}{cc}
\II & -\ZZ^{(k)}_{FF} \MM_{FC}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k)}_{FF} & 0 \\
0 & \schur{\MM}{F}^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
-\MM_{CF} \ZZ^{(k)}_{FF} & \II
\end{array}
\right]
\approx_{\gamma} \MM^{-1},
\]
for $\gamma = 2 (2/\alpha)^{k}$.
\end{lemma}
\begin{proof}
Define
\[
\MMhat =
\left[
\begin{array}{cc}
(\ZZ^{(k)}_{FF})^{-1} &\MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{array}
\right].
\]
Lemma~\ref{lem:Xdom} tells us that $\MM$ satisfies the conditions of Lemma~\ref{lem:lightBlock2}
with $\beta = 2/\alpha$.
So, Lemma~\ref{lem:lightBlock3} implies
\[
\MM \preccurlyeq \MMhat
\preccurlyeq \left(1+ \gamma\right) \MM.
\]
By facts~\ref{fact:blockInverse} and \ref{fact:orderInverse}, this implies
\[
\MM^{-1}
\succcurlyeq
\left[
\begin{array}{cc}
\II & -\ZZ^{(k)}_{FF} \MM_{FC}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k)}_{FF} & 0 \\
0 & \schur{\MMhat}{F}^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
-\MM_{CF} \ZZ^{(k)}_{FF} & \II
\end{array}
\right]
\succcurlyeq
(1+\gamma)^{-1} \MM^{-1}.
\]
From Facts \ref{fact:schurLoewner} and \ref{fact:orderInverse}, we know that
\[
\schur{\MM}{F}^{-1}
\succcurlyeq
\schur{\MMhat}{F}^{-1}
\succcurlyeq
(1+\gamma)^{-1} \schur{{\MM}}{F}^{-1}.
\]
When we use Fact \ref{fact:orderCAC} to substitute this inequality into the one above,
we obtain
\[
(1+\gamma )\MM^{-1}
\succcurlyeq
\left[
\begin{array}{cc}
\II & -\ZZ^{(k)}_{FF} \MM_{FC}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k)}_{FF} & 0 \\
0 & \schur{\MM}{F}^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
-\MM_{CF} \ZZ^{(k)}_{FF} & \II
\end{array}
\right]
\succcurlyeq
(1+\gamma)^{-1} \MM^{-1},
\]
which implies the lemma.
\end{proof}
We now use Lemma~\ref{lem:subZFF} to analyze a solver obtained by iteratively sparsifying Schur complements
of strongly diagonally dominant subsets.
We refer to the sequence of subsets and matrices obtained as a
\textit{vertex sparsifier chain},
as an approximation of a Schur complement is a spectral vertex sparsifier.
In the following definition, $\MM^{(1)}$ is intended to be a sparse approximation of
$\MM^{(0)}$.
The sparsity of the matrices will show up in the analysis of the runtime, but not in the
definition of the chain.
\begin{definition}[Vertex Sparsifier chain]
\label{def:vertexSparsifierChain}
For any SDDM matrix $\MM^{(0)}$, a vertex sparsifier chain of $\MM^{(0)}$
with parameters $\alpha_i \geq 4$ and $ 1/2\geq\epsilon_i>0$
is a sequence of matrices and subsets $(\MM^{(1)}, \ldots, \MM^{(d)};F_1, \ldots, F_{d-1})$
such that:
\begin{enumerate}
\item $\MM^{(1)} \approx_{\epsilon_0} \MM^{(0)}$,
\item $\MM^{(i + 1)} \approx_{\epsilon_i} \schur{\MM^{(i)}}{F_i}$,
\item $\MM^{(i)}_{F_i F_i}$ is $\alpha_i$-strongly diagonally dominant and
\item $\MM^{(d)}$ has size $O(1)$.
\end{enumerate}
\end{definition}
We present pseudocode that uses a vertex sparsifier chain to approximately solve
a system of equations in $\MM^{(0)}$ in Figure~\ref{fig:applyChain}.
We analyze the running time and accuracy of this algorithm
in Lemma~\ref{lem:apply_chain}.
\begin{figure}[h]
\begin{algbox} $\xx^{(1)} = \textsc{ApplyChain}(\MM^{(1)}, \ldots, \MM^{(d)}, F_1, \ldots, F_{d-1}, \alpha_{1} \ldots \alpha_{d - 1}, \epsilon_0 \ldots \epsilon_{d-1}, \bb^{(1)})$
\begin{enumerate}
\item For $i = 1, \ldots, d-1$
\begin{enumerate}
\item let $k_{i}$ be the smallest odd integer greater than or equal to $\log_{\alpha_{i}/2} (2/\epsilon_i)$.
\item $\xx^{(i)}_{F_i} \leftarrow \ZZ_{F_{i} F_{i}}^{(k_i)} \bb^{(i)}_{F_i}$,
where $\ZZ_{F_{i} F_{i}}^{(k_i)}$ is obtained from $\MM_{F_{i} F_{i}}^{(i)}$ as in \eqref{eqn:defZ}.
\item $\bb^{(i+1)} \leftarrow \bb^{(i)}_{C_i} - \MM^{(i)}_{C_i F_i} \xx^{(i)}_{F_i}$.
\end{enumerate}
\item $\xx^{(d)} \leftarrow \left( \MM^{(d)} \right)^{-1} \bb^{(d)}$.
\item For $i = d-1, \ldots, 1$
\begin{enumerate}
\item $\xx^{(i)}_{C_i} \leftarrow \xx^{(i+1)}$.
\item $\xx^{(i)}_{F_i} \leftarrow \xx^{(i)}_{F_i} - \ZZ_{F_{i} F_{i}}^{(k_i)} \MM^{(i)}_{F_i C_i} \xx^{(i+1)}$.
\end{enumerate}
\end{enumerate}
\end{algbox}
\caption{Solver Algorithm using Vertex Sparsifier Chain}
\label{fig:applyChain}
\end{figure}
\begin{lemma}
\label{lem:apply_chain}
Given a vertex sparsifier chain where $\MM^{(i)}$ has $m_i$ non-zero entries,
the algorithm $\textsc{ApplyChain}(\MM^{(1)}, \ldots, \MM^{(d)}, F_1, \ldots, F_{d-1}, \alpha_{1} \ldots \alpha_{d - 1}, \epsilon_0 \ldots \epsilon_{d-1}, \bb )$ corresponds to a linear operator
$\WW$ acting on $\bb$ such that
\begin{enumerate}
\item \[
\WW^{-1} \approx_{\sum_{i = 0}^{d-1} 2\epsilon_i} \MM^{(0)},
\] and
\item for any vector $\bb$, $\textsc{ApplyChain}(\MM^{(1)}, \ldots, \MM^{(d)}, F_1, \ldots, F_{d-1}, \alpha_{1} \ldots \alpha_{d - 1}, \epsilon_0 \ldots \epsilon_{d-1}, \bb )$ runs in
$O\left(\sum_{i = 1}^{d-1} \left( \log_{\alpha_i} \left( \epsilon_i^{-1} \right) \log{n} \right) \right)$ depth
and $O\left(\sum_{i = 1}^{d-1} \left( \log_{\alpha_i}\left( \epsilon_i^{-1} \right) \right) m_i \right)$ work.
\end{enumerate}
\end{lemma}
\begin{proof}
We begin by observing that the output vector $\xx^{(1)}$ is a linear transformation
of the input vector $\bb^{(1)}$.
Let $\WW^{(1)}$ be the matrix that realizes this transformation.
Similarly, for $2 \leq i \leq d$, define $\WW^{(i)}$ to be the matrix so that
\[
\xx^{(i)} = \WW^{(i)} \bb^{(i)}.
\]
An examination of the algorithm reveals that
\begin{equation}\label{eqn:apply_chain1}
\WW^{(d)} = \left(\MM^{(d)} \right)^{-1},
\end{equation}
and
\begin{equation}\label{eqn:apply_chaini}
\WW^{(i)}
=
\left[
\begin{array}{cc}
\II & -\ZZ^{(k_{i})}_{F_{i}F_{i}} \MM_{F_{i}C_{i}}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k_{i})}_{F_{i}F_{i}} & 0 \\
0 & \WW^{(i+1)}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
-\MM_{C_{i}F_{i}} \ZZ^{(k_{i})}_{F_{i}F_{i}} & \II
\end{array}
\right].
\end{equation}
We will now prove by backwards induction on $i$ that
\[
\left(\WW^{(i)} \right)^{-1} \approx_{\sum_{j = i}^{d-1} 2\epsilon_j} \MM^{(i)}.
\]
The base case of $i = d$ follows from \eqref{eqn:apply_chain1}.
When we substitute our choice of $k_{i}$
from line $1a$ of \textsc{ApplyChain}
into Lemma~\ref{lem:subZFF}, we find that
\[
\left[
\begin{array}{cc}
\II & -\ZZ^{(k_{i})}_{F_{i}F_{i}} \MM^{(i)}_{F_{i}C_{i}}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k_{i})}_{F_{i}F_{i}} & 0 \\
0 & \schur{\MM^{(i)}}{F_{i}}^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
- \MM^{(i)}_{C_{i}F_{i}} \ZZ^{(k_{i})}_{F_{i}F_{i}} & \II
\end{array}
\right]
\approx_{\epsilon_{i}}
\left(\MM^{(i)} \right)^{-1}.
\]
As $\MM^{(i+1)} \approx_{\epsilon_{i}} \schur{\MM^{(i)}}{F_{i}}$,
\[
\left[
\begin{array}{cc}
\II & -\ZZ^{(k_{i})}_{F_{i}F_{i}} \MM^{(i)}_{F_{i}C_{i}}\\
0 & \II
\end{array}
\right]
\left[
\begin{array}{cc}
\ZZ^{(k_{i})}_{F_{i}F_{i}} & 0 \\
0 & \left(\MM^{(i+1)} \right)^{-1}
\end{array}
\right]
\left[
\begin{array}{cc}
\II & 0\\
- \MM^{(i)}_{C_{i}F_{i}} \ZZ^{(k_{i})}_{F_{i}F_{i}} & \II
\end{array}
\right]
\approx_{2 \epsilon_{i}}
\left(\MM^{(i)} \right)^{-1}.
\]
By combining this identity with \eqref{eqn:apply_chaini}
and our inductive hypothesis, we obtain
\[
\WW^{(i)}
\approx_{\sum_{j = i}^{d-1} 2 \epsilon_{j}}
\left(\MM^{(i)} \right)^{-1}.
\]
Finally, as $\MM^{(0)} \approx_{\epsilon_{0}} \MM^{(1)}$,
\[
\WW^{(1)}
\approx_{\sum_{j = 0}^{d-1} 2 \epsilon_{j}}
\left(\MM^{(0)} \right)^{-1}.
\]
To bound the work and depth of the algorithm, we observe that we do not need to construct
the matrices $\ZZ^{(k_{i})}_{F_{i}F_{i}}$ explicitly.
Rather, we multiply vectors by the matrices by performing
$k_{i} = O(\log_{\alpha_i} ( \epsilon_i^{-1} ) )$ matrix-vector products
by the submatrices of $\MM^{(i)}$ that appear in the expression \eqref{eqn:defZ}.
As each matrix-vector product can be performed in depth $O (\log n)$,
the depth of the whole algorithm is bounded by
$O ((\log n) \sum_{i} k_{i})$.
As each matrix $\MM^{(i)}$ has $m_{i}$ non-zero entries,
and the work of the $i$th iteration is dominated by the cost of multiplying
by submatrices of $\MM^{(i)}$ $O (k_{i})$ times, the total work of the algorithm
is $O (\sum_{i=1}^{d-1} m_{i} k_{i} )$.
\end{proof}
\ifthenelse{\boolean{@full}}{
\begin{definition}[Work and Depth of a Vertex Sparsifier chain]
An $\epsilon$-vertex sparsifier chain of an SDDM matrix $\MM^{(0)}$
of depth $D$ and work $W$ is a vertex sparsifier chain of $\MM^{(0)}$
with parameters $\alpha_i \geq 4$ and $1/2 \geq \epsilon_i > 0$
that satisfies
\begin{enumerate}
\item $2 \sum_{i=0}^{d-1} \epsilon_{i} \leq \epsilon$,
\item $\sum_{i = 1}^{d-1} m_{i} \log_{\alpha_i} \epsilon_i^{-1} \leq W$,
where $m_{i}$ is the number of nonzeros in $\MM^{(i)}$, and
\item $\sum_{i = 1}^{d-1} (\log n) \log_{\alpha_i} \epsilon_i^{-1} \leq D$,
where $n$ is the dimension of $\MM^{(0)}$.
\end{enumerate}
\end{definition}
\begin{theorem}
\label{thm:result_BSS}
Every SDDM matrix $\MM$ of dimension $n$
has a $1$-vertex sparsifier chain
of depth $O(\log^{2}n\log\log n)$
and work $O (n)$.
Given such a vertex sparsifier chain, for any vector $b$,
we can compute an $\epsilon$ approximate solution to $\MM^{-1}b$
in $O(m\log(1/\epsilon))$ work and $O(\log^{2}n\log\log n\log(1/\epsilon))$
depth.
\end{theorem}
}
{
\begin{theorem}
\label{thm:result_BSS}
Every SDDM matrix $\MM$ of dimension $n$
has a $1$-vertex sparsifier chain.
Given such a vertex sparsifier chain, for any vector $b$,
we can compute an $\epsilon$ approximate solution to $\MM^{-1}b$
in $O(m\log(1/\epsilon))$ work and $O(\log^{2}n\log\log n\log(1/\epsilon))$
depth.
\end{theorem}
}
\begin{proof}
We will show the existence of such a vertex sparsifier chain with
$\alpha_i = 4$ for all $i$ and $\epsilon_i = \frac{1}{2(i+2)^2}$.
Lemma~\ref{lem:subsetSimple} tells us that every SDDM matrix
has a $4$-strongly diagonally dominant subset consisting of at least
a $1/(8 (1+4)) = 1/40$ fraction of its columns.
By taking such a subset, we ensure that $n_{i}$, the dimension of
$\MM^{(i)}$, satisfies
\[
n_i \leq \left( \frac{39}{40} \right)^{i-1} n.
\]
In particular, this means that $d$, the number of matrices in the chain,
will be logarithmic in $n$.
If we use Theorem~\ref{thm:BSS} to find a matrix $\MM^{(1)}$
that is an $\epsilon_{0}$ approximation of $\MM^{(0)} = \MM$,
and to find a matrix $\MM^{(i+1)}$ that is an $\epsilon_{i}$
approximation of $\schur{\MM^{(i)}}{F_{i}}$, then
each matrix $\MM^{(i)}$ will have a number of nonzero entries satisfying
\[
m_i \leq
O (n_{i} / \epsilon_{i-1}^{2})
\leq
O\left( \left( \frac{39}{40} \right)^{i-1} (i+1)^4 n \right).
\]
Lemma~\ref{lem:apply_chain} tells us that the vertex sparsifier chain
induces a linear operator that is an $\epsilon$-approximation of the inverse of $\MM$,
where
\[
\epsilon \leq 2 \sum_{i=0}^{d-1} \epsilon_{i}
\leq 2 \sum_{i=0}^{d-1} \frac{1}{2(i+2)^2}
\leq \sum_{i \geq 2} \frac{1}{i^{2}}
\leq 1.
\]
To compute the work and depth of the chain,
recall that we set $k_{i}$ to be the
smallest odd integer that is at least
$\log_{\alpha_{i}/2} \epsilon_i^{-1}$,
so $k_{i} \leq O (\log i)$.
Thus, the work of the chain is at most
\[
\sum_{i = 1}^{d} k_{i} m_{i}
\leq
O \left(\sum_{i=1}^{d} \log (i) \left( \frac{39}{40} \right)^{i-1} (i+1)^4 n \right)
\leq
O \left(\sum_{i=1}^{d} \left( \frac{39}{40} \right)^{i-1} i^5 n \right)
\leq
O (n).
\]
Similarly, the depth of the chain is at most
\[
\sum_{i = 1}^{d} (\log n) k_{i}
\leq
O\left( \sum_{i = 1}^{d} (\log n) \log d \right)
\leq
O (\log^{2} n \ \log \log n) .
\]
\end{proof}
\section{Spectral Vertex Sparsification Algorithm}
\label{sec:vertexSparsify}
In this section, we give a nearly-linear work algorithm for computing
spectral vertex sparsifiers.
Recall that our goal is to approximate the matrix
\[
\schur{\MM}{F} = \MM_{CC} - \MM_{CF} \MM_{FF}^{-1} \MM_{FC}.
\]
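For concreteness, the Schur complement can be computed directly from the blocks of $\MM$. The following Python sketch (illustrative only; the demo matrix is ours) also exhibits the standard fact that the inverse of $\schur{\MM}{F}$ equals the $CC$ block of $\MM^{-1}$.

```python
import numpy as np

def schur_complement(M, F, C):
    """Sc(M, F) = M_CC - M_CF M_FF^{-1} M_FC."""
    return (M[np.ix_(C, C)]
            - M[np.ix_(C, F)] @ np.linalg.solve(M[np.ix_(F, F)], M[np.ix_(F, C)]))

# Demo: a 6-vertex cycle Laplacian plus extra diagonal (an SDDM matrix).
n = 6
M = 3.0 * np.eye(n)
for i in range(n):
    M[i, (i + 1) % n] = -1.0
    M[i, (i - 1) % n] = -1.0
F, C = [0, 1, 2], [3, 4, 5]
S = schur_complement(M, F, C)
```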
Our algorithm approximates $\MM_{FF}^{-1}$ in a way analogous to the
recent parallel solver by Peng and Spielman~\cite{PengS14}.
It repeatedly writes the Schur complement as the average of the Schur complements
of two matrices.
The $FF$ block in one of these is diagonal, which makes its construction easy.
The other matrix is more strictly diagonally dominant than the previous one, so
that after a small number of iterations we can approximate it by a diagonal matrix.
\subsection{Splitting of the Schur Complement}
This splitting of the Schur complement is based on the following identity
from~\cite{PengS14}:
\begin{equation}\label{eqn:identity}
(\DD - \AA)^{-1}
=
\frac{1}{2}
\left[
\DD^{-1}
+
\left(\II + \DD^{-1} \AA\right)
\left(\DD - \AA \DD^{-1} \AA\right)^{-1}
\left(\II + \AA \DD^{-1}\right)
\right].
\end{equation}
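This identity is straightforward to verify numerically. The following Python snippet (our own sanity check, on a random matrix with strictly dominant diagonal so that all the inverses exist) confirms it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Random symmetric A with nonnegative entries and zero diagonal.
A = rng.random((n, n))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
# Strictly dominant diagonal, so D - A and D - A D^{-1} A are invertible.
D = np.diag(A.sum(axis=1) + 1.0)
Dinv = np.linalg.inv(D)
I = np.eye(n)
lhs = np.linalg.inv(D - A)
rhs = 0.5 * (Dinv
             + (I + Dinv @ A)
             @ np.linalg.inv(D - A @ Dinv @ A)
             @ (I + A @ Dinv))
```

The identity follows from the factorization $\DD - \AA \DD^{-1} \AA = (\DD - \AA) \DD^{-1} (\DD + \AA)$.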
We write $\MM_{FF} = \DD_{FF} - \AA_{FF}$ where $\DD_{FF}$ is diagonal
and $\AA_{FF}$ has zero diagonal, and apply \eqref{eqn:identity}
to obtain the following expression for the Schur complement.
\begin{align} \label{eqn:rearrange}
\schur{\MM}{F}
& = \frac{1}{2} \left[ 2\MM_{CC} - \MM_{CF} \DD_{FF}^{-1} \MM_{FC}
\right. \nonumber \\ & \qquad \left.
- \MM_{CF} \left(\II_{FF} + \DD_{FF}^{-1} \AA_{FF} \right)
\left(\DD_{FF} - \AA_{FF} \DD_{FF}^{-1} \AA_{FF} \right)^{-1}
\left(\II + \AA_{FF} \DD_{FF}^{-1} \right) \MM_{FC} \right].
\end{align}
Our key observation is that this is the average of the Schur complement
of two simpler matrices.
The first term is the Schur complement of:
\[
\left[
\begin{array}{cc}
\DD_{FF}& \MM_{FC}\\
\MM_{CF} & 0
\end{array}
\right],
\]
while the second term is the Schur complement of the matrix:
\[
\left[
\begin{array}{cc}
\DD_{FF} - \AA_{FF} \DD_{FF}^{-1} \AA_{FF} &
\left( \II + \AA_{FF} \DD_{FF}^{-1} \right)\MM_{FC}\\
\MM_{CF} \left( \II + \DD_{FF}^{-1} \AA_{FF} \right)
& 2 \MM_{CC}
\end{array}
\right].
\]
This leads to a recursion similar to that used in~\cite{PengS14}.
However, to ensure that the Schur complements of both matrices are SDDM,
we move some of the diagonal from the $CC$ block of the second
matrix to the $CC$ block of the first.
To describe this precisely, we use the notation
$\textsc{diag} (\xx)$ to indicate the diagonal matrix whose entries
are given by the vector $\xx$.
We also let $\mathbf{1}$ denote the all-ones vector.
So, $\textsc{diag}(\xx)\mathbf{1} = \xx$.
\begin{lemma}
\label{lem:splitting}
Let $\MM$ be a SDDM matrix, and let $(F, C)$ be an arbitrary partition of its columns.
Let $\MM_{FF} = \DD_{FF} - \AA_{FF}$, where $\DD_{FF}$ is
a diagonal matrix and $\AA_{FF}$ is a nonnegative matrix with zero diagonal.
Define the matrices:
\begin{equation}
\MM_1 \stackrel{\mathrm{def}}{=} \left[
\begin{array}{cc}
\DD_{FF}& \MM_{FC}\\
\MM_{CF} & \textsc{diag}(\MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C})
\end{array}
\right],
\label{eqn:firstHalf}
\end{equation}
and
\begin{equation}
\MM_2 \stackrel{\mathrm{def}}{=}
\left[
\begin{array}{cc}
\DD_{FF} - \AA_{FF} \DD_{FF}^{-1} \AA_{FF} &
\left( \II + \AA_{FF} \DD_{FF}^{-1} \right)\MM_{FC}\\
\MM_{CF} \left( \II + \DD_{FF}^{-1} \AA_{FF} \right)
& 2 \MM_{CC} - \textsc{diag}(\MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C})
\end{array}
\right].
\label{eqn:secondHalf}
\end{equation}
Then $\schur{\MM_1}{F}$ is a Laplacian matrix, $\MM_2$ is a SDDM matrix,
and
\begin{equation}\label{part:average}
\schur{\MM}{F} = \frac{1}{2} \left( \schur{\MM_1}{F} + \schur{\MM_2}{F} \right).
\end{equation}
\end{lemma}
\begin{proof}
Equation \ref{part:average} follows immediately from equation~\ref{eqn:rearrange}.
To prove that $\schur{\MM_1}{F}$ is a Laplacian matrix, we observe that all of its
off-diagonal entries are nonpositive, and that its row-sums are zero:
\[
\schur{\MM_1}{F} \mathbf{1}_{C} =
\textsc{diag}(\MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}) \mathbf{1}_{C}
- \MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C} = \bvec{0}_{C}.
\]
To prove that $\MM_{2}$ is a SDDM matrix, we observe that
all of its off-diagonal entries are also nonpositive.
For the $FF$ block this follows from the nonnegativity
of $\AA_{FF}$ and $\DD_{FF}$.
For the $FC$ and $CF$ blocks it follows from the nonpositivity of
$\MM_{CF}$ and $\MM_{FC}$.
We now show that
\[
\MM_{2} \mathbf{1} \geq \MM \mathbf{1} .
\]
This implies that $\MM_{2}$ is an SDDM matrix, since its row-sums are then
nonnegative and not all zero.
We first analyze the row-sums in the rows in $F$.
\begin{align*}
(\MM_2 \mathbf{1})_F
& =
\left[
\begin{array}{cc}
\DD_{FF} - \AA_{FF} \DD_{FF}^{-1} \AA_{FF} &
\left( \II + \AA_{FF} \DD_{FF}^{-1} \right)\MM_{FC}
\end{array}
\right]
\left[
\begin{array}{c}
\mathbf{1}_F\\
\mathbf{1}_C
\end{array}
\right]\\
& = \DD_{FF} \mathbf{1}_{F} + \MM_{FC} \mathbf{1}_{C}
-\AA_{FF} \DD_{FF}^{-1} \left( \AA_{FF} \mathbf{1}_{F} - \MM_{FC} \mathbf{1}_{C} \right) \\
& \geq \DD_{FF} \mathbf{1}_{F} + \MM_{FC} \mathbf{1}_{C}
- \AA_{FF} \DD_{FF}^{-1} \DD_{FF} \mathbf{1}_{F}\\
& = \DD_{FF} \mathbf{1}_{F} - \AA_{FF} \mathbf{1}_{F} + \MM_{FC} \mathbf{1}_{C}
\\
& = (\MM \mathbf{1})_{F}.
\end{align*}
Before analyzing the row-sums for rows in $C$,
we derive an inequality.
As $\MM$ is diagonally dominant,
every entry
of $\DD_{FF}^{-1} ( \AA_{FF} \mathbf{1}_F - \MM_{FC} \mathbf{1}_C )$
is between $0$ and $1$.
As $\MM_{CF}$ is non-positive,
this implies that
\[
\MM_{CF} \DD_{FF}^{-1} ( \AA_{FF} \mathbf{1}_F - \MM_{FC} \mathbf{1}_C )
\geq
\MM_{CF} \mathbf{1}_{F}.
\]
Using this inequality, we obtain
\begin{align*}
(\MM_2 \mathbf{1})_C
& = \left[
\begin{array}{cc}
\MM_{CF} \left( \II + \DD_{FF}^{-1} \AA_{FF} \right)
& 2 \MM_{CC} - \textsc{diag}(\MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C})
\end{array}
\right]
\left[
\begin{array}{c}
\mathbf{1}_F\\
\mathbf{1}_C
\end{array}
\right]\\
& = \MM_{CF} \mathbf{1}_F + \MM_{CF} \DD_{FF}^{-1} \AA_{FF} \mathbf{1}_F
+ 2 \MM_{CC} \mathbf{1}_{C}
- \textsc{diag}(\MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}) \mathbf{1}_C\\
& = \MM_{CF} \mathbf{1}_F + \MM_{CF} \DD_{FF}^{-1} \AA_{FF} \mathbf{1}_F
+ 2 \MM_{CC} \mathbf{1}_{C} - \MM_{CF} \DD_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}\\
& = \left( \MM_{CC} \mathbf{1}_{C} + \MM_{CF} \mathbf{1}_F \right)
+ \MM_{CC} \mathbf{1}_C + \MM_{CF} \DD_{FF}^{-1} \left( \AA_{FF} \mathbf{1}_F - \MM_{FC} \mathbf{1}_C \right)
\\
& \geq
\left( \MM_{CC} \mathbf{1}_{C} + \MM_{CF} \mathbf{1}_F \right)
+ \MM_{CC} \mathbf{1}_C + \MM_{CF} \mathbf{1}_{F}
\\
& = 2 (\MM \mathbf{1})_{C}.
\end{align*}
As the row-sums of $\MM$ are nonnegative, this is at least $(\MM \mathbf{1})_{C}$.
\end{proof}
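The splitting in Lemma~\ref{lem:splitting} can likewise be checked numerically. The following sketch (Python with numpy; the random graph weights, the diagonal slack, and the partition sizes are arbitrary choices of ours) builds $\MM_1$ and $\MM_2$ from a random SDDM matrix and verifies Equation~\ref{part:average}:

```python
import numpy as np

def schur(M, F, C):
    """Schur complement of M onto the block indexed by C."""
    return (M[np.ix_(C, C)]
            - M[np.ix_(C, F)] @ np.linalg.inv(M[np.ix_(F, F)]) @ M[np.ix_(F, C)])

rng = np.random.default_rng(1)
n = 7
W = rng.uniform(0.0, 1.0, (n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
M = np.diag(W.sum(axis=1)) - W + 0.1 * np.eye(n)   # SDDM: Laplacian plus slack

F = list(range(3)); C = list(range(3, n))
DFF = np.diag(np.diag(M[np.ix_(F, F)]))
AFF = DFF - M[np.ix_(F, F)]
MFC = M[np.ix_(F, C)]; MCF = M[np.ix_(C, F)]; MCC = M[np.ix_(C, C)]
Dinv = np.linalg.inv(DFF)

# Diagonal moved from the CC block of M2 to the CC block of M1.
shift = np.diag(MCF @ Dinv @ MFC @ np.ones(len(C)))
I = np.eye(len(F))
M1 = np.block([[DFF, MFC], [MCF, shift]])
M2 = np.block([[DFF - AFF @ Dinv @ AFF, (I + AFF @ Dinv) @ MFC],
               [MCF @ (I + Dinv @ AFF), 2 * MCC - shift]])

lhs = schur(M, F, C)
rhs = 0.5 * (schur(M1, F, C) + schur(M2, F, C))
assert np.allclose(lhs, rhs)
```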
We first discuss how to approximate the Schur complement of $\MM_1$.
\begin{lemma}
\label{lem:schurDiag}
There is a procedure $\textsc{ApproxSchurDiag}(\MM, (F, C), \epsilon)$ that takes
a graph Laplacian matrix $\MM$ with $m$ non-zero entries whose $FF$ block is diagonal,
as in equation~\ref{eqn:firstHalf}, together with the partition of variables $(F, C)$, and
returns a matrix $\tilde{\MM}_{SC}$ such that:
\begin{enumerate}
\item $\tilde{\MM}_{SC}$ has $O(m \epsilon^{-4})$ non-zero entries, and
\item $\tilde{\MM}_{SC} \approx_{\epsilon} \schur{\MM}{F}$.
\end{enumerate}
Furthermore, the procedure runs in $O(m \epsilon^{-4})$
work and $O(\log{n})$ depth.
\end{lemma}
The proof is based on the observation that this graph is a sum of product demand
graphs, one per vertex in $F$.
\ifthenelse{\boolean{@full}}{
These demand graphs can be formally defined as:
\begin{definition}
\label{def:demand}
The product demand graph of a vector $\dd$, $G(\dd)$, is a complete weighted graph
whose weight between vertices $i$ and $j$ is given by
\[
\ww_{ij} = \dd_{i} \dd_{j}.
\]
\end{definition}
In Section~\ref{sec:weightedExp}, we give a result on directly
constructing approximations to these graphs that can be summarized as follows:
\begin{lemma}
\label{lem:weightedExpander}
There is a routine $\textsc{WeightedExpander}(\dd, \epsilon)$ such that
for any demand vector $\dd$ of length $n$ and a parameter $\epsilon$, $\textsc{WeightedExpander}(\dd, \epsilon)$ returns in $O(n \epsilon^{-4})$
work and $O(\log{n})$ depth a graph $H$ with $O(n \epsilon^{-4})$ edges such that
\[
\LL_{H} \approx_{\epsilon} \LL_{G(\dd)}.
\]
\end{lemma}
}{
Such a demand graph is a complete weighted graph
whose weight between vertices $i$ and $j$ is given by $\ww_{ij} = \dd_{i} \dd_{j}$.
In the full version, we give a result on directly constructing linear-sized approximations to these graphs in linear time.
}
\begin{proof}[Proof of Lemma~\ref{lem:schurDiag}]
Since there are no edges between vertices in $F$,
the resulting graph consists of one clique among the neighbors
of each vertex $u \in F$.
Therefore it suffices to sparsify these separately.
It can be checked that the weight between two neighbors $v_1$ and $v_2$
in such a clique generated from vertex $u$ is $\frac{\ww_{uv_1} \ww_{u v_2}}{\dd_u}$.
Therefore we can replace it with a weighted expander given
in Lemma~\ref{lem:weightedExpander} above.
\end{proof}
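The clique-weight claim can be checked directly by eliminating the center of a single star (an illustrative numpy sketch; the edge weights below are arbitrary):

```python
import numpy as np

# Star with center u (index 0) and leaves 1..k; edge weights w.
w = np.array([1.0, 2.0, 3.5, 0.5])
k = len(w)
d_u = w.sum()
L = np.zeros((k + 1, k + 1))
L[0, 0] = d_u
L[0, 1:] = -w; L[1:, 0] = -w
L[1:, 1:] = np.diag(w)

# Schur complement eliminating the center vertex.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]

# Predicted clique: weight between leaves i and j is w_i * w_j / d_u.
W = np.outer(w, w) / d_u
np.fill_diagonal(W, 0.0)
clique = np.diag(W.sum(axis=1)) - W
assert np.allclose(S, clique)
```

The resulting matrix is again a Laplacian, as its row sums vanish.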
Now, we can invoke Lemma~\ref{lem:schurDiag} on $\MM_1$ to compute its Schur complement,
which means it remains to iterate on $\MM_2$.
Of course, $\MM_2$ may be a dense matrix.
Once again, we approximate it implicitly using weighted expanders.
\ifthenelse{\boolean{@full}}{
Here we also need weighted bipartite expanders:
\begin{definition}
\label{def:demand2}
The bipartite product demand graph of two vectors $\dd^{A}$, $\dd^{B}$,
$G(\dd^{A}, \dd^{B})$, is a weighted bipartite graph
whose weight between vertices $i \in A$ and $j \in B$ is given by
\[
\ww_{ij} = \dd^{A}_{i} \dd^{B}_{j}.
\]
\end{definition}
\begin{lemma}
\label{lem:weightedBipartiteExpander}
There is a routine $\textsc{WeightedBipartiteExpander}(\dd^{A}, \dd^{B}, \epsilon)$ such that
for any demand vectors $\dd^{A}$ and $\dd^{B}$ of total length $n$ and a parameter $\epsilon$, it returns in $O(n \epsilon^{-4})$
work and $O(\log{n})$ depth a graph $H$ with $O(n \epsilon^{-4})$ edges such that
\[
\LL_{H} \approx_{\epsilon} \LL_{G(\dd^{A}, \dd^{B})}.
\]
\end{lemma}
}{
}
\begin{lemma}
\label{lem:squareSparsify}
There exists a procedure $\textsc{SquareSparsify}$ such that,
$\textsc{SquareSparsify}(\MM, (F, C), \epsilon)$
returns in $O(m \epsilon^{-4})$ work and $O(\log{n})$ depth a matrix
$\tilde{\MM}_2$ with $O(m \epsilon^{-4} )$ non-zero entries
such that $\tilde{\MM}_2 \approx_{\epsilon} \MM_2$,
where $\MM_2$ is defined in equation~\ref{eqn:secondHalf}.
\end{lemma}
\begin{proof}
The edges in this graph come from $-\MM_{FC}$,
$\AA_{FF} \DD_{FF}^{-1} \AA_{FF}$ and
$\AA_{FF} \DD_{FF}^{-1} \MM_{FC}$.
The first is a subset, so we can keep them without increasing
the total size by more than a constant factor.
The latter two consist of length-two paths
involving some $u \in F$.
Therefore we can once again sum together a set of
expanders, one per each $u \in F$.
The edges in $\AA_{FF} \DD_{FF}^{-1} \AA_{FF}$
correspond to one clique with product demands
given by $\AA_{uv}$ for each $u \in F$, and
can be approximated using the weighted expander
in Lemma~\ref{lem:weightedExpander}.
The edges in $\AA_{FF} \DD_{FF}^{-1} \MM_{FC}$
can be broken down by midpoint into edges of weight
\[
\frac{\AA_{uv_F} \AA_{uv_C}}{\dd_{u}}
\]
where $v_F \in F$, $v_C \in C$ are neighbors of $u$.
This is a bipartite demand graph, so we can replace it
with the weighted bipartite expanders given in
Lemma~\ref{lem:weightedBipartiteExpander}.
The total size of the expanders that we generate for a vertex $u$ is
$O(\deg(u) \epsilon^{-4})$.
Therefore the total graph size follows from $\sum_{u \in F} \deg(u) \leq m$.
\end{proof}
In the next subsection, we show how to handle the case where $\MM$ is
an $\alpha$-strongly diagonally dominant matrix with large $\alpha$.
The number of splitting iterations therefore depends on how diagonally dominant the matrix is.
Here we once again use the approach introduced in~\cite{PengS14}
by showing that $\MM_2$ is more diagonally dominant than $\MM$
by a constant factor.
This implies that $O(\log(1 / (\alpha \epsilon)))$ iterations suffice for
obtaining a good approximation to the Schur complement.
\begin{lemma}
\label{lem:improve}
If $\DD - \AA$ is $\alpha$-strongly diagonally dominant and $\AA$
has $0$s on the diagonal, then
$\DD - \AA \DD^{-1} \AA$ is $ ((1 + \alpha)^2-1)$-strongly diagonally dominant.
\end{lemma}
\begin{proof}
Consider the sum of row $i$ in $\AA \DD^{-1} \AA$, it is
\[
\sum_{j } \sum_{k} \left| \AA_{ij} \DD_{jj}^{-1} \AA_{jk} \right|
= \sum_{j} \left|\AA_{ij} \right| \DD_{jj}^{-1} \sum_{k} \left| \AA_{jk} \right|
\leq \left(1 + \alpha\right)^{-1} \sum_{j} \left|\AA_{ij} \right|
\]
where the inequality follows from applying the $\alpha$-strong diagonal dominance
of $\DD - \AA$, namely $\DD_{jj} \geq (1 + \alpha) \sum_{k} \left| \AA_{jk} \right|$, to the $j\textsuperscript{th}$
row.
The result then follows from $\sum_{j} \left|\AA_{ij} \right| \leq (1 + \alpha)^{-1} \DD_{ii}$.
\end{proof}
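This improvement can be observed numerically. In the sketch below (Python with numpy; `alpha_of` is a hypothetical helper of ours that computes the largest $\alpha$ for which a matrix is $\alpha$-strongly diagonally dominant), the diagonal is scaled so that $\DD - \AA$ is exactly $\alpha$-strongly diagonally dominant:

```python
import numpy as np

def alpha_of(M):
    """Largest a with M_ii >= (1 + a) * (off-diagonal row sum) for every row."""
    off = np.abs(M - np.diag(np.diag(M))).sum(axis=1)
    return (np.diag(M) / off - 1.0).min()

rng = np.random.default_rng(2)
n = 8
A = rng.uniform(0.0, 1.0, (n, n)); A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
alpha = 0.5
D = np.diag((1.0 + alpha) * A.sum(axis=1))   # exactly alpha-strongly dominant

M = D - A
M_next = D - A @ np.linalg.inv(D) @ A
assert alpha_of(M) >= alpha - 1e-9
assert alpha_of(M_next) >= (1.0 + alpha) ** 2 - 1.0 - 1e-9
```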
\ifthenelse{\boolean{@full}}{
This notion is also stable under spectral sparsification.
\begin{lemma}
\label{lem:sparsifyOk}
Suppose $\AA = \XX + \YY$ is $\alpha$-strongly diagonally dominant,
where $\XX$ is diagonal and $\YY$ is a graph Laplacian, and suppose $\YY \approx_{\epsilon} \tilde{\YY}$.
Then $\tilde{\AA} = \XX + \tilde{\YY}$ is $\exp\left(-\epsilon \right) \alpha$-strongly diagonally dominant.
\end{lemma}
\begin{proof}
Using $\YY \approx_{\epsilon} \tilde{\YY}$, we have
$$\tilde{\YY}_{i,i} \leq \exp(\epsilon) \YY_{i, i}.$$
The fact that $\AA$ is $\alpha$-strongly diagonally dominant also gives
$\XX_{i, i} \geq \alpha \YY_{i, i}$.
Combining these gives $ \XX_{i,i} \geq \exp(-\epsilon) \alpha \tilde{\YY}_{i,i}$,
which means $\XX + \tilde{\YY}$ is $\exp\left(-\epsilon \right) \alpha$-strongly diagonally dominant.
\end{proof}
}{}
\subsection{Schur Complement of Highly Strongly Diagonally Dominant Matrices}
\label{subsec:highlySDD}
It remains to show how to deal with the highly strongly diagonally
dominant matrix at the last step.
Directly replacing $\MM_{FF}$ with its diagonal is problematic.
Consider the case where $F$ contains $u$ and $v$
with a weight $\epsilon$ edge between them, and
$u$ and $v$ are connected to $u'$ and $v'$ in $C$
by weight $1$ edges respectively.
Keeping only the diagonal results in a Schur complement
that disconnects $u'$ and $v'$.
This however can be fixed by taking a step of random
walk within $F$.
Let $\MM_{FF} = \XX_{FF} + \LL_{FF}$ be a SDDM matrix, where $\LL_{FF}$
is a graph Laplacian and $\XX_{FF}$ is a diagonal matrix.
We will consider the linear operator
\begin{equation}
\ZZ_{FF}^{(last)}
\stackrel{\mathrm{def}}{=}
\frac{1}{2} \XX_{FF}^{-1}
+ \frac{1}{2} \XX_{FF}^{-1} \left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1}
\left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1}
\label{eq:zzFinal}.
\end{equation}
\begin{lemma}
\label{lem:threeStep}
Let $\MM_{FF} = \XX_{FF} + \LL_{FF}$ be a SDDM matrix that is $\alpha$-strongly
diagonally dominant for some $\alpha \geq 4$. Then the operator
$\ZZ_{FF}^{(last)}$ as defined in Equation~\ref{eq:zzFinal} satisfies:
\[
\MM_{FF} \preceq \left( \ZZ_{FF}^{(last)} \right)^{-1}
\preceq \MM_{FF} + \frac{2}{\alpha} \LL_{FF}.
\]
\end{lemma}
\begin{proof}
Composing both sides by $\XX^{-1/2}_{FF}$
and substituting in $\mathcal{L}_{FF} = \XX_{FF}^{-1/2} \LL_{FF} \XX_{FF}^{-1/2}$ means it
suffices to show
\[
\II + \mathcal{L}_{FF}
\preceq \left( \frac{1}{2} \II
+ \frac{1}{2} \left( \II - \mathcal{L}_{FF} \right)^{2} \right)^{-1}
\preceq \II + \mathcal{L}_{FF} + \frac{2}{\alpha} \mathcal{L}_{FF}.
\]
The fact that $\MM_{FF}$ is $\alpha$-strongly diagonally dominant
gives $0 \preceq \LL_{FF} \preceq \frac{2}{\alpha} \XX_{FF}$, or
$0 \preceq \mathcal{L}_{FF} \preceq \frac{2}{\alpha} \II$ (Lemma \ref{lem:Xdom}).
As $\mathcal{L}_{FF}$ and $\II$ commute, the spectral theorem
means it suffices to show this for any scalar $0 \leq t \leq \frac{2}{\alpha}$.
Note that
\[
\frac{1}{2}
+ \frac{1}{2} \left( 1 - t \right)^{2}
= 1 - t + \frac{1}{2} t^2.
\]
Taking the difference between the inverse of this
and the `true' value of $1 + t$ gives:
\[
\left( 1 - t + \frac{1}{2} t^2 \right)^{-1} - \left( 1 + t \right)
= \frac{1 - \left( 1 + t \right) \left( 1 - t + \frac{1}{2} t^2\right)}{1 - t + \frac{1}{2}t^2 }
= \frac{\frac{1}{2 } t^2 \left( 1 - t \right) }
{1 - t + \frac{1}{2} t^2}.
\]
Incorporating the assumption that $0 \leq t \leq \frac{2}{\alpha}$
and $\alpha \geq 4$ gives
that the denominator is at least
\[
1 - \frac{2}{\alpha} \geq \frac{1}{2},
\]
and the numerator term can be bounded by
\[
0 \leq \frac{t^2}{2} \left( 1 - t\right) \leq \frac{t}{\alpha}.
\]
Combining these two bounds then gives the result.
\end{proof}
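Since the argument reduces to a scalar inequality, it is easy to check numerically; the sketch below (Python with numpy, illustrative only) samples $t \in [0, 2/\alpha]$ for $\alpha = 4$ and verifies both bounds:

```python
import numpy as np

alpha = 4.0
ts = np.linspace(0.0, 2.0 / alpha, 1001)
# Scalar version of (Z^(last))^{-1} after normalizing by X_FF.
inv = 1.0 / (1.0 - ts + 0.5 * ts**2)
assert np.all(inv >= 1.0 + ts - 1e-12)                      # lower bound
assert np.all(inv <= 1.0 + ts + (2.0 / alpha) * ts + 1e-12) # upper bound
```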
To utilize $\ZZ^{(last)}$, note that the Schur complement of the matrix
\begin{equation}
\MM^{\left(last\right)} \stackrel{\mathrm{def}}{=}
\left[
\begin{array}{cc}
\left(\ZZ^{(last)}_{FF}\right)^{-1}& \MM_{FC}\\
\MM_{CF} & \MM_{CC}
\end{array}
\right]
\label{eqn:lastMM}
\end{equation}
equals the average of the Schur complements of the matrices
\begin{equation}
\MM^{\left(last\right)}_{1} \stackrel{\mathrm{def}}{=}
\left[
\begin{array}{cc}
\XX_{FF}& \MM_{FC}\\
\MM_{CF} & \textsc{diag}\left(\MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}\right)
\end{array}
\right]
\label{eqn:lastFirstHalf}
\end{equation}
and
\begin{equation}
\MM^{\left(last\right)}_{2} \stackrel{\mathrm{def}}{=}
\left[
\begin{array}{cc}
\XX_{FF}& \left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1} \MM_{FC}\\
\MM_{CF} \XX_{FF} ^{-1} \left( \XX_{FF} - \LL_{FF} \right) &
2 \MM_{CC} - \textsc{diag}\left(\MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C} \right)
\end{array}
\right]
\label{eqn:lastSecondHalf}
\end{equation}
The first term is SDDM by construction of its $CC$ portion.
We can verify that the second term is also SDDM in a way that is
similar to Lemma~\ref{lem:splitting}.
\begin{lemma}
\label{lem:splitting2}
Let $\MM$ be a SDDM matrix, and let $(F, C)$ be an arbitrary partition of its columns.
Suppose that $\MM_{FF}$ is $\alpha$-strongly diagonally dominant for some $\alpha \geq 4$.
Define the matrices $\ZZ^{(last)}$, $\MM^{\left(last\right)}$, $\MM^{\left(last\right)}_{1}$ and $\MM^{\left(last\right)}_{2}$
as in Equations \ref{eq:zzFinal}, \ref{eqn:lastMM}, \ref{eqn:lastFirstHalf} and \ref{eqn:lastSecondHalf}.
Then, $\schur{\MM^{\left(last\right)}_{1}}{F}$ is a Laplacian matrix, $\MM^{\left(last\right)}_{2}$ is a SDDM matrix,
and
\begin{equation}\label{eqn:last_average}
\schur{\MM^{\left(last\right)}}{F} = \frac{1}{2} \left( \schur{\MM^{\left(last\right)}_{1}}{F} + \schur{\MM^{\left(last\right)}_{2}}{F} \right).
\end{equation}
\end{lemma}
\begin{proof}
Equation \ref{eqn:last_average} follows from substituting Equation~\ref{eq:zzFinal}
into Equations~\ref{eqn:lastMM}, ~\ref{eqn:lastFirstHalf} and~\ref{eqn:lastSecondHalf}.
To prove that $\schur{\MM^{\left(last\right)}_{1}}{F}$ is a Laplacian matrix, we observe that all of its
off-diagonal entries are nonpositive, and that its row-sums are zero:
\[
\schur{\MM^{\left(last\right)}_{1}}{F} \mathbf{1}_{C} =
\textsc{diag}(\MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}) \mathbf{1}_{C}
- \MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C} = \bvec{0}_{C}.
\]
To prove that $\MM^{\left(last\right)}_{2}$ is a SDDM matrix, we observe that
all of its off-diagonal entries are also nonpositive.
For the $FF$ block this follows from the nonnegativity
of $\XX_{FF}$.
For the $FC$ and $CF$ blocks it follows from the nonpositivity of
$\MM_{CF}$ and $\MM_{FC}$ and the fact that $\XX_{FF} - \LL_{FF}$
has nonnegative entries: the off-diagonal entries
of $\LL_{FF}$ are nonpositive, the diagonal of $\LL_{FF}$ is bounded
by $(2/\alpha) \XX_{FF}$, and $\alpha \geq 2$.
For the $CC$ block, it follows from the fact that
\[
2\MM_{CC} - \MM_{CF} \XX_{FF}^{-1} \MM_{FC}
\succeq
2\MM_{CC} - 2\MM_{CF} \MM_{FF}^{-1} \MM_{FC}
\succeq
\bvec{0}.
\]
We now show that
\[
\MM^{\left(last\right)}_{2} \mathbf{1} > \bvec{0} .
\]
This implies that $\MM^{\left(last\right)}_{2}$ is an SDDM matrix, since its
row-sums are then positive.
We first analyze the row-sums in the rows in $F$:
\begin{align*}
(\MM^{\left(last\right)}_{2} \mathbf{1})_F
& =
\left[
\begin{array}{cc}
\XX_{FF} &
\left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1} \MM_{FC}
\end{array}
\right]
\left[
\begin{array}{c}
\mathbf{1}_F\\
\mathbf{1}_C
\end{array}
\right]\\
& = \XX_{FF} \mathbf{1}_{F} + \left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}\\
& > \XX_{FF} \mathbf{1}_{F} - \left( \XX_{FF} - \LL_{FF} \right) \XX_{FF}^{-1} \XX_{FF} \mathbf{1}_{F}\\
& = \bvec{0},
\end{align*}
where the inequality uses $\MM_{FC} \mathbf{1}_{C} \geq -\XX_{FF} \mathbf{1}_{F}$ (which follows from the diagonal dominance of $\MM$, since $\LL_{FF} \mathbf{1}_{F} = \bvec{0}$) together with the entrywise nonnegativity of $\XX_{FF} - \LL_{FF}$.
For the row-sum in the rows in $C$, we obtain
\begin{align*}
(\MM^{\left(last\right)}_{2} \mathbf{1})_C
& = \left[
\begin{array}{cc}
\MM_{CF} \XX_{FF} ^{-1} \left( \XX_{FF} - \LL_{FF} \right) &
2 \MM_{CC} - \textsc{diag}\left(\MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C} \right)
\end{array}
\right]
\left[
\begin{array}{c}
\mathbf{1}_F\\
\mathbf{1}_C
\end{array}
\right]\\
& = \MM_{CF} \mathbf{1}_F
+ 2 \MM_{CC} \mathbf{1}_{C}
- \textsc{diag}(\MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}) \mathbf{1}_C\\
& > \MM_{CC} \mathbf{1}_{C}
- \MM_{CF} \XX_{FF}^{-1} \MM_{FC} \mathbf{1}_{C}\\
& > \MM_{CC} \mathbf{1}_{C}
+ \MM_{CF} \XX_{FF}^{-1} \XX_{FF} \mathbf{1}_{F}\\
& = (\MM \mathbf{1})_{C} \geq \bvec{0}.
\end{align*}
\end{proof}
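Equation~\ref{eqn:last_average} can also be verified numerically. The following sketch (Python with numpy; the random weights, the slack, and the partition are arbitrary choices of ours) forms $\ZZ^{(last)}_{FF}$ and checks that the Schur complement of $\MM^{(last)}$ equals the average of the Schur complements of the two split matrices:

```python
import numpy as np

def schur(M, f, c):
    return (M[np.ix_(c, c)]
            - M[np.ix_(c, f)] @ np.linalg.inv(M[np.ix_(f, f)]) @ M[np.ix_(f, c)])

rng = np.random.default_rng(3)
n = 7
W = rng.uniform(0.0, 1.0, (n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
M = np.diag(W.sum(axis=1)) - W + 0.2 * np.eye(n)

f = list(range(3)); c = list(range(3, n))
MFF = M[np.ix_(f, f)]; MFC = M[np.ix_(f, c)]
MCF = M[np.ix_(c, f)]; MCC = M[np.ix_(c, c)]
Wff = W[np.ix_(f, f)]
LFF = np.diag(Wff.sum(axis=1)) - Wff      # Laplacian part of M_FF
XFF = MFF - LFF                           # diagonal part, positive entries
Xinv = np.linalg.inv(XFF)

Z = 0.5 * Xinv + 0.5 * Xinv @ (XFF - LFF) @ Xinv @ (XFF - LFF) @ Xinv
M_last = np.block([[np.linalg.inv(Z), MFC], [MCF, MCC]])

shift = np.diag(MCF @ Xinv @ MFC @ np.ones(len(c)))
M1 = np.block([[XFF, MFC], [MCF, shift]])
M2 = np.block([[XFF, (XFF - LFF) @ Xinv @ MFC],
               [MCF @ Xinv @ (XFF - LFF), 2 * MCC - shift]])

lhs = schur(M_last, f, c)
rhs = 0.5 * (schur(M1, f, c) + schur(M2, f, c))
assert np.allclose(lhs, rhs)
```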
\begin{figure}[ht]
\begin{algbox}
$\tilde{\LL}_{schur[C]} = \textsc{LastStep}\left(\MM, \left(F, C\right), \epsilon \right)$
\begin{enumerate}
\item Form $\MM^{\left(last\right)}_{1}$ as in Equation~\ref{eqn:lastFirstHalf}.
\item Form $\MM^{\left(last\right)}_{2}$ as in Equation~\ref{eqn:lastSecondHalf}.
\item $\MM^{\left( last \right)}_{2S} \leftarrow
\textsc{SquareSparsify}\left(\MM^{(last)}_2, \left(F, C\right), \epsilon/2 \right) $.
\item $\tilde{\MM}_{SC} \leftarrow
\frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(last\right)}_{1}, \epsilon/2 \right)
+ \frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(last\right)}_{2S}, \epsilon/2 \right)$.
\item Return $\tilde{\MM}_{SC}$.
\end{enumerate}
\end{algbox}
\caption{Pseudocode for approximating the Schur complement of a matrix whose
$FF$ block is highly strongly diagonally dominant.
Small modifications to \textsc{ApproxSchurDiag} and
\textsc{SquareSparsify} are required to handle this case.}
\label{fig:lastStep}
\end{figure}
\begin{lemma}
\label{lem:lastStep}
Let $\MM$ be a SDDM matrix, and let $(F, C)$ be an arbitrary partition of its columns.
Suppose that $\MM_{FF}$ is $\alpha$-strongly diagonally dominant for some $\alpha \geq 4$.
There exists a procedure $\textsc{LastStep}$ such that,
$\textsc{LastStep}(\MM, (F, C), \epsilon)$
returns in $O(m \epsilon^{-8})$ work and $O(\log{n})$ depth a matrix
$\tilde{\MM}_{SC}$ with $O(m \epsilon^{-8} )$ non-zero entries
such that $\tilde{\MM}_{SC} \approx_{\epsilon+2/\alpha} \schur{\MM}{F}$.
\end{lemma}
\begin{proof}
We remark that Lemma~\ref{lem:schurDiag} is designed to compute the Schur complement of the matrix in Equation~\ref{eqn:firstHalf} and
Lemma~\ref{lem:squareSparsify} is designed to sparsify the matrix in Equation~\ref{eqn:secondHalf}. However, it is easy to modify them to compute the Schur complement of the matrix in Equation~\ref{eqn:lastFirstHalf} and to sparsify the matrix in Equation~\ref{eqn:lastSecondHalf}.
By Lemma~\ref{lem:squareSparsify}, we know that $\textsc{SquareSparsify}$ takes $O(m \epsilon^{-4})$ work and $O(\log{n})$ depth and outputs the matrix $\MM^{\left( last \right)}_{2S}$ with $O(m \epsilon^{-4} )$ non-zero entries. Therefore, Lemma~\ref{lem:schurDiag} shows that $\textsc{ApproxSchurDiag}$ takes $O(m \epsilon^{-8})$ work and $O(\log{n})$ depth and outputs a matrix with $O(m \epsilon^{-8} )$ non-zero entries. This proves the running time and the output size.
For the approximation guarantee, Lemmas~\ref{lem:threeStep},~\ref{lem:squareSparsify},
and~\ref{lem:schurDiag} give:
\begin{align*}
\schur{\MM}{F} & \approx_{2/\alpha} \frac{1}{2} \left( \schur{\MM^{\left(last\right)}_{1}}{F} + \schur{\MM^{\left(last\right)}_{2}}{F} \right) \\
& \approx_{\epsilon/2} \frac{1}{2} \left( \schur{\MM^{\left(last\right)}_{1}}{F} + \schur{\MM^{\left(last\right)}_{2S}}{F} \right) \\
& \approx_{\epsilon/2} \frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(last\right)}_{1}, \epsilon/2 \right)
+ \frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(last\right)}_{2S}, \epsilon/2 \right) \\
& = \tilde{\MM}_{SC}.
\end{align*}
\end{proof}
\subsection{Summary}
Combining the splitting step and the final step gives our algorithm (Figure \ref{fig:approxSchur}).
\begin{figure}[ht]
\begin{algbox}
$\tilde{\LL}_{schur[C]} = \textsc{ApproxSchur}\left(\MM, \left(F, C\right), \alpha, \epsilon \right)$
\begin{enumerate}
\item Initialize $\tilde{\MM}_{SC} \leftarrow 0$, $\MM^{(0)} \leftarrow \MM$,
$d = \log_{1+\alpha}\left(13 \epsilon^{-1}\right)$
\item For $i$ from $1$ to $d$ do
\begin{enumerate}
\item Form $\MM^{\left(i - 1\right)}_{1}$ as in Equation~\ref{eqn:firstHalf}.
\item Form $\MM^{\left(i - 1\right)}_{2}$ as in Equation~\ref{eqn:secondHalf}.
\item $\tilde{\MM}_{SC} \leftarrow \tilde{\MM}_{SC}
+ \frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(i - 1\right)}_{1}, \frac{\epsilon}{3d} \right)$.
\item $\MM^{\left( i \right)} \leftarrow \frac{1}{2}
\textsc{SquareSparsify}\left(\MM^{(i - 1)}, \left(F, C\right), \frac{\epsilon}{3d}\right) $.
\end{enumerate}
\item $\tilde{\MM}_{SC} \leftarrow \tilde{\MM}_{SC} + \textsc{LastStep}\left(\MM^{(d)}, \left(F, C\right), \frac{\epsilon}{12}\right)$.
\item Return $\tilde{\MM}_{SC}$.
\end{enumerate}
\end{algbox}
\caption{Pseudocode for Computing Spectral Vertex Sparsifiers}
\label{fig:approxSchur}
\end{figure}
\begin{theorem}
\label{thm:aprox_schur}
Suppose that $\MM$ is $\alpha$-strongly diagonally dominant and $0<\epsilon<1$, then
$\textsc{ApproxSchur}$ returns
a matrix $\tilde{\MM}_{SC}$ with
$O \left( m \left( \epsilon^{-1} \log_{\alpha}\left(\epsilon^{-1} \right) \right)^{O \left(\log_{\alpha}\left(\epsilon^{-1} \right) \right)} \right)$ non-zeros such that
\[
\tilde{\MM}_{SC} \approx_{\epsilon} \schur{\MM}{F}
\]
in $O\left( m \left( \epsilon^{-1} \log_{\alpha}\left(\epsilon^{-1} \right) \right)^{O \left(\log_{\alpha}\left(\epsilon^{-1} \right) \right)} \right)$ work
and $O\left(\log_{\alpha}\left(\epsilon^{-1} \right) \log(n) \right)$ depth.
\end{theorem}
\begin{proof}
Let $\tilde{\MM}^{(i)}_{SC}$ denote the $\tilde{\MM}_{SC}$ after
$i$ steps of the main loop in \textsc{ApproxSchur}.
We will show by induction that at each $i$,
\[
\schur{\MM}{F} \approx_{\frac{\epsilon i}{3d}} \tilde{\MM}^{(i)}_{SC} + \schur{\MM^{(i)}}{F}.
\]
The base case of $i = 0$ clearly holds.
For the inductive case, suppose we have the result for some $i$. Then, by Lemma~\ref{lem:splitting},
\[
\schur{\MM}{F} \approx_{\frac{\epsilon i}{3d}} \tilde{\MM}^{(i)}_{SC}
+ \frac{1}{2} \left( \schur{\MM^{(i)}_1}{F} + \schur{\MM^{(i)}_2}{F} \right).
\]
Lemma~\ref{lem:schurDiag} gives
\begin{equation} \label{eqn:approx_schur_eq1}
\tilde{\MM}^{(i + 1)}_{SC}
= \tilde{\MM}^{(i)}_{SC} + \frac{1}{2} \textsc{ApproxSchurDiag}\left(\MM^{\left(i \right)}_{1}, (F,C), \frac{\epsilon}{3d} \right)
\approx_{\frac{\epsilon}{3d}} \tilde{\MM}^{(i)}_{SC} + \frac{1}{2} \schur{\MM^{(i)}_1}{F},
\end{equation}
while Lemma~\ref{lem:squareSparsify} gives
\[
\MM^{(i + 1)} \approx_{\frac{\epsilon}{3d}} \frac{1}{2} \MM^{(i)}_2,
\]
which combined with the preservation of Loewner ordering
from Fact~\ref{fact:schurLoewner} gives
\begin{equation} \label{eqn:approx_schur_eq2}
\schur{\MM^{(i + 1)}}{F} \approx_{\frac{\epsilon}{3d}} \frac{1}{2} \schur{\MM^{(i)}_2}{F}.
\end{equation}
Combining these two bounds~\eqref{eqn:approx_schur_eq1} and~\eqref{eqn:approx_schur_eq2} then gives:
\[
\tilde{\MM}^{(i)}_{SC}
+ \frac{1}{2} \left( \schur{\MM^{(i)}_1}{F} + \schur{\MM^{(i)}_2}{F} \right)
\approx_{\frac{\epsilon}{3d}} \tilde{\MM}^{(i + 1)}_{SC} + \schur{\MM^{(i + 1)}}{F}.
\]
Hence, the inductive hypothesis holds for $i + 1$ as well.
By Lemmas~\ref{lem:improve} and~\ref{lem:sparsifyOk},
we have that $\MM^{(d)}_{FF}$ is $12\epsilon^{-1}$-strongly diagonally dominant
at the last step.
Lemma~\ref{lem:lastStep} then gives
\[
\schur{\MM^{(d)}}{F} \approx_{\frac{1}{3} \epsilon} \textsc{LastStep}\left(\MM^{(d)}, \left(F, C\right), \frac{\epsilon}{12}\right).
\]
Composing this bound with the guarantees of the iterations
then gives the bound on overall error.
The work of these steps and the size of the output graph
follow from Lemmas~\ref{lem:schurDiag} and~\ref{lem:squareSparsify}.
\end{proof}
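The loop invariant in this proof holds exactly when no sparsification is performed. The dense sketch below (Python with numpy; a simplification of ours that forms $\MM_1$ and $\MM_2$ exactly instead of calling the sparsifying subroutines) runs several splitting steps on a random SDDM matrix and checks the invariant after each one:

```python
import numpy as np

def schur(M, f, c):
    return (M[np.ix_(c, c)]
            - M[np.ix_(c, f)] @ np.linalg.inv(M[np.ix_(f, f)]) @ M[np.ix_(f, c)])

def split(M, f, c):
    """Exact (dense) version of one splitting step: returns (M1, M2)."""
    DFF = np.diag(np.diag(M[np.ix_(f, f)]))
    AFF = DFF - M[np.ix_(f, f)]
    MFC = M[np.ix_(f, c)]; MCF = M[np.ix_(c, f)]; MCC = M[np.ix_(c, c)]
    Dinv = np.linalg.inv(DFF)
    I = np.eye(len(f))
    shift = np.diag(MCF @ Dinv @ MFC @ np.ones(len(c)))
    M1 = np.block([[DFF, MFC], [MCF, shift]])
    M2 = np.block([[DFF - AFF @ Dinv @ AFF, (I + AFF @ Dinv) @ MFC],
                   [MCF @ (I + Dinv @ AFF), 2 * MCC - shift]])
    return M1, M2

rng = np.random.default_rng(4)
n = 8
W = rng.uniform(0.0, 1.0, (n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
M = np.diag(W.sum(axis=1)) - W + 0.1 * np.eye(n)
f = list(range(4)); c = list(range(4, n))

target = schur(M, f, c)
acc = np.zeros((len(c), len(c)))
for _ in range(5):
    M1, M2 = split(M, f, c)
    acc = acc + 0.5 * schur(M1, f, c)
    M = 0.5 * M2
    # Exact version of the loop invariant from the proof above.
    assert np.allclose(target, acc + schur(M, f, c))
```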
\ifthenelse{\boolean{@full}}{
In our invocations to this routine, both $\alpha$ and $\epsilon$ will be set to constants.
As a result, this procedure is theoretically $O(m)$ time.
For a spectral vertex sparsification algorithm for handling general graph Laplacians,
$\alpha$ can be $0$ and we need to invoke spectral sparsifiers on the intermediate matrices after each step.
Any parallel algorithm for spectral sparsification
(e.g.~\cite{SpielmanT11,SpielmanS08:journal,OrecchiaV11,Koutis14})
will then lead to nearly linear work and polylog depth.
\begin{corollary}
Given a SDDM matrix with condition number $\kappa$,
a partition of the vertices into $(F, C)$, and error $\epsilon > 0$, we can compute in $O\left(m \log^{O\left(1\right)} (n \kappa \epsilon^{-1}) \right)$ work
and $O\left( \log^{O\left(1\right)} (n \kappa \epsilon^{-1}) \right)$ depth a matrix
$\tilde{\MM}_{SC}$ with $O\left(n\log^{O(1)}n \epsilon^{-2} \right)$ non-zeros such that
\[
\tilde{\MM}_{SC} \approx_{\epsilon} \schur{\MM}{F}.
\]
\end{corollary}
\begin{proof}
We can add $\frac{\epsilon \trace{\MM} }{n\kappa}$ to each element on the diagonal to
obtain $\MM' \approx_{\epsilon} \MM$.
Therefore it suffices to assume that $\MM_{FF}$ is $\frac{1}{\text{poly}(n) \kappa}$-strongly diagonally dominant.
Theorem~\ref{thm:aprox_schur} then gives that \textsc{ApproxSchur} terminates
in $d = O(\log{\kappa} + \log{n})$ steps.
If we invoke a spectral sparsification algorithm at each step, the
number of non-zeros in each $\MM^{(i)}$ can be bounded by
$O(n \log^{O(1)}n (\epsilon/d)^{-2}) = O(n \log^{O\left(1\right)} (n \kappa \epsilon^{-1}) )$.
The overall work bound then follows from combining this with the
$\text{poly}(\epsilon^{-1} d)$ increase in edge count at each step,
and the nearly-linear work guarantees of spectral sparsification algorithms.
\end{proof}
We remark that the setting of $\epsilon_i = 1/\log{\kappa}$ leads to a fairly
large number of log factors.
In the rest of this paper we only invoke spectral vertex sparsifiers with
moderate values of $\epsilon_i$ (unless we're at graphs that are smaller by
$\text{poly}(n)$ factors).
Also, we believe recent developments in faster combinatorial spectral sparsification
algorithms~\cite{Koutis14} make faster algorithms for spectral vertex sparsifiers
a question beyond the scope of this paper.
}{}
\section{Weighted Expander Constructions}
\label{sec:weightedExp}
\def\edg#1{\pmb{\boldsymbol{[}} #1 \pmb{\boldsymbol{]}}}
\def\edgu#1{\pmb{\boldsymbol{(}} #1 \pmb{\boldsymbol{)}}}
In this section, we give a linear time algorithm for computing linear
sized spectral sparsifiers of complete and bipartite
product demand graphs.
Recall that the \textit{product demand graph} with vertex set $V$ and demands $\dd : V \rightarrow \mathbb{R}_{> 0}$
is the complete graph
in which the weight of edge $(u,v)$ is the product $d_{u} d_{v}$.
Similarly, the \textit{bipartite demand graph} with vertex set $U \cup V$
and demands $\dd : U \cup V \rightarrow \mathbb{R}_{> 0}$ is the
complete bipartite graph on which the weight of the edge $(u,v)$ is the product $d_{u} d_{v}$.
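It can be checked that the Laplacian of the product demand graph has the rank-one form $\LL_{G(\dd)} = (\mathbf{1}^{T} \dd) \, \textsc{diag}(\dd) - \dd \dd^{T}$; the sketch below (Python with numpy, with an arbitrary demand vector of our choosing) verifies this against the edge-by-edge construction:

```python
import numpy as np

d = np.array([1.0, 2.0, 0.5, 3.0])

# Build the product demand graph Laplacian edge by edge.
W = np.outer(d, d)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Closed form: L = (1^T d) diag(d) - d d^T.
L_closed = d.sum() * np.diag(d) - np.outer(d, d)
assert np.allclose(L, L_closed)
```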
Our routines are based on reductions to the unweighted, uniform case.
In particular, we
\begin{itemize}
\item [1.] Split all of the high demand vertices into many vertices that all have the same demand.
This demand will still be the highest.
\item [2.] Given a graph in which almost all of the vertices have the same highest demand,
we \begin{itemize}
\item [a.] drop all of the edges between vertices of lower demand,
\item [b.] replace the complete graph between the vertices of highest demand with an expander, and
\item [c.] replace the bipartite graph between the high and low demand vertices with
a union of stars.
\end{itemize}
\item [3.] To finish, we merge back together the vertices that split off from each original vertex.
\end{itemize}
We start by showing how to construct the expanders that we need for step (2b).
We state formally and analyze the rest of the algorithm for the
complete case in the following two sections.
We explain how to handle the bipartite case in Section \ref{subsec:bipartite}.
Expanders give good approximations to unweighted complete graphs,
and our constructions will use the spectrally best expanders---Ramanujan graphs.
These are defined in terms of the eigenvalues of their adjacency matrices.
We recall that the adjacency matrix of every $d$-regular graph has eigenvalue $d$
with multiplicity $1$ corresponding to the constant eigenvector.
If the graph is bipartite, then it also has an eigenvalue of $-d$ corresponding
to an eigenvector that takes value $1$ on one side of the bipartition and $-1$
on the other side.
These are called the \textit{trivial} eigenvalues.
A $d$-regular graph is called a Ramanujan graph if all of its non-trivial eigenvalues
have absolute value at most $2 \sqrt{d-1}$.
Ramanujan graphs were constructed independently by Margulis~\cite{Margulis88}
and Lubotzky, Phillips, and Sarnak~\cite{LPS}.
The following theorem and proposition summarize part of their results.
\begin{theorem}\label{thm:LPS}
Let $p$ and $q$ be unequal primes congruent to $1$ modulo 4.
If $p$ is a quadratic residue modulo $q$, then there is a non-bipartite
Ramanujan graph of degree $p+1$ with $q^{2} (q-1)/2$ vertices.
If $p$ is not a quadratic residue modulo $q$, then there is a bipartite
Ramanujan graph of degree $p+1$ with $q^{2} (q-1)$ vertices.
\end{theorem}
The construction is explicit.
\begin{proposition}\label{pro:LPS}
If $p < q$, then
the graph guaranteed to exist by Theorem~\ref{thm:LPS} can be constructed in
parallel depth $O (\log n)$ and work $O (n)$, where $n$ is its number of vertices.
\end{proposition}
\begin{proof}[Sketch of proof.]
When $p$ is a quadratic residue modulo $q$, the graph is a Cayley graph of
$\mathit{PSL}(2, \mathbb{Z}/q\mathbb{Z})$.
In the other case, it is a Cayley graph of $\mathit{PGL}(2, \mathbb{Z}/q\mathbb{Z})$.
In both cases, the generators are determined by the $p+1$ solutions
to the equation $p = a_{0}^{2} + a_{1}^{2} + a_{2}^{2} + a_{3}^{2}$
where $a_{0} > 0$ is odd and $a_{1}, a_{2}$, and $a_{3}$ are even.
Clearly, all of the numbers $a_{0}$, $a_{1}$, $a_{2}$ and $a_{3}$
must be at most $\sqrt{p}$.
So, we can compute a list of all sums $a_{0}^{2} + a_{1}^{2}$
and all of the sums $a_{2}^{2} + a_{3}^{2}$
with work $O (p)$, and thus a list of all $p+1$
solutions with work $O (p^{2}) < O (n)$.
As the construction requires arithmetic modulo $q$, it is convenient
to compute the entire multiplication table modulo $q$.
This takes time $O (q^{2}) < O (n)$.
The construction also requires the computation of a square root of $-1$
modulo $q$, which may be computed from the multiplication table.
Given this data, the list of edges attached to each vertex of the graph
may be produced using linear work and logarithmic depth.
\end{proof}
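To make the generator computation concrete, the following Python sketch (the function name is ours, not part of the paper) enumerates the $p+1$ solutions described above; by Jacobi's four-square theorem, a prime $p \equiv 1 \pmod 4$ has exactly $p+1$ representations with $a_0 > 0$ odd and $a_1, a_2, a_3$ even.

```python
import math

def lps_generators(p):
    """Enumerate the p + 1 solutions to p = a0^2 + a1^2 + a2^2 + a3^2
    with a0 > 0 odd and a1, a2, a3 even; these index the generators
    of the LPS Cayley graph of degree p + 1."""
    r = math.isqrt(p)
    sols = []
    for a0 in range(1, r + 1, 2):          # a0 > 0 and odd
        for a1 in range(-r, r + 1):
            if a1 % 2:                     # a1 must be even
                continue
            for a2 in range(-r, r + 1):
                if a2 % 2:                 # a2 must be even
                    continue
                rest = p - a0 * a0 - a1 * a1 - a2 * a2
                if rest < 0:
                    continue
                a3 = math.isqrt(rest)
                if a3 * a3 == rest and a3 % 2 == 0:
                    sols.append((a0, a1, a2, a3))
                    if a3 != 0:            # both signs of a3
                        sols.append((a0, a1, a2, -a3))
    return sols
```

For $p = 5$ this yields the $6$ generators of the corresponding degree-$6$ LPS graph, and the brute-force enumeration uses only $O(p)$ candidate values per coordinate.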
For our purposes, there are three obstacles to using these graphs:
\begin{itemize}
\item [1.] They do not come in every degree.
\item [2.] They do not come in every number of vertices.
\item [3.] Some are bipartite and some are not.
\end{itemize}
We handle the first two issues by observing that the primes
congruent to 1 modulo 4 are sufficiently dense.
To address the third issue, we give a procedure to convert a non-bipartite expander into a bipartite expander, and \textit{vice versa}.
An upper bound on the gaps between consecutive primes congruent to 1 modulo 4 can
be obtained from the following theorem of Tchudakoff.
\begin{theorem}[\cite{Tchudakoff}]
For two integers $a$ and $b$, let
$p_{i}$ be the $i$th prime congruent to $a$ modulo $b$.
For every $\epsilon > 0$,
\[
p_{i+1} - p_{i} \leq O (p_{i}^{3/4 + \epsilon }).
\]
\end{theorem}
\begin{corollary}\label{cor:tchudakoff}
There exists an $n_{0}$ so that for all $n \geq n_{0}$
there is a prime congruent to 1 modulo 4 between $n$ and $2 n$.
\end{corollary}
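The corollary is easy to check empirically for moderate $n$; the following sketch (function names are ours) finds such a prime by trial division.

```python
def is_prime(m):
    """Trial-division primality test; adequate for small m."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def prime_1_mod_4_between(n):
    """Return a prime congruent to 1 mod 4 in [n, 2n], or None."""
    for p in range(n, 2 * n + 1):
        if p % 4 == 1 and is_prime(p):
            return p
    return None
```

The test below confirms that such a prime exists for every $n$ in a moderate range (small $n$, such as $n = 6$, can fail, which is why the corollary only asserts existence for $n \geq n_0$).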
We now explain how we convert between bipartite and non-bipartite expander graphs.
To convert a non-bipartite expander into a bipartite expander, we take its double-cover.
We recall that if $G = (V,E)$ is a graph with adjacency matrix $\AA$, then its double-cover
is the graph with adjacency matrix
\[
\begin{pmatrix}
0 & \AA \\
\AA^{T} & 0
\end{pmatrix}.
\]
It is immediate from this construction that the eigenvalues of the adjacency matrix
of the double-cover
are the union of the eigenvalues of $\AA$ with the eigenvalues of $-\AA$.
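A minimal sketch of the double-cover construction, assuming a simple adjacency-list input (the representation is ours): vertex $i$ is split into copies $i$ and $i+n$, and each edge $(i,j)$ becomes the two edges $(i, j+n)$ and $(j, i+n)$. The resulting graph is bipartite and preserves the degree.

```python
def double_cover(adj):
    """Given the adjacency lists of a graph on vertices 0..n-1,
    return (n, edges) where edges is the edge list of the double
    cover on 2n vertices: each edge (i, j) of the original graph
    becomes the two edges (i, j + n) and (j, i + n)."""
    n = len(adj)
    edges = []
    for i in range(n):
        for j in adj[i]:
            if i < j:                      # each undirected edge once
                edges.append((i, j + n))
                edges.append((j, i + n))
    return n, edges
```

For example, the double cover of a triangle is the $6$-cycle: every edge crosses the bipartition $\{0,1,2\} \cup \{3,4,5\}$ and every vertex keeps degree $2$.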
\begin{proposition}\label{pro:doubleCover}
Let $G$ be a connected, $d$-regular graph in which all adjacency matrix eigenvalues
other than $d$ are bounded in absolute value by $\lambda$.
Then, all non-trivial adjacency matrix eigenvalues of the double-cover of $G$
are also bounded in absolute value by $\lambda$.
\end{proposition}
To convert a bipartite expander into a non-bipartite expander, we will simply
collapse the two vertex sets onto one another.
If $G = (U \cup V, E)$ is a bipartite graph,
we specify how the vertices of $V$ are mapped onto $U$ by a permutation $\pi : V \rightarrow U$.
We then define the \textit{collapse} of $G$ induced by $\pi$
to be the graph with vertex set $U$
and edge set
\[
\setof{ (u, \pi (v)) : (u,v) \in E }.
\]
Note that the collapse will have self-loops at vertices $u$ for which $(u,v) \in E$
and $u = \pi (v)$.
We assign a weight of $2$ to every self loop.
When a double-edge would be created, that is when $(\pi (v), \pi^{-1} (u))$ is also an edge in the graph,
we give the edge a weight of $2$.
Thus, the collapse can be a weighted graph.
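The collapse, with the weight-$2$ convention for self-loops and doubled edges, can be sketched as follows (function names and the dictionary representation are ours). The test checks the weighted-degree claim on a collapse of $K_{2,2}$, where $d = 2$ and every collapsed vertex ends up with weighted degree $2d = 4$.

```python
def collapse(edges, pi):
    """Collapse of a bipartite graph: map each right-side vertex v
    onto pi[v] and accumulate edge weights.  A self-loop created from
    an edge (u, v) with pi[v] == u is assigned weight 2, and two edges
    mapping to the same pair accumulate to weight 2, as in the text."""
    w = {}
    for (u, v) in edges:
        x, y = u, pi[v]
        if x == y:
            w[(x, x)] = w.get((x, x), 0) + 2   # self-loop gets weight 2
        else:
            key = (min(x, y), max(x, y))
            w[key] = w.get(key, 0) + 1         # doubled edges sum to 2
    return w
```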
\begin{proposition}\label{pro:collapse}
Let $G$ be a $d$-regular bipartite graph with all non-trivial adjacency matrix eigenvalues
bounded by $\lambda$, and let $H$ be a collapse of $G$.
Then, every vertex in $H$ has weighted degree $2d$
and all adjacency matrix eigenvalues of $H$ other than $2d$ are bounded in absolute value by $2 \lambda$.
\end{proposition}
\begin{proof}
To prove the bound on the eigenvalues, let $G$ have adjacency matrix
\[
\begin{pmatrix}
0 & \AA \\
\AA^{T} & 0
\end{pmatrix}.
\]
After possibly rearranging rows and columns, we may assume that
the adjacency matrix of the collapse is given by
\[
\AA + \AA^{T}.
\]
Note that the self-loops, if they exist, correspond to diagonal entries of value $2$.
Now, let $\xx$ be a unit vector orthogonal to the all-1s vector.
We have
\[
\xx^{T} (\AA + \AA^{T}) \xx
=
\begin{pmatrix}
\xx
\\
\xx
\end{pmatrix}^{T}
\begin{pmatrix}
0 & \AA \\
\AA^{T} & 0
\end{pmatrix}
\begin{pmatrix}
\xx
\\
\xx
\end{pmatrix}
\leq
\lambda \norm{
\begin{pmatrix}
\xx
\\
\xx
\end{pmatrix}
}^{2}
\leq
2 \lambda ,
\]
as the vector $[\xx ;\xx]$ is orthogonal to the eigenvectors of the trivial
eigenvalues of the adjacency matrix of $G$.
\end{proof}
We now state how bounds on the eigenvalues of the adjacency matrices of graphs
lead to approximations of complete graphs and complete bipartite graphs.
\begin{proposition}\label{pro:expanderApprox}
Let $G$ be a graph, possibly with self-loops and weighted edges,
such that every vertex of $G$ has weighted degree $d$ and
such that all non-trivial eigenvalues of the adjacency matrix of $G$
have absolute value at most $\lambda \leq d/2$.
If $G$ is not bipartite and has $n$ vertices, then
$(n/d) \LL_{G}$ is an $\epsilon$-approximation of $K_{n}$ for $\epsilon = (2 \ln 2) \lambda /d$.
If $G$ is bipartite with $n$ vertices on each side of the bipartition, then
$(n/d) \LL_{G}$ is an $\epsilon$-approximation of $K_{n,n}$ for $\epsilon = (2 \ln 2) \lambda /d$.
\end{proposition}
\begin{proof}
Let $\AA$ be the adjacency matrix of $G$.
Then,
\[
\LL_{G} = d \II - \AA .
\]
In the non-bipartite case, we observe that all of the non-zero eigenvalues
of $\LL_{K_{n}}$ are $n$,
so for all vectors $x$ orthogonal to the constant vector,
\[
x^{T} \LL_{K_{n}} x = n x^{T}x.
\]
As all of the non-zero eigenvalues of $\LL_{G}$
are between $d - \lambda$ and $d + \lambda$,
for all vectors $x$ orthogonal to the constant vector
\[
n \left(1-\frac{\lambda }{d} \right) x^{T} x
\leq x^{T} (n/d)\LL_{G} x
\leq
n \left(1+\frac{\lambda }{d} \right) x^{T} x.
\]
Thus,
\begin{equation*}
\left(1-\frac{\lambda }{d} \right) \LL_{K_{n}}
\preccurlyeq (n/d) \LL_{G} \preccurlyeq
\left(1+\frac{\lambda }{d} \right) \LL_{K_{n}} .
\end{equation*}
In the bipartite case, we naturally assume that the bipartition is the same in both $G$ and $K_{n,n}$.
Now, let $\xx$ be any vector on the vertex set of $G$.
Both the graphs $K_{n,n}$ and $(n/d) G$ have Laplacian matrix eigenvalue
$0$ with the constant eigenvector, and eigenvalue $2 n$ with eigenvector
$[\bvec{1};-\bvec{1}]$.
The other eigenvalues of the Laplacian of $K_{n,n}$ are $n$, while the
other eigenvalues of the Laplacian of $(n/d) G$ are between
\[
n \left(1 - \frac{\lambda}{d} \right)
\quad \text{and} \quad
n \left(1 + \frac{\lambda}{d} \right).
\]
Thus,
\[
\left(1-\frac{\lambda }{d} \right) \LL_{K_{n,n}}
\preccurlyeq (n/d) \LL_{G} \preccurlyeq
\left(1+\frac{\lambda }{d} \right) \LL_{K_{n,n}} .
\]
The proposition now follows from our choice of $\epsilon$, which guarantees that
\[
e^{-\epsilon} \leq 1 - \lambda /d
\quad \text{and} \quad 1 + \lambda /d
\leq e^{\epsilon},
\]
provided that $\lambda /d \leq 1/2$.
\end{proof}
\begin{lemma}
\label{lem:explicitExpanders}
There are algorithms that on input $n$ and $\epsilon > n^{-1/6}$
produce a graph having $O (n/\epsilon^{2})$ edges that is an
$O (\epsilon)$ approximation of $K_{n'}$ or $K_{n',n'}$
for some $n \leq n' \leq 8n$.
These algorithms run in $O (\log n)$ depth and $O (n / \epsilon^{2})$ work.
\end{lemma}
\begin{proof}
We first consider the problem of constructing an approximation of $K_{n', n'}$.
By Corollary~\ref{cor:tchudakoff} there
is a constant $n_{0}$ so that if $n > n_{0}$, then
there is a prime $q$ that is congruent to
$1$ modulo $4$ such that $q^{2} (q-1)$ is between $n$ and $8 n$.
Let $q$ be such a prime and let $n' = q^{2} (q-1)$.
Similarly, for $\epsilon$ sufficiently small, there is a prime $p$
congruent to $1$ modulo $4$ that is between
$\epsilon^{-2}/2$ and $\epsilon^{-2}$.
Our algorithm should construct the corresponding Ramanujan graph, as described
in Theorem~\ref{thm:LPS} and Proposition~\ref{pro:LPS}.
If the graph is bipartite, then Proposition~\ref{pro:expanderApprox} tells us
that it provides the desired approximation of $K_{n',n'}$.
If the graph is not bipartite, then we form its double cover to obtain
a bipartite graph and use Proposition~\ref{pro:doubleCover}
and Proposition~\ref{pro:expanderApprox} to see that it provides the desired
approximation of $K_{n',n'}$.
The non-bipartite case is similar, except that we require a prime $q$
so that $q^{2} (q-1)/2$ is between $n$ and $8 n$, and we use
a collapse to convert a bipartite expander to a non-bipartite one,
as analyzed in Proposition~\ref{pro:collapse}.
\end{proof}
In Section \ref{sec:depth}, we just need to know that there exist
graphs of low degree that are good approximations of
complete graphs.
We may obtain them from the recent theorem of Marcus, Spielman and Srivastava
that there exist bipartite Ramanujan graphs of every degree and number of vertices
\cite{IF4}.
\begin{lemma}
\label{lem:existExpanders}
For every integer $n$ and even integer $d$,
there is a weighted graph on $n$ vertices of degree at most
$d$ that is a $4 / \sqrt{d} $ approximation
of $K_{n}$.
\end{lemma}
\begin{proof}
The main theorem of \cite{IF4} tells us that there is a bipartite Ramanujan
graph on $2n$ vertices of degree $k$ for every $k \leq n$.
By Propositions \ref{pro:collapse} and \ref{pro:expanderApprox},
a collapse of this graph
is a weighted graph on $n$ vertices of degree at most $2k$
that is a $(4 \ln 2)/\sqrt{k}$ approximation of $K_{n}$.
The result now follows by setting $d = 2k$, since $(4 \ln 2)/\sqrt{k} \leq 4/\sqrt{d}$.
\end{proof}
\subsection{Sparsifying Complete Product Demand Graphs}
\label{subsec:complete}
Our algorithm for sparsifying complete product demand graphs begins by
splitting the vertices of highest demands into many vertices.
By \textit{splitting} a vertex, we mean replacing it by many
vertices whose demands sum to its original demand.
In this way, we obtain a larger product demand graph.
We observe that we can obtain a sparsifier of the original graph by
sparsifying the larger graph, and then collapsing back together
the vertices that were split.
\begin{proposition}\label{pro:splitProduct}
Let $G$ be a product demand graph with vertex set
$\setof{1, \dots ,n}$
and demands $\dd$,
and let $\Ghat = (\Vhat, \Ehat )$ be a product demand graph with
demands $\ddhat$.
If there is a partition of $\Vhat$ into sets $S_{1}, \dots , S_{n}$
so that for all $i \in V$, $\sum_{j \in S_{i}} \hat{d}_{j} = d_{i}$,
then $\Ghat$ is a \textit{splitting} of $G$ and there is a matrix
$\MM$ so that
\[
\LL_{G} = \MM \LL_{\Ghat} \MM^{T}.
\]
\end{proposition}
\begin{proof}
The $(i,j)$ entry of matrix $\MM$ is $1$ if and only if $j \in S_{i}$.
Otherwise, it is zero.
\end{proof}
We now show that we can sparsify $G$ by sparsifying $\Ghat$.
\begin{proposition}
\label{pro:collapseLoewner}
Let $\Ghat_{1}$ and $\Ghat_{2}$ be graphs on the same vertex set $\Vhat$ such
that $\Ghat _{1}\approx_{\epsilon}\Ghat _{2}$ for some $\epsilon$.
Let $S_{1}, \dots , S_{n}$ be a partition of $\Vhat$, and let $G_{1}$
and $G_{2}$ be the graphs obtained by collapsing together all the
vertices in each set $S_{i}$ and eliminating any self loops that are
created.
Then
\[
G_{1}\approx_{\epsilon}G_{2}.
\]
\end{proposition}
\begin{proof}
Let $\MM$ be the matrix introduced in Proposition \ref{pro:splitProduct}.
Then,
\[
\LL_{G_{1}} = \MM \LL_{\Ghat_{1}} \MM^{T} \quad \text{and} \quad
\LL_{G_{2}} = \MM \LL_{\Ghat_{2}} \MM^{T}.
\]
The proof now follows from Fact \ref{fact:orderCAC}.
\end{proof}
For distinct vertices $i$ and $j$, we let $\edgu{i,j}$ denote the graph with an edge of weight $1$ between vertex $i$ and vertex $j$.
If $i = j$, we let $\edgu{i,j}$ be the empty graph.
With this notation, we can express the product demand graph
as
\[
\sum_{i < j} d_{i} d_{j} \edgu{i,j}
=
\frac{1}{2} \sum_{i,j \in V} d_{i} d_{j}\edgu{i,j}.
\]
This notation also allows us to precisely express our algorithm for sparsifying
product demand graphs.
\begin{algbox}
$G'=\textsc{WeightedExpander}(\dd,\epsilon)$
\begin{enumerate}
\item Let $\nhat$ be the least integer greater than
$2 n / \epsilon^{2}$ such that the algorithm described in Lemma \ref{lem:explicitExpanders}
produces an $\epsilon$-approximation of $K_{\nhat}$.
\item Let $t = \frac{\sum_{k}d_{k}}{\nhat}$.
\item Create a new product demand graph $\Ghat$ with demand vector $\hat{\dd}$
by
splitting each vertex $i$ into a set of $\ceil{d_{i}/t}$ vertices, $S_i$:
\begin{enumerate}
\item $\floor{d_{i}/t}$ vertices with demand $t$.
\item one vertex with demand $d_{i} - t \floor{d_{i}/t}$.
\end{enumerate}
\item Let $H$ be a set of $\nhat$ vertices in $\Ghat$ with demand $t$,
and let $L$ contain the other vertices. Set $k = \sizeof{L}$.
\item
Partition $H$ arbitrarily into sets $V_{1}, \dots , V_{k}$, so that
$\sizeof{V_{i}} \geq \floor{\nhat / k}$ for all $1 \leq i \leq k$.
\item
Use the algorithm described in Lemma \ref{lem:explicitExpanders} to
produce $\tilde{K}_{HH}$, an $\epsilon$-approximation of the complete graph on $H$.
Set
\[
\widetilde{G} = t^2 \tilde{K}_{HH} + \sum_{l \in L}
\frac{\sizeof{H}}{\sizeof{V_{l}}} \sum_{h \in V_{l}}
\dhat_{l} \dhat_{h} \edgu{l,h}.
\]
\item Let $G'$ be the graph obtained by collapsing together all vertices
in each set $S_{i}$.
\end{enumerate}
\end{algbox}
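Step (3) of \textsc{WeightedExpander} can be sketched as follows (a hypothetical helper, not the paper's code); it splits each demand $d_i$ into $\floor{d_i/t}$ units of demand $t$ plus one remainder vertex, preserving the total demand and creating at most $n + \sum_i d_i / t$ vertices.

```python
def split_demands(d, t):
    """Split each demand d[i] into floor(d[i]/t) units of demand t
    plus one vertex carrying the remainder (omitted when zero).
    Returns one group of new demands per original vertex."""
    groups = []
    for di in d:
        units = int(di // t)
        rem = di - t * units
        g = [t] * units
        if rem > 1e-12:            # skip a zero-demand remainder vertex
            g.append(rem)
        groups.append(g)
    return groups
```

The groups returned here play the role of the sets $S_i$ that are collapsed back together in step (7).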
This section and the next are devoted to the analysis of this algorithm.
Given Proposition~\ref{pro:collapseLoewner}, we just need to show that
$\widetilde{G}$ is a good approximation to $\Ghat$.
\begin{proposition}\label{pro:numVertsAfterSplit}
The number of vertices in $\Ghat$ is at most $n + \nhat$.
\end{proposition}
\begin{proof}
The number of vertices in $\Ghat$ is
\[
\sum_{i \in V} \ceil{d_{i} / t}
\leq
n + \sum_{i \in V} d_{i} / t
=
n + \nhat .
\]
\end{proof}
So, $k \leq n$ and $\nhat \geq 2 k / \epsilon^{2}$.
That is, $\sizeof{H} \geq 2 \sizeof{L} / \epsilon^{2}$.
In the next section, we prove the lemmas that show that for these special product demand graphs $\Ghat $ in which
almost all weights are the maximum,
our algorithm produces a graph $\widetilde{G}$ that is a good approximation of $\Ghat$.
\begin{theorem}
\label{thm:expanderFull}
Let $0 < \epsilon < 1$ and
let $G$ be a product demand graph with $n$ vertices and demand vector $\dd$.
Given $\dd$ and $\epsilon$ as input, \textsc{WeightedExpander} produces
a graph $G'$ with $O (n / \epsilon^{4})$ edges that is
an $O (\epsilon)$ approximation of $G$.
Moreover, \textsc{WeightedExpander} runs in $O (\log n)$ depth
and $O (n / \epsilon^{4})$ work.
\end{theorem}
\begin{proof}
The number of vertices in the graph $\Ghat$ will be between
$n + 2 n / \epsilon^{2}$ and $n + 16 n / \epsilon^{2}$.
So, the algorithm described in Lemma \ref{lem:explicitExpanders} will take
$O (\log n)$ depth and $O (n / \epsilon^{4})$ work to produce an
$\epsilon$ approximation of the complete graph on $\nhat$ vertices.
This dominates the computational cost of the algorithm.
Proposition
\ref{pro:collapseLoewner} tells us that
$G'$ approximates $G$ at least as well as $\widetilde{G}$ approximates
$\Ghat$.
To bound how well $\widetilde{G}$ approximates $\Ghat$,
we use two lemmas that are stated in the next section.
Lemma \ref{lemma:light_vertex_not_important} shows
that
\[
\Ghat_{HH} + \Ghat_{LH} \approx_{O(\epsilon^{2})} \Ghat .
\]
Lemma \ref{lem:replaceLH} shows that
\[
\Ghat_{HH} + \Ghat_{LH}
\approx_{4 \epsilon}
\Ghat_{HH} + \sum_{l \in L}
\frac{\sizeof{H}}{\sizeof{V_{l}}} \sum_{h \in V_{l}}
\dhat_{l} \dhat_{h} \edgu{l,h}.
\]
And, we already know that $t^2 \tilde{K}$ is an $\epsilon$-approximation of
$\Ghat_{HH}$.
Fact \ref{frac:orderComposition} says that we can combine these three approximations to conclude that
$\widetilde{G}$ is an $O (\epsilon)$-approximation of $\Ghat$.
\end{proof}
\subsection{Product demand graphs with most weights maximal}
In this section, we consider product demand graphs in which almost all weights are the maximum.
For simplicity, we make a slight change of notation from the previous section.
We drop the hats, we let $n$ be the number of vertices in the product demand graph,
and we order the demands so that
\[
d_{1} \leq d_{2} \leq \dots \leq d_{k} \leq d_{k+1} = \dots = d_{n} = 1.
\]
We let $L = \setof{1, \dots , k}$ and $H = \setof{k+1, \dots , n}$
be the set of low and high demand vertices, respectively.
Let $G$ be the product demand graph corresponding to $\dd$, and let
$G_{LL}$, $G_{HH}$ and $G_{LH}$ be the subgraphs containing the
low-low, high-high and low-high edges, respectively.
We now show that little is lost by dropping the edges in $G_{LL}$
when $k$ is small.
Our analysis will make frequent use of the
following Poincar\'e inequality:
\begin{lemma} \label{lemma:poincare}Let
$c \edgu{u,v}$ be an edge of weight $c$ and let $P$ be a path from
$u$ to $v$
consisting of edges of weights $c_{1},c_{2},\cdots,c_{k}$.
Then
\[
c \edgu{u,v} \preceq c\left(\sum c_{i}^{-1}\right)P.
\]
\end{lemma}
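This inequality can be sanity-checked numerically by comparing Laplacian quadratic forms on random vectors; the sketch below (pure Python, names ours) does so for a single edge of weight $c$ against a path with weights $c_1, \dots, c_k$.

```python
import random

def path_dominates_edge(c, weights, trials=200):
    """Check x^T (c * (sum 1/c_i) * L_P - c * L_e) x >= 0 on random x,
    where L_e is the Laplacian of an edge of weight c between the
    endpoints of a path P whose edges have the given weights."""
    k = len(weights)
    coef = c * sum(1.0 / ci for ci in weights)
    for _ in range(trials):
        x = [random.uniform(-1, 1) for _ in range(k + 1)]
        edge_form = c * (x[0] - x[-1]) ** 2
        path_form = sum(ci * (x[i] - x[i + 1]) ** 2
                        for i, ci in enumerate(weights))
        if coef * path_form < edge_form - 1e-9:
            return False
    return True
```

The check always passes, since the lemma is an instance of the Cauchy--Schwarz inequality applied to the telescoping sum $x_u - x_v = \sum_i (x_i - x_{i+1})$.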
As the weights of the edges we consider in this section are determined
by the demands of their vertices,
we introduce the notation
\[
\edg{i,j} = d_{i} d_{j} \edgu{i,j}.
\]
With this notation, we can express the product demand graph
as
\[
\sum_{i < j} \edg{i,j}
=
\frac{1}{2} \sum_{i,j \in V} \edg{i,j}.
\]
\begin{lemma} \label{lemma:light_vertex_not_important}
If $\left|L\right|\leq\left|H\right|$,
then
\[
G_{HH}+G_{LH}\approx_{3\frac{\left|L\right|}{\left|H\right|}} G.
\]
\end{lemma}
\begin{proof}
The lower bound $G_{HH}+G_{LH}\preceq G_{HH}+G_{LH}+G_{LL}$
follows from $G_{LL}\succeq0$.
Using Lemma \ref{lemma:poincare} and the assumptions $d_{l} \leq 1$
for $l \in L$ and $d_{h} = 1$ for $h\in H$, we derive for every $l_{1}, l_{2} \in L$,
\begin{align*}
\edg{l_{1}, l_{2}}
& = \frac{1}{\left|H\right|^{2}}\sum_{h_{1},h_{2}\in H} \edg{l_{1}, l_{2}}\\
\intertext{\text{(by Lemma \ref{lemma:poincare})}}
& \preceq \frac{1}{\left|H\right|^{2}}
\sum_{h_{1},h_{2}\in H}d_{l_{1}}d_{l_{2}}
\left(\frac{1}{d_{l_{1}}d_{h_{1}}}+\frac{1}{d_{h_{1}}d_{h_{2}}}+\frac{1}{d_{h_{2}}d_{l_{2}}}\right)
\left(\edg{l_{1},h_{1}}+\edg{h_{1},h_{2}}+\edg{h_{2},l_{2}}\right)\\
& \preceq
\frac{3}{\left|H\right|^{2}} \sum_{h_{1},h_{2}\in H} \left(\edg{l_{1},h_{1}}+\edg{h_{1},h_{2}}+\edg{h_{2},l_{2}}\right)\\
& =
\frac{3}{\left|H\right|}\sum_{h\in H} \left(\edg{l_{1},h}+\edg{l_{2},h}\right)+\frac{6}{\left|H\right|^{2}}G_{HH}.
\end{align*}
So,
\begin{align*}
G_{LL} & =
\frac{1}{2} \sum_{l_{1}, l_{2} \in L} \edg{l_{1}, l_{2}}
\\
& \preceq \frac{1}{2}
\sum_{l_{1},l_{2}}\left(\frac{3}{\left|H\right|}\sum_{h\in H}
\left(\edg{l_{1},h}+\edg{l_{2},h}\right)+\frac{6}{\left|H\right|^{2}}G_{HH}\right)\\
& = \frac{3\left|L\right|}{\left|H\right|}G_{LH}+\frac{3\left|L\right|^{2}}{\left|H\right|^{2}}G_{HH}.
\end{align*}
The assumption $\sizeof{L} \leq \sizeof{H}$ then allows us to conclude
\[
G_{HH}+G_{LH}+G_{LL}\preceq\left(1+3\frac{\left|L\right|}{\left|H\right|}\right)\left(G_{HH}+G_{LH}\right).
\]
\end{proof}
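The conclusion of the lemma can likewise be checked numerically on small product demand graphs by comparing quadratic forms; the following sketch (function names ours) verifies both directions of the approximation on random vectors.

```python
import random

def qform(pairs, d, x):
    """Laplacian quadratic form of the product demand graph
    restricted to the given vertex pairs."""
    return sum(d[i] * d[j] * (x[i] - x[j]) ** 2 for (i, j) in pairs)

def check_drop_LL(d, k, trials=200):
    """d[0..k-1] are the low demands (each <= 1); d[k..] all equal 1.
    Check G_HH + G_LH <= G <= (1 + 3k/(n-k)) (G_HH + G_LH)
    as quadratic forms on random vectors x."""
    n = len(d)
    all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    kept = [(i, j) for (i, j) in all_pairs if j >= k]  # drop low-low pairs
    factor = 1 + 3 * k / (n - k)
    for _ in range(trials):
        x = [random.uniform(-1, 1) for _ in range(n)]
        full = qform(all_pairs, d, x)
        part = qform(kept, d, x)
        if not (part <= full + 1e-9 and full <= factor * part + 1e-9):
            return False
    return True
```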
Using a similar technique, we will show that the edges between $L$ and $H$
can be replaced by the union of a small number of stars.
In particular, we will partition the vertices of $H$ into $k$ sets,
and for each of these sets we will create one star connecting
the vertices in that set to a corresponding vertex in $L$.
We employ the following consequence of the Poincar\'e inequality of Lemma \ref{lemma:poincare}.
\begin{lemma} \label{lemma:light_can_be_merge}
For any $\epsilon \leq 1$, $l \in L$ and $h_{1}, h_{2} \in H$,
\[
\epsilon \edg{h_{1},l}+ (1/2)\edg{h_{1},h_{2}}
\approx_{4 \sqrt{\epsilon}}
\epsilon \edg{h_{2},l}+ (1/2)\edg{h_{1},h_{2}}.
\]
\end{lemma} \begin{proof}
By applying Lemma \ref{lemma:poincare} and
recalling that $d_{h_{1}} = d_{h_{2}} = 1$ and $d_{l} \leq 1$, we compute
\begin{align*}
\edg{h_{1}, l}
& \preceq d_{h_{1}} d_{l}
\left(\frac{\sqrt{\epsilon}}{ d_{h_{1}} d_{h_{2}}}+\frac{1}{ d_{h_{2}} d_{l} }\right)
\left(\frac{1}{\sqrt{\epsilon}}\edg{h_{1},h_{2}}+\edg{h_{2},l}\right)\\
& \preceq \frac{1+\sqrt{\epsilon}}{\sqrt{\epsilon}}\edg{h_{1},h_{2}}
+ (1+\sqrt{\epsilon})\edg{h_{2},l}
\\
& \preceq (1+\sqrt{\epsilon})\edg{h_{2},l}+\frac{2}{\sqrt{\epsilon}}\edg{h_{1},h_{2}}.
\end{align*}
Multiplying both sides by $\epsilon$ and adding $(1/2) \edg{h_{1}, h_{2}}$ then gives
\begin{align*}
\epsilon \edg{h_{1}, l} + (1/2) \edg{h_{1}, h_{2}}
& \preccurlyeq
(1+\sqrt{\epsilon}) \epsilon \edg{h_{2}, l}
+
(2 \sqrt{\epsilon} + 1/2) \edg{h_{1},h_{2}}
\\
& \preccurlyeq
(1 + 4 \sqrt{\epsilon}) \left( \epsilon \edg{h_{2},l}+ (1/2)\edg{h_{1},h_{2}} \right)
\\
& \preccurlyeq
e^{4 \sqrt{\epsilon }} \left( \epsilon \edg{h_{2},l}+ (1/2)\edg{h_{1},h_{2}} \right).
\end{align*}
By symmetry, we also have
\[
\epsilon \edg{h_{2}, l} + (1/2) \edg{h_{1}, h_{2}}
\preccurlyeq
e^{4 \sqrt{\epsilon }} \left( \epsilon \edg{h_{1},l}+ (1/2)\edg{h_{1},h_{2}} \right).
\]
\end{proof}
\begin{lemma}\label{lem:replaceLH}
Recall that $L = \setof{1,\dots ,k}$ and let
$V_{1}, \dots , V_{k}$ be a partition of $H = \setof{k+1, \dots , n}$
so that $\sizeof{V_{l}} \geq s$ for all $l$.
Then,
\[
G_{HH} + G_{LH}
\approx_{4 / \sqrt{s}}
G_{HH} + \sum_{l \in L} \frac{\sizeof{H}}{\sizeof{V_{l}}} \sum_{h \in V_{l}} \edg{l,h}.
\]
\end{lemma}
\begin{proof}
Observe that
\[
G_{LH} = \sum_{l \in L} \sum_{h \in H} \edg{l,h}.
\]
For each $l \in L$, $h_{1} \in H$ and $h_{2} \in V_{l}$ we apply
Lemma \ref{lemma:light_can_be_merge} to show that
\[
\frac{1}{\sizeof{V_{l}}} \edg{l, h_{1}}
+ \frac{1}{2} \edg{h_{1}, h_{2}}
\approx_{4 / \sqrt{s}}
\frac{1}{\sizeof{V_{l}}} \edg{l, h_{2}}
+ \frac{1}{2} \edg{h_{1}, h_{2}}.
\]
Summing this approximation over all $h_{2} \in V_{l}$
gives
\[
\edg{l, h_{1}}
+ \sum_{h_{2} \in V_{l}}
\frac{1}{2} \edg{h_{1}, h_{2}}
\approx_{4 / \sqrt{s}}
\sum_{h_{2} \in V_{l}}
\left( \frac{1}{\sizeof{V_{l}}}
\edg{l, h_{2}}
+ \frac{1}{2} \edg{h_{1}, h_{2}} \right)
.
\]
Summing the left-hand side of this approximation
over all $l \in L$ and $h_{1} \in H$ gives
\[
\sum_{l \in L, h_{1} \in H} \left( \edg{l, h_{1}}
+
\sum_{h_{2} \in V_{l}}
\frac{1}{2} \edg{h_{1}, h_{2}} \right)
=
\sum_{l \in L, h_{1} \in H} \edg{l, h_{1}}
+
\frac{1}{2}
\sum_{h_{1} \in H, l \in L}
\sum_{h_{2} \in V_{l}} \edg{h_{1}, h_{2}}
=
G_{LH} + G_{HH}.
\]
On the other hand, the sum of the right-hand terms gives
\[
G_{HH} +
\sum_{l \in L, h_{1} \in H}
\sum_{h_{2} \in V_{l}}
\frac{1}{\sizeof{V_{l}}}
\edg{l, h_{2}}
=
G_{HH} +
\sum_{l \in L}
\sum_{h_{2} \in V_{l}}
\frac{\sizeof{H}}{\sizeof{V_{l}}}
\edg{l, h_{2}}.
\]
\end{proof}
\subsection{Weighted Bipartite Expanders}
\label{subsec:bipartite}
This construction extends analogously to bipartite product demand graphs.
The bipartite product demand graph of vectors $(\dd^{A},\dd^{B})$
is a complete bipartite graph whose weight between vertices $i\in A$
and $j\in B$ is given by $w_{ij}=d_{i}^{A}d_{j}^{B}$. Without
loss of generality, we will assume $d_{1}^{A}\geq d_{2}^{A}\geq\cdots\geq d_{n^{A}}^{A}$
and $ d_{1}^{B}\geq d_{2}^{B}\geq\cdots\geq d_{n^{B}}^{B}$.
As the weights of the edges we consider in this section are determined
by the demands of their vertices,
we introduce the notation
\[
\edg{i,j} = d^{A}_{i} d^{B}_{j} \edgu{i,j}.
\]
Our construction is based on the analogous observation: if most vertices
on side $A$ have demand equal to $d_{1}^{A}$ and most
vertices on side $B$ have demand equal to $d_{1}^{B}$,
then the uniform demand graph on these vertices dominates the graph.
\begin{algbox}
$G'=\textsc{WeightedBipartiteExpander}(\dd^{A},\dd^{B},\epsilon)$
\begin{enumerate}
\item Let $n'=\max(n^{A},n^{B})$ and $\nhat$ be the least integer greater than
$2 n' / \epsilon^{2}$ such that the algorithm described in
Lemma \ref{lem:explicitExpanders} produces an $\epsilon$-approximation of $K_{\nhat ,\nhat}$.
\item Let $t^{A}=\frac{\sum_{k}d_{k}^{A}}{\nhat}$ and $t^{B}=\frac{\sum_{k}d_{k}^{B}}{\nhat}$.
\item Create a new bipartite demand graph $\Ghat$ with demands
$\ddhat^{A}$ and $\ddhat^{B}$ as follows:
\begin{enumerate}
\item On the side $A$ of the graph, for each vertex $i$, create a subset $S_i$ consisting of $\ceil{d^{A}_{i}/t^A}$ vertices:
\begin{enumerate}
\item $\floor{d^{A}_{i}/t^A}$ with demand $t^A$.
\item one vertex with demand $d^{A}_{i} - t^A \floor{d^{A}_{i}/t^A}$.
\end{enumerate}
\item Let $H^{A}$ contain $\hat{n}$ vertices of $A$ with demand $t^{A}$, and let
$L^{A}$ contain the rest.
Set $k^{A} = \sizeof{L^{A}}$.
\item Create the side $B$ of the graph with partition $H^{B}, L^{B}$
and demand vector $\ddhat^{B}$ similarly.
\end{enumerate}
\item Partition $H^{A}$ into sets of size
$\sizeof{V^{A}_{i}} \geq \floor{\nhat / k^{A}}$, one corresponding
to each vertex $l \in L^{A}$.
Partition $H^{B}$ similarly.
\item
Let $\tilde{K}_{H^{A} H^{B}}$ be a bipartite expander produced
by Lemma~\ref{lem:explicitExpanders} that $\epsilon$-approximates $K_{\hat{n},\hat{n}}$, identified with the vertex sets $H^{A}$ and $H^{B}$.
Set
\[
\widetilde{G} = t^{A} t^{B} \tilde{K} + \sum_{l \in L^{A}}
\frac{\sizeof{H^{B}}}{\sizeof{V^{B}_{l}}} \sum_{h \in V^{B}_{l}}
\dhat^{A}_{l} \dhat^{B}_{h} \edgu{l,h}
+ \sum_{l \in L^{B}}
\frac{\sizeof{H^{A}}}{\sizeof{V^{A}_{l}}} \sum_{h \in V^{A}_{l}}
\dhat^{B}_{l} \dhat^{A}_{h} \edgu{l,h}.
\]
\item Let $G'$ be the graph obtained by collapsing together all vertices
in each set $S^{A}_{i}$ and $S^{B}_{i}$.
\end{enumerate}
\end{algbox}
Similarly to the non-bipartite case, the Poincar\'e inequality shows that
the edges between low demand vertices can be omitted entirely when
there are many high demand vertices, since the demand can then be
routed through the high demand vertices.
\begin{lemma} \label{lemma:light_vertex_not_important2}Let $G$
be the bipartite product demand graph of the demands $(\dd^{A},\dd^{B})$.
Let $H^{A}$ be a subset of vertices on side $A$ whose demands are at least
as high as those of the remaining vertices $L^{A}$ on side $A$.
Define $H^{B}$ and $L^{B}$ similarly.
If $\sizeof{L^{A}}\leq\sizeof{H^{A}}$ and $\sizeof{L^{B}}\leq\sizeof{H^{B}}$, then
\[
G_{H^{A}H^{B}}+G_{H^{A}L^{B}}+G_{L^{A}H^{B}}\approx_{3\max\left(\frac{\sizeof{L^{A}}}{\sizeof{H^{A}}},\frac{\sizeof{L^{B}}}{\sizeof{H^{B}}}\right)}G.
\]
\end{lemma} \begin{proof}
The proof is analogous to Lemma~\ref{lemma:light_vertex_not_important},
but with the upper bound modified for bipartite graphs.
For every edge $l_A, l_B$, we embed it evenly into paths of
the form $l_A, h_B, h_A, l_B$ over all choices of $h_A$ and $h_B$.
The support of this embedding can be calculated using
Lemma~\ref{lemma:poincare}, and the overall accounting
follows in the same manner as Lemma~\ref{lemma:light_vertex_not_important}.
\end{proof}
It remains to show that the edges between low demand and high demand
vertices can be compressed into a few edges.
The proof here is also analogous to that of Lemma~\ref{lemma:light_can_be_merge}:
we use the Poincar\'e inequality to show that all
demands can be routed through high demand vertices.
The structure of the bipartite graph makes it helpful
to further abstract these inequalities via the following
lemma about four edges.
\begin{lemma} \label{lem:light_can_be_merge_2}Let
$G$ be the bipartite product demand graph of the demands $(\dd^{A},\dd^{B})$.
Let $h_{A},l_{A}\in A$ and $h_{B,1},h_{B,2}\in B$, and assume that
$d_{h_{A}}^{A}=d_{h_{B,1}}^{B}=d_{h_{B,2}}^{B}\geq d_{l_{A}}^{A}$.
For any $\epsilon<1$, we have
\[
\epsilon\edg{l_{A},h_{B,1}}+\edg{h_{A},h_{B,2}}+\edg{h_{A},h_{B,1}}\approx_{3\sqrt{\epsilon}}\epsilon \edg{l_{A},h_{B,2}}+\edg{h_{A},h_{B,2}}+\edg{h_{A},h_{B,1}}.
\]
\end{lemma} \begin{proof}
Using Lemma \ref{lemma:poincare} and
$d_{h_{A}}^{A}=d_{h_{B,1}}^{B}=d_{h_{B,2}}^{B}\geq d_{l_{A}}^{A}$,
we have
\begin{align*}
& \edg{l_{A},h_{B,1}} \\
& \preceq d_{l_{A}}^{A} d_{h_{B,1}}^{B}\left(\frac{1}{ d_{l_{A}}^{A} d_{h_{B,2}}^{B}}+\frac{\sqrt{\epsilon}}{d_{h_{A}}^{A} d_{h_{B,2}}^{B}}+\frac{\sqrt{\epsilon}}{ d_{h_{A}}^{A} d_{h_{B,1}}^{B}}\right)\left(\edg{l_{A},h_{B,2}}+\frac{1}{\sqrt{\epsilon}}\edg{h_{A},h_{B,2}}+\frac{1}{\sqrt{\epsilon}}\edg{h_{A},h_{B,1}}\right)\\
& \preceq (1+2\sqrt{\epsilon})\edg{l_{A},h_{B,2}}+\frac{1+2\sqrt{\epsilon}}{\sqrt{\epsilon}}\edg{h_{A},h_{B,2}}+\frac{1+2\sqrt{\epsilon}}{\sqrt{\epsilon}}\edg{h_{A},h_{B,1}}.
\end{align*}
Therefore,
\begin{align*}
& \epsilon\edg{l_{A},h_{B,1}}+\edg{h_{A},h_{B,2}}+\edg{h_{A},h_{B,1}} \preceq (1+3\sqrt{\epsilon})\left(\epsilon\edg{l_{A},h_{B,2}}+\edg{h_{A},h_{B,2}}+\edg{h_{A},h_{B,1}}\right).
\end{align*}
The other direction follows by symmetry.\end{proof}
\begin{theorem}
\label{lem:BiExpanderFull}
Let $0 < \epsilon < 1$ and
let $G$ be a bipartite product demand graph with $n$ vertices and demand vectors $(\dd^{A},\dd^{B})$.
Given $(\dd^{A},\dd^{B})$ and $\epsilon$ as input, \textsc{WeightedBipartiteExpander} produces
a graph $G'$ with $O (n / \epsilon^{4})$ edges that is
an $O (\epsilon)$ approximation of $G$.
Moreover, \textsc{WeightedBipartiteExpander} runs in $O (\log n)$ depth
and $O (n / \epsilon^{4})$ work.
\end{theorem}
\begin{proof}
The proof is analogous to Theorem~\ref{thm:expanderFull}.
After the splitting, the demands in $H^{A}$ are higher than the demands in $L^{A}$,
and likewise the demands in $H^{B}$ are higher than those in $L^{B}$.
Therefore, Lemma \ref{lemma:light_vertex_not_important2} shows that
\[
\Ghat_{H^{A}H^{B}}+\Ghat_{H^{A}L^{B}}+\Ghat_{L^{A}H^{B}} \approx_{3 \epsilon^{2}/2} \Ghat .
\]
By a proof analogous to Lemma \ref{lem:replaceLH}, one can use Lemma \ref{lem:light_can_be_merge_2} to show that
\[
\Ghat_{H^{A}H^{B}}+\Ghat_{H^{A}L^{B}}+\Ghat_{L^{A}H^{B}}
\approx_{O(\epsilon)}
\Ghat_{H^{A}H^{B}} + \sum_{l \in L^{A}} \frac{\sizeof{H^{B}}}{\sizeof{V^{B}_{l}}} \sum_{h \in V^{B}_{l}}
\dhat^{A}_{l} \dhat^{B}_{h} \edgu{l,h}
+ \sum_{l \in L^{B}}
\frac{\sizeof{H^{A}}}{\sizeof{V^{A}_{l}}} \sum_{h \in V^{A}_{l}}
\dhat^{B}_{l} \dhat^{A}_{h} \edgu{l,h}.
\]
And, we already know that $t^A t^B \tilde{K}$ is an $\epsilon$-approximation of
$\Ghat_{H^{A}H^{B}}$.
Fact \ref{frac:orderComposition} says that we can combine these three approximations to conclude that
$\widetilde{G}$ is an $O (\epsilon)$-approximation of $\Ghat$.
\end{proof}
\section{Introduction}\label{sec:introduction}
\subsection{Integrating Three Models}
Deep probabilistic generative models are a powerful framework for representing complex data distributions. They have been widely used in unsupervised learning problems to learn from unlabeled data. The goal of generative learning is to build rich and flexible models that fit complex, multi-modal data distributions and can generate samples with high realism. The family of generative models may be roughly divided into two classes: the first is the \textit{energy-based model} (a.k.a.\ undirected graphical model), and the second is the latent variable model (a.k.a.\ directed graphical model), which usually includes a \textit{generator model} for generation and an \textit{inference model} for inference or reconstruction.
These models have their advantages and limitations. An energy-based model defines an explicit likelihood of the observed data up to a normalizing constant. However, sampling from such a model usually requires expensive Markov chain Monte Carlo (MCMC). A generator model defines direct sampling of the data. However, it does not have an explicit likelihood. The inference of the latent variables also requires MCMC sampling from the posterior distribution. The inference model defines an explicit approximation to the posterior distribution of the latent variables.
Combining the energy-based model, the generator model, and the inference model to get the best of each is an attractive goal. On the other hand, challenges may accumulate when the models are trained together, since the models must effectively compete or cooperate to achieve their best performance. In this work, we propose the divergence triangle for joint training of the energy-based model, the generator model, and the inference model. The learning of the three models can then be seamlessly integrated in a principled probabilistic framework. The energy-based model is learned based on the samples supplied by the generator model. With the help of the inference model, the generator model is trained by both the observed data and the energy-based model. The inference model is learned from both the real data fitted by the generator model and the synthesized data generated by the generator model.
Our experiments demonstrate that the divergence triangle is capable of learning an energy-based model with a well-behaved energy landscape, a generator model with highly realistic samples, and an inference model with faithful reconstruction ability.
\subsection{Prior Art}
The divergence triangle jointly learns an energy-based model, a generator model, and an inference model. The following are previous methods for learning such models.
The maximum likelihood learning of the energy-based model requires an expectation with respect to the current model, while the maximum likelihood learning of the generator model requires an expectation with respect to the posterior distribution of the latent variables. Both expectations can be approximated by MCMC, such as Gibbs sampling~\cite{gibbs}, Langevin dynamics, or Hamiltonian Monte Carlo (HMC)~\cite{neal2011mcmc}. \cite{LuZhuWu2016, xieLuICML} used Langevin dynamics for learning the energy-based model, and \cite{HanLu2016} used Langevin dynamics for learning the generator model. In both cases, MCMC sampling introduces an inner loop in the training procedure, which is computationally expensive.
An early version of the energy-based model is the FRAME (Filters, Random field, And Maximum Entropy) model \cite{zhu1997minimax, wu2000equivalence}. \cite{zhu1997GRADE} used gradient-based methods such as Langevin dynamics to sample from the model. \cite{zhu2003statistical} referred to energy-based models as descriptive models. \cite{LuZhuWu2016, xieLuICML} generalized the model to deep variants.
For learning the energy-based model \cite{lecun2006tutorial}, to reduce the computational cost of MCMC sampling, contrastive divergence (CD)~\cite{hinton} initializes a finite step MCMC from the observed data. The resulting learning algorithm follows the gradient of the difference between two Kullback-Leibler divergences, thus the name contrastive divergence. In this paper, we shall use the term ``contrastive divergence'' in a more general sense than \cite{hinton}. Persistent contrastive divergence~\cite{pcd} initializes MCMC sampling from the samples of the previous learning iteration.
Generalizing \cite{tu2007learning}, \cite{TuNIPS} developed an introspective learning method where the energy function is discriminatively learned, and the energy-based model is both a generative model and a discriminative model.
For learning the generator model, the variational auto-encoder (VAE)~\cite{kingma2013auto, RezendeICML2014, MnihGregor2014} approximates the posterior distribution of the latent variables by an explicit inference model. In VAE, the inference model is learned jointly with the generator model from the observed data. A precursor of VAE is the wake-sleep algorithm~\cite{hinton1995wake}, where the inference model is learned from the dream data generated by the generator model in the sleep phase.
The generator model can also be learned jointly with a discriminator model, as in the generative adversarial networks (GAN)~\cite{goodfellow2014generative}, as well as deep convolutional GAN (DCGAN)~\cite{radford2015unsupervised}, energy-based GAN (EB-GAN)~\cite{zhao2016energy}, Wasserstein GAN (WGAN)~\cite{arjovsky2017wasserstein}. GAN does not involve an inference model.
The generator model can also be learned jointly with an energy-based model \cite{Bengio2016, dai2017calibrating}. We can interpret the learning scheme as an adversarial version of contrastive divergence. While in GAN the discriminator model eventually becomes a confused one, in the joint learning of the generator model and the energy-based model, the learned energy-based model becomes a well-defined probability distribution on the observed data. The joint learning bears some similarity to WGAN, but unlike WGAN, the joint learning involves two complementary probability distributions.
To bridge the gap between the generator model and the energy-based model, the cooperative learning method of \cite{coopnets_pami} introduces finite-step MCMC sampling of the energy-based model with the MCMC initialized from the samples generated by the generator model. Such finite-step MCMC produces synthesized examples closer to the energy-based model, and the generator model can learn from how the finite-step MCMC revises its initial samples.
Adversarially learned inference (ALI)~\cite{dumoulin2016adversarially,donahue2016adversarial} combines the learning of the generator model and the inference model in an adversarial framework. ALI can be improved by adding conditional entropy regularization, resulting in the ALICE~\cite{li2017alice} model. The recently proposed method of \cite{chen2018symmetric} shares the same spirit. These methods all lack an energy-based model on the observed data.
\subsection{Our Contributions}
Our proposed formulation, which we call the \textit{divergence triangle}, re-interprets and integrates the following elements in unsupervised generative learning: (1) maximum likelihood learning, (2) variational learning, (3) adversarial learning, (4) contrastive divergence, (5) wake-sleep algorithm. The learning is seamlessly integrated into a probabilistic framework based on KL divergence.
We conduct extensive experiments to analyze the learned models. Energy landscape mapping is used to verify that our learned energy-based model is well-behaved. Further, we evaluate the learning of the generator model via synthesis by generating samples with competitive fidelity, and we evaluate the accuracy of the inference model both qualitatively and quantitatively via reconstruction. Our proposed model can also learn directly from incomplete images with various blocking patterns.
\section{Learning Deep Probabilistic Models}
In this section, we shall review the two probabilistic models, namely the generator model and the energy-based model, both of which are parametrized by convolutional neural networks \cite{lecun1998gradient, krizhevsky2012imagenet}. Then, we shall present the maximum likelihood learning algorithm for each of the two models. Our presentation of the two maximum likelihood learning algorithms is unconventional: we derive both algorithms based on the Kullback-Leibler divergence using the same scheme. This sets the stage for the divergence triangle.
\subsection{Generator Model and Energy-based Model}
The generator model \cite{goodfellow2014generative, radford2015unsupervised, kingma2013auto, RezendeICML2014, MnihGregor2014} is a generalization of the factor analysis model \cite{rubin1982algorithms},
\begin{eqnarray}
z \sim {\rm N}(0, I_d), \; x = g_\theta(z) + \epsilon,
\end{eqnarray}
where $g_\theta$ is a top-down mapping parametrized by a deep network with parameters $\theta$. It maps the $d$-dimensional latent vector $z$ to the $D$-dimensional signal $x$. $\epsilon \sim {\rm N}(0, \sigma^2 I_D)$ and is independent of $z$. In general, the model is defined by the prior distribution $p(z)$ and the conditional distribution $p_\theta(x|z)$. The complete-data model $p_\theta(z, x) = p(z) p_\theta(x|z)$. The observed-data model is $p_\theta(x) = \int p_\theta(z, x) dz$. The posterior distribution is $p_\theta(z|x) = p_\theta(z, x)/p_\theta(x)$. See the diagram (a) below.
\begin{eqnarray*}
\begin{array}[c]{ccc}
\mbox{{Top-down} mapping} && \mbox{{Bottom-up} mapping}\\
\mbox{{hidden vector} $z$} && \mbox{{energy} $-f_\alpha(x)$}\\
\Downarrow&&\Uparrow\\
\mbox{signal $x \approx g_\theta(z)$} && \mbox{signal $x$}\\
\mbox{(a) Generator model} && \mbox{(b) Energy-based model}
\end{array} \label{eq:diagram0}
\end{eqnarray*}
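As an illustration of this generative process, the following sketch performs ancestral sampling with a toy linear map standing in for the deep network $g_\theta$ (the weights are hypothetical, chosen only for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 4, 16, 0.1   # latent dim d, data dim D, noise std sigma

# Toy top-down mapping g_theta: a fixed linear map standing in for a
# deep network; the weights below are illustrative, not learned.
W = rng.normal(size=(D, d))

def sample_generator(n):
    """Ancestral sampling: z ~ N(0, I_d), then x = g_theta(z) + eps."""
    z = rng.normal(size=(n, d))              # prior p(z) = N(0, I_d)
    eps = sigma * rng.normal(size=(n, D))    # eps ~ N(0, sigma^2 I_D)
    x = z @ W.T + eps                        # x = g_theta(z) + eps
    return z, x

z, x = sample_generator(1000)
```

Replacing the linear map with a convolutional network recovers the model used in the paper; the sampling procedure itself is unchanged.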
A complementary model is the energy-based model \cite{Ng2011, Dai2015ICLR, LuZhuWu2016, xieLuICML}, where $-f_\alpha(x)$ defines the energy of $x$, and a low energy $x$ is assigned a high probability. Specifically, we have the following probability model
\begin{eqnarray}
\pi_\alpha(x) = \frac{1}{Z(\alpha)} \exp\left[ f_\alpha(x) \right],
\end{eqnarray}
where $f_\alpha(x)$ is parametrized by a bottom-up deep network with parameters $\alpha$, and $Z(\alpha)$ is the normalizing constant. If $f_\alpha(x)$ is linear in $\alpha$, the model becomes the familiar exponential family model in statistics or the Gibbs distribution in statistical physics. We may consider $\pi_\alpha$ an evaluator, where $f_\alpha$ assigns the value to $x$, and $\pi_\alpha$ evaluates $x$ by a normalized probability distribution. See the diagram (b) above.
The energy-based model $\pi_\alpha$ defines an explicit log-likelihood via $f_\alpha(x)$, even though $Z(\alpha)$ is intractable. However, it is difficult to sample from $\pi_\alpha$. The generator model $p_\theta$ can generate $x$ directly by first generating $z \sim p(z)$ and then transforming $z$ to $x$ by $g_\theta(z)$, but it does not define an explicit log-likelihood of $x$.
In the context of inverse reinforcement learning \cite{ziebart2008maximum, abbeel2004apprenticeship} or inverse optimal control, $x$ is action and $-f_\alpha(x)$ defines the cost function or $f_\alpha(x)$ defines the value function or the objective function.
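Sampling from $\pi_\alpha$, as noted above, is the difficult part. For intuition, the following sketch draws from a toy energy-based model by Langevin dynamics, one of the MCMC methods mentioned earlier; $f_\alpha$ is chosen as a simple quadratic so that $\pi_\alpha$ is a standard Gaussian and the chain can be checked, whereas in the actual model $f_\alpha$ is a bottom-up network whose gradient comes from back-propagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f_alpha(x):
    """Gradient of the toy value function f_alpha(x) = -||x||^2 / 2,
    for which pi_alpha(x) is proportional to exp(f_alpha(x)) = N(0, I)."""
    return -x

def langevin(x, n_steps=200, step=0.05):
    """Langevin dynamics targeting pi_alpha:
    x_{t+1} = x_t + step * grad f_alpha(x_t) + sqrt(2 * step) * noise."""
    for _ in range(n_steps):
        x = x + step * grad_f_alpha(x) \
              + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

# Run many parallel chains from a poor initialization.
samples = langevin(rng.uniform(-4.0, 4.0, size=(2000, 2)))
```

For a deep energy function the same update applies, but each step then costs a full backward pass, which is the computational burden the divergence triangle is designed to avoid.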
\subsection{Maximum Likelihood Learning}
Let $q_{\rm data}(x)$ be the true distribution that generates the training data. Both the generator model $p_\theta$ and the energy-based model $\pi_\alpha$ can be learned by maximum likelihood. For large samples, maximum likelihood amounts to minimizing the Kullback-Leibler divergence ${\rm KL}(q_{\rm data}\|p_\theta)$ over $\theta$, and minimizing ${\rm KL}(q_{\rm data}\|\pi_\alpha)$ over $\alpha$, respectively. The expectation ${\rm E}_{q_{\rm data}}$ can be approximated by the sample average.
\subsubsection{EM-type Learning of Generator Model}
To learn the generator model $p_\theta$, we seek to minimize ${\rm KL}(q_{\rm data}(x)\|p_\theta(x))$ over $\theta$. Suppose in an iterative algorithm, the current $\theta$ is $\theta_t$. We can fix $\theta_t$ at any place we want, and vary $\theta$ around $\theta_t$.
We can write
\begin{eqnarray}
&&{\rm KL}(q_{\rm data}(x) p_{\theta_t}(z|x) \|p_\theta(z, x))=\nonumber\\
&&{\rm KL}(q_{\rm data}(x) \|p_\theta(x)) + {\rm KL}(p_{\theta_t}(z|x)\|p_\theta(z|x)). \label{eq:VAE0}
\end{eqnarray}
In the EM algorithm \cite{dempster1977maximum}, the left hand side is the surrogate objective function. This surrogate function is more tractable than the true objective function $ {\rm KL}(q_{\rm data}(x) \|p_\theta(x))$ because $q_{\rm data}(x) p_{\theta_t}(z|x)$ is a distribution of the complete data, and $p_\theta(z, x)$ is the complete-data model.
We can write (\ref{eq:VAE0}) as
\begin{eqnarray}
S(\theta) = K(\theta) + \tilde{K}(\theta). \label{eq:v0}
\end{eqnarray}
The geometric picture is that the surrogate objective function $S(\theta)$ is above the true objective function $K(\theta)$, i.e., $S$ majorizes (upper bounds) $K$, and they touch each other at $\theta_t$, so that $S(\theta_t) = K(\theta_t)$ and $S'(\theta_t) = K'(\theta_t)$. The reason is that $\tilde{K}(\theta_t) = 0$ and $\tilde{K}'(\theta_t) = 0$. See Figure \ref{fig:k1}.
\begin{figure}[ht]
\centering
\includegraphics[width=.43\linewidth]{./figure/triangle/K1.jpg}
\caption{\small The surrogate $S$ majorizes (upper bounds) $K$, and they touch each other at $\theta_t$ with the same tangent. }
\label{fig:k1}
\end{figure}
$q_{\rm data}(x) p_{\theta_t}(z|x)$ gives us the complete data. Each step of EM fits the complete-data model $p_\theta(z, x)$ by minimizing the surrogate $S(\theta)$,
\begin{eqnarray}
\theta_{t+1} = \arg\min_\theta {\rm KL}(q_{\rm data} (x) p_{\theta_t}(z|x) \| p_\theta(z, x)),
\end{eqnarray}
which amounts to maximizing the complete-data log-likelihood. By minimizing $S$, we will reduce $S(\theta)$ relative to $\theta_t$, and we will reduce $K(\theta)$ even more, relative to $\theta_t$, because of the majorization picture.
We can also use gradient descent to update $\theta$. Because $S'(\theta_t) = K'(\theta_t)$, and we can place $\theta_t$ anywhere, we have
\begin{eqnarray}
&&- \frac{\partial}{\partial \theta} {\rm KL}(q_{\rm data}(x)\|p_\theta(x)) \nonumber \\
&&= {\rm E}_{q_{\rm data}(x) p_\theta(z|x)} \left[\frac{\partial}{\partial \theta} \log p_\theta(z, x)\right].
\end{eqnarray}
To implement the above updates, we need to compute the expectation with respect to the posterior distribution $p_\theta(z|x)$. It can be approximated by MCMC such as Langevin dynamics or HMC~\cite{neal2011mcmc}. Both require gradient computations that can be carried out efficiently by back-propagation. We have learned the generator model with this method~\cite{HanLu2016}.
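As a sanity check of this posterior sampling step, the sketch below runs Langevin dynamics on $\log p_\theta(z, x)$ in a linear-Gaussian toy case where the exact posterior $p_\theta(z|x)$ is Gaussian and available in closed form (an illustrative assumption; for a deep $g_\theta$ the gradient is obtained by back-propagation and no closed form exists):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 2, 8, 0.5
W = rng.normal(size=(D, d))   # toy linear g_theta(z) = W z

def grad_log_joint(z, x):
    """d/dz log p_theta(z, x) = -z + W^T (x - W z) / sigma^2."""
    return -z + (x - z @ W.T) @ W / sigma ** 2

def posterior_langevin(x, n_steps=500, step=0.01):
    """Langevin dynamics targeting the posterior p_theta(z | x)."""
    z = rng.normal(size=(x.shape[0], d))
    for _ in range(n_steps):
        z = z + step * grad_log_joint(z, x) \
              + np.sqrt(2.0 * step) * rng.normal(size=z.shape)
    return z

x0 = rng.normal(size=(1, D))
z_post = posterior_langevin(np.repeat(x0, 2000, axis=0))

# Exact posterior (available only in this linear-Gaussian toy case).
Sigma = np.linalg.inv(np.eye(d) + W.T @ W / sigma ** 2)
mu = Sigma @ W.T @ x0[0] / sigma ** 2
```

The mean of the 2000 parallel chains matches the exact posterior mean $\mu$, which is the quantity the posterior expectation in the update above integrates over.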
\subsubsection{Self-critic Learning of Energy-based Model}
To learn the energy-based model $\pi_\alpha$, we seek to minimize ${\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x))$ over $\alpha$. Suppose in an iterative algorithm, the current $\alpha$ is $\alpha_t$. We can fix $\alpha_t$ at any place we want, and vary $\alpha$ around $\alpha_t$.
Consider the following contrastive divergence
\begin{eqnarray}
{\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x)) - {\rm KL}(\pi_{\alpha_t}(x)\|\pi_\alpha(x)). \label{eq:A0}
\end{eqnarray}
We can use the above as a surrogate function, which is more tractable than the true objective function, since the intractable $\log Z(\alpha)$ term cancels out. Specifically, we can write (\ref{eq:A0}) as
\begin{eqnarray}
S(\alpha) &=& K(\alpha) - \tilde{K}(\alpha) \label{eq:a0} \\
&=& - ({\rm E}_{q_{\rm data}}[f_\alpha(x)] - {\rm E}_{\pi_{\alpha_t}}[f_\alpha(x)]) + {\rm const}.
\end{eqnarray}
The geometric picture is that the surrogate function $S(\alpha)$ is below the true objective function $K(\alpha)$, i.e., $S$ minorizes (lower bounds) $K$, and they touch each other at $\alpha_t$, so that $S(\alpha_t) = K(\alpha_t)$, and $S'(\alpha_t) = K'(\alpha_t)$. The reason is that ${\tilde{K}(\alpha_t) = 0}$ and ${\tilde{K}'(\alpha_t) = 0}$. See Figure \ref{fig:k2}.
\begin{figure}[h]
\centering
\includegraphics[width=.32\linewidth]{./figure/triangle/K2.jpg}
\caption{\small The surrogate $S$ minorizes (lower bounds) $K$, and they touch each other at $\alpha_t$ with the same tangent. }
\label{fig:k2}
\end{figure}
Because $S$ minorizes $K$, we do not have an EM-like update. However, we can still use gradient descent to update $\alpha$, where the derivative is
\begin{eqnarray}
K'(\alpha_t) = S'(\alpha_t) = -({\rm E}_{q_{\rm data}}[f'_{\alpha_t}(x)] -{\rm E}_{\pi_{\alpha_t}}[f'_{\alpha_t}(x)]),
\end{eqnarray}
where
\begin{eqnarray}
f'_{\alpha_t}(x) =\frac{ \partial} {\partial \alpha}f_\alpha(x) \Big|_{\alpha_t}.
\end{eqnarray}
Since we can place $\alpha_t$ anywhere, we have
\begin{eqnarray}
&&- \frac{\partial}{\partial \alpha} {\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x)) \nonumber\\
&&= {\rm E}_{q_{\rm data}} \left[\frac{\partial}{\partial \alpha} f_\alpha(x)\right] - {\rm E}_{\pi_\alpha} \left[\frac{\partial}{\partial \alpha} f_\alpha(x)\right]. \label{eq:e1}
\end{eqnarray}
To implement the above update, we need to compute the expectation with respect to the current model $\pi_{\alpha_t}$. It can be approximated by MCMC such as Langevin dynamics or HMC that samples from $\pi_{\alpha_t}$, where the required gradients are again computed efficiently by back-propagation. We have trained the energy-based model with this method \cite{LuZhuWu2016, xieLuICML}.
The above learning algorithm has an adversarial interpretation. When updating $\alpha_t$ to $\alpha_{t+1}$ by following the gradient of $S(\alpha) = {\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x)) - {\rm KL}(\pi_{\alpha_t}(x)\|\pi_\alpha(x)) = -({\rm E}_{q_{\rm data}}[f_\alpha(x)] - {\rm E}_{\pi_{\alpha_t}}[f_\alpha(x)]) + {\rm const}$, we seek to decrease the first KL-divergence while increasing the second; that is, we shift the value function $f_\alpha(x)$ toward the observed data and away from the synthesized data generated from the current model. In other words, the model $\pi_\alpha$ criticizes its current version $\pi_{\alpha_t}$: the model is its own adversary or its own critic.
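The sign structure of this update can be verified on a one-dimensional exponential-family toy where the negative phase can be sampled exactly: take $f_\alpha(x) = \alpha x - x^2/2$, so that $\pi_\alpha = {\rm N}(\alpha, 1)$ and $\partial f_\alpha / \partial \alpha = x$ (a sketch under these assumptions; in the general case the negative phase requires Langevin dynamics):

```python
import numpy as np

rng = np.random.default_rng(0)

# q_data = N(2, 1); the maximum likelihood estimate of alpha is the data mean.
data = rng.normal(loc=2.0, scale=1.0, size=5000)

alpha, lr = 0.0, 0.5
for _ in range(100):
    # Negative phase: exact samples from pi_alpha = N(alpha, 1).
    model_samples = alpha + rng.normal(size=1000)
    # Positive phase minus negative phase: E_data[x] - E_model[x],
    # since df_alpha/dalpha = x for this toy family.
    grad = data.mean() - model_samples.mean()
    alpha += lr * grad   # gradient ascent on the log-likelihood
```

The learned $\alpha$ settles near the data mean: the value function is pushed up on observed data and down on the model's own samples until the two expectations agree.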
\subsubsection{Similarity and Difference}
In both models, at $\theta_t$ or $\alpha_t$, we have $S = K$, $S' = K'$, because $\tilde{K} = 0$ and $\tilde{K}' = 0$.
The difference is that in the generator model, $S = K + \tilde{K}$, whereas in the energy-based model, $S = K - \tilde{K}$.
In the generator model, if we replace the intractable $p_{\theta_t}(z|x)$ by the inference model $q_\phi(z|x)$, we get VAE.
In the energy-based model, if we replace the intractable $\pi_{\alpha_t}(x)$ by the generator $p_\theta(x)$, we get adversarial contrastive divergence (ACD). The negative sign in front of $\tilde{K}$ is the root of the adversarial learning.
\section{Divergence Triangle: Integrating Adversarial and Variational Learning}
In this section, we shall first present the divergence triangle, emphasizing its compact symmetric and anti-symmetric form. Then, we shall show that it is a re-interpretation and integration of existing methods, in particular VAE~\cite{kingma2013auto, RezendeICML2014, MnihGregor2014} and ACD~\cite{Bengio2016, dai2017calibrating}.
\subsection{Loss Function}
Suppose we observe training examples $\{x_{(i)} \sim q_{\rm data}(x)\}_{i=1}^{n}$, where $q_{\rm data}(x)$ is the unknown data distribution. ${\pi_\alpha(x) \propto \exp[f_\alpha(x)]}$ with energy function $-f_\alpha$ denotes the energy-based model with parameters $\alpha$. The generator model $p(z)p_\theta(x|z)$ has parameters $\theta$ and latent vector $z$. Sampling from the latent prior $p(z)$ is trivial, and the generative process is defined as $z\sim p(z)$, $x \sim p_\theta(x|z)$.
The maximum likelihood learning algorithms for both the generator model and the energy-based model require MCMC sampling. We modify the maximum likelihood KL-divergences by proposing a divergence triangle criterion, so that the two models can be learned jointly without MCMC. In addition to the generator model $p_\theta$ and the energy-based model $\pi_\alpha$, we also include an inference model $q_\phi(z|x)$ in the learning scheme. Such an inference model is a key component of the variational auto-encoder \cite{kingma2013auto, RezendeICML2014, MnihGregor2014}. The inference model $q_\phi(z|x)$ with parameters $\phi$ maps from the data space to the latent space. In the context of EM, $q_\phi(z|x)$ can be considered an imputer that imputes the missing data $z$ to get the complete data $(z, x)$.
The three models above define joint distributions over $z$ and $x$ from different perspectives. The two marginals, i.e., the empirical data distribution $q_{\rm data}(x)$ and the latent prior distribution $p(z)$, are known to us. The goal is to harmonize the three joint distributions so that the competition and cooperation between the different loss terms improve learning.
\begin{figure}
\centering
\includegraphics[width=.48\linewidth]{./figure/triangle/triangle1}
\caption{\small Divergence triangle is based on the Kullback-Leibler divergences between three joint distributions of $(z, x)$. The blue arrow indicates the ``running toward'' behavior and the red arrow indicates the ``running away'' behavior.}
\label{fig:t1}
\end{figure}
The divergence triangle involves the following three joint distributions on $(z, x)$:
\begin{enumerate}
\item $Q$-distribution: $Q(z, x) = q_{\rm data}(x) q_\phi(z|x)$.
\item $P$-distribution: $P(z, x) = p(z) p_\theta(x|z)$.
\item $\Pi$-distribution: $\Pi(z, x) = \pi_\alpha(x) q_\phi(z|x)$.
\end{enumerate}
We propose to learn the three models $p_\theta$, $\pi_\alpha$, $q_\phi$ by the following divergence triangle loss functional ${\cal D}$
\begin{eqnarray}
&& \max_\alpha \min_\theta \min_\phi {\cal D}(\alpha, \theta, \phi), \nonumber \\
&& {\cal D} = {\rm KL}(Q\|P) + {\rm KL}(P\|\Pi) - {\rm KL}(Q\|\Pi). \label{eq:triangle}
\end{eqnarray}
See Figure \ref{fig:t1} for an illustration. The divergence triangle is based on the three KL-divergences between the three joint distributions on $(z, x)$. It has a symmetric and anti-symmetric form, where the anti-symmetry is due to the negative sign in front of the last KL-divergence and the maximization over $\alpha$. The divergence triangle leads to the following dynamics between the three models: (1) $Q$ and $P$ seek to get close to each other. (2) $P$ seeks to get close to $\Pi$. (3) $\pi_\alpha$ seeks to get close to $q_{\rm data}$, but it seeks to get away from $P$, as indicated by the red arrow. Note that ${\rm KL}(Q\|\Pi) = {\rm KL}(q_{\rm data}\|\pi_\alpha)$, because $q_\phi(z|x)$ is canceled out. The effect of (2) and (3) is that $\pi_\alpha$ gets close to $q_{\rm data}$ while inducing $P$ to get close to $q_{\rm data}$ as well; in other words, $P$ chases $\pi_\alpha$ toward $q_{\rm data}$.
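The cancellation behind ${\rm KL}(Q\|\Pi) = {\rm KL}(q_{\rm data}\|\pi_\alpha)$ is a one-line computation, since $q_\phi(z|x)$ appears in both joint distributions:
\begin{eqnarray*}
{\rm KL}(Q\|\Pi) &=& {\rm E}_{q_{\rm data}(x) q_\phi(z|x)}\left[\log \frac{q_{\rm data}(x) q_\phi(z|x)}{\pi_\alpha(x) q_\phi(z|x)}\right] \\
&=& {\rm E}_{q_{\rm data}(x)}\left[\log \frac{q_{\rm data}(x)}{\pi_\alpha(x)}\right] = {\rm KL}(q_{\rm data}\|\pi_\alpha).
\end{eqnarray*}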
\subsection{Unpacking the Loss Function}
The divergence triangle integrates variational and adversarial learning methods, which are modifications of maximum likelihood.
\subsubsection{Variational Learning}
\begin{figure}[h]
\centering
\includegraphics[height=.28\linewidth]{./figure/triangle/VAE} \includegraphics[height=.28\linewidth]{./figure/triangle/VAEcurve}
\caption{\small Variational auto-encoder (VAE) as joint minimization by alternating projection. Left: Interaction between the models. Right: Alternating projection. The two models run toward each other. }
\label{fig:t2}
\end{figure}
First, $\min_\theta \min_\phi {\rm KL}(Q\|P)$ captures the variational auto-encoder (VAE).
\begin{eqnarray}
{\rm KL}(Q\|P) &=& {\rm KL}(q_{\rm data}(x) \| p_\theta(x)) \nonumber\\
&+& {\rm KL}(q_\phi(z|x)\|p_\theta(z|x)). \label{eq:VAE}
\end{eqnarray}
Recall that $S = K + \tilde{K}$ in (\ref{eq:v0}). If we replace the intractable $p_{\theta_t}(z|x)$ in (\ref{eq:v0}) by the explicit $q_\phi(z|x)$, we get (\ref{eq:VAE}), so that we avoid MCMC sampling of $p_{\theta_t}(z|x)$.
We may interpret VAE as alternating projection between $Q$ and $P$. See Figure \ref{fig:t2} for illustration. If $q_\phi(z|x) = p_\theta(z|x)$, the algorithm reduces to the EM algorithm. The wake-sleep algorithm \cite{hinton1995wake} is similar to VAE, except that it updates $\phi$ by $\min_\phi {\rm KL}(P\|Q)$ instead of $\min_\phi {\rm KL}(Q\|P)$, so that the wake-sleep algorithm does not have a single objective function.
The VAE $\min_\theta \min_\phi {\rm KL}(Q\|P)$ defines a cooperative game, with the dynamics that $q_\phi$ and $p_\theta$ run toward each other.
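Concretely, with a diagonal Gaussian inference model $q_\phi(z|x) = {\rm N}(\mu_\phi(x), {\rm diag}(\sigma_\phi^2(x)))$ and a Gaussian generator, minimizing ${\rm KL}(Q\|P)$ over $(\theta, \phi)$ reduces to minimizing the familiar negative evidence lower bound. A single-sample Monte Carlo sketch follows (the tiny decoder below is hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_elbo(x, mu, log_var, decode, sigma_x=1.0):
    """Single-sample negative ELBO for a Gaussian decoder:
    reconstruction term plus KL(q_phi(z|x) || N(0, I)) in closed form."""
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    x_hat = decode(z)
    recon = 0.5 * np.sum((x - x_hat) ** 2) / sigma_x ** 2
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl

# Hypothetical decoder: identity on z, padded with zeros to the data dim.
decode = lambda z: np.concatenate([z, np.zeros(2)])
x = np.array([1.0, -1.0, 0.0, 0.0])
loss = neg_elbo(x, mu=np.zeros(2), log_var=np.zeros(2), decode=decode)
```

Gradient descent on this loss with respect to the parameters of $\mu_\phi$, $\sigma_\phi$, and the decoder drives $Q$ and $P$ toward each other, as in the alternating projection picture above.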
\subsubsection{Adversarial Learning}
\begin{figure}[h]
\centering
\includegraphics[height=.28\linewidth]{./figure/triangle/ACD} \includegraphics[height=.14\linewidth]{./figure/triangle/ACDcurve}
\caption{\small Adversarial contrastive divergence (ACD). Left: Interaction between the models. Red arrow indicates a chasing game, where the generator model chases the energy-based model, which runs toward the data distribution. Right: Contrastive divergence. }
\label{fig:t3}
\end{figure}
Next, consider the learning of the energy-based model \cite{Bengio2016, dai2017calibrating}. Recall that $S = K - \tilde{K}$ in (\ref{eq:a0}). If we replace the intractable $\pi_{\alpha_t}(x)$ in (\ref{eq:a0}) by $p_\theta(x)$, we get
\begin{eqnarray}
\min_\alpha \max_\theta[ {\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x)) - {\rm KL}(p_\theta(x) \| \pi_\alpha(x))], \label{eq:ACD}
\end{eqnarray}
or equivalently
\begin{eqnarray}
\max_\alpha \min_\theta[ {\rm KL}(p_\theta(x) \| \pi_\alpha(x)) - {\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x))], \label{eq:ACD1}
\end{eqnarray}
so that we avoid MCMC for sampling $\pi_{\alpha_t}(x)$, and the gradient for updating $\alpha$ becomes
\begin{eqnarray}
\frac{\partial}{\partial \alpha} [{\rm E}_{q_{\rm data}} (f_\alpha(x)) - {\rm E}_{p_\theta}(f_\alpha(x))]. \label{eq:ACD3}
\end{eqnarray}
Because of the negative sign in front of the second KL-divergence in (\ref{eq:ACD}), we need $\max_\theta$ in (\ref{eq:ACD}) or $\min_\theta$ in (\ref{eq:ACD1}), so that the learning becomes adversarial. See Figure \ref{fig:t3} for an illustration. Inspired by \cite{Hinton2002a}, we call (\ref{eq:ACD}) the adversarial contrastive divergence (ACD). It underlies \cite{Bengio2016, dai2017calibrating}.
The adversarial form (\ref{eq:ACD}) or (\ref{eq:ACD1}) defines a chasing game with the following dynamics: the generator $p_\theta$ chases the energy-based model $\pi_\alpha$ in $\min_\theta {\rm KL}(p_\theta \| \pi_\alpha)$, the energy-based model $\pi_\alpha$ seeks to get closer to $q_{\rm data}$ and get away from $p_\theta$. The red arrow in Figure \ref{fig:t3} illustrates this chasing game. The result is that $\pi_\alpha$ lures $p_\theta$ toward $q_{\rm data}$. In the idealized case, $p_\theta$ always catches up with $\pi_\alpha$, then $\pi_\alpha$ will converge to the maximum likelihood estimate $\min_\alpha {\rm KL}(q_{\rm data}\|\pi_\alpha)$, and $p_\theta$ converges to $\pi_\alpha$.
The above chasing game is different from VAE $\min_\theta \min_\phi {\rm KL}(Q\|P)$, which defines a cooperative game where $q_\phi$ and $p_\theta$ run toward each other.
Even though the above chasing game is adversarial, both models are running toward the data distribution. While the generator model runs after the energy-based model, the energy-based model runs toward the data distribution. As a consequence, the energy-based model guides or leads the generator model toward the data distribution. It is different from GAN~\cite{goodfellow2014generative}. In GAN, the discriminator eventually becomes a confused one because the generated data become similar to the real data. In the above chasing game, the energy-based model becomes close to the data distribution.
The updating of $\alpha$ by (\ref{eq:ACD3}) bears similarity to Wasserstein GAN (WGAN)~\cite{arjovsky2017wasserstein}, but unlike WGAN, $f_\alpha$ defines a probability distribution $\pi_\alpha$, and the learning of $\theta$ is based on $\min_\theta {\rm KL}(p_\theta(x) \| \pi_\alpha(x))$, which is a variational approximation to $\pi_\alpha$. This variational approximation only requires knowing $f_\alpha(x)$, without knowing $Z(\alpha)$. However, unlike $q_\phi(z|x)$, $p_\theta(x)$ is still intractable; in particular, its entropy does not have a closed form. Thus, we can again use a variational approximation, by changing the problem to $\min_\theta \min_\phi {\rm KL}(p(z) p_\theta(x|z) \| \pi_\alpha(x) q_\phi(z|x))$, i.e., $\min_\theta \min_\phi {\rm KL}(P\|\Pi)$, which is analytically tractable and which underlies \cite{dai2017calibrating}. In fact,
\begin{eqnarray}
{\rm KL}(P\|\Pi) = {\rm KL}(p_\theta(x)\|\pi_\alpha(x)) + {\rm KL}(p_\theta(z|x)\|q_\phi(z|x)).
\end{eqnarray}
Thus, we can modify (\ref{eq:ACD1}) into $\max_\alpha \min_\theta \min_\phi [{\rm KL}(P\|\Pi) - {\rm KL}(Q \|\Pi)]$, because again ${\rm KL}(Q\|\Pi) = {\rm KL}(q_{\rm data}\|\pi_\alpha)$.
Fitting the above together, we have the divergence triangle (\ref{eq:triangle}), which has a compact symmetric and anti-symmetric form.
\subsection{Gap Between Two Models}
We can write the objective function ${\cal D}$ as
\begin{flalign*}
{\cal D} &= \left({\rm KL}(q_{\rm data}(x) \| p_\theta(x)) + {\rm KL}(q_\phi(z|x)\|p_\theta(z|x))\right)\\
&\quad - \left({\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x)) - {\rm KL}(p(z) p_\theta(x|z) \| \pi_\alpha(x) q_\phi(z|x))\right)\\
&= \left({\rm KL}(q_{\rm data}(x) \| p_\theta(x)) - {\rm KL}(q_{\rm data}(x)\|\pi_\alpha(x))\right)\\
&\quad + {\rm KL}(q_\phi(z|x)\|p_\theta(z|x)) + {\rm KL}(p(z) p_\theta(x|z) \| \pi_\alpha(x) q_\phi(z|x)).
\end{flalign*}
Thus ${\cal D}$ is an upper bound of the difference between the log-likelihood of the energy-based model and the log-likelihood of the generator model.
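To make this bound explicit, note that the entropy of $q_{\rm data}$ cancels in the first bracket of the expression above:
\begin{eqnarray*}
{\rm KL}(q_{\rm data} \| p_\theta) - {\rm KL}(q_{\rm data}\|\pi_\alpha) = {\rm E}_{q_{\rm data}}\left[\log \pi_\alpha(x)\right] - {\rm E}_{q_{\rm data}}\left[\log p_\theta(x)\right],
\end{eqnarray*}
which is the population version of the log-likelihood difference, while the remaining two KL-divergence terms are non-negative.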
\subsection{Two Sides of KL-divergences}
In the divergence triangle, the generator model appears on the right side of ${\rm KL}(Q\|P)$, and it also appears on the left side of ${\rm KL}(P\|\Pi)$. The former tends to interpolate or smooth the modes of $Q$, while the latter tends to seek out major modes of $\Pi$ while ignoring minor modes. As a result, the learned generator model tends to generate sharper images. As to the inference model $q_\phi(z|x)$, it appears on the left side of ${\rm KL}(Q\|P)$, and it also appears on the right side of ${\rm KL}(P\|\Pi)$. The former is variational learning from the real data, while the latter corresponds to the sleep phase of wake-sleep learning, which learns from the dream data generated by $P$. The inference model thus can infer $z$ from both observed $x$ and generated $x$.
In fact, if we define
\begin{eqnarray}
{\cal D}_0 = {\rm KL}(q_{\rm data}\|p_\theta) + {\rm KL}(p_\theta\|\pi_\alpha) - {\rm KL}(q_{\rm data}\|\pi_\alpha), \label{eq:D0}
\end{eqnarray}
we have
\begin{eqnarray}
{\cal D} = {\cal D}_0 + {\rm KL}(q_\phi(z|x) \| p_\theta(z|x))+ {\rm KL}(p_\theta(z|x) \| q_\phi(z|x)). \label{eq:D1}
\end{eqnarray}
(\ref{eq:D0}) is the divergence triangle between the three marginal distributions on $x$, where $p_\theta$ appears on both sides of KL-divergences. (\ref{eq:D1}) is the variational scheme to make the marginal distributions into the joint distributions, which are more tractable. In (\ref{eq:D1}), the two KL-divergences have reverse orders.
\subsection{Training Algorithm}
The three models are each parameterized by convolutional neural networks. The joint learning under the divergence triangle can be implemented by stochastic gradient descent, where the expectations are replaced by the sample averages. Algorithm~\ref{alg:dt} describes the procedure which is illustrated in Figure~\ref{fig:train}.
\begin{figure}[H]
\centering
\includegraphics[height=.28\linewidth]{./figure/triangle/diag.jpg}
\caption{Joint learning of the three models. The shaded circles $z$ and $x$ represent variables that can be sampled from the true distributions, i.e., ${\rm N}(0, I_d)$ and the empirical data distribution, respectively. $\tilde{x}$ and $\tilde{z}$ are samples generated by the generator model and the inference model, respectively. Solid arrows represent conditional mappings, and dashed lines indicate that a matching loss is involved.}
\label{fig:train}
\end{figure}
\begin{algorithm}[H]
\caption{Joint Training for Divergence Triangle Model}
\label{alg:dt}
\begin{algorithmic}[1]
\REQUIRE ~~\\
training images $\{x_{(i)}\}_{i=1}^{n}$,\\
number of learning iterations $T$,\\
$\alpha$, $\theta$, $\phi \leftarrow$ initialized network parameters.
\ENSURE~~\\
estimated parameters $\{\alpha, \theta, \phi\}$,\\
generated samples $\{\tilde{x}_{(i)}\}_{i=1}^{\tilde{n}}$.
\STATE Let $t \leftarrow 0$.
\REPEAT
\STATE $\{z_{(i)} \sim p(z)\}_{i=1}^{\tilde{M}}$.
\STATE $\{\tilde{x}_{(i)} \sim p_\theta(x|z_{(i)})\}_{i=1}^{\tilde{M}}$.
\STATE $\{x_{(i)} \sim q_{\rm data}(x)\}_{i=1}^{M}$.
\STATE $\{\tilde{z}_{(i)} \sim q_\phi(z|x_{(i)})\}_{i=1}^{M}$.
\STATE {\bf $\alpha$-step}: Given $\{\tilde{x}_{(i)}\}_{i=1}^{\tilde{M}}$ and $\{x_{(i)}\}_{i=1}^{M}$,\\ update $\alpha \leftarrow \alpha + \eta_\alpha \frac{\partial}{\partial \alpha}{\cal D}$ with learning rate $\eta_\alpha$.
\STATE {\bf $\phi$-step}: Given $ \{(z_{(i)}, \tilde{x}_{(i)})\}_{i=1}^{\tilde{M}}$ and $ \{(\tilde{z}_{(i)}, x_{(i)})\}_{i=1}^{M}$,\\
update $\phi \leftarrow \phi - \eta_\phi \frac{\partial}{\partial \phi}{\cal D}$, with learning rate $\eta_\phi$.
\STATE {\bf $\theta$-step}: Given $ \{(z_{(i)}, \tilde{x}_{(i)})\}_{i=1}^{\tilde{M}}$ and $\{(\tilde{z}_{(i)}, x_{(i)})\}_{i=1}^{M}$,\\
update $\theta \leftarrow \theta - \eta_\theta \frac{\partial}{\partial \theta}{\cal D} $, with learning rate $\eta_\theta$\\
(optional: multiple-step update).
\STATE Let $t \leftarrow t+1$.
\UNTIL $t = T$
\end{algorithmic}
\end{algorithm}
\section{Experiments}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline Model & VAE~\cite{kingma2013auto} & DCGAN~\cite{radford2015unsupervised} & WGAN~\cite{arjovsky2017wasserstein} & CoopNet~\cite{coopnets_pami} & CEGAN~\cite{dai2017calibrating} & ALI~\cite{dumoulin2016adversarially} & ALICE~\cite{li2017alice} & Ours \\
\hline CIFAR-10 (IS) & 4.08 & 6.16 & 5.76 & 6.55 & 7.07 & 5.93 & 6.02 & \bf{7.23}\\
\hline CelebA (FID) & 99.09 & 38.39 & 36.36 & 56.57 & 41.89 & 60.29 & 46.14 & \bf{31.92}\\
\hline
\end{tabular}
\end{center}
\caption{Sample quality evaluation. Row 1: Inception scores for CIFAR-10. Row 2: FID scores for CelebA. }
\label{tab:is_fid}
\end{table*}
In this section, we demonstrate not only that the divergence triangle is capable of learning an energy-based model with a well-behaved energy landscape, a generator model with highly realistic samples, and an inference model with faithful reconstruction ability, but also that it achieves competitive performance on four tasks: image generation, test image reconstruction, energy landscape mapping, and learning from incomplete images. For image generation, we consider spatially stationary texture images, temporally stationary dynamic textures, and general object categories. We also test our model on large-scale datasets and high-resolution images.
The images are resized and scaled to $[-1, 1]$; no further pre-processing is needed. The network parameters are initialized with a zero-mean Gaussian with standard deviation $0.02$ and optimized using Adam~\cite{kingma2014adam}. Network weights are decayed with rate $0.0005$, and batch normalization~\cite{ioffe2015batch} is used. We refer to the Appendix for the model specifications.
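For instance, the scaling of 8-bit pixel intensities to $[-1, 1]$ amounts to the following one-liner (the function name is ours):

```python
def scale_to_unit_interval(pixels):
    """Map 8-bit pixel intensities in [0, 255] to [-1.0, 1.0],
    the input range used during training; no other pre-processing."""
    return [p / 127.5 - 1.0 for p in pixels]

scaled = scale_to_unit_interval([0, 127.5, 255])
```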
\subsection{Image Generation}
In this experiment, we evaluate the visual quality of generator samples from our divergence triangle model. If the generator model is well-trained, then the obtained samples should be realistic and match the visual features and contents of training images.
\subsubsection{Object Generation}
For object categories, we test our model on two commonly-used datasets of natural images: CIFAR-10 and CelebA~\cite{liu2015faceattributes}. For the CelebA face dataset, we randomly select 9,000 images for training and another 1,000 images for testing in the reconstruction task. The face images are resized to $64\times 64$, and CIFAR-10 images remain $32 \times 32$. Qualitative results of generated samples for objects are shown in Figure~\ref{fig:generation}. We further evaluate our model quantitatively using the Inception Score (IS)~\cite{salimans2016improved} for CIFAR-10 and the Fr\'{e}chet Inception Distance (FID)~\cite{lucic2017gans} for CelebA faces. We generate 50,000 random samples to compute the Inception Score and 10,000 random samples to compute the FID score. Table~\ref{tab:is_fid} shows the IS and FID scores of our model compared with VAE~\cite{kingma2013auto}, DCGAN~\cite{radford2015unsupervised}, WGAN~\cite{arjovsky2017wasserstein}, CoopNet~\cite{coopnets_pami}, CEGAN~\cite{dai2017calibrating}, ALI~\cite{dumoulin2016adversarially}, and ALICE~\cite{li2017alice}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\linewidth]{./figure/cifar10/510_samples_small.jpg}
\includegraphics[width=0.48\linewidth]{./figure/celeba9000/triangle_799_samples_small.jpg}
\end{center}
\caption{Generated samples. Left: generated samples on CIFAR-10 dataset. Right: generated samples on CelebA dataset.}
\label{fig:generation}
\end{figure}
Note that for the Inception Score on CIFAR-10, we borrow the scores from the relevant papers, and for the FID score on 9,000 CelebA faces, we re-implemented or used the available code with a network structure similar to ours. It can be seen that our model achieves competitive performance compared with recent baseline models.
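To illustrate what FID measures, the closed-form Fr\'{e}chet distance between two one-dimensional Gaussians is shown below; the real metric applies the multivariate version of this formula to Gaussians fitted to Inception features of real and generated images (the function name and example values are ours).

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2).
    The full FID replaces the variances by covariance matrices and the
    square root by a matrix square root."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

d_same = fid_1d(0.0, 1.0, 0.0, 1.0)  # identical distributions -> 0
d_far = fid_1d(0.0, 1.0, 3.0, 4.0)   # 9 + 1 + 4 - 2*sqrt(4) = 10
```

Lower FID therefore means the generated feature distribution is closer to that of the real data, which is why the smaller entries in Table~\ref{tab:is_fid} are better.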
\subsubsection{Large-scale Dataset}
We also train our model on large-scale datasets, including the down-sampled $32\times 32$ version of ImageNet~\cite{oord2016pixel,russakovsky2015imagenet} (roughly 1 million images) and the Large-scale Scene Understanding (LSUN) dataset~\cite{yu2015lsun}. For LSUN, we consider the \textit{bedroom}, \textit{tower} and \textit{church outdoor} categories, which contain roughly 3 million, 0.7 million and 0.1 million images, respectively, re-sized to $64\times 64$. The network structures are similar to the ones used in object generation with twice the number of channels, and batch normalization is used in all three models. Generated samples are shown in Figure~\ref{fig:generation_large}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\linewidth]{./figure/large_scale/epoch_005_iter_3600_samples_imagenet_small.jpg}
\includegraphics[width=0.48\linewidth]{./figure/large_scale/epoch_002_iter_28000_samples_bed_small.jpg}
\end{center}
\caption{Generated samples. Left: $32\times 32$ ImageNet. Right: $64 \times 64$ LSUN (bedroom).
}
\label{fig:generation_large}
\end{figure}
\subsubsection{High-resolution Synthesis}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{./figure/celeba_hq/syn/gen_1_by_6_v3_small}
\end{center}
\caption{Generated samples with $1,024\times 1,024$ resolution drawn from $g_\theta(z)$ with 512-dimensional latent vector $z\sim N(0,I_d)$ for CelebA-HQ.}
\label{fig:celeba_hq}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{./figure/celeba_hq/inter/gen_7_7_4500_inter_small}
\includegraphics[width=\linewidth]{./figure/celeba_hq/inter/gen_7_8_4450_inter_small}
\end{center}
\caption{High-resolution synthesis from the generator model $g_\theta(z)$ with linear interpolation in latent space (i.e., $(1-\alpha)\cdot z_0 + \alpha\cdot z_1$) for CelebA-HQ.}
\label{fig:celeba_hq_interpolate}
\end{figure*}
In this section, we employ a layer-wise training scheme to learn models on CelebA-HQ \cite{karras2017progressive} with resolutions of up to $1,024 \times 1,024$ pixels. Layer-wise training dates back to the initialization of deep neural networks by Restricted Boltzmann Machines to overcome optimization hurdles \cite{hinton2006reducing, bengio2007greedy} and has been revived in progressive GANs \cite{karras2017progressive}, although the order of layer transitions is reversed so that top layers are trained first. This resembles a Laplacian pyramid \cite{denton2015deep} in which images are generated in a coarse-to-fine fashion.
As in \cite{karras2017progressive}, the training starts with down-sampled images with a spatial resolution of $4 \times 4$ while progressively increasing the size of the images and the number of layers. All three models are grown in synchrony, where $1 \times 1$ convolutions project between RGB and feature space. In contrast to \cite{karras2017progressive}, we require neither mini-batch discrimination to increase the variation of $g_\theta(\cdot)$ nor a gradient penalty to preserve the $1$-Lipschitz continuity of $f_\alpha(\cdot)$.
Figure~\ref{fig:celeba_hq} depicts high-fidelity synthesis in a resolution of $1,024\times1,024$ pixels sampled from the generator model $g_\theta(z)$ on CelebA-HQ. Figure~\ref{fig:celeba_hq_interpolate} illustrates linear interpolation in latent space (i.e., $(1-\alpha)\cdot z_0 + \alpha\cdot z_1$), which indicates diversity in the samples.
Therefore, the joint learning in the triangle formulation not only trains the three models with stable optimization, but also achieves synthesis with high fidelity.
\subsubsection{Texture Synthesis}
We consider texture images, which are spatially stationary and contain repetitive patterns. The texture images are resized to $224\times 224$, and separate models are trained on each image. We start from a latent factor of size $7\times7\times5$ and use five convolutional-transpose layers with kernel size $4$ and up-sampling factor $2$ for the generator network. The layers have $512$, $512$, $256$, $128$ and $3$ filters, respectively, with ReLU non-linearity between layers. The inference model has the inverse, or ``mirror'', structure of the generator model, except that we use convolutional layers and ReLU with leak factor $0.2$. The energy-based model has three convolutional layers. The first two layers have kernel size $7$ with stride $2$ for $100$ and $70$ filters, respectively, and the last layer has $30$ filters with kernel size $5$ and stride $1$.
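As a sanity check on the generator shapes above, the spatial size can be traced through the five stride-2 transposed-convolution layers. Padding $1$ is an assumption here (the paper does not state it), chosen so that each layer exactly doubles the spatial size from the $7\times 7$ latent map to the $224\times 224$ output.

```python
def convT_out(size, kernel=4, stride=2, pad=1):
    """Output size of a transposed convolution:
    out = (size - 1)*stride - 2*pad + kernel.
    With kernel 4, stride 2, padding 1 this exactly doubles the size."""
    return (size - 1) * stride - 2 * pad + kernel

size = 7  # 7x7 spatial extent of the latent factor
for _ in range(5):  # five convolutional-transpose layers
    size = convT_out(size)  # 7 -> 14 -> 28 -> 56 -> 112 -> 224
```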
Representative examples are shown in Figure~\ref{fig:texture}. Three texture synthesis results are obtained by sampling different latent factors from the prior distribution $p(z)$. Notice that although we only have one texture image for training, the proposed divergence triangle model can effectively utilize the repetitive patterns, generating realistic texture images with different configurations.
\begin{figure}
\begin{center}
\includegraphics[width=0.209\linewidth]{./figure/texture/red_rose/train_redrose_small.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/red_rose/epoch_7999_iter_000_samples_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/red_rose/epoch_7999_iter_000_samples1_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/red_rose/epoch_7999_iter_000_samples2_train.jpg}\\
\vspace{0.5mm}
\includegraphics[width=0.209\linewidth]{./figure/texture/beehave/train_beehave_small.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/beehave/epoch_7999_iter_000_samples_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/beehave/epoch_7999_iter_000_samples1_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/beehave/epoch_7999_iter_000_samples2_train.jpg}\\
\vspace{0.5mm}
\vspace{0.5mm}
\includegraphics[width=0.209\linewidth]{./figure/texture/coffee/train0001_small.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/coffee/epoch_7949_iter_000_samples_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/coffee/epoch_7949_iter_000_samples1_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/coffee/epoch_7949_iter_000_samples2_train.jpg}\\
\vspace{0.5mm}
\includegraphics[width=0.209\linewidth]{./figure/texture/wall3/train0001.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/wall3/epoch_7999_iter_000_samples_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/wall3/epoch_7999_iter_000_samples1_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/wall3/epoch_7999_iter_000_samples2_train.jpg}\\
\vspace{0.5mm}
\includegraphics[width=0.209\linewidth]{./figure/texture/qinghua/qinghua_small.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/qinghua/epoch_7999_iter_000_samples_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/qinghua/epoch_7999_iter_000_samples1_train.jpg}
\includegraphics[width=0.209\linewidth]{./figure/texture/qinghua/epoch_7999_iter_000_samples2_train.jpg}\\
\end{center}
\caption{Generated texture patterns. For each row, the leftmost image is the training texture, and the remaining images are three textures generated by the divergence triangle.}
\label{fig:texture}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/real_epoch04999_frame00029.png}\\
\vspace{0.5mm}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light/sample1_epoch04999_frame00029.png}\\
\vspace{1mm}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/real_epoch04999_frame00029.png}\\
\vspace{0.5mm}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/water_fall/sample3_epoch04999_frame00029.png}\\
\vspace{1mm}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/real_epoch04999_frame00029.png}\\
\vspace{0.5mm}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00001.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00006.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00011.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00017.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00023.png}
\includegraphics[width=0.15\linewidth]{./figure/dynamic/light_bulb/sample1_epoch04999_frame00029.png}\\
\end{center}
\caption{Generated dynamic texture patterns. For each pair of rows, the top row shows frames from the training video and the bottom row shows frames from the generated video.}
\label{fig:dt}
\end{figure}
\subsubsection{Dynamic Texture Synthesis}
Our model can also be used for dynamic patterns which exhibit stationary regularity in the temporal domain. The training video clips are selected from the Dyntex database~\cite{peteri2010dyntex} and resized to $64$ pixels $\times$ $64$ pixels $\times$ $32$ frames. Inspired by recent work~\cite{vondrick2016generating,han2019generator}, we adopt spatial-temporal models for dynamic patterns that are stationary in the temporal domain but non-stationary in the spatial domain. Specifically, we start from $10$ latent factors of size $1\times1\times 2$ for each video clip and adopt the same spatial-temporal convolutional-transpose generator network as in~\cite{han2019generator}, except that we use kernel size $5$ for the second layer. For the inference model, we use $5$ spatial-temporal convolutional layers. The first $4$ layers have kernel size $4$ with factor $2$, and the last layer is fully-connected in the spatial domain but convolutional in the temporal domain, yielding the re-parametrized $\mu_\phi$ and $\sigma_\phi$, which have the same size as the latent factors. For the energy-based model, we use three spatial-temporal convolutional layers. The first two layers have kernel size $4$ with factor $2$ in all directions, while the last layer is fully-connected in the spatial domain but convolutional with kernel size $4$ and factor $2$ in the temporal domain. The layers have $64$, $128$ and $128$ filters, respectively. Some of the synthesis results are shown in Figure~\ref{fig:dt}. Note that we sub-sample $6$ frames of the training and generated video clips and only show those from the first batch for illustration.
\subsection{Test Image Reconstruction}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline Model & WS~\cite{hinton1995wake} & VAE~\cite{kingma2013auto} & ALI~\cite{dumoulin2016adversarially} & ALICE~\cite{li2017alice} & Ours \\
\hline CIFAR-10 & 0.058 & 0.037 & 0.311 &0.034 & \bf{0.028}\\
\hline CelebA & 0.152 & 0.039 & 0.519 & 0.046 & \bf{0.030}\\
\hline
\end{tabular}
\end{center}
\caption{Test reconstruction evaluation. Row 1: MSE for CIFAR-10 test set. Row 2: MSE for 1,000 hold out set from CelebA.}
\label{tab:mse}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=.48\linewidth]{./figure/cifar10/epoch_500_testReal_small.jpg}
\includegraphics[width=0.48\linewidth]{./figure/cifar10/epoch_500_testRecon_small.jpg}
\\
\vspace{1mm}
\includegraphics[width=0.48\linewidth]{./figure/celeba9000/gridB0_real_small.jpg}
\includegraphics[width=0.48\linewidth]{./figure/celeba9000/gridB0_rec_small.jpg}
\end{center}
\caption{Test image reconstruction. Top: CIFAR-10. Bottom: CelebA. Left: test images. Right: reconstructed images.}
\label{fig:test_reconstruction}
\end{figure}
In this experiment, we evaluate the reconstruction ability of our model on a held-out set of test images. This is a strong indicator of the accuracy of our inference model. Specifically, if our divergence triangle model ${\cal D}$ is well-learned, then the inference model should match the true posterior of the generator model, i.e., $q_\phi(z|x) \approx p_\theta(z|x)$. Therefore, given a test signal $x_{te}$, its reconstruction $\tilde{x}_{te}$ should be close to $x_{te}$, i.e., $x_{te}\xrightarrow{q_\phi} z_{te} \xrightarrow{p_\theta} \tilde{x}_{te} \approx x_{te}$. Figure~\ref{fig:test_reconstruction} shows the testing images and their reconstructions on CIFAR-10 and CelebA.
For CIFAR-10, we use its own 10,000 test images, while for CelebA, we use the 1,000 held-out test images as stated above. Reconstruction quality is further measured by the per-pixel mean squared error (MSE). Table~\ref{tab:mse} shows the per-pixel MSE of our model compared to WS~\cite{hinton1995wake}, VAE~\cite{kingma2013auto}, ALI~\cite{dumoulin2016adversarially}, and ALICE~\cite{li2017alice}.
Note that we do not consider methods without inference models on training data, including variants of GANs and cooperative training, since it is infeasible to test such models by image reconstruction.
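The per-pixel MSE used in Table~\ref{tab:mse} can be stated in a few lines; images are flattened to lists here for simplicity, and the example values are arbitrary.

```python
def per_pixel_mse(x, x_hat):
    """Per-pixel mean squared error between a test image and its
    reconstruction, both given as flat lists of pixel values."""
    assert len(x) == len(x_hat)
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

# toy 3-pixel "image": only the last pixel is reconstructed imperfectly
err = per_pixel_mse([0.0, 0.5, 1.0], [0.0, 0.5, 0.7])  # (0.3^2)/3 = 0.03
```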
\subsection{Energy Landscape Mapping}
In the following, we evaluate the learned energy-based model by mapping the macroscopic structure of the energy landscape. When following an MLE regime by minimizing ${\rm KL}(q_{\rm data}\|\pi_\alpha)$, we expect the energy function $-f_\alpha(x)$ to encode ${x\sim q_{\rm data}(x)}$ as local energy minima. Moreover, $-f_\alpha(x)$ should form minima for unseen images and a macroscopic landscape structure in which basins of minima are distinctly separated by energy barriers. Hopfield observed that such a landscape is a model of associative memory \cite{hopfield1982neural}.
In order to learn a well-formed energy function, we perform multiple $\theta$-steps in Algorithm \ref{alg:dt} such that the samples $\{\tilde{x}_i\}_{i=1}^{\tilde{M}}$ are sufficiently ``close'' to the local minima of $-f_\alpha(x)$. This avoids the formation of energy minima that do not resemble the data. The variational approximation of the entropy of the marginal generator distribution $H(p_\theta(x))$ preserves diversity in the samples, avoiding mode collapse.
\begin{figure*}
\includegraphics[width=1\linewidth, clip, trim={0in 0in 0in 0in}]{./figure/ebm/no_spec_53_map_5_dg_v2}
\caption{Illustration of the disconnectivity graph depicting the basin structure of the learned energy function $f_\alpha(x)$ for the MNIST dataset. Each column represents the set of at most 12 basin members ordered by energy, where circles indicate the total number of basin members. Vertical lines encode minima depth in terms of energy, and horizontal lines depict the lowest known barrier at which two basins merge in the landscape. Basins with fewer than 4 members are omitted for clarity.}
\label{fig:dg_mnist}
\end{figure*}
\begin{figure}
\includegraphics[width=1.0\linewidth, clip, trim={4.8in 0in 4.8in 0in}]{./figure/ebm/fashin_53_map_3_v2}
\caption{Illustration of the disconnectivity graph depicting the basin structure of the learned energy function for the Fashion-MNIST dataset. Each column represents the set of at most 12 basin members ordered by energy, where circles indicate the total number of basin members. Vertical lines encode minima depth in terms of energy, and horizontal lines depict the lowest known barrier at which two basins merge in the landscape. Basins with fewer than 4 members are omitted for clarity.}
\label{fig:dg_fashion}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{./figure/incomplete/learn_incomplete_crop_small.jpg}
\end{center}
\caption{Learning from incomplete data from the CelebA dataset. The 9 columns belong to experiments P.5, P.7, MB10, MB10, B20, B20, B30, B30, B30 respectively. Row 1: original images, not observed in learning stage. Row 2: training images. Row 3: recovered images using VAE~\cite{kingma2013auto}. Row 4: recovered images using ABP~\cite{HanLu2016}. Row 5: recovered images using our method. }
\label{fig:learn_incomplete}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.32\linewidth]{./figure/incomplete/vae_syn_crop_small.jpg}
\includegraphics[width=0.32\linewidth]{./figure/incomplete/abp_syn_crop_small.jpg}
\includegraphics[width=0.32\linewidth]{./figure/incomplete/triangle_syn_crop_small.jpg}
\end{center}
\caption{Image generation from different models learned from training images of the CelebA dataset with $30 \times 30$ occlusions. Left: images generated from VAE model~\cite{kingma2013auto}. Middle: images generated from ABP model~\cite{HanLu2016}. Right: images generated from our proposed triangle divergence model. }
\label{fig:incomplete_syn}
\end{figure}
To verify that (i) the local minima of $-f_\alpha(x)$ resemble $\{x_i\}$ and (ii) the minima are separated by significant energy barriers, we follow the approach used in~\cite{hill2018building}. When clustering with respect to energetic barriers, the landscape is partitioned into Hopfield basins of attraction, whereby each point $x_i$ on the landscape $-f_\alpha(x)$ is mapped onto a local minimum $\hat{x}_i$ by a steepest-descent path ${x_i^{t+1} = x_i^{t} + \eta\nabla f_\alpha(x_i^t)}$. The similarity measure used for hierarchical clustering is the barrier energy that separates any two regions. Given a pair of local minima ${\{\hat{x}_i, \hat{x}_j\}}$, we estimate the barrier ${b_{i,j} = \max \{-f_\alpha(x_k) : x_k \in \hat{x}_i \overset\gamma\rightharpoondown \hat{x}_j\}}$ as the highest energy along a linear interpolation ${x \overset\gamma\rightharpoondown y = \{x + \gamma(y-x) : \gamma \in [0,1]\}}$. If $b_{i,j} < \epsilon$ for some energy threshold $\epsilon$, then ${\{x_i, x_j\}}$ belong to the same basin. The clustering is repeated recursively until all minima are clustered together. Such graphs have come to be referred to as disconnectivity graphs (DG)~\cite{wales1998archetypal}.
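The basin-mapping procedure (steepest descent to a minimum, then a barrier estimate along a linear interpolation) can be illustrated on a one-dimensional toy energy with two known minima; the double-well energy, step sizes, and grid resolution below are illustrative assumptions, not the learned $-f_\alpha$.

```python
def neg_f(x):
    # toy 1-D energy: (x^2 - 1)^2 has two minima, at x = -1 and x = 1,
    # separated by a barrier of height 1 at x = 0
    return (x * x - 1.0) ** 2

def descend(x, lr=0.01, steps=2000):
    """Steepest descent on the energy, mapping x to a local minimum."""
    for _ in range(steps):
        grad = 4.0 * x * (x * x - 1.0)  # derivative of (x^2 - 1)^2
        x -= lr * grad
    return x

def barrier(x, y, n=100):
    """Highest energy along the linear interpolation between x and y."""
    return max(neg_f(x + (y - x) * k / n) for k in range(n + 1))

m1, m2 = descend(-0.5), descend(0.5)  # two starting points, two basins
b = barrier(m1, m2)                   # energy barrier separating them
```

Since $b$ exceeds any small threshold $\epsilon$, the two minima would be placed in different basins of the disconnectivity graph.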
We conduct energy landscape mapping experiments on the MNIST~\cite{lecun-mnisthandwrittendigit-2010} and Fashion-MNIST~\cite{xiao2017fashion} datasets, each containing $70,000$ grayscale images of size $28\times28$ pixels depicting handwritten digits and fashion products from $10$ categories, respectively.
Energy landscape mapping is not without limitations, because it is practically impossible to locate all local modes. Based on the local modes located by our algorithm (see Figure~\ref{fig:dg_mnist} for the MNIST dataset), the learned energy function appears well-formed: it not only encodes meaningful images as minima, but also forms a meaningful macroscopic structure. Moreover, within basins the local minima have a high degree of purity (i.e., digits within a basin belong to the same class), and the energy barriers between basins seem informative (i.e., basins of ones and sixes form pure super-basins). Figure~\ref{fig:dg_fashion} depicts the energy landscape mapping on Fashion-MNIST.
Potential applications include unsupervised classification, in which energy barriers act as a geodesic similarity measure capturing perceptual distance (as opposed to, e.g., $\ell_2$ distance), weakly-supervised classification with one label per basin, or reconstruction of incomplete data (i.e., Hopfield content-addressable memory or image inpainting).
\subsection{Learning from incomplete images}
The divergence triangle can be used to learn from occluded images. This task is challenging~\cite{HanLu2016}: only parts of the images are observed, so the model needs to learn sufficient information to recover the occluded parts.
Generative models with an inference mechanism can be used for this task. Notably,~\cite{HanLu2016} proposed to recover incomplete images using alternating back-propagation (ABP), which has an MCMC-based inference step to refine the latent factors and performs reconstruction iteratively. VAEs~\cite{rezende2014stochastic,kingma2013auto} build the inference model on occluded images and can also be adapted to this task: the missing parts are filled with the average pixel intensity in the beginning, and then iteratively re-updated using the reconstructed values. Unlike VAEs, which only consider the un-occluded parts of the training data, our model utilizes the generated samples, which become gradually recovered during training, resulting in improved recovery accuracy and sharp generation. Note that learning from incomplete data is difficult for variants of GANs~\cite{goodfellow2014generative,dai2017calibrating,radford2015unsupervised,arjovsky2017wasserstein} and cooperative training~\cite{coopnets_pami}, since inference cannot be performed directly on the occluded images.
We evaluate our model on 10,000 images randomly chosen from the CelebA dataset. The selected images are further center-cropped as in~\cite{HanLu2016}. Similar to VAEs, we zero-fill the occluded parts in the beginning, then iteratively update the missing values using reconstructed images obtained from the generator model. Three types of occlusions are used: (1) salt-and-pepper noise, which randomly covers $50\%$ (P.5) or $70\%$ (P.7) of the image; (2) multiple-block occlusion, with 10 random blocks of size $10 \times 10$ (MB10); (3) single-block occlusion, where we randomly place a large $20\times 20$ or $30 \times 30$ block on each image, denoted by B20 and B30, respectively. Table~\ref{tab:recover} shows the recovery errors using VAE~\cite{kingma2013auto}, ABP~\cite{HanLu2016} and our triangle model, where the error is defined as the per-pixel absolute difference (relative to the range of pixel values) between the recovered image on the occluded pixels and the ground-truth image.
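The iterative recovery scheme just described (zero-fill, then repeatedly overwrite the occluded pixels with reconstructed values) can be sketched as follows; the toy `reconstruct` operator stands in for the encode-decode pass $x \to z \to \tilde{x}$ and is an assumption for illustration.

```python
def recover(observed, mask, reconstruct, iters=10):
    """Iteratively recover occluded pixels: start from zero-fill, then
    repeatedly overwrite the missing positions (mask == False) with the
    model's reconstruction while keeping observed pixels fixed."""
    x = [v if m else 0.0 for v, m in zip(observed, mask)]
    for _ in range(iters):
        rec = reconstruct(x)
        x = [v if m else r for v, m, r in zip(observed, mask, rec)]
    return x

def toy_reconstruct(x):
    # stand-in "model": pulls every pixel halfway toward the image mean
    mean = sum(x) / len(x)
    return [0.5 * (v + mean) for v in x]

truth = [0.8, 0.8, 0.8, 0.8]
mask = [True, True, True, False]  # last pixel occluded
recovered = recover(truth, mask, toy_reconstruct)
```

With this toy operator the occluded pixel converges to the truth because the fixed observed pixels anchor the reconstruction; in the paper, the anchoring comes from the learned generator and inference models.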
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline EXP & P.5 & P.7 & MB10 & B20 & B30 \\
\hline VAE~\cite{kingma2013auto} & 0.0446 & 0.0498 &0.1169 & 0.0666 & 0.0800\\
\hline ABP~\cite{HanLu2016} & \bf{0.0379} & \bf{0.0428} & 0.1070& 0.0633 & 0.0757 \\
\hline Ours &0.0380 & 0.0430 &\bf{0.1060}& \bf{0.0621} & \bf{0.0733}\\
\hline
\end{tabular}
\end{center}
\caption{Recovery errors for different occlusion masks for $10,000$ images selected from the CelebA dataset. }
\label{tab:recover}
\end{table}
It can be seen that our model consistently outperforms the VAE model for the different occlusion patterns. For structured occlusions (i.e., multiple and single blocks), the un-occluded parts contain more meaningful configurations that improve learning of the generator through the energy-based model, which in turn generates more meaningful samples to refine the inference model. This is supported by the superior results compared to ABP~\cite{HanLu2016}. For unstructured occlusions (i.e., salt-and-pepper noise), ABP achieves slightly better recovery; a possible reason is that the un-occluded parts contain less meaningful patterns, which offer limited help in learning the generator and inference models. Our model synthesizes sharper and more realistic images from the generator trained on occluded images; see Figure~\ref{fig:incomplete_syn}, in which images are occluded with $30 \times 30$ random blocks.
\section{Conclusion}
We have proposed a probabilistic framework, the divergence triangle, for joint learning of the energy-based model, the generator model, and the inference model. The divergence triangle forms a compact learning functional for the three models and naturally unifies aspects of maximum likelihood estimation~\cite{HanLu2016, coopnets_pami}, the variational auto-encoder~\cite{kingma2013auto, RezendeICML2014, MnihGregor2014}, adversarial learning~\cite{Bengio2016, dai2017calibrating}, contrastive divergence~\cite{hinton}, and the wake-sleep algorithm~\cite{hinton1995wake}.
An extensive set of experiments demonstrated the learning of a well-behaved energy-based model, a realistic generator model, as well as an accurate inference model. Moreover, experiments showed that the proposed framework can effectively learn directly from incomplete data.
In future work, we aim to extend the formulation to learn interpretable generator and energy-based models with multiple layers of sparse or semantically meaningful latent variables or features \cite{salakhutdinov2009deep, lee2009convolutional}. Further, it would be desirable to unify the generator, energy-based and inference models into a single model \cite{dinh2014nice, dinh2016density} by allowing them to share parameters and nodes instead of having separate sets of parameters and nodes.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The work is supported by DARPA XAI project N66001-17-2-4029; ARO project W911NF1810296; ONR MURI project N00014-16-1-2007; and Extreme Science and Engineering Discovery Environment (XSEDE) grant ASC170063. We thank Dr. Tianfu Wu, Shuai Zhu and Bo Pang for helpful discussions.
\section{Introduction}\label{s:1}
Let $r$ and $\ell$ be given integers such that
$2\leq\ell\leq r-1$. A hypergraph ${H}$ on vertex set $[n]$ is an \textit{$r$-uniform hypergraph}
(\textit{$r$-graph} for short) if each edge is a set of $r$ vertices.
An $r$-graph $H$ is called a Steiner $(n,r,\ell)$-system
if every subset of size $\ell$ (\textit{$\ell$-set} for short) lies in exactly one
edge of $H$. Replacing ``exactly one edge'' by ``at most one edge'', we obtain
a partial Steiner $(n,r,\ell)$-system.
In particular, partial Steiner $(n,r,2)$-systems are also called
\textit{linear hypergraphs} and Steiner $(n,3,2)$-systems are called
\textit{Steiner triple systems}. Steiner triple systems and their many natural generalizations are
central to combinatorics, and have been studied since the work of Pl\"{u}cker, Kirkman, and Steiner in the
mid-nineteenth century, see~\cite{wilson03} for discussion of this history
and recent breakthrough results in~\cite{keevash14,keevash18}.
Let $N$ be an abbreviation
for $ \binom{n}{r}$. Let $\mathcal{S}(n,r,\ell)$ be the set of all partial Steiner $(n,r,\ell)$-systems
on the vertex set $[n]$.
Grable and Phelps~\cite{grable96} used the R\"{o}dl nibble algorithm~\cite{rodl85}
to obtain an asymptotic formula for $\log |\mathcal{S}(n,r,\ell)|$ for $2\leq\ell\leq r-1$
as $n\to\infty$. Asratian and Kuzjurin gave another proof~\cite{asas00}.
Blinovsky and Greenhill~\cite{valelec} used the switching method
to obtain enumeration formulas for sparse partial Steiner $(n,r,2)$-systems with given degree sequences.
Balogh and Li~\cite{balgoh17} obtained an upper bound on the
total number of partial Steiner $(n,r,2)$-systems with given girth.
Little is known about the enumeration of partial Steiner $(n,r,\ell)$-systems.
The \textit {partial Steiner $(n,r,\ell)$-system process}
begins with no edges on vertex set $[n]$ at time $0$, denoted by $\mathbb{S}(n,r,\ell; 0)$,
the $N$ possible edges arrive one by one according to a uniformly
chosen permutation, and each edge is added if and only if its addition
does not create an $\ell$-set contained in two edges of the current hypergraph.
Let $\mathbb{S}(n,r,\ell; m)$ denote the partial Steiner
$(n,r,\ell)$-system when $m$ edges have been added.
Let $\tau_c= \min\{m: \mathbb{S}(n,r,\ell; m)\ \text{is connected}\}$ be the hitting time for connectivity,
and let $\tau_o= \min\{m: \mathbb{S}(n,r,\ell; m)\ \text{has no isolated vertices}\}$ be
the hitting time for the disappearance of the last isolated vertex
of the process. Both properties are monotone increasing,
so $\tau_c$ and $\tau_o$ are well-defined. For $\ell=2$, this is the \textit{uniform linear hypergraph process}.
Random greedy constrained hypergraph processes have been studied
in~\cite{bennett15,bohman15,bohman102,green03,kuhn16};
although such results are not abundant, these processes have broad applications to
various combinatorial problems.
Erd\H{o}s' open question on the existence of Steiner triple systems
with arbitrarily high girth was recently answered by analysing the high-girth triple process~\cite{bohman19}.
The general \textit{uniform hypergraph process} $\{\mathbb{H}_r(n,m)\}_m$
is a Markov process with time running through the set $\{0,1,\cdots, N\}$;
for $r=2$ it is the classical random graph process $\{\mathbb{G}(n,m)\}_m$
introduced by Erd\H{o}s and R\'{e}nyi.
A typical result, proved by Bollob\'{a}s and Thomason~\cite{bollo85}, says that,
with probability approaching $1$ as $n\rightarrow\infty$ (\textit{w.h.p.} for short),
$m= \frac{n}{2}\log n$ is a sharp threshold of connectivity for $\{\mathbb{G}(n,m)\}_m$,
and the very edge which joins the last isolated vertex to another vertex makes the graph connected.
Poole~\cite{poole15} proved the analogous result for $\{\mathbb{H}_r(n,m)\}_m$
when $r\geq 3$ is a fixed integer:
w.h.p., $m= \frac{n}{r}\log n$ is a sharp threshold of connectivity for $\{\mathbb{H}_r(n,m)\}_m$,
and $\tau_c=\tau_o$ for $\{\mathbb{H}_r(n,m)\}_m$.
These results rest on the fact that the $m$-th stage $\mathbb{H}_r(n,m)$ of the uniform hypergraph process
can be identified with a hypergraph chosen uniformly at random from $\mathcal{H}_r(n,m)$, where
$\mathcal{H}_r(n,m)$ denotes the set of $r$-graphs on
the vertex set $[n]$ with $m$ edges; such hypergraphs can in turn be studied through
${H}_r(n,p)$, the classical random hypergraph
on the vertex set $[n]=\{1,\cdots,n\}$
in which every $r$-set with $r\geq 2$ appears independently with probability $p$,
a general tool in analysing the uniform hypergraph process.
Nothing is known about the partial Steiner $(n,r,\ell)$-system process, and
it might be surmised that the
threshold function of connectivity for $\{\mathbb{S}(n,r,\ell; m)\}_m$
is smaller than the one for $\{\mathbb{H}_r(n,m)\}_m$.
Similarly, the $m$-th stage $\mathbb{S}(n,r,\ell; m)$ of the process is also
regarded as chosen uniformly at random from $\mathcal{S}(n,r,\ell; m)$,
where $\mathcal{S}(n,r,\ell; m)$ denotes the set of partial Steiner
$(n,r,\ell)$-systems on $[n]$ with $m$ edges.
The previous work most relevant to this one is \cite{mckay18}.
McKay and Tian~\cite{mckay18} obtained the asymptotic enumeration
formula for $\mathcal{S}(n,r,2; m)$ for $m=o(n^{ \frac{3}{2}})$,
and they also determined the probability that a random partial Steiner $(n,r,2)$-system
with $m=o(n^{ \frac{3}{2}})$ edges contains a given subhypergraph.
Let $[x]_t=x(x-1)\cdots(x-t+1)$ be the falling factorial.
We also use the standard asymptotic notation $o$, $O$, $\Omega$
and $\Theta$.
All asymptotics are with respect to $n\rightarrow\infty$.
\begin{theorem}{\rm{(\cite{mckay18}, Theorem 1.1)}}\label{t1.1}
For any given integer $r\geq 3$, let $m=m(n)$ be an integer
with $m=o(n^{ \frac{3}{2}})$. Then, as $n\rightarrow \infty$,
\begin{align*}
|\mathcal{S}(n,r,2; m)|
&={ \frac{ N^m}{m!}}\exp\biggl[- \frac{[r]_2^2[m]_2}{4n^2}-
\frac{[r]_2^3(3r^2-15r+20)m^3}{24n^4}+O\Bigl( \frac{m^2}{n^3}\Bigr)\biggr].
\end{align*}
\end{theorem}
\begin{theorem}{\rm(\cite{mckay18}, Theorem 1.4)}\label{t1.2}
For any given integers $r\geq 3$, let $m=m(n)$ and $k=k(n)$ be integers with
$m=o(n^{ \frac32})$ and $k=o\bigl( \frac{n^3}{m^2}\bigr)$.
Let $K=K(n)$ be a given $r$-graph in $\mathcal{S}(n,r,2; k)$ and
$H\in \mathcal{S}(n,r,2; m)$ be chosen uniformly at random. Then, as $n\to\infty$,
\[
\mathbb{P}[K\subseteq H]= \frac{[m]_k}{N^k}\exp\biggl[ \frac{[r]_2^2k^2}{4n^2}+
O\Bigl( \frac{k}{n^2}+ \frac{m^2k}{n^3}\Bigr)\biggr].
\]
\end{theorem}
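To make the quantity $|\mathcal{S}(n,r,2;m)|$ being estimated concrete, here is a brute-force count for very small parameters (a purely illustrative check of the definition, not of the asymptotic formula; the function is ours):

```python
from itertools import combinations

def count_partial_steiner(n, r, ell, m):
    """|S(n, r, ell; m)| by exhaustive search: the number of m-edge
    r-graphs on [n] in which every ell-set lies in at most one edge.
    Feasible for tiny parameters only."""
    def is_partial_system(system):
        seen = set()
        for e in system:
            for s in combinations(e, ell):
                if s in seen:
                    return False
                seen.add(s)
        return True
    all_edges = list(combinations(range(n), r))   # the N possible edges
    return sum(is_partial_system(sys) for sys in combinations(all_edges, m))
```

For instance, for $n=5$, $r=3$, $\ell=2$ there are $N=10$ single triples; of the $\binom{10}{2}=45$ pairs of triples exactly $15$ are linear, and no three triples on five points are pairwise linear.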
Firstly, we generalize the above results from the case $\ell=2$ in~\cite{mckay18} to
$3\leq\ell\leq r-1$; they are essential for determining the hitting time of connectivity
for $\{\mathbb{S}(n,r,\ell;m)\}_m$.
\begin{theorem}\label{t1.3}
For any given integers $r$ and $\ell$ such that $3\leq \ell\leq r-1$, let $m=m(n)$ be an integer
with $m=o(n^{ \frac{\ell+1}{2}})$.
Then, as $n\rightarrow \infty$,
\begin{align}
|\mathcal{S}(n,r,\ell; m)|
&={ \frac{N^m}{m!}}\exp\biggl[- \frac{[r]_{\ell}^2[m]_2}{2\ell!n^{\ell}}
+O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\biggr].\notag
\end{align}
\end{theorem}
\begin{theorem}\label{t1.4}
For any given integers $r$ and $\ell$ such that $3\leq \ell\leq r-1$,
let $m=m(n)$ and $k=k(n)$ be integers with
$m=o(n^{ \frac{\ell+1}{2}})$ and $k=o\bigl(
\frac{n^{\ell+1}}{m^2}\bigr)$.
Let $K=K(n)$ be a given $r$-graph in $\mathcal{S}(n,r,\ell; k)$ and
$H\in \mathcal{S}(n,r,\ell; m)$ be chosen uniformly at random. Then, as
$n\rightarrow \infty$,
\begin{align*}
\mathbb{P}[K\subseteq H]= \frac{[m]_k}{ N^k}\exp\biggl[ \frac{[r]_{\ell}^2k^2}{2\ell!n^{\ell}}
+O\Bigl( \frac{k}{n^{\ell}}+ \frac{m^2k}{n^{\ell+1}}\Bigr)\biggr].
\end{align*}
\end{theorem}
By Theorem~\ref{t1.1} to Theorem~\ref{t1.4}, for any given integers $r$ and $\ell$
such that $2\leq\ell\leq r-1$, it is perhaps surprising that
$\{\mathbb{S}(n,r,\ell; m)\}_m$ has the same threshold
function of connectivity as $\{\mathbb{H}_r(n,m)\}_m$, independent of $\ell$,
and that $\mathbb{S}(n,r,\ell; m)$ also becomes
connected exactly at the moment when the last
isolated vertex disappears.
\begin{theorem}\label{t1.6}
For any given integers $r$ and $\ell$ such that $2\leq\ell\leq r-1$, w.h.p., $m= \frac{n}{r}\log n$
is a sharp threshold of connectivity for $\{\mathbb{S}(n,r,\ell; m)\}_m$ and
$\tau_{c}=\tau_{o}$ for $\{\mathbb{S}(n,r,\ell; m)\}_m$.
\end{theorem}
From the calculations in the proof of Theorem~\ref{t1.6}, we also obtain a corollary on
the distribution of the number of isolated vertices in $\mathbb{S}(n,r,\ell; m)$ when
$m= \frac{n}{r}\bigl(\log n+c_n\bigr)$, where $c_n\rightarrow c\in \mathbb{R}$ as $n\rightarrow\infty$.
Let $Po(\lambda)$ denote the Poisson distribution with mean $\lambda$, and write $X\xrightarrow{d}Po(\lambda)$
if $X$ tends in distribution to $Po(\lambda)$.
\begin{corollary}\label{c1.8}
For any given integers $r$ and $\ell$ such that $2\leq\ell\leq r-1$, let $m= \frac{n}{r}\bigl(\log n+c_n\bigr)$,
where $c_n\rightarrow c\in \mathbb{R}$ as $n\rightarrow\infty$.
The number of isolated vertices in $\mathbb{S}(n,r,\ell; m)$ tends in distribution to
the Poisson distribution $Po(\lambda)$ with $\lambda=e^{-c}$.
\end{corollary}
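The scaling in Corollary~\ref{c1.8} can be checked at the level of the first moment: at $m= \frac{n}{r}(\log n+c)$, the heuristic mean number of isolated vertices $n\,e^{-rm/n}$ (ignoring the vanishing error terms established below) equals $e^{-c}$ exactly. A quick numeric check, illustrative only:

```python
import math

def mean_isolated(n, r, c):
    """First-moment heuristic for the number of isolated vertices,
    n * exp(-r m / n), evaluated at the scaling m = (n/r)(log n + c).
    Algebraically this equals exp(-c), the Poisson mean of Corollary 1.8."""
    m = (n / r) * (math.log(n) + c)
    return n * math.exp(-r * m / n)
```

The value is independent of $n$ (up to the ignored correction terms), matching $\lambda=e^{-c}$.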
The remainder of the paper is structured as follows. Notation and auxiliary results
are presented in Section~\ref{s:2}. In Section~\ref{s:3},
we prove Theorem~\ref{t1.3} and Theorem~\ref{t1.4}; the proofs
refine those of Theorem~\ref{t1.1} and Theorem~\ref{t1.2} in~\cite{mckay18}.
In Section~\ref{s:6}, we prove Theorem~\ref{t1.6}. The last section concludes the work.
\section{Notation and auxiliary results}\label{s:2}
To state our results precisely, we need some definitions.
Let~$H$ be an~$r$-graph in $\mathcal{H}_r(n,m)$. Define the {\it excess} of $H$
as $ex(H)=(r-1)m-n$. Note that
from the definition of the excess it follows that if
$ex(H)=d$, then $(r-1)\mid(n+d)$. Observe also that if $H$ is connected, then $ex(H)\geq -1$.
For~$U\subseteq [n]$, the \textit{codegree} of~$U$ in~$H$, denoted by~$\codeg({U})$,
is the number of edges of~$H$ containing~$U$.
In particular,~$\codeg({U})$ is the degree of~$v$ in~$H$
if~$U=\{v\}$ for~$v\in [n]$, denoted by~$\deg (v)$.
Let~$\Delta(H)=\max\{\deg (v)|\ v\in[n]\}$
and~$\delta(H)=\min\{\deg (v)|\ v\in[n]\}$.
Given an integer $\ell$ with $2\leq\ell\leq r-1$, any~$\ell$-set
$\{x_1,\cdots,x_{\ell}\}\subseteq [n]$ in an edge~$e$ of~$H$
is called a \textit{link} of $e$ if $\codeg({x_1,\cdots,x_{\ell}})\geq 2$.
Two edges $e_i$ and $e_j$ in $H$ are called \textit{linked edges} if $|e_i\cap e_j|=\ell$.
As defined in~\cite{mckay18},
let $G_H$ be a simple graph whose vertices are the edges of~$H$,
with two vertices of $G_H$ adjacent iff the corresponding edges of~$H$ are linked.
An edge-induced subgraph of $H$ corresponding to a non-trivial
component of $G_H$ is called a \textit{cluster} of $H$.
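The auxiliary graph $G_H$ and the clusters it defines can be computed directly; the sketch below (our own helper, for illustration only) treats two edges as linked exactly when they share $\ell$ vertices, as in the definition above:

```python
import itertools

def clusters(edges, ell):
    """Return the clusters of H: the edge sets of the non-trivial connected
    components of the graph G_H whose vertices are the edges of H, two of
    them adjacent iff the corresponding edges share exactly ell vertices."""
    idx = range(len(edges))
    adj = {i: set() for i in idx}
    for i, j in itertools.combinations(idx, 2):
        if len(set(edges[i]) & set(edges[j])) == ell:
            adj[i].add(j)
            adj[j].add(i)
    seen, result = set(), []
    for i in idx:
        if i in seen:
            continue
        comp, stack = [], [i]
        seen.add(i)
        while stack:                      # depth-first search on G_H
            v = stack.pop()
            comp.append(edges[v])
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(comp) >= 2:                # non-trivial components only
            result.append(comp)
    return result
```

For example, two $4$-edges sharing three vertices form a single cluster when $\ell=3$, while an edge disjoint from the rest contributes none.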
Furthermore, for two positive-valued functions $f$, $g$
of the variable $n$, we write $f\ll g$ to denote $\lim_{n\rightarrow\infty} f (n)/g(n) =0$,
$f \sim g$ to denote $\lim_{n\rightarrow\infty} f (n)/g(n) =1$, and
$f\lesssim g$ if and only if $\limsup_{n\rightarrow\infty} f (n)/g(n) \leq 1$.
For an event $A$ and a random variable $Z$ in an arbitrary probability space $(\Omega,\mathcal{F},\mathbb{P})$,
$\mathbb{P}[A]$ and $\mathbb{E}[Z]$ denote the probability of $A$ and the expectation
of $Z$. An event is said to occur with high probability (\textit {w.h.p.} for short), if
the probability that it holds tends to 1 as $n\rightarrow\infty$.
To show that several events have low probability
in the uniform probability space $\mathcal{H}_r(n,m)$ when
$m=o(n^{ \frac{\ell+1}{2}})$,
the following two lemmas will be useful.
\begin{lemma}[\cite{mckay18}, Lemma~2.1]\label{l2.1}
Let $t=t(n)\geq 1$ be an integer and
$e_1,\ldots,e_{t}$ be distinct $r$-sets of $[n]$. For any given integer $r\geq 3$,
let $H$ be an $r$-graph that is chosen uniformly at random from $\mathcal{H}_r(n,m)$.
Then the probability that $e_1,\ldots,e_t$ are edges of $H$ is at most
$\bigl( \frac{m}{N}\bigr)^t$.
\end{lemma}
\begin{lemma}[\cite{mckay18}, Lemma~2.2]\label{l2.2}
Let $r$, $t$ and $\alpha$ be integers such that $r,t,\alpha=O(1)$ and $0\leq\alpha\leq rt$.
If a hypergraph $H$ is chosen uniformly at random from $\mathcal{H}_r(n,m)$,
then the expected number of sets of $t$ edges of $H$ whose union has
$rt-\alpha$ or fewer vertices is $O\bigl(m^t n^{-\alpha}\bigr)$.
\end{lemma}
We will need the following Lemma~\ref{l2.6} from~\cite{green06} and
Lemma~\ref{l2.7} from~\cite{jason00} to obtain the enumeration formula for
$\mathcal{S}(n,r,\ell; m)$ and
the distribution of the number of isolated vertices in $\mathbb{S}(n,r,\ell; m)$.
We state them here for completeness.
\begin{lemma}[\cite{green06}, Corollary~4.5]\label{l2.6}
Let $N\geq 2$ be an integer, and for $1\leq i\leq N$, let
real numbers $A(i)$, $B(i)$ be given such that
$A(i)\geq 0$ and $1-(i-1)B(i)\geq 0$. Define $A_1=\min_{i=1}^NA(i)$,
$A_2=\max_{i=1}^NA(i)$, $C_1=\min_{i=1}^NA(i)B(i)$
and $C_2=\max_{i=1}^NA(i)B(i)$. Suppose that there exists
a real number $\hat{c}$ with $0<\hat{c}< \frac{1}{3}$
such that $\max\{A/N,|C|\}\leq \hat{c}$ for all $A\in [A_1,A_2]$, $C\in[C_1,C_2]$.
Define $n_0$, $n_1$, $\cdots$, $n_N$ by $n_0=1$ and
\begin{align*}
\frac{n_i}{n_{i-1}}= \frac{A(i)}{i}(1-(i-1)B(i))
\end{align*}
for $1\leq i\leq N$, with the following interpretation: if $A(i)= 0$ or $1-(i-1)B(i)=0$, then $n_j=0$
for $i\leq j\leq N$. Then $\Sigma_1\leq \sum_{i=0}^{N}n_i\leq \Sigma_2$,
where $\Sigma_1=\exp[A_1- \frac{1}{2}A_1C_2]-(2e\hat{c})^N$ and
$\Sigma_2=\exp[A_2- \frac{1}{2}A_2C_1+ \frac{1}{2}A_2C_1^2]+(2e\hat{c})^N$.
\end{lemma}
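The recursion in Lemma~\ref{l2.6} is easy to evaluate numerically; the sketch below (our own code, for illustration) computes $\sum_{i=0}^N n_i$ and, for constant $A(i)=a$ and $B(i)=0$, recovers the partial exponential series, consistent with the bounds $\Sigma_1$ and $\Sigma_2$:

```python
def summation(A, B):
    """Sum n_0 + ... + n_N where n_0 = 1 and
    n_i / n_{i-1} = (A(i)/i) * (1 - (i-1) B(i)).
    A and B are sequences indexed from 1, so A[i-1] plays the role of A(i)."""
    total, n_i = 1.0, 1.0
    for i in range(1, len(A) + 1):
        n_i *= (A[i - 1] / i) * (1.0 - (i - 1) * B[i - 1])
        total += n_i
    return total
```

With $A(i)=a$ and $B(i)=0$ the sum is $\sum_{i=0}^N a^i/i!$, which for moderate $N$ is numerically indistinguishable from $e^a$, squeezed between $\Sigma_1$ and $\Sigma_2$.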
\begin{lemma}[\cite{jason00}, Corollary~6.8]\label{l2.7}
Let $X=\sum_{\alpha\in A}I_\alpha$ be a counting variable, where $I_\alpha$ is an indicator variable.
If $\lambda\geq 0$ and $\mathbb{E}[X]_k\rightarrow\lambda^k$
for every $k\geq 1$ as $n\rightarrow\infty$, then $X\xrightarrow{d}Po(\lambda)$.
\end{lemma}
\section{Enumeration of $\mathcal{S}(n,r,\ell; m)$ and others}\label{s:3}
In this section, we first consider the asymptotic enumeration formula for
$\mathcal{S}(n,r,\ell; m)$ as
$3\leq\ell \leq r-1$ and $m=o(n^{ \frac{\ell+1}{2}})$ to extend the case of
$\ell=2$ and $m=o(n^{ \frac{3}{2}})$ in~\cite{mckay18}.
We remark that
the proof follows along the same lines as~\cite{valelec,green06,mckay18}, and
we give the details here only for the
sake of completeness.
Let $\mathbb{P}(n,r,\ell; m)$ denote the probability that an
$r$-graph $H\in \mathcal{H}_r(n,m)$ chosen uniformly
at random is a partial Steiner $(n,r,\ell)$-system. Then
$|\mathcal{S}(n,r,\ell; m)|={\binom{N}{m}} \mathbb{P}(n,r,\ell; m)$.
Our task thus reduces to showing that $\mathbb{P}(n,r,\ell; m)$
equals the exponential factor in Theorem~\ref{t1.3}.
Define $M=\bigl\lceil\log n+ \frac{3^{\ell+2}r^{2\ell}m^2}{\ell!n^{\ell}}\bigr\rceil$.
Let $\mathcal{S}^+(n,r,\ell; m)\subset\mathcal{H}_r(n,m)$
be the set of $r$-graphs $H$ which satisfy the following
properties $\bf(a)$ to $\bf(c)$. We show that the expected number of $r$-graphs
in $\mathcal{H}_r(n,m)$ not satisfying the properties of
$\mathcal{S}^+(n,r,\ell; m)$ is so small that
removing these $r$-graphs from our main proof leads
to some simplifications.
$\bf(a)$\ The intersection of any two edges contains at most $\ell$ vertices.
$\bf(b)$\ $H$ contains only clusters of the type
shown in Figure~\ref{fig:1}, and the intersection of any two clusters contains at most one vertex.
(This implies that any three edges of $H$ involve at least
$3r-2\ell+1$ vertices and any four edges involve at least $4r-2\ell-1$
vertices. Thus, if there are two edges of $H$, for example $\{e_1,e_2\}$,
such that $|e_1\cup e_2|=2r-\ell$, then $|e\cap (e_1\cup e_2)|\leq \ell-1$
for any edge $e$ of $H$ other than $e_1$ and $e_2$. Similarly,
if there are four edges of $H$, for example $\{e_1,e_2, e_3,e_4\}$,
such that $|e_1\cup e_2|=2r-\ell$ and $|e_3\cup e_4|=2r-\ell$,
then $|(e_1\cup e_2)\cap (e_3\cup e_4)|\leq 1$.)
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{noncomponent1.pdf}
\caption{The cluster of $H\in\mathcal{S}^+(n,r,\ell;m)$.\label{fig:1}}
\end{figure}
$\bf(c)$\ There are at most $M$ clusters in $H$.
\begin{lemma}\label{t3.2}
For any given integers $r$ and $\ell$ such that $3\leq\ell\leq r-1$, let $m=m(n)$ be an integer with
$m=o(n^{ \frac{\ell+1}{2}})$.
Then, as $n\rightarrow \infty$,
\begin{align*}
\frac{|\mathcal{S}^+(n,r,\ell;\, m)|}
{|\mathcal{H}_r(n,m)|}=1-O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr).
\end{align*}
\end{lemma}
\begin{proof} Consider $H\in \mathcal{H}_r(n, m)$ chosen uniformly at random.
We apply Lemma~\ref{l2.2} several times
to show that $H$ satisfies the properties $\bf(a)$-$\bf(c)$ with probability
$1-O( \frac{m^2}{n^{\ell+1}})$.
Applying Lemma~\ref{l2.2} with $t=2$ and $\alpha=\ell+1$, the expected number of pairs of edges
spanning at most $2r-\ell-1$ vertices is $O( \frac{m^2}{n^{\ell+1}})$;
with $t=3$ and $\alpha=2\ell$, the expected
number of triples of edges spanning at most $3r-2\ell$ vertices is $O( \frac{m^3}{n^{2\ell}})=
O( \frac{m^2}{n^{\ell+1}})$ as $m=o(n^{ \frac{\ell+1}{2}})$ and $\ell\geq3$;
with $t=4$ and $\alpha=2\ell+2$, the expected number of
quadruples of edges spanning at most $4r-2\ell-2$ vertices is $O( \frac{m^4}{n^{2\ell+2}})
=O( \frac{m^2}{n^{\ell+1}})$. Thus, $H$ satisfies the properties $\bf(a)$ and $\bf(b)$ with probability
$1-O( \frac{m^2}{n^{\ell+1}})$.
Define the event $\mathcal{E}=
\{\text{There are at most}\ M\ \text{clusters in}\ H\}$
and $\mathcal{\overline{E}}$ as the complement of the event $\mathcal{E}$.
In the following, we also show that $\mathbb{P}[\mathcal{E}]=1-O( \frac{m^2}{n^{\ell+1}})$.
Let $d=M+1$. For $1\leq i\leq d$, let $\{x_1^i,\cdots,x_{\ell}^i\}\in \binom{[n]}{\ell}$ be a
link with edges $e_i$ and $e_i'$.
These links are called \textit{paired-distinct} if these $2d$
edges are all distinct.
Let $\mathcal{E}'=\left\{\text{There are at most}\ M\
\text {paired-distinct}\right.$ $\left. \text{links in}\ H\right\}$.
By Lemma~\ref{l2.1}, we have
\begin{align*}
\mathbb{P}[\mathcal{\overline{E}}']=O\biggl(\binom{n}{r-\ell}^{2d}
\binom{\binom{n}{\ell}}{d}\Bigl( \frac{m}{N}\Bigr)^{2d}\biggr)
=O\biggl(\biggl( \frac{r^{2\ell}e m^2}{d\ell!n^{\ell}}\biggr)^{d}\biggr)
=O\Bigl( \frac{1}{n^{\ell+1}}\Bigr),
\end{align*}
where the last two equalities are true because
$d\geq\frac{3^{\ell+2}r^{2\ell}m^2}{\ell!n^{\ell}}$ and $d\geq\log n$.
Suppose the properties $\bf(a)$ and $\bf(b)$ are true.
Since the number of clusters in $H$ is equal to the number of paired-distinct
links in $H$, it follows that $\mathbb{P}[\mathcal{\overline{E}}
\mid\text{Properties {\bf(a)} and {\bf(b)}\ hold}]
\leq\mathbb{P}[\mathcal{\overline{E}}']=O( \frac{1}{n^{\ell+1}})$
and $\mathbb{P}[\mathcal{\overline{E}}]= O( \frac{m^2}{n^{\ell+1}})$.
\end{proof}
For a nonnegative integer $t$, define
$\mathcal{S}^{+}(t)$ to
be the set of $r$-graphs $H\in \mathcal{S}^+(n,r,\ell; m)$
with exactly $t$ clusters, so that $|\mathcal{S}^+(n,r,\ell; m)|=\sum_{t=0}^{M}|\mathcal{S}^{+}(t)|$.
By Lemma~\ref{t3.2}, we have $|\mathcal{S}^+(n,r,\ell; m)|\neq0$, so
there exists $t$ such that $|\mathcal{S}^{+}(t)|\neq0$.
Note that
$\mathcal{S}(n,r,\ell; m)=\mathcal{S}^{+}(0)\neq\emptyset$;
it follows that
\begin{align}\label{e3.1}
\frac{1}{\mathbb{P}(n,r,\ell; m)}&=\Bigl(1-O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\Bigr)
\sum_{t=0}^{M} \frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}(n,r,\ell; m)|}
=\Bigl(1-O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\Bigr)
\sum_{t=0}^{M} \frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}^{+}(0)|}.
\end{align}
In order to calculate the ratio
$|\mathcal{S}^{+}(t)|/|\mathcal{S}^{+}(0)|$
when $1\leq t\leq M$, we design switchings relating the sizes of
$\mathcal{S}^{+}(t)$ and $\mathcal{S}^{+}(t-1)$.
Let $H\in \mathcal{S}^{+}(t)$. A {\it forward switching} from $H$ is used to
reduce the number of clusters in $H$. Take any cluster $\{e,f\}$ and remove it from $H$.
Define $H_0$ on the same vertex set $[n]$ with the edge set $E(H_0)=E(H)\setminus \{e,f\}$.
Take any $r$-set of $[n]$ no $\ell$ vertices of which belong to a common edge of $H_0$ and
add it as a new edge; denote the resulting graph by $H'$. Insert another new edge at an $r$-set of $[n]$,
again no $\ell$ vertices of which belong to a common edge of $H'$; denote the resulting graph by $H''$.
The two new edges of a forward switching have at most $\ell-1$ vertices in common, and
the operation reduces the number of clusters in $H$ by one.
A {\it reverse switching} is the reverse of a forward switching.
A reverse switching from $H''\in \mathcal{S}^{+}(t-1)$
is defined by sequentially removing two edges of $H''$ not containing a link, then
choosing a $(2r-\ell)$-set $T$ from $[n]$ no $\ell$
vertices of which belong to any remaining edge of $H''$, and then inserting two edges into $T$
so that they create a cluster.
\begin{lemma}\label{l5.3}
For any given integers $r$ and $\ell$ such that $3\leq\ell\leq r-1$, let $m=m(n)$ be an integer with
$m=o(n^{ \frac{\ell+1}{2}})$. Let $t$ be a positive integer
with $1\leq t\leq M$.\\
$(a)$\ Let $H\in \mathcal{S}^{+}(t)$. The number of forward switchings for $H$ is
$t N^2
(1+O( \frac{m}{n^{\ell}}))$.
\noindent$(b)$\ Let $H''\in \mathcal{S}^{+}(t-1)$. The number of reverse switchings for $H''$ is
$ \frac{(2r-\ell)!}{\ell!(r-\ell)!^2} \binom{m-2(t-1)}{2}
\binom{n}{2r-\ell}(1+O( \frac{m}{n^{\ell}}))$.
\end{lemma}
\begin{proof} $(a)$\ Let $H\in \mathcal{S}^{+}(t)$.
Let $\mathcal{R}(H)$ be the set of all forward switchings which can be applied to $H$.
There are exactly $t$ ways to choose a cluster. The number of $r$-sets at which to insert a new edge is
at most $N$. From this we subtract the $r$-sets that have $\ell$ vertices belonging to some
other edge of $H$, which is at most $ \binom{r}{\ell} m \binom{n-\ell}{r-\ell}=O( \frac{m}{n^\ell}) N$.
Thus, in each step of the forward switching, there are $N
(1+O( \frac{m}{n^{\ell}}))$ ways to choose the new edge, and
we have $|\mathcal{R}(H)|=t N^2
(1+O( \frac{m}{n^{\ell}}))$.
$(b)$\ Conversely, suppose that $H''\in \mathcal{S}^{+}(t-1)$.
Similarly, let $\mathcal{R}'(H'')$ be the set of all reverse switchings for $H''$.
There are exactly $2\binom{m-2(t-1)}{2}$ ways to delete two edges in sequence such that
neither of them contains a link. There are at most
$\binom{n}{2r-\ell}$ ways to choose a $(2r-\ell)$-set $T$ from $[n]$.
From this, we subtract the $(2r-\ell)$-sets
that have $\ell$ vertices belonging to some
other edge of $H''$, which is at most $\binom{r}{\ell} m \binom{n-\ell}{2r-2\ell}
=O( \frac{m}{n^\ell}) \binom{n}{2r-\ell}$.
For every $T$, there are $ \frac{1}{2} \binom{2r-\ell}{\ell} \binom{2r-2\ell}{r-\ell}$ ways to create a
cluster in $T$.
Thus, we have $|\mathcal{R}'(H'')|=\binom{2r-\ell}{\ell}
\binom{2r-2\ell}{r-\ell} \binom{m-2(t-1)}{2} \binom{n}{2r-\ell}
(1+O( \frac{m}{n^\ell}))$.
\end{proof}
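The combinatorial factor $\frac12\binom{2r-\ell}{\ell}\binom{2r-2\ell}{r-\ell}$ in the proof above counts the ways to place a cluster inside a fixed $(2r-\ell)$-set $T$; this can be confirmed by brute force for small $r$ and $\ell$ (an illustrative check, not part of the proof):

```python
from itertools import combinations
from math import comb

def cluster_placements(r, ell):
    """Count unordered pairs {e, f} of r-subsets of a (2r-ell)-set T with
    |e ∩ f| = ell (hence e ∪ f = T), i.e. the clusters that fit in T."""
    T = range(2 * r - ell)
    return sum(len(set(e) & set(f)) == ell
               for e, f in combinations(combinations(T, r), 2))

def closed_form(r, ell):
    """(1/2) C(2r-ell, ell) C(2r-2ell, r-ell), as in the proof above."""
    return comb(2 * r - ell, ell) * comb(2 * r - 2 * ell, r - ell) // 2
```

For instance, for $r=4$ and $\ell=3$ both counts give $\frac12\binom{5}{3}\binom{2}{1}=10$.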
\begin{corollary}\label{c5.4}
With notation as above, for each $1\leq t\leq M$, the following hold: \\
$(a)$\ $|\mathcal{S}^{+}(t)|>0$ if and only if $m\geq 2t$.
\\
$(b)$\ Let $t'$ be the first value of $t\leq M$
such that $\mathcal{S}^{+}(t)=\emptyset$,
or $t'=M+1$ if no such value exists.
Then, as $n\rightarrow\infty$, uniformly for $1\leq t< t'$,
\begin{align*}
\frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}^{+}(t-1)|}=
\binom{m-2(t-1)}{2} \frac{[r]_\ell^2}{\ell!t n^{\ell}}
\Bigl(1+O\Bigl( \frac{1}{n}\Bigr)\Bigr).
\end{align*}
\end{corollary}
\begin{proof} $(a)$\ Firstly, $m\geq 2t$ is clearly a necessary condition for $|\mathcal{S}^{+}(t)|>0$.
Conversely, suppose $m\geq 2t$. By Lemma~\ref{t3.2}, there is some $0\leq \hat{t}\leq M$ such that
$\mathcal{S}^{+}(\hat{t})\neq\emptyset$. We can move from $\hat{t}$ to $t$ by a sequence of forward
and reverse switchings, never letting the number of clusters exceed $M$.
Since the switching counts given in Lemma~\ref{l5.3} at each
step of this path are positive, we have $|\mathcal{S}^{+}(t)|>0$.
$(b)$\ By $(a)$, if $\mathcal{S}^{+}(t)=\emptyset$, then
$\mathcal{S}^{+}(t+1),\cdots,\mathcal{S}^{+}(M) =\emptyset$.
By the definition of $t'$, the left hand ratio is well defined.
By Lemma~\ref{l5.3}, we complete the proof of $(b)$, where $O( \frac{m}{n^{\ell}})$
is absorbed into $O( \frac{1}{n})$ as $m=o(n^{ \frac{\ell+1}{2}})$ and $\ell\geq 3$.
\end{proof}
Finally, by Lemma~\ref{l2.6}, we estimate
$\sum_{t=0}^{M} \frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}^{+}(0)|}$
in~\eqref{e3.1} to finish the proof of Theorem~\ref{t1.3}.
\begin{lemma}\label{l6.2}
For any given integers $r$ and $\ell$ such that $3\leq\ell\leq r-1$, let $m=m(n)$ be an
integer with $m=o(n^{ \frac{\ell+1}{2}})$.
With notation as above, as $n\rightarrow\infty$,
\begin{align*}
\sum_{t=0}^{M} \frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}^{+}(0)|}
=&\exp\biggl[ \frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}+O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\biggr].
\end{align*}
\end{lemma}
\begin{proof} Let $t'$ be as defined in Corollary~\ref{c5.4}(b). We have shown
$|\mathcal{S}^{+}(0)|=|\mathcal{S}(n,r,\ell; m)|\neq0$, so $t'\geq 1$. If
$t'=1$, then by Corollary~\ref{c5.4}(a) we have $m<2$ and the conclusion is obviously true.
In the following, suppose $t'\geq 2$. Define $n_{0},\cdots,n_{M}$ by $n_{0}=1$,
$n_{t}= |\mathcal{S}^{+}(t)|/|\mathcal{S}^{+}(0)|$
for $1\leq t<t'$ and $n_{t}=0$ for $t'\leq t\leq M$.
By Corollary~\ref{c5.4}(b), for $1\leq t< t'$, we have
\begin{equation}\label{e4.1}
\begin{aligned}[b]
\frac{n_{t}}{n_{t-1}}&= \frac{1}{t}
\binom{m-2(t-1)}{2} \frac{[r]_\ell^2}{\ell! n^{\ell}}
\Bigl(1+O\Bigl( \frac{1}{n}\Bigr)\Bigr).
\end{aligned}
\end{equation}
For $1\leq t\leq M$, define
\begin{equation}\label{e3.32}
\begin{aligned}[b]
A(t)&= \frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}
\Bigl(1+O\Bigl( \frac{1}{n}\Bigr)\Bigr),\\
B(t)&=\begin{cases} \frac{2(2m-2t+1)}{m(m-1)} & \text{for }1\leq t<t';\\ (t-1)^{-1} & \text{otherwise}.\end{cases}
\end{aligned}
\end{equation}
Comparing~\eqref{e4.1} with~\eqref{e3.32}, we further have $ \frac{n_{{t}}}{n_{t-1}}=
\frac{A(t)}{t}(1-(t-1)B(t))$.
Following the notation of Lemma~\ref{l2.6}, we have
$A_1,A_2=\frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}
(1+O( \frac{1}{n}))$. For $1\leq t< t'$, we have
$A(t)B(t)= \frac{[r]_\ell^2(2m-2t+1)}{\ell! n^{\ell}}
(1+O( \frac{1}{n}))$.
Thus, we have $A(t)B(t)= O( \frac{m}{n^\ell})$ for
$1\leq t< t'$. For the case $t'\leq t\leq M$ with $t'\geq2$, by Corollary~\ref{c5.4}(a),
we have $2\leq m<2t$. By~\eqref{e3.32}, we also have
$A(t)B(t)= O( \frac{m}{n^\ell})$ for
$t'\leq t\leq M$. In both cases, we have $C_1,C_2=O( \frac{m}{n^\ell})$.
Note that $|C|=o(1)$ for all $C\in[C_{1},C_{2}]$
as $m=o(n^{ \frac{\ell+1}{2}})$.
Let $\hat{c}= \frac{1}{2(3^{\ell+2})}$, then $\max\{A/M,|C|\}\leq \hat{c}< \frac{1}{3}$ and
$(2e\hat{c})^{M}=O( \frac{1}{n^{\ell+1}})$ as $n\rightarrow\infty$.
Lemma~\ref{l2.6} applies to obtain $\sum_{t=0}^{M} \frac{|\mathcal{S}^{+}(t)|}{|\mathcal{S}^{+}(0)|}
=\exp\bigl[ \frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}+
O\bigl( \frac{m^2}{n^{\ell+1}}\bigr)\bigr]$,
where $O( \frac{m^3}{n^{2\ell}})=O( \frac{m^2}{n^{\ell+1}})$
as $m=o(n^{ \frac{\ell+1}{2}})$.
\end{proof}
\begin{remark}By Lemma~\ref{l6.2} and the equation~\eqref{e3.1},
\begin{align*}
|\mathcal{S}(n,r,\ell; m)|&=\binom{N}{m} \exp\Bigl[ -\frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}+
O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\Bigr]\\
&= \frac{N^m}{m!} \exp\Bigl[ -\frac{[r]_\ell^2[m]_2}{2\ell! n^{\ell}}+
O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\Bigr],
\end{align*}
where $\binom{N}{m}= \frac{N^m}{m!}
\exp\bigl[O\bigl( \frac{m^{2}}{N}\bigr)\bigr]
= \frac{N^m}{m!}\exp\bigl[O\bigl( \frac{m^{2}}{n^{\ell+1}}\bigr)\bigr]$. This completes the proof of Theorem~\ref{t1.3}.
\end{remark}
\begin{corollary}\label{c1.6}
For any given positive integers $h$, $r$ and $\ell$ such that $2\leq \ell\leq r-1$,
let $m=m(n)$ be an integer with $m=o(n^{ \frac{\ell+1}{2}})$, $H\in\mathcal{S}(n,r,\ell; m)$
be chosen uniformly at random and $v_1,\cdots,v_h$ be distinct vertices in $[n]$.
Then, as $n\rightarrow \infty$,
\begin{align*}
\mathbb{P}[\deg (v_1)=\cdots=\deg (v_h)=0]=\exp\Bigl[- \frac{hrm}{n}
+O\Bigl( \frac{m}{n^2}+ \frac{m^2}{n^{\ell+1}}\Bigr)\Bigr].
\end{align*}
\end{corollary}
\begin{proof}By Theorem~\ref{t1.1} and Theorem~\ref{t1.3}, we have
\begin{align*}
\mathbb{P}[\deg (v)=0]&= \frac{|\mathcal{S}(n-1,r,\ell; m)|}{|\mathcal{S}(n,r,\ell; m)|}=
\frac{ \binom{n-1}{r}^m}{ \binom{n}{r}^m}\exp\biggl[O\Bigl( \frac{m^2}{n^{\ell+1}}\Bigr)\biggr]\\
&=\exp\biggl[- \frac{rm}{n}+O\Bigl( \frac{m}{n^2}+ \frac{m^2}{n^{\ell+1}}\Bigr)\biggr],
\end{align*}
where the last equality is true because $ \frac{ \binom{n-1}{r}}{\binom{n}{r}}
=\exp[- \frac{r}{n}+O( \frac{1}{n^2})]$.
Thus, for any given integer $h\geq 1$,
\begin{align*}
&\mathbb{P}[\deg (v_1)=\cdots=\deg (v_h)=0]= \frac{|\mathcal{S}(n-h,r,\ell; m)|}{|\mathcal{S}(n,r,\ell; m)|}\\
&= \frac{|\mathcal{S}(n-h,r,\ell; m)|}{|\mathcal{S}(n-h+1,r,\ell; m)|}\cdots
\frac{|\mathcal{S}(n-1,r,\ell; m)|}{|\mathcal{S}(n,r,\ell; m)|},
\end{align*}
and multiplying these $h$ ratios completes the proof of Corollary~\ref{c1.6}.
\end{proof}
\begin{remark}
By Theorem~\ref{t1.1} and Theorem~\ref{t1.3}, we have the asymptotic enumeration
formula for $\mathcal{S}(n,r,\ell; m)$ when $2\leq \ell\leq r-1$ and $m=o(n^{ \frac{\ell+1}{2}})$.
This will be used
to determine the hitting time of connectivity
for $\{\mathbb{S}(n,r,\ell; m)\}_m$ in Section~\ref{s:6}.
We also extend the probability that a
random linear $r$-graph with $m=o(n^{ \frac{3}{2}})$ edges contains a given
subhypergraph (Theorem~\ref{t1.2} in~\cite{mckay18}),
by similar arguments with appropriate modifications,
to the case $3\leq\ell \leq r-1$ and $m=o(n^{ \frac{\ell+1}{2}})$.
We present this in the Appendix for reference.
\end{remark}
\section{Hitting time of connectivity for $\{\mathbb{S}(n,r,\ell; m)\}_m$}\label{s:6}
As one application of Theorem~\ref{t1.1} to
Theorem~\ref{t1.4},
we consider the hitting time of connectivity for the partial Steiner $(n,r,\ell)$-system process
$\{\mathbb{S}(n,r,\ell; m)\}_m$ for any given integers $r$ and $\ell$ with $2\leq \ell\leq r-1$.
Let $\tau_{c}=\min\{m:\ \mathbb{S}(n,r,\ell; m)\ {\rm{is\ connected}}\}$ and
$\tau_{o}=\min\{m:\ \delta(\mathbb{S}(n,r,\ell; m))\geq 1\}$.
It is clear that $\tau_{o}\leq \tau_{c}$. Define
$m_{L}= \frac{n}{r}(\log n- \omega(n))$ and
$m_{R}= \frac{n}{r}(\log n+ \omega(n))$,
where $\omega(n)\rightarrow\infty$
arbitrarily slowly as $n\rightarrow\infty$; for convenience we take $\omega(n)=\log\log n$.
Theorem~\ref{t1.6} follows from a sequence of claims which we show next.
\bigskip
{\bf Claim 1}.~~\textit{W.h.p.}, there are at most $2\log n$
isolated vertices in $\mathbb{S}(n,r,\ell; m_{L})$,
while there are no isolated vertices in $\mathbb{S}(n,r,\ell; m_{R})$.
Thus, \textit{w.h.p.}, $\tau_{o}\in [m_{L},m_{R}]$.
\begin{proof}[Proof of Claim 1]\quad Let $X_{0,m}$ be the number
of isolated vertices in $\mathbb{S}(n,r,\ell; m)$, where
$m\in [m_{L},m_{R}]$. By Corollary~\ref{c1.6},
the $t$-th factorial moment of $X_{0,m}$ is
\begin{align}\label{e5.2}
\mathbb{E}[X_{0,m}]_t&=[n]_t\mathbb{P}[\deg (v_1)=\cdots=\deg (v_t)=0]\notag\\
&=[n]_t\exp\biggl[- \frac{trm}{n}
+O\Bigl( \frac{m}{n^2}+ \frac{m^2}{n^{\ell+1}}\Bigr)\biggr].
\end{align}
For $m=m_{R}$ and $t=1$, we have $\mathbb{E}[X_{0,m_{R}}]=
\exp\bigl[-\omega(n)+O\bigl( \frac{\log n}{n}+ \frac{\log^2 n}{n^{\ell-1}}\bigr)\bigr]\rightarrow0$
as $n\rightarrow\infty$. Thus, \textit{w.h.p.}, there are no isolated
vertices in $\mathbb{S}(n,r,\ell; m_{R})$.
For $m=m_{L}$ and $t=1$, we have $\mathbb{E}[X_{0,m_{L}}]=
\exp\bigl[\omega(n)+O\bigl( \frac{\log n}{n}+ \frac{\log^2 n}{n^{\ell-1}}\bigr)\bigr]\rightarrow\infty$
as $n\rightarrow\infty$.
For the second factorial moment, we have $\mathbb{E}[X_{0,m_L}]_2=\bigl(1+O\bigl( \frac{\log n}{n}+
\frac{\log^2 n}{n^{\ell-1}}\bigr)\bigr)\mathbb{E}^2[X_{0,m_L}]$.
By Chebyshev's inequality, \textit{w.h.p.}, $X_{0,m_L}$
is concentrated around its mean; since $\exp[\omega(n)]=\log n$ for $\omega(n)=\log\log n$,
w.h.p.\ there are at most $2\log n$ isolated vertices in $\mathbb{S}(n,r,\ell; m_{L})$.
\end{proof}
In fact, besides the isolated vertices, we show that the remaining
vertices in $\mathbb{S}(n,r,\ell; m_{L})$ are all in a giant component.
For any nonnegative integers $k$ and $h$,
let $Y_{k,h}$ be the number of components $C_{k,h}$ with exactly
$k$ vertices and exactly $h$ edges in $\mathbb{S}(n,r,\ell; m_{L})$.
By symmetry, suppose that $r\leq k\leq \frac{n}{2}$.
We also have $h=h(k)\geq \frac{k-1}{r-1}$ because
$ex(C_{k,h})\geq -1$, and
$h=h(k)\leq \min\bigl\{m_L, \binom{k}{\ell}/\binom{r}{\ell}\bigr\}$
because $C_{k,h}$ is also a partial Steiner $(k,r,\ell)$-system.
Define
$h_{\rm{min}}= \frac{k-1}{r-1}$, $h_{\rm{max}}=\min\bigl\{m_L, \binom{k}{\ell}/\binom{r}{\ell}\bigr\}$
and
$Y_k=\sum_{h_{\rm{min}}\leq h\leq h_{\rm{max}}} Y_{k,h}$.
Let $\mathcal{S}_{c}(k,r,\ell; h)$ be the set of all connected partial Steiner
$(k,r,\ell)$-systems in $\mathcal{S}(k,r,\ell; h)$.
We have
\begin{align*}
\mathbb{E}[Y_{k,h}]=
\frac{\binom{n}{k}|\mathcal{S}_{c}(k,r,{\ell}; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}{|\mathcal{S}(n,r,\ell; m_L)|}.
\end{align*} Since
$|\mathcal{S}_{c}(k,r,{\ell}; h)| \leq |\mathcal{S}(k,r,\ell; h)|$, we have
\begin{align}\label{e5.5}
\mathbb{E}[Y_{k,h}]
&\leq \frac{ \binom{n}{k}|\mathcal{S}(k,r,\ell; h)|
\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}{|\mathcal{S}(n,r,\ell; m_L)|},
\end{align}
and we will show
\begin{align*}
\sum_{r\leq k\leq \frac{n}{2}}\mathbb{E}[Y_{k}]=\sum_{r\leq k\leq \frac{n}{2}}\sum_{h_{\rm{min}}\leq h\leq h_{\rm{max}}}\mathbb{E}[Y_{k,h}]\rightarrow0
\end{align*}
as $n\rightarrow\infty$, which ensures that, \textit{w.h.p.}, the remaining
vertices in $\mathbb{S}(n,r,\ell; m_{L})$ are all in a giant
component.
Let $I_1=[r, \frac{n}{\log n}]$
and $I_2=[ \frac{n}{\log n}, \frac{n}{2}]$.
Firstly, we show $\sum_{k\in I_1}\mathbb{E}[Y_{k}]\rightarrow0$ in the following two claims.
\bigskip
{\bf Claim 2}.~~As $k\in I_1$,
\begin{align*}
& |\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|\\
&= O\biggl( \frac{ \binom{k}{r}^{h_{{\rm min}}}}{h_{{\rm min}}!}
\frac{ \binom{n-k}{r}^{m_L-h_{{\rm min}}}}{(m_L-h_{{\rm min}})!}
\exp\Bigl[- \frac{[r]_\ell^2 [m_L-h_{{\rm min}}]_2}{2\ell!(n-k)^\ell}\Bigr]\biggr).
\end{align*}
\begin{proof}[Proof of Claim 2]\quad Clearly, $|\mathcal{S}(k,r,\ell; h)|\leq \binom{ \binom{k}{r}}{h}$.
Since $n-k\rightarrow\infty$ for
$k\in I_1$ and $m_{L}= \frac{n}{r}(\log n- \omega(n))$, by Theorem~\ref{t1.1} and Theorem~\ref{t1.3}, we have
\begin{equation}\label{e5.77}
\begin{aligned}[b]
& |\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|\\
&= O\biggl( \frac{\binom{k}{r}^{h}}{h!} \frac{ \binom{n-k}{r}^{m_L-h}}{(m_L-h)!}
\exp\Bigl[- \frac{[r]_\ell^2 [m_L-h]_2}{2\ell!(n-k)^\ell}\Bigr]\biggr).
\end{aligned}
\end{equation}
Let
\begin{align*}
g_1(h)= \frac{\binom{k}{r}^{h}}{h!} \frac{\binom{n-k}{r}^{m_L-h}}{(m_L-h)!}
\exp\Bigl[- \frac{[r]_\ell^2 [m_L-h]_2}{2\ell!(n-k)^\ell}\Bigr].
\end{align*} We next show that $g_1(h)$ is decreasing in $h$.
Note that
\begin{align*}
\frac{g_1(h+1)}{g_1(h)}&= \frac{\binom{k}{r}}{h+1} \frac{m_L-h}{\binom{n-k}{r}}
\exp\biggl[- \frac{[r]_\ell^2 [m_L-h]_2}{2\ell!(n-k)^\ell}+ \frac{[r]_\ell^2 [m_L-h-1]_2}{2\ell!(n-k)^\ell}\biggr]\\
&= \frac{\binom{k}{r}}{h+1} \frac{m_L-h}{\binom{n-k}{r}}\exp\Bigl[O\Bigl( \frac{m_L}{(n-k)^\ell}\Bigr)\Bigr]\\
&= O\Bigl( \frac{\binom{k}{r}}{h+1} \frac{n\log n}{\binom{n-k}{r}}\Bigr)\qquad
\Bigl( \text{by}\ m_L= \frac{n}{r}(\log n-\omega(n))\ \text{and}\ \ell\geq 2\Bigr)\\
&=O\Bigl( \frac{k^{r-1}\log n}{n^{r-1}}\Bigr)\rightarrow 0,
\end{align*}
where the last equality is true as $h\geq h_{{\rm min}}\geq \frac{k}{r}$, $r\leq k\leq \frac{n}{\log n}$
and $r\geq 3$.
Thus, by the estimate in~\eqref{e5.77},
the proof of Claim 2 is complete.
\end{proof}
\bigskip
{\bf Claim 3}.~~$\sum_{k\in I_1}\mathbb{E}[Y_{k}]\rightarrow0$ as $n\rightarrow\infty$.
\begin{proof}[Proof of Claim 3]\quad By Theorem~\ref{t1.1}, Theorem~\ref{t1.3}
and $m_L= \frac{n}{r}(\log n- \omega(n))$, we also have
\begin{align}\label{e5.8}
|\mathcal{S}(n,r,\ell; m_L)|\sim \frac{\binom{n}{r}^{m_L}}{m_L!}
\exp\Bigl[- \frac{[r]_\ell^2 [m_L]_2}{2\ell!n^\ell}\Bigr].
\end{align}
By Claim 2, $h\leq m_L$, and the estimates in~\eqref{e5.5} and~\eqref{e5.8}, it follows that
\begin{align}\label{e5.81}
\mathbb{E}[Y_k]&\leq m_L\mathbb{E}[Y_{k,h}]\notag\\&=
O\biggl( \frac{ \binom{n}{k} m_L \frac{ \binom{k}{r}^{h_{{\rm min}}}}{h_{{\rm min}}!}
\frac{\binom{n-k}{r}^{m_L-h_{{\rm min}}}}{(m_L-h_{{\rm min}})!}
\exp\bigl[- \frac{[r]_\ell^2 [m_L-h_{{\rm min}}]_2}{2\ell!(n-k)^\ell}\bigr]}
{ \frac{\binom{n}{r}^{m_L}}{m_L!}\exp\bigl[- \frac{[r]_\ell^2 [m_L]_2}{2\ell!n^\ell}\bigr]}\biggr)\notag\\
&=O\biggl( \frac{ \binom{n}{k} \binom{k}{r}^{h_{{\rm min}}}
\binom{n-k}{r}^{m_L-h_{{\rm min}}}m_L^{1+h_{{\rm min}}}}{ h_{{\rm min}}! \binom{n}{r}^{m_L}}\biggr),
\end{align}
where the last equality is correct because
\begin{align*}
\exp\Bigl[- \frac{[r]_\ell^2 [m_L-h_{{\rm min}}]_2}{2\ell!(n-k)^\ell}+
\frac{[r]_\ell^2 [m_L]_2}{2\ell!n^\ell}\Bigr]&\sim\exp\Bigl[-
\frac{[r]_\ell^2 [m_L]_2}{2\ell!n^\ell}\Bigl(- \frac{2h_{\rm min}}{m_L}+ \frac{k\ell}{n}\Bigr)\Bigr]\\
&=
\exp\Bigl[O\Bigl( \frac{m_Lh_{\rm min}}{n^\ell}\Bigr)\Bigr]=O(1),
\end{align*}
as $k\in I_1$ and $\ell\geq 2$.
Note that
\begin{align*}
&\binom{n}{r}^{m_L}\sim \frac{n^{rm_L}}{r!^{m_L}}
\exp\Bigl[-
\frac{r(r-1) m_L}{2n}\Bigr],\\
&\binom{n-k}{r}^{m_L-h_{{\rm min}}}\sim
\frac{n^{r(m_L-h_{{\rm min}})}}{r!^{m_L-h_{{\rm min}}}}
\exp\Bigl[- \frac{kr(m_L-h_{{\rm min}})}{n}-
\frac{r(r-1) (m_L-h_{{\rm min}})}{2n}\Bigr],
\end{align*}
substituting these expansions into~\eqref{e5.81}, we further have
\begin{align}\label{e5.6}
\mathbb{E}[Y_k]
&=O\biggl( \frac{n^k m_L^{1+h_{{\rm min}}}k^{rh_{{\rm min}}}}{k!h_{{\rm min}}!n^{rh_{{\rm min}}}}
\exp\Bigl[- \frac{kr(m_L-h_{{\rm min}})}{n}\Bigr]\biggr)\notag\\
&=O\biggl( \frac{k^{rh_{{\rm min}}}(\log n)^{k+1+ h_{{\rm min}}}}
{r^{1+h_{{\rm min}}}k!h_{{\rm min}}!n^{k- 2}}
\exp\Bigl[ \frac{kr(k-1)}{n(r-1)}\Bigr]\biggr),
\end{align}
where the last equality is correct because $m_L= \frac{n}{r}(\log n- \omega(n))$, $(r-1)h_{{\rm min}}=k-1$,
and $\exp\bigl[ \frac{kr m_L}{n}\bigr]=n^k(\log n)^{-k}$.
By $k!\geq ( \frac{k}{e})^k$ and
$h_{{\rm min}}!\geq ( \frac{h_{{\rm min}}}{e})^{h_{{\rm min}}}
\geq ( \frac{k}{re})^{h_{{\rm min}}}$, the equation~\eqref{e5.6} is finally reduced to
\begin{equation}\label{e5.55}
\begin{aligned}
\mathbb{E}[Y_k]
&=O\biggl( \frac{(\log n)^{k+ h_{{\rm min}}+1} }{k n^{k- 2}}
\exp\Bigl[ k+ \frac{k-1}{r-1}+ \frac{kr(k-1)}{n(r-1)}\Bigr]
\biggr).
\end{aligned}
\end{equation}
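The elementary factorial lower bounds invoked in this reduction are easy to confirm by computer; the following illustrative Python check (not part of the proof) verifies $k!\geq (k/e)^k$ over a modest range:

```python
import math

# Verify the Stirling-type lower bound k! >= (k/e)^k used to pass
# from (e5.6) to (e5.55); the range is kept small to avoid float overflow.
def stirling_lower_bound_holds(k: int) -> bool:
    return math.factorial(k) >= (k / math.e) ** k

assert all(stirling_lower_bound_holds(k) for k in range(1, 121))
```

The bound $h_{\min}!\geq (h_{\min}/e)^{h_{\min}}$ is the same inequality applied to $h_{\min}$.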
Let
\begin{align*}
g_2(k)= \frac{(\log n)^{k+ \frac{k-1}{r-1}+1}}{ n^{k- 2}}
\exp\Bigl[ k+ \frac{k-1}{r-1}+ \frac{kr(k-1)}{n(r-1)}\Bigr].
\end{align*} Then $g_2(k)$ is decreasing in $k$, since $g_2(k+1)/g_2(k)=
O(n^{-1}(\log n)^{1+ \frac{1}{r-1}})\rightarrow 0$;
hence $g_2(k)=O(n^{2-r}(\log n)^{ r+2})$ for all $k\in I_1$,
and, by~\eqref{e5.55}, $\mathbb{E}[Y_k]=O( k^{-1}n^{2-r}(\log n)^{ r+2})$.
Since $\sum_{k\in I_1} k^{-1}=O(\log n)$, we obtain
$\sum_{k\in I_1}\mathbb{E}[Y_k]=O(n^{2-r}(\log n)^{ r+3})\rightarrow0$,
which completes the proof of Claim 3.
\end{proof}
In the following, we show $\sum_{k\in I_2}\mathbb{E}[Y_{k}]\rightarrow0$
as $n\rightarrow\infty$. Since
$k\rightarrow\infty$ and $n-k\rightarrow\infty$ as $k\in I_2$,
by Theorem~\ref{t1.1} and Theorem~\ref{t1.3}
for $2\leq \ell \leq r-1$,
\begin{equation}\label{e5.9}
\begin{aligned}
|\mathcal{S}(k,r,\ell; h)|&\sim{ \frac{ {\binom{k}{r}}^h}{h!}}
\exp\biggl[- \frac{[r]_\ell^2[h]_2}{2\ell!k^\ell}\biggr],\\
|\mathcal{S}(n-k,r,\ell; m_L-h)|&\sim{ \frac{ {\binom{n-k}{r}}^{m_L-h}}{(m_L-h)!}}
\exp\biggl[- \frac{[r]_\ell^2[m_L-h]_2}{2\ell!(n-k)^\ell}\biggr].
\end{aligned}
\end{equation}
\bigskip
{\bf Claim 4}.~~$|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|$
is decreasing in $h$ for $k\in I_2$ and $h\geq \frac{m_L}{2}$.
\begin{proof}[Proof of Claim 4]\quad By the asymptotics in~\eqref{e5.9},
we have
\begin{align*}
& \frac{|\mathcal{S}(k,r,\ell; h+1)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h-1)|}
{|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}\\
&\sim \frac{\binom{k}{r}}{(h+1)} \frac{m_L-h}{\binom{n-k}{r}}
\exp\biggl[- \frac{[r]_\ell^2h}{\ell!k^\ell}+ \frac{[r]_\ell^2(m_L-h-1)}{\ell!(n-k)^\ell}\biggr].
\end{align*}
Note that $\exp\bigl[- \frac{[r]_\ell^2h}{\ell!k^\ell}+ \frac{[r]_\ell^2(m_L-h-1)}{\ell!(n-k)^\ell}\bigr]<1$
and $ \frac{m_L-h}{(h+1)}<1$
for $k\in I_2$ and $h\geq \frac{m_L}{2}$, and hence
\begin{align*}
& \frac{|\mathcal{S}(k,r,\ell; h+1)|\cdot|\mathcal{S}
(n-k,r,\ell; m_L-h-1)|}{|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}
< \Bigl( \frac{k}{n-k}\Bigr)^r\leq 1
\end{align*}which completes the proof of Claim 4.
\end{proof}
\bigskip
{\bf Claim 5}.~~For $k\in I_2$ and $h\leq \frac{m_L}{2}$,
\begin{align*}
\exp\biggl[- \frac{[r]_\ell^2[h]_2}{2\ell!k^\ell}- \frac{[r]_\ell^2[m_L-h]_2}
{2\ell!(n-k)^\ell}+ \frac{[r]_\ell^2[m_L]_2}{2\ell!n^\ell}\biggr]=O(1).
\end{align*}
\begin{proof}[Proof of Claim 5]\quad
For $k\in I_2$ and $ \frac{k-1}{r-1}\leq h\leq \frac{m_L}{2}$, we have
$h\rightarrow\infty$ and $m_L-h\rightarrow\infty$,
so $[h]_2\sim h^2$, $[m_L-h]_2\sim (m_L-h)^2$
and $[m_L]_2\sim m_L^2$. Write $h=t_1m_L$ and $k=t_2n$ with
$0<t_1\leq \frac{1}{2}$ and $0<t_2\leq \frac{1}{2}$.
The claim then follows from the inequality $ \frac{t_1^2}{t_2^\ell}+ \frac{(1-t_1)^2}{(1-t_2)^\ell} \geq 1$, which holds for $\ell\geq 2$.
\end{proof}
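The two-variable inequality that closes the proof of Claim 5 is also easy to sanity-check on a grid; an illustrative Python check (with the stated ranges $0<t_1,t_2\leq\frac{1}{2}$ and $\ell\geq 2$, not part of the proof):

```python
# Grid check of t1^2/t2^l + (1-t1)^2/(1-t2)^l >= 1 for 0 < t1, t2 <= 1/2 and l >= 2.
def claim5_lhs(t1: float, t2: float, l: int) -> float:
    return t1**2 / t2**l + (1 - t1)**2 / (1 - t2)**l

grid = [i / 100 for i in range(1, 51)]   # values in (0, 1/2]
assert all(claim5_lhs(t1, t2, l) >= 1.0
           for l in (2, 3, 4) for t1 in grid for t2 in grid)
```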
\bigskip
{\bf Claim 6}.~~For $k\in I_2$ and $h\leq \frac{m_L}{2}$,
\begin{align*}
\frac{|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}{|\mathcal{S}(n,r,\ell; m_L)|}
=O\Bigl(n^{-\frac{1}{2}}(\log n)^{- \frac{(r-1)n}{r}}\Bigr).
\end{align*}
\begin{proof}[Proof of Claim 6]\quad Note that $k\rightarrow\infty$ and $n-k\rightarrow\infty$ as $k\in I_2$.
By Theorem~\ref{t1.1}, Theorem~\ref{t1.3} and the equations shown in~\eqref{e5.9}, we have
\begin{align}\label{e5.10}
& \frac{|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}{|\mathcal{S}(n,r,\ell; m_L)|}\notag\\
&\sim \frac{ \binom{k}{r}^{h} \binom{n-k}{r}^{m_L-h}}{\binom{n}{r}^{m_L}} \frac{m_L!}{h!(m_L-h)!}
\exp\biggl[- \frac{[r]_\ell^2[h]_2}{2\ell!k^\ell}- \frac{[r]_\ell^2[m_L-h]_2}
{2\ell!(n-k)^\ell}+ \frac{[r]_\ell^2[m_L]_2}{2\ell!n^\ell}\biggr]\notag\\
&=O \Bigl( \frac{ \binom{k}{r}^{h} \binom{n-k}{r}^{m_L-h}}{\binom{n}{r}^{m_L}} \frac{m_L!}{h!(m_L-h)!}\Bigr),
\end{align}
where the last equality holds by Claim 5.
Define
\begin{align*}
g_3(k)= \frac{\binom{k}{r}^{h}\binom{n-k}{r}^{m_L-h}}
{\binom{n}{r}^{m_L}} \frac{m_L!}{h!(m_L-h)!}.
\end{align*} For any $k\in I_2$, consider $h\in [ \frac{k-1}{r-1}, \frac{m_L}{2}]$.
Since $h\rightarrow\infty$ and $m_L-h\rightarrow\infty$,
by Stirling's formula, we have $h!\sim \sqrt{2\pi h}\bigl( \frac{h}{e}\bigr)^h$,
$(m_L-h)!\sim \sqrt{2\pi (m_L-h)}\bigl( \frac{m_L-h}{e}\bigr)^{m_L-h}$
and $m_L!\sim \sqrt{2\pi m_L}\bigl( \frac{m_L}{e}\bigr)^{m_L}$,
then we have
\begin{align}\label{e5.11}
g_3(k)&< \frac{\sqrt{m_L}}{\sqrt{ h(m_L-h)}} \frac{k^{rh}(n-k)^{r(m_L-h)}}
{n^{rm_L}} \frac{(m_L)^{m_L}}{h^h(m_L-h)^{m_L-h}}\notag\\
&\leq \frac{\sqrt{m_L}}{\sqrt{ h(m_L-h)}}\Bigl( \frac{h}{m_L}\Bigr)^{rh}
\Bigl(1- \frac{h}{m_L}\Bigr)^{r(m_L-h)} \frac{(m_L)^{m_L}}{h^h(m_L-h)^{m_L-h}},
\end{align}
where the last inequality holds because, for each $h$, $k^{rh}(n-k)^{r(m_L-h)}$
attains its maximum at $k= \frac{h n}{m_L}$.
Hence, $h= \frac{km_L}{n}\in [ \frac{m_L}{\log n}, \frac{m_L}{2}]$
in~\eqref{e5.11} as $k\in I_2$.
Let
\begin{align*}
g_4(h)&=\frac{\sqrt{m_L}}{\sqrt{ h(m_L-h)}}\Bigl( \frac{h}{m_L}\Bigr)^{rh}
\Bigl(1- \frac{h}{m_L}\Bigr)^{r(m_L-h)} \frac{(m_L)^{m_L}}{h^h(m_L-h)^{m_L-h}}.
\end{align*}
Note that $g_4'(h)=g_4(h)\bigl( -\frac{1}{2h}+ \frac{1}{2(m_L-h)} +(r-1)
\log \frac{h}{m_L-h}\bigr)\leq 0$ for $h\in [ \frac{m_L}{\log n}, \frac{m_L}{2}]$.
Substituting $h= \frac{m_L}{\log n}$ and the value of $m_L$ into the above display, we obtain
\begin{align*}
\frac{\sqrt{m_L}}{\sqrt{ h(m_L-h)}}=O(n^{-1/2})\quad {\text {and}}\quad
\frac{(m_L)^{m_L}}{h^h(m_L-h)^{m_L-h}}=(\log n)^{ \frac{m_L}{\log n}}
\Bigl( 1-\frac{1}{\log n}\Bigr)^{ -m_L(1-\frac{1}{\log n})},
\end{align*}
and then
\begin{align*}
& \frac{|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|}{|\mathcal{S}(n,r,\ell; m_L)|}\\
&=O\biggl( n^{- \frac{1}{2}} \bigl( \log n\bigr)^{ -(r-1)\frac{m_L}{\log n}}
\Bigl( 1-\frac{1}{\log n}\Bigr)^{ (r-1)m_L(1-\frac{1}{\log n})}\biggr)\\
& =O\Bigl(n^{- \frac{1}{2}} \bigl( \log n\bigr)^{- \frac{(r-1)n}{r}}\Bigr),
\end{align*}
where the last equality is correct because $(r-1)\frac{m_L}{\log n}\sim \frac{(r-1)n}{r}$,
$1-x\leq \exp [-x]$ for all $x$
and $ \frac{(r-1)m_L}{\log n}(1-\frac{1}{\log n})> \frac{n}{2}$ as $n\rightarrow\infty$.
We complete the proof of Claim 6.
\end{proof}
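The sign claim for $g_4'(h)$ above can also be tested numerically; an illustrative Python check (the values of $n$, $m_L$ and $r$ below are samples chosen only for illustration, not part of the proof):

```python
import math

# The bracketed factor in g_4'(h) should be nonpositive on [m_L/log n, m_L/2],
# so that g_4 is decreasing on that interval.
def g4_log_derivative(h: float, m_L: float, r: int) -> float:
    return -1/(2*h) + 1/(2*(m_L - h)) + (r - 1)*math.log(h/(m_L - h))

n = 10**6
m_L = n/3 * (math.log(n) - 1)   # sample shape m_L = (n/r)(log n - omega(n)), r = 3
lo, hi = m_L/math.log(n), m_L/2
hs = [lo + (hi - lo)*i/200 for i in range(201)]
assert all(g4_log_derivative(h, m_L, 3) <= 1e-9 for h in hs)
```

The factor vanishes exactly at $h=m_L/2$ and is strictly negative on the interior, matching the monotonicity used above.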
\bigskip
{\bf Claim 7}.~~$\sum_{k\in I_2}\mathbb{E}[Y_k]\rightarrow0$ as $n\rightarrow\infty$.
\begin{proof}[Proof of Claim 7]\quad
For any given $k\in I_2$, as the equation shown in~\eqref{e5.5}
and $ \binom{n}{k}\leq \binom{n}{ \frac{n}{2}}$, we have
\begin{align*}
\sum_{h\geq \frac{m_L}{2}}\mathbb{E}\bigl[Y_{k,h}\bigr]&\leq \frac{ \binom{n}{ \frac{n}{2}}}
{|\mathcal{S}(n,r,\ell; m_L)|}\sum_{h\geq \frac{m_L}{2}}
|\mathcal{S}(k,r,\ell; h)|\cdot|\mathcal{S}(n-k,r,\ell; m_L-h)|\\
&=O\Bigl(\binom{n}{ \frac{n}{2}}n^{- \frac{1}{2}}
\bigl( \log n\bigr)^{- \frac{(r-1)n}{r}}\Bigr),
\end{align*}
where the sum over $h\geq \frac{m_L}{2}$ is
dominated by the term with $h= \frac{m_L}{2}$ by Claim 4, and the last equality holds by Claim 6.
We further have
\begin{align*}
\mathbb{E}[Y_k]&=\sum_{h\leq \frac{m_L}{2}}\mathbb{E}[Y_{k,h}] + \sum_{h> \frac{m_L}{2}}\mathbb{E}[Y_{k,h}]\\
&=O\biggl( \binom{n}{ \frac{n}{2}} m_L n^{- \frac{1}{2}} \bigl( \log n\bigr)^{- \frac{(r-1)n}{r}}\biggr)\\
&=O\Bigl( 2^n (\log n)^{ -\frac{(r-1)n}{r}+1}\Bigr),
\end{align*}
where the last equality holds by Stirling's formula and $m_L= \frac{n}{r}(\log n-\omega(n))$.
At last, $\sum_{k\in I_2}\mathbb{E}[Y_k]=
O\bigl( 2^n n(\log n)^{ -\frac{(r-1)n}{r}+1}\bigr)\rightarrow0$ as $n\rightarrow\infty$.
\end{proof}
By Claim 1, Claim 3 and Claim 7, \textit{w.h.p.},
besides a set of isolated vertices in $\mathbb{S}(n,r,\ell; m_{L})$,
the remaining vertices are all in a giant component.
\bigskip
{\bf Claim 8}.~~\textit{W.h.p.}, $\mathbb{S}(n,r,\ell; m_{R})$ is connected; the argument is
the symmetric analogue of the analysis in Claims 2--7.
Thus, \textit{w.h.p.}, $\tau_{c}\in [m_{L},m_{R}]$.
\bigskip
{\bf Claim 9}.~~\textit{W.h.p.} $\tau_{o}=\tau_{c}$.
\begin{proof}[Proof of Claim 9]\quad Since $\mathbb{S}(n,r,\ell; m_{L})$
consists of a connected component and at most $2\log n$ isolated vertices $V_1$,
to create the connected $\mathbb{S}(n,r,\ell; m_{R})$, we add $m_{R}-m_{L}$ random edges.
By Theorem~\ref{t1.2} and Theorem~\ref{t1.4}, which bound the probability that the process contains any given edge,
running the partial Steiner $(n,r,\ell)$-system process $\{\mathbb{S}(n,r,\ell; m)\}_m$
up to $m_R$ and applying a union bound yields
\begin{align*}
\mathbb{P}[\tau_{o}<\tau_{c}]&\leq o(1)+\bigl(m_R-m_L\bigr)\binom{2\log n}{r} \frac{m_R}{\binom{n}{r}}\exp\biggl[
O\Bigl( \frac{1}{n^\ell}+ \frac{m_R^2}{n^{\ell+1}}\Bigr)\biggr]\\
&=o(1)+O\Bigl( \frac{n(\log n)^{r+1}\log\log n}{ \binom{n}{r}}\Bigr)\\
&=o(1).
\end{align*}
Hence, \textit{w.h.p.}, none of these $m_{R}-m_{L}$ edges is contained in $V_1$, and therefore $\tau_{o}=\tau_{c}$.
\end{proof}
We finally complete the proof of Theorem~\ref{t1.6}.
We also have a corollary on
the distribution of the number of isolated vertices in $\mathbb{S}(n,r,\ell; m)$ for
$m= \frac{n}{r}(\log n+c_n)$, where $c_n\rightarrow c\in \mathbb{R}$ as $n\rightarrow\infty$.
\begin{proof}[Proof of Corollary~\ref{c1.8}]\quad Let $m= \frac{n}{r}(\log n+c_n)$,
where $c_n\rightarrow c\in \mathbb{R}$ as $n\rightarrow\infty$. Let $X_{0,m}$ denote
the number of isolated vertices in $\mathbb{S}(n,r,\ell; m)$. Considering the factorial
moments of $X_{0,m}$ and applying Corollary~\ref{c1.6}, we have
\begin{align*}
\mathbb{E}[X_{0,m}]_t&=[n]_t\mathbb{P}[\deg (v_1)=\cdots=\deg (v_t)=0]\notag\\
&=[n]_t\exp\biggl[- \frac{trm}{n}
+O\Bigl( \frac{m}{n^2}+ \frac{m^2}{n^{\ell+1}}\Bigr)\biggr].
\end{align*}
Since $\mathbb{E}[X_{0,m}]_t\rightarrow \exp[-tc]$ for
every $t\geq 1$,
by Lemma~\ref{l2.7},
we have $X_{0,m}\xrightarrow{d}Po(e^{-c})$.
\end{proof}
\section{Conclusions}
For any given integers $r$ and $\ell$
with $2\leq\ell\leq r-1$, we have obtained the asymptotic enumeration formula
for $|\mathcal{S}(n,r,\ell; m)|$ when $m=o( n^{ \frac{\ell+1}{2}})$.
Applying the enumeration formula, we have shown that
$\{\mathbb{S}(n,r,\ell; m)\}_m$ has the same threshold
function for connectivity as $\{\mathbb{H}_r(n,m)\}_m$, independently of $\ell$,
and that $\{\mathbb{S}(n,r,\ell; m)\}_m$ becomes
connected exactly at the moment when the last
isolated vertex disappears. A natural question remains: what is the final size of the partial Steiner $(n,r,\ell)$-system process?
Recently, the differential equation method~\cite{wormald99}
(a short and simple proof of which was given by Warnke~\cite{warnke19})
has been used, guided by pseudorandom heuristics,
to track various key parameters in the evolution of several constrained graph processes,
which in turn yield upper or lower bounds on the final size when the processes terminate
(such as the triangle-free process~\cite{bohman09}, the triangle removal process~\cite{bohman15},
and other constrained processes~\cite{bennett15,bohman102,bohman19,kuhn16}).
As a next step, we aim to determine the final size of the partial Steiner $(n,r,\ell)$-system process
by means of the differential equation method.
\section*{Acknowledgement}
Most of this work was
finished when Fang Tian was a
visiting research fellow at the Australian National University. She is very grateful
for what she learned there.
Rodriguez-Villegas \cite{RV} discovered numerically some remarkable supercongruences on truncated hypergeometric series
related to a Calabi-Yau manifold. The simplest supercongruence of Rodriguez-Villegas is: for any odd prime $p$,
\begin{align}
\sum_{k=0}^{p-1}\frac{{2k\choose k}^2}{16^k} \equiv
(-1)^{(p-1)/2}\pmod{p^2}.
\label{eq:RV}
\end{align}
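Since the sum in \eqref{eq:RV} is finite for each prime, the supercongruence is easy to verify by computer; an illustrative Python check for the first few odd primes (not part of the text):

```python
from math import comb

# Check sum_{k=0}^{p-1} C(2k,k)^2/16^k == (-1)^((p-1)/2) (mod p^2) for odd primes p.
def rv_sum_mod_p2(p: int) -> int:
    m = p * p
    inv16 = pow(16, -1, m)   # 16 is invertible modulo p^2 for odd p
    return sum(comb(2*k, k)**2 * pow(inv16, k, m) for k in range(p)) % m

for p in (3, 5, 7, 11, 13):
    assert rv_sum_mod_p2(p) == (-1)**((p - 1)//2) % (p * p)
```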
It has caught the interest of many authors (see \cite{CLZ, GZ14, Mortenson1, SunZH, SunZW, Tauraso1,Tauraso2}). For example, Guo and Zeng \cite{GZ14} proved a $q$-analogue of
\eqref{eq:RV}:
\begin{align}
\sum_{k=0}^{p-1}\frac{(q;q^2)_k^2}{(q^2;q^2)_k^2} \equiv
(-1)^{(p-1)/2}q^{(1-p^2)/4}\pmod{[p]^2}\quad\text{for any odd prime $p$}. \label{eq:GZ-RV}
\end{align}
Here and in what follows,
$(a;q)_n=(1-a)(1-aq)\cdots (1-aq^{n-1})$
is the {\it $q$-shifted factorial}, and $[n]=1+q+\cdots+q^{n-1}$ is the {\it $q$-integer}.
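With this notation, the $q$-congruence \eqref{eq:GZ-RV} can be verified symbolically for small primes; an illustrative check using Python with SymPy (assumed available; not part of the text):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, base, n):
    # (a; base)_n = (1 - a)(1 - a*base)...(1 - a*base^(n-1))
    return sp.prod([1 - a*base**j for j in range(n)])

def gz_rv_holds(p: int) -> bool:
    # Is sum_{k=0}^{p-1} (q;q^2)_k^2/(q^2;q^2)_k^2 - (-1)^((p-1)/2) q^((1-p^2)/4)
    # divisible by [p]^2 as a rational function in q?
    lhs = sum(qpoch(q, q**2, k)**2 / qpoch(q**2, q**2, k)**2 for k in range(p))
    rhs = (-1)**((p - 1)//2) * q**sp.Rational(1 - p*p, 4)
    num, _ = sp.fraction(sp.cancel(lhs - rhs))
    bracket_p = sum(q**i for i in range(p))          # [p] = 1 + q + ... + q^(p-1)
    return sp.rem(sp.expand(num), sp.expand(bracket_p**2), q) == 0

assert gz_rv_holds(3) and gz_rv_holds(5)
```

The denominators involve only $(q^2;q^2)_k$ with $k<p$ and powers of $q$, which are coprime to $[p]$, so checking the numerator suffices.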
The $q$-congruence \eqref{eq:GZ-RV} has been further generalized by Guo, Pan, and Zhang \cite{GPZ},
Ni and Pan \cite{NP}, and Guo \cite{Guo-par}. A slight generalization of \eqref{eq:GZ-RV}
can be stated as follows (see \cite{Guo-par,NP}):
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q;q^2)_k^2}{(q^2;q^2)_k^2} \equiv
(-1)^{(n-1)/2}q^{(1-n^2)/4}\pmod{\Phi_n(q)^2}\quad\text{for positive odd $n$},
\end{align*}
where $\Phi_n(q)$ is the $n$-th {\it cyclotomic polynomial} in $q$.
Recently, the author and Zudilin \cite[Conjecture 5.3]{GuoZu} conjectured that, for $d\geqslant 3$ and $n\equiv -1\pmod{d}$,
\begin{align}
\sum_{k=0}^{n-1}\frac{(q;q^d)_k^d q^{dk}}{(q^d;q^d)_k^d} \equiv 0 \pmod{\Phi_n(q)^2}. \label{eq:d-1}
\end{align}
They \cite[Conjecture 5.4]{GuoZu} also conjectured that, for
$n,d\geqslant 2$ and $n\equiv 1\pmod{d}$,
\begin{align}
\sum_{k=0}^{n-1}\frac{(q^{-1};q^d)_k^d q^{dk}}{(q^d;q^d)_k^d} \equiv
0 \pmod{\Phi_n(q)^2}. \label{eq:d-2}
\end{align}
In this paper, we shall confirm the above two conjectures. It turns out that much more is true and we shall prove the following unified generalization of \eqref{eq:d-1} and \eqref{eq:d-2}.
\begin{thm}\label{main-1}
Let $d\geqslant 2$ be an integer. Let $r\leqslant d-2$ be an integer such that $\gcd(r,d)=1$. Then, for all positive integers $n$
with $n\equiv -r\pmod{d}$ and $n\geqslant d-r$, we have
\begin{align}
\sum_{k=0}^{n-1}\frac{(q^r;q^d)_k^d q^{dk}}{(q^d;q^d)_k^d} \equiv 0 \pmod{\Phi_n(q)^2}. \label{eq:main}
\end{align}
\end{thm}
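Theorem \ref{main-1} can be checked symbolically in small cases; an illustrative verification with Python and SymPy (the triples $(d,r,n)$ below are samples satisfying the hypotheses; not part of the proof):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, base, n):
    return sp.prod([1 - a*base**j for j in range(n)])

def lhs_sum(d: int, r: int, n: int):
    # sum_{k=0}^{n-1} (q^r; q^d)_k^d q^{dk} / (q^d; q^d)_k^d
    return sum(qpoch(q**r, q**d, k)**d * q**(d*k) / qpoch(q**d, q**d, k)**d
               for k in range(n))

def divisible_by_phi_sq(expr, n: int) -> bool:
    num, _ = sp.fraction(sp.cancel(expr))
    return sp.rem(sp.expand(num), sp.expand(sp.cyclotomic_poly(n, q)**2), q) == 0

# gcd(r, d) = 1, n == -r (mod d), n >= d - r in each case
assert divisible_by_phi_sq(lhs_sum(3, 1, 5), 5)
assert divisible_by_phi_sq(lhs_sum(4, 1, 3), 3)
assert divisible_by_phi_sq(lhs_sum(2, -1, 3), 3)
```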
It is clear that if $d\geqslant 3$ and $r=1$, then the congruence \eqref{eq:main} reduces to \eqref{eq:d-1}, while if $d\geqslant 2$ and $r=-1$,
then the congruence \eqref{eq:main} leads to \eqref{eq:d-2}.
For $d=2$ and $r=-1$, we have the following stronger result and conjecture.
\begin{thm} \label{main-2}
Let $n>1$ be a positive odd integer. Then
\begin{align}
\sum_{k=0}^{n-1}\frac{(q^{-1};q^2)_k^2 q^{2k}}{(q^2;q^2)_k^2} &\equiv 0 \pmod{[n]\Phi_n(q)}, \label{eq:main-2} \\[5pt]
\sum_{k=0}^{(n+1)/2}\frac{(q^{-1};q^2)_k^2 q^{2k}}{(q^2;q^2)_k^2} &\equiv 0 \pmod{[n]\Phi_n(q)}. \label{eq:main-3}
\end{align}
\end{thm}
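Both congruences of Theorem \ref{main-2} can likewise be verified symbolically for small odd $n$; an illustrative Python/SymPy check (not part of the proof):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, base, n):
    return sp.prod([1 - a*base**j for j in range(n)])

def term(k):
    # k-th summand (q^{-1}; q^2)_k^2 q^{2k} / (q^2; q^2)_k^2
    return qpoch(q**-1, q**2, k)**2 * q**(2*k) / qpoch(q**2, q**2, k)**2

def divisible(expr, modulus) -> bool:
    num, _ = sp.fraction(sp.cancel(expr))
    return sp.rem(sp.expand(num), sp.expand(modulus), q) == 0

def main2_holds(n: int) -> bool:
    modulus = sum(q**i for i in range(n)) * sp.cyclotomic_poly(n, q)  # [n]*Phi_n(q)
    full = sum(term(k) for k in range(n))
    truncated = sum(term(k) for k in range((n + 1)//2 + 1))
    return divisible(full, modulus) and divisible(truncated, modulus)

assert main2_holds(3) and main2_holds(5)
```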
\begin{conj}The congruences \eqref{eq:main-2} and \eqref{eq:main-3} still hold modulo $[n]^2$.
\end{conj}
We shall also give some similar results, such as
\begin{thm}\label{thm-1}
Let $n>1$ be a positive integer. Then
\begin{align}
\sum_{k=0}^{n-1}\frac{(q,q,q^4;q^6)_k q^{6k}}{(q^6;q^6)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 5\pmod{6}$,} \label{main-6-5}\\[5pt]
\sum_{k=0}^{n-1}\frac{(q^{-1}, q^{-1},q^{-4};q^6)_k q^{6k}}{(q^6;q^6)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2} \quad\text{if $n\equiv 1\pmod{6}$}. \label{main-6-1}
\end{align}
\end{thm}
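The two congruences of Theorem \ref{thm-1} can be verified symbolically in the smallest admissible cases $n=5$ and $n=7$; an illustrative Python/SymPy check (not part of the proof):

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, base, n):
    return sp.prod([1 - a*base**j for j in range(n)])

def lhs3(exps, n):
    # sum_k (q^{e1}, q^{e2}, q^{e3}; q^6)_k q^{6k} / (q^6; q^6)_k^3
    return sum(sp.prod([qpoch(q**e, q**6, k) for e in exps]) * q**(6*k)
               / qpoch(q**6, q**6, k)**3 for k in range(n))

def divisible_by_phi_sq(expr, n: int) -> bool:
    num, _ = sp.fraction(sp.cancel(expr))
    return sp.rem(sp.expand(num), sp.expand(sp.cyclotomic_poly(n, q)**2), q) == 0

assert divisible_by_phi_sq(lhs3((1, 1, 4), 5), 5)      # n = 5 == 5 (mod 6)
assert divisible_by_phi_sq(lhs3((-1, -1, -4), 7), 7)   # n = 7 == 1 (mod 6)
```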
We shall prove Theorems \ref{main-1} and \ref{thm-1} by using the {\it creative microscoping} method developed by the author and Zudilin \cite{GuoZu}. That is to say, to prove
a $q$-supercongruence modulo $\Phi_n(q)^2$, it is more convenient to establish its generalization with an additional parameter $a$ so that the generalized congruence
holds modulo $(1-aq^n)(a-q^n)$. The difference here is that we shall add the parameter $a$ in quite a different way for the proof of Theorem \ref{main-1}.
The proof of Theorem \ref{main-2} is based on Theorem \ref{main-1} and borrows some idea from \cite{GuoZu} for proving congruences modulo $[n]$.
We shall give more similar congruences modulo $\Phi_n(q)^2$ in Section 5 and propose some related open problems in the last section.
\section{Proof of Theorem \ref{main-1}}
We first establish the following parametric generalization of Theorem \ref{main-1}.
\begin{thm}
Let $d,r,n$ be given as in the conditions of Theorem \ref{main-1}. Then, modulo $(1-aq^n)(a-q^n)$,
\begin{align}
\sum_{k=0}^{n-1}\frac{(a^{d-1}q^r, a^{d-3}q^r,\ldots, a^2q^r;q^d)_k (a^{1-d}q^r, a^{3-d}q^r,\ldots, a^{-2}q^r;q^d)_k (q^r;q^d)_k q^{dk}}
{(a^{d-2}q^d, a^{d-4}q^d,\ldots,aq^d;q^d)_k (a^{2-d}q^d, a^{4-d}q^d,\ldots, a^{-1}q^d;q^d)_k (q^d;q^d)_k } \equiv 0\label{eq:a-1}
\end{align}
if $d$ is odd, and
\begin{align}
\sum_{k=0}^{n-1}\frac{(a^{d-1}q^r, a^{d-3}q^r,\ldots, aq^r;q^d)_k (a^{1-d}q^r, a^{3-d}q^r,\ldots, a^{-1}q^r;q^d)_k q^{dk}}
{(a^{d-2}q^d, a^{d-4}q^d,\ldots,q^d;q^d)_k (a^{2-d}q^d, a^{4-d}q^d,\ldots, q^d;q^d)_k } \equiv 0\label{eq:a-e}
\end{align}
if $d$ is even.
\end{thm}
\begin{proof} Since $\gcd(r,d)=1$ and $n\equiv -r\pmod{d}$, we have $\gcd(d,n)=1$, and so none of
the numbers $d,2d,\ldots,(n-1)d$ is divisible by $n$. This means that the denominators of the left-hand sides of \eqref{eq:a-1} and \eqref{eq:a-e} contain
neither the factor $1-aq^n$ nor the factor $1-a^{-1}q^n$.
Hence, for $a=q^{-n}$ or $a=q^n$, the left-hand side of \eqref{eq:a-1} can be written as
\begin{align}
\sum_{k=0}^{\frac{dn-n-r}{d}}\frac{(q^{r-(d-1)n}, q^{r-(d-3)n},\ldots, q^{r-2n};q^d)_k (q^{(d-1)n+r}, q^{(d-3)n+r},\ldots, q^{2n+r};q^d)_k (q^r;q^d)_k q^{dk}}
{(q^{d-(d-2)n}, q^{d-(d-4)n},\ldots, q^{d-n};q^d)_k (q^{(d-2)n+d}, q^{(d-4)n+d},\ldots, q^{n+d};q^d)_k (q^d;q^d)_k },\label{eq:a-2}
\end{align}
where we have used the fact that $(q^{r-(d-1)n};q^d)_k=0$ for $k>(dn-n-r)/d$; by the assumptions, $0<(dn-n-r)/d\leqslant n-1$.
Let
\begin{equation*}
{n\brack k}={n\brack k}_q=
\frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}
\end{equation*}
be the {\it $q$-binomial coefficient}. It is easy to see that
\begin{align}
\frac{(q^{r-(d-1)n};q^d)_k q^{dk}}{(q^d;q^d)_k} &=(-1)^k {(dn-n-r)/d\brack k}_{q^d} q^{d{k\choose 2}+(n+r-dn+d)k},\label{qdk-0}\\[5pt]
\frac{(q^{r-(d-3)n};q^d)_k}{(q^{d-(d-2)n};q^d)_k} &=\frac{(q^{d-(d-2)n+dk};q^d)_{(n+r-d)/d}}{(q^{d-(d-2)n};q^d)_{(n+r-d)/d}},\label{qdk-begin}\\[5pt]
\frac{(q^{r-(d-5)n};q^d)_k}{(q^{d-(d-4)n};q^d)_k} &=\frac{(q^{d-(d-4)n+dk};q^d)_{(n+r-d)/d}}{(q^{d-(d-4)n};q^d)_{(n+r-d)/d}},\\[5pt]
&\ \ \vdots \notag \\[5pt]
\frac{(q^{r-2n};q^d)_k}{(q^{d-3n};q^d)_k} &=\frac{(q^{d-3n+dk};q^d)_{(n+r-d)/d}}{(q^{d-3n};q^d)_{(n+r-d)/d}},
\end{align}
and
\begin{align}
\frac{(q^{(d-1)n+r};q^d)_k}{(q^{(d-2)n+d};q^d)_k} &=\frac{(q^{(d-2)n+dk+d};q^d)_{(n+r-d)/d}}{(q^{(d-2)n+d};q^d)_{(n+r-d)/d}},\\[5pt]
\frac{(q^{(d-3)n+r};q^d)_k}{(q^{(d-4)n+d};q^d)_k} &=\frac{(q^{(d-4)n+dk+d};q^d)_{(n+r-d)/d}}{(q^{(d-4)n+d};q^d)_{(n+r-d)/d}},\\[5pt]
&\ \ \vdots \notag \\[5pt]
\frac{(q^{2n+r};q^d)_k}{(q^{n+d};q^d)_k} &=\frac{(q^{2n+dk+d};q^d)_{(n+r-d)/d}}{(q^{n+d};q^d)_{(n+r-d)/d}},\\[5pt]
\frac{(q^r;q^d)_k}{(q^{d-n};q^d)_k} &=\frac{(q^{d-n+dk};q^d)_{(n+r-d)/d} }{(q^{d-n};q^d)_{(n+r-d)/d} }. \label{qdk-end}
\end{align}
Noticing that the right-hand sides of \eqref{qdk-begin}--\eqref{qdk-end} are all polynomials in $q^{dk}$ of degree $(n+r-d)/d$, and
$$
d{k\choose 2}+(n+r-dn+d)k=d{(dn-n-r)/d-k\choose 2}-d{(dn-n-r)/d\choose 2},
$$
we can write \eqref{eq:a-2} as
\begin{align}
\sum_{k=0}^{\frac{dn-n-r}{d}}(-1)^k q^{d{(dn-n-r)/d-k\choose 2}} {(dn-n-r)/d\brack k}_{q^d}P(q^{dk}), \label{eq:a-p}
\end{align}
where $P(q^{dk})$ is a polynomial in $q^{dk}$ of degree $(n+r-d)(d-1)/d=(dn-n-r)/d-(d-r-1)\leqslant (dn-n-r)/d-1$.
Recall that the finite form of the $q$-binomial theorem (see, for example, \cite[p. 36]{Andrews}) can be written as
\begin{align*}
\sum_{k=0}^{n}(-1)^k {n\brack k}q^{k\choose 2} z^k=(z;q)_n.
\end{align*}
Letting $z=q^{-j}$ and replacing $k$ with $n-k$ in the above equation, we obtain
\begin{align}
\sum_{k=0}^{n}(-1)^k {n\brack k}q^{{n-k\choose 2}+jk}=0\quad\text{for}\ 0\leqslant j\leqslant n-1. \label{eq:qbino}
\end{align}
This immediately implies that $\eqref{eq:a-2}=\eqref{eq:a-p}=0$. Namely, the congruence \eqref{eq:a-1} holds.
Along the same lines, we can prove the congruence \eqref{eq:a-e}.
\end{proof}
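The vanishing identity \eqref{eq:qbino}, the key ingredient of the proof above, is itself easy to confirm symbolically; an illustrative Python/SymPy check using the $q$-Pascal recurrence (not part of the proof):

```python
import sympy as sp

q = sp.symbols('q')

def qbinom(n: int, k: int):
    # Gaussian binomial coefficient via the q-Pascal recurrence
    # [n,k] = [n-1,k-1] + q^k [n-1,k]
    if k < 0 or k > n:
        return sp.Integer(0)
    if k == 0 or k == n:
        return sp.Integer(1)
    return sp.expand(qbinom(n - 1, k - 1) + q**k * qbinom(n - 1, k))

def alt_sum(n: int, j: int):
    # sum_{k=0}^{n} (-1)^k [n,k] q^{binom(n-k,2)+jk}, expected 0 for 0 <= j <= n-1
    return sp.expand(sum((-1)**k * qbinom(n, k) * q**(sp.binomial(n - k, 2) + j*k)
                         for k in range(n + 1)))

assert all(alt_sum(n, j) == 0 for n in range(1, 7) for j in range(n))
```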
\begin{proof}[Proof of Theorem {\rm\ref{main-1}}]Note that $\Phi_n(q)$ is a factor of $1-q^m$ if and only if $n$ divides $m$.
It follows that the limits of the denominators of \eqref{eq:a-1} and \eqref{eq:a-e} as $a\to1$ are relatively prime to $\Phi_n(q)$,
since $n$ is coprime with $d$.
On the other hand, the limit of $(1-aq^n)(a-q^n)$ as $a\to1$ has the factor $\Phi_n(q)^2$.
Thus, the congruence \eqref{eq:main} follows from the limiting case $a\to1$ of \eqref{eq:a-1} and \eqref{eq:a-e}.
\end{proof}
\section{Proof of Theorem \ref{main-2}}
Letting $d=2$ and $r=-1$ in \eqref{eq:main}, we see that, for odd $n>1$,
\begin{align}
\sum_{k=0}^{n-1}\frac{(q^{-1};q^2)_k^2 q^{2k}}{(q^2;q^2)_k^2} &\equiv 0 \pmod{\Phi_n(q)^2}, \label{eq:main-2-2} \\[5pt]
\sum_{k=0}^{(n+1)/2}\frac{(q^{-1};q^2)_k^2 q^{2k}}{(q^2;q^2)_k^2} &\equiv 0 \pmod{\Phi_n(q)^2}, \label{eq:main-3-2}
\end{align}
because $(q^{-1};q^2)_k\equiv 0\pmod{\Phi_n(q)}$ for $(n+1)/2<k\leqslant n-1$.
We now let $\zeta\ne1$ be an $n$-th root of unity, not necessarily primitive.
In other words, $\zeta$ is a primitive root of unity of odd degree $d\mid n$. Let $c_q(k)$ denote the $k$-th term on the left-hand side of \eqref{eq:main-2-2},
i.e.,
$$
c_q(k)=\frac{(q^{-1};q^2)_k^2 q^{2k}}{(q^2;q^2)_k^2}.
$$
The congruences \eqref{eq:main-2-2} and \eqref{eq:main-3-2} with $n=d$ imply that
\begin{align*}
\sum_{k=0}^{(d+1)/2}c_\zeta(k)=\sum_{k=0}^{d-1}c_\zeta(k)=0.
\end{align*}
Observe that
$$
\frac{c_\zeta(\ell d+k)}{c_\zeta(\ell d)}
=\lim_{q\to\zeta}\frac{c_q(\ell d+k)}{c_q(\ell d)}
=c_\zeta(k).
$$
We get
$$
\sum_{k=0}^{n-1}c_\zeta(k)=\sum_{\ell=0}^{n/d-1}\sum_{k=0}^{d-1}c_\zeta(\ell d+k)
=\sum_{\ell=0}^{n/d-1}c_\zeta(\ell d) \sum_{k=0}^{d-1}c_\zeta(k)=0,
$$
and
$$
\sum_{k=0}^{(n+1)/2}c_\zeta(k)
=\sum_{\ell=0}^{(n/d-3)/2} c_\zeta(\ell d)\sum_{k=0}^{d-1}c_\zeta(k)+\sum_{k=0}^{(d+1)/2}c_\zeta((n-d)/2+k)=0,
$$
which means that the sums $\sum_{k=0}^{n-1}c_q(k)$ and $\sum_{k=0}^{(n+1)/2}c_q(k)$ are both divisible
by the cyclotomic polynomial $\Phi_d(q)$. As this is true for an arbitrary divisor $d>1$ of $n$, we conclude that these two sums are both divisible by
\begin{equation*}
\prod_{\substack{d\mid n,\, d>1}}\Phi_d(q)=[n].
\end{equation*}
Namely, the congruences \eqref{eq:main-2-2} and \eqref{eq:main-3-2} are also true modulo $[n]$. The proof then follows from $\gcd([n],\Phi_n(q)^2)=[n]\Phi_n(q)$.
\qed
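The factorization of $[n]$ into cyclotomic polynomials used in the last step is standard and can be confirmed symbolically; an illustrative Python/SymPy check (not part of the proof):

```python
import sympy as sp

q = sp.symbols('q')

def qinteger(n: int):
    # [n] = 1 + q + ... + q^{n-1}
    return sum(q**i for i in range(n))

def phi_product(n: int):
    # prod_{d | n, d > 1} Phi_d(q)
    return sp.prod([sp.cyclotomic_poly(d, q) for d in sp.divisors(n) if d > 1])

assert all(sp.expand(phi_product(n) - qinteger(n)) == 0 for n in (2, 3, 6, 9, 12, 15))
```

This is the identity $\prod_{d\mid n}\Phi_d(q)=q^n-1$ with the factor $\Phi_1(q)=q-1$ removed.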
\section{Proof of Theorem \ref{thm-1}}
The proof is similar to that of Theorem \ref{main-1}.
We first prove the following result.
\begin{align}
\sum_{k=0}^{n-1}\frac{(a^5q, q/a^5, q^4;q^6)_k q^{6k}}{(a^4q^6, q^6/a^4, q^6;q^6)_k}
\equiv 0 \pmod{(1-aq^n)(a-q^n)}\quad\text{if $n\equiv 5\pmod{6}$}. \label{eq:6-5}
\end{align}
The $r=1$ and $d=6$ case of \eqref{qdk-0} gives
\begin{align*}
\frac{(q^{1-5n};q^6)_k q^{6k}}{(q^6;q^6)_k} &=(-1)^k {(5n-1)/6\brack k}_{q^6} q^{6{(5n-1)/6-k\choose 2}-6{(5n-1)/6\choose 2} }.
\end{align*}
Moreover, we have
\begin{align*}
\frac{(q^{5n+1};q^6)_k}{(q^{4n+6};q^6)_k} &=\frac{(q^{4n+6k+6};q^6)_{(n-5)/6}}{(q^{4n+6};q^6)_{(n-5)/6}},\\[5pt]
\frac{(q^4;q^6)_k}{(q^{6-4n};q^6)_k} &=\frac{(q^{6-4n+6k};q^6)_{(2n-1)/3}}{(q^{6-4n};q^6)_{(2n-1)/3}}.
\end{align*}
It follows that
\begin{align}
&\sum_{k=0}^{n-1}\frac{(q^{1-5n}, q^{5n+1}, q^4;q^6)_k q^{6k}}{(q^{6-4n}, q^{4n+6}, q^6;q^6)_k} \notag\\[5pt]
&\quad=\sum_{k=0}^{n-1}(-1)^k {(5n-1)/6\brack k}_{q^6} q^{6{(5n-1)/6-k\choose 2}-6{(5n-1)/6\choose 2}} \frac{(q^{4n+6k+6};q^6)_{(n-5)/6}(q^{6-4n+6k};q^6)_{(2n-1)/3} }
{(q^{4n+6};q^6)_{(n-5)/6}(q^{6-4n};q^6)_{(2n-1)/3} }. \label{eq:sum}
\end{align}
Since $(q^{4n+6k+6};q^6)_{(n-5)/6}(q^{6-4n+6k};q^6)_{(2n-1)/3}$ is a polynomial in $q^{6k}$ of degree $(5n-7)/6<(5n-1)/6$, by the identity \eqref{eq:qbino},
we see that the right-hand side of \eqref{eq:sum} vanishes. This proves that the left-hand side of \eqref{eq:6-5} is equal to $0$ for $a=q^{-n}$ or $a=q^n$.
That is, the congruence \eqref{eq:6-5} holds. Finally, letting $a\to 1$ in \eqref{eq:6-5}, we are led to \eqref{main-6-5}.
Similarly we can prove \eqref{main-6-1}. Here we merely give its parametric generalization:
\begin{align*}
\sum_{k=0}^{n-1}\frac{(a^5/q, q^{-1}/a^5, q^{-4};q^6)_k q^{6k}}{(a^4q^6, q^6/a^4, q^6;q^6)_k}
\equiv 0 \pmod{(1-aq^n)(a-q^n)}\quad\text{if $n\equiv 1\pmod{6}$}.
\end{align*}
\section{More congruences modulo $\Phi_n(q)^2$}
It seems that there are many more similar congruences modulo $\Phi_n(q)^2$. Here we give some such results.
\begin{thm}
Let $n$ be a positive integer. Then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q,q,q^7;q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 2,8\pmod{9}$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(q^2,q^2,q^5;q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 4,7\pmod{9}$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(q^4,q^4,q;q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 5,8\pmod{9}$.}
\end{align*}
\end{thm}
\begin{proof}The proof is similar to that of Theorem \ref{thm-1}. Here we just give the parametric generalizations of
these congruences. Modulo $(1-aq^n)(a-q^n)$, for $r=1,2,4$, we have
\begin{align*}
\sum_{k=0}^{n-1}\frac{(a^5q^r,q^r/a^5,q^{9-2r};q^9)_k q^{9k}}{(aq^9,q^9/a,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv 2r\pmod{9}$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(a^8q^r,q^r/a^8,q^{9-2r};q^9)_k q^{9k}}{(a^7q^9,q^9/a^7,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv -r\pmod{9}$.}
\end{align*}
\end{proof}
\begin{thm}
Let $n>9$ be a positive integer. Then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q^{-1},q^{-1},q^{-7};q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 5\pmod{9}$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(q^{-2},q^{-2},q^{-5};q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 2,5\pmod{9}$,}\\[5pt]
\sum_{k=0}^{n-1}\frac{(q^{-4},q^{-4},q^{-1};q^9)_k q^{9k}}{(q^9;q^9)_k^3}
&\equiv 0 \pmod{\Phi_n(q)^2}\quad\text{if $n\equiv 2\pmod{9}$.}
\end{align*}
\end{thm}
\begin{proof}This time the parametric generalizations of
these congruences are as follows. Modulo $(1-aq^n)(a-q^n)$,
\begin{align*}
\sum_{k=0}^{n-1}\frac{(a^7q^{-1},q^{-1}/a^7,q^{-7};q^9)_k q^{9k}}{(a^5q^9,q^9/a^5,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv 5\pmod{9}$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(a^8q^{-2},q^{-2}/a^8,q^{-5};q^9)_k q^{9k}}{(a^7q^9,q^9/a^7,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv 2\pmod{9}$ and $n>9$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(a^5q^{-2},q^{-2}/a^5,q^{-5};q^9)_k q^{9k}}{(aq^9,q^9/a,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv 5\pmod{9}$ and $n>9$,} \\[5pt]
\sum_{k=0}^{n-1}\frac{(a^7q^{-4},q^{-4}/a^7,q^{-1};q^9)_k q^{9k}}{(a^5q^9,q^9/a^5,q^9;q^9)_k}
&\equiv 0 \quad\text{if $n\equiv 2\pmod{9}$ and $n>9$.}
\end{align*}
\end{proof}
\section{Concluding remarks and open problems}
In this section we propose several conjectures for further study. Unlike before, there is no symmetry in the following two conjectures.
It seems difficult to find the corresponding parametric generalizations.
\begin{conj}\label{conj-3}
Let $n$ be a positive integer with $n\equiv 4,7\pmod{9}$. Then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q,q^2,q^6;q^9)_k q^{9k}}{(q^9;q^9)_k^3}
\equiv 0 \pmod{\Phi_n(q)^2}.
\end{align*}
\end{conj}
\begin{conj}\label{conj-4}
Let $n$ be a positive integer with $n\equiv 5\pmod{9}$. Then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q^{-1},q^{-2},q^{-6};q^9)_k q^{9k}}{(q^9;q^9)_k^3}
\equiv 0 \pmod{\Phi_n(q)^2}.
\end{align*}
\end{conj}
There are many similar conjectures.
Let $n>1$ be a positive integer. For any rational number $x$ whose denominator is coprime with $n$, let $\langle x\rangle_n$ denote the least non-negative residue of $x$ modulo $n$.
We would like to propose the following two conjectures.
\begin{conj}\label{conj-first}
Let $n$ be a positive integer with $n\equiv 2\pmod{3}$, and let $m$ be a positive integer with $\gcd(m,n)=1$. If $r$ is an integer satisfying
$0<\langle \frac{r}{3m}\rangle_n\leqslant\frac{2n-1}{3}$, then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q^{m};q^{3m})_k (q^{r};q^{3m})_k (q^{2m-r};q^{3m})_k q^{3mk}}{(q^{3m};q^{3m})_k^3}
\equiv 0 \pmod{\Phi_n(q)^2}.
\end{align*}
\end{conj}
\begin{conj}\label{conj-second}
Let $n>1$ be a positive integer with $n\equiv 1\pmod{3}$, and let $m$ be a positive integer with $\gcd(m,n)=1$. If $r$ is an integer satisfying
$0<\langle \frac{r}{3m}\rangle_n\leqslant\frac{2n-5}{3}$, then
\begin{align*}
\sum_{k=0}^{n-1}\frac{(q^{-m};q^{3m})_k (q^{r};q^{3m})_k (q^{-2m-r};q^{3m})_k q^{3mk}}{(q^{3m};q^{3m})_k^3}
\equiv 0 \pmod{\Phi_n(q)^2}.
\end{align*}
\end{conj}
Letting $d=3$, $r=1$ and $q\to q^{m}$ in Theorem \ref{main-1}, and noticing that $\Phi_n(q)$ is a factor of $\Phi_n(q^m)$ for $\gcd(m,n)=1$,
we see that Conjectures \ref{conj-first} and \ref{conj-second}
are true for $r=m$ and $r=-m$, respectively.
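Beyond these proven cases, the conjectures can be spot-checked symbolically in the same way as above. As an illustration, the following sketch (assuming Python with sympy; the choice $n=5$, $m=1$, $r=4$, which satisfies $\langle\frac{4}{3}\rangle_5=3\leqslant\frac{2n-1}{3}$, is ours) tests Conjecture \ref{conj-first}:

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, p, k):
    # (a; p)_k = prod_{j=0}^{k-1} (1 - a p^j)
    r = sp.Integer(1)
    for j in range(k):
        r *= 1 - a * p**j
    return r

def conj_first_holds(n, m, r):
    """Test the congruence of the first conjecture above mod Phi_n(q)^2."""
    s = sum(qpoch(q**m, q**(3*m), k) * qpoch(q**r, q**(3*m), k)
            * qpoch(q**(2*m - r), q**(3*m), k) * q**(3*m*k)
            / qpoch(q**(3*m), q**(3*m), k)**3 for k in range(n))
    num, den = sp.fraction(sp.cancel(s))
    phi = sp.expand(sp.cyclotomic_poly(n, q))
    assert sp.rem(sp.expand(den), phi, q) != 0  # denominator coprime to Phi_n
    return sp.rem(sp.expand(num), sp.expand(phi**2), q) == 0

assert conj_first_holds(5, 1, 1)   # the proven case r = m
assert conj_first_holds(5, 1, 4)   # a genuinely conjectural case
```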
\vskip 5mm \noindent{\bf Acknowledgments.} The author was partially
supported by the National Natural Science Foundation of China (grant 11771175).
\section{Introduction}
Under suitable circumstances neutrinos can
oscillate in the presence of matter \cite{W} or
undergo resonant conversions \cite{valle} even when
they are strictly massless. In some models even
unmixed neutrinos can resonantly convert in
matter \cite{ER,GMP}. Massless-neutrino resonant conversions
are distinct from the usual MSW conversions \cite{W,MS} in that
they are independent of neutrino energy and affect
simultaneously neutrinos as well as anti-neutrinos.
For this reason this mechanism is expected to play an
important role in supernova physics \cite{valle,AS,NQRV}.
Moreover, in some of these models with flavour changing
neutrino neutral current (FCNC) interactions with matter
constituents it has been suggested that, for a certain
range of the corresponding parameters, they may account for
the observed deficit of solar neutrinos \cite{ER,GMP,BPW}.
The required ingredients can naturally emerge in the context
of various models beyond the standard model \cite{beyond}.
In particular, in this paper we consider this type of
interactions mediated by the scalar partners of quarks
and leptons in supersymmetric extensions of the standard
model with explicitly broken $R$ parity \cite{SW,rparity}.
The presence of $R$ parity breaking interactions
induces resonant neutrino conversions of the type
${\nu}_e \leftrightarrow {\nu}_\alpha$
as well as $\bar{\nu}_e \leftrightarrow \bar{\nu}_\alpha$.
Such conversions have important implications for
the supernova $r$-process nucleosynthesis \cite{QFMMW}
as well as the observed $\bar{\nu}_e$ energy
spectra from SN1987a \cite{old,SSB,JNR}.
In a recent work \cite{NQRV}, we have investigated
the constraints on massless neutrino resonant conversions
that follow from supernova considerations. In the present
paper we apply the same considerations in order to constrain
models with explicit $R$ parity violating supersymmetric
interactions which can effectively induce resonant
conversions even when neutrino masses are neglected.
We also suggest that resonant massless-neutrino
conversion may play a positive role in supernova shock reheating.
In addition, we generalize this approach in order to
include the possibility of non-zero neutrino masses.
These are typically expected to arise in these models
and could help to explain present observations.
We derive the corresponding constraints on flavour
changing neutral current couplings generated
by explicit $R$ parity violating interactions.
The paper is structured as follows.
In Sect. 2 we briefly present the form of the FCNC and flavour
diagonal neutral current (FDNC) interactions emerging from the
$R$ parity violating terms and the new effective neutrino
evolution Hamiltonian in matter. In particular we consider
two possible scenarios:
\begin{enumerate}
\item massless and unmixed neutrinos
($\delta m^2 =0, \sin 2\theta=0$) with
FCNC as well as {\it non standard} FDNC
interactions of neutrinos with matter;
\item massive neutrinos ($\delta m^2 \neq 0$) assuming negligible
mixing in vacuum ($\sin 2\theta=0$), but with FCNC interactions.
\end{enumerate}
Sect. 3 is devoted to a discussion of resonant massless-neutrino
conversions for supernova neutrino detection and $r$-process
nucleosynthesis. We show how the observed $\bar\nu_e$ energy
spectra from SN1987a and the supernova $r$-process nucleosynthesis
place important restrictions on the parameters of $R$ parity
violating models. In sect. 4 we discuss the second scenario
above and derive the corresponding restrictions. In sect. 5
we briefly suggest resonant massless-neutrino
conversion as a way to power supernova shock reheating.
Finally, we summarize our results and conclude in Sect. 6.
\section{The MSW effect with FCNC interactions}
$R$ parity is a quantum number which is +1 for all standard
particles and -1 for the super partners. It is directly related
to the baryon ($B$) and lepton ($L$) number as
$R= (-1) ^{3B +L + 2S}$, where $S$ is the particle spin.
In the Minimal Supersymmetric Standard Model (MSSM) \cite{mssm}
the $R$ discrete symmetry is imposed to enforce the
$L$ and $B$ number conservation
and no tree-level flavour changing interactions exist.
However, no fundamental principle precludes the possibility
of violating these symmetries \cite{SW,rparity}.
Within the particle content of the MSSM
$R$ parity can be broken explicitly by renormalizable
(and hence {\sl a priori} unsuppressed) operators.
The following extra $L$ violating couplings
in the superpotential are directly relevant for neutrino
propagation through matter:
\begin{eqnarray}
\label{lepton}
\lambda_{ijk} L_i L_j E^c_k \, \\
\lambda'_{ijk} L_i Q_j D^c_k
\label{lq}
\end{eqnarray}
where $L, Q, E^c$ and $D^c$ are (chiral) superfields which
contain the usual lepton
and quark $SU(2)$ doublets and singlets, respectively, and $i,j,k$ are
generation indices.
In what follows we focus only on the second term, \eq{lq},
because the first is much more constrained by experimental data.
Note that the simultaneous presence of the
$\lambda'' U^c U^c D^c$ and $\lambda' L Q D^c$-type couplings
is very strongly
constrained ($\lambda',\lambda'' \leq 10^{-10}$)
from non-observation of proton decay \cite{BGH,VS}.
However, the constraints on $\lambda'$ (see below) are rather weak
in the absence of the $B$ violating $\lambda''_{ijk}U^c_i U^c_jD^c_k$
term. We will adopt this choice throughout this paper.
The couplings in \eq{lq}
at low energy ($< 100$GeV) give rise to the following
four-fermion effective Lagrangian for neutrino interactions with the $d$ quark
\footnote{
For simplicity
we omit in the $\lambda'$-type Yukawa couplings
the terms $\lambda'_{i1k} (\bar{\nu_{i L}})^c d_{1 L} (\tilde{d}_{k R})^*$
($i,k=1,2,3$). However, the coupling constants
$\lambda'_{i1k}$ are much more
constrained than $\lambda'_{ik1}$ \cite{BGH}.}:
\begin{equation}
\label{effec}
L_{eff} = - 2\sqrt{2} G_F \sum_{\alpha,\beta}
\xi_{\alpha\beta} \: \bar{\nu}_{L\alpha} \gamma^{\mu} \nu_{L\beta} \:
\bar{d}_{R}\gamma_{\mu}{d}_{R}\:\:\:\alpha,\beta = e,\mu, \tau \, ,
\end{equation}
where the parameters $\xi_{\alpha\beta}$ represent the strength of the
effective interactions normalized to the Fermi constant $G_F$.
For our purpose
we consider explicitly the following {\it non standard} FDNC couplings:
\begin{eqnarray}
\xi_{ee} &= &\sum_j \frac{|\lambda'_{1j1}|^2}{4 \sqrt{2} G_F m^2_
{\tilde{q}_{jL}}}
\, ,\\
\xi_{\mu\mu} &= &\sum_j \frac{|\lambda'_{2j1}|^2}
{4 \sqrt{2} G_F m^2_{\tilde{q}_{j L}}} \, ,\\
\xi_{\tau\tau}& = &\sum_j \frac{|\lambda'_{3j1}|^2}
{4 \sqrt{2} G_F m^2_{\tilde{q}_{j L}} }\, ,\,\,\, j = 1,2,3\, ,
\end{eqnarray}
and the FCNC ones:
\begin{eqnarray}
\label{fcnc}
\xi_{e\mu}& = &\sum_j \frac{\lambda'_{1j1}\lambda'_{2j1}}
{4 \sqrt{2} G_F m^2_{\tilde{q}_{j L}}} \, ,\\
\xi_{e\tau}& = &\sum_j \frac{\lambda'_{1j1}\lambda'_{3j1}}
{4 \sqrt{2} G_F m^2_{\tilde{q}_{j L}}}\, ,
\,\,\,\, j = 1,2,3\, ,
\end{eqnarray}
where $m_{\tilde{q}_{j L}}$
are the masses of the exchanged squarks and
$j = 1,2,3$ denotes $\tilde{d}_L, \tilde{s}_L, \tilde{b}_L$, respectively.
These effective neutral current interactions contribute to the neutrino
scattering off $d$ quarks in matter, providing new
flavour conserving as well as flavour changing terms
for the matter potentials of neutrinos.
The phenomenological implications of the $R$ parity violating
couplings have been extensively studied and constraints on the
coupling constants $\lambda'$ from low-energy processes (charged
current universality, $e-\mu-\tau$ universality,
$\nu_\mu-e$ scattering, atomic parity violation)
have been obtained \cite{BGH}. Recently, new bounds have been
derived from LEP electroweak observables to constrain $\lambda'_{i3k}$
(for all $i,k$) and from $D$ decays to constrain $\lambda'_{12k}$
and $\lambda'_{22k}$ as well as from $\tau$ decays to restrict
$\lambda'_{31k}$ (for all $k$) (see \cite{GB} and refs. therein).
In summary, the most stringent bounds on
the coupling constants entering our study
are the following
\footnote{In ref. \cite{KP} stringent bounds,
$\lambda'_{113}\lambda'_{131}\leq 1.1\times 10^{-7}$,
$\lambda'_{112}\lambda'_{121}\leq 3.2\times 10^{-5}$,
$\lambda'^2_{111} \leq 6.4\times 10^{-5}$,
are obtained from the non-observation of $0\nu\beta\beta$ decay
for squark masses of 100 GeV. However, these limits suffer from
some theoretical uncertainties on nuclear matrix elements.}
(at 1 $\sigma$ level):
\begin{eqnarray}
\label{boun}
\lambda'_{12k} &\leq 0.29\,,& \: \: \lambda'_{13k} \leq 0.26, \\\nonumber
\lambda'_{22k} &\leq 0.18\,,&\, \: \: \lambda'_{23k} \leq 0.44,\\ \nonumber
\lambda'_{33k} &\leq 0.26\,,&\, \: \:\hfil\\ \nonumber
\lambda'_{i1k} &\leq 0.05\,,&\,\,\, (i=1,2,\: k= 1,2,3) \,
\end{eqnarray}
normalized to a 100 GeV {\sl reference} squark mass.
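To set the scale of the resulting effective couplings, one can evaluate $\xi = \lambda'^2/(4\sqrt{2}\,G_F m^2_{\tilde q})$ directly. A minimal numerical sketch (Python assumed; $G_F$ is the standard Fermi constant and the function name is ours):

```python
from math import sqrt

G_F = 1.16637e-5  # Fermi constant in GeV^-2

def xi(lam, m_squark=100.0):
    """xi = lam'^2 / (4 sqrt(2) G_F m_squark^2), with m_squark in GeV."""
    return lam**2 / (4 * sqrt(2) * G_F * m_squark**2)

# the bound lambda'_{12k} <= 0.29 allows xi up to about 0.13 for a
# 100 GeV squark; note that xi scales as 1/m_squark^2
print(round(xi(0.29), 3))   # -> 0.127
```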
The most general Schroedinger neutrino evolution equation
in matter takes the form
\begin{equation}
{i{d \over dr}\left(\matrix{
\nu_e \cr\ \nu_x \cr }\right)=
\left(\matrix{
{H}_{e}
& {H}_{ex} \cr
{H}_{ex}
& {H}_{x} \cr}
\right)
\left(\matrix{
\nu_e \cr\ \nu_x \cr}\right) }\,, \,\,\,x =\mu (\tau)
\label{evolution1}
\end{equation}
The entries of the Hamiltonian read
\begin{equation}
\label{hamil}
H_e = V_e - \frac{\delta m^2}{2E} \cos 2\theta \, , \,\,\, \,\, \,\,
H_{x} = V_x \, , \,\,\, \, \,\,
H_{ex} = V_{ex} +\frac{\delta m^2}{4E} \sin 2\theta
\end{equation}
where $E$ is the neutrino energy,
$\delta m^2$ is the mass squared difference, $\theta$ is
the neutrino mixing angle in vacuum and $V_e, V_x$ and
$V_{ex}$ are the effective matter potentials as given by
\begin{eqnarray}
\label{poten}
V_e &=& \frac{\sqrt{2} G_F \rho}{m_p}
\Bigl[\frac{3Y_e - 1}{2} + \xi_{ee} (2-Y_e)\Bigr]\, ,\\
V_x &= &\frac{\sqrt{2} G_F \rho}{m_p}
\Bigl[\frac{Y_e-1}{2} + \xi_{xx} (2-Y_e)\Bigr]\, ,\\
V_{ex}& = &\frac{\sqrt{2} G_F \rho}{m_p} \xi_{ex} (2-Y_e) \, .
\end{eqnarray}
Here $m_p$ is the nucleon mass, $\rho$ is the matter density,
$Y_e$ is the electron number per nucleon and charge neutrality is
assumed
\footnote{Here the $d$ quark number density $N_d$ in the
medium is understood to be expressed as $N_e + 2N_n$.}.
For the corresponding anti-neutrino states the sign of matter potentials
is opposite.
Let us note that the matter potential induced by the {\it non standard}
FDNC interactions plays the role of an extra effective mass,
whereas those induced by the FCNC couplings play the role of a new
{\it mixing} term.
As a result, in principle even for strictly massless neutrinos
($\delta m^2=0$) and vanishing $\theta$, these new matter potentials
make the resonant neutrino conversion in medium possible
\cite{W,ER,GMP}.
Let us now apply the above to neutrino conversions in a supernova,
discussing separately the cases of $\delta m^2 =0$ and $\delta m^2 \neq 0$.
\section{Massless neutrino resonant conversion in supernovae}
We now turn to the application of the previous
formalism to resonant neutrino conversion in supernovae.
By equating the diagonal terms in the Hamiltonian matrix of
\eq{hamil} one can infer, for the case of massless neutrinos,
that the resonance condition is given by
\begin{equation}
\label{rc}
\xi'\equiv \xi_{xx} -\xi_{ee} = \frac{Y_e}{2-Y_e}
\end{equation}
which is clearly energy independent.
Here we should note that a positive value of $\xi'$ is necessary for
the above equation to hold.
It is important to note that
the same resonance condition holds also for the anti-neutrino system
$\bar{\nu}_e \leftrightarrow \bar{\nu}_x$. As a result, {\sl both}
neutrinos {\sl and} anti-neutrinos can simultaneously undergo
resonant conversions as discussed in ref. \cite{valle}.
This can therefore affect supernova neutrino emission in an important way.
The neutrino oscillation length $L_m$ and the
mixing angle $\theta_m$ in matter are given by
\begin{eqnarray}
L_m & = &
\frac{\pi\sin 2\theta_m}{V_{ex}}\, , \\
\label{length}
\label{angle}
\tan 2\theta_m &
= & \frac{ 2 \xi_{ex} (2-Y_e)}{Y_e -\xi' (2-Y_e)}\, ,
\end{eqnarray}
respectively.
In our subsequent discussion, we will employ the simple Landau-Zener
approximation \cite{Landau,HPD} to estimate the conversion
probability after the neutrinos cross the resonance. Under this
approximation, the probability for $\nu_e\leftrightarrow\nu_x$
and $\bar\nu_e\leftrightarrow\bar\nu_x$ conversions is given by
\begin{eqnarray}
\label{LZ}
P & = & 1 -
\exp\Biggl(-\dfrac{\pi^2}{2}\dfrac{\delta r}
{L_{m}^{\rm res}} \Biggr) \nonumber \\
& \approx & 1 - \exp\left[
-5 \times
10^4 \times\left(\dfrac{\rho_{\rm res}}{10^{12} {\mbox{g/cm}^3}}\right)
\Biggl(\dfrac{h}{\mbox{cm}} \Biggr)
\dfrac{\xi^2_{ex}}{\xi'}
\right], \nonumber \\
\delta r & = & 4 h \dfrac{\xi_{ex}}{\xi'}, \,\,\,\,\,\,\,\,
h \equiv \left| \frac{\mbox {d}\ln Y_e}{\mbox{d}r}\right|^{-1}_{\rm res},
\end{eqnarray}
where $L_{m}^{\rm res}$ is the neutrino oscillation length
at resonance and $\rho_{\rm res}$ is the corresponding matter density.
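For orientation, the Landau-Zener estimate in \eq{LZ} is easy to evaluate numerically. In the sketch below (Python assumed) the resonance density and scale height are illustrative round numbers, not values taken from Wilson's model:

```python
from math import exp

def conversion_prob(xi_ex, xi_prime, rho_res_gcc, h_cm):
    """P = 1 - exp[-5e4 (rho_res/1e12 g/cc) (h/cm) xi_ex^2/xi'] of eq. (LZ)."""
    return 1.0 - exp(-5e4 * (rho_res_gcc / 1e12) * h_cm * xi_ex**2 / xi_prime)

# with xi' ~ 1e-2, rho_res ~ 1e12 g/cm^3 and an assumed h ~ 1e5 cm, an
# off-diagonal coupling as small as xi_ex ~ 1e-6 already gives P ~ 0.4
print(round(conversion_prob(1e-6, 1e-2, 1e12, 1e5), 2))   # -> 0.39
```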
Let us briefly review the supernova process we are going to consider.
A few seconds after the bounce, the electron fraction $Y_e$
is very low just above the neutrinosphere, $Y_e \sim 10^{-2}$,
while at large radii it saturates to an asymptotic value
$\sim 0.4$ (see Sect.~4.1 in \cite{NQRV}).
This implies, from \eq{rc}, that the resonance condition
requires lepton universality to be violated at least at the 1\%
level, $\xi' \gsim 10^{-2}$,
which is not in contradiction with present bounds outlined in \eq{boun}.
To keep the discussion simple and more conservative, we consider,
for each flavour conversion ($\nu_e\rightarrow\nu_\mu$ or $\nu_e\rightarrow\nu_\tau$),
only the contribution due to the exchange of one left-handed $\tilde{q}$
at a time in the corresponding
effective couplings $\xi_{ee}, \xi_{xx}, \xi_{ex}$.
After the bounce of the core, all neutrinos, emitted from the
neutrinosphere, have approximately equal luminosities
but rather different energy spectra. Correspondingly, the
average neutrino energies satisfy the following hierarchy:
\begin{equation}
\label{hierarchy}
\langle E_{\nu_e} \rangle <\langle E_{\bar\nu_e}\rangle <
\langle E_{\nu_{\tau(\mu)}}\rangle
\approx\langle E_{\bar\nu_{\tau(\mu)}}\rangle.
\end{equation}
Typically, the average supernova neutrino energies are:
\begin{equation}
\label{average}
\langle E_{\nu_e}\rangle \approx 11\ \mbox{MeV},\ \langle E_{\bar\nu_e}\rangle
\approx 16\ \mbox{MeV},\ \langle E_{\nu_{\tau(\mu)}}\rangle \approx \langle
E_{\bar\nu_{\tau(\mu)}}\rangle\approx 25\ \mbox{MeV}.
\end{equation}
As a result, a considerable conversion $\bar{\nu}_e \leftrightarrow
\bar{\nu}_{\mu,\tau}$ leads to a permutation of the neutrino energy
spectra which would provide a high energy tail in the anti-neutrino
energy spectrum from the supernova SN1987a \cite{KA,IMB}.
Comparison with the SN1987A observations leads to an upper bound
for the transition probability $P$ close to 0.35 \cite{SSB}.
Following the same reasoning, we will constrain the effective FCNC
couplings that can arise in supersymmetric models with explicitly
broken R-parity. Using the density and $Y_e$ profiles from Wilson's
supernova model (see Fig. 1 in ref. \cite{NQRV}), we plot in Fig. 1
two contours of the conversion probability
in the ($(|\lambda'_{ij1}|^2-|\lambda'_{1j1}|^2),\:
\lambda'_{1j1} \lambda'_{ij1}$) parameter space ($i=2,3; j=1,2,3$).
Here the {\sl reference} squark mass has been chosen to be 100 GeV.
Should the squark mass be different the plot should be
appropriately re-scaled.
The solid line is for a conversion probability of $P \approx 0.5$,
and the dashed one is for $P \approx 0.35$. We see from the figure
that, provided the violation of universality induced by the new
diagonal interactions is sufficiently high that the resonant
conversions take place, i.e. if
$(|\lambda'_{ij1}|^2-|\lambda'_{1j1}|^2) \gsim 10^{-2}$
one can rule out
$\lambda'_{1j1} \lambda'_{ij1} \gsim 10^{-6} \div 10^{-4}$.
Note that this bound on $\lambda'_{1j1} \lambda'_{ij1}$ is about
three orders of magnitude stronger than the present experimental
one in \eq{boun}.
In addition, the region above the neutrinosphere is also
supposed to be the site for the synthesis of heavy elements (with
mass number $A > 70$) through $r$ processes \cite{Woosley}.
A necessary condition required for this to occur is $Y_e < 0.5$
in the nucleosynthesis region. The value of $Y_e$ is controlled
by the charged current reactions:
\begin{eqnarray}
\label{nu-n}
\nu_e+n & \rightleftharpoons & p+e^-,\\
\label{nu-p}
\bar\nu_e+p&\rightleftharpoons&n+e^+.
\end{eqnarray}
Roughly speaking, the rates $\Gamma_{\nu_e N}$
of the above reactions are proportional to the products
of the $\nu_e$ and $\bar{\nu}_e$ luminosities and average energies,
\begin{equation}
\label{rates}
\Gamma_{\nu N}\approx \phi_\nu\,\langle \sigma_{\nu N}\rangle
\propto {L_\nu\over\langle E_\nu\rangle}\langle E_\nu^2\rangle
\propto L_\nu\langle E_\nu\rangle \, ,
\end{equation}
where $\phi_\nu$ is the neutrino flux, $\sigma_{\nu N}\propto E_\nu^2$ is
the neutrino absorption cross section, and $\langle\ \rangle$ denotes
the averaging over the neutrino energy distribution. As a result,
the relevant expression for $Y_e$ turns out to be very simple:
\begin{equation}
\label{fourthye}
Y_e\approx {\Gamma_{\nu_en}\over\Gamma_{\bar\nu_ep}+\Gamma_{\nu_en}}
\approx {1\over 1+\langle E_{\bar\nu_e}\rangle/\langle E_{\nu_e}\rangle}.
\end{equation}
Using the average energies in \eq{average},
we obtain $Y_e\approx 0.41$, in good
agreement with the numerical supernova models.
However, in the presence of neutrino conversions, average energies
of $\bar\nu_e$ and/or $\nu_e$ can be affected and consequently the
value of $Y_e$ can deviate from the predicted one.
As a result, in the nucleosynthesis region $Y_e$ should
be replaced by
\begin{equation}
\label{34}
Y_e \approx {1\over 1+{\VEV{E_{\bar\nu_e}}_{\rm eff}}/
\VEV{E_{\nu_e}}_{\rm eff}},
\end{equation}
where
\begin{eqnarray}
\VEV{E_{\bar\nu_e}}_{\rm eff} \equiv \VEV{E_{\bar\nu_e}} (1-P) +
\VEV{E_{\bar\nu_\tau}} P, \\\nonumber
\VEV{E_{\nu_e}}_{\rm eff} \equiv \VEV{E_{\nu_e}} (1-P) +
\VEV{E_{\nu_\tau}} P.
\end{eqnarray}
Due to the {\sl simultaneous} occurrence of resonant
$\nu_e \leftrightarrow \nu_\tau$ and
$\bar{\nu}_e \leftrightarrow \bar{\nu}_\tau$ conversions,
there is a trend to equalize the average $\nu_e$ and $\bar\nu_e$
energies, and as a result, to increase $Y_e$ with respect to the
standard model case with no neutrino or anti-neutrino conversions.
For conversion probabilities of $P\approx0.15$, 0.3, and 0.8, we obtain
$Y_e\approx 0.43,$ 0.45, and 0.49. In Fig. 2, we present the contour lines
corresponding to these $Y_e$ values.
The dotted, dashed, and solid
lines in this figure are for $Y_e\approx 0.43$, 0.45, and 0.49, respectively.
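The quoted $Y_e$ values follow directly from \eq{34} with the average energies of \eq{average}; a minimal numerical sketch (Python assumed):

```python
E_nu_e, E_nubar_e, E_nu_tau = 11.0, 16.0, 25.0  # average energies in MeV

def Y_e(P):
    """Electron fraction of eq. (34) for conversion probability P."""
    E_bar_eff = E_nubar_e * (1 - P) + E_nu_tau * P
    E_eff = E_nu_e * (1 - P) + E_nu_tau * P
    return 1.0 / (1.0 + E_bar_eff / E_eff)

for P in (0.0, 0.15, 0.3, 0.8):
    print(P, round(Y_e(P), 2))   # -> 0.41, 0.43, 0.45, 0.49
```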
If we take $Y_e<0.45$ as a criterion for a successful $r$-process, then
$\lambda'_{131} \lambda'_{ij1} \gsim 10^{-6} \div 10^{-4}$
is excluded for $(|\lambda'_{ij1}|^2-|\lambda'_{131}|^2) \gsim 10^{-2}$.
This excluded region is similar to the previous one obtained
by considering the $\bar\nu_e$ energy spectra from SN1987a,
because the limits on the conversion probability are about the
same in both cases. However, we note that if the $r$-process indeed
occurs in supernovae, then the resulting limits on the effective FCNC
couplings are much less dependent on the predicted average neutrino energies
than the previous one. This is because the $r$-process argument relies
only on the ratio of the average neutrino energies [cf. \Eq{fourthye}].
A remark is in order. The parameter space we have explored in this
section is complementary to the one relevant for the solar
neutrino problem \cite{GMP,BPW}. Indeed, in the solar case much larger
values of the FDNC couplings $(|\lambda'_{331}|^2-|\lambda'_{131}|^2)
\sim 0.4\div 0.6$ are necessary to satisfy the resonance condition in
the inner solar core where $Y_e \sim 0.7$.
Certainly, the $\bar{\nu}_e$ energy spectrum consideration
could be used to exclude, at least partially, the resonant
massless neutrino conversion as a solution to the solar neutrino
problem
\footnote{Note that such solution is already disfavoured, since it
predicts an energy-independent neutrino suppression, contrary to what
is indicated by present solar neutrino observations.}, as
suggested in \cite{AS}. In that case the value of the
effective FDNC couplings should be much larger in order to
allow the resonant neutrino conversion to take place, i.e.
($|\lambda'_{331}|^2-|\lambda'_{131}|^2) \geq 0.5$.
This would correspond to massless resonant neutrino conversion
very far from the neutrinosphere, unlike the case studied in the
present paper.
On the other hand, no complementary information can be obtained
from the r-process nucleosynthesis argument, since this requires
neutrinos to undergo the resonance just above the neutrinosphere.
\section{Massive Neutrino Conversion in Supernovae}
In models with explicitly broken R parity
neutrino masses are induced radiatively at the one-loop
level due to the exchange of down-type quarks and
squarks \cite{beyond}. A simple estimate of the corresponding
diagram shown in Fig. 3 leads to a typical neutrino mass parameter
$\lambda'^2 m_d^2 /m_{SUSY}$. For reasonable choices
of $m_{SUSY}$ and $\lambda'$ (see below) one can see that
the resulting neutrino masses could lie in the eV range
for which they could play an important role in neutrino
propagation in the supernova environment. Moreover, such
mass could account for the hot dark matter in the Universe.
In this section we include the effect of
non-zero $\delta m^2$ on our previous evolution Hamiltonian
of \eq{hamil}.
Let us assume, for definiteness, that the vacuum mixing angle
characterizing the
two-neutrino system is negligible and, moreover, that one of
the two neutrino species is much heavier than the other.
In our description we will neglect the {\sl non standard}
FDNC contributions in the Hamiltonian matrix \eq{hamil}, thereby
evading the constraints given in \eq{boun}.
In contrast, the FCNCs generated by the R-parity breaking
interactions provide the required mixing term in the
evolution Hamiltonian, through the matter potential $V_{ex}$.
In this case the resonance condition reduces to the familiar
one for the MSW effect with vanishing mixing, i.e.
\begin{equation}
\frac{\delta m^2}{2E} = \frac{\sqrt2 G_F \rho}{m_p} Y_e
\end{equation}
A simple numerical check shows that the range of neutrino masses
for which the corresponding resonant conversions occur in the
supernova environment includes masses of a few eV, precisely the
scale required for one of the two neutrino species, \hbox{$\nu_e$ } or
\hbox{$\nu_\tau$ }, to play a role as hot dark matter \cite{cobe2}.
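The relevant mass scale can be made explicit by solving the resonance condition for the density. The sketch below (Python assumed; the unit conversions are standard, while $\delta m^2 = 1\,$eV$^2$ and $E = 10\,$MeV are illustrative inputs) gives $\rho\, Y_e$ at resonance:

```python
from math import sqrt

G_F = 1.16637e-5        # Fermi constant, GeV^-2
hbarc_cm = 1.97327e-14  # (1 GeV)^-1 in cm
m_p_g = 1.6726e-24      # nucleon mass in grams

def rho_Ye_res(dm2_eV2, E_MeV):
    """rho * Y_e (g/cm^3) solving delta m^2/(2E) = sqrt(2) G_F n_e."""
    dm2 = dm2_eV2 * 1e-18                 # eV^2 -> GeV^2
    E = E_MeV * 1e-3                      # MeV -> GeV
    n_e = dm2 / (2 * sqrt(2) * G_F * E)   # resonance electron density, GeV^3
    return n_e / hbarc_cm**3 * m_p_g      # convert to g/cm^3 (times Y_e)

# delta m^2 = 1 eV^2 at E = 10 MeV resonates where rho*Y_e ~ 7e5 g/cm^3,
# i.e. well inside the supernova envelope
print(f"{rho_Ye_res(1.0, 10.0):.1e}")   # -> 6.6e+05
```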
The neutrino oscillation length is still given by \eq{length},
where the mixing angle is now given by:
\begin{equation}
\label{angle2}
\tan 2\theta_m
= \frac{ 2 \xi_{ex}\rho(2-Y_e)}
{\rho Y_e -\delta m^2 m_p /(2\sqrt{2}G_FE)}\, .
\end{equation}
Therefore, the transition probability is given by
\begin{eqnarray}
\label{LZ2}
P & = & 1 -
\exp\Biggl(-\dfrac{\pi^2}{2}\dfrac{\delta r}
{L_{m}^{\rm res}} \Biggr) \nonumber \\
& \approx & 1 - \exp\left[
-1.6 \times 10^{-2} \times
\left(\dfrac{\delta m^2}{1{\mbox{eV}^2}}\right)
\left(\dfrac{10\mbox{MeV}}{E}\right)
\left(\dfrac{2-Y_e}{Y_e}\right)^2_{res}
\Biggl(\dfrac{h}{\mbox{cm}} \Biggr)
\xi^2_{ex}
\right], \nonumber \\
\delta r & = & 4 h \xi_{ex}\left(\dfrac{2-Y_e}{Y_e}\right),
\,\,\,\,\,\,\,\,
h \equiv
\left| \frac{\mbox {d}\ln (\rho Y_e)}{\mbox{d}r}\right|^{-1}_{\rm res},
\end{eqnarray}
This way we will constrain the $(\delta m^2,
\lambda'_{1j1}\lambda'_{ij1})$ parameter space irrespective of
any universality violation.
Let us note that for a given sign
\footnote{Here we set $\delta m^2 > 0$ for $m_{\nu_x} > m_{\nu_e}$.}
of $\delta m^2 $ only one kind of resonant conversion,
either $\nu_e\leftrightarrow\nu_x$ (for $\delta m^2>0$), or
$\bar{\nu}_e\leftrightarrow\bar{\nu}_x$ (for $\delta m^2<0$), can occur.
Therefore to discuss $\bar{\nu}_e$ energy spectra distortion
from SN1987a we have to assume $\delta m^2<0$.
The upper bound on the $\bar{\nu}_e$ mass from $\beta$ decay experiments,
$m_{\nu_e} < 4.35$ eV (95\% C.L.) \cite{beta}, cuts off the relevant
$\delta m^2$ range in Fig. 3. One sees from this figure that for
$\delta m^2 \lsim 1 \div 20$eV$^2$
the FCNC couplings are restricted to be $\lsim 10^{-3}$
irrespective of any lepton non-universality. From this point of view
the limits derived
in this section are of more general validity than those of section 3.
For this mass hierarchy the resonant neutrino conversion would not
conflict with the nucleosynthesis process for any choice of
parameters, and therefore no constraint can be obtained.
On the other hand, for $\delta m^2>0$ one expects that
$\nu_e \leftrightarrow \nu_{x}$ transitions will occur
and they can affect the nucleosynthesis process.
In contrast, in this case the $\bar{\nu}_e$ spectra
would be unaffected. In Fig. 4 we plot the iso-contours
for different values of the electron abundance $Y_e$.
One can see that in the interesting range
$\delta m^2 \sim 1 \div 20 $eV$^2$,
favoured by the hot plus cold dark matter scenario \cite{cobe2},
we can rule out the FCNC couplings $\lambda'_{1j1} \lambda'_{ij1}$
at the level of few $10^{-3}$.
\section{Resonant Massless Neutrino Conversion and Supernova Shock Re-heating}
We would like to briefly address an interesting open problem
related to the energetics of the supernova explosion.
It is now generally accepted that the prompt shock stalls at a
radius $\sim 100$ kilometres, due to photo-dissociation,
neutrino losses, and accretion \cite{burrows}. The main aspect
of a supernova explosion is the transfer of energy from the core
to the mantle. The mantle is less bound than the core, whose
binding energy can grow during the delay to
explosion. The core is the protoneutron star that will
evolve due to neutrino cooling and deleptonization over
many seconds. Bethe \& Wilson \cite{bw85} showed how neutrino
heating of the accreted material near the shock could lead to an
explosion. It seems compelling that neutrinos mediate this energy
transfer and are the agents of explosion \cite{burrows}.
If neutrinos have only standard model interactions the energy they
carry seems insufficient to re-energise the shock material.
It has been argued that the occurrence of $\nu_e \rightarrow \nu_{\mu,\tau}$
MSW neutrino conversions behind the shock would increase the energy
deposited by $\nu$'s. This is due to the fact that the average energy
of $\nu_{\mu,\tau}$ is about twice as large as that of $\nu_{e}$.
The capture processes in \eq{nu-n} and \eq{nu-p} are mostly responsible
for the energy deposit.
Our scenario is rather distinct from the MSW effect.
Unlike in the MSW case, the simultaneous $\nu_e \rightarrow \nu_{\mu,\tau}$ and
$\bar\nu_e \rightarrow \bar\nu_{\mu,\tau}$ conversions can power
both reactions \eq{nu-n} and \eq{nu-p} and as a result the effect
may be larger than for the standard MSW or resonant spin-flavour
precession \cite{FMMW,ALPS}.
We adopt the argument given by Fuller {\it et al.} in \cite{FMMW}
for providing the total heating rate by $\nu_e$ and $\bar{\nu}_e$.
Qualitatively, the heating rate $\dot{\epsilon}$ is just the product
$\vev{E} \Gamma_{\nu N} Y_N$ (see \eq{rates}), namely
\begin{equation}
\dot{\epsilon} \approx L_{\nu} \biggl(Y_n \langle E_{\nu_e}\rangle^2 +
Y_p \langle E_{\bar{\nu}_e}\rangle^2\biggr)
\end{equation}
In the presence of complete resonant conversions $\nu_{\mu}\rightarrow \nu_e$ and
$\bar{\nu}_{\mu}\rightarrow \bar{\nu}_e$ the rate can be increased by the amount
\begin{equation}
\label{ratio}
\frac{\dot{\epsilon}'}{\dot{\epsilon}} \approx
\frac{Y_n \langle E_{\nu_\tau} \rangle^2 +
Y_p \langle E_{\bar{\nu}_\tau}\rangle^2}
{Y_n \langle E_{\nu_e} \rangle^2 +
Y_p \langle E_{\bar{\nu}_e}\rangle^2} =
\biggl(\frac{\langle E_{\nu_\tau} \rangle}
{\langle E_{\nu_e} \rangle}\biggr)^2 \sim 2\, ,
\end{equation}
where it is assumed $\langle E_{\nu_\tau} \rangle =
\langle E_{\bar{\nu}_\tau} \rangle \sim 21$ MeV
and $\langle E_{\nu_e} \rangle =
\langle E_{\bar{\nu}_e}\rangle \sim 15$ MeV as typical average energies for
the earlier epoch after the bounce, $t \gsim 0.1$ s.
At this epoch, the $Y_e$ value is somewhat larger than the value
$Y_e \sim 10^{-2}$ characteristic of the later epoch discussed above.
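With the quoted early-epoch averages the gain of \eq{ratio} is immediate; a one-line numerical check (Python assumed):

```python
E_nu_tau, E_nu_e = 21.0, 15.0   # average energies in MeV at t >~ 0.1 s
gain = (E_nu_tau / E_nu_e)**2   # heating-rate enhancement of eq. (ratio)
print(round(gain, 2))           # -> 1.96, i.e. roughly a factor of 2
```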
However, the present experimental bounds on
$\lambda'_{1j1}, \lambda'_{3j1}$ allow $\xi' \gsim 0.1$,
needed in order to have resonant neutrino conversions (see \eq{rc})
at $t\gsim 0.1$ s if $Y_e \sim 0.15$ at the neutrinosphere.
We can notice that in the usual $\nu_e \leftrightarrow \nu_x$
MSW conversion
\footnote{
Our estimates of the heating rates are somewhat qualitative
but they are sufficient for our discussion.} the gain in reheating
rate with respect to that of the standard model is
\cite{FMMW} ${\dot{\epsilon}'}/{\dot{\epsilon}} \approx 5/3$
whereas in the resonant spin-flavour precession scenario
\cite{ALPS} ${\dot{\epsilon}'}/{\dot{\epsilon}} \approx 4/3$.
Clearly, for the massive neutrino case we can also expect
analogous effects. Actually, depending on the sign of $\delta m^2$,
the scenario looks like the usual MSW picture.
\section{Conclusions}
Supersymmetry with explicitly broken $R$ parity
provides a variety of novel possibilities for neutrino
propagation properties in the presence of matter, even
when neutrinos are strictly massless.
The supernova matter background
seems to be one where resonant conversions of massless neutrinos
are most likely to play an important role.
We have re-examined the resonant massless-neutrino conversion
in a supernova medium in the presence of flavour changing neutral
current (FCNC) couplings present in explicit $R$ parity violating
supersymmetric models.
We have shown how the observed $\bar\nu_e$ energy spectra
from SN1987a and the supernova $r$-process nucleosynthesis argument
may provide very stringent constraints on such new FCNC interactions.
Typically they are much more stringent than those previously obtained
in the laboratory. From this point of view the SN1987a event provides
a strong sensitivity in restricting neutrino properties
in supersymmetric models with $R$ parity violation.
Our results here are summarized in Figs. 1 and 2.
We have also generalised the description of MSW massive-neutrino
conversions in supernovae so as to account for the presence of
explicit $R$-parity-violating FCNCs and determined the corresponding
restrictions in the limit of vanishing vacuum mixing. Our results are
summarized in Figs. 3 and 4. The relevant
neutrino mass scale could play an important role in connection with
hot dark matter. While the constraints we derive on $R$ parity
violating interactions are weaker than the ones obtained in the
massless limit, they are still stronger than those available
from laboratory experiments. More importantly, they
are of wider validity than those obtained in the
massless limit.
Last but not least, our discussion of massless-neutrino
conversions in supernovae should highlight the interest in
improving the present laboratory limits on universality
violation and flavour changing R-parity breaking interactions.
\centerline{\bf Acknowledgement}
\noindent
We thank Alexei Smirnov for fruitful discussions.
This work was supported by DGICYT under Grant PB92-0084,
by the Human Capital and Mobility Program under Grant
ERBCHBI CT-941592 (A. R.), and by a DGICYT postdoctoral
fellowship (H. N.).
\noindent
\section{Introduction}
Ultracold bosonic gases in optical lattices offer a supreme laboratory for the study of a wide range of phenomena in strongly correlated quantum systems. Due to the flexibility of the experimental methods available for preparing and controlling the relevant parameters, one can address fundamental questions of importance in condensed matter physics and quantum information processing.
In addition to the powerful experimental tools for preparation and control, several techniques for probing the quantum state of these many-body systems are available. The simplest experimental observable is the matter-wave interference pattern after release of the atoms from the lattice, which led to the observation of the superfluid (SF) to Mott-insulator (MI) phase transition \cite{GrMa02a,GrMa02b,StMo04,JaBr98,OoSt01,RoBu03a}. Another tool for characterizing the different quantum phases is the study of dynamical excitations. The excitation via temporal lattice modulations, corresponding to Bragg spectroscopy \cite{StMo04,StIn99}, provides an excellent probe for the observation of quantum phase transitions. In contrast to static perturbations generated, e.g., by tilting the optical lattice \cite{GrMa02a,BrDu04}, Bragg spectroscopy provides a very precise probe for the response at a well-defined excitation energy \cite{StMo04,KBM05,KoIu06,ClJa06}.
Recent experiments also explore the effects of irregular lattice potentials on the phase diagram. One can employ speckle patterns to create random lattice potentials \cite{LyFa05,FoFa05,ScDr05,ClVa05,FaLy06} or use a superposition of two standing-wave lattices of different wavelengths to generate a two-colour superlattice potential \cite{RoBu03b,RoBu03c}.
Recently, the response of Bose gases in time-dependent superlattices was investigated experimentally by Fallani \emph{et~al.}~\cite{FaLy06}. The spatial structure of the lattice potential gives rise to additional quantum phases, such as localized \cite{Gebh97} and (quasi) Bose-glass phases \cite{RoBu03c}. The systematic experimental investigation of the resulting phase diagram requires powerful probes that allow one to distinguish the different phases. In addition to the matter-wave interference pattern, the dynamical behaviour of the system provides important information.
The aim of this paper is to study the response of zero-temperature Bose gases in one- and two-colour superlattices and to identify possible experimental hallmarks of the individual quantum phases. To this end, we simulate the time evolution of the system in the presence of a time-dependent lattice potential in the framework of the Bose-Hubbard model. In section \ref{sec_theory} we formulate the time-dependent Bose-Hubbard Hamiltonian and discuss the numerical approach used for the time evolution, including a physically motivated basis truncation scheme. In preparation for the investigation of superlattice potentials, we study the dynamic response of Bose gases in regular lattice potentials in section \ref{sec_reg_lattice}. In addition to the full time-dependent simulations, we use a linear perturbation analysis \cite{KoIu06,ClJa06} to characterize and interpret the response of the system. In section \ref{sec_superlattice} we extend this discussion to two-colour superlattices and identify possible dynamical signatures of the Mott-insulator to quasi Bose-glass transition.
\section{Bose-Hubbard model and time evolution}\label{sec_theory}
\subsection{Bose-Hubbard model}\label{sec_bhm}
The Bose-Hubbard model has proven to be the appropriate framework for the description of zero-temperature Bose gases in optical lattices \cite{JaBr98}. It describes the full phase diagram ranging from weakly interacting superfluid phases to the regime of strong correlations, e.g. in the Mott-insulator phase. Assuming that the lattice potential is sufficiently deep, the single-particle Hilbert space can be restricted to the lowest energy band. A suitable single-particle basis is given by the localized Wannier functions of the lowest band. A state of $N$ bosons at $I$ lattice sites can be represented by an $I$-tuple of occupation numbers $\{n_1,\,n_2,\,\cdots,\,n_I\}$\, of the individual sites \cite{JaBr98,RoBu03a,RoBu03c}. The Fock states $\ket{\{n_1,\,n_2,\,\cdots,\,n_I\}_{\alpha}}$ for all possible compositions of occupation numbers span a basis of the single-band Hilbert space.
The Bose-Hubbard Hamiltonian for a single-component Bose gas in one dimension reads \cite{JaBr98}:
\begin{equation}
\op{H}=-J\sum_{i=1}^I\left(\conop{a}_{i+1}\desop{a}_i+\text{h.a.}\right)+\sum_{i=1}^I\epsilon_i\op{n}_i+\frac{U}{2}\sum_{i=1}^I\op{n}_i\left(\op{n}_i-1\right)\text{,}\label{eq_hamop}
\end{equation}
with creation (annihilation) operators~$\conop{a}_{i}$~($\desop{a}_{i}$) for a boson at site~$i$ and occupation number operators~$\op{n}_i=\conop{a}_{i}\desop{a}_{i}$. The first term in (\ref{eq_hamop}) accounts for the hopping between adjacent sites with the tunnelling matrix element $J$. The second term introduces a site-dependent single-particle energy $\epsilon_i$ which is used to describe, e.g., an external harmonic trapping potential or a superlattice potential \cite{RoBu03b,RoBu03c}. The third term of the Hamiltonian (\ref{eq_hamop}) accounts for the on-site two-body interaction of the atoms with interaction strength $U$. Throughout this paper we use periodic boundary conditions, i.e., hopping between the first and the last site of the lattice is possible.
The parameters of the Bose-Hubbard Hamiltonian are directly connected to the physical lattice realized by an optical standing wave \cite{JaBr98}. The standing wave generates an array of microscopic potentials -- the lattice sites of the Hubbard model. A large intensity of the standing wave, i.e., a deep lattice potential, suppresses the hopping between adjacent sites and confines the atoms to individual sites. At low or vanishing lattice amplitudes the atoms move freely through the lattice and establish long-range coherence.
The dimension $D$ of the number basis for a bosonic system is given by
\begin{equation}
D=\frac{(N+I-1)!}{N!(I-1)!}\text{,}\nonumber
\end{equation}
which increases rapidly with the number of particles $N$ and lattice sites $I$. For a fixed filling factor $N/I=1$, the basis dimension is $D=6435$ for a system of $8$ sites, $D=92378$ for a $10$-site system, and $D=1352078$ for $12$ sites. In the number basis representation the Hamilton matrix is extremely sparse, since only the nearest-neighbour hopping term generates off-diagonal matrix elements. Due to this sparsity the lowest eigenstates can be obtained efficiently using Lanczos-type algorithms \cite{arpack} for system sizes up to typically $I=N=12$ with standard desktop computers. The eigenstates are expanded in the number basis,
\begin{equation}
\ket{\nu}=\sum_{\alpha=1}^DC_{\alpha}^{(\nu)}\ket{\{n_1,n_2,\cdots,n_I\}_{\alpha}}\text{,}\label{eq_eigenstate}
\end{equation}
where $C_{\alpha}^{(\nu)}$ denotes the expansion coefficients of the $\nu$-th eigenstate. These coefficients are obtained by diagonalizing the Hamilton matrix.
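The rapid growth of the basis dimension $D$ quoted above is easy to verify directly; a minimal Python sketch (the function name is ours):

```python
from math import comb

def fock_dimension(n_particles: int, n_sites: int) -> int:
    # D = (N + I - 1)! / (N! (I - 1)!) = binomial(N + I - 1, N)
    return comb(n_particles + n_sites - 1, n_particles)

# Unit filling N/I = 1, as in the text:
for n in (8, 10, 12):
    print(f"N = I = {n}: D = {fock_dimension(n, n)}")
# -> 6435, 92378, 1352078
```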
In order to treat systems of larger size, we have introduced a truncation scheme for the number basis \cite{ScHi06}. In strongly correlated regimes only a few number states contribute to the low-lying eigenstates, since configurations with several atoms occupying a single site are suppressed due to the strong interaction. For the description of the ground state, the basis dimension can be reduced to less than one percent of the full dimension without a significant loss of quality. In contrast, the proper description of a system of weakly interacting atoms requires more number states in the basis. The Hamiltonian itself provides a simple \textit{a priori} measure for the importance of number states via its diagonal matrix elements. Only those states with diagonal matrix elements smaller than a certain truncation energy $E_{\text{trunc}}$, which depends on the Hubbard parameters, are included in the basis:
\begin{equation}
E_{\text{trunc}}\ge\matrixe{n_1,\cdots,n_I}{\op{H}}{n_1,\cdots,n_I}\text{.}
\end{equation}
By adjusting the truncation energy, one can tune the basis for a more accurate description of the physical system or a smaller basis dimension. Detailed benchmarks and applications of this basis truncation scheme can be found in \cite{ScHi06}.
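To illustrate the criterion, the following sketch enumerates all number states of a small regular lattice ($\epsilon_i=0$), for which the diagonal matrix element reduces to the interaction energy $\frac{U}{2}\sum_i n_i(n_i-1)$, and counts the states below a given cut-off (Python sketch; the function names are ours):

```python
def fock_states(n_particles, n_sites):
    """Generate all occupation tuples (n_1, ..., n_I) with sum n_i = N."""
    if n_sites == 1:
        yield (n_particles,)
        return
    for first in range(n_particles + 1):
        for rest in fock_states(n_particles - first, n_sites - 1):
            yield (first,) + rest

def truncated_dimension(n_particles, n_sites, u, e_trunc):
    """Count number states with diagonal energy (U/2) sum n(n-1) <= E_trunc.

    For a regular lattice (eps_i = 0) the hopping term is purely
    off-diagonal, so only the interaction energy enters the criterion."""
    return sum(
        1
        for state in fock_states(n_particles, n_sites)
        if 0.5 * u * sum(n * (n - 1) for n in state) <= e_trunc
    )

# N = I = 8 at U_0/J_0 = 30 (energies in units of J_0):
for e_trunc in (30.0, 60.0, 90.0):
    print(e_trunc, truncated_dimension(8, 8, 30.0, e_trunc))
# -> 57, 477, 1205 states out of the complete 6435
```

For $N=I=8$ at $U_0/J_0=30$, the cut-offs $E_{\text{trunc}}/J_0=30$, $60$, and $90$ retain $57$, $477$, and $1205$ of the $6435$ number states, corresponding to at most 1p1h, 2p2h, and 3p3h excitations of the unit-filling reference state.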
\subsection{Time evolution}
Different experimental schemes have been developed to probe the dynamic response of atomic gases in optical lattices. One possible scheme involves the tilting of the lattice to induce a static force \cite{GrMa02a,KBM05}. Another method is Bragg spectroscopy \cite{StIn99}, which has the advantage of not involving any side effects like Bloch oscillations or Zener tunnelling \cite{StMo04,KBM05}. It therefore allows a very precise determination of the excitation spectrum.
In the case of atoms in an optical lattice this method can be implemented by a temporal modulation of the lattice potential $V_{\text{lat}}(x)$ with frequency $\omega$. A regular one-dimensional optical lattice generated by an optical standing wave is given by
\begin{equation}
V_{\text{lat}}(x)=V_0\sin^2(kx)\text{,}\label{eq_static_pot}
\end{equation}
with amplitude $V_0$ and wavenumber $k$. The modulation of the lattice is achieved by introducing a time-dependent factor:
\begin{equation}
V_{\text{lat}}(x,t) = V_{0}\left[1+F\sin(\omega t)\right]\sin^2(kx)\label{eq_osc_pot}\text{.}
\end{equation}
The amplitude of the potential (\ref{eq_osc_pot}) is oscillating around the background value $V_0$ with frequency $\omega$ and amplitude $V_0F$ with $F\ll1$. This introduces two sidebands with frequencies $\pm\omega$ and defines the corresponding excitation energy $\omega$ \cite{StMo04}.
As mentioned in the previous section, the physical lattice (\ref{eq_static_pot}) enters the Bose-Hubbard model via the parameters $J$, $U$, and $\epsilon_i$ \cite{JaBr98}. In order to obtain the time-dependent expressions of these parameters, the localized Wannier states are approximated by Gaussians of width $\sigma$ \cite{KBM05}. The optimal value of the width $\sigma(t)$ is determined by an energy minimization using the lattice potential (\ref{eq_osc_pot}) at a given time $t$. Computing the matrix elements of the kinetic energy and the interaction part of the first quantized Hamiltonian within this Gaussian approximation leads to the time-dependent Hubbard parameters:
\begin{equation}
\begin{array}{rcl}
J(t) & = & J_0 \exp[-F\sin(\omega t)] \;, \\
U(t) & = & U_0 [1 + F\sin(\omega t)]^{1/4} \;.
\end{array}\label{eq_hubbard_params}
\end{equation}
In lowest-order approximation, the temporal change of the on-site energies is directly given by the change of the potential $V_{\text{lat}}(x,t)$, i.e.,
\begin{equation}
\epsilon_i(t) = \epsilon_{i,0}[1+F\sin(\omega t)] \text{.}
\end{equation}
By substituting the static parameters of the Hamiltonian (\ref{eq_hamop}) with the dynamical ones (\ref{eq_hubbard_params}) we obtain the time-dependent Bose-Hubbard Hamiltonian.
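In code, the modulated parameters translate directly (Python, standard library only; the function name is ours):

```python
import math

def hubbard_parameters(t, j0, u0, eps0, f, omega):
    """Time-dependent Hubbard parameters of the modulated lattice:
    J(t)     = J_0 exp[-F sin(wt)]
    U(t)     = U_0 [1 + F sin(wt)]^(1/4)
    eps_i(t) = eps_{i,0} [1 + F sin(wt)]   (one entry per site)"""
    s = f * math.sin(omega * t)
    return (j0 * math.exp(-s),
            u0 * (1.0 + s) ** 0.25,
            [e * (1.0 + s) for e in eps0])
```

At $t=0$ the static values $J_0$, $U_0$, and $\epsilon_{i,0}$ are recovered.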
For our simulations of the time evolution in modulated optical lattices, we start with an initial state given by the ground state of the Bose-Hubbard Hamiltonian for given parameters $J_0$, $U_0$, and $\epsilon_{i,0}$.
Starting from this initial state, we then evolve the system in time steps $\Delta t$ while modulating the lattice with a fixed frequency $\omega$ and amplitude $V_0F$. The frequency $\omega$ defines the probe energy, and the response of the system is measured by evaluating observables using the time-evolved state $\ket{\psi,t}$.
\subsection{Numerical methods}
The time evolution is performed either by the Crank-Nicolson scheme (CN) or by a 5th order predictor-corrector method\footnote{We use the 5th order Adams-Bashforth predictors and the Adams-Moulton correctors.} (PC), depending on the system parameters. Since the Crank-Nicolson scheme is an implicit method which requires the solution of a set of linear equations\footnote{We employ the PARDISO solver \cite{ScGa00, ScGa04, pardiso} which comes with Intel's \textit{Math Kernel Library}.} of the basis dimension at each timestep, it is feasible only for small systems. Nevertheless, due to its unconditional stability and in combination with the basis truncation, it is a valuable tool to simulate atomic gases in the strongly correlated regime.
The predictor-corrector method is an explicit scheme which allows one to treat larger system sizes. The drawback of this method is its numerical instability, especially in cases where small expansion coefficients of the states (\ref{eq_eigenstate}) are involved. Numerical errors in these coefficients accumulate and lead to a collapse of the computation after a few steps. Nevertheless, it is possible to apply this method in the weakly interacting regime, where only a few small coefficients appear. In connection with the basis truncation, which predominantly discards those number states with small coefficients, the stability is improved. It is, however, not feasible to evolve systems in the strongly interacting regime using the PC method. The initial states consist of only a few dominant number states, i.e., most of the coefficients in the number representation are very small. In these cases the truncation cannot be used to improve the situation, because one would have to neglect a large number of states and, consequently, would miss some features of the excitation spectrum.
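For orientation, one Crank-Nicolson step amounts to solving the linear system $(1+\mathrm{i}\op{H}\Delta t/2)\ket{\psi,t+\Delta t}=(1-\mathrm{i}\op{H}\Delta t/2)\ket{\psi,t}$ (with $\hbar=1$). A dense-matrix toy version for two bosons on two sites illustrates the exact norm conservation of the scheme (NumPy assumed; a production implementation would use sparse solvers as described above):

```python
import numpy as np

def crank_nicolson_step(psi, h, dt):
    """One Crank-Nicolson step (hbar = 1):
    (1 + i H dt/2) psi(t + dt) = (1 - i H dt/2) psi(t).
    The Cayley form is unconditionally stable and exactly norm-conserving."""
    eye = np.eye(len(psi))
    return np.linalg.solve(eye + 0.5j * dt * h, (eye - 0.5j * dt * h) @ psi)

# Toy system: N = 2 bosons on I = 2 sites, basis {|2,0>, |1,1>, |0,2>}.
J, U = 1.0, 10.0
s2 = np.sqrt(2.0)
H = np.array([[U, -s2 * J, 0.0],
              [-s2 * J, 0.0, -s2 * J],
              [0.0, -s2 * J, U]])

psi = np.array([0.0, 1.0, 0.0], dtype=complex)   # start in |1,1>
for _ in range(1000):
    psi = crank_nicolson_step(psi, H, dt=0.01)

print(abs(np.linalg.norm(psi) - 1.0))            # norm conserved to round-off
```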
\section{Bose gases in an optical lattice}\label{sec_reg_lattice}
\subsection{Setup and linear response analysis}
As a first benchmark, we study the dynamical properties of a Bose gas in a one-dimensional optical lattice with periodic boundary conditions. The system consists of $N=10$ bosons at $I=10$ sites of a regular lattice ($\epsilon_i=0$). The Bose-Hubbard Hamiltonian of such a system is given by
\begin{equation}
\op{H}(t)=-J\sum_{i=1}^I\left(\conop{a}_{i+1}\desop{a}_i+\text{h.a.}\right)+\frac{U}{2}\sum_{i=1}^I\op{n}_i\left(\op{n}_i-1\right)\text{.}\label{eq_reg_hamop}
\end{equation}
In addition to the exact time evolution, we examine the excitation of the Bose gas using a linear approximation of this Hamilton operator as introduced in refs. \cite{KBM05,IuCa06,KoIu06,ClJa06}. The Hamiltonian (\ref{eq_reg_hamop}) can be written in terms of the hopping and interaction operators, $\op{H}_J$ and $\op{H}_U$, respectively,
\begin{equation}
\op{H}=-J(\tilde{V}_0)\op{H}_J+U(\tilde{V}_0)\op{H}_U\label{eq_short_hamop}
\end{equation}
with the time-dependent amplitude of the physical lattice potential (\ref{eq_osc_pot})
\begin{equation}
\tilde{V}_0(t)=V_0 [1+F\sin(\omega t)].
\end{equation}
We now linearize the Hamiltonian (\ref{eq_short_hamop}) with respect to the perturbation by retaining only the lowest-order terms of a Taylor expansion in the modulation amplitude $F$,
\begin{equation}
\op{H}_{\text{lin}} = \op{H}_0+F\frac{\partial\op{H}}{\partial F}\Bigg|_{F=0}.
\end{equation}
This Hamiltonian depends linearly on the temporal variation of the lattice amplitude. The initial Hamilton operator $\op{H}_0$ is given by (\ref{eq_reg_hamop}) at the time $t=0$. Evaluation of the derivative of the Hamiltonian~(\ref{eq_short_hamop}) using the dependence of the Hubbard parameters on the lattice amplitude results in
\begin{equation}
\hspace{-10mm}\op{H}_{\text{lin}}(t) = \op{H}_0+FV_0\sin(\omega t)\left[\frac{d\ln U}{d\tilde{V}_0}\Bigg|_{F=0}\op{H}_0-J\left(\frac{d\ln J}{d\tilde{V}_0}\Bigg|_{F=0}-\frac{d\ln U}{d\tilde{V}_0}\Bigg|_{F=0}\right)\op{H}_J\right]\text{,}\label{eq_lin_ham}
\end{equation}
which consists of the unperturbed Hamiltonian $\op{H}_0$ and a linear perturbation part. The first term of the perturbation is proportional to the unperturbed Hamiltonian and the second term includes the hopping operator $\op{H}_J$. Due to the small modulation amplitude the first part of the perturbation leads to a small energy shift which can be neglected. The second part generates excitations, since the hopping operator connects different eigenstates with the coupling parameter
\begin{equation}
\kappa=-JFV_0\sin(\omega t)\left(\frac{d\ln J}{d\tilde{V}_0}\Bigg|_{F=0}-\frac{d\ln U}{d\tilde{V}_0}\Bigg|_{F=0}\right)\text{.}
\end{equation}
In order to identify possible excitations of the ground state $\ket{0}$ to higher-lying states $\ket{\nu}$ of the static spectrum, we look for non-vanishing matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$ of the hopping operator $\op{H}_J$.
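These matrix elements are easily evaluated in a minimal example of two bosons on two sites, with basis $\{\ket{2,0},\ket{1,1},\ket{0,2}\}$: the parity-odd excited state decouples from the ground state, while the parity-even state at excitation energy $\approx U_0$ carries a large matrix element (NumPy sketch; variable names are ours):

```python
import numpy as np

# Two bosons on two sites, basis {|2,0>, |1,1>, |0,2>}; H = -J H_J + U H_U.
J, U = 1.0, 30.0
s2 = np.sqrt(2.0)
H_J = np.array([[0.0, s2, 0.0],
                [s2, 0.0, s2],
                [0.0, s2, 0.0]])                  # hopping operator
H_U = np.diag([1.0, 0.0, 1.0])                    # interaction operator
H = -J * H_J + U * H_U

energies, states = np.linalg.eigh(H)              # ascending order, columns = states
ground = states[:, 0]
for nu in (1, 2):
    element = abs(states[:, nu] @ H_J @ ground)
    print(f"|<{nu}|H_J|0>| = {element:.3f} at excitation energy "
          f"{energies[nu] - energies[0]:.3f}")
```

Only the parity-even excited state is connected to the ground state by $\op{H}_J$; this two-site selection pattern is a miniature analogue of the selective coupling to specific Hubbard-band states discussed below.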
\begin{figure}[t]
\includegraphics{fig1a.eps}\includegraphics{fig1b.eps}\includegraphics{fig1c.eps}
\caption{Lowest energy eigenvalues for a system with $N=10$ bosons and $I=10$ sites at interaction strengths $U_0/J_0=30$, $15$, and $3$, computed by diagonalization of the Hamilton matrix in a truncated number basis (see text). The vertical lines mark the strongest matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$ between ground state and excited states. The strength of the matrix elements is represented by the gray level of the lines, where darker lines correspond to stronger values.}
\label{plot_spec_reg}
\end{figure}
As a first application, and in preparation for the discussion of superlattice potentials, we examine a system of $10$~bosons in a regular lattice with $10$~sites. The low-lying eigenstates are obtained using a basis consisting of the $5911$ energetically lowest number states out of the complete basis ($D=92378$), generated for the interaction strength $U_0/J_0=30$ with the truncation energy $E_{\text{trunc}}/J_0=90$. Figure \ref{plot_spec_reg} shows the spectra of the truncated system for interaction strengths $U_0/J_0=30$, $15$, and $3$.
Deep within the Mott-insulator regime, for $U_0/J_0=30$, we obtain the characteristic gapped energy spectrum shown in figure \ref{plot_spec_reg}(a). The vertical lines illustrate the strongest hopping matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$ between ground and excited states. The energy scale is shifted with respect to the ground state energy, so that the vertical lines represent excitations into the Hubbard bands ($U_0$-band) and the corresponding energies. It is remarkable that there are no sizable transition matrix elements to states in the $2U_0$-band, whereas there are to states in the $3U_0$-band, as was pointed out in references \cite{KoIu06,ClJa06}. With decreasing interaction strength the gaps are reduced and eventually vanish in the superfluid regime, as the sequence of plots in figure \ref{plot_spec_reg} demonstrates.
A simple interpretation of the excitations to the individual Hubbard bands can be given in the language of particle-hole excitations. In the limit of strong repulsive interactions, i.e., deep in the Mott-insulator regime, the ground state of a system with unit filling is given by $\ket{0}\approx\ket{1,1,1,1,1,1,\dots}$. The simplest mechanism to excite this state is a one-particle-one-hole excitation (1p1h) as illustrated graphically in table \ref{tab_exc}. The excitation energy associated with such a process equals the interaction strength~$\Delta E=U_0$, hence it represents a possible excitation from the ground state to a state in the first Hubbard band. Table \ref{tab_exc} lists the basic particle-hole excitations together with the Hubbard band they connect to.
\begin{table}[t!]
\centering
\begin{tabular}{llc}
schematic & type & energy transfer\\
\hline\hline
\\[2pt]
\includegraphics[scale=0.4]{tab1a.eps} & one-particle-one-hole (1p1h) & $U_0$
\\[1pt]
\hline
\\[2pt]
\includegraphics[scale=0.4]{tab1b.eps} & two-particle-two-hole (2p2h) & $2U_0$
\\[1pt]
\hline
\\[2pt]
\includegraphics[scale=0.4]{tab1c.eps} & three-particle-three-hole (3p3h) & $3U_0$
\\[1pt]
\includegraphics[scale=0.4]{tab1d.eps} & 2p2h with two particles at same site & $3U_0$
\\[1pt]
\hline\hline
\end{tabular}
\caption{Basic types of particle-hole excitations of first order, sorted by energy transfer.}
\label{tab_exc}
\end{table}
\subsection{Time evolution}
\begin{figure}[t!]
\includegraphics{fig2a.eps}\includegraphics{fig2b.eps}\includegraphics{fig2c.eps}\\
\includegraphics{fig2d.eps}\includegraphics{fig2e.eps}\includegraphics{fig2f.eps}\\[-10pt]
\begin{center}\includegraphics{fig2g.eps}\end{center}
\caption{Energy transfer $\Delta E$ for a system of $N=10$ bosons and $I=10$ lattice sites at different interaction strengths due to the modulation of the lattice amplitude ($F=0.1$). The density plots show the energy transfer as function of modulation frequency and time, the line plots on top depict the energy transfer as function of frequency averaged over the whole evolution. The sequence from (a) to (f) corresponds to interaction strength $U_0/J_0=30$, $20$, $15$, $10$, $5$, and $3$. The (red) arrows mark the energies of eigenstates $\ket{\nu}$ which are connected to the ground state by the hopping operator $\op{H}_J$ (compare figure \ref{plot_spec_reg}).}
\label{plot_sf-mott}
\end{figure}
In order to probe the dynamical behaviour of the Bose gas, we choose a fixed ratio of interaction to tunnelling strength, $U_0/J_0$. The ground state of the system is obtained by exact diagonalization and is used as the initial state for the time evolution. Starting from this state, the system is evolved in time while the lattice is modulated with frequency $\omega$ and amplitude $FV_0=0.1V_0$. The response of the system is probed via the energy transfer, evaluated at each timestep using the time-evolved state $\ket{\Psi,t}$:
\begin{equation}
\Delta E(t) = \matrixe{\Psi,t}{\op{H}_0}{\Psi,t}-E_0\text{.}
\end{equation}
Here, $\op{H}_0$ is the Hamiltonian at time $t=0$, and $E_0$ is its ground-state energy.
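The complete probing protocol, preparation of the ground state, periodic modulation of $J$ and $U$, and read-out of the energy transfer $\Delta E(t)$, can be condensed into a few lines for a two-site, two-boson toy system (NumPy assumed; this is an illustrative sketch, not the code used to produce figure \ref{plot_sf-mott}):

```python
import numpy as np

def hamiltonian(j, u):
    """Two-boson, two-site Bose-Hubbard matrix, basis {|2,0>, |1,1>, |0,2>}."""
    s2 = np.sqrt(2.0)
    return np.array([[u, -s2 * j, 0.0],
                     [-s2 * j, 0.0, -s2 * j],
                     [0.0, -s2 * j, u]])

def energy_transfer(omega, j0=1.0, u0=10.0, f=0.1, dt=0.02, steps=500):
    """Modulate the lattice at frequency omega and return
    Delta E = <psi,t| H_0 |psi,t> - E_0 after `steps` Crank-Nicolson steps."""
    h0 = hamiltonian(j0, u0)
    w, v = np.linalg.eigh(h0)
    e0, psi = w[0], v[:, 0].astype(complex)
    eye = np.eye(3)
    for n in range(steps):
        s = f * np.sin(omega * (n + 0.5) * dt)    # drive at the midpoint of the step
        h = hamiltonian(j0 * np.exp(-s), u0 * (1.0 + s) ** 0.25)
        psi = np.linalg.solve(eye + 0.5j * dt * h, (eye - 0.5j * dt * h) @ psi)
    return float(np.real(np.vdot(psi, h0 @ psi)) - e0)

w = np.linalg.eigh(hamiltonian(1.0, 10.0))[0]
resonance = w[2] - w[0]      # gap to the state coupled by the hopping operator
print(f"Delta E on resonance  (w = {resonance:.2f} J0): {energy_transfer(resonance):.3f}")
print(f"Delta E off resonance (w = 5.00 J0): {energy_transfer(5.0):.3f}")
```

Driving at the gap energy produces a large energy transfer, while an off-resonant drive leaves the system essentially in its ground state, the same qualitative behaviour as in the full simulations discussed next.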
Figure \ref{plot_sf-mott} shows the energy transfer for $10$ bosons and $10$ lattice sites at several ratios~$U_0/J_0$, ranging from the Mott insulating phase (MI), depicted in panels \ref{plot_sf-mott}(a)-(e), to the superfluid regime (SF), shown in panel \ref{plot_sf-mott}(f). The density plots in the lower part of each panel show the energy transfer $\Delta E$ as a function of the modulation frequency $\omega$ and time. The line plots in the upper part of each panel show the energy transfer as a function of $\omega$ and averaged over the full time evolution. The (red) arrows indicate the energies of excited states with sizable hopping matrix elements to the ground state; the size of the matrix elements is reflected by the length of the arrows.
In the simple picture of particle-hole excitations (see table \ref{tab_exc}) one would expect excitations at integer multiples of the interaction strength in the strongly correlated regime. The simulations illustrated in figure \ref{plot_sf-mott}(a)-(d) partially confirm this expectation. A large resonance appears at the frequency $\omega=U_0$ in the Mott phase; in panels (b)-(d), weaker excitations also appear at $\omega=3U_0$. In the case of the simulation for $U_0/J_0=30$ the truncated basis does not include number states with energies of $2U_0$ and higher, and consequently the $3U_0$-resonance does not appear.
\begin{figure}[t]
\includegraphics{fig3.eps}
\caption{Relation between the excitation energies of the static Hubbard Hamiltonian and the resonance structure of a bosonic system ($I=N=10$) at interaction strength $U_0/J_0=20$. The left panel shows the time-averaged energy transfer as a function of the frequency $\omega$; the right panel depicts the lower part of the eigenvalue spectrum of the corresponding static Hamiltonian. The vertical gray lines mark the eigenstates which are connected to the ground state by strong matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$. The horizontal (red) lines point out the correspondence between the excitation energies and the resonance structure.}
\label{plot_substruct}
\end{figure}
The absence of a resonance peak at frequency $\omega=2U_0$ was already evident from the linear response analysis illustrated in figure \ref{plot_spec_reg}(a)-(c), which shows no significant matrix elements between the ground state and excited states in the $2U_0$-band. This reveals that 2p2h-processes as schematically depicted in table \ref{tab_exc} are not relevant for the excitation.
Towards the superfluid regime the resonances are significantly broadened, which is in agreement with experimental evidence \cite{StMo04}. Figure \ref{plot_sf-mott}(e) shows a simulation in the transition region from Mott-insulator to superfluid around $(U_0/J_0)_c=4.65$ \cite{RoBu03a}. The response spectrum reveals a significant shift of the resonance towards higher frequencies for decreasing interaction strength. The simple approximation of a single number state as the ground state is not adequate for weaker interactions and especially in the superfluid regime due to the delocalization of the atoms. The admixture of other number states leads to additional particle-hole excitations and therefore to a broadening of the resonances.
The fine-structure of the resonances (figure \ref{plot_spec_reg}) can be explained by the linear response analysis. Figure \ref{plot_substruct} reveals the connection between the excitations to higher eigenstates and the structure of the resonances appearing in the time evolution. The left hand side shows the time-averaged energy transfer of an $I=N=10$ bosonic system, while the right hand side shows the corresponding energy spectrum obtained by solving the eigenvalue problem of the initial Hamiltonian. The vertical lines indicate the eigenstates that are connected to the ground state by large matrix elements of the hopping operator~$\op{H}_J$. The horizontal dashed (red) lines and the (red) arrows mark the corresponding excitation energies, which reproduce, with remarkable precision, the fine-structure of the resonances emerging in the time evolution.
In addition to the resonances at integer multiples of the interaction strength, there is also a tiny resonance at $U_0/2$ in the strongly interacting cases. It results from non-linear effects, such as the absorption of two photons of energy $U_0/2$, which are not captured by the linear response analysis.
\subsection{Benchmark for the truncation scheme}
In section \ref{sec_bhm} we introduced the basis truncation scheme in order to reduce the numerical effort of the simulations. For static ground-state properties, we have shown that the basis dimension can typically be reduced by two orders of magnitude without a significant loss of precision \cite{ScHi06}. This does not necessarily hold for the exact time evolution discussed in this paper. In order to describe the excitation effects properly, one has to use a basis which allows one to describe more than just a few eigenstates.
\begin{figure}[t]
\includegraphics{fig4.eps}
\caption{Comparison of the time-averaged energy transfer for a Bose gas with $I=N=8$ at the interaction strength $U_0/J_0=30$. The different lines show the result for the full basis \linemediumsolid\ and for truncated bases with $E_{\text{trunc}}/J_0=90$ \linemediumdashed, $E_{\text{trunc}}/J_0=60$ \linemediumdashdot, and $E_{\text{trunc}}/J_0=30$ \linemediumdotted. Depicted are the main resonance at $\omega=U_0$ (a), the non-linear resonance at $\omega=U_0/2$ (b), and the $3U_0$-resonance (c).}
\label{plot_trunc-test}
\end{figure}
To test the basis truncation in the context of time-dependent calculations with a modulated lattice we compare time evolutions with several cut-off energies $E_{\text{trunc}}$. We perform the time evolution of a system of $N=8$~bosons at $I=8$~sites for $U_0/J_0=30$, which is the largest system with integer filling that allows the use of the complete basis. We compare the energy transfer as a function of the modulation frequency $\omega$, averaged over the evolution time $t_{\max}/J_0^{-1}=10$. Table \ref{tab_trunc} specifies the characteristics of the bases used for the comparison. Figure~\ref{plot_trunc-test}(a) shows the results for the $U_0$-resonance. The response obtained with the basis truncated at $E_{\text{trunc}}/J_0=90$ is in excellent agreement with the calculation using the complete basis. The basis with cut-off energy $E_{\text{trunc}}/J_0=60$ shows small deviations in the peaks. The basis truncated at $E_{\text{trunc}}/J_0=30$ reproduces the peak as well as its fine-structure, but the whole resonance is shifted by roughly $\Delta E=1.5J_0$. The shift towards higher energies for more severe truncations can be explained by the Hylleraas-Undheim-MacDonald theorem \cite{HyUn30,Ma33}, which states that the exact eigenenergies are lower bounds for the corresponding eigenvalues obtained with a truncated basis. In a variational picture, the basis truncation reduces the flexibility of the number state representation (trial states) and thus leads to an increase of the energy eigenvalues in the truncated space.
Figure~\ref{plot_trunc-test}(b) shows an enlarged energy interval around the non-linear resonance at frequency~$\omega=U_0/2$. The basis with the truncation energy $E_{\text{trunc}}/J_0=90$ perfectly reproduces the result of the complete basis, and even the $E_{\text{trunc}}/J_0=60$-basis causes only small deviations. In agreement with the results for the $U_0$-resonance, the whole resonance is significantly shifted to higher energies in the simulation using the $E_{\text{trunc}}/J_0=30$-basis due to the overestimation of the eigenenergies. A closer look at the $3U_0$-resonance in figure~\ref{plot_trunc-test}(c) illustrates the limitations of truncated bases for the description of high-lying resonances. For truncation energies $E_{\text{trunc}}/J_0=30$ and $60$ the bases do not include any number states that correspond to energies as high as the excitation energy, hence no energy transfer is possible. The basis with the truncation energy $E_{\text{trunc}}/J_0=90$ just about includes the number states matching the excitation energy of $\omega=3U_0=90J_0$, and consequently an energy shift occurs, which is comparable in size to the shift of the $U_0$-resonance for the truncated basis with $E_{\text{trunc}}/J_0=30$.
\begin{table}[t!]
\centering
\begin{tabular}{cll}
$E_{\text{trunc}}/J_0$ & basis dimension & excitations included \\
\hline\hline\\[-10pt]
$\infty$ & 6435 (complete) & all \\
90 & 1205 & up to 3-particle-3-hole \\
60 & 477 & up to 2-particle-2-hole \\
30 & 57 & up to 1-particle-1-hole \\
\hline\hline
\end{tabular}
\caption{Main characteristics of the bases used for the comparison. Shown are the truncation energy $E_{\text{trunc}}$, the basis dimension and the highest included particle-hole excitation with respect to the reference state $\ket{1,1,1,1,1,1,1,1}$. }
\label{tab_trunc}
\end{table}
\section{Bose gases in a two-colour superlattice}\label{sec_superlattice}
\subsection{Two-colour superlattice and the phase diagram}
\begin{figure}[t!]
\includegraphics{fig5.eps}
\caption{Left panels: Two different sets of on-site energies $\epsilon_{i}$ of period-five two-colour superlattices used in this paper. Right panel: Phase diagram of bosons ($N=10$) in an optical superlattice ($I=10$) spanned by the interaction strength~$U_0/J_0$ and the superlattice amplitude $\Delta/J_0$ for the set (A) of on-site energies. Depicted is the maximum coefficient $C^2_{\text{max}}$ of the number state expansion (\ref{eq_eigenstate}) of the ground state.}
\label{plot_phase_diag}
\end{figure}
In this section the dynamical signatures of Bose gases in two-colour superlattices are investigated. In experiment, superlattices can be generated by a superposition of two optical standing waves, which leads to a sinusoidal modulation of the depth of the individual lattice wells. The spatial modulation enters the Hubbard model via the external potential term discussed in section \ref{sec_bhm}, with an appropriate distribution of the on-site energies $\epsilon_{i}$. The left hand side of figure \ref{plot_phase_diag} shows two different distributions of on-site energies for a two-cell superlattice of 10 sites. Essentially, the two superlattices differ in the relative phase of the standing waves. We will use these two realizations to estimate the dependence of the response on the detailed topology of the superlattice. In the following we refer to these on-site energy distributions as superlattices (A) and (B).
The amplitude $\Delta$ of the superlattice is an additional parameter which generates a rich structure in the phase diagram \cite{RoBu03b,RoBu03c}. The right hand side of figure \ref{plot_phase_diag} shows the two-dimensional phase diagram for $N=10$ bosons and $I=10$ sites with on-site energies $\epsilon_{i}$ according to superlattice (A) (figure \ref{plot_phase_diag}). Plotted is the maximum coefficient $C^2_{\text{max}}$ of the expansion (\ref{eq_eigenstate}) of the ground state in the number basis. The darker (red) shadings represent regions with a large maximum coefficient, which indicates that a single number state dominates the ground state. Brighter shadings refer to small maximum coefficients, which correspond to a ground state given by a superposition of many number states.
\begin{figure}[t!]
\centering
\psfrag{Delta}{$\Delta$}
\psfrag{label-a}{(a)}
\psfrag{label-b}{(b)}
\includegraphics[scale=0.5]{fig6a.eps}\hspace{2cm}
\includegraphics[scale=0.5]{fig6b.eps}
\caption{Schematic diagram of a single cell of the superlattice: If the superlattice amplitude $\Delta$ is close to the interaction strength $U_0$, the number states (a) and (b) have roughly the same energy. The relative height between the sites represents the different on-site energies $\epsilon_i$.}
\label{car_bg}
\end{figure}
The bright region at small interaction strengths and superlattice amplitudes indicates the superfluid phase. By increasing the interaction strength the system enters the homogeneous Mott-insulator phase, which is represented by the dark (red) area below the diagonal~($U_0=\Delta$). As long as the interaction strength exceeds the superlattice amplitude it is energetically unfavourable to occupy a site with more than one boson. Close to the point where amplitude and interaction are equal, the maximum coefficient $C^2_{\text{max}}$ decreases: Here, the two number states depicted in figure \ref{car_bg} have the same energy, since the energy gain in on-site energy obtained by hopping to the deepest well (figure \ref{car_bg}(b)) is compensated by the increase in interaction energy.
These redistributions of the particles in favour of the deep lattice wells become more and more important if the superlattice amplitude increases further. As soon as the gain in on-site energy for a particular redistribution exceeds the loss due to the increased interaction energy, the particle will move to a deeper lattice well. These successive redistributions generate the lobe structure in the phase diagram (figure \ref{plot_phase_diag}). In the case of random lattices the redistributions happen almost continuously, giving rise to the Bose glass phase. Eventually all atoms are localized at the deepest lattice well. This complete localization also appears for small or vanishing interaction strengths $U_0/J_0$ in the phase diagram.
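The level-crossing argument of figure \ref{car_bg} can be checked in a minimal two-site model. The sketch below is illustrative only (two bosons, hopping $J=1$, and the deep well modelled by on-site energies $(0,-\Delta)$ are my assumptions, not the full superlattice): the configurations of figure \ref{car_bg}(a) and (b) become degenerate at $\Delta=U_0$, where the maximum ground-state coefficient dips towards one half.

```python
import numpy as np

def two_site_c2max(U, delta, J=1.0):
    """Two bosons on two sites with on-site energies (0, -delta).
    Basis ordering: |2,0>, |1,1>, |0,2>."""
    H = np.diag([U, -delta, U - 2.0 * delta])
    H[0, 1] = H[1, 0] = -J * np.sqrt(2.0)   # hopping |2,0> <-> |1,1>
    H[1, 2] = H[2, 1] = -J * np.sqrt(2.0)   # hopping |1,1> <-> |0,2>
    w, v = np.linalg.eigh(H)
    return np.max(v[:, 0] ** 2)

for delta in (0.0, 30.0, 60.0):
    # dips near delta = U, where |1,1> and |0,2> are degenerate
    print(delta, round(two_site_c2max(30.0, delta), 3))
```

Away from the crossing a single number state dominates; at $\Delta=U_0$ the ground state is an almost equal superposition of the two configurations, mirroring the dip of $C^2_{\text{max}}$ along the diagonal of the phase diagram.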
\subsection{Linear response analysis}
\begin{figure}[t!]
\includegraphics{fig7a.eps}
\includegraphics{fig7b.eps}
\caption{Energy spectrum of an $I=N=5$ system at fixed interaction strength $U_0/J_0=30$ and various superlattice amplitudes $\Delta/J_0$. The vertical lines mark the eigenstates that are connected to the ground state via the hopping operator $\op{H}_J$.}
\label{plot_spect}
\end{figure}
As a first analysis of the response of superlattice systems we examine the spectrum of the static Hamiltonian (\ref{eq_hamop}) for a single superlattice cell. Figure \ref{plot_spect} shows the spectra of the Bose-Hubbard Hamiltonian for $N=5$~bosons and $I=5$~sites at fixed interaction strength~$U_0/J_0=30$. Instead of two-cell superlattices such as (A) and (B) we consider a single cell only, because the small basis dimensions allow us to solve the full eigenproblem without truncation. Note that the energies plotted in figure \ref{plot_spect} are shifted by the ground state energy in such a manner that the ground state energy is zero.
The sequence of plots in figure \ref{plot_spect} reveals the effect of the superlattice amplitude~$\Delta$ on the Hubbard band structure. The vertical lines mark the eigenstates which are connected to the ground state via nonvanishing matrix elements of the hopping operator~$\op{H}_J$. The relative strength
of these matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$ is indicated by the gray level of the lines. Darker shadings correspond to larger matrix elements.
In the absence of a spatial modulation the typical Hubbard bands of the strongly correlated regime are visible, as depicted in figure \ref{plot_spect}(a). As discussed in section \ref{sec_reg_lattice}, the second Hubbard band ($2U_0$-band) is not connected to the ground state in first order. Figure~\ref{plot_spect}(b) shows the eigenenergy spectrum for a system with a small superlattice amplitude of $\Delta/J_0=5$. The gapped structure is still visible but the width of the individual bands is significantly increased and the gaps are reduced. The breaking of the spatial symmetry by the superlattice generates a larger number of matrix elements which connect the ground state to higher bands. At a superlattice amplitude $\Delta/J_0=10$, shown in figure \ref{plot_spect}(c), the gaps have completely vanished, but nevertheless, the largest matrix elements still couple the ground state to eigenstates in the $U_0$ and $3U_0$ bands.
At an amplitude $\Delta/J_0=20$, which is still within the homogeneous Mott-insulator phase, the separation of the sizable matrix elements which were initially associated with the $U_0$ and $3U_0$ bands dissolves as shown in figure \ref{plot_spect}(d). An interval of large matrix elements remains for excitation energies up to $1.5 U_0$, which resembles the original $U_0$-band. Beyond that, there is a number of weaker matrix elements covering excitation energies of up to $3.5 U_0$. With increasing superlattice amplitude, any semblance of the band structure disappears. As depicted in figure \ref{plot_spect}(e) for $\Delta/J_0=50$ one finds a continuous distribution of matrix elements with dominant matrix elements at low excitation energies. For very large superlattice amplitudes, e.g. for $\Delta \approx 2 U_0$ as depicted in figure \ref{plot_spect}(f), the higher-lying matrix elements are suppressed and strong matrix elements appear only at the lower end of the energy scale. This characteristic behaviour emerges also from the dynamical simulations.
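The matrix elements underlying figure \ref{plot_spect} can be reproduced with the same kind of small-scale diagonalisation. The sketch below is illustrative (an open chain with $N=I=5$, $\Delta=0$, full basis without truncation): it separates the hopping part $\op{H}_J$, diagonalises the full Hamiltonian, and evaluates $|\matrixe{\nu}{\op{H}_J}{0}|$ for every eigenstate; the strongest couplings land in the first Hubbard band near $U_0$, while the $2U_0$ band is only weakly connected in first order.

```python
import numpy as np
from itertools import combinations

def fock_basis(N, I):
    basis = []
    for bars in combinations(range(N + I - 1), I - 1):
        occ, prev = [], -1
        for b in bars:
            occ.append(b - prev - 1)
            prev = b
        occ.append(N + I - 2 - prev)
        basis.append(tuple(occ))
    return basis

def hopping(basis, J=1.0):
    """The hopping operator H_J alone, for an open chain."""
    idx = {s: k for k, s in enumerate(basis)}
    HJ = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        for i in range(len(s) - 1):
            if s[i] > 0:
                t = list(s); t[i] -= 1; t[i + 1] += 1
                HJ[idx[tuple(t)], k] -= J * np.sqrt(s[i] * (s[i + 1] + 1))
    return HJ + HJ.T

N = I = 5
U = 30.0
basis = fock_basis(N, I)
HJ = hopping(basis)
H = HJ + np.diag([0.5 * U * sum(n * (n - 1) for n in s) for s in basis])
w, v = np.linalg.eigh(H)
exc = w - w[0]                        # spectrum shifted so the ground state is zero
coup = np.abs(v.T @ (HJ @ v[:, 0]))   # |<nu|H_J|0>| for every eigenstate
nu = 1 + int(np.argmax(coup[1:]))     # strongest coupling among excited states
print(exc[nu] / U)                    # close to 1: the U_0 band
```

Repeating this with on-site energies added to the diagonal part reproduces the broadening and eventual dissolution of the bands discussed above.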
\subsection{Time evolution}
\begin{figure}[t]
\begin{minipage}{0.66\linewidth}
\includegraphics{fig8a.eps}\includegraphics{fig8b.eps}
\end{minipage}
\hspace{10mm}\begin{minipage}{0.33\linewidth}
\includegraphics{fig8c.eps}
\end{minipage}
\caption{Energy transfer during lattice modulation ($F=0.1$) for a superlattice system with $I=N=10$ and interaction strength~$U_0/J_0=30$ for two superlattice amplitudes~$\Delta<U_0$ in the homogeneous Mott-insulator phase. The line plots show the time-averaged energy transfer as a function of $\omega$ for the superlattices (A) \linemediumsolid\ and (B) \linemediumsolid[lgray]\ defined in figure \ref{plot_phase_diag}.
The (red) arrows mark the excitation energies predicted by linear response analysis. The density plots illustrate the energy transfer as a function of frequency and time for the superlattice (A).}
\label{plot_superlat1}
\end{figure}
\begin{figure}[t]
\includegraphics{fig9a.eps}\includegraphics{fig9b.eps}\includegraphics{fig9c.eps}\\[-8pt]
\begin{center}\includegraphics{fig9d.eps}\end{center}
\vspace{-12pt}
\caption{Energy transfer during lattice modulation ($F=0.1$) for a superlattice system with $I=N=10$ and interaction strength~$U_0/J_0=30$ for three superlattice amplitudes~$\Delta>U_0$ in the quasi Bose-glass phase. The line plots show the time-averaged energy transfer as a function of $\omega$ for the superlattices (A) \linemediumsolid\ and (B) \linemediumsolid[lgray]\ defined in figure \ref{plot_phase_diag}.
The (red) arrows mark the excitation energies predicted by linear response analysis. The density plots illustrate the energy transfer as a function of frequency and time for the superlattice (A).}
\label{plot_superlat2}
\end{figure}
We now turn to fully dynamical simulations of Bose gases in time-dependent superlattices. We focus on a system of $N=10$ bosons in the two-cell superlattices with $I=10$ sites as defined by (A) and (B) in figure \ref{plot_phase_diag}. All simulations are performed for a fixed interaction strength $U_0/J_0=30$ and various values for the superlattice amplitude $\Delta/J_0$. As initial state we use the ground state of the static Bose-Hubbard Hamiltonian for the same parameters. In analogy to the procedure for regular potentials in section \ref{sec_reg_lattice} the lattice potential is modulated in time with a frequency $\omega$ and a fixed relative amplitude $F=0.1$.
Figures \ref{plot_superlat1} and \ref{plot_superlat2} show the energy transfer to the system at several
superlattice amplitudes for a fixed interaction of $U_0/J_0=30$. All simulations are performed with each of the two on-site energy distributions depicted on the left of figure \ref{plot_phase_diag} in order to assess the impact of the detailed distribution of the on-site energies $\epsilon_i$. The density plots in the lower part of each panel show the energy transfer as a function of time and frequency for superlattice (A). The plots in the upper part represent the time-averaged energy transfer as a function of the frequency for both superlattice topologies. The (red) arrows above the individual peaks mark the excitation energies associated with the strongest matrix elements $|\matrixe{\nu}{\op{H}_J}{0}|$. The size of the arrows indicates the relative strength of the matrix elements.
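The energy-transfer curves can be mimicked qualitatively with a small exact simulation. The sketch below rests on several simplifying assumptions of mine: a smaller system ($N=I=4$, so the full basis is tractable), a regular lattice by default, and a modulation acting on the hopping alone, $J(t)=J_0[1+F\sin(\omega t)]$, whereas a modulated lattice depth in experiment also affects $U_0$ weakly. It propagates the static ground state with a piecewise-constant Hamiltonian and records the time-averaged transfer $\langle\psi(t)|\op{H}_0|\psi(t)\rangle-E_0$.

```python
import numpy as np
from itertools import combinations

def fock_basis(N, I):
    basis = []
    for bars in combinations(range(N + I - 1), I - 1):
        occ, prev = [], -1
        for b in bars:
            occ.append(b - prev - 1)
            prev = b
        occ.append(N + I - 2 - prev)
        basis.append(tuple(occ))
    return basis

def hopping(basis, J=1.0):
    idx = {s: k for k, s in enumerate(basis)}
    HJ = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        for i in range(len(s) - 1):
            if s[i] > 0:
                t = list(s); t[i] -= 1; t[i + 1] += 1
                HJ[idx[tuple(t)], k] -= J * np.sqrt(s[i] * (s[i + 1] + 1))
    return HJ + HJ.T

def mean_energy_transfer(omega, N=4, I=4, U=30.0, F=0.1, T=30.0, dt=0.02, eps=None):
    eps = np.zeros(I) if eps is None else np.asarray(eps)
    basis = fock_basis(N, I)
    HJ = hopping(basis)
    Hd = np.diag([0.5 * U * sum(n * (n - 1) for n in s) + float(np.dot(eps, s))
                  for s in basis])
    H0 = HJ + Hd
    w, v = np.linalg.eigh(H0)
    psi = v[:, 0].astype(complex)      # start from the static ground state
    transfer = []
    for step in range(int(T / dt)):
        # modulation assumed to act on the hopping only: J(t) = J0 (1 + F sin wt)
        Ht = (1.0 + F * np.sin(omega * step * dt)) * HJ + Hd
        wt, vt = np.linalg.eigh(Ht)    # exact propagator for one short step
        psi = vt @ (np.exp(-1j * wt * dt) * (vt.T @ psi))
        transfer.append(float(np.real(np.vdot(psi, H0 @ psi))) - w[0])
    return float(np.mean(transfer))

res = mean_energy_transfer(30.0)   # drive at omega ~ U_0: strong absorption
off = mean_energy_transfer(10.0)   # off resonance: almost none
print(res, off)
```

Passing a superlattice distribution via `eps` lets one explore, at small scale, the fragmentation and low-frequency response discussed below.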
Figure \ref{plot_superlat1}(a) depicts the $U_0$-resonance in the case of a regular lattice, i.e., for vanishing superlattice amplitude $\Delta/J_0$. As pointed out before, the excitation energies associated with the strongest matrix elements resulting from the linear response analysis nicely describe the position and fine-structure of the resonance. If we increase the superlattice amplitude to the value $\Delta/J_0=20$ --- still remaining in the homogeneous Mott-insulator phase --- the overall width of the resonance structure increases. The characteristic scale of the fine structure increases as well, and additional peaks emerge. This is in accord with the results of the linear response analysis of the previous section. An increase of the superlattice amplitude leads to a broadening of the Hubbard bands (cf. figure \ref{plot_spect} (b)) which corresponds to the broadening of the resonance. The larger number of peaks in the resonance corresponds to the larger number of matrix elements. The comparison of the two different superlattice topologies shows that the envelopes of the resonance are practically identical in both cases. Only the details of the fine structure depend on the particular set of on-site energies $\epsilon_i$ used.
After crossing the transition to the quasi Bose-glass phase at $\Delta=U_0$, the resonance structure changes dramatically as shown in figure \ref{plot_superlat2}. Already at $\Delta/J_0=35$, i.e., slightly above the transition at $\Delta/J_0=U_0/J_0=30$, the resonance is completely fragmented. There is no longer a smooth envelope with centroid at $\omega=U_0$. Instead, the strength is split into two groups of individual peaks: one group around the original resonance position and another group at low excitation energies. In particular, we observe low-lying resonances at $\omega/J_0<10$ --- a regime where no response was observed for a system in the homogeneous Mott-insulator phase. With increasing superlattice amplitude, the response is shifted towards lower excitation energies, as indicated by the sequence of plots in figure \ref{plot_superlat2}. At $\Delta/J_0=50$, for instance, no response is left at the original resonance position $\omega=U_0$ and all peaks are concentrated at low excitation energies.
This characteristic behaviour of the response appears to be a clear signature of the Mott-insulator to quasi Bose-glass transition for bosons in superlattices, which is directly accessible to experiments. The comparison of the two different superlattice topologies, both with a period of five sites, demonstrates that the fine-structure of the response depends on the details of the superlattice, but that the gross characteristics are not affected.
Finally, one should note that the linear response analysis already hints at these substantial changes in the resonance spectrum. For larger superlattice amplitudes, however, effects beyond the simple linear perturbation scheme become increasingly important and lead to significant discrepancies in comparison to the full time-dependent simulation (cf. figure \ref{plot_superlat2}(c)).
\section{Summary and conclusions}
We have studied the dynamics of Bose gases in one-dimensional optical lattices and superlattices with time-dependent lattice amplitudes. The time evolution based on the time-dependent Bose-Hubbard Hamiltonian is performed numerically for systems with up to 10 bosons and 10 lattice sites. In order to reduce the numerical effort we have used an \textit{a priori} basis truncation scheme \cite{ScHi06}, which reduces the dimension of the number basis to a tractable size. The control parameter of this truncation scheme is the cut-off energy $E_{\text{trunc}}$, which defines the maximum energy of the number states in the basis. We have shown that the truncation also allows reliable dynamical calculations.
As a first application, we have examined Bose gases in a regular lattice potential. In agreement with experiment \cite{StMo04} and other theoretical results \cite{KBM05,KoIu06,ClJa06}, we illustrated the characteristic resonance structure of the Bose gas in the strongly correlated Mott-insulator regime, which is washed out and broadened towards the superfluid phase.
Treating the time-dependence of the Bose-Hubbard Hamiltonian as a linear perturbation demonstrates that excitations are generated by the hopping operator in first order~\cite{IuCa06,KoIu06,ClJa06}. The analysis of matrix elements of the hopping operator between ground and excited states allows a prediction of the resonance position and a detailed explanation of the fine structure. It is shown that the individual peaks of a resonance correspond to large matrix elements $|\matrixe{0}{\op{H}_J}{\nu}|$, where $\ket{\nu}$ are the eigenstates of the initial Hamiltonian.
In the second part we have investigated dynamical signatures of quantum phase transitions in two-colour superlattice potentials. We have shown that for superlattice amplitudes $\Delta$ smaller than the interaction strength $U_0$, i.e., in the homogeneous Mott insulator phase, the resonances resemble those of regular lattices. With increasing superlattice amplitude, the resonance structures become broader and additional peaks appear in their fine-structure. This finding is consistent with recent experiments \cite{FaLy06} using time-dependent superlattices with incommensurate wavelengths of the superimposed standing waves. First calculations for incommensurate superlattices confirm that the gross characteristics of the resonance spectrum are independent of the precise lattice topology, only the fine-structure of the resonances is affected.
As soon as the superlattice amplitude exceeds the interaction strength, i.e., on entering the quasi Bose-glass phase, the resonance structure changes completely. The resonance at $\omega=U_0$ is fragmented and a strong low-energy component develops. Further increase of the superlattice amplitude eventually suppresses all strengths at the original resonance position, such that only the response at very low excitation energies remains. This characteristic behaviour might serve as an experimental indicator for the transition from homogeneous Mott-insulator to quasi Bose-glass phase.
In order to address the experimental scenario \cite{FaLy06} directly, we are going to investigate the influence of different lattice topologies, external trapping potentials, and filling factors in detail \cite{HiSc07}. Nevertheless, the results presented in this paper apply to the Mott domain with one atom per site even in the presence of a weak trapping potential. Another topic for future studies is the impact of non-zero temperatures on the response behaviour, which is beyond the present zero-temperature formalism.
\section*{Acknowledgements}
We thank K. Braun-Munzinger, J. Dunningham, and K. Burnett for fruitful discussions and for providing reference data during the development of the time-evolution code. RR thanks the DFG for support.
\section*{References}
\section{Introduction}
In Ashtekar's theory of gravity, an $SU(2)$-connection captures the extrinsic curvature of a space-like leaf
in a time-transversal foliation, and the intrinsic geometry of the leaf is given by a tetrad \cite{ashtekar}.
It is shown that the Einstein-Hilbert functional and Einstein equations can be written
in terms of $SU(2)$-connections and tetrads, thus
these variables together recast Einstein's theory of gravity.
By rewriting Einstein's gravity with the connection variables and tetrad variables,
one could attempt to quantise gravity via the Hamiltonian formalism, and obtain a theory of quantum gravity \cite{ashham}.
The reader may refer to Thiemann's introductory text \cite{LQG}.
This article offers an alternative view of the space of connections to those that appear elsewhere in the loop quantum gravity literature, and proposes a semi-classical limit using the strict $C^*$-algebraic deformation quantisation formalism \cite{rieffel}.
To take the connections as dynamical variables in a quantum theory, one studies wave functions, also known as probability amplitudes, on the space of connections. Unfortunately, the lack of a measure on such a space poses the first challenge, since without a measure probabilities cannot be defined.
One typical solution to obtain a measure on the connection space comes from spaces of progressively
refined cylindrical functions. In another description, one uses a finite set of curves to probe the space of
$G$-connections to obtain a finite-dimensional manifold that depends on the set of curves. By successively refining
the finite sets of curves, such as successive triangulation of the manifold, one obtains a pro-manifold that extends
the original space of connections to the so-called space of generalised connections \cite{baez}.
As a step towards quantising gravity in the Ashtekar framework, there have been recent developments describing such an extended space of connections using a spectral triple in noncommutative geometry \cite{AGN,lai}, which captures the geometry
of the space of generalised connections as operators on a Hilbert space.
While the geometries of the base manifold and the space of $G$-connections on it are in theory retained, the construction of the spectral triple is considered too discrete to practically allow one to recapture the geometry of the
base manifold and its $G$-connections.
To avoid the discrete description in terms of finite sets of embedded curves, this article proposes an alternative approach that smoothly probes the space of connections using the tangent groupoid. Applying strict deformation quantisation in the $C^*$-algebraic formalism then yields a possible semi-classical limit.
The purpose of this article is to present an alternative idea for the study of loop quantum gravity, with the goal of obtaining a semi-classical limit, which loop quantum gravity still lacks today.
This article contains six further sections, arranged as follows:
Section~\ref{sec2}
discusses the traditional way of putting a measure on the space of connections in the loop quantum gravity literature, and the
shortcomings of such a method.
In Section~\ref{sec3}, we review the definitions of a Lie groupoid, a Lie algebroid, and tangent groupoid of a Lie groupoid in an elementary way.
Section~\ref{sec4} shows how one uses the tangent groupoid as a tool
to model the space of connections in a smooth way, and discusses the corresponding gauge action.
Section~\ref{sec5} is a first attempt at deforming connections into noncommuting operators acting on a Hilbert space.
Section~\ref{sec6} introduces the notion of strict deformation quantisation in the $C^*$-algebraic formalism, and an important theorem by Landsman \cite{landsman}, which states that a tangent groupoid defines a strict deformation quantisation. Hence, using this formalism, we can deformation quantise $G$-connections.
Finally, Section~\ref{sec7} is an outlook that summarises the article and the current state of work toward obtaining a semi-classical limit in loop quantum gravity.
\section{A Measure on the Space of Connections}
\label{sec2}
We start by requiring $G$ to be a semi-simple, simply connected, compact Lie group, such as $SU(2)$, and let $\g$ be the corresponding Lie algebra of $G$. Let $\exp$ denote the exponential map from $\g$ to $G$.
The space of smooth $G$-connections $\mathcal{A}$ over a manifold $M$ is the space of $\g$-valued $1$-forms over $M$.
Given a smooth manifold $M$ and a smooth (compact) curve $\gamma$ in $M$,
one obtains a map $\operatorname{Hol}_\gamma$ from the space of smooth $G$-connections $\mathcal{A}$ to $G$
\[
\operatorname{Hol}_\gamma: \mathcal{A} \to G,
\]
given by taking the holonomy
\[
\operatorname{Hol}_\gamma(A):= \exp\int_\gamma A
\]
of each connection $A\in \mathcal{A}$ along the curve $\gamma$.
Here it is assumed that there is a fixed local trivialisation of the principal bundle, so that $\mathcal{A}$ is realised as a vector space.
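In practice, for non-abelian $G$ the expression $\exp\int_\gamma A$ stands for the path-ordered exponential, which can be approximated numerically by a product of exponentials over short segments of the curve. The sketch below is purely illustrative: the $\g$-valued connection $A$ and the loop $\gamma$ are made up, $G=SU(2)$, and the only property checked is that the resulting holonomy indeed lands in $SU(2)$.

```python
import numpy as np

# Pauli matrices; 0.5j * sigma spans su(2) (traceless, anti-Hermitian)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm2(X):
    """Matrix exponential of a normal 2x2 matrix via eigendecomposition."""
    w, v = np.linalg.eig(X)
    return v @ np.diag(np.exp(w)) @ np.linalg.inv(v)

def A(x):
    """A made-up smooth su(2)-valued 1-form A = A_1 dx^1 + A_2 dx^2 on R^2."""
    a1 = 0.5j * (np.sin(x[0]) * sx + x[1] * sz)
    a2 = 0.5j * np.cos(x[1]) * sy
    return a1, a2

def holonomy(gamma, steps=2000):
    """Path-ordered product of exp(A_mu dx^mu) along t -> gamma(t), t in [0, 1]."""
    U = np.eye(2, dtype=complex)
    ts = np.linspace(0.0, 1.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x0, x1 = gamma(t0), gamma(t1)
        dx = x1 - x0
        a1, a2 = A(0.5 * (x0 + x1))          # midpoint rule on each segment
        U = expm2(a1 * dx[0] + a2 * dx[1]) @ U
    return U

loop = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
U = holonomy(loop)
# U lies in SU(2): unitary with unit determinant (up to floating-point error)
```

Each factor is the exponential of a traceless anti-Hermitian matrix, so every partial product stays in $SU(2)$; refining `steps` converges to the holonomy of the smooth curve.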
Suppose that there is a smoothly embedded finite graph $\Gamma$ in $M$ (the edges are smooth and finite in number); one can repeat the holonomy evaluation to obtain a map from $\mathcal{A}$ to a product of copies of $G$, one for each edge.
Thus, one obtains
\begin{eqnarray}
\label{AGN:eqn:coarsegrainedhol}
\operatorname{Hol}_\Gamma:\mathcal{A} \to G^{\lvert \Gamma \rvert},
\end{eqnarray}
where $\lvert \Gamma \rvert$ denotes the number of edges of the graph $\Gamma$.
\begin{proposition}[\cite{baezsawin,AGN}]
\label{AGN:prop:denserange}
For any finite
graph $\Gamma$,
the map
$\operatorname{Hol}_\Gamma$ \eqref{AGN:eqn:coarsegrainedhol} is a surjection.
\end{proposition}
While the space of smooth $G$-connections $\mathcal{A}$ lacks structures, such as a measure, it
surjects to $G^{\lvert \Gamma \rvert}$, a compact measure space.
This is done by forgetting the values of the connections outside the edges of the graph; hence, a lot of information is lost. However, one can imagine a collection of finite graphs, each finer than the previous one, such that the collection of graphs is dense in the manifold $M$ in a certain sense. Then every small neighbourhood in the manifold contains an edge from the collection of graphs with which to probe the holonomies of the connections.
We do not define the notion of a {\bf directed system} of finite graphs here, except to say that it is defined naturally by the associated groupoid of a directed finite graph \cite{lai}.
From it, one obtains a directed system of compact measure spaces $G^{\lvert \Gamma_j \rvert }$.
To put the above description in a mathematical context, we state
\begin{theorem}[\cite{AGN,lai}]
Denote by $\overline{\mathcal{A}}^\mathbf{\Gamma}$ the projective limit $
\varprojlim G^{\lvert \Gamma_j \rvert }$.
Suppose that there is a system of smoothly embedded finite graphs $\mathbf{\Gamma}:=\{\Gamma_j\}_j$, such that the set of vertices is dense in $M$ and every neighbourhood of a point $x\in M$ contains
edges that span the vector space $T_x M$.
Then there exists an embedding
\[
\operatorname{Hol}_\mathbf{\Gamma}: \mathcal{A} \hookrightarrow \overline{\mathcal{A}}^\mathbf{\Gamma} .
\]
\end{theorem}
From the property of the product (Tychonoff) topology, one has that
\begin{proposition}[\cite{baezsawin,AGN}]
\label{AGN:prop:denseimage}
Let $\mathbf{\Gamma}=\{\Gamma_j\}_j$ be a directed system of finite graphs
in $M$. Then
the image of
$\mathcal{A}$ under $\operatorname{Hol}_\mathbf{\Gamma}$ is dense in $\overline{\mathcal{A}}^\mathbf{\Gamma}$.
That is,
\[
\overline{
\operatorname{Hol}_\mathbf{\Gamma}(\mathcal{A})}=\overline{\mathcal{A}}^\mathbf{\Gamma}.
\]
\end{proposition}
This compactification procedure depends on the system of graphs used in probing the connection space.
We give some examples of graph systems that provide good compactifications.
\begin{example}
\mbox{}
\begin{enumerate}
\item
\label{AGN:ex:triangulation}
Let $\mathcal{T}_1$ be a triangulation of $M$ and $\Gamma_1$ be the graph
consisting of all the edges in this triangulation with
any orientation.
Let $\mathcal{T}_{n+1}$ denote the triangulation obtained by
barycentric subdivision of each of the simplices in
$\mathcal{T}_1$ $n$ times. The graph $\Gamma_{n+1}$ is
the graph consisting of the edges of $\mathcal{T}_{n+1}$ with a consistent
orientation.
In this way
$\mathcal{S}_\triangle :=\{\Gamma_n \}_{n\in \mathbb{N}}$
is a directed system of finite graphs, and $\mathcal{A}$ densely embeds into $\overline{\mathcal{A}}^{\mathcal{S}_\triangle}$.
\item
\label{AGN:ex:dlattice}
Let $\Gamma_1$ be a finite,
$d$-dimensional lattice in $M$ and let $\Gamma_2$ denote the
lattice obtained by subdividing each cell in $\Gamma_1$ into $2^d$ cells.
Correspondingly, let $\Gamma_{n+1}$ denote the lattice obtained by repeating $n$ such subdivisions
of $\Gamma_1$. In this way
$\mathcal{S}_\square :=\{\Gamma_n\}_{n\in \mathbb{N}}$
is a directed system of finite graphs, and $\mathcal{A}$ densely embeds into $\overline{\mathcal{A}}^{\mathcal{S}_\square}$.
\end{enumerate}
\end{example}
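The rapid growth of the probing spaces $G^{\lvert \Gamma_n \rvert}$ under refinement is easy to quantify for the lattice example. The following sketch assumes cubical grids with open boundary and an arbitrary starting size; only the combinatorics is checked.

```python
def lattice_edges(c, d):
    """Number of edges of a d-dimensional cubical grid with c cells per side."""
    return d * c * (c + 1) ** (d - 1)

def refined_edges(c0, d, n):
    """Edge count after n subdivisions; each one splits every cell into 2^d cells,
    i.e. doubles the number of cells per side."""
    return lattice_edges(c0 * 2 ** n, d)

# d = 3, starting from a single cell: every refinement replaces the probe
# G^{|Gamma_n|} by a vastly larger G^{|Gamma_{n+1}|}.
print([refined_edges(1, 3, n) for n in range(4)])  # [12, 54, 300, 1944]
```

Each refinement multiplies the dimension of the probing manifold $G^{\lvert \Gamma_n \rvert}$ accordingly, which is why the projective limit $\overline{\mathcal{A}}^\mathbf{\Gamma}$ is infinite dimensional.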
Moreover, $\overline{\mathcal{A}}^\mathbf{\Gamma}$ is a compact measure space. Therefore, this procedure of
surjecting $\mathcal{A}$ onto a coarse approximation $G ^{\lvert \Gamma_j \rvert }$ and then taking the limit of the approximations provides an extension of the space $\mathcal{A}$ to a compact measure space $\overline{\mathcal{A}}^\mathbf{\Gamma}$.
The space $\overline{\mathcal{A}}^\mathbf{\Gamma}$ is called the space of generalised connections.
Now it is possible to consider probability amplitudes on the space of generalised connections, and proceed to canonical quantisation.
The drawback of this compactification is that the original space of connections $\mathcal{A}$ is forever lost in $\overline{\mathcal{A}}^\mathbf{\Gamma}$; consequently, obtaining a semi-classical limit from quantisation on $\overline{\mathcal{A}}^\mathbf{\Gamma}$ is impossible.
The source of the problem comes from probing the space $\mathcal{A}$ with finite graphs, which are very rigid objects that cannot be perturbed.
One can understand or abstract this process of compactification of $\mathcal{A}$ as probing $\mathcal{A}$ with a groupoid, where in the case of a graph $\Gamma$, the groupoid is the associated fundamental groupoid. It is a finitely generated groupoid, which is very discrete and cannot be perturbed.
However, one can replace this discrete groupoid with a smooth groupoid, say a Lie groupoid. This article proposes the use of the tangent groupoid.
\section{Tangent Groupoid}
\label{sec3}
\begin{definition}
\label{groupoid}
A {\bf groupoid} $\mathcal{G}$ is a (small) category in which every morphism is invertible.
That is, a set of morphisms $\mathcal{G}$ together with a set of objects $X$ such that
\begin{itemize}
\item
there exist surjective structure maps, called the source and range maps
\[
\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}X ,\]
\item
there exists an injection, called the identity inclusion,
\[
i:X \hookrightarrow \mathcal{G},
\]
\item
there exists a partially-defined associative composition
\[
\mathcal{G}\times \mathcal{G} \to \mathcal{G}
\]
with identities $i(X)$, and
\item
there exists an inversion map
\[
\left( \mbox{ } \right) ^{-1}: \mathcal{G} \to \mathcal{G}
\] with the usual properties.
\end{itemize}
\end{definition}
\begin{definition}
\label{liegroupoid}
A {\bf Lie groupoid} is a groupoid
$\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}X$ with smooth manifold structures on $\mathcal{G}$ and $X$ such that $\operatorname{s}, \operatorname{r}$ are
submersions, and the inclusion of $X$ in $\mathcal{G}$ as the identity morphisms and the composition $\mathcal{G} \times \mathcal{G} \to \mathcal{G}$ are smooth.
\end{definition}
\begin{example}[\cite{tangent}]
\label{liegroupoidex}\mbox{ }
\begin{enumerate}
\item Any Lie group $G$ is a groupoid $G \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}} \{e\}$ over the identity $e$.
\item
The tangent bundle $TM$ of a manifold $M$ forms a Lie groupoid $TM \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$ with the \emph{source} and \emph{range} maps
$\operatorname{s}, \operatorname{r}: TM\to M$ given by
$\operatorname{s}(x,V_x) = x = \operatorname{r}(x,V_x)$ for $(x,V_x) \in TM$, the inclusion $M \hookrightarrow TM$ given by
the zero section, and composition $TM\times TM \to TM$ given by $(x,V_x )\times (x,W_x) \mapsto
(x,V_x + W_x)$.
\item
The product $M\times M$ forms a Lie groupoid $M \times M
\rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}} M $ with the \emph{source} and \emph{range} maps
$\operatorname{s}, \operatorname{r}: M \times M \to M$ given by
$\operatorname{s}(x,y ) = x $, $ \operatorname{r}(x,y)=y$, the inclusion $M \hookrightarrow M\times M$ is given by the
diagonal embedding, and the composition
$(M\times M) \times (M\times M) \to M \times M$ is given by
$(x,y) \times (y,z) \mapsto (x,z)$.
\item
Given two Lie groupoids
$\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}X$ and $\mathcal{G}' \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}'}}
_{\hspace{-0.25cm}_{\operatorname{s}'}}X'$, their direct product
$\mathcal{G} \times \mathcal{G}' \rightrightarrows ^{\hspace{-0.40cm}^{\operatorname{r} \times \operatorname{r}'}}
_{\hspace{-0.40cm}_{\operatorname{s} \times \operatorname{s} '}}X\times X'$
is also a Lie groupoid.
\end{enumerate}
\end{example}
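The pair groupoid is simple enough to spell out in code. The following sketch checks the axioms of Definition~\ref{groupoid} for $M \times M$ over a finite stand-in for $M$ (the finite set is an illustrative substitute for the manifold):

```python
from itertools import product

M = range(4)  # a finite stand-in for the manifold M

# Pair groupoid M x M: s(x, y) = x, r(x, y) = y, identities (x, x),
# composition (x, y).(y, z) = (x, z), inversion (x, y) -> (y, x).
def s(g): return g[0]
def r(g): return g[1]
def identity(x): return (x, x)
def inverse(g): return (g[1], g[0])

def compose(g, h):
    assert r(g) == s(h), "only composable arrows may be multiplied"
    return (s(g), r(h))

# associativity on all composable triples ...
for x, y, z, w in product(M, repeat=4):
    g, h, k = (x, y), (y, z), (z, w)
    assert compose(compose(g, h), k) == compose(g, compose(h, k))
# ... and the identity and inverse laws on all arrows
for x, y in product(M, repeat=2):
    g = (x, y)
    assert compose(identity(s(g)), g) == g
    assert compose(g, identity(r(g))) == g
    assert compose(g, inverse(g)) == identity(s(g))
print("pair groupoid axioms hold")
```

The partial composition is the essential point: `compose` is defined only when the range of the first arrow matches the source of the second, exactly as in the definition above.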
\begin{definition}
A \textbf{Lie algebroid} on a manifold $M$ is a vector bundle $E$ over $M$, which is equipped with a vector bundle map $\rho : E \to TM$ (called the anchor), as well as with a Lie bracket $[ \mbox{ },\mbox{ } ]_E$ on the space
$C^\infty(M,E)$ of smooth sections of $E$, satisfying
\[
\rho \circ [X,Y]_E =[\rho \circ X,\rho \circ Y],
\]
where the right-hand side is the usual commutator of vector fields on $C^\infty(M,TM)$,
and
\[ [X,fY]_E =f[X,Y]_E +((\rho \circ X)f)Y\]
for all $X,Y \in C^\infty(M,E)$ and $f \in C^\infty(M)$.
\end{definition}
\begin{remark}
A Lie algebroid is also a Lie groupoid with groupoid product given by fibre-wise addition.
\end{remark}
\begin{example}\mbox{ }
\label{algebroidex}
\begin{enumerate}
\item
A Lie algebra $\g$ with its Lie bracket is a Lie algebroid over a point.
\item
The tangent bundle $TM$ of a manifold $M$ defines a Lie algebroid under the Lie bracket of vector fields, and the anchor map $\rho : TM \to TM$ is the identity.
\item \label{algebroidex3}
For a Lie groupoid $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$, let
$A(\mathcal{G})$ be the normal vector bundle defined by the embedding $M \hookrightarrow \mathcal{G}$, with
bundle projection given by $\operatorname{s}$. Identifying the normal bundle with $\operatorname{ker} d r \rvert _M $, the anchor map is given by
$\rho:=ds: \operatorname{ker} dr\to TM$.
Finally, by identifying $\operatorname{ker} d r$ with
$C^\infty(\mathcal{G}, T \mathcal{G})^L$, equip $A(\mathcal{G})$ with the Lie bracket coming from $C^\infty(\mathcal{G}, T \mathcal{G})$.
$A(\mathcal{G})$ is a Lie algebroid.
\item
Combining the first two examples, $TM \times \g$ is the Lie algebroid of the Lie groupoid $M\times M \times G$. The anchor map $\rho$ consists of the bundle projection on $TM$ and zero on $\g$.
\end{enumerate}
\end{example}
In Example~\ref{algebroidex}\eqref{algebroidex3} above, $A(\mathcal{G})$ is called the associated Lie algebroid of the Lie groupoid $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$.
It is in itself a groupoid over $M$, similar to $TM$.
\begin{theorem}[\cite{mackenzie}]
Given the associated Lie algebroid $A(\mathcal{G})$ of the Lie groupoid $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$, there exists a unique local diffeomorphism
$\operatorname{Exp}:
A(\mathcal{G}) \to \mathcal{G}$.
\end{theorem}
\begin{definition}
Let $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$ be a Lie groupoid with Lie algebroid $A(\mathcal{G})$.
The {\bf tangent groupoid} $\mathcal{T} \mathcal{G}$ of $\mathcal{G} $ is the Lie groupoid $\mathcal{T}\mathcal{G}$ over the base
$M\times [0,1]$, such that
\begin{itemize}
\item As a set, $\mathcal{T} \mathcal{G}:= A(\mathcal{G}) \times \{0\} \bigsqcup \mathcal{G} \times (0,1] $;
\item
And $A(\mathcal{G})$ and $\mathcal{G}$ are glued together by the local diffeomorphism
$\operatorname{Exp}: A(\mathcal{G}) \to \mathcal{G}$.
\end{itemize}
\end{definition}
We do not elaborate on the definition of $\operatorname{Exp}: A(\mathcal{G}) \to \mathcal{G}$ here, but rather illustrate it with an example below.
\begin{example}[\cite{ncg}]\mbox{}
\label{tangentgroupoid}
\begin{enumerate}
\item
The tangent groupoid $\mathcal{T} G$ of a Lie group $G$ is just $\g \times \{0\} \bigsqcup G \times (0,1]$ glued
together with the exponential map.
\item
The tangent groupoid $\mathcal{T} M$ of a manifold $M$ is
$\mathcal{T} M= TM \times \{ 0 \} \bigsqcup M \times M \times
(0,1]$ as a set.
The groupoids
$TM $ and $ M \times M $ are glued together so that a point
$
p(0) =
(x,V_x,0) \in TM \times \{0\}$
extends to $p(\hbar)= \left(x,\exp \hbar V_x, \hbar\right)\in M \times M \times (0,1]$ for $\hbar > 0$.
\end{enumerate}
\end{example}
We think of an element $(x,y,\hbar)$ of $M\times M \times (0,1]$ as
a geodesic starting at $x$ and ending at $y$, where $\hbar$ is the time it takes to travel from $x$ to $y$
at a given velocity. Hence, it is considered a one-dimensional object.
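For a flat manifold the gluing in the example above can be made completely explicit: with $M=\mathbb{R}$ one has $\exp \hbar V_x = x + \hbar V_x$, and the pair of points at parameter $\hbar$ recovers the tangent vector as $\hbar \to 0$. The following minimal numeric sketch is a toy model of our own, not taken from the text:

```python
# Toy model of the tangent groupoid of M = R: an element is either
# (x, V, 0) in TM x {0} or (x, y, hbar) in M x M x (0,1].
# In flat space exp(hbar*V) based at x is x + hbar*V, so the path
# p(hbar) = (x, x + hbar*V, hbar) glues to p(0) = (x, V, 0).

def p(x, V, hbar):
    """Point of the tangent groupoid at parameter hbar (flat M = R)."""
    if hbar == 0.0:
        return (x, V, 0.0)          # tangent vector at hbar = 0
    return (x, x + hbar * V, hbar)  # pair of points for hbar > 0

x, V = 1.3, 0.7
for hbar in [0.1, 0.01, 0.001]:
    _, y, _ = p(x, V, hbar)
    # the difference quotient (y - x)/hbar recovers the velocity V
    print(hbar, (y - x) / hbar)
```

The difference quotient converging to $V$ is exactly the sense in which the pair groupoid $M\times M$ degenerates to $TM$ at $\hbar=0$.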
\section{Connections as Functions on Tangent Groupoid}
\label{sec4}
In this section, we propose using a smooth groupoid to probe the connection space; the holonomies will be encoded by smooth $G$-valued functions over the groupoid.
Denote by $\mathcal{A}_\hbar$ the space of smooth functions from $M \times M$ to $G$ and
$\mathcal{A}_0$ the space of smooth functions from $TM$ to $\g$ such that $A_0(x,\cdot):
T_x M \to \g$ is linear for $A_0 \in \mathcal{A}_0$ and each $x\in M $.\\
Here we think of an element $A_\hbar$ of $\mathcal{A}_\hbar$ as a holonomy presentation of a connection
for paths described by $M\times M \times (0,1]$.
\begin{definition}
\label{qconnection}
Define the space of {\bf q-connections} $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ to be
$\mathcal{A}_0 \times \{0\} \bigsqcup \mathcal{A}_\hbar \times (0,1]$ as a set.
And $\mathcal{A}_0$ and $\mathcal{A}_\hbar$ are glued together as
\[A_\hbar (x, \exp \hbar V_x ) = \exp (\hbar A_0(x,V_x))\] for all $\hbar \in [0,1]$.
\end{definition}
The definition is inspired by the following intuition.
The two points $x$ and $\exp \hbar V_x$ are connected by the geodesic $\gamma:t\to \exp t V_x$ for $t\in [0,\hbar]$.
And the holonomy along $\gamma$ is some group element. As $\hbar$ approaches zero and
the end point of $\gamma$ shrinks to its starting point $x$, the holonomy contribution gets closer to the identity element of the group. Therefore, the infinitesimal of the geodesic $\gamma$ at $x$ gives the infinitesimal change in the group; so to a point $(x,V_x)$ in $TM$, one associates an element in $\g$.
One observes the following remarks.
\begin{remark}
The gluing condition of $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ implies that \[A_\hbar (x,y) \cdot A_\hbar (y,x) \xrightarrow{\hbar \to 0} I_G\] and \[A_\hbar(x,x) \xrightarrow{\hbar \to 0} I_G,\] where $I_G$ is the identity of $G$.
\end{remark}
\begin{remark}
Each $A_0\in \mathcal{A}_0$ gives rise to a $\g$-valued 1-form in a unique way. Hence,
$\mathcal{A}_0$ is naturally identified with the space of $G$-connections $\mathcal{A}$.
As a result, there is an embedding \[\mathcal{A} \hookrightarrow \operatorname{Func}(\mathcal{T} M, \mathcal{T} G).\]
\end{remark}
$\mathcal{A}_\hbar$ forms a group under point-wise multiplication of $G$, and
$\mathcal{A}_0$ forms a group under point-wise addition of $\g$.
\begin{proposition}
The product that $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ inherits from $\mathcal{A}_0$ and $\mathcal{A}_\hbar$ is smooth.
\end{proposition}
\begin{proof}
The proof follows from
\[\left. \frac{d}{d\hbar}(A_\hbar \cdot A'_\hbar) ( x, \exp \hbar V_x) \right\rvert_{\hbar=0} = (A_0+A'_0 )(x,V_x) .\]
\end{proof}
The q-connection space $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ is a package that
captures information about probing the $G$-connection space with the tangent groupoid.
That is, an element $A_\hbar$ of $\mathcal{A}_\hbar$ evaluated at $(x,y)\in M \times M$
gives the holonomy of a connection along the geodesic from $x$ to $y$.
This holonomy presentation carries a natural gauge action, given by
applying a symmetry at the starting point $x$, then evaluating the holonomy along the geodesic
from $x$ to $y$, and finally applying the inverse symmetry at $y$.
We make this formal with the following definition.
Denote by $C^\infty(M,G)$ the set of smooth functions from $M$ to $G$.
Define the {\bf gauge action} of $C^\infty(M,G)$ on $\mathcal{A}_\hbar$ by
\begin{equation}
\label{eqn:qgaugeact}
( g\cdot A_\hbar)(x,y) := g(x) A_\hbar(x,y) g^{-1}(y)
\end{equation}
for $g\in C^\infty(M,G)$ and $A_\hbar \in \mathcal{A}_\hbar$.
And define the gauge action of $C^\infty(M,G)$ on $\mathcal{A}_0$ by
\begin{equation}
\label{eqn:cgaugeact}
(g\cdot A_0)(x,V_x):= g(x) A_0(x,V_x) g^{-1}(x) + g(x) \langle dg^{-1}(x),V_x\rangle
\end{equation}
for $g\in C^\infty(M,G)$ and $A_0 \in \mathcal{A}_0$.
\begin{remark}
Equation~\eqref{eqn:cgaugeact} is the usual gauge action on $G$-connections.
\end{remark}
The following proposition shows that the gauge actions defined on $\mathcal{A}_\hbar $ and $\mathcal{A}_0$ are compatible.
\begin{proposition}
The
$C^\infty(M,G)$ action on
the q-connection space $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ induced from Equations~\eqref{eqn:qgaugeact},\eqref{eqn:cgaugeact} is smooth.
\end{proposition}
\begin{proof}
Suppose that $A_\hbar (x,\exp \hbar V_x) = \exp \hbar A_0 (x,V_x)$.
Then
\begin{eqnarray*}
\left. \frac{d}{d\hbar} (g\cdot A_\hbar)(x,\exp\hbar V_x) \right\rvert _{\hbar=0}
&=&
\left. \frac{d}{d\hbar}
\left( g(x) A_\hbar (x,\exp \hbar V_x) g^{-1} (\exp \hbar V_x)\right) \right\rvert _{\hbar=0}\\
&=&
\left. g(x)\left( \frac{d }{d\hbar} A_\hbar (x,\exp \hbar V_x) \right)g^{-1} (\exp \hbar V_x) \right\rvert _{\hbar=0}
\\&&+
\left. g(x) A_\hbar(x, \exp \hbar V_x )\left( \frac{d }{d\hbar}g^{-1}(\exp \hbar V_x) \right)\right\rvert_{\hbar=0}\\
&=&
g(x) A_0 (x,V_x)g^{-1}(x) + g(x)\langle d g^{-1} (x), V_x\rangle\\
&=& (g\cdot A_0 )(x, V_x) .
\end{eqnarray*}
The proof is complete.
\end{proof}
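For the abelian case $G=U(1)$ on $M=\mathbb{R}$, the computation in the proof can also be checked numerically by finite differences. The sketch below is a toy model: the functions $\theta$, $a$, and the holonomy presentation $A_\hbar(x,y)=\exp(i\,a(x)(y-x))$ are our own illustrative choices, consistent with the gluing condition of Definition~\ref{qconnection}.

```python
import cmath

# Abelian toy model: G = U(1), M = R. All functions below are chosen
# purely for illustration (they are not taken from the text).
theta = lambda x: 0.5 * x * x          # g(x) = exp(i*theta(x))
dtheta = lambda x: x                   # theta'(x)
a = lambda x: cmath.cos(x).real        # A_0(x, V) = i*a(x)*V

def g(x):
    return cmath.exp(1j * theta(x))

def A_h(x, y):
    # Holonomy presentation consistent with the gluing condition:
    # A_h(x, x + h*V) = exp(h * A_0(x, V)) along the geodesic from x to y.
    return cmath.exp(1j * a(x) * (y - x))

def gauge_A_h(x, y):
    # gauge action on A_h: g(x) A_h(x,y) g(y)^{-1}
    return g(x) * A_h(x, y) * g(y) ** -1

def gauge_A_0(x, V):
    # gauge action on A_0; in the abelian case the conjugation is
    # trivial and g(x)<dg^{-1}(x), V> = -i*theta'(x)*V.
    return 1j * a(x) * V - 1j * dtheta(x) * V

x, V, h = 0.4, 1.3, 1e-6
finite_diff = (gauge_A_h(x, x + h * V) - 1.0) / h
print(finite_diff, gauge_A_0(x, V))   # should agree up to O(h)
```

The one-sided difference quotient of $(g\cdot A_\hbar)(x,\exp\hbar V_x)$ at $\hbar=0$ reproduces $(g\cdot A_0)(x,V_x)$, as the proof asserts.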
Let $\operatorname{Diff}(M)$ denote the diffeomorphism group of $M$.
$\mathcal{T} M=TM \times \{ 0 \} \bigsqcup M \times M \times
(0,1]$ carries a smooth $\operatorname{Diff}(M)$ action given by
\[
\begin{array}{rl}
(x,y)\mapsto \left(\sigma(x),\sigma(y) \right) & \mbox{ for } (x,y) \in M \times M
\mbox{ and } \sigma \in \operatorname{Diff}(M) ,\\
(x,V_x)\mapsto (\sigma(x), d\sigma_{x} V_x ) & \mbox{ for } (x,V_x) \in TM
\mbox{ and } \sigma \in \operatorname{Diff}(M) .
\end{array}
\]
Subsequently, $\operatorname{Diff}(M)$ acts on the q-connection space $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ smoothly via the induced action
\[
(\sigma \cdot A_\hbar ) (x,y) = A_\hbar \left(\sigma(x), \sigma(y) \right)
\mbox{ and }
(\sigma \cdot A_0 )(x, V_x) = A_0 \left(\sigma(x), d\sigma_{x} V_x\right),
\]
for $\sigma \in \operatorname{Diff}(M)$.
\section{Connections as Operators on Hilbert Space}
\label{sec5}
Suppose that $G$ is unitarily represented on some finite dimensional vector space, say without loss
of generality $\mathbb{C}^N$. Then $G$
is included in the matrix algebra $M_N(\mathbb{C})$ as unitary matrices.
Let us fix an
orientation on $M$, hence a volume form.
Then one obtains the Hilbert space
$L^2(M, \mathbb{C}^N)$ that $\mathcal{A}_\hbar$ acts on by convolution
\[
(A_\hbar * \varphi)(y)= \int _M A_\hbar (x,y) \cdot \varphi (x) dx ,
\]
where $A_\hbar \in \mathcal{A}_\hbar$, $\varphi \in L^2(M,\mathbb{C}^N)$, and the dot $\cdot$ denotes the
unitary representation of $G$.
Denote by ${\mathcal{A}_\hbar^\#}$ the space of $M_N(\mathbb{C})$-valued smooth functions on $M \times M$, thus
${\mathcal{A}_\hbar^\#} \supset \mathcal{A}_\hbar$ and it acts on $L^2(M,\mathbb{C}^N)$.
Elements of ${\mathcal{A}_\hbar^\#}$ will again be denoted by $A_\hbar$.
${\mathcal{A}_\hbar^\#}$ comes equipped with an involution given by the point-wise conjugate transpose of $M_N(\mathbb{C})$.
The action of ${\mathcal{A}_\hbar^\#}$ on $L^2(M,\mathbb{C}^N)$ gives rise to
a noncommutative product on
${\mathcal{A}_\hbar^\#} $
given by the convolution
\[
(A_\hbar * A'_\hbar)(x, z)= \int _M A_\hbar ( x,y) \cdot A'_\hbar (y,z) dy ,
\]
for $A_\hbar, A'_\hbar \in {\mathcal{A}_\hbar^\#}$.
The space ${\mathcal{A}_\hbar^\#}$ forms a $*$-algebra,
and it is identified with an ideal of trace-class operators
on the Hilbert space $L^2(M, \mathbb{C}^N)$.
Let $\operatorname{Tr}$ denote the operator trace on $L^2(M,\mathbb{C}^N)$,
and $\operatorname{Tr}:{\mathcal{A}_\hbar^\#} \to \mathbb{C}$ is explicitly given by
\[
\operatorname{Tr} (A_\hbar) = \int _M \operatorname{tr} A_\hbar (x,x) dx ,
\]
where $\operatorname{tr}$ is the matrix trace of $M_N(\mathbb{C})$.
The linear functional $\operatorname{Tr}: {\mathcal{A}_\hbar^\#} \to \mathbb{C}$ is invariant under the gauge action of $C^\infty(M,G)$,
as
\begin{eqnarray*}
\operatorname{Tr} ( g \cdot A_\hbar) &=& \int _M \operatorname{tr} \left( g(x) A_\hbar(x,x) g^{-1}(x) \right) dx \\
&=& \int _M \operatorname{tr} \left( A_\hbar(x,x) \right) dx \\
&=& \operatorname{Tr}(A_\hbar) .
\end{eqnarray*}
Such a property is called {\bf gauge invariance}.
The group element $A_\hbar(x,x)$ represents the holonomy of a connection around a loop with
base point $x$. The gauge invariance of the functional $\operatorname{Tr}:{\mathcal{A}_\hbar^\#} \to \mathbb{C}$
parallels the fact that loop variables are gauge invariant in loop quantum gravity.
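The gauge invariance of $\operatorname{Tr}$ can be illustrated in a finite toy model, replacing $M$ by a finite set of points and $A_\hbar$ by a matrix-valued kernel. The construction below is our own illustration (with diagonal unitaries standing in for the gauge transformations); it verifies that the discrete trace is unchanged:

```python
import cmath, random

# Finite toy model of the gauge invariance of Tr (not from the text):
# replace M by a finite set of points, A_h by an N x N matrix-valued
# kernel A[(x, y)], and g by a point-wise diagonal unitary g[x].
random.seed(0)
POINTS, N = 5, 2

def mat():  # random N x N complex matrix
    return [[complex(random.random(), random.random()) for _ in range(N)]
            for _ in range(N)]

def unitary():  # diagonal unitary: entries are phases
    return [[cmath.exp(2j * cmath.pi * random.random()) if i == j else 0.0
             for j in range(N)] for i in range(N)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def inv_diag(U):  # inverse of a diagonal unitary
    return [[1.0 / U[i][i] if i == j else 0.0 for j in range(N)]
            for i in range(N)]

A = {(x, y): mat() for x in range(POINTS) for y in range(POINTS)}
g = {x: unitary() for x in range(POINTS)}

def Tr(kernel):
    # discrete analogue of Tr(A_h) = integral of tr A_h(x, x) dx
    return sum(kernel[(x, x)][i][i] for x in range(POINTS) for i in range(N))

# gauge-transformed kernel: (g . A)(x, y) = g(x) A(x, y) g(y)^{-1}
gA = {(x, y): mul(mul(g[x], A[(x, y)]), inv_diag(g[y]))
      for x in range(POINTS) for y in range(POINTS)}
print(abs(Tr(gA) - Tr(A)))   # ~ 0 up to rounding
```

Only the diagonal entries $A[(x,x)]$ enter the trace, and conjugation by $g[x]$ there leaves the matrix trace fixed, mirroring the calculation above.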
The package of
${\mathcal{A}_\hbar^\#}$ acting on $L^2(M,\mathbb{C}^N)$ with gauge group $C^\infty(M,G)$ resembles the noncommutative standard model, where the gauge group is given by the unitary part of the algebra, the Hilbert space is unchanged, and the resolvent of the Dirac operator gives rise to an element of ${\mathcal{A}_\hbar^\#}$.
\section{Strict Deformation Quantisation of $G$-Connections}
\label{sec6}
\begin{definition}
A {\bf system of Haar measures} for a groupoid $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$ is a family of measures $(\lambda^x)_{x\in M}$,
where each $\lambda^x$ is a positive, regular, Borel measure on $\mathcal{G}^x = s^{-1}(x)$.
\end{definition}
\begin{theorem}[\cite{landsman}]
For every Lie groupoid $\mathcal{G} \rightrightarrows ^{\hspace{-0.25cm}^{\operatorname{r}}}
_{\hspace{-0.25cm}_{\operatorname{s}}}M$, there exists
a smooth system of Haar measures $(\lambda^x)_{x\in M}$, so that the convolution product
with respect to the Haar system,
\[
( \varphi \ast \psi )(\gamma) := \int_{\mathcal{G}^{s(\gamma)}} \varphi(\gamma \gamma_1^{-1}) \psi(\gamma_1) d \lambda^{s(\gamma)} (\gamma_1)
\mbox{ for }
\varphi, \psi \in C(\mathcal{G}),
\]
together with the star structure,
\[
\varphi^*(\gamma):=\overline{\varphi(\gamma^{-1})},
\]
give rise to a $C^*$-algebra structure on the space of continuous functions
$C^*(\mathcal{G})$ on $\mathcal{G}$.
\end{theorem}
Here we are not being specific about the $C^*$-norm and its closure. For our case of a Lie groupoid, the different closures coincide.
\begin{example}[\cite{landsman}]\mbox{}
\begin{enumerate}
\item
$C^*(G)$ of a Lie group $G$ is the group $C^*$-algebra.
\item
Given a smooth system of Haar measures $(\lambda^x)_{x\in M}$, where each $\lambda ^x$ is a Haar measure on $T_x M$, and two smooth functions $\varphi,\psi$ on $TM$. The product
\[
(\varphi\ast \psi ) (V_x):= \int _{W_x \in T_x M} \varphi(V_x - W_x) \psi(W_x) d\lambda ^x (W_x)
\]
defines, by continuity, a $C^*$-algebra structure on the space of continuous functions on $TM$, denoted
$C^*(TM)$.
\item
Given a smooth system of Haar measures $(\lambda^x)_{x\in M}$, where each $\lambda ^x$ is a Haar measure on $M$, and two smooth functions $\varphi', \psi'$ on $M\times M$. The product
\[
(\varphi'\ast \psi' ) (x,z):= \int _{y \in M} \varphi'(x,y) \psi'(y,z) d\lambda ^x (y)
\]
defines, by continuity, a $C^*$-algebra structure on the space of continuous functions on $M \times M$, denoted
$C^*(M \times M)$.
\end{enumerate}
\end{example}
Similarly, there is an associated groupoid $C^*$-algebra $C^*(\mathcal{T} M)$ of $\mathcal{T} M$, which can also be seen
as gluing the groupoid $C^*$-algebras $C^*(TM)$ and $C^*(M\times M)$ together.
By Fourier transform, the groupoid $C^*$-algebra $C^*(TM)$ of $TM$ is isomorphic to
the algebra $C_0(T^*M)$ of continuous functions on the cotangent bundle under point-wise multiplication.
This algebra contains the Poisson algebra $C^\infty(T^*M)$. Therefore, one has a Poisson structure on (a sub-algebra of)
$C^*(TM)$. Similarly, one has a Poisson structure on (a sub-algebra of) $C^*(\g)$.
\begin{definition}
A {\bf continuous field of $C^*$-algebras $\left(C, \{\mathcal{B}_\hbar,\varphi_\hbar\}_{\hbar \in [0,1] }\right)$} over $[0,1]$ consists of a $C^*$-algebra $C$,
$C^*$-algebras $\mathcal{B}_\hbar$, $\hbar \in [0,1]$, with surjective homomorphisms $\varphi_\hbar: C \to \mathcal{B}_\hbar$ and an action of $C([0,1])$ on $C$ such that
for all $c\in C$:
\begin{enumerate}
\item
the function $\hbar \mapsto \lVert \varphi_\hbar (c)\rVert$ is continuous;
\item
$\lVert c \rVert = \sup _{\hbar \in [0,1] } \lVert \varphi_\hbar(c) \rVert$;
\item
for $f\in C([0,1])$, $\varphi_\hbar (f c ) = f(\hbar) \varphi_\hbar (c)$.
\end{enumerate}
\end{definition}
\begin{example}[\cite{landsman}]
\label{fieldalgebra}
For the tangent groupoid $\mathcal{T} \mathcal{G}$, we define $\mathcal{T} \mathcal{G}_\hbar := \mathcal{G} \times \{\hbar\}$ for $\hbar \neq 0$ and $\mathcal{T}\mathcal{G}_0 = A(\mathcal{G})$.
The pullback of the inclusion $\mathcal{T}\mathcal{G}_\hbar \hookrightarrow \mathcal{T}\mathcal{G}$ induces a map
$\varphi_\hbar : C^\infty_c (\mathcal{T}\mathcal{G}) \to C^\infty_c (\mathcal{T}\mathcal{G}_\hbar)$, which extends by continuity to a surjective $\ast$-homomorphism
$\varphi_\hbar : C^\ast (\mathcal{T}\mathcal{G}) \to C^\ast (\mathcal{T}\mathcal{G}_\hbar)$.
The $C^*$-algebras $C:=C^*(\mathcal{T}\mathcal{G})$ and $\mathcal{B}_\hbar := C^*(\mathcal{T}\mathcal{G}_\hbar)$ with the maps $\varphi_\hbar$ form a continuous field over $[0,1]$.
\end{example}
\begin{definition}[\cite{rieffel}]
A strict deformation quantisation of a Poisson manifold $P$ consists of
\begin{enumerate}
\item a continuous field of $C^*$-algebras $(C, \{\mathcal{B}_\hbar,\varphi_\hbar \}_{\hbar \in [0,1]})$, with $\mathcal{B}_0 = C_0(P)$;
\item a dense Poisson algebra $\mathcal{B}^0 \subset C_0(P)$ under the given Poisson bracket
$\{\mbox{ } ,\mbox{ } \}$ on $P$;
\item a linear map $\mathcal{Q}:\mathcal{B}^0 \to C$ that satisfies (with $\mathcal{Q}_\hbar (f) := \varphi_\hbar (\mathcal{Q} (f) )$ )
\begin{eqnarray*}
\mathcal{Q}_0(f)&=&f , \\
\mathcal{Q}_\hbar (f^*) &=& \mathcal{Q}_\hbar (f)^*,
\end{eqnarray*}
for all $f\in \mathcal{B}^0$ and $\hbar \in [0,1]$, and for all $f,g \in \mathcal{B}^0$ satisfies the Dirac condition
\[
\lim _{\hbar \to 0}\left\lVert (i \hbar )^{-1} [\mathcal{Q}_\hbar(f), \mathcal{Q}_\hbar (g)] - \mathcal{Q}_\hbar ( \{f,g\} ) \right\rVert = 0 .
\]
\end{enumerate}
\end{definition}
\begin{theorem}[\cite{landsman}]
\label{liequantisation}
Let $\mathcal{G}$ be a Lie groupoid and $A(\mathcal{G})$ its associated Lie algebroid.
The continuous field of $C^*$-algebras $\left( C^*(\mathcal{T}\mathcal{G}), \{C^*(\mathcal{T}\mathcal{G}_\hbar), \varphi_\hbar \}_{\hbar \in [0,1]}\right)$, as defined in Example~\ref{fieldalgebra},
defines a strict deformation quantisation of the Poisson manifold $A^*(\mathcal{G})$.
\end{theorem}
The Poisson structure on $A^*(\mathcal{G})$ is induced dually by the Lie bracket on $A(\mathcal{G})$.
\begin{example}\mbox{}
\begin{enumerate}
\item
$C^\infty(\g^*)$ is a Poisson algebra with Poisson bracket induced from the Lie bracket of $\g$.
Take $C$ to be the $C^*$-algebra generated by the tangent groupoid $\mathcal{T}(G)$, $\mathcal{B}_0$ to be $C^*(\g)$, and
$\mathcal{B}_\hbar$ to be the group $C^*$-algebra $C^*(G)$ for $\hbar \neq0$.
The quantisation map $Q$ is the inclusion of $C^\infty(\g^*)$ into $C^*(\mathcal{T} G)$, and $Q_\hbar$
is $Q$ followed by the map induced by the inclusion of $\g$ or $G$ into $\mathcal{T} G$.
\item
$T^*M$ is a symplectic manifold. Its symplectic structure defines the Poisson algebra $C^\infty(T^*M)$, which is included in the continuous function algebra $C_0(T^*M)$. By Fourier transform, $C_0(T^*M)$ is isomorphic
to $C^*(TM)$. The groupoid $TM$ exponentiates to $M\times M$, thus one obtains
the inclusion of $C^\infty(T^*M)$ into the $C^*$-algebra $C^*(\mathcal{T} M)$.
\item
Take the Lie groupoid $M\times M \times G$; it has the associated Lie algebroid $TM \times \g$.
$C^\infty(T^*M \times \g^*)$ is a dense Poisson algebra in $C^*(TM \times \g)$.
The continuous field of $C^*$-algebras, $(C, \{\mathcal{B}_\hbar, \varphi_\hbar \}_{\hbar \in [0,1]})$,
is given by $C= C^*\left(\mathcal{T} (M\times M \times G)\right)$, $\mathcal{B}_\hbar=C^*(M\times M \times G)$ for $\hbar \neq 0$,
and $\mathcal{B}_0=C_0(T^*M\times \g^*)=C^*(TM \times \g)$. The quantisation map
$Q: C^\infty ( T^*M \times \g^*) \to C^*\left(\mathcal{T} (M\times M \times G)\right)$ is the inclusion, and
$Q_\hbar : C^\infty (T^*M \times \g^* ) \to \mathcal{B}_\hbar$ is $Q$ followed by the restriction map.
\end{enumerate}
\end{example}
Recall that a $q$-connection $A_\hbar \in \operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$ is precisely a $\g$-valued 1-form when $\hbar=0$.
By considering (the characteristic function supported
on) the graph of the function $A_\hbar$, one obtains a distribution on
the tangent groupoid
$\mathcal{T} (M \times M \times G)$ of the Lie groupoid $M \times M \times G$.
Therefore, one has an action of the space of connections $\mathcal{A}$ on the $C^*$-algebra $C^*(\mathcal{T}\mathcal{G})$, for $\mathcal{G}=M\times M \times G$.
By smearing the characteristic function, that is, integrating the distribution against
some smooth function,
one turns the characteristic function into an element of $C^*(\mathcal{T}\mathcal{G})$.
Therefore, one obtains a map from $\operatorname{Func}(\mathcal{T} M, \mathcal{T} G)$, which includes the space of ordinary connections $\mathcal{A}$, to $C^*(\mathcal{T}\mathcal{G})$.
Then Theorem~\ref{liequantisation} allows us to strictly deformation quantise the connections.
In summary, this procedure gives a strict deformation quantisation of the space of connections $\mathcal{A}$, obtained by mapping $\mathcal{A}$ into a
$C^*$-algebra that carries the structure of a strict deformation quantisation.
The goal of this formalism is to provide a deformation parameter $\hbar$, so that ordinary connections are retrieved when $\hbar=0$. This line of work aims to provide loop quantum gravity with a semi-classical limit, so that the theory of classical gravity, general relativity, returns as one takes the limit $\hbar \to 0$.
\section{Outlook}
\label{sec7}
The lack of a semi-classical limit in loop quantum gravity has been a long-standing problem. The main obstacle to obtaining such a limit is the way one traditionally constructs a measure on the space of connections -- probing the connection space with a collection of finite graphs as described in Section~\ref{sec2}. Finite graphs are rigid and discrete; they do not provide an adjustable parameter of the kind one usually encounters in quantum theory. As a result, obtaining a semi-classical limit becomes impossible unless a ``smoother'' way of probing the connection space is introduced. To come up with such a smooth probe, we understand the probing procedure as evaluating holonomies on one-dimensional objects, which form a groupoid. Thus, by using a smooth groupoid, or more specifically a Lie groupoid, one has a hope of circumventing the discreteness problem. The proposal we give here is the tangent groupoid. As it turns out, when the right tangent groupoid is used, the space of connections embeds into the function space over the groupoid. From there, one can deform the connections to convolution operators acting on a Hilbert space. This formalism recreates some elements appearing in the noncommutative standard model \cite{stdtriple}: the gauge group is the unitary part of the algebra in the spectral triple of the noncommutative standard model, the Hilbert space remains the same, and the connections are realised as trace-class operators, with the trace being a gauge invariant quantity that resembles the loop variables in the loop quantum gravity literature.
The tangent groupoid provides another important feature concerning deformation, namely the strict deformation quantisation result of Landsman \cite{landsman}. Deformation quantisation can naturally be formulated in $C^*$-algebraic language, and a theorem due to Landsman shows that a tangent groupoid gives rise to a deformation quantisation in the strict sense. By combining Landsman's result and our realisation of connections using tangent groupoids, we obtain a deformation quantisation of $G$-connections. This quantisation formalism permits the existence of a parameter $\hbar$, so that one retrieves the classical connections when $\hbar=0$.
However, this is not the end of the story.
Since connection variables are only half of the variables of gravity in the Arnowitt-Deser-Misner formulation, one still has to look into the other half of the variables, to which the connections are conjugate dual: tetrads, or metrics. It is known that tetrads are quantised to degree one differential operators \cite{AGN,lai}, and their semi-classical limit can be obtained from the $\hbar \to 0$ limit of the integral kernel of the differential operator multiplied by $\hbar$, which gives nothing but the symbol of the differential operator \cite{lai2}. In the case of the manifold $M$ being three dimensional, the symbol is an $\mathfrak{su}(2)$-valued function on
the Poisson manifold $T^*M$. Hence the symbol of the differential operator lives in the same space as a connection does. Following up the work on deformation quantisation of connections here, the next step is to examine the interaction of the tetrads with the connections at both the classical level $\hbar=0$ and the quantum level $\hbar \neq 0$. At $\hbar = 0$, one needs to examine the Poisson bracket of a connection and a symbol (of a differential operator), and determine which symbol is conjugate dual to a given connection. At $\hbar \neq 0$, one repeats the same procedure except that the Poisson bracket is replaced with a commutator. We will leave those considerations to another article. Finally, the author would like to stress that the line of work here is toward obtaining a semi-classical limit in loop quantum gravity; this tangent groupoid application has not been seriously considered before, thus the work here may appear incomplete. Nonetheless, the development so far shows that a semi-classical limit in loop quantum gravity is more within reach than before.
\bibliographystyle{amsalpha}
The trilinear Higgs coupling ($\lambda_{HHH}$) is an important parameter of the Higgs boson potential in the Standard Model (SM). Even though $\lambda_{HHH}$ is related to the Higgs boson mass ($m_H$) in the SM, it might deviate from the SM value $\lambda_{HHH}^{\rm SM}$ in new physics (NP) models~\cite{Kanemura:2002vm,Han:2003wu,Kanemura:2004mg,Bhattacherjee:2014bca,Barger:2014qva,Hespel:2014sla,Wu:2015nba}. For example, the deviation of the trilinear Higgs coupling might emerge from non-vanishing higher-dimension operators starting with dimension 6. It is then of critical importance to measure $\lambda_{HHH}$ to test the SM. Such a goal will only be successful if information from a range of production channels of Higgs boson pairs is included. There are five major channels of Higgs pair productions. Figure~\ref{fig:xsec} plots the inclusive cross sections of all those five channels as a function of $\kappa$ at the 14~TeV Large Hadron Collider (LHC), where the factor $\kappa$ is introduced to describe possible NP effects in the trilinear Higgs coupling as $\lambda_{HHH}\equiv \kappa \lambda_{HHH}^{\rm SM}$. The leading production channel is the so-called gluon fusion (GF) channel, $gg\to HH$~\cite{Glover:1987nx,Moretti:2004wa,Dawson:2012mk,Baglio:2012np,Papaefstathiou:2012qe,Dolan:2012rv,Shao:2013bz,Goertz:2013kp,Li:2013flc,
Chen:2014xwa,Chen:2014xra,Frederix:2014hta,Maltoni:2014eza,Li:2015yia,Dawson:2015oha,He:2015spf,Cao:2015oaa,deFlorian:2013jea,
deFlorian:2015moa,Dawson:1998py,deFlorian:2013uza,Grigo:2013rya,Grober:2015cwa,Grigo:2014jma,Azatov:2015oxa,
Contino:2012xk,Plehn:1996wb,Nishiwaki:2013cma,Slawinska:2014vpa,Barr:2013tda,Barger:2013jfa,deLima:2014dta,Wardrope:2014kya}, the subleading channel is the vector boson fusion (VBF) process, $qq \to qqHH$~\cite{Moretti:2004wa,Baglio:2012np,Dolan:2013rja,Liu-Sheng:2014gxa,Frederix:2014hta,Dolan:2015zja}, the third channel is $t\bar{t}HH$ production~\cite{Moretti:2004wa,Baglio:2012np,Englert:2014uqa,Liu:2014rva,Frederix:2014hta}, while the last two channels are $WHH$ and $ZHH$ productions~\cite{Barger:1988jk,Moretti:2004wa,Baglio:2012np,Frederix:2014hta}. The GF channel has drawn a lot of attention owing to its large cross section. Searches for Higgs pairs in the decay modes of $bb\bar{b}\bar{b}$, $b\bar{b}\tau\tau$, $b\bar{b}WW$ and $\gamma\gamma b\bar{b}$ have been carried out by the ATLAS collaboration recently~\cite{Aad:2014yja, Aad:2015uka, Aad:2015xja}, and a combined upper limit of $\sigma(gg\to HH)\leq 0.69~{\rm pb}$ is observed. On the other hand, the VBF channel is shown to be less promising for measuring the trilinear Higgs coupling~\cite{Dolan:2015zja}, as it cannot compete with the process of $gg\to HHqq$, i.e. higher order QCD corrections to the GF channel. The potential of probing $\lambda_{HHH}$ in the $t\bar{t}HH$ channel at the high-luminosity (HL) LHC (a 14~TeV $pp$ collider with an integrated luminosity of $3000~{\rm fb}^{-1}$) is examined in Refs.~\cite{Englert:2014uqa,Liu:2014rva}, which show that a bound of $\kappa\leq 2.51$ can be reached at 95\% CL. Unfortunately, the $WHH$ and $ZHH$ productions have not yet been studied carefully in the literature, which is not surprising because the $VHH$ production rates are quite small. In this work we focus on the $VHH$ ($V=W^\pm,Z$) production channel and demonstrate that the $VHH$ production could probe the trilinear Higgs coupling at a level comparable to the GF and $t\bar{t}HH$ channels at the HL-LHC.
\begin{figure}
\includegraphics[scale=0.32]{HH_xsection.pdf}
\caption{The production cross section (in unit of fb) of Higgs boson pair production as a function of $\kappa$ at the 14 TeV LHC.}
\label{fig:xsec}
\end{figure}
The $VHH$ production has many advantages over other production channels. First, the charged lepton and invisible neutrino from $W$- or $Z$-boson decays in the $VHH$ production provide a good trigger for signal events.
Note that $\sigma(VHH)$ is about one tenth of $\sigma(gg\to HH)$. However, after including the branching ratio (BR) of $V$-boson and Higgs boson decays, the cross section of $VHH$ channel is comparable to the GF channel with subsequent decays $HH\to \gamma\gamma b\bar{b}$. For example,
\begin{align}
& \sigma(W^\pm HH)\times {\rm BR}(W^\pm \rightarrow \ell^\pm \nu_\ell,HH\rightarrow b\bar{b}b\bar{b})=0.088\rm{fb},\nonumber\\
& \sigma(ZHH)\times {\rm BR}(Z\rightarrow \nu\bar{\nu},HH\rightarrow b\bar{b}b\bar{b})=0.059\rm{fb}, \nonumber\\
& \sigma(gg\rightarrow HH)\times {\rm BR}(HH\rightarrow\gamma\gamma b\bar{b})=0.15\rm{fb},\nonumber
\end{align}
where $\ell=e,\mu$. Here, we use the branching ratios at the tree level: ${\rm Br}(H\to b\bar{b})=0.83$, ${\rm Br}(Z\to\nu\bar{\nu})=0.20$, and ${\rm Br}(W^+\to \ell^+\nu_\ell)=0.11$. Also, the SM backgrounds can be dramatically reduced by tagging the charged lepton and missing neutrino from the $V$-boson decay.
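The quoted $\sigma\times{\rm BR}$ values combine the tree-level branching ratios listed above. The short check below reproduces the arithmetic; the back-solved inclusive cross sections, and the assumption that the $W$ branching ratio is summed over $\ell=e,\mu$, are our own inferences rather than numbers quoted in the text:

```python
# Reproduce the branching-ratio arithmetic behind the quoted sigma x BR
# values. Tree-level BRs are taken from the text; the back-solved
# inclusive cross sections are our own inference, not quoted numbers.
br_h_bb = 0.83           # BR(H -> b bbar)
br_w_lv = 2 * 0.11       # BR(W -> l nu), summed over l = e, mu (assumption)
br_z_vv = 0.20           # BR(Z -> nu nubar), all flavours
br_hh_4b = br_h_bb ** 2  # BR(HH -> b bbar b bbar)

print("BR(HH->4b) =", round(br_hh_4b, 3))                      # ~0.689
print("sigma(WHH) ~", round(0.088 / (br_w_lv * br_hh_4b), 2))  # implied, fb
print("sigma(ZHH) ~", round(0.059 / (br_z_vv * br_hh_4b), 2))  # implied, fb
```

The implied inclusive cross sections (roughly half a femtobarn each) are consistent with the statement that $\sigma(VHH)$ is about one tenth of $\sigma(gg\to HH)$.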
Second, all five channels of Higgs pair production involve not only a diagram with $\lambda_{HHH}$ but also additional contributions, which dilute the sensitivity to $\lambda_{HHH}$. The GF channel can be modified by several NP effective operators~\cite{Cao:2015oaa}, which might arise from colored NP particles inside the loop diagrams or from a modified top-quark Yukawa coupling, etc. The $VHH$ production could be modified by the $HVV$ anomalous couplings, which are tightly bounded~\cite{CMS:2015kwa,ATLAS-CONF-2015-044}.
Last but not least, as depicted in Fig.~\ref{fig:xsec}, the dependence of $\sigma(VHH)$ on $\kappa$ is different from those of the GF, VBF and $t\bar{t}HH$ channels. The $t\bar{t}HH$ production is dominated by the large top Yukawa coupling such that it has a mild dependence on $\kappa$. On the other hand, both the GF and the VBF channels are sensitive to a negative $\kappa$ while the $VHH$ channel is sensitive to a positive $\kappa$. One can use the $VHH$ production to probe positive $\kappa$ and the GF channel to limit negative $\kappa$.
The $\kappa$ dependence can be understood as follows.
The GF channel exhibits a large cancellation between the triangle diagram and the box diagram around the threshold of Higgs boson pairs~\cite{Glover:1987nx}. When $\kappa<0$ the two diagrams interfere constructively so as to enhance the cross section.
The VBF and $VHH$ channels share the same subprocess of $V^\mu V^\nu \to HH$ and are related to each other by crossing symmetry. Consider the VBF channel first. The matrix element of $V^\mu(q_1) V^\nu(q_2) \to H(k_1) H(k_2)$ is
\begin{eqnarray}
M^{\mu\nu}&=& g^{\mu\nu}\left[ \kappa \frac{m_V^2}{v^2}\frac{6m_H^2}{\hat{s}-m_H^2}+\frac{2m_V^2}{v^2} \right. \nonumber\\
& +& \left. \frac{4m_V^4}{v^2}\left(\frac{1}{\hat{t}-m_V^2} +\frac{1}{\hat{u}-m_V^2}\right)\right] +{\rm others},
\label{eq:VVHH}
\end{eqnarray}
where $m_V$ denotes the mass of $V$-boson and $q_{1,2}$ ($k_{1,2}$) denotes the momentum of the $V$ (Higgs) boson, respectively. Figure~\ref{fig:VVHH} shows Feynman diagrams of $V^\mu V^\nu \to HH$.
For the VBF channel, $\hat{s}=(q_1+q_2)^2$, $\hat{t}=(q_1-k_1)^2<0$ and $\hat{u}=(q_1 - k_2)^2<0$. Near the threshold of Higgs boson pairs, $\hat{s}\sim 4m_H^2$ and $\hat{t}\simeq \hat{u}\sim 0$. It gives rise to
\begin{equation}
M^{\mu\nu} \sim \frac{2m_V^2}{v^2}(\kappa - 3) g^{\mu\nu} + \cdots, \nonumber
\end{equation}
yielding a small cross section around $\kappa \sim + 3$ and a large cross section for $\kappa<0$. The sub-amplitude of the $VHH$ production can be obtained from $VV\to HH$ by crossing one gauge boson from the initial state to the final state. In the vicinity of the thresholds of $HH$ and $VH$ pairs,
$\hat{s}\sim 4m_H^2$ and $\hat{t}=\hat{u}\sim (m_H+m_V)^2$.
That yields
\begin{equation}
M^{\mu\nu}\sim \frac{2m_V^2}{v^2}\left(\kappa + 1 +\frac{4m_V^2}{m_H(m_H+2m_V)}\right) g^{\mu\nu} + \cdots,\nonumber
\end{equation}
which leads to a small cross section around $\kappa \sim -2$ and a large cross section for $\kappa>0$.
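The $\kappa$ values at which the leading threshold amplitudes vanish follow directly from the two expressions above. The sketch below evaluates them; the numerical masses ($m_H=125$~GeV, $m_W=80.4$~GeV, $m_Z=91.2$~GeV) are our inputs, as the text keeps $m_V$ and $m_H$ symbolic:

```python
# Zeros of the leading threshold amplitudes in kappa. The input masses
# (in GeV) are our assumption; the text only uses m_H and m_V symbolically.
m_H, m_W, m_Z = 125.0, 80.4, 91.2

def kappa_zero_vbf():
    # VBF threshold: M ~ (kappa - 3)  =>  zero at kappa = +3
    return 3.0

def kappa_zero_vhh(m_V):
    # VHH threshold: M ~ kappa + 1 + 4 m_V^2 / (m_H (m_H + 2 m_V))
    return -(1.0 + 4.0 * m_V ** 2 / (m_H * (m_H + 2.0 * m_V)))

print(kappa_zero_vbf())          # 3.0
print(kappa_zero_vhh(m_W))       # ~ -1.72, i.e. "around kappa ~ -2"
print(kappa_zero_vhh(m_Z))       # ~ -1.87
```

Both $W$ and $Z$ give a zero near $\kappa\sim -2$, matching the statement that the $VHH$ rate is suppressed there and enhanced for $\kappa>0$.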
\begin{figure}
\includegraphics[scale=0.3]{VVHH.pdf}
\caption{Feynman diagrams of $V^\mu V^\nu \to HH$.}
\label{fig:VVHH}
\end{figure}
Next we perform a collider simulation at the parton level to investigate the sensitivity of the HL-LHC on the $\lambda_{HHH}$ measurement through the $VHH$ production.
\section{The $WHH$ production}
Our signal consists of both $W^+HH$ and $W^-HH$ productions. The signal and background events are generated with MadEvent~\cite{Alwall:2011uj}. To include higher order QCD corrections, we multiply the tree-level cross section of the signal by a factor of $K_{WHH}=1.39$~\cite{Baglio:2012np}. As the production rate is quite small, we demand that both Higgs bosons decay into $b\bar{b}$ pairs, the mode with the largest branching ratio among all decay modes of the Higgs boson. In order to trigger on the signal events, we demand a leptonic decay of the $W$-boson, which gives rise to a charged lepton and an invisible neutrino in the final state. The event topology of our signal is characterized by one isolated charged lepton ($\ell^\pm$), four $b$-jets and a large missing transverse momentum ($\not{\!\!{\rm E}}_{T}$) from the missing neutrino. Both electrons and muons are used in our analysis.
In this article the QCD corrections to the signal processes are taken into account by a constant $K$ factor, so it is worthwhile discussing how much our results are influenced by the QCD corrections. The NNLO QCD corrections to the $VHH$ productions are calculated in Ref.~\cite{Baglio:2012np}, which shows that the cross-section errors are dominated by the PDF uncertainty; for example, the total uncertainty is $+3.7\%$ and $-3.1\%$ for $WHH$ production and $+7.0\%$ and $-5.5\%$ for $ZHH$ production at the 14 TeV LHC. The fully differential cross section of $WHH$ production at next-to-next-to-leading order is calculated in Ref.~\cite{Li:2016nrr}; it shows that the QCD effects only mildly modify the transverse momentum distributions of the $W$-boson and the Higgs bosons and thus do not alter the acceptance of the kinematic cuts used in this study.
\begin{table*}
\caption{Numbers of $WHH$ signal ($\kappa=1$) and background events after a series of cuts which are applied sequentially at the 14 TeV LHC with an integrated luminosity of $3000~{\rm fb}^{-1}$.
}
\label{tbl:whhcut}
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\hline
&$WHH$&$Wbb\bar{b}\bar{b}$&$Zbb\bar{b}\bar{b}$&$t\bar{t}$&$t\bar{t}j$ &$t\bar{t}H(\rightarrow b\bar{b})$&$t\bar{t}Z(\rightarrow b\bar{b})$&$t\bar{t}b\bar{b}$ \\
\hline
Basic cuts & 200.9 &157770 &266580&$4.26\times 10^8$&$1.0\times 10^9$&296716&97888&$7.0\times10^6$ \\
\hline
Selection cuts &34.6 &544.4 &44.9&$3.62\times 10^7$&$3.8\times 10^7$&1454.0&442.4&65590.7\\
\hline
$4b$ tagging &7.2 &97.2 &9.9&767.6&1002&265.1&71.6&922.3\\
\hline
$\chi_{HH}<1.6$&7.2 & 2.3 &0&170.6&0&45.8&0.1&51.8 \\
\hline
$\chi_{tt}>3.2$ &4.8 & 0.6 &0&0&0&20.5&0.05&47.9\\
\hline
$m_T~\&~H_T$ cuts&3.5 &0.2 &0&0&0&4.1&0&2.0\\
\hline
\end{tabular}
\end{table*}
The SM backgrounds are rather complicated. In order to reduce the huge SM backgrounds, we require that all four hard jets be tagged as $b$-jets. In the study we also take into account the possibility that a light-quark jet fakes a $b$-jet, with mistag efficiencies $\epsilon_{j\to b}=0.1\%$ ($j=u,d,s$) and $\epsilon_{c\to b}=10\%$.
The major SM backgrounds include $W^\pm(\to \ell^\pm\nu_\ell)bb\bar{b}\bar{b}$, $Z(\to \ell^+ \ell^-)bb\bar{b}\bar{b}$,
$t\bar{t},~t\bar{t}j$, $t\bar{t}H$, $t\bar{t}Z$ and $t\bar{t}b\bar{b}$.
To mimic the signal events, top-quark pairs in the $t\bar{t}$ and $t\bar{t}j$ backgrounds are required to decay into semi-leptonic final states while top-quark pairs in the $t\bar{t}H$, $t\bar{t}Z$ and $t\bar{t}b\bar{b}$ backgrounds decay into dilepton final states.
To take into account higher order QCD corrections, we multiply the $t\bar{t}H$ and $t\bar{t}Z$ backgrounds with a factor of $K_{t\bar{t}H}=1.22$~\cite{Beenakker:2001rj,Beenakker:2002nc,Dawson:2002tg,Dawson:2003zu,Yu:2014cka,Frixione:2014qaa,Maltoni:2015ena,Frixione:2015zaa} and $K_{t\bar{t}Z}=1.49$~\cite{Lazopoulos:2008de,Garzelli:2011is,Kardos:2011na,Garzelli:2012bn,Maltoni:2015ena,Frixione:2015zaa}, respectively.
When generating both the signal and background events, we impose {\it basic} cuts as follows: $p_T^{\ell^\pm,b,j} > 5~{\rm GeV}$ with $|\eta^{\ell^\pm,b,j}| <5$, where $p_T$ and $\eta$ denote the transverse momentum and rapidity, respectively. To obtain isolated objects, we require that the cone distance $\Delta R_{mn}\equiv \sqrt{(\eta^m-\eta^n)^2 + (\phi^m-\phi^n)^2}$ between objects $m$ and $n$ be at least 0.4. The numbers of signal and background events are shown in the second row of Table~\ref{tbl:whhcut}.
There are also many other SM backgrounds involving light non-$b$ jets, e.g. $Wb\bar{b}c\bar{c}$ (5.6~fb), $Wcc\bar{c}\bar{c}$ (3.8~fb), $Wb\bar{b}jj$ ($\sim 10^3~\rm{fb}$), $Wc\bar{c}jj$ ($\sim 10^3~\rm{fb}$), $Wjjjj$ ($\sim 10^5~\rm{fb}$), $WWZ$ (95.9~fb), $ZZZ$ (10.4~fb), $WZZ$ (30.2~fb), $WWW$ (12.6~fb) and $t\bar{t}ZZ$ ($1.8~\rm{fb}$). The numbers in parentheses denote the production cross sections after imposing $p_T^{b,j}\geq40$ GeV with $|\eta^{b,j}|\leq2.5$. All of these light-jet backgrounds can be safely ignored after tagging four $b$-jets; for example, $\sigma(WWZ\to \ell^\pm bbbb+\not{\!\!{\rm E}}_{T}) \sim 10^{-5}~\rm{fb}$ and $\sigma(ZZZ\to \ell^\pm bbbb+\not{\!\!{\rm E}}_{T}) \sim 10^{-3}~\rm{fb}$. From now on we therefore ignore these light-jet backgrounds in our collider simulations.
At the analysis level, all the signal and background events are required to pass a set of {\it selection} cuts~\cite{Aad:2015uka}:
\begin{align}
& p_T^e \geq 15~{\rm GeV}, && p_T^\mu \geq 10~{\rm GeV}, && p_T^b \geq 40~{\rm GeV},\nonumber\\
& \left| \eta^{e,\mu,b} \right|\leq 2.5~,&& \Delta R_{bb,b\ell}>0.4, && \not{\!\!{\rm E}}_{T}\geq 40~{\rm GeV}.
\label{eq:cut}
\end{align}
We model detector resolution effects by smearing the final-state energy according to $\delta E/E= \mathcal{A}/\sqrt{E/{\rm GeV}}\oplus \mathcal{B}$, where we take $\mathcal{A}=10(85)\%$ and $\mathcal{B}=1(5)\%$ for leptons (jets). We demand only one charged lepton and four hard jets in the central region of the detector. Reducible backgrounds with more jets or charged leptons could mimic the experimental signature of the signal events if the $p_T$ of the additional jets or leptons is less than 10~GeV or their rapidity (in magnitude) is larger than 3.5.
As shown in the third row of Table~\ref{tbl:whhcut}, roughly $1/6$ of the signal events pass the selection cuts.
Once the four jets are triggered, one can require them to be $b$-jets, which significantly reduces the SM backgrounds consisting of light jets; see the fourth row of Table~\ref{tbl:whhcut}. The $b$-tagging efficiency depends on both $p_T^b$ and $\eta^b$; we adopt the $b$-tagging efficiency given in Ref.~\cite{Cao:2015oaa}, which yields an average $b$-tagging efficiency of 70\% in our analysis.
\begin{table*}
\caption{The numbers of $ZHH$ ($\kappa=1$) signal and background events after a series of cuts which are applied sequentially at the 14 TeV LHC with an integrated luminosity of $3000~{\rm fb}^{-1}$.
}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
&$ZHH$ &$Zbb\bar{b}\bar{b}$ &$Zb\bar{b}c\bar{c}$ &$Zcc\bar{c}\bar{c}$ &$ZZ(\rightarrow b\bar{b})b\bar{b}$ &$t\bar{t}c\bar{c}$ & $t\bar{t}b\bar{b}$& $t\bar{t}Z(\to b\bar{b})$ &$t\bar{t}H(\rightarrow b\bar{b})$&$t\bar{t}$&$t\bar{t}j$ \\
\hline
Basic cuts &155.4 &$1.2\times10^6$ &$3.2\times10^6$ &$2.0\times 10^6$ & 24627 &$4.6\times10^6$ & $7.0\times10^6$ &97889 &296716&$4.3\times10^8$ &$1.0\times10^9$\\
\hline
Selection cuts &23.2 &2589.6 &4504.8 &2000.4 &308.0 &2131.3 &2435.0 &11.4 &29.4& $1.4\times10^6$ & $1.4\times10^6$ \\
\hline
$4b$ tagging &4.8 &499.4 &19.2 &0 &56.6 &0 & 27.7 &2.0 &3.6& 28.4 & 66.8 \\
\hline
$\chi_{HH}<1.6$ &4.8 &10.8 &0 &0 &0.1 &0 & 0 &0.05 &0.7 &0&0 \\
\hline
\end{tabular}
\label{tbl:zhh}
\end{table*}
The four $b$-jets in the signal events originate from the Higgs boson decays. We first order the jets by their $p_T$ values and then demand that at least one pairing of the four jets be consistent with the $HH\to b\bar{b}b\bar{b}$ decay hypothesis, i.e.
\begin{equation}
\chi_{HH}\equiv \sqrt{ \left(\frac{m_{ij}-m_H}{\sigma_{m_H}}\right)^2+ \left(\frac{m_{i^\prime j^\prime}-m_H}{\sigma_{m_H}}\right)^2 } \le 1.6,
\label{eq:mh}
\end{equation}
where $m_{ij}$ denotes the invariant mass of the dijet formed by jets $i$ and $j$, and $\sigma_{m_H}(= m_H/10)$ is the dijet mass resolution. All the signal events pass this cut while only about 1\% of the background events remain.
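As an illustration of how the $\chi_{HH}$ cut behaves, the short sketch below evaluates it for two hypothetical dijet-mass pairings; the numbers are invented for illustration and are not taken from the event samples.

```python
import math

# Dijet mass resolution quoted in the text: sigma_mH = m_H / 10.
m_H = 125.0
sigma_mH = m_H / 10.0

def chi_HH(m12, m34):
    """Compatibility of a dijet pairing with the HH -> bbbb hypothesis."""
    return math.hypot((m12 - m_H) / sigma_mH, (m34 - m_H) / sigma_mH)

# A signal-like pairing passes the chi_HH < 1.6 cut ...
print(chi_HH(120.0, 130.0))  # ~0.57
# ... while an off-resonance pairing is rejected.
print(chi_HH(95.0, 150.0))   # ~3.12
```

In the analysis the requirement is applied to all jet combinations, so an event survives as long as at least one pairing satisfies $\chi_{HH}<1.6$.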
At this stage of analysis, the dominant background is from $t\bar{t}$ production. Following the ATLAS study~\cite{Aad:2015uka}, we check the compatibility with top-quark decay hypothesis with the following variable
\begin{equation}
\chi_{tt}=\sqrt{ \left(\frac{\tilde{m}_W- m_{W}}{\sigma_{m_W}}\right)^2+ \left(\frac{\tilde{m}_t - m_{t}}{\sigma_{m_t}}\right)^2 },
\end{equation}
with $m_W = 80.419~{\rm GeV}$ and $m_t = 173~{\rm GeV}$.
Here $\sigma_{m_W}=0.1\, m_W$ and $\sigma_{m_t}=0.1\, m_t$ represent the dijet and three-jet mass resolutions, while $\tilde{m}_W$ and $\tilde{m}_t$ are the invariant masses of the $W$-boson and top-quark candidates. If either dijet in an event has $\chi_{tt}\leq 3.2$ for any possible combination with an extra jet, the event is rejected. This requirement efficiently reduces the $t\bar{t}$ background, whilst retaining $\sim 67\%$ of the signal events; see the sixth row of Table~\ref{tbl:whhcut}.
While the $\not{\!\!{\rm E}}_{T}$ in the signal events arises mainly from the missing neutrino, that in the background events is contaminated by jets or leptons either falling into a large-rapidity region or carrying too small a transverse momentum to be detected. To further suppress the SM backgrounds, we impose cuts on the transverse mass ($m_T$) of the $\not{\!\!{\rm E}}_{T}$ and the charged lepton, $m_T(\ell^\pm, \not{\!\!{\rm E}}_{T}) = \sqrt{2 p_T^{\ell}\not{\!\!{\rm E}}_{T}\left(1-\cos\phi\right)}$ with $\phi$ being the azimuthal angle between $\ell^\pm$ and $\not{\!\!{\rm E}}_{T}$, and on $H_T$, defined as the scalar sum of the $p_T$'s of the jets and the charged lepton:
\begin{equation}
m_T \leq m_W,~~H_T\geq 400~{\rm GeV}.
\end{equation}
The SM backgrounds are suppressed efficiently such that only 6.3 background events survive after all the cuts. We end up with 3.5 signal events.
Equipped with the optimal cuts shown above, we vary $\kappa$ to determine where a statistical significance of 5 standard deviations ($5\sigma$) is reached, using
\begin{equation}
\sqrt{-2\left[(n_b + n_s) \log\frac{n_b}{n_s+n_b}+n_s\right]},
\label{eq:dis}
\end{equation}
from which we find that a $5\sigma$ discovery requires $\kappa\geq 4.81$ or $\kappa\leq -7.68$. Here, $n_s$ and $n_b$ represent the numbers of signal and background events, respectively.
The discovery significance for the SM trilinear Higgs coupling ($\kappa=1$) is found to be around $1.29\sigma$, which is comparable to the projected significances derived from the GF channel by the ATLAS ($1.19\sigma$)~\cite{ATL-PHYS-PUB-2014-019} and CMS ($1.65\sigma$)~\cite{CMS:2015nat} collaborations.
In the case that no evidence of Higgs pair production is observed, one can set a $2\sigma$ exclusion limit on $\kappa$ from
\begin{equation}
\sqrt{-2\left[n_b\log \frac{n_s + n_b}{n_b}-n_s\right]} = 2~,
\label{eq:ex}
\end{equation}
yielding $-5.11\leq\kappa\leq 2.24$~.
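The two test statistics in Eqs.~\ref{eq:dis} and \ref{eq:ex} are simple enough to evaluate directly. The sketch below (the helper names are ours) reproduces the quoted SM-point significance from the $WHH$ cut flow of Table~\ref{tbl:whhcut}, namely 3.5 signal over 6.3 background events.

```python
import math

def z_discovery(n_s, n_b):
    """Poisson log-likelihood-ratio discovery significance, Eq. (dis)."""
    return math.sqrt(-2.0 * ((n_b + n_s) * math.log(n_b / (n_s + n_b)) + n_s))

def z_exclusion(n_s, n_b):
    """Exclusion statistic on the left-hand side of Eq. (ex)."""
    return math.sqrt(-2.0 * (n_b * math.log((n_s + n_b) / n_b) - n_s))

# WHH cut flow at kappa = 1: 3.5 signal and 6.3 background events.
print(round(z_discovery(3.5, 6.3), 2))  # ~1.29, as quoted in the text
```

At the SM point the exclusion statistic stays below 2, consistent with $\kappa=1$ lying inside the allowed band $-5.11\leq\kappa\leq 2.24$.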
\section{The $ZHH$ Production}
Now consider the $ZHH$ production. We require that both Higgs bosons decay into $b\bar{b}$ pairs and that the $Z$ boson decays into neutrinos.
The topology of our signal events is characterized by four $b$-jets and a large $\not{\!\!{\rm E}}_{T}$ from the missing neutrinos. The major SM backgrounds are
\begin{align}
&Zbb\bar{b}\bar{b}, ~Zb\bar{b}c\bar{c}, ~Zcc\bar{c}\bar{c},~ZZ(\to b\bar{b})b\bar{b},~t\bar{t}c\bar{c}, \nonumber\\
&t\bar{t}b\bar{b},~t\bar{t}Z(\to b\bar{b}),~t\bar{t}H(\to b\bar{b}),~t\bar{t},~t\bar{t}j.\nonumber
\end{align}
Top quark pairs in the $t\bar{t}$ and $t\bar{t}j$ backgrounds are demanded to decay into semi-leptonic final states, while those pairs in the $t\bar{t}c\bar{c}$, $t\bar{t}b\bar{b}$, $t\bar{t}Z$ and $t\bar{t}H$ backgrounds decay into both semi-leptonic and dilepton final states. The background $t\bar{t}Z$ with $Z\to \nu\bar{\nu}$ is negligible after requiring four $b$-tagged jets.
The cross section of the signal is normalized to NNLO precision~\cite{Baglio:2012np}, while the $t\bar{t}H$ and $t\bar{t}Z$ backgrounds are normalized to NLO accuracy~\cite{Beenakker:2001rj,Beenakker:2002nc,Dawson:2002tg,Dawson:2003zu,Lazopoulos:2008de,Garzelli:2011is,Kardos:2011na,Garzelli:2012bn,Yu:2014cka,Frixione:2014qaa,Maltoni:2015ena,Frixione:2015zaa}. We also consider a few SM backgrounds involving light jets, e.g. $ZZZ$ (10.4~fb), $WZZ$ (30.2~fb), $Zjjjj$ ($\sim 10^4~\rm{fb}$), $Zb\bar{b}jj$ ($\sim 10^3~\rm{fb}$), $Zc\bar{c}jj$ ($\sim 10^3~\rm{fb}$), $WZb\bar{b}$ (18~fb) and $t\bar{t}jj$ ($\sim 10^5~\rm{fb}$). The numbers in parentheses denote the production cross sections after imposing $p_T^{b,j}\geq40$ GeV with $|\eta^{b,j}|\leq2.5$ on the jets. Again we note that these light-flavor-jet backgrounds are negligible after requiring four $b$-tagged jets in the final state.
The {\it selection} cuts used in the $ZHH$ production are the same as those used in the $WHH$ channel (see Eq.~\ref{eq:cut}) except that we now demand $\not{\!\!{\rm E}}_{T}>100~{\rm GeV}$ to trigger the events. Table~\ref{tbl:zhh} displays the numbers of signal ($\kappa=1$) and background events after the selection cuts. For the four $b$-tagged jets we also demand $\chi_{HH}<1.6$, which efficiently suppresses the backgrounds. We end up with 4.8 signal events and 11.7 background events. Based on Eqs.~\ref{eq:dis} and \ref{eq:ex}, we obtain the $5\sigma$ discovery reach and the $2\sigma$ exclusion limits on $\kappa$: a $5\sigma$ significance requires $\kappa\geq 4.85$ or $\kappa\leq -8.10$, while the $2\sigma$ exclusion bound is $-5.42\leq \kappa\leq 2.16$. The discovery significance of $\lambda_{HHH}^{\rm SM}$ is $1.32\sigma$ in the $ZHH$ channel, which is comparable to that of the GF channel.
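The quoted $1.32\sigma$ can be reproduced directly from Eq.~\ref{eq:dis} with the $ZHH$ yields of Table~\ref{tbl:zhh}; this is a quick numerical check, not part of the analysis.

```python
import math

# ZHH cut flow at kappa = 1: 4.8 signal over 11.7 background events.
n_s, n_b = 4.8, 11.7
z = math.sqrt(-2.0 * ((n_b + n_s) * math.log(n_b / (n_s + n_b)) + n_s))
print(round(z, 2))  # ~1.32
```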
\begin{table}[b]
\caption{The sensitivity to $\lambda_{HHH}=\kappa \lambda_{HHH}^{\rm SM}$ in several production channels of Higgs boson pairs at the HL-LHC. }
\label{tbl:sig}
\begin{tabular}{l|c|c|c}
\hline
& SM & $5\sigma$ discovery& $2\sigma$ exclusion \\
& ($\kappa=1$) & potential &bound\\
\hline
$WHH$ & 1.29$\sigma$ & $\kappa\leq -7.7$, ~$\kappa\geq 4.8$ & $-5.1\leq\kappa\leq 2.2$ \\
$ZHH$ & $1.32\sigma$ & $\kappa\leq -8.1$, ~$\kappa\geq 4.8$ & $-5.4\leq\kappa\leq 2.2$ \\
GF($b\bar{b}\gamma\gamma$)~\cite{ATL-PHYS-PUB-2014-019} & $1.19\sigma$ & $\kappa\leq -4.5$, ~$\kappa\geq 8.1$ & $-0.2\leq\kappa\leq 4.9$\\
GF($b\bar{b}\gamma\gamma$)~\cite{CMS:2015nat} & $1.65\sigma$ & $\kappa\leq -2.6$, ~$\kappa\geq 6.3$ & $~~0.5\leq\kappa\leq 4.1$\\
VBF~\cite{Dolan:2015zja} & $0.59\sigma$ & $\kappa\leq -1.7$, ~$\kappa\geq 5.0$ & $-0.4\leq\kappa\leq 3.5$ \\
$t\bar{t}HH$~\cite{Englert:2014uqa,Liu:2014rva} & $1.38\sigma$ & $\kappa\leq -11.4, \kappa\geq 6.9$ & $-7.2\leq \kappa\leq 2.5$\\
\hline
\end{tabular}
\end{table}
\section{Discussions and conclusions}
The sensitivity to the trilinear Higgs coupling is summarized in Table~\ref{tbl:sig}. The coupling $\lambda_{HHH}^{\rm SM}$ can be probed at the level of $1.32\sigma$ ($1.29\sigma$) in the $ZHH$ ($WHH$) production at the HL-LHC.
The discovery potential of the $VHH$ production is comparable to those of the GF channel given by the ATLAS collaboration ($1.19\sigma$)~\cite{ATL-PHYS-PUB-2014-019} and CMS collaboration ($1.65\sigma$)~\cite{CMS:2015nat} and $t\bar{t}HH$ production ($1.38\sigma$)~\cite{Englert:2014uqa,Liu:2014rva}. The VBF channel can probe $\lambda_{HHH}^{\rm SM}$ only at the level of $0.59\sigma$~\cite{Dolan:2015zja}.
In order to combine all the channels to probe the Higgs trilinear coupling, we use the following significance formula~\cite{Cowan:2010js},
\begin{equation}
\sqrt{-2\log\left(\prod_{i=1}^N\dfrac{L(b_i|s_i+b_i)}{L(s_i+b_i|s_i+b_i)}\right)},
\end{equation}
where $s_i$ and $b_i$ denote the numbers of signal and background events after imposing a set of cuts in production channel $i$, respectively. The likelihood function is defined as
$L(x_i | n_i)=x_i^{n_i}e^{-x_i}/{n_i!}$. The $\lambda_{HHH}^{\rm SM}$ can be measured at the level of $3.13\sigma$ after combining all the above five channels.
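Since the likelihood product factorizes, $-2\log\prod_i L(b_i|s_i+b_i)/L(s_i+b_i|s_i+b_i)$ reduces to the sum of the per-channel statistics of Eq.~\ref{eq:dis}, so the combined significance is simply the quadrature sum of the individual ones. The sketch below uses the rounded SM-point values of Table~\ref{tbl:sig}, treating the ATLAS and CMS GF projections as separate entries; this is our reading of the combination, not a procedure spelled out in the text.

```python
import math

# SM-point (kappa = 1) significances read off Table III (rounded values);
# the ATLAS and CMS GF projections are kept as separate entries.
z_channels = [1.29, 1.32, 1.19, 1.65, 0.59, 1.38]

# The Poisson likelihood product factorizes, so the combined statistic
# is the quadrature sum of the per-channel significances.
z_combined = math.sqrt(sum(z * z for z in z_channels))
print(round(z_combined, 2))  # ~3.13
```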
The trilinear Higgs coupling might be modified sizeably in new physics (NP) models. We vary $\kappa$ to estimate the value of $\lambda_{HHH}$ needed for a $5\sigma$ significance in each $HH$ production channel; see the third column in Table~\ref{tbl:sig}.
If no evidence of Higgs pair productions were observed, one could set a $2\sigma$ exclusion limit on $\kappa$ for each $HH$ production channel; see the fourth column in Table~\ref{tbl:sig}.
Figure~\ref{fig:sig} displays the $2\sigma$ exclusion regions on $\kappa$ from the GF channel (gray band from the ATLAS study and cyan band from the CMS result) and from the $WHH$ and $ZHH$ productions (orange band). The most stringent lower and upper limits on $\kappa$ arise from the GF channel and the $VHH$ production, respectively, requiring $0.5\leq\kappa\leq 2.2$ at the 95\% confidence level.
\begin{figure}
\includegraphics[scale=0.3]{Ex.pdf}
\caption{The 95\% exclusion bounds on $\lambda_{HHH}=\kappa \lambda_{HHH}^{\rm SM}$ derived from the $VHH$ and GF channels at the HL-LHC.}
\label{fig:sig}
\end{figure}
\begin{acknowledgements}
The work is supported in part by the National Science Foundation of China under Grant Nos.~11275009, 11675002 and 11635001.
\end{acknowledgements}
\bibliographystyle{apsrev}
Let $M$ be a closed connected
$C^{\infty}$ manifold and $g$ a $C^{r}$ ($r\geq 2$) Riemannian metric on $M$.
Let $\phi_{t}:SM\rightarrow SM$ denote the geodesic flow of $g$, acting on the
unit sphere bundle $SM$. Denote the topological entropy of the
flow $\phi_{t}$ with respect to a compact subset $K$ of $SM$ by $h_{top}(K)$.
Then $h_{top} \df h_{top}(SM)$ is the topological entropy of $\phi_t$.
Given $p$ and $q$ in $M$ and $T>0$, define $n_{T}(p,q)$ as the number of
geodesic segments joining $p$ and $q$ with length $\leq T$. Already in 1962
Berger and Bott \cite{BB} observed that there are significant relationships
between integrals of this function and the dynamics of the geodesic flow. As
was pointed out in \cite{BB}, it is not hard to
see that, for each $T>0$, the counting function $n_{T}(p,q)$ is finite and locally constant on an
open full measure subset of $M\times M$, and integrable on $M \times M$. More generally, if $N$ is a compact submanifold of $M$, define $n_T(N,q)$ to be the number of geodesic segments with length $\leq T$ that join a point in $N$ to $q$ and are initially orthogonal to $N$. The function $n_T(N,q)$ enjoys properties similar to those of $n_T(p,q)$.
For $M$ and $N$ that are $C^\infty$, it was shown in \cite{PP} that Yomdin's theorem \cite{Y} can be used to prove that
\begin{equation}
\limsup_{T\rightarrow \infty}\frac{1}{T}\log \int_{M}n_{T}(N,q)\,dq
\leq h_{top}(S^\perp N),
\label{PPineq}
\end{equation}
where $S^\perp N$ is the set of unit vectors with footpoint in $N$ that are orthogonal to $N$. It was also shown in \cite{PP} that when $N$ is the diagonal in the product manifold $M \times M$, the above inequality reduces to
\begin{equation}
\limsup_{T\rightarrow \infty}\frac{1}{T}\log \int_{M\times M}n_{T}(p,q)\,dpdq
\leq h_{top} .
\label{2}
\end{equation}
Another proof of (\ref{2}) is given in \cite{M}.
On the other hand, for any $C^{r}$ Riemannian
metric ($r\geq 2$), Ma\~{n}\'{e} shows in \cite{M} that
\begin{equation}
\liminf_{T\rightarrow\infty}\frac{1}{T} \log \int_{M\times M}n_{T}(p,q)\,dpdq
\geq h_{top} .
\label{3}
\end{equation}
Ma\~{n}\'{e} thereby obtains the first purely Riemannian characterization of
$h_{top}$ for an arbitrary $C^{\infty}$ Riemannian metric: combining
(\ref{2}) and (\ref{3}) gives
\begin{equation}
\lim_{T\rightarrow \infty}\frac{1}{T}\log \int_{M\times M}n_{T}(p,q)\,dpdq
= h_{top}.
\label{4}
\end{equation}
Ma\~n\'e's result extends earlier work of Manning and Freire-Ma\~n\'e.
Suppose $\widetilde p$ is a lift of a point $p \in M$ to the universal cover
$\widetilde M$ of $M$ and $B_T(\widetilde p)$ is the ball of radius $T$ about
$\widetilde p$ in $\widetilde M$ (with the metric lifted from $M$). It follows
easily from the results in \cite{BB} that
\begin{equation}
\int_M n_{T}(p,q)\,dq \geq {\hbox {\rm Vol}}\,B_T(\widetilde p),
\label{4.5}
\end{equation}
with equality if $M$ has no conjugate points. Manning showed that,
for any $\widetilde p \in \widetilde M$, \ $T^{-1} \log {\hbox {\rm Vol}}\,B_T(\widetilde p)$
converges to a limit $\lambda$ that is independent of $\widetilde p$. From
(\ref{4}) and (\ref{4.5}), it is easy to obtain the inequality $h_{top} \geq
\lambda$ for any Riemannian manifold, which was first proved by Manning in \cite{Ma}.
One also sees that $h_{top} = \lambda$ if $M$ has no conjugate points, which was first proved by Freire and Ma\~n\'e in \cite{FM}.
Besides the natural appeal of a formula like (\ref{4}),
there are other reasons to be interested in relations between the topological
entropy of the geodesic flow and the growth rate of the
average number of geodesic segments between two points in the manifold. The
function $n_{T}(p,q)$ also counts the number of critical points of the energy
functional on the path space $\Omega^{T}(p,q)$ given by all the curves joining
$p$ and $q$ with length $\leq T$. Therefore using Morse theory, $n_{T}(p,q)$
can be bounded from below by the sum of the Betti numbers of $\Omega^{T}(p,q)$
(provided of course that $p$ and $q$ are not conjugate). By averaging over
$M$ and using results of Gromov \cite{Gr2}, one can obtain in this fashion
remarkable relations between the topology of $M$ and the vanishing of
$h_{top}$; we refer to \cite{Gr1,P,P1,PP} for a detailed description of these
relations.
The present paper continues the study of relationships between $h_{top}$ and
the exponential growth rate of $n_T(p,q)$. The strongest relationship is
for metrics with no conjugate points: Ma\~n\'e showed in \cite{M} that in this
case
\[ \lim_{T \to \infty}\frac1T \log n_T(p,q) = h_{top} \quad\hbox{\rm for all $p,q \in M$.}\]
For any $C^\infty$ Riemannian manifold, taking $N$ to be the single point $p$ in (\ref{PPineq}) gives the following inequality, that was first proved by G.P. Paternain in \cite{P}:
\begin{equation}
\limsup_{T\rightarrow\infty} \frac{1}{T} \log\int_{M}n_{T}(p,q)\,dq
\leq h_{top}(S_pM) \leq h_{top}\quad\hbox{\rm for all $p \in M$.} \label{5}
\end{equation}
As Ma\~{n}\'{e} indicates in \cite{M}, a simple argument using the Borel-Cantelli Lemma shows
that for every $p \in M$ one has
\begin{equation}
\limsup_{T \to \infty}\frac{1}{T}\log n_{T}(p,q) \leq
\limsup_{T\rightarrow\infty} \frac{1}{T} \log\int_{M}n_{T}(p,q')\,dq'
\quad\hbox{\rm for a.e. $q \in M$.} \label{5.1}
\end{equation}
It is immediate from (\ref{5}) and (\ref{5.1}) that for all $p\in M$ one has
\begin{equation}
\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)\leq h_{top}
\quad \hbox{for a.e. $q\in M$.} \label{5.2}
\end{equation}
This inequality motivated Ma\~{n}\'{e} to pose the following questions in
\cite{M}. \bigskip
\noindent{\bf Question I.} {\em Is it true that
\begin{equation}
\lim_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)= h_{top}
\quad\hbox{\rm for a.e. $(p,q)\in M\times M$?}
\label{6}
\end{equation}}
\noindent{\bf Question II.} {\em Is it true that equation (\ref{6}) holds for generic Riemannian
metrics when~${\hbox {\rm dim}}\, M = 2$?}
\medskip
The main purpose of the present paper is to give a negative answer to Question II.
Of course this also answers Question I negatively.
Because of inequalities (\ref{5}) and (\ref{5.1}) it is natural to consider modified versions of the first question. \bigskip
\noindent{\bf Question I$'$.} {\em Is it true that
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M} n_{T}(p,q)\,dq
= h_{top} \quad\hbox{ for all $p \in M$?}\]}
\noindent{\bf Question I$''$.} {\em Is it true that
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M} n_{T}(p,q)\,dq
= h_{top} \quad\hbox{for almost every $p \in M$?}\]}
We shall also give negative answers to Questions I$'$ and I$''$.
Because of (\ref{5.1}), a negative answer to Question I$''$ implies a negative answer to Question I. Our main example will be a surface for which Question I$''$ has a negative answer. The important features of this surface will be stable under small perturbations of the metric; we thereby obtain an open set of metrics for which the answer to Question I$''$ is negative. This implies a negative answer to Question II.
Let us describe the contents of the paper in more detail.
In Section 2 we obtain a slight improvement of inequality (\ref{5}) that is
needed for our main example.
We also prove an interesting related result, namely, that for
the case of surfaces, the set of points $p\in M$ for which inequality (\ref{5}) is an equality has non-empty interior.
This result will imply an alternative proof of Ma\~{n}\'{e}'s inequality (\ref{3}) for the two dimensional case.
In Section 3, we observe that the Weinstein examples, described in \cite{BeBe}, give rise to many manifolds
(such as ${\bf CP}^{k}$)
for which there exists a point $p$ such that $\displaystyle\int_{M}n_{T}(p,q)\,dq$ grows {\bf linearly} in $T$, even though $h_{top}>0$.
It follows from (\ref{5.1}) that $\limsup_{T \to \infty} T^{-1} \log n_T(p,q) = 0$
for a.e. $q \in M$.
All these examples have dimension
$\geq 3$. We also construct a metric on the two-sphere with $h_{top}>0$ which has a point $p$ such that all
the geodesics leaving from $p$ are simple, closed and have the same period, so again $\displaystyle\int_{M}n_{T}(p,q)\,dq$ grows {\bf linearly} in $T$.
Section 4 contains our main example that gives negative answers to
Ma\~{n}\'{e}'s Questions I and II and also to Question I$''$.
We start with a one parameter family of
surfaces of revolution $C_{d}$, $0< d<\infty$, which have the
properties shown in Figure 1.
\begin{figure}
\vspace{8cm}
\caption{Main example.}
\end{figure}
Each $C_d$ is diffeomorphic to the two sphere and contains a region $R$ bounded by a geodesic circle of latitude $\alpha$ which is shorter than any other geodesic circle of latitude in $R$.
Inside $R$ is a circle of latitude $\gamma_0$ that is a hyperbolic closed geodesic. The region $R$ contains geodesics that are both forward and backwards asymptotic to $\gamma_{0}$.
This means that $\gamma_{0}$ has a homoclinic connection.
Attached to $R$ there is a flat cylinder of length $d$ which we close smoothly with a cap $D$. Let $P$ be the center of the cap and $Q$ the center of the region $R$.
We perturb the metric in a small part of $R$ (shaded in Figure 1) so as to break the homoclinic
connection of $\gamma_{0}$ and obtain a horseshoe with entropy $h > 0$.
Moreover we arrange the perturbation so that the meridians of $C_{d}$ are preserved, i.e., all geodesics leaving from $P$ in the new metric are simple, closed, have the same period and they coincide outside $R$ with the meridians of the old metric.
A careful application of KAM theory (Lemma \ref{key}) shows that there is
$d_0 > 0$ such that for any $d\geq d_0$ the geodesic flow for the perturbed metric has an invariant torus whose projection to $C_{d}$ becomes singular on two curves, one near $P$, the other near $Q$, as shown in Figure 1.
For each $d$, the torus and its image under the flip $v\rightarrow -v$ separate the unit sphere bundle of $C_{d}$ into two invariant
sets, $W_{1}$ and $W_{2}$.
This forces the geodesics which pass sufficiently close to $P$ or $Q$ to cross
the piece of the surface around $\gamma_{0}$ fairly perpendicular to
$\gamma_{0}$; intuitively this separating surface keeps the geodesics that leave from points sufficiently close to $P$ or $Q$ away from the horseshoe. This property allows us to
estimate their Lyapunov exponents and make them small compared with $h$, by
choosing $d$ large enough.
If $W_{2}$ is the set that contains the horseshoe, then the topological
entropy on $W_{2}$ is at least $h$ and the topological entropy on $W_{1}$ can
be made smaller than $h$, since the Lyapunov exponents in $W_1$ can be made small, as we explained above. The results in Section 2 will imply
that if $p$ is sufficiently close to $P$ or $Q$, then
\[
\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)
\leq
\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q')\,dq'\leq h_{top}(W_{1}), \]
for almost every $q\in M$.
Thus there exists a positive measure set $U\subset M\times M$ such that for
$(p,q)\in U$
\begin{equation}
\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)\leq
\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q')\,dq'<h \leq
h_{top}.
\label{Upq}
\end{equation}
This gives negative answers to Question I and I$''$.
A careful look at the details in
Section 4 will show that for small perturbations of this example
(\ref{Upq}) still holds for a positive measure set of $p$ and $q$. Thus Question II also has a negative answer.
{\sl Acknowledgements:}
The second author is grateful to Detlef Gromoll for suggesting several years ago the study of the relationship between the exponential growth rate of $n_{T}(p,q)$ and $h_{top}$.
He is also grateful to the University of Maryland and Northwestern University for their hospitality while this work began.
The first author thanks the Universidad de la Rep\'ublica, Uruguay, for hospitality while this work was completed.
\section{\em Some properties of the function $n_{T}(p,q)$}
We begin this section by proving a result (cf. Proposition \ref{p1} below) that we shall use in our main example. In what follows we will always assume that the Riemannian metric is $C^{\infty}$.
In \cite{M}, Ma\~{n}\'{e} shows the following application of the Borel-Cantelli Lemma:
\begin{Lemma}Let $(X,{\cal A},\mu)$ be a probability space and $f_{n}:X\rightarrow (0,+\infty)$ a sequence of integrable functions. Then:
\[\limsup_{n\rightarrow\infty}\frac{1}{n}\log f_{n}(x)\leq\limsup_{n\rightarrow\infty}\frac{1}{n}\log\int_{X}f_{n}\,d\mu,\]
for $\mu$-a.e. $x\in X$. \label{Borel}
\end{Lemma}
As in the Introduction, denote by $SM$ the unit sphere bundle of $M$ and let $\pi:SM\rightarrow M$ be the canonical projection.
For $p\in M$, set $S_{p}= \pi^{-1}(p)$.
If $K\subset SM$ is a closed set, let $h_{top}(K)$ denote the topological entropy of the geodesic flow $\phi_{t}$ with respect to the set $K$; with this notation, $h_{top}(SM)=h_{top}$.
Combining Lemma \ref{Borel} above with Corollary 2.2 from \cite{PP} gives the following result; we include the proof for the convenience of the reader.
\begin{Proposition}Let $K$ be a closed subset of $SM$ and suppose there exists $p\in M$ such that $S_{p}\subset K$. Then
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)\leq
\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q')\,dq'\leq h_{top}(K),\]
for almost every $q\in M$. \label{p1}
\end{Proposition}
\PROOF It is known (cf. \cite{BB,P,PP}) that for any $p\in M$,
\begin{equation}
\int_{M}n_{T}(p,q)\,dq\leq\int_{0}^{T}{\hbox {\rm Vol}}(\phi_{t}S_{p})dt, \label{7'}
\end{equation}
where ``Vol" stands for the $n-1$ dimensional Riemannian volume ($n= {\hbox {\rm dim}}\, M$).
Now we use Yomdin's Theorem as stated in \cite{Gr1} to obtain:
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log {\hbox {\rm Vol}}(\phi_{T}S_{p})\leq h_{top}(S_{p}).\]
Since $S_{p}\subset K$, $h_{top}(S_{p})\leq h_{top}(K)$ and by combining the last inequality with (\ref{7'}) we obtain
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q)\,dq\leq h_{top}(K),\]
and by using Lemma \ref{Borel} we conclude:
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)\leq
\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q')\,dq'\leq h_{top}(K),\]
for almost every $q\in M$.
~\hfill~ $\diamond$ \vspace{7mm}
For brevity, let $\sigma_{p}$ be:
\[\sigma_{p}\df\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q)\,dq.\]
Then (\ref{5}) can be written as $\sigma_{p}\leq h_{top}$ for all $p\in M$.
Our main example in Section 4 possesses the following property: there exists an open set $V\subset M$ such that for any $p\in V$, $\sigma_{p}<h_{top}$.
To complete the description of the possible behavior of the
correspondence $p\rightarrow \sigma_{p}$ when ${\hbox {\rm dim}}\,M=2$, we now show that in this case
the set of points for which $\sigma_{p}=h_{top}$ always has
non-empty interior.
\begin{Theorem}Suppose ${\hbox {\rm dim}}\,M = 2$. Consider the set $\Omega$ of
points $p\in M$ such that
\[\lim_{T\rightarrow\infty}\frac{1}{T}\log\int_{M}n_{T}(p,q)\,dq=h_{top}.\]
Then $\Omega$ has non-empty interior. \label{teo}
\end{Theorem}
\PROOF For each vector $v\in S_{p}$ consider the Jacobi equation along
the geodesic $\gamma_v$ defined by $v$:
\begin{equation}
y''(t)+K(t)y(t)=0, \label{jacobi}
\end{equation}
where $K(t)$ is the Gaussian curvature of $M$ at $\gamma_v(t)$.
Let $y_{v}(t)$ be the solution of
(\ref{jacobi}) that satisfies: $y_{v}(0)=0$ and $y'_{v}(0)=1$.
On account of the results in \cite{BB} we have:
\begin{equation}
\int_{M}n_{T}(p,q)\,dq=\int_{0}^{T}ds\int_{S_{p}} | y_{v}(s) | \,dv.
\label{one}
\end{equation}
On the other hand by definition, if $l$ denotes length, we can write
\begin{equation}
l(\phi_{T}S_{p})=\int_{S_{p}}\sqrt{y_{v}^{2}(T)+(y_{v}')^{2}(T)}\,dv.
\label{two}
\end{equation}
Equations (\ref{one}) and (\ref{two}) clearly imply
\begin{equation}
\limsup_{T\rightarrow\infty}\frac{1}{T}\log \int_{M}n_{T}(p,q)\,dq\leq\limsup_{T\to\infty}\frac{1}{T}\log l(\phi_{T}S_{p}).
\label{three}
\end{equation}
Since $y'_{v}(0)=1$, it follows from (\ref{jacobi}) that
\[y_{v}'(T)=1-\int_{0}^{T}K(s)y_{v}(s)\,ds;\]
if $L$ is the maximum of the absolute value of the curvature on $M$, we obtain
\[ | y_{v}'(T) | \leq 1+L\int_{0}^{T} | y_{v}(s) | \,ds.\]
Therefore using (\ref{two}) we can write
\begin{eqnarray*}
l(\phi_{T}S_{p}) &\leq& \int_{S_{p}} \left( | y_{v}(T) |
+1+L\int_{0}^{T} | y_{v}(s) | \,ds \right) \,dv \\
&=& \int_{S_{p}} | y_{v}(T) | \,dv+l(S_{p})+L\int_{0}^{T}\int_{S_{p}}
| y_{v}(s) | \,dvds. \end{eqnarray*}
{}From the last inequality and (\ref{one}) we obtain
\begin{equation}
\liminf_{T\rightarrow\infty}\frac{1}{T}\log l(\phi_{T}S_{p})\leq \liminf_{T\rightarrow\infty}\frac{1}{T}\log \int_{M}n_{T}(p,q)\,dq. \label{four}
\end{equation}
Recall now from \cite[page 220]{N}, that there exists an arc $\alpha\subset SM$ such that
\begin{equation}
\lim_{T\rightarrow\infty}\frac{1}{T}\log l(\phi_{T}\alpha)=h_{top}. \label{six}
\end{equation}
Moreover, if $\mu$ is an ergodic measure of maximal entropy, $\alpha$ could be {\sl any} arc transversal to the Pesin stable manifolds of $\mu$.
Let $v_{0}$ be a Pesin point of $\mu$ and let $W^{s}(v_{0})$ denote the weak stable manifold through $v_{0}$.
By a well known property of the geodesic flow ($W^{s}(v_{0})$ is $\phi_{t}$-invariant), there exists $r>0$ such that $T_{\phi_{r}v_{0}}W^{s}(v_{0})\cap T_{\phi_{r}v_{0}}S_{\pi(\phi_{r}v_{0})}=\{0\}$, therefore the curve $S_{\pi(\phi_{r}v_{0})}$ is transversal to $W^{s}(v_{0})$ at the point $\phi_{r}v_{0}$.
Hence (\ref{three}), (\ref{four}) and (\ref{six}) imply:
\begin{equation}
\lim_{T\rightarrow\infty}\frac{1}{T}\log \int_{M}n_{T}(p,q)\,dq=h_{top}, \label{seven}
\end{equation}
where $p=\pi(\phi_{r}v_{0})$.
On the other hand, $\pi:W^{s}(v_{0})\rightarrow M$ is a local diffeomorphism at $\phi_{r}v_{0}$, since $T_{\phi_{r}v_{0}}W^{s}(v_{0})\cap T_{\phi_{r}v_{0}}S_{\pi(\phi_{r}v_{0})}=\{0\}$.
Therefore every unit circle $S_{p'}$, with foot point $p'$ in a neighborhood of $p=\pi(\phi_{r}v_{0})$, is transversal to $W^{s}(v_{0})$ and thus (\ref{seven}) has to hold for an open set around $p$.
~\hfill~ $\diamond$ \vspace{7mm}
We finish this section by showing how Theorem \ref{teo} implies Ma\~{n}\'{e}'s inequality (\ref{3}) for the two dimensional case.
\begin{Corollary}Suppose ${\hbox {\rm dim}}\,M = 2$. Then:
\[\liminf_{T\rightarrow\infty}\frac{1}{T} \log \int_{M\times M}n_{T}(p,q)\,dpdq\geq h_{top}.\]
\end{Corollary}
\PROOF For $p\in M$, set $I_{p}(T) = \int_{M}n_{T}(p,q)\,dq$.
Let $\Omega$ be the set from Theorem \ref{teo} and let $m$ denote its
measure (by Theorem \ref{teo}, $m>0$).
Then
\[ \int_{M\times M}n_{T}(p,q)\,dpdq=\int_{M}I_{p}(T)\,dp\geq \int_{\Omega}I_{p}(T)\,dp,\]
and by Jensen's inequality
\[\log \int_{M\times M}n_{T}(p,q)\,dpdq\geq\log m+\frac{1}{m}\int_{\Omega}\log I_{p}(T)\,dp.\]
Hence, by Fatou's lemma,
\[\liminf_{T\rightarrow\infty}\frac{1}{T} \log \int_{M\times M}n_{T}(p,q)\,dpdq\geq\frac{1}{m}\int_{\Omega}\sigma_{p}\,dp=h_{top}.\]
~\hfill~ $\diamond$ \vspace{7mm}
\section{\em Simple examples}
We consider in this section the following modified version of Question I:\medskip
\noindent{\bf Question I$'$.} {\em Is it true that
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log \int_{M}n_{T}(p,q)\,dq=h_{top}
\quad\hbox{ for all $p$?}\]}
Suppose $M^{n}=D\cup_{S^{n-1}} D_{N}$, where $D$ is an $n$-dimensional disk with center $p$ and $D_{N}$ is a disk bundle over a manifold $N$, such that its associated sphere bundle $\partial D_{N}$ is diffeomorphic to $S^{n-1}$.
A typical example of such a manifold is ${\bf CP}^{k}$; if we remove a disk from it, we obtain a disk bundle over ${\bf CP}^{k-1}$.
Let $g_{N}$ be any Riemannian metric on $N$, then it is shown in \cite{BeBe} that one can construct a metric $g$ on $M$ so that:
\begin{itemize}
\item all geodesics leaving from $p$ (the center of $D$) return to $p$ at exactly the same time.
\item $(N,g_{N})\rightarrow (M,g)$ is a totally geodesic isometric embedding.
\end{itemize}
Choose a metric $g_{N}$ for which the topological entropy of the geodesic flow of $N$ is positive. Since $N$ is totally geodesic, this implies that $h_{top}$ of the geodesic flow of $M$ is also positive.
On the other hand since every geodesic leaving from $p$ returns to $p$ at exactly the same time, it follows that ${\hbox {\rm Vol}}(\phi_{T}S_{p})$ is uniformly bounded for all $T$ and by equation (\ref{7'}), $\displaystyle\int_{M}n_{T}(p,q)\,dq$ grows linearly with $T$.
This gives a negative answer to Question I$'$.
In fact, it also shows that for some $p$ the growth of $\displaystyle\int_{M}n_{T}(p,q)\,dq$ could be only linear, even though $h_{top}>0$.
Observe that this construction requires ${\hbox {\rm dim}}\,N \geq 2$, and hence ${\hbox {\rm dim}}\,M \geq 3$, so it is natural to ask if similar examples can be constructed in the case of surfaces.
We shall show below that there exist metrics on $S^{2}$ with $h_{top}>0$ and the additional property that there exists a point $p$ such that all the geodesics leaving from $p$ are simple, closed and with the same period.
As a consequence, $\displaystyle\int_{M}n_{T}(p,q)\,dq$ grows {\sl linearly} with $T$.
Let $C$ be a surface of revolution diffeomorphic to $S^{2}$.
On $C$ the geodesic flow is completely integrable with an integral of motion --- the Clairaut integral --- given by $r\sin\phi$, where $r$ is the radial distance of a point on the surface to the axis of revolution and $\phi$ is the angle a geodesic makes with the meridians.
We assume that $C$ contains a circle of latitude $\gamma_{0}$ where the function $r$ has a nondegenerate minimum. This means that $\gamma_0$ is a hyperbolic closed geodesic. We also assume that the other circles of latitude where $r$ has the same value $r_0$ as on $\gamma_0$ are not critical points of $r$. Thus $\gamma_0$ is the only closed geodesic on which $r = r_0$.
The closed orbit of the geodesic flow of $C$ corresponding to the geodesic $\gamma_{0}$ has a homoclinic connection.
This means that the weak stable and weak unstable manifolds of this orbit coincide.
They lie in the set of vectors tangent to the geodesics for which the value of the Clairaut integral is $r_0$.
These geodesics are asymptotic to $\gamma_{0}$ as $t\rightarrow \pm\infty$.
One of them is shown in Figure 2.
Let $P$ and $Q$ denote the poles where the axis of revolution intersects $C$.
We parametrize $C$ with geodesic polar coordinates $(\theta,l)$, $\theta\in [0,2\pi)$ and $l\in [0,R]$.
In these coordinates $C$ is determined by the profile function $r(l)$, which is required to be $C^{\infty}$, that gives the radius of a circle of latitude at a distance $l$ from the point $Q$ at the bottom of the surface.
For technical reasons we assume that there exists $l_1\in (0,R)$ and $b>0$, such that for $l\in (l_1-b,l_1+b)$ we have $r(l)\equiv r_1$, where $r_1$ is a constant such that $r_1>r_0$.
We will also assume that the value $l_0$ of $l$ on $\gamma_{0}$ satisfies $l_0<l_1-b$ and that $r$ is strictly increasing for $l\in (l_0,l_1-b)$.
Let $F$ denote the flat region given by those $p\in C$ such that $l(p)\in [l_1-b,l_1+b]$.
A typical shape for $C$ is shown in Figure 2.
\begin{figure}
\vspace{8cm}
\caption{Surface of revolution $C$.}
\end{figure}
Recall that the meridians of $C$ are all simple closed geodesics with the same period.
\begin{Lemma}There exist arbitrarily small smooth perturbations of the Riemannian metric of $C$ with support in $F$ such that:
(1) The meridians are preserved, i.e. all geodesics leaving from $P$ are simple, closed and with the same period, and coincide with the meridians outside $F$.
(2) $\gamma_{0}$ possesses a {\bf transverse} homoclinic orbit.\label{meridians}
\end{Lemma}
\PROOF The assumptions made above (in particular the requirement $r_1 > r_0$) and the properties of the Clairaut integral ensure that there are geodesics that pass through the flat region $F$ and are both forwards and backwards asymptotic to $\gamma_0$; indeed any vector in $F$ that makes angle~$\phi$ with the meridians will be tangent to such a geodesic if $r_1\sin\phi = \pm r_0$. Let $\gamma_{su}$ be such a geodesic and let $p$ be a simple point of $\gamma_{su}$ that lies in $F$.
Parametrize $\gamma_{su}$ so that $\gamma_{su}(0) = p$.
Consider the frame $\{E_{1}, E_{2}\}$ in $T_{p}C$, where $E_{1}=\gamma_{su}'(0)$ and $E_{2}$ is the unit vector tangent to the meridian through $p$ pointing towards $P$.
Let $\{E_{1}(t), E_{2}(t)\}$ denote the frame in $T_{\gamma_{su}(t)}C$ (not orthonormal) obtained by parallel transport along $\gamma_{su}$.
Consider the map $f:{\bf R}^{2}\rightarrow C$ given by
\[f(t,x)=\exp_{\gamma_{su}(t)}(xE_{2}(t)).\]
For a small enough $\delta > 0$, the map $f$ is a diffeomorphism from $\Delta = (-\delta,\delta)^2$ to a neighborhood~$U \subset F$ of the point~$p$; the set~$U$ is shaded in Figure 2.
For each fixed $t$, the curves $x\rightarrow f(t,x)$ are meridians parametrized by arc length.
In these coordinates the metric of $C$ satisfies:
\[g_{11}(t,x)=1,\;\;\;g_{12}(t,x)= a,\;\;\;g_{22}(t,x)=1,\]
where $a$ is a constant satisfying $-1 < a < 1$.
Let $\alpha : {\bf R}^2 \rightarrow {\bf R}$ be a smooth function with support inside $\Delta$
and let $g^\alpha$ be the metric defined by:
\[g^\alpha_{11}(t,x) = 1 - \alpha(t,x)x^{2},\;\;\;g^\alpha_{12}(t,x)= a,\;\;\;g^\alpha_{22}(t,x)=1.\]
A simple computation shows that the Christoffel symbols of $g^\alpha$ satisfy:
\[\Gamma^{1}_{22}\equiv \Gamma^{2}_{22}\equiv 0,\]
\[\Gamma^{1}_{11}(t,0)=\Gamma^{2}_{11}(t,0)=0.\]
Therefore the curve, $t\rightarrow f(t,0)$, and the curves, $x\rightarrow f(t_{0},x)$ for each fixed~$t_0$, are still geodesics, and thus the perturbation $g^\alpha$ preserves the geodesic $\gamma_{su}$ and the meridians.
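For the reader's convenience we sketch the computation behind the identities displayed above. In the coordinates $(x^{1},x^{2})=(t,x)$ the only nonvanishing first derivatives of $g^\alpha$ are
\[\partial_{1}g^\alpha_{11}=-\frac{\partial\alpha}{\partial t}\,x^{2},\qquad
\partial_{2}g^\alpha_{11}=-\frac{\partial\alpha}{\partial x}\,x^{2}-2\alpha x,\]
and both vanish at $x=0$. Hence, from
\[\Gamma^{k}_{ij}=\frac{1}{2}g^{kl}\left(\partial_{i}g_{jl}+\partial_{j}g_{il}-\partial_{l}g_{ij}\right),\]
we get $\Gamma^{k}_{22}\equiv 0$ (only derivatives of the constants $g^\alpha_{12}$ and $g^\alpha_{22}$ occur) and $\Gamma^{k}_{11}(t,0)=0$.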
A further computation shows that the Gaussian curvature of $g^\alpha$ at the point $\gamma_{su}(t) = f(t,0)$ is given by:
\begin{equation}
K_\alpha(t,0)=\frac{\alpha(t,0)}{1-a^{2}}. \label{gauss}
\end{equation}
This equation and Donnay's arguments from \cite{D} imply that for a suitable choice of the function~$\alpha$ the stable and unstable manifolds of $\gamma_{0}$ must have a {\sl transverse} intersection at the point $\gamma_{su}'(0)$, which concludes the proof of the lemma.
For the convenience of the reader, we give a brief sketch of Donnay's idea.
Let $H^-$ and $H^+$ be the projections to $C$ of the strong stable and strong unstable manifolds associated to the geodesic $\gamma_{su}$.
Recall that the strong stable and strong unstable manifolds are given by the unit normal vectors to $H^-$ and $H^+$ that point to the same side as the tangent vector to $\gamma_{su}$.
The geodesic curvatures at $\gamma_{su}(t)$ of $H^-$ and $H^+$ are solutions $u^-$ and $u^+$ of the Riccati equation
\begin{equation}
u'(t) + u^2(t) + K(t) = 0, \label{riccati}
\end{equation}
where $K(t)$ is the curvature at $\gamma_{su}(t)$. Before the perturbation we have $u^-_{old} \equiv u^+_{old}$. Let $t_1$ be the time when $\gamma_{su}$ enters and $t_2$ the time when $\gamma_{su}$ leaves the support of the perturbation of the metric. Then $u^-_{new}(t) = u^-_{old}(t)$ for $t \geq t_2$ and $u^+_{new}(t) = u^+_{old}(t)$ for $t \leq t_1$. All that one needs to arrange is that $u^+_{new}(t_2) \neq u^-_{new}(t_2) = u^+_{old}(t_2)$. It is clear from (\ref{gauss}) and (\ref{riccati}) that this will be the case if the function $\alpha$ is chosen suitably.
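The last step can be made explicit by a variation-of-constants computation, which we sketch. Set $w=u^{+}_{new}-u^{+}_{old}$ and $\Delta K=K_{new}-K_{old}$ along $\gamma_{su}$. Subtracting the two Riccati equations (\ref{riccati}) gives
\[w'(t)=-\bigl(u^{+}_{new}(t)+u^{+}_{old}(t)\bigr)w(t)-\Delta K(t),\qquad w(t_{1})=0,\]
and hence
\[w(t_{2})=-\int_{t_{1}}^{t_{2}}\exp\Bigl(-\int_{s}^{t_{2}}\bigl(u^{+}_{new}+u^{+}_{old}\bigr)(\tau)\,d\tau\Bigr)\,\Delta K(s)\,ds.\]
In particular $w(t_{2})\neq 0$, i.e. $u^{+}_{new}(t_{2})\neq u^{+}_{old}(t_{2})$, whenever $\Delta K$ has a fixed sign and is not identically zero on $[t_{1},t_{2}]$; by (\ref{gauss}) this can be arranged by a suitable choice of $\alpha$.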
~\hfill~ $\diamond$ \vspace{7mm}
\section{\em The main example}
As mentioned in the introduction, we begin with a one parameter family of surfaces of revolution $C_d$, $0<d<\infty$, which have
the properties shown in Figure 1.
Each $C_d$ contains a region $R$ whose geometry is the same for all $d$. The boundary of $R$ is a geodesic circle of latitude $\alpha$, which is shorter than any other geodesic circle of latitude in $R$: this ensures that any geodesic which enters $R$ must leave after a finite time. The region $R$ contains a flat cylinder $F$ and a hyperbolic closed geodesic $\gamma_{0}$; the distance of the surface from the axis of revolution is strictly decreasing as one moves from $F$ to $\gamma_{0}$. Attached to $R$ is a flat cylinder of length $d$ which is smoothly closed with a cap $D$; the geometry of $D$ is independent of $d$. The boundary of $D$ is a closed geodesic, which we shall denote by $\beta$. We choose $D$ so that $\beta$ is the only circle of latitude in $D$ that is a geodesic: this ensures that any geodesic which enters $D$ must leave after a finite time. Let $P$ be the center of the cap $D$ and $Q$ the center of the region $R$.
We parametrize $C_d$ with geodesic polar coordinates $(\theta,l)$ where $\theta \in {\bf R}/2\pi$ and $l$ is the geodesic distance of a point from $Q$. In these coordinates $C_d$ is determined by the profile function $r(l)$ which gives the distance from the axis of revolution of the circle of latitude which is at geodesic distance $l$ from $Q$. We choose $r(l)$ to be $C^\infty$. Let $l_\alpha$ be the value of $l$ on $\alpha$. Let $\rho = r(l_\alpha)$. Finally we introduce a third coordinate $\phi \in (-\pi/2,\pi/2]$ on the unit tangent bundle of $C_d \setminus \{P,Q\}$, which is the angle measured in the counterclockwise direction between a vector and the meridian passing through the point where the vector is based (we shall take $\phi = \pi/2$ for the tangent vectors to the circle of latitude where this description is ambiguous).
Define $S_1$ and $S_4$ to be the sets of unit vectors with footpoints on $\alpha$ that point out of and into $R$ respectively. Define $S_3$ and $S_2$ to be the sets of unit vectors with footpoints on $\beta$ that point out of and into $D$ respectively. The coordinates $\theta$ and $\phi$ allow us to identify $S_i$ with $ ({\bf R}/2\pi) \times (-\pi/2,\pi/2)$ for $i = 1,\dots,4$.
For a vector $v \in S_1$, let us follow the geodesic $\gamma_v$ defined by $v$ starting from time $t = 0$. We have chosen the geometry of $R$ and $D$ so that $\phi_t(v) = \gamma'_v(t)$ moves from $S_1$, through $S_2$, $S_3$ and $S_4$ in succession and then returns to $S_1$.
This process defines maps from
$S_{i}$ to $S_{i+1}$ for $i=1,\dots,4$~mod $4$,
which induce maps
\[\psi_{i}:({\bf R}/2\pi) \times (-\pi/2,\pi/2)\rightarrow ({\bf R}/2\pi) \times (-\pi/2,\pi/2),\;\;\;\;i=1,\dots,4.\]
Elementary trigonometry shows that both passages through the flat cylinder advance $\theta$ in the same rotational direction; in our angle coordinates (in which the sign of $\phi$ is reversed between the two passages) this reads
\begin{equation}
\psi_{1}(\theta,\phi) = \Bigl(\theta + \frac{d\tan \phi}{\rho},\, \phi\Bigr),\qquad
\psi_{3}(\theta,\phi) = \Bigl(\theta - \frac{d\tan \phi}{\rho},\, \phi\Bigr).
\label{cylinder}
\end{equation}
The properties of the Clairaut integral imply that
\begin{equation}
\psi_i(\theta,\phi) = (\theta + a_i(\phi), -\phi), \qquad\hbox{for $i=2,4$,} \label{minus}
\end{equation}
where $a_2$ and $a_4$ are $C^\infty$ functions satisfying
\begin{equation}
a_2(0)= \pi = a_4(0).
\label{aa}
\end{equation}
Notice that $a_2$ and $a_4$ {\bf do not} depend on $d$.
As explained in the introduction, we make a small perturbation of the surfaces of revolution $C_d$.
Recall from Section 3 (cf. Lemma \ref{meridians}) that there exist arbitrarily small, smooth perturbations of the metric, such that:
(1) The support of the perturbation is confined to the interior of the flat cylinder $F \subset R$.
(2) The meridians are preserved, i.e. all geodesics leaving from $P$ are simple, closed and with the same period, and coincide with the meridians outside $F$.
(3) $\gamma_{0}$ possesses a transverse homoclinic orbit.
We can also assume that the perturbed metric has the following property:
(4) There is a constant $\phi_0 > 0$ such that every geodesic that enters $R$ through $\alpha$ with $|\phi| < \phi_0$ must exit $R$ after a finite time.
To see this, note that we can choose $\phi_0 > 0$, $T_0 > 0$ and $\varepsilon_0 > 0$ so that every geodesic of the original metric that enters $R$ with $|\phi| \leq 2\phi_0$ must exit the $2\varepsilon_0$ neighborhood of $R$ by time $T_0$ at the latest. If the perturbation is small enough, every geodesic of the new metric that crosses $\alpha$ with $|\phi| \leq \phi_0$ must exit the $\varepsilon_0$ neighborhood of $R$ by time $2T_0$ at the latest.
The above properties ensure that the orbits of the geodesic flow that start in $S_1$ with $|\phi| < \phi_0$ still pass through $S_2$, $S_3$ and $S_4$ in succession and then return to $S_1$. The transitions from $S_i$ to $S_{i+1}$, for $i = 1,2,3$, are exactly the same as for the unperturbed metric and do not change $|\phi|$. Property (4) means that every orbit that leaves $S_4$ with $|\phi| < \phi_0$ must pass through $S_1$. Let $\tilde{\psi}_{4}:({\bf R}/2\pi) \times (-\phi_0,\phi_0)\rightarrow ({\bf R}/2\pi) \times (-\pi/2,\pi/2)$ be the map induced by the transition from $S_{4}$ to $S_{1}$ for the perturbed surface.
Notice that $\tilde{\psi}_{4}$ is determined by the metric in $R$ and is independent of $d$.
The next result will be crucial for our construction:
\begin{Lemma}There exists $d_{0}>0$ and a perturbation of the metric in $R$ satisfying properties (1)--(4) above, such that for any $d > d_{0}$, the map
\[ \tilde\Psi_d \df \tilde \psi_4 \circ \psi_3 \circ \psi_2 \circ \psi_1 : ({\bf R}/2\pi) \times (-\phi_0,\phi_0) \to ({\bf R}/2\pi) \times (-\pi/2,\pi/2)\]
has a homotopically nontrivial invariant circle in the set
\( ({\bf R}/2\pi) \times [-1/d,1/d] . \) \label{key}
\end{Lemma}
\PROOF
For $d>0$ we define a scaling map $\varphi_{d} : ({\bf R}/2\pi) \times {\bf R} \to ({\bf R}/2\pi) \times {\bf R}$ by $\varphi_{d}(\theta,\phi) \df (\theta,d\phi)$.
Observe that $\varphi_{d}$ maps the region $| \phi| \leq 1/d$ diffeomorphically onto the set \[ S\df ({\bf R}/2\pi) \times [-1,1].\] In order to show that $\tilde\Psi_d$ has an invariant circle in the region $|\phi| < 1/d$, it suffices to show that the map \[\tilde k_d \df \varphi_d \circ \tilde\Psi_d \circ \varphi_d^{-1} : S \to ({\bf R}/2\pi) \times {\bf R},\] which is well defined for $d > 1/\phi_0$, has an invariant circle.
Let $\Psi_d = \psi_4 \circ \psi_3 \circ \psi_2 \circ \psi_1$ and let $k_d = \varphi_d\circ \Psi_d \circ \varphi_d^{-1}$.
Our first step is to study the limiting behaviour of $k_d$ and $\tilde k_d$ as $d \to \infty$ using:
\begin{Lemma}Let $f=(g,h):({\bf R}/2\pi) \times (-\phi_0,\phi_0)\rightarrow ({\bf R}/2\pi) \times (-\pi/2,\pi/2)$ be a $C^{\infty}$ map with the property
\[h(\theta,0)=0,\]
for all $\theta\in {\bf R}/2\pi $.
For $d>1/\phi_0$ define
\[f_{d}=\varphi_{d}\circ f\circ\varphi_{d}^{-1}:S\rightarrow ({\bf R}/2\pi) \times {\bf R}.\]
Then $f_{d}$ converges in the $C^{\infty}$ topology as $d \to \infty$ to the map
\[(\theta,\phi)\rightarrow (g(\theta,0),\phi\frac{\partial h}{\partial \phi}(\theta,0)).\] \label{technical}
\end{Lemma}
\PROOF
Since $h(\theta,0) = 0$, there is a $C^\infty$ function $H(\theta,\phi)$ such that
\begin{equation}
h(\theta,\phi) = \phi H(\theta,\phi).
\label{key*}
\end{equation}
We have
\begin{equation}
f_d(\theta,\phi) = (g(\theta,\phi/d),dh(\theta,\phi/d)).
\label{key**}
\end{equation}
It is easy to see that as $d \to \infty$ the first component of $f_d$ converges in the $C^\infty$-topology to the function
\[ (\theta,\phi) \to g(\theta,0).\]
It follows from (\ref{key*}) and (\ref{key**}) that the second component of $f_d$ is
\[ (\theta,\phi) \to \phi H(\theta,\phi/d),\]
which converges in the $C^\infty$-topology to the function
\[ (\theta,\phi) \to \phi H(\theta,0) = \phi \frac{\partial h}{\partial \phi}(\theta,0).\]
~\hfill~ $\diamond$ \vspace{7mm}
Now observe that $k_{d}$ is the composition of the four maps $\varphi_{d}\circ\psi_{i}\circ\varphi^{-1}_{d}$, $i=1,\dots,4$.
It is easily seen from (\ref{cylinder}) that for $i = 1$ and $3$
\[\varphi_{d}\circ\psi_{i}\circ\varphi^{-1}_{d}(\theta,\phi)=(\theta\pm\frac{d}{\rho}\tan\frac{\phi}{d},\phi),\]
which converges in the $C^{\infty}$ topology to the map
\[(\theta,\phi)\rightarrow (\theta\pm\frac{\phi}{\rho},\phi).\]
It follows from equations (\ref{minus}), (\ref{aa}) and Lemma \ref{technical} that for $i = 2$ and $4$ the map $\varphi_{d}\circ\psi_{i}\circ\varphi^{-1}_{d}$ converges to the map
\begin{equation}
(\theta,\phi)\to (\theta+\pi,-\phi).
\label{eins}
\end{equation}
A simple calculation now shows that as $d \to \infty$ the map $k_d$ converges in the $C^\infty$ topology to the map
\[ k_\infty (\theta,\phi) = ( \theta + \frac{2\phi}{\rho} , \phi ). \]
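For completeness, here is the composition that produces $k_{\infty}$: the limits of the two cap transitions are $(\theta,\phi)\mapsto(\theta+\pi,-\phi)$ by (\ref{eins}), while each limiting cylinder passage advances $\theta$ by $\phi/\rho$ measured along the direction of travel, so that
\[(\theta,\phi)\longmapsto\Bigl(\theta+\frac{\phi}{\rho},\,\phi\Bigr)\longmapsto\Bigl(\theta+\frac{\phi}{\rho}+\pi,\,-\phi\Bigr)\longmapsto\Bigl(\theta+\frac{2\phi}{\rho}+\pi,\,-\phi\Bigr)\longmapsto\Bigl(\theta+\frac{2\phi}{\rho}+2\pi,\,\phi\Bigr)=\Bigl(\theta+\frac{2\phi}{\rho},\,\phi\Bigr).\]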
Observe that $k_\infty$ is a nondegenerate twist map. By Moser's twist theorem \cite{Mo}, there is a neighbourhood $\cal U$ of $k_\infty$ in the $C^\infty$ topology such that every map in $\cal U$ possesses a homotopically nontrivial invariant circle.
Now consider the perturbed surface.
Let $\tilde{\psi}_{4}=(\tilde{g},\tilde{h})$.
It follows immediately from Lemma \ref{technical} that $\varphi_{d}\circ\tilde{\psi_{4}}\circ\varphi^{-1}_{d}$ converges to the map
\begin{equation}
(\theta,\phi)\to (\tilde{g}(\theta,0),\phi\frac{\partial \tilde{h}}{\partial \phi}(\theta,0)).
\label{zwei}
\end{equation}
We now know that $\tilde k_d$ is the composition of $\varphi_{d}\circ\tilde\psi_{4}\circ\varphi^{-1}_{d}$ and $\varphi_{d}\circ\psi_{i}\circ\varphi^{-1}_{d}$, $i=1,2,3$, and each of these maps approaches a limit as $d \to \infty$. Hence there is a map $\tilde k_\infty$ such that $\tilde k_d$ converges to $\tilde k_\infty$ in the $C^\infty$ topology as $d \to \infty$. If the perturbation of the metric is small enough, $\tilde\psi_4$ will be close to $\psi_4$, the map in (\ref{zwei}) will be close to the map in (\ref{eins}), and $\tilde k_\infty$ will be in $\cal U$. It follows that for such a perturbation, $\tilde k_d$ belongs to $\cal U$, and thus has a homotopically nontrivial invariant circle, for all large enough $d$. This completes the proof of Lemma \ref{key}.
~\hfill~ $\diamond$ \vspace{7mm}
{From} now on we shall assume that $d \geq d_0$ so the conclusion of Lemma \ref{key} holds.
Let $\tilde{C}_{d}$ denote the perturbation of $C_{d}$ we constructed above.
Observe that the invariant circle of the map $\tilde\Psi_d$ in the region $|\phi | \leq 1/d$, gives rise to an invariant torus $T_{d}$ for the geodesic flow of $\tilde{C}_{d}$.
The projection map $\pi\mid_{T_{d}} : T_{d} \to \tilde{C}_{d}$ becomes singular on two simple closed curves which project to simple closed curves $\Gamma_{P,d}$ and $\Gamma_{Q,d}$ as indicated in Figure 1.
If $d$ is large enough, $\Gamma_{P,d}$ will lie in $D$ and $\Gamma_{Q,d}$ will lie in $R$.
Henceforth we assume that all $d\geq d_{0}$ have this property.
We shall also drop the subscript $``d"$ in order to simplify the notation.
Let $\Theta: S\tilde{C}_{d} \to S\tilde{C}_{d}$ denote the flip:
\[\Theta(p,v)=(p,-v).\]
Then $\Theta T_{d}$ is also an invariant torus of the geodesic flow, and the projection map $\pi\mid_{\Theta T_{d}}$ becomes singular above the curves $\Gamma_{P}$ and $ \Gamma_{Q}$.
Let $V_{P}$ and $V_{Q}$ be the open neighborhoods of $P$ and $Q$ that are bounded by $\Gamma_{P}$ and $\Gamma_{Q}$ respectively.
Clearly, $T_{d}$ and $\Theta T_{d}$ separate the unit sphere bundle of
$\tilde{C}_{d}$ into two invariant sets, $W_{1}$ and $W_{2}$.
One of them, which we shall call $W_{1}$, contains $S(V_{P}\cup V_{Q})$.
If $v$ is in $W_{1}$ the geodesic $\gamma_{v}$ oscillates between $D$ and $R$; in particular $\gamma_{v}$ enters both $D$ and $R$.
It is clear from this that the set $W_{2}$ contains the unit vectors tangent to $\gamma_{0}$, and thus the transverse homoclinic orbit associated with $\gamma_{0}$.
\begin{Lemma} There exists $t_{0}>0$, such that if $\gamma$ is any geodesic leaving from a point in $V_{P}\cup V_{Q}$, then $\gamma$ spends time at most $t_{0}$ inside $D\cup R$ (in each passage). The time $t_0$ is independent of $d$. \label{time}
\end{Lemma}
\PROOF The tangent vectors to $\gamma$ belong to the invariant set $W_1$ defined above.
Observe that $\overline{W}_1$ is compact and does not contain the tangent vectors to $\alpha$ and $\beta$, the boundaries of $R$ and $D$ respectively.
Both regions $R$ and $D$ have the property that their boundary is a closed geodesic and that any geodesic which crosses the boundary and enters the region must exit after finite time.
Since geodesics always cross transversally, it is easy to see that the time between entry and exit is a continuous function of the tangent vector at the time of entry.
Let $t_{0}$ be the supremum of the lengths of the geodesic segments in $D\cup R$ that are part of geodesics whose tangent vectors are in $\overline{W}_1$.
It is easy to see from the above that $t_{0}<\infty$ and each passage of $\gamma$ through $D\cup R$ takes time at most $t_{0}$.
~\hfill~ $\diamond$ \vspace{7mm}
\begin{Lemma}Given $\varepsilon>0$, there exists $d(\varepsilon)$ such that for $d>d(\varepsilon)$, the geodesic flow of $\tilde{C}_{d}$ satisfies:
\[h_{top}({W_{1}})<\varepsilon.\] \label{entropy}
\end{Lemma}
\PROOF On account of the variational principle for entropy and the relationship between Liapunov exponents and entropy, it suffices to show that given $\varepsilon>0$, there exists $d(\varepsilon)$ such that for $d>d(\varepsilon)$ and all $v\in W_{1}$ we have:
\[\limsup_{t\rightarrow +\infty}\frac{1}{t}\log \parallel d\phi_{t}(v)\parallel<\varepsilon.\]
We follow an argument of Manning \cite{Manning}.
Let $K$ denote the Gaussian curvature of $\tilde{C}_{d}$ and let $Y$ denote any Jacobi field along the geodesic $\gamma_{v}$ defined by $v$.
Also let $L$ be an upper bound for $ K^2$ in the region $R\cup D$. We may assume that $L > 1$ and $\varepsilon < 1$.
Consider the function:
\[ y_{\varepsilon}(t) \df \varepsilon^{2} \langle Y(t),Y(t) \rangle + \langle Y'(t),Y'(t) \rangle. \]
Using the Jacobi equation we deduce:
\[ y_{\varepsilon}'(t) = 2\varepsilon^{2} \langle Y'(t),Y(t)\rangle + 2\langle -K(t)Y(t),Y'(t) \rangle. \]
Let $t$ be such that $\gamma_{v}(t)\in R\cup D$. Since $\varepsilon<L$ and $|K(t)|\leq\sqrt{L}\leq L^{2}$, we have
\[ |y_{\varepsilon}'(t)| \leq 2\varepsilon^{2} \| Y'(t)\|\| Y(t)\| + 2L^{2}\| Y'(t)\|\| Y(t)\| \leq 4L^{2}\| Y'(t)\|\| Y(t)\| \]
\[ = \frac{2L^{2}}{\varepsilon} (2\varepsilon \| Y'(t)\|\| Y(t)\|) \leq \frac{2L^{2}}{\varepsilon} (\varepsilon^{2} \langle Y(t),Y(t)\rangle + \langle Y'(t),Y'(t) \rangle) \]
\begin{equation}
= \frac{2L^{2}}{\varepsilon}y_{\varepsilon}(t). \label{cinco}
\end{equation}
On the other hand, if $t$ is such that $\gamma_{v}(t)$ belongs to the flat cylinder, we have:
\[ y_{\varepsilon}'(t) = 2\varepsilon^{2}\langle Y'(t),Y(t) \rangle , \]
and thus
\begin{equation}
|y_{\varepsilon}'(t)| \leq 2\varepsilon^{2}\| Y'(t)\|\| Y(t)\|
\leq \varepsilon(\varepsilon^{2} \langle Y(t),Y(t) \rangle + \langle Y'(t),Y'(t) \rangle) = \varepsilon y_{\varepsilon}(t). \label{seis}
\end{equation}
Next observe that
\[ \limsup_{t\rightarrow + \infty}\frac{1}{t}\log \| d\phi_{t}(v)\| \leq \limsup_{t \to +\infty} \frac{1}{t} \log\sqrt{\frac{y_{\varepsilon}(t)}{\varepsilon^{2}}}\]
\begin{equation}
=\limsup_{t\rightarrow+\infty}\frac{1}{2t}\log y_{\varepsilon}(t)=\limsup_{t\rightarrow+\infty}\frac{1}{2t}\int_{0}^{t}\frac{y_{\varepsilon}'(s)}{y_{\varepsilon}(s)}\,ds. \label{siete}
\end{equation}
Recall that the geodesic $\gamma_{v}$ oscillates between $D$ and $R$.
In each passage through the flat cylinder between $D$ and $R$, $\gamma_{v}$ spends time at least $d$ and by Lemma \ref{time}, it spends time at most $t_{0}$ during each visit to $R\cup D$.
By combining equations (\ref{cinco}), (\ref{seis}) and (\ref{siete}) we deduce:
\[\limsup_{t \to +\infty}\frac{1}{t}\log \| d\phi_{t}(v) \| \leq \frac{L^{2}t_{0}}{d\varepsilon} + \frac{\varepsilon}{2}.\]
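Let us sketch how this follows. If $\tau(t)$ denotes the total time that $\gamma_{v}$ spends in $R\cup D$ up to time $t$, then by (\ref{cinco}) and (\ref{seis})
\[\frac{1}{2t}\int_{0}^{t}\frac{y_{\varepsilon}'(s)}{y_{\varepsilon}(s)}\,ds\leq\frac{1}{2t}\Bigl(\frac{2L^{2}}{\varepsilon}\,\tau(t)+\varepsilon\bigl(t-\tau(t)\bigr)\Bigr)\leq\frac{L^{2}}{\varepsilon t}\,\tau(t)+\frac{\varepsilon}{2},\]
and since consecutive visits to $R\cup D$ are separated by a cylinder passage of duration at least $d$, we have $\tau(t)\leq t_{0}(t/d+1)$; letting $t\rightarrow\infty$ gives the stated bound.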
If we take $d > d(\varepsilon) \df 2L^{2}t_{0} / \varepsilon^{2}$, we obtain
\[ \limsup_{t \to +\infty}\frac{1}{t}\log \| d\phi_{t}(v)\| < \varepsilon. \]
~\hfill~ $\diamond$ \vspace{7mm}
We are ready to prove the main result of this section:
\begin{Theorem}For all $d$ sufficiently large and $p\in V_{P}\cup V_{Q}$,
\[\limsup_{T\rightarrow\infty}\frac{1}{T}\log n_{T}(p,q)\leq
\limsup_{T\rightarrow\infty}\frac{1}{T}\log\int_{\tilde{C}_{d}}n_{T}(p,q)\,dq< h_{top},\]
for a.e. $q\in \tilde{C}_{d}$.
\end{Theorem}
\PROOF Since $\pi^{-1}(V_{P}\cup V_{Q}) \subset W_1$, we see from Proposition \ref{p1} that it suffices to show that for all $d$ sufficiently large,
\[h_{top}({W_{1}})<h_{top}.\]
Let $h>0$ denote the entropy of the horseshoe associated with the transverse homoclinic orbit of $\gamma_{0}$; obviously $h$ is independent of $d$ and $h_{top}(\phi_{t})\geq h$.
Choosing $\varepsilon<h$ in Lemma \ref{entropy}, we obtain $h_{top}(W_{1})<h\leq h_{top}$ for all $d>d(\varepsilon)$, and the theorem follows.
~\hfill~ $\diamond$ \vspace{7mm}
This theorem gives negative answers to Questions I, I$'$ and I$''$ in the Introduction.
Now observe that the tori $T_d$ and $\Theta T_d$ that were considered above will survive under any sufficiently small (in the $C^\infty$ topology) perturbation of the metric that we constructed above. These tori will still separate the unit tangent bundle and the conclusions of Lemma 4.4 and Theorem 4.5 carry over. It is clear from this that Question II in the introduction has a negative answer.
Finally we observe that since $h_{top}(S_p) \leq h_{top}(W_1)$ for any $p \in V_{P} \cup V_{Q}$, we have $h_{top}(S_p) < h_{top}$ for any $p \in V_{P} \cup V_{Q}$.
In conclusion, we revealed the existence of virtual polarization states of a metasurface stack by analyzing the reflection paths of its internal modal interaction. Analogously to Feynman paths, they represent all possible paths light can take during propagation through the stack and allow for the formulation of an interaction picture.
In this work we applied a geometric expansion to the S-matrix of an anisotropic patch-wire metasurface stack under the necessary condition of the FMA. We demonstrated that its transmission could be separated into a leading transmissive term and a series of interferometric terms, representing the reflection paths of the stack. By truncating the series and analyzing its constituent coefficients, we revealed the properties of paths of different order as well as their influence on the overall response.
The knowledge of reflection paths could prove useful in understanding the interaction of more complex stacks with multiple diffraction channels \cite{Chen2019a}. Furthermore, we believe that the concept of Feynman paths could help in developing semi-analytic models of near-field interactions of complex nano-structures which can be challenging to comprehend, even numerically \cite{Helgert2011,Kenanakis2015,Forouzmand2018}.
Finally, we would like to emphasize the benefit of adopting methodology from different fields of physics and identifying their similarities in order to gain more insight on certain physical phenomena.
\begin{acknowledgments}
We would like to thank Prof. Asger Mortensen for discussions leading to the idea of exploiting methods from electronic transport in mesoscopic systems. Furthermore, we would like to thank Prof. Yuri Kivshar for discussions on the scope of the paper. Thanks are also due to our fabrication team, Michael Steinert, Waltraut Gr\"af, Holger Schmidt, and Daniel Voigt, for their support in the experimental realization of the patch-wire stack. We gratefully acknowledge financial support by the German Federal Ministry of Education and Research in the program 'Zwanzig20 - Partnership for Innovation' as part of the research alliance 3Dsensation (grant numbers 03ZZ0466, 03ZZ0471D, and 03ZZ0451 as well as 03Z1H534).
\end{acknowledgments}
The calculation of effective potentials for gauge fields to one loop order
and beyond is certainly an important task. Recently Elizalde, Odintsov and
Romeo, \cite{eli}, have carried out such a calculation for a covariantly
constant $SU(N)$-field on a curved background of the form $S^2\times R^2,
S^1\times S^1\times R^2$, in the limit of small curvature (i.e. large radii).
Elsewhere we have considered the same problem in a more general setting,
\cite{os}, namely a general Yang-Mills field on a curved background with
not too violently varying curvature. We believe that this method solves some
of the problems faced by Elizalde et al. A fuller investigation of this
is in the making and will be submitted shortly, \cite{os2}. \footnote{In the
first of these two papers, a mistake has crept in: a term of the form
$\delta^a_b R_m^n$, where $R_m^n$ is the Ricci tensor was missing, this
mistake has been corrected in \cite{os2}, which also contains many
applications.}
Elizalde has also given a thorough discussion of the techniques underlying
his work with Odintsov and Romeo in \cite{eli2}.
As shown in \cite{os}, the effective Lagrangian for a general Yang-Mills field
on a general curved background can be written as
\begin{eqnarray}
V_{\rm eff}(A) &=&(4\pi)^{-2}{\rm Tr}~\left(-\frac{g^6}{128}{\cal A}^2
\ln~\frac{g^2}{4}{\cal A}
+\frac{3g^6}{256}{\cal A}^2\right.\nonumber\\
&&\hspace{20mm}\left.-\frac{1}{2}\left(\ln~\frac{g^2}{4}{\cal A}\right)~{\cal B}-
\frac{16}{3g^4}{\cal A}^{-1}{\cal C}\right)-
\frac{g^2}{4}F_{mn}^a F^{mn}_a \label{eq:veff}
\end{eqnarray}
where $\cal A,B,C$ are matrices defined by
\begin{eqnarray}
{\cal A}^{m(a)}_{n(b)} &=& \left(\partial_p{\cal E}^{mp}_n+\frac{3}{4}
{\cal E}^{mp}_k
{\cal E}_{np}^k\right)\delta^a_b+gf_{b\hspace{3pt}c}^{\hspace{3pt}a}
(\partial_nA^{mc}-
\partial^mA^c_n)+\nonumber\\
&&\qquad\frac{1}{2}\delta^m_ng^2f_{ebc}f_{d\hspace{3pt}c}^{
\hspace{3pt}a}A^e_pA^{pd} +\delta^a_b R^m_n
\label{eq:A}\\
{\cal B}_{n(b)}^{m(a)} &=& \Box_0{\cal A}_{n(b)}^{m(a)}
\equiv \eta^{pq}\partial_p\partial_q {\cal A}_{n(b)}^{m(a)}
\label{eq:B}\\
{\cal C}^{m(a)}_{n(b)} &=& (\partial_p {\cal A}^{m(a)}_{k(c)})(\partial^p
{\cal A}^{k(c)}_{n(b)})
\label{eq:C}
\end{eqnarray}
where
\begin{equation}
{\cal E}_n^{mp} = \left(\partial_ne^{m\mu}-\partial^me_n^\mu\right)
e^p_\mu \label{eq:E}
\end{equation}
and $\partial_m = e^\mu_m\partial_\mu$. The quantities $e_m^\mu$ are vierbeins,
$g^{\mu\nu} = e^\mu_m e^\nu_n\eta^{mn}$, and $A_m^a = e^\mu_m A_\mu^a$ etc.
In the case studied in \cite{eli} with
\begin{equation}
A_\mu^a = -\frac{1}{2}F_{\mu\nu}^ax^\nu
\end{equation}
covariantly constant ($F_{\mu\nu}^a=const$) we get the gauge part of these
quantities to be
\begin{eqnarray}
{\cal A}^{m(a)}_{n(b)} &=& -\frac{1}{2}gf_{b~c}^{~a}\left(e^\mu_n
(\partial_\mu e^\nu_p)\eta^{mp} F^c_{\nu\rho}x^\rho -e^\nu_p
(\partial_\nu e^\mu_n)\eta^{mp} F^c_{\mu\rho}x^\rho
+e^\mu_n e^\nu_p\eta^{mp}F^c_{\mu\nu}\right)\nonumber\\
&&+\frac{1}{8}g^2\delta^m_n f_{ebc}f_d^{~ac}g^{\mu\nu} F_{\mu\rho}^d
F_{\nu\sigma}^d x^\rho x^\sigma+\mbox{curvature terms}\\
{\cal B}^{m(a)}_{n(b)} &=&-\frac{1}{2}gf_{b~c}^{~a}
\eta^{rs}\eta^{mp} e^\rho_r\left[
\partial_\rho (e^\sigma_s\partial_\sigma(e^\mu_n\partial_\mu e^\nu_p))
F_{\nu\kappa}^c x^\kappa-\partial_\rho(e^\sigma_s\partial_\sigma(
e^\nu_p\partial_\nu e^\mu_n))F^c_{\mu\kappa}x^\kappa\right]
\nonumber\\
&&-\frac{1}{2}gf_{b~c}^{~c}\eta^{rs}\eta^{mp}e^\rho_r\left[
\partial_\rho(e^\sigma_se^\mu_n\partial_\mu e^\nu_p)F_{\nu\rho}^c
-\partial_\rho(e^\sigma_s e^\nu_p\partial_\nu e^\mu_n)F_{\mu\rho}^c
-2\partial_\rho(e^\sigma_s\partial_\sigma(e^\mu_n e^\nu_p))
F_{\mu\nu}^c\right]\nonumber\\
&&+\frac{1}{8}g^2 \eta^{rs}\delta^m_n f_{ebc}f_d^{~ac}F_{\mu\lambda}^e
F_{\nu\kappa}^d e^\rho_r\partial_\rho(e^\sigma_s\partial_\sigma(
g^{\mu\nu}x^\lambda x^\kappa))+\mbox{curvature terms}\\
{\cal C}^{m(a)}_{n(b)} &=& g^{\rho\sigma}\left[(\partial_\rho(e^\mu_k
\partial_\mu e^\nu_p)\eta^{mp}A_\nu^c
-\frac{1}{2}e^\mu_k(\partial_\mu e^\nu_k)\eta^{mp}F_{\nu\rho}^c
\right.\nonumber\\
&&\left.-\partial_\rho(e^\nu_p\partial_\nu e^\mu_k)\eta^{mp}A_\mu^c
+\frac{1}{2}e^\nu_p(\partial_\nu e^\mu_k)\eta^{mp}F_{\mu\rho}^c
+\partial_\rho(e^\mu_ke^\nu_p)\eta^{mp}F_{\mu\nu}^c\right]
\times\left[\partial_\sigma(e^\lambda_n\partial_\lambda e^\kappa_q)
\eta^{kq}A_\kappa^d\right.\nonumber\\
&&\left.-\frac{1}{2}e^\lambda_n(\partial_\lambda e^\kappa_q)\eta^{kq}
F_{\kappa\sigma}^d-\partial_\sigma(e^\kappa_q\partial_\kappa
e^\lambda_n)\eta^{kq}A_\lambda^d
+\frac{1}{2}e^\kappa_q(\partial_\kappa e^\lambda_n)\eta^{kq}
F_{\lambda\sigma}^d+\partial_\sigma(e^\lambda_ne^\kappa_q)\eta^{kq}
F_{\lambda\kappa}^d\right]\nonumber\\
&&+\frac{1}{64}g^4\delta^m_nf_{ebc}f_{e'b'c'}f_d^{~b'c}f_{d'}^{~ac'}
F_{\mu\lambda}^eF_{\mu'\lambda'}^{e'}F_{\nu\kappa}^d
F_{\nu'\kappa'}^{d'} g^{\rho\sigma}\partial_\rho(g^{\mu'\nu'}
x^{\lambda'}x^{\kappa'})\partial_\sigma(g^{\mu\nu}x^\lambda x^\kappa)
\nonumber\\
&&+\frac{1}{4}g^2\delta^m_n f_{ebc}f_d^{~ac}F_{\mu\lambda}^e
F_{\nu\kappa}^d g^{\rho\sigma}\partial_\rho(g^{\mu\nu}x^\lambda
x^\kappa)
\times\partial_\sigma\left[e^\epsilon_k A^b_\phi(\partial_\epsilon
e^\phi_b)\eta^{kp}-e^\phi_pA^b_\epsilon(\partial_\phi e^\epsilon_k)
\right]\nonumber\\
&&+\mbox{curvature terms}
\end{eqnarray}
Now, for the spacetimes considered by Elizalde et al., \cite{eli}, namely $
S^2\times R^2, S^1\times S^1\times R^2$, the vierbeins can be chosen to
only depend on the radii, and hence ${\cal E}_n^{mp}\equiv 0$, the only
curvature dependency then comes from the Ricci tensor. Since this is
a constant, $R_{\mu\nu}\propto \rho^{-2}$ (where $\rho$ is the radius of
$S^2$, and the result for the torus $S^1\times S^1$ is slightly more
complicated but of the same nature), only $\cal A$ will contain this,
while $\cal B,C$ will be curvature independent. In these
cases, furthermore, $\cal B,C$ become diagonal (except, perhaps, in colour
space). Explicitly,
\begin{eqnarray}
{\cal A}^{m(a)}_{n(b)} &=& -\frac{1}{2}g f_{b~c}^{~a}\eta^{mp}
e^\mu_n e^\nu_pF_{\mu\nu}^c+\frac{1}{8}g^2\delta^m_ng^{\mu\nu}
f_{ebc}f_d^{~ac}F_{\mu\rho}^eF_{\nu\sigma}^d x^\rho x^\sigma
+\delta^a_bR^m_n\\
{\cal B}^{m(a)}_{n(b)} &=& \frac{1}{8}g^2\delta^m_n f_{ebc}
f_d^{~ac}F_{\mu\lambda}^eF_{\nu\kappa}^d g^{\rho\sigma} g^{\mu\nu}
(\delta_\rho^\lambda\delta^\kappa_\sigma+\delta^\lambda_\sigma
\delta^\kappa_\rho)\\
{\cal C}^{m(a)}_{n(b)} &=& \frac{1}{64}g^4\delta^m_ng^{\rho\sigma}
f_{e'b'c'}f_{d'}^{~ac'}f_{ebc}f_d^{~b'c}g^{\mu\nu}g^{\mu'\nu'}
\times\nonumber\\
&&F_{\mu'\lambda'}^{e'}F_{\nu'\kappa'}^{d'}F_{\mu\lambda}^e
F_{\nu\kappa}^d(\delta^{\lambda'}_\rho x^{\kappa'}+
\delta^{\kappa'}_\rho x^{\lambda'})(\delta^\lambda_\sigma x^\kappa+
\delta^\kappa_\sigma x^\lambda)
\end{eqnarray}
With the explicit choice of the field strength tensor used by Elizalde
et al., (the first two coordinates being $S^2$, the last two $R^2$)
\begin{displaymath}
F_{\mu\nu}^a = \left(\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & H^a\\
0 & 0 & -H^a & 0
\end{array}\right)
\end{displaymath}
and using that the Ricci tensor is simply
\begin{equation}
R_{\mu\nu} = \left(\begin{array}{cccc}
\rho^{-2} & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{array}\right)
\end{equation}
we get $\cal A$ to be block diagonal
\begin{equation}
{\cal A} = \left(\begin{array}{cccc}
X+\rho^{-2} & 0 & 0 & 0\\
0 & X+1 & 0 & 0\\
0 & 0 & X & h\\
0 & 0 &-h & X
\end{array}\right) \equiv \left(\begin{array}{cc} \tilde{X} & 0\\
0 & a \end{array}\right)
\end{equation}
where ($x,y$ are the coordinates of $R^2$)
\begin{eqnarray}
X &=& \frac{1}{8} g^2 g^{\mu\nu} f_{ebc}f_d^{~ac}F_{\mu\rho}^e
F_{\nu\sigma}^d x^\rho x^\sigma = \frac{1}{8}g^2 f_{ebc}f_d^{~ac}
H^eH^d(y^2-x^2)\\
h &=& -\frac{1}{2}g f_{b~c}^{~a}H^c
\end{eqnarray}
The logarithm of $\cal A$ appears
in the result for the effective action. Since $\cal A$ is block diagonal,
the calculation of this quantity is straightforward:
\begin{equation}
\ln {\cal A} = \left(\begin{array}{cc} \ln \tilde{X} & 0\\
0 & b \end{array}\right)
\end{equation}
with $a=e^b$. Now, any $2\times 2$ matrix can be expanded on the Pauli
matrices, $a = a_0 1_2+a_i\sigma^i$; using the algebraic properties of the
Pauli matrices we then get
\begin{equation}
b = \left(\begin{array}{cc}
\ln X-\ln\cos h & i h\\
-ih & \ln X-\ln\cos h
\end{array}\right)
\end{equation}
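The relation $a=e^b$ for the lower $2\times 2$ block can be checked numerically. The sketch below is illustrative only: the values of $X$ and $h$ are arbitrary placeholders, and it uses the standard principal-branch logarithm of a block of the form $\left(\begin{smallmatrix}X&h\\-h&X\end{smallmatrix}\right)$ (eigenvalues $X\pm {\rm i}h$), whose branch conventions may differ from the Pauli-matrix expression quoted above.

```python
import math

# Illustrative placeholder values for X and h (not taken from the paper).
X, h = 1.7, 0.4

# Lower 2x2 block of A:  a = [[X, h], [-h, X]], eigenvalues X +/- i h.
# Standard (principal-branch) logarithm of such a rotation-scaling matrix:
#   b = [[u, theta], [-theta, u]],
# with u = (1/2) ln(X^2 + h^2) and theta = atan2(h, X).
u = 0.5 * math.log(X * X + h * h)
theta = math.atan2(h, X)

# Closed-form exponential of b:
#   exp(b) = e^u [[cos(theta), sin(theta)], [-sin(theta), cos(theta)]],
# which must reproduce the block a, i.e. a = e^b.
scale = math.exp(u)
a00 = scale * math.cos(theta)  # should recover X
a01 = scale * math.sin(theta)  # should recover h
print(a00, a01)
```

For small $h/X$ one has $\theta\simeq h/X$, so the off-diagonal entries of $b$ are of order $h$, as expected from the expansion on the Pauli matrices.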
As for Elizalde and coworkers, the effective action gets an imaginary part
from the logarithmic term.\\
The explicit forms of the remaining matrices turn out to be
\begin{eqnarray}
{\cal B}_{n(b)}^{m(a)} &=& \delta^m_n \frac{1}{2}g^2 f_{ebc}f_d^{~ac}
H^eH^d \equiv \delta^m_n h_2\\
{\cal C}_{n(b)}^{m(a)} &=& \delta^m_n \frac{g^4}{16}f_{ebc}f_{e'b'c'}
f_d^{~b'c}f_{d'}^{ac'}H^eH^{e'}H^dH^{d'}(y^2-x^2) \equiv \delta^m_n h_4
\end{eqnarray}
Inserting all of this into our general formula (\ref{eq:veff}) we then
finally arrive at
\begin{eqnarray}
V_{\rm eff} &=& (4\pi)^{-2}{\rm tr}\left\{-\frac{g^6}{128}\left(
(X+\rho^{-2})^2\ln\frac{g^2}{4}(X+\rho^{-2})\right.\right.\nonumber\\
&&+(X+1)^2\ln
\frac{g^2}{4}(X+1)+2(X^2-h^2)\ln\frac{g^2}{4}X
\nonumber\\
&&\left.-2(X^2-h^2)(\ln\cos\frac{g^2}{4}h+\frac{1}{2}ig^2h^2X)
\right)
+\frac{3g^6}{256}\left((X+\rho^{-2})^2+3X^2-2h^2+2X+1
\right)\nonumber\\
&&-\frac{1}{2}h_2\left(\ln\frac{g^2}{4}(X+\rho^{-2})+2\ln\frac{g^2}{4}X
-2\ln\cos\frac{g^2}{4}h+\ln\frac{g^2}{4}(X+1)\right)\nonumber\\
&&\left.-\frac{16}{3g^4}\frac{h_4}{(h^2+X^2)(X+1)(X+\rho^{-2})}
\left((h^2+X^2)(2X+1+\rho^{-2})+2X(X+\rho^{-2})(X+1)
\right)\right\}\nonumber\\
&&-\frac{g^2}{2}H^aH_a
\end{eqnarray}
where the trace is over gauge algebra indices (the result is valid
for an arbitrary Lie algebra).\\
The result which Elizalde et al. find is ($\Omega$ being the volume of
the two-sphere)
\begin{displaymath}
\frac{\Gamma}{\Omega} = a_0 (gH)^2\left[\frac{11}{48\pi^2}
\left(\ln\frac{gH}{\mu^{'2}}-\frac{1}{2}\right)-i\frac{1}{8\pi}\right]
+\frac{1}{4\pi^2}\frac{gH}{\rho^2}\left[-\frac{a_0+a_1}{2}\ln 2
+ia_1\frac{\pi}{2}\right]
\end{displaymath}
where $a_0,a_1$ are coefficients in the Schwinger-DeWitt expansion of the
heat kernel of the Laplacean on $S^2$, ($a_0=1,a_1=-1/3$), and where the gauge
group has been chosen to be $SU(2)$. A number of approximations have been made
here, first of all the calculation is linear in the curvature and secondly
it is only valid for $\rho$ large. This latter restriction, however, does not
prevent the authors of \cite{eli} from studying $1\leq \rho\leq 10$, a regime
in which the approximation $\rho \gg 1$ certainly cannot be said to hold. With
this they claim to find a critical point at $\rho_c\sim 2$. One should note
that the calculation put forward here does not suffer from these problems.\\
In our calculation we have used the freedom in renormalisation to fix $\mu=1$.
The form of the result by Elizalde and coworkers is $H^2(\ln H-1/2)+ R H$
with $R=\rho^{-2}$; these two terms are, up to a finite renormalisation, the
same as the lowest order terms in our formula. The remaining terms are due to
non-linear terms in the curvature and cannot therefore be found by the
mode summation of Elizalde, Odintsov and Romeo.
\section*{Conclusion}
We have shown how the effective action for a covariantly constant
Yang-Mills field on a simple background $S^2\times R^2$ can be found, including
terms non-linear in the curvature, thereby generalising the result of Elizalde,
Odintsov and Romeo. Furthermore, as our method is not based on an explicit
mode summation, it is applicable to spacetimes in which one does not know the
explicit form of the eigenvalues of the Laplace operator for spin one (i.e.
almost all spacetimes). An important example of such a spacetime manifold, mentioned
also by Elizalde et al., is de Sitter $S^4$. Since our approach only needs the
vierbein (and through them the Ricci tensor etc.), this spacetime is actually
within reach of the method proposed here. Manifolds with non-constant curvature
can also be treated, which means that all manifolds of physical interest should
be tractable.
Our result is also capable of handling non--covariantly
constant field configurations and of calculating curvature--induced mean fields,
as shown in \cite{os,os2}, where phase transitions are also treated.
Further research into this is in progress.
\section{Introduction}
\setcounter{equation}{0}
Statistical reasoning and the modelling of physical phenomena by random
processes have taken a major place in modern physics and mathematics. A
physical example of this is the emergence of random behaviour from purely
deterministic laws, as in classically chaotic Hamiltonian systems (see
e.g.~\cite{ll}). Randomness also enters in the quantum version of these systems
(see e.g.~\cite{lh}). In fact, the statistical properties of the semiclassical
spectrum of fully chaotic systems are, in the universal regime, in good
agreement with those obtained from an ensemble of random matrices
\cite{mehta,bohigas}. The Gaussian orthogonal ensemble (GOE) statistics applies
to systems which are chaotic and have (generalized) time--reversal symmetry,
while the Gaussian unitary ensemble (GUE) statistics are appropriated to
describe systems which are chaotic and without time--reversal invariance. In
particular, the GUE two--point correlation function is (after normalization of
the average spacing between eigenvalues to unity)
\begin{equation} \label{11}
R_2^{GUE} (\epsilon) = 1 - \frac{\sin^2 (\pi \epsilon)}{\pi^2 \epsilon^2},
\end{equation}
while its Fourier transform, the two--point form factor, has the form:
\begin{equation} \label{12}
K_2^{GUE} (\tau) = \left\{ \begin{array}{ll}
|\tau| & \mbox{if $|\tau|<1$} \\
1 & \mbox{if $|\tau|>1$.} \end{array} \right.
\end{equation}
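Eqs.(\ref{11}) and (\ref{12}) are related by a Fourier transform, which is easy to verify numerically. In the sketch below (all function names are ours), $K_2^{GUE}(\tau)-1$ is integrated against $\cos(2\pi\epsilon\tau)$ with a simple midpoint rule and compared with $R_2^{GUE}(\epsilon)$.

```python
import math

def k2_gue(tau):
    """GUE two-point form factor."""
    return abs(tau) if abs(tau) < 1 else 1.0

def r2_gue(eps):
    """GUE two-point correlation function."""
    return 1.0 - math.sin(math.pi * eps) ** 2 / (math.pi * eps) ** 2

def r2_from_form_factor(eps, n=20000):
    """Midpoint-rule evaluation of 1 + int (K2(tau) - 1) cos(2 pi eps tau) dtau."""
    dtau = 2.0 / n
    total = 0.0
    for i in range(n):
        tau = -1.0 + (i + 0.5) * dtau  # midpoint rule on [-1, 1]
        total += (k2_gue(tau) - 1.0) * math.cos(2 * math.pi * eps * tau) * dtau
    return 1.0 + total

eps = 0.7
print(r2_gue(eps), r2_from_form_factor(eps))  # the two values agree
```

Since $K_2(\tau)-1$ vanishes for $|\tau|>1$, the integration range can be restricted to $[-1,1]$.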
A particularly interesting example of applying statistical considerations
to a pure mathematical object is provided by the Riemann zeta function.
This function is defined by a series
\begin{equation} \label{13}
\zeta (s) = \sum_{n=1}^{\infty} \frac{1}{n^s}
\end{equation}
which converges for $Re(s)>1$ and can be analytically continued to the
whole complex plane \cite{tit}. The region $0<Re(s)<1$ is
called the critical strip and it was proved~\cite{reich} that in this
region the Riemann zeta function has properties of a random function.
The Riemann hypothesis asserts that all the complex zeros of $\zeta (s)$ lie on the
critical line $Re(s)=1/2$, which we henceforth denote by ${\cal L}_c \,$. Assuming the
Riemann hypothesis, Montgomery \cite{montgomery,gm} concluded that
asymptotically (i.e., for large values of $Im(s)$), the form factor of the
critical Riemann zeros coincides, for $|\tau|<1$, with the GUE result
(\ref{12}) (and conjectured that the agreement holds for arbitrary $\tau$).
More recently, some spectacular numerical results by Odlyzko \cite{odlyzko}
strongly support that conjecture. Assuming certain number theoretical
hypotheses on the correlations between prime numbers to hold, in Ref.\cite{gm}
and later on in Ref.\cite{keating} it was shown that the main term of the
two--point correlation function for the critical zeros of the Riemann zeta
function does coincide with (\ref{12}). In the context of ``quantum chaos'', an
analogue of Montgomery's result was found by Berry \cite{berry}, who also
discussed the validity of the random matrix theory.
These two apparently disconnected physical and mathematical results have a
common root in a formal analogy between the density of Riemann zeros expressed
in terms of prime numbers (cf. Eqs.(\ref{213}) below) and an asymptotic
approximation of the quantum spectral density in terms of classical periodic
orbits (the Gutzwiller trace formula~\cite{gutz}). This analogy has been
fruitful for both mathematical and physical fields. For example, the
correlations between prime numbers (the Hardy--Littlewood conjecture) inspired
some work on the correlations between periodic orbits \cite{ks}. In the
opposite direction, the statistical non--universalities of the spectral
density, related to short periodic orbits, were successfully transposed to the
Riemann zeta function \cite{berry2}.
It is thus clear that zeta functions are good models for investigating level
statistics and the semiclassical trace formula. There are many generalizations
of the Riemann zeta function \cite{encyclopedia}, and since very little is
known about their zeros it is of interest to investigate their distribution. In
\cite{ozluk} the analogue of Montgomery's result was proved for the average of
all Dirichlet $L$--functions having the same (large) modulus, while $Im(s)$ is
kept constant. The purpose of this paper is to study, in the limit of large
$Im(s)$, the statistical properties of zeros of {\sl individual} Dirichlet
$L$--functions having arbitrary modulus and character.
After a brief introduction (Section 2), in Section 3 we prove that the main
asymptotic term of the two--point correlation function of the non--trivial
zeros of an arbitrary Dirichlet $L$--function agrees with GUE. We also prove
the statistical independence of the zeros of $L$--functions having different
character and/or different modulus, i.e. the zeros of their product behave
like the superposition of uncorrelated GUE--sets. The demonstrations assume
that the Hardy-Littlewood conjecture on the correlations between prime numbers
is valid. The application of these results to the distribution of the zeros of
the zeta function of positive binary quadratic forms, a particular case of the
Epstein zeta function, is shortly discussed in Section 4.
\section{Dirichlet $L$--functions}
\setcounter{equation}{0}
Dirichlet $L$--functions are natural generalizations of the Riemann zeta
function (\ref{13}). When $Re(s)>1$ they are defined by a series (see, e.g.
\cite{bateman})
\begin{equation} \label{23}
L (s, \chi) = \sum_{n=1}^{\infty} \frac{ \chi (n)}{n^s} = \prod_p \( 1-\frac{\chi (p)}{p^{s}}\)^{-1}.
\end{equation}
where the product is taken over all primes $p$.
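The equality between the Dirichlet series and the Euler product in (\ref{23}) can be illustrated numerically. The sketch below (function names are ours) uses the non-principal character mod 4 at $s=2$, for which both truncations approach $L(2,\chi)\approx 0.9159656$ (Catalan's constant).

```python
def chi4(n):
    """Non-principal Dirichlet character mod 4: 1, 0, -1, 0, 1, ..."""
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2.0
# Truncated Dirichlet series for L(s, chi).
series = sum(chi4(n) / n ** s for n in range(1, 200001))
# Truncated Euler product over primes.
product = 1.0
for p in primes_up_to(10000):
    product *= 1.0 / (1.0 - chi4(p) / p ** s)

print(series, product)  # both approximate Catalan's constant 0.9159655...
```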
Given an arbitrary integer $k$ (called the modulus), a function $ \chi (n)$ (called a
Dirichlet character mod $k$) is a complex function of positive integers
satisfying:
(i) $\chi (nm) = \chi (n) \, \chi (m)$,
(ii) $\chi (n)=\chi (m)$ if $n\equiv m$ mod $k$,
(iii) $\chi (n)=0$ if $(n,k)\neq 1$,
where $(n,k)$ denotes the highest common divisor of $n$ and $k$.
A character is called {\sl principal} and denoted by $\chi_0$ if $\chi_0 = 1$
when $(n,k)=1$ and $\chi_0=0$ otherwise; the corresponding $L$--function
essentially reproduces the Riemann zeta function. In fact
$$
L ( s, \chi_0)=\zeta (s) \prod_{p|k} \(1-p^{-s}\),
$$
where the product is taken over all prime factors of $k$. It also follows from
the above definitions that $\chi(1)=1$ and $\[\chi(k-1)\]^2=\[\chi(-1)\]^2=1$.
In general $k$ can be any integer. The total number of different
characters modulo $k$ is given by Euler's function $\phi(k)$
(the number of positive integers prime to, and not exceeding $k$). The value
$ \chi (n)$ is different from zero iff $(n,k)=1$ and its $\phi(k)$-th power equals
one. Table 1 provides a list of non--principal characters for $k=4$ and $5$, to
be used later on in the numerical computations. (Detailed tables of characters
can be found in \cite{tc}).
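For a prime modulus the characters can be constructed explicitly from a primitive root. The sketch below builds the $\phi(5)=4$ characters mod $5$ from the primitive root $2$ and checks properties (i)--(iii); the labelling of the characters is ours and need not coincide with that of table 1.

```python
import cmath
from math import gcd

K = 5      # prime modulus; 2 is a primitive root: 2^0..2^3 = 1, 2, 4, 3 (mod 5)
ROOT = 2

# Discrete-logarithm table: n -> m with ROOT^m = n (mod K).
dlog = {pow(ROOT, m, K): m for m in range(K - 1)}

def chi(a, n):
    """a-th Dirichlet character mod 5 (a = 0 gives the principal character).

    Property (ii) holds by construction, since only n mod K is used."""
    if gcd(n, K) != 1:
        return 0  # property (iii)
    return cmath.exp(2j * cmath.pi * a * dlog[n % K] / (K - 1))

# Property (i): multiplicativity chi(nm) = chi(n) chi(m).
for a in range(K - 1):
    for n in range(1, 20):
        for m in range(1, 20):
            if gcd(n * m, K) == 1:
                assert abs(chi(a, n * m) - chi(a, n) * chi(a, m)) < 1e-12

# The mean of every non-principal character over a full period vanishes.
for a in range(1, K - 1):
    assert abs(sum(chi(a, n) for n in range(K))) < 1e-12
```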
A character $\chi$ mod $k$ is called nonprimitive if there is a divisor $k'$ of
$k$ such that when $n'\equiv n$ mod $k'$, $\chi (n')=\chi (n)$. Otherwise the
character is called primitive. For primitive characters Dirichlet
$L$--functions satisfy the functional equation \cite{dh,davies}:
\begin{equation} \label{26}
\xi (s,\chi) = (- {\rm i})^a \, W_\chi \; \xi (1-s, {\bar \chi})
\end{equation}
where
$$
\xi (s,\chi) = \( \frac{k}{\pi} \)^{s/2} \Gamma\(\frac{s+a}{2}\)\, L (s, \chi)
$$
and $a=\[1-\chi (-1) \]/2$. $W_\chi$ is a complex number of unit modulus which,
for a given character, is a constant
\begin{equation} \label{27}
W_\chi = \frac{1}{\sqrt{k}} \sum_{q=1}^{k-1} {\rm e}^{2 \pi {\rm i} q/k} \, \chi (q).
\end{equation}
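That $W_\chi$ has unit modulus for a primitive character is easy to confirm in a concrete case. The sketch below hardcodes one non-principal (hence primitive) character mod $5$, fixed by $\chi(2)={\rm i}$; the character choice is ours.

```python
import cmath
import math

K = 5
# One primitive character mod 5, generated by chi(2) = i
# (so chi(4) = chi(2)^2 = -1, chi(3) = chi(2)^3 = -i, chi(1) = 1).
chi = {1: 1, 2: 1j, 3: -1j, 4: -1}

# W_chi = k^{-1/2} * sum_q e^{2 pi i q / k} * chi(q), as defined above.
W = sum(cmath.exp(2j * cmath.pi * q / K) * chi[q] for q in range(1, K)) / math.sqrt(K)
print(abs(W))  # unit modulus for a primitive character
```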
As in the Riemann case, the functional equation allows one to define a real
function on the critical line ${\cal L}_c \,$ (on which, according to the generalized Riemann
hypothesis, all non--trivial zeros of $L$--functions should lie):
\begin{equation}
Z (t, \chi) = {\rm e}^{- {\rm i} \Theta_\chi (t) /2} \, L ( 1/2 - {\rm i} t, \chi) \label{29}
= \left. \sum_{n=1}^{\infty}\right.^\prime \frac{1}{\sqrt{n}}
\cos \[ t \ln n - \Theta_\chi (t)/2 + \arg \chi (n)\] \label{210}
\end{equation}
where the symbol ${\sum}^{'}$ indicates that the summation is done over all
terms for which $\chi (n)\neq 0$ and
$$
\Theta_\chi (t) = \arg W_\chi + t \ln \( \frac{k}{\pi} \) - 2 \arg \[\Gamma
\(\frac{1+2a}{4} -\frac {{\rm i}}{2} t \)\]-\frac{a\pi}{2}. $$
Asymptotically
\begin{equation} \label{211}
\Theta_\chi (t) \stackrel{t\rightarrow\infty}{\simeq} \arg W_\chi + t\[\ln\(\frac{kt}{2\pi}\)
-1\]-\frac{\pi}{4} + {\cal O}(t^{-1}).
\end{equation}
An approximate functional equation (the analogue of the Riemann--Siegel formula
for $\zeta (s)$) also holds for $L$--functions in the large--$t$
limit~\cite{siegel,davies}. It takes the form of a resummation of the series:
instead of the infinite sum (\ref{210}), $Z (t, \chi)$ can be written as a truncated
sum
\begin{equation} \label{212}
Z (t, \chi) \simeq 2 \left.\sum_{n=1}^N\right.^\prime \frac{1}{\sqrt{n}} \cos \[ t
\ln n - \Theta_\chi (t)/2 + \arg \chi (n)\] + {\cal O}(t^{-1/4}),
\end{equation}
where $N=\[\sqrt{\frac{t}{2\pi k}}+\frac{1}{2}\]k$ (the square brackets here denote the
integer part). An explicit form for the correction terms in Eq.(\ref{212}) can
be found in \cite{siegel,davies}. This expression is particularly useful for
numerical computations since we need to find the real zeros of a real function
expressed as a finite sum of oscillating terms, their number being proportional
to $\sqrt{t}$.
As usual (see for example \cite{keating}), one can express the density of
zeros lying on ${\cal L}_c \,$ as a sum of an average part and a fluctuating part, $d (t, \chi) =
d_{\rm av} (t, \chi) + d_{\rm osc} (t, \chi) $. The result is
\begin{subeqnarray} \label{213}
&& d_{\rm av} (t, \chi) = \frac{1}{2\pi} \frac{d \Theta_\chi (t)}{d t} \stackrel{t\rightarrow\infty}{\simeq} \frac{1}{2\pi} \ln \(kt/2
\pi
\) \\ \label{214}
&& d_{\rm osc} (t, \chi) = -\frac{1}{\pi} {\sum_p}^{'} \sum_{m=1}^\infty \frac{\ln p}
{{\rm e}^{\frac{m}{2}\ln p}} \cos \[ m \(t \ln p + \arg \chi (p) \)\].
\end{subeqnarray}
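The average density (2.7a) immediately gives the expected number of zeros in an interval: integrating $d_{\rm av}$ yields the smooth counting function $N(t)\simeq (t/2\pi)\[\ln(kt/2\pi)-1\]$, the constant terms cancelling in differences. As a sketch (function names are ours), the snippet below reproduces the roughly $18000$ zeros with $k=5$ in $10^5 \leq t \leq 1.1\times 10^5$ used in the numerical computations of Section 3.

```python
import math

def n_smooth(t, k):
    """Integral of d_av = (1/2 pi) ln(k t / 2 pi): smooth zero-counting function."""
    return t * (math.log(k * t / (2.0 * math.pi)) - 1.0) / (2.0 * math.pi)

k = 5
n_zeros = n_smooth(1.1e5, k) - n_smooth(1.0e5, k)
print(round(n_zeros))  # roughly 18000 zeros in this window
```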
This decomposition is analogous to the Gutzwiller trace formula \cite{gutz},
where the quantum spectral density is expressed as a sum of an average part and
a fluctuating part. The average part of the level density is given by the
so--called Weyl or Thomas--Fermi term and is related in the lowest order
approximation to the derivative with respect to the energy of the volume of the
classical phase space energy shell. The oscillating term is expressed as a sum
over all the periodic orbits. Formally, Eq.(2.7b) is analogous to this latter
term for a system whose periodic orbits are all isolated and unstable
\cite{gutz}. In this analogy, the prime numbers are identified with periodic
orbits (whose lengths are given by the logarithm of the prime numbers) and the
characters, being pure phase factors for $(p,k)=1$, may be interpreted as
Maslov indices, a quantity classically related to the focusing of a flux tube
surrounding the periodic orbit. In Section 5, however, we provide
a different (and perhaps more relevant) interpretation of the characters
in Eq.(2.7b) in terms of symmetries.
\section{Two--point correlation function}
\setcounter{equation}{0}
Let us formally insert the density of zeros given by Eqs.(\ref{213}) into the
definition of the two--point correlation function
\begin{equation} \label{31}
R_2 (\epsilon) = \langle d (t,\chi) \, d (t+\epsilon,\chi) \rangle
\end{equation}
and express $R_2 (\epsilon)$ as a sum over prime numbers. In (\ref{31}) the bracket
$\langle \; \rangle$ denotes an average over an interval $\Delta t$ which is
unavoidable when discussing statistical properties of a given function. Our
purpose is to compute $R_2 (\epsilon)$ in the large--$t$ limit and, accordingly, we choose
the smoothing interval such that $1\ll \Delta t \ll t$ \cite{berry3}. Thus,
any oscillating term with period smaller than ${\cal O}(1)$ will be washed away
by the averaging procedure. Then, from (\ref{31}) and (\ref{213})
\begin{equation} \label{32}
R_2 (\epsilon) \approx d^2_{\rm av} (t, \chi) + R_2^{\rm osc} (\epsilon)
\end{equation}
where
\begin{eqnarray}
R_2^{\rm osc} (\epsilon) &=& \langle d_{\rm osc} (t, \chi) \, d_{\rm osc} (t+\epsilon, \chi) \rangle \nonumber \\
&=& \frac{1}{4 \pi^2} \langle \sum_{p,p'} \sum_{m,m'=1}^{\infty}
\frac{ \chi (p) {\bar \chi} (p') \ln p \ln p'}{p^{m/2} \(p'\)^{m'/2}}
\exp\[{\rm i} t \(m\ln p - m' \ln p' \) - {\rm i} m \epsilon \ln p'\] \rangle
+ {\rm c.c.}. \nonumber
\end{eqnarray}
We must now show that $R_2^{\rm osc} (\epsilon) $ reproduces the second term in the r.h.s.
of Eq.(\ref{11}). The main lines of our proof follow those developed in
\cite{keating} for the Riemann zeta function.
The average in $R_2^{\rm osc} (\epsilon)$ is not zero since the difference $m \ln p - m' \ln p'$
can be arbitrarily small (for large values of $p$ and $p'$) and can produce
oscillating terms whose period is of order $\Delta t$ or bigger. Moreover, the
sums are convergent for values of $m, m'$ greater than one, so we restrict
ourselves to $m=m'=1$:
\begin{equation} \label{33}
R_2^{\rm osc} (\epsilon) \approx \frac{1}{4 \pi^2} \langle \sum_{p,p'}
\frac{ \chi (p) {\bar \chi} (p') \ln p \ln p'}{p^{1/2} \(p'\)^{1/2}}
\exp\[{\rm i} t \(\ln p - \ln p' \) - {\rm i} \epsilon \ln p'\] \rangle + {\rm c.c.}.
\end{equation}
Now we split the double sum into two parts $\sum_{p, p'} = \sum_{p=p'} +
\sum_{p
\neq p'}$, and first compute the diagonal part. Since $|\chi (p)|=1$ if $(p,k)
=1$, then
$$
R_2^{\rm osc} (\epsilon)_{\rm diag} = \frac{1}{4 \pi^2} {\sum_p}^{'} \( \frac{\ln^2 p}{p}
{\rm e}^{-{\rm i} \epsilon \ln p}\) + {\rm c.c.}.
$$
Taking into account that $(p,k)\neq 1$ only for a finite number of primes, it
follows that one can replace the sum over primes for which $\chi (p)\neq 0$ by
an integral, using the usual prime number theorem for the density of primes
(see e.g.~\cite{tit}). Putting $\tau = (\ln p) / 2 \pi$ one obtains:
\begin{equation} \label{34}
R_2^{\rm osc} (\epsilon)_{\rm diag} \approx \int_0^\infty d \tau \, \tau \, {\rm e}^{-2 \pi {\rm i} \epsilon
\tau}
+{\rm c.c.} = - \frac{1}{2 \pi^2 \epsilon^2}.
\end{equation}
This contribution reproduces the non--fluctuating part of $R_2^{GUE}$.
Alternatively, the contribution of (\ref{34}) to the form factor is $\tau$, in
agreement with Eq.(\ref{12}) for $|\tau|<1$.
In the off--diagonal part of (\ref{33}) the smoothing will wash away terms
for which $\Delta t \ln (p/p') \geq 1$, and $\Delta t \rightarrow \infty$ as
$t \rightarrow \infty$. The main contribution then comes from large values
of $p$ and $p \sim p'$. Putting $p' = p + h$ and assuming that $h \ll p$
the off--diagonal part reads
\begin{equation} \label{35}
R_2^{\rm osc} (\epsilon)_{\rm off} \approx \frac{1}{4 \pi^2} \sum_{p} \frac{\ln^2 p}{p} {\rm e}^{-{\rm i}
\epsilon \ln p}
\langle \sum_h \chi (p) {\bar \chi} (p+h) {\rm e}^{-{\rm i} t h/p} \rangle + {\rm c.c.},
\end{equation}
where the external sum is taken over all primes and the internal one is taken
over integers $h$ such that $p+h$ is a prime. To proceed further we need some
information about the pair correlation function between prime numbers. An
important conjecture related to those correlations is due to Hardy and
Littlewood \cite{hl,lc}, which expresses the density $\Lambda_h (X)$ of primes
$p$ lying between $X$ and $X + dX$ such that $p+h$ is also a prime.
According to that conjecture
\begin{equation} \label{36}
\Lambda_h (X) \simeq \frac{ \alpha (h)}{\ln^2 X}
\end{equation}
where
$$
\alpha (h) = \alpha \prod_{p|h}\(1+\frac{1}{p-2}\)
\;\;\;\mbox{and }\;\;\;\alpha=2\prod_{q>2}\(1-\frac{1}{(q-1)^2}\).
$$
The first product is taken over primes which divide $h$ and the second one
is taken over all primes (except 2).
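Both the constant $\alpha$ (twice the twin-prime constant, $\alpha\approx 1.3203$) and $ \alpha (h)$ for even shifts are straightforward to evaluate numerically. The sketch below (helper names are ours) truncates the product over primes at $10^5$.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

# alpha = 2 * prod_{q > 2} (1 - 1/(q-1)^2), truncated at 10^5.
alpha = 2.0
for q in primes_up_to(10 ** 5):
    if q > 2:
        alpha *= 1.0 - 1.0 / (q - 1) ** 2

def alpha_h(h):
    """alpha(h) = alpha * prod_{p | h, p > 2} (1 + 1/(p-2)), for even h."""
    value, m = alpha, h
    while m % 2 == 0:
        m //= 2
    p = 3
    while p * p <= m:
        if m % p == 0:
            value *= 1.0 + 1.0 / (p - 2)
            while m % p == 0:
                m //= p
        p += 2
    if m > 1:  # leftover odd prime factor of h
        value *= 1.0 + 1.0 / (m - 2)
    return value

print(alpha, alpha_h(2), alpha_h(6))  # alpha, alpha, 2*alpha
```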
$ \alpha (h)$ is an irregular number--theoretic function, and its main contribution
comes from its average behavior, $\langle \alpha (h) \rangle$. Moreover, we will only
need the average behaviour of $ \alpha (h)$ for large values of $h$ (cf Eq.(\ref{311})
below). If one averages over {\em all} integers \cite{keating}
\begin{equation} \label{37}
\langle \alpha (h) \rangle \simeq 1 - \frac{1}{2h} \;\;\;\;\;\;\;\;\;\; {\rm as} \; h
\rightarrow \infty,
\end{equation}
expressing a repulsion between prime numbers for long distances.
Contrary to the Riemann case, to compute the two--point correlation function
for Dirichlet $L$--functions one has to compute the mean value of $ \alpha (h)$
averaging not over all integers but over a fixed residue class mod $k$ (i.e., over all
integers having the same remainder mod $k$). This is important because the
quantity $ \chi (p) {\bar \chi} (p+h)$ which enters Eq.(\ref{35}) depends only on $h$
mod $k$. The details of this calculation are given in appendix A. The result is
quite simple: for large $h$
\begin{equation} \label{38}
\langle \alpha (h) \rangle \approx \left\{ \begin{array}{ll}
1-1/2h & \mbox{if $h\equiv 0$ mod $k$} \\
1 & \mbox{otherwise.} \end{array} \right.
\end{equation}
This equation is the essential ingredient of our proof. It expresses a
non--trivial number--theoretic property of $\langle \alpha (h) \rangle$ and completely
eliminates the dependence on the character in (\ref{35}). This is due to the
fact that the $h$--independent (Poissonian) terms in (\ref{38}) introduce no
correlations between prime numbers. Then if in Eq.(\ref{35}) $ \chi (p) {\bar \chi}
(p+h)$ is replaced by its average, the sum over $h$ of the Poissonian
components vanishes. We thus only need to consider the terms $h\equiv 0$ mod
$k$, and for them $|\chi (p)|^2=1$ if $(p,k)=1$ and zero otherwise.
The computations are now exactly the same as for the Riemann
case~\cite{keating}. We briefly outline the main steps in appendix B, leading
to
\begin{equation} \label{39}
R_2^{\rm osc} (\epsilon)_{\rm off} \approx \frac{1}{2\pi^2} \frac{\cos \(2\pi d_{\rm av}
\epsilon\)}{\epsilon^2}.
\end{equation}
The final result for the two--point correlation function is, from (\ref{32}),
(\ref{34}) and (\ref{39})
\begin{equation} \label{315}
R_2 (\epsilon) = d^2_{\rm av}+R_2^{\rm osc} (\epsilon)_{\rm diag}+R_2^{\rm osc} (\epsilon)_{\rm off} \approx d^2_{\rm av}-
\frac{\sin^2 \( \pi d_{\rm av} \epsilon\)}{\pi^2\epsilon^2},
\end{equation}
which coincides with the GUE two--point correlation function (\ref{11}), once
the average density is set to one. Eq.(\ref{315}) holds for an arbitrary
modulus and character.
In order to illustrate this result we have numerically computed, using the
approximate functional equation (\ref{212}), the zeros of Dirichlet
$L$--functions for several values of $k$ and, for each $k$, for several
characters (complex and real) and verified in all cases the agreement with the
GUE statistics. For example, in Fig.1a we show $R_2 (\epsilon)$ for the approximately
$18000$ zeros lying in the interval $10^5 \leq t \leq 1.1\times 10^5$ for $k=5$
and character $\chi_2$ of table 1. For completeness, in part b of that figure we
plot the nearest--neighbour spacing distribution for those zeros; the
continuous curve is the Wigner surmise $p(s)= a s^2 \exp \( -b s^2 \)$ where
$a=32/\pi^2$ and $b=4/\pi$. Note that our result (\ref{315}) for $R_2 (\epsilon)$ does not
imply the Wigner surmise for $p(s)$, since $p(s)$ has contributions from all
$n$--point correlation functions. The agreement suggests, however, the validity
of the GUE ensemble beyond the two-point correlation function.
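As a consistency check on the Wigner surmise quoted above, one can verify numerically that $p(s)= a s^2 \exp\(-b s^2\)$ with $a=32/\pi^2$ and $b=4/\pi$ is normalized and has unit mean spacing. A minimal sketch (quadrature parameters are ours):

```python
import math

def wigner_gue(s):
    """GUE Wigner surmise p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi)."""
    return (32.0 / math.pi ** 2) * s * s * math.exp(-4.0 * s * s / math.pi)

# Midpoint-rule integrals on [0, 12]; the tail beyond s = 12 is negligible.
n, smax = 200000, 12.0
ds = smax / n
norm = mean = 0.0
for i in range(n):
    s = (i + 0.5) * ds
    norm += wigner_gue(s) * ds        # normalization integral
    mean += s * wigner_gue(s) * ds    # mean spacing

print(norm, mean)  # both close to 1
```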
We shall now consider the correlations between zeros of different
$L$--functions, which are generally believed to be statistically independent.
For that purpose, we take the product of several $L$--functions having
different characters $\chi_i$ mod $k_i$. The total density is $d_{\rm av} =\sum_{i}
d_{\rm av}^{i} = \sum_{i} f_{i} d_{\rm av} $, where the $f_i$'s are the relative
densities and $\sum_i f_i =1$. After averaging of exponential terms, the
two--point correlation function for the product of $L$--functions can be
written as
\begin{equation} \label{316}
R_2 (\epsilon) = d_{\rm av} ^2 + \sum_i R_2^{{\rm osc} (i)} (\epsilon)
+\sum_{i\neq j} \langle d_{\rm osc}^{(i)} (t) \, d_{\rm osc}^{(j)}
(t+\epsilon) \rangle .
\end{equation}
Now instead of $\chi (p) \bar{\chi} (p+h)$ in Eq.(\ref{35}) we will find that
$\langle d_{\rm osc}^{(i)} (t) d_{\rm osc}^{(j)} (t+\epsilon) \rangle $ is
proportional to $\chi_i (p) \bar{\chi}_j (p+h)$. Because of Eq.(\ref{38}), only
the terms $h \equiv 0$ mod $k_j$ contribute and the result is therefore
proportional to $\chi_i (p) \bar{\chi}_j (p)$. When $i\neq j$ the last quantity
defines a non--principal character modulo the least common multiple of $k_i$
and $k_j$. All other terms are smooth in this scale and therefore the main
contribution comes from the {\em average} value of this function. Since the
mean value of any non--principal character is zero~\cite{bateman}
$$
\frac{1}{k_i} \sum_{p=0}^{k_i-1} \chi_i(p)=0,
$$
it follows that
$$
\langle d_{\rm osc}^{(i)} (t) \, d_{\rm osc}^{(j)} (t+\epsilon) \rangle = 0,
\;\;\;\;\;\;\;\;\;\;\;\;\; \forall \; i \neq j.
$$
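This mean-value property is easy to check numerically; the sketch below (illustrative, with the characters mod $5$ built from the primitive root $2$) confirms that every non-principal character sums to zero over a full period:

```python
import cmath

# The four Dirichlet characters mod k = 5, built from the primitive root
# g = 2: chi_a(g^m) = exp(2*pi*i*a*m/(k-1)).  a = 0 is the principal
# character; the text's mean-value property says the others sum to zero.
k, g = 5, 2
ind = {pow(g, m, k): m for m in range(k - 1)}  # discrete logarithm table

def chi(a, n):
    return 0.0 if n % k == 0 else cmath.exp(2j * cmath.pi * a * ind[n % k] / (k - 1))

period_sums = [sum(chi(a, n) for n in range(k)) for a in range(k - 1)]
print([abs(s) for s in period_sums])  # ~[4.0, 0.0, 0.0, 0.0] up to rounding
```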
We thus conclude that zeros of different $L$--functions ({\it i.e.}, different $k$
and/or different character) are uncorrelated
\begin{equation} \label{317}
R_2 (\epsilon) = d_{\rm av} ^2 - \sum_i \frac{\sin^2 \( \pi f_i d_{\rm av} \epsilon\)}{\pi^2\epsilon^2}.
\end{equation}
We have also numerically verified this prediction. Fig.2 plots the correlations
between zeros for the product of the (three) non--principal characters for
$k=5$. The fluctuations are much smaller than in Fig.1 because of the better
statistics. The continuous curve in part a) is the prediction (\ref{317}),
while the GUE result for $p(s)$ for a product of independent sets can be
found in the appendix $2$ of Ref.\cite{mehta}. More extensive numerical
computations concerning the critical zeros of Dirichlet $L$--functions can be
found in \cite{hejhal,rumely}.
\section{Epstein zeta function}
\setcounter{equation}{0}
Let us briefly consider another interesting example of a zeta function,
namely Epstein's zeta function associated with a positive definite
quadratic form $Q(x,y)=ax^2+bxy+cy^2$
\begin{equation} \label{41}
Z_Q (s) = \sum_{m,n} Q (m,n) ^{-s},
\end{equation}
where the summation is done over all integers $m$, $n$ except $m=n=0$. $Z_Q
(s)$ is an analytic function, regular for ${\rm Re}(s) >1$, satisfying a functional
equation and an approximate functional equation (see e.g. \cite{potter,pt}). In
the following, we will consider the particular case of integer $a$, $b$ and $c$.
The properties of these functions strongly depend on the value of the
discriminant $\Delta=b^2-4ac$. If the class number of quadratic forms with a
given discriminant is one, $Z_Q (s)$ can be written as a product of two
Dirichlet $L$--series, like for example
\begin{equation} \label{42}
\sum_{m,n} \frac{1}{\( m^2+n^2\)^s} = 4 \, \zeta (s) \, L (s, \chi)
\end{equation}
where $L (s, \chi) $ is the non--principal Dirichlet $L$--function for $k=4$ (cf. table
1)
$$
L (s, \chi) = 1 - \frac{1}{3^s} + \frac{1}{5^s} - \frac{1}{7^s} + \cdots .
$$
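The factorization \eqref{42} can be checked numerically; the following sketch (illustrative, at $s=2$, where $L(2,\chi)$ is Catalan's constant) compares a truncated lattice sum with $4\,\zeta(2)\,L(2,\chi)$:

```python
import math

# Truncated check of  sum_{(m,n) != (0,0)} (m^2+n^2)^(-s) = 4 zeta(s) L(s,chi)
# at s = 2; the truncation error of the lattice sum is O(1/N^2).
s, N = 2, 300
lattice = sum(1.0 / (m * m + n * n) ** s
              for m in range(-N, N + 1)
              for n in range(-N, N + 1)
              if (m, n) != (0, 0))

zeta2 = math.pi ** 2 / 6.0
L2 = sum((-1) ** j / (2 * j + 1) ** 2 for j in range(100_000))  # Catalan's constant

print(lattice, 4 * zeta2 * L2)  # agree to about 4 decimal places
```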
Epstein zeta functions of this type obviously have an Euler product and are
assumed to satisfy the generalized Riemann hypothesis. For them, the results of
the previous section hold, and we arrive at the conclusion that the statistical
properties of their critical zeros are those of an uncorrelated superposition
of two GUE sets, Eq.(\ref{317}). Fig.3 illustrates this behaviour for the
function (\ref{42}).
When the class number is bigger than one, Epstein's functions cannot be
factorized as in (\ref{42}) and have no Euler product. Only a sum over all
classes of forms with a given discriminant is believed to have the
above--mentioned properties. One (simple) member of this non--Euler class of
functions is
\begin{equation} \label{43}
Z_Q (s) = \sum_{m,n} \frac{1}{\( m^2+5n^2\)^s}.
\end{equation}
For such functions with integer coefficients it is known
\cite{pt}-\cite{hejhal2} that
\begin{itemize}
\item[(i)] there is an infinite number of zeros lying on ${\cal L}_c \,$;
\item[(ii)] many zeros lie off that line;
\item[(iii)] however, almost all the zeros lie on ${\cal L}_c \,$ or in its immediate
neighbourhood.
\end{itemize}
Statement (iii) was recently made more precise. In fact, in
Refs.~\cite{bombieri,hejhal2} it was proved that $N_c (t)/N(t) \rightarrow 1$
as $t\rightarrow \infty$, where $N(t)$ denotes the number of zeros of $Z_Q (s)$
whose imaginary part is less than or equal to $t$ and $N_c (t)$ the number of them
lying on ${\cal L}_c \,$. To prove this, the generalized Riemann hypothesis for Dirichlet
$L$--functions was assumed to be valid as well as certain assumptions on the
correlations between zeros.
Eq.(\ref{43}) is an example of class number $2$ Epstein's zeta function, which
in general can be expressed as a sum of two terms $L_1 L_2 \pm L_3 L_4$, the
$L_i$ being appropriate Dirichlet $L$--functions. In \cite{bombieri,hejhal2} it
was proved that in sums of this type, in a certain range of $t$, typically one
of the two terms `dominates'. This suggests that the statistical properties of
the critical zeros for functions of the type (\ref{43}) should be close to an
uncorrelated superposition of two GUE sets.
\section{Concluding Remarks}
We have computed the two--point correlation function for the critical zeros of
Dirichlet $L$--functions using the Hardy--Littlewood conjecture for the
distribution of prime numbers and showed that for any modulus and character the
main term agrees with the statistics of the Gaussian Unitary Ensemble of random
matrices. These results generalize those of Refs.~\cite{gm,keating} for the
Riemann zeta function, and provide a unifying property for all $L$--functions.
The problem of estimating the next--to--leading terms (which should tend to
zero as $t\rightarrow \infty$) is not simple, since it is connected to the
short--range correlations between prime numbers~\cite{montgomery,gm}.
The Hamiltonian matrix of a quantum system having a discrete symmetry (like
parity) splits, in the appropriate basis, into uncoupled submatrices, each of
them corresponding to a symmetry class. It is well known (but not proved) that
for a dynamical system with at least two degrees of freedom the eigenvalues
belonging to different submatrices are uncorrelated. To each of these
submatrices is associated a character of the symmetry group, which enters in
the semiclassical trace formula. In fact, the symmetry-projected semiclassical
spectral density includes the character of the symmetry group \cite{robbins}
exactly in the same way as in Eq.(2.7b) for Dirichlet $L$--functions. Thus,
different characters in $L$--functions are the analog of different symmetry
classes in quantum mechanics. In the light of these considerations, our result
on the statistical independence of the zeros of different $L$--functions can be
interpreted as the analog of the above-mentioned independence of the
eigenvalues of different symmetry classes in quantum mechanics.
In all the cases we have investigated the zeros of zeta functions were
distributed according to the statistics of the Gaussian {\em Unitary} Ensemble
of random matrices, or a superposition of a few of them. Whether there exists a
number--theoretic zeta function whose zeros obey the statistics of the
Gaussian {\em Orthogonal} Ensemble remains an open problem.
\vspace{0.4in}
\noindent\undertext{\bf Acknowledgements}: We acknowledge discussions with O.
Bohigas and M. C. Gutzwiller at the initial stage of this work, and one of the
referees for pointing out the reference \cite{rumely}. One of us (PL) thanks
the Department of Physics of the University of Buenos Aires where part of the
paper was written.
\vspace{0.4in}
\section{Introduction}\label{Se:intro}
Let $T >0, u^0\in H,$ and consider the initial value
problem of seeking $u \in C((0,T];D(A))\cap C([0,T];H)$ satisfying
\begin{equation}
\label{ivp}
\left \{
\begin{aligned}
&u' (t) + Au(t) = 0, \quad 0<t<T,\\
& u(0)=u^0 ,
\end{aligned}
\right .
\end{equation}
with $A$ a positive definite, selfadjoint, linear operator on a
Hilbert space $(H, (\cdot , \cdot )) $ with domain $D(A)$
dense in $H.$
We consider the $q$-step backward difference formula (BDF) method, generated by the polynomials $\alpha$ and $\beta,$
\begin{equation}
\label{BDF1}
\alpha (\zeta)= \sum_{j=1}^q \frac 1j \zeta^{q-j} (\zeta-1)^j
=\sum_{j=0}^q \alpha_j \zeta^j,
\quad \beta(\zeta)=\zeta^q.
\end{equation}
The BDF methods are $A(\vartheta_q)$-stable with
$\vartheta_1=\vartheta_2=90^\circ,
\vartheta_3\approx 86.03^\circ, \vartheta_4\approx 73.35^\circ, \vartheta_5\approx 51.84^\circ$ and $\vartheta_6 \approx 17.84^\circ$;
see \cite[Section V.2]{HW}. Exact values of $\vartheta_q, q=3,4,5,6,$
are given in \cite{AK2}. The order of the $q$-step method is $q.$
Let $N\in {\mathbb N},$ $\tau:=T/N$ be the time step, and $t^n :=n \tau,
n=0,\dotsc ,N,$ be a uniform partition of the interval $[0,T].$
We recursively define a sequence of approximations $u^m$ to
the nodal values $ u(t^m)$ by the $q$-step BDF method,
\begin{equation}
\label{ab}
\sum_{i=0}^q \alpha_i u^{n+i}+\tau A u^{n+q}=0,\quad n=0,\dotsc,N-q,
\end{equation}
assuming that starting approximations $u^0, \dotsc, u^{q-1}$ are given.
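Before turning to the analysis, we note that the scheme is easy to exercise numerically. The sketch below (an illustrative check, not part of the analysis) applies the six-step method to the scalar test problem $u'+u=0$, $u(0)=1$, with exact starting values, and exhibits the expected sixth-order convergence:

```python
import math

# Six-step BDF for u' + u = 0, u(0) = 1, with exact starting values.
# Coefficients from 60*alpha(z) = 147 z^6 - 360 z^5 + 450 z^4 - 400 z^3
#                                 + 225 z^2 - 72 z + 10.
alpha = [10 / 60, -72 / 60, 225 / 60, -400 / 60, 450 / 60, -360 / 60, 147 / 60]

def bdf6_error(tau, T=1.0, lam=1.0):
    N = round(T / tau)
    u = [math.exp(-lam * j * tau) for j in range(6)]       # exact start
    for n in range(N - 5):
        u.append(-sum(alpha[i] * u[n + i] for i in range(6)) / (alpha[6] + tau * lam))
    return abs(u[N] - math.exp(-lam * T))

e1, e2 = bdf6_error(0.05), bdf6_error(0.025)
print(e1, e2, e1 / e2)  # halving tau: the error ratio should be near 2**6 = 64
```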
Let $| \cdot |$ denote the norm on $H$ induced by the inner product $(\cdot , \cdot )$, and introduce on $V, V:=D(A^{1/2}),$
the norm $\| \cdot \|$ by $\| v\| :=| A^{1/2} v |.$
We identify $H$ with its dual, and denote by $V'$ the dual of $V$,
and by $\| \cdot \|_\star$ the dual norm on $V', \|v \|_\star=| A^{-1/2} v |.$
We shall use the notation $(\cdot , \cdot )$ also for the antiduality
pairing between $V'$ and $V.$
Stability of the A-stable one- and two-step BDF methods \eqref{ab}
can be easily established by the energy method. The powerful
Nevanlinna--Odeh multiplier technique extends the applicability
of the energy method to the non A-stable three-, four- and five-step BDF methods.
In contrast, as we shall see, no Nevanlinna--Odeh multiplier exists for the six-step BDF method.
Here, we show that, in combination with the Grenander--Szeg\"o theorem,
the energy technique is applicable even with multipliers satisfying milder
requirements than Nevanlinna--Odeh multipliers. We introduce such
multipliers for the six-step BDF method and prove stability by the
energy technique.
An outline of the paper is as follows: In Section \ref{Se:mult}, we relax the requirements
on the multipliers for BDF methods and present multipliers for the six-step
BDF method. In Section \ref{Se:stab}, we use a new multiplier in combination with
the Grenander--Szeg\"o theorem and prove stability of the six-step BDF method
for the initial value problem \eqref{ivp}.
\section{Multipliers for the six-step BDF method}\label{Se:mult}
Multipliers for the three-, four- and five-step BDF methods were introduced
by Nevanlinna and Odeh already in 1981, see \cite{NO}, to make the energy
method applicable to the stability analysis of these methods for parabolic
equations; no multipliers are required for the A-stable one- and two-step BDF methods.
The multiplier technique became widely known and popular after its first actual application
to the stability analysis for parabolic equations by Lubich, Mansour, and Venkataraman
in 2013; see \cite{LMV}.
The multiplier technique hinges on the celebrated equivalence of A- and G-stability for multistep methods
by Dahlquist; see \cite{D}.
\begin{lemma}[\cite{D}; see also \cite{BC} and
{\cite[Section V.6]{HW}}]\label{Le:Dahl}
Let $\alpha(\zeta)=\alpha_q\zeta^q+\dotsb+\alpha_0$ and
$\mu(\zeta)=\mu_q\zeta^q+\dotsb+\mu_0$ be polynomials,
with real coefficients, of degree
at most $q\ ($and at least one of them of degree $q)$
that have no common divisor.
Let $(\cdot,\cdot)$ be a real inner product with associated norm $|\cdot|.$
If
\begin{equation}
\label{A}
\Real \frac {\alpha(\zeta)}{\mu(\zeta)}>0\quad\text{for }\, |\zeta|>1,
\tag{A}
\end{equation}
then there exists a positive definite symmetric matrix $G=(g_{ij})\in {\mathbb R}^{q,q}$
and real $\delta_0,\dotsc,\delta_q$ such that for $v^0,\dotsc,v^{q}$ in the inner product space,
\begin{equation}
\label{G}
\Big (\sum_{i=0}^q\alpha_iv^{i},\sum_{j=0}^q\mu_jv^{j}\Big )=
\sum_{i,j=1}^qg_{ij}(v^{i},v^{j})
-\sum_{i,j=1}^qg_{ij}(v^{i-1},v^{j-1})
+\Big |\sum_{i=0}^q\delta_iv^{i}\Big |^2.
\tag{G}
\end{equation}
\end{lemma}
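To make the identity \eqref{G} concrete, consider the simplest case $q=1$ (backward Euler), where $\alpha(\zeta)=\zeta-1$, $\mu(\zeta)=\zeta$, $G=(1/2)$ and $\delta_1=-\delta_0=1/\sqrt{2}$. The sketch below (illustrative, real scalars) verifies $(v^1-v^0)\,v^1=\tfrac12|v^1|^2-\tfrac12|v^0|^2+\tfrac12|v^1-v^0|^2$ on random data:

```python
import random

# Identity (G) for q = 1: alpha(z) = z - 1, mu(z) = z,
# G = (1/2), delta_1 = -delta_0 = 1/sqrt(2), so the delta-term is
# |delta_0 v0 + delta_1 v1|^2 = (v1 - v0)^2 / 2.
random.seed(0)
devs = []
for _ in range(1000):
    v0, v1 = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = (v1 - v0) * v1
    rhs = 0.5 * v1 * v1 - 0.5 * v0 * v0 + 0.5 * (v1 - v0) ** 2
    devs.append(abs(lhs - rhs))
print(max(devs))  # zero up to rounding
```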
\begin{definition}[Multipliers and Nevanlinna--Odeh multipliers]\label{De:mult}
Let $\alpha$ be the generating polynomial of the $q$-step BDF method defined in \eqref{BDF1}.
Consider a $q$-tuple $(\mu_1,\dotsc,\mu_q)$ of real numbers such that
with the given $\alpha$ and
$\mu(\zeta):=\zeta^q-\mu_1\zeta^{q-1}-\dotsb-\mu_q,$
the pair $(\alpha,\mu)$ satisfies the A-stability condition \eqref{A},
and, in addition, the polynomials $\alpha$ and $\mu$ have no common divisor.
Then, we call $(\mu_1,\dotsc,\mu_q)$ a \emph{Nevanlinna--Odeh multiplier} for the
$q$-step BDF method if
\begin{equation}
\label{NO-multiplier}
1-|\mu_1|-\dotsb-|\mu_q|>0, \tag{P1}
\end{equation}
and simply a \emph{multiplier} if it satisfies the \emph{positivity} property
\begin{equation}
\label{pos-prop}
1-\mu_1\cos x-\dotsb-\mu_q\cos (qx) >0 \quad \forall x \in {\mathbb R}. \tag{P2}
\end{equation}
\end{definition}
Notice that, with the notation of this definition, \eqref{A} and \eqref{G}, respectively,
mean that the $q$-step scheme described by the parameters
$\alpha_q,\dotsc,\alpha_0,1,-\mu_1,\dotsc,-\mu_q$
and the corresponding one-leg method are A- and G-stable, respectively.
Of course, these are necessarily low order methods but this is irrelevant here;
we do not compute with them; we only use them to establish stability of
the $q$-step BDF method.
Optimal Nevanlinna--Odeh multipliers, i.e., the ones with minimal $|\mu_1|+\dotsb+|\mu_q|$,
for the three-, four- and five-step BDF methods were given in \cite{AK1}.
Some comments on the requirements in Definition \ref{De:mult} and
their role in the stability analysis are in order.
To prove stability of the method by the energy technique, we test \eqref{ab}
by $u^{n+q}-\mu_1u^{n+q-1}-\dotsb-\mu_qu^n$ and obtain
\begin{equation}
\label{ab-energy}
\Big (\sum_{i=0}^q \alpha_i u^{n+i},u^{n+q}-\sum_{j=1}^q \mu_ju^{n+q-j}\Big )
+\tau \Big (A u^{n+q},u^{n+q}-\sum_{j=1}^q \mu_j u^{n+q-j}\Big ) =0,
\end{equation}
$n=0,\dotsc,N-q.$ The first term on the left-hand side can be estimated
from below using \eqref{G}; this is the motivation for the requirement \eqref{A}.
Which one of the other two conditions, \eqref{NO-multiplier} or \eqref{pos-prop},
enters into the stability analysis, depends on the way we handle the second term
on the left-hand side of \eqref{ab-energy}.
If we estimate this term from below at every time level and then sum over $n$,
requirement \eqref{NO-multiplier} is crucial; cf., e.g., \cite{AL}, \cite{A2}, \cite{AK1}.
Instead, if we sum over $n$ and subsequently estimate the sum of the second
terms, the relaxed positivity condition \eqref{pos-prop} suffices.
In the latter approach, in view of the Grenander--Szeg\"o theorem,
\eqref{pos-prop} ensures that symmetric band Toeplitz matrices, of any dimension,
with generating function the positive trigonometric polynomial
$(1-\varepsilon)-\mu_1\cos x-\dotsb-\mu_q\cos (qx),$
for sufficiently small $\varepsilon$, are positive definite;
see Section \ref{Se:stab}.
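As a numerical illustration (a sketch with the concrete choice $\varepsilon=1/32$ and the six-step multiplier $\mu_1=13/9$, $\mu_2=-25/36$, $\mu_3=1/9$ introduced below), the band Toeplitz matrices in question can be checked for positive definiteness directly, e.g. via a Cholesky factorization, which succeeds precisely for positive definite matrices:

```python
import math

# Symmetric band Toeplitz matrix with generating function
#   f(x) = (1 - eps) - mu1 cos x - mu2 cos 2x - mu3 cos 3x,
# eps = 1/32, (mu1, mu2, mu3) = (13/9, -25/36, 1/9): the entry at lag j is
# -mu_j/2 (and 1 - eps on the diagonal), since cos(jx) contributes to the
# two j-th off-diagonals with weight 1/2 each.
col = {0: 1 - 1 / 32, 1: -13 / 18, 2: 25 / 72, 3: -1 / 18}

def toeplitz(n):
    return [[col.get(abs(i - j), 0.0) for j in range(n)] for i in range(n)]

def is_positive_definite(a):
    # plain Cholesky; fails iff some pivot is non-positive
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                piv = a[i][i] - s
                if piv <= 0.0:
                    return False
                L[i][i] = math.sqrt(piv)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return True

results = [is_positive_definite(toeplitz(n)) for n in (5, 20, 50)]
print(results)
```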
It is well known that any multiplier for the $q$-step BDF method satisfies the
property
\begin{equation}
\label{multiplier-lower-b}
|\mu_1|+\dotsb+|\mu_q|\geqslant \cos\vartheta_q;
\end{equation}
see \cite{NO}. In particular, for the six-step BDF method this means that $|\mu_1|+\dotsb+|\mu_6|\geqslant 0.9516169.$
Actually, as we shall see, no Nevanlinna--Odeh multiplier exists for the six-step BDF method; see Remark \ref{rem:mult-general}.
This was the motivation for our relaxing the requirements on multipliers.
Fortunately, the relaxed positivity condition \eqref{pos-prop} leads to a positive result.
\begin{proposition}[A multiplier for the six-step BDF method]\label{Pr:mult-six}
The set of numbers
\begin{equation}
\label{mu2}
\mu_1=\frac {13}{9},\quad \mu_2=-\frac {25}{36},\quad \mu_3=\frac 19,\quad \mu_4=\mu_5=\mu_6=0,
\end{equation}
is a multiplier for the six-step BDF method.
\end{proposition}
\begin{proof}
The proof consists of two parts; we first prove the A-stability property
\eqref{A} and subsequently the positivity property \eqref{pos-prop}.
\emph{A-stability property \eqref{A}}.
The corresponding polynomial $\mu$ is
\begin{equation}
\label{mu1}
\mu(\zeta)=\zeta^3\big (\zeta-\frac 12\big )^2\big (\zeta-\frac 49\big )
= \zeta^6-\frac{13}{9}\zeta^5+\frac{25}{36}\zeta^4-\frac 19\zeta^3
=\frac 1{36}\zeta^3(36\zeta^3-52\zeta^2+25\zeta-4).
\end{equation}
We recall the generating polynomial $\alpha$ of the six-step BDF method,
\begin{equation*}
\label{alpha1}
60 \alpha(\zeta)=147\zeta^6-360\zeta^5+450\zeta^4-400\zeta^3+225\zeta^2- 72\zeta+10.
\end{equation*}
First, $\alpha(1/2)=-37/3840$ and $\alpha(4/9)=-0.003730423508913,$ whence
the polynomials $\alpha$ and $\mu$ have no common divisor.
Now, $\alpha(z)/\mu (z)$ is holomorphic outside the unit disk in the
complex plane, and
\[\lim_{|z|\to \infty}\frac {\alpha(z)}{\mu (z)}=\alpha_6=\frac {147}{60}>0.\]
Therefore, according to the maximum principle for harmonic functions,
the A-stability property \eqref{A} is equivalent to
\begin{equation*}
\Real \frac {\alpha(\zeta)}{\mu(\zeta)}\geqslant 0 \quad \forall \zeta\in {\mathcal K},
\end{equation*}
with ${\mathcal K}$ the unit circle in the complex plane, ${\mathcal K}:=\{\zeta\in {\mathbb C} : |\zeta|=1\},$
i.e., equivalent to
\begin{equation}
\label{Real1}
\Real \big [\alpha({\rm e}^{{\rm i} \varphi})\mu ({\rm e}^{-{\rm i} \varphi})\big ]\geqslant 0 \quad \forall \varphi \in {\mathbb R}.
\end{equation}
In view of \eqref{mu1}, the desired property \eqref{Real1} takes the form
\begin{equation}
\label{Real2}
\Real \big [60\alpha({\rm e}^{{\rm i} \varphi})
{\rm e}^{-{\rm i} 3\varphi} \big (36{\rm e}^{-{\rm i} 3\varphi}-52{\rm e}^{-{\rm i} 2\varphi}+25 {\rm e}^{-{\rm i} \varphi}-4\big )\big ]\geqslant 0 \quad \forall \varphi \in {\mathbb R}.
\end{equation}
Now, it is easily seen that
\begin{equation*}
\begin{aligned}
60\alpha({\rm e}^{{\rm i} \varphi}){\rm e}^{-{\rm i} 3\varphi}
&=\big [157 \cos(3\varphi)-432 \cos(2\varphi) +675 \cos\varphi-400\big ]\\
&+{\rm i}\big [137 \sin(3\varphi)-288 \sin(2\varphi) +225 \sin\varphi\big ].
\end{aligned}
\end{equation*}
With $x:=\cos\varphi,$ recalling the elementary trigonometric identities
\[\cos(2\varphi) =2x^2-1,\ \cos(3\varphi) =4x^3-3x,\ \sin(2\varphi) =2x\sin\varphi,\ \sin(3\varphi) =(4x^2-1)\sin\varphi,\]
we easily see that
\begin{equation}
\label{Real3}
60\alpha({\rm e}^{{\rm i} \varphi}){\rm e}^{-{\rm i} 3\varphi}
=4(1-x)(8+59x-157x^2)+{\rm i}\, 4 (137x^2-144x+22)\sin\varphi.
\end{equation}
Notice that the factor $1-x$ in the real part of $\alpha({\rm e}^{{\rm i} \varphi}){\rm e}^{-{\rm i} 3\varphi}$
is due to the fact that $\alpha(1)=0.$ Similarly,
\[\begin{aligned}
36{\rm e}^{-{\rm i} 3\varphi}-52{\rm e}^{-{\rm i} 2\varphi}+25 {\rm e}^{-{\rm i} \varphi}-4
&=\big [36 \cos(3\varphi)-52 \cos(2\varphi) +25 \cos\varphi-4\big ]\\
&-{\rm i}\big [36 \sin(3\varphi)-52 \sin(2\varphi) +25 \sin\varphi\big ]
\end{aligned}\]
and
\begin{equation}
\label{Real4}
\begin{aligned}
36{\rm e}^{-{\rm i} 3\varphi}-52{\rm e}^{-{\rm i} 2\varphi}+25 {\rm e}^{-{\rm i} \varphi}-4
&= (144x^3-104x^2-83x+48 )\\
&-{\rm i} (144x^2-104x-11 )\sin\varphi.
\end{aligned}
\end{equation}
In view of \eqref{Real3} and \eqref{Real4}, the desired property \eqref{Real2}
can be written in the form
\begin{equation}
\label{Real5}
4(1-x)P(x)\geqslant 0 \quad \forall x \in [-1,1]
\end{equation}
with
\[\begin{aligned}
P(x)&:=(8+59x-157x^2)(144x^3-104x^2-83x+48)\\
&+(1+x)(137x^2-144x+22)(144x^2-104x-11),
\end{aligned}\]
i.e.,
\begin{equation}
\label{Real6}
P(x)=2(71+611x+1334x^2-5150x^3+4784x^4-1440x^5).
\end{equation}
It is now easy to see that $P$ is positive in the interval $[-1,1],$ and thus that \eqref{Real2} is valid.
First, the quadratic polynomial $ 71+611x+ 1334x^2$ is positive for all real $x$,
since it does not have real roots. All other terms are positive for negative $x,$
whence $P(x)$ is positive for negative $x.$ Furthermore, for $0\leqslant x \leqslant 1,$ we obviously have
$71+611x\geqslant 682x^2,$ and can estimate $P(x)$ from below as follows
\[\begin{aligned}
P(x)&\geqslant 2x^2 (2016-5150x+4784x^2-1440x^3)\\
&=2x^2 \big [(2016-5150x+3344x^2)+1440x^2(1-x) \big ].
\end{aligned}\]
Again, the quadratic polynomial $2016-5150x+3344x^2$ is positive for all real $x$, and the positivity of $P(x)$ follows.
\emph{Positivity property \eqref{pos-prop}.}
To prove the desired positivity property \eqref{pos-prop} for the multiplier \eqref{mu2},
we consider the function $f$,
\begin{equation}
\label{f}
f(x):=\frac{31}{32} -\frac{13}{9}\cos x+\frac{25}{36}\cos(2x)-\frac{1}{9}\cos(3x), \quad x\in {\mathbb R}.
\end{equation}
Now, elementary trigonometric identities lead to the following form of $f$
\[f(x)=-\frac{4}{9}\cos^3 x+\frac{25}{18}\cos^2x-\frac{10}{9}\cos x+\frac{79}{288}.\]
Hence, we consider the polynomial $p,$
\begin{equation}
\label{p}
p(x):=-\frac{4}{9}x^3+\frac{25}{18}x^2-\frac{10}{9}x+\frac{79}{288},\quad x\in [-1,1].
\end{equation}
It is easily seen that $p$ attains its minimum in $[-1,1]$ at $x^\star=(25-\sqrt{145})/24$
and
\[p(x^\star)=0.009321552602567 > 0.\]
Therefore, $f$ is indeed positive; in particular,
the desired positivity property \eqref{pos-prop} is satisfied. See also Figure \ref{Fig:f}.
\end{proof}
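Both properties in the proof can also be confirmed by a brute-force scan over the unit circle; the following sketch (illustrative, grid-based) checks \eqref{Real1} and the positivity of $1-\mu_1\cos x-\mu_2\cos(2x)-\mu_3\cos(3x)$ for the multiplier \eqref{mu2}:

```python
import cmath, math

# Grid check of (A) and (P2) for mu = (13/9, -25/36, 1/9, 0, 0, 0).
def alpha(z):
    return (147*z**6 - 360*z**5 + 450*z**4 - 400*z**3 + 225*z**2 - 72*z + 10) / 60

def mu(z):
    return z**6 - (13 / 9) * z**5 + (25 / 36) * z**4 - (1 / 9) * z**3

M = 20_000
phis = [2 * math.pi * m / M for m in range(M)]
min_A = min((alpha(cmath.exp(1j * t)) * mu(cmath.exp(1j * t)).conjugate()).real
            for t in phis)
min_P = min(1 - (13 / 9) * math.cos(t) + (25 / 36) * math.cos(2 * t)
            - (1 / 9) * math.cos(3 * t) for t in phis)
print(min_A, min_P)  # min_A ~ 0 (attained at phi = 0), min_P ~ 0.0406
```

Note that the minimum of the second expression exceeds the minimum of $f$ in \eqref{f} by exactly $1/32$, consistent with the proof.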
\begin{figure}[!ht]
\centering
\psset{yunit=0.7cm,xunit=0.7}
\begin{pspicture}(-3.9,-0.6)(4.6,4.05)
\psaxes[ticks=none,labels=none,linewidth=0.6pt]{->}(0,0)(-4,-0.6)(4.3,3.67)%
[$\!\!x$,0][$ $,90]
\psset{plotpoints=10000}
\psFourier[cosCoeff=1.9375 -1.44444444444 0.69444444444 -.1111111111111, sinCoeff=0,
linewidth=0.5pt,linecolor=blue]{-3.1415926}{3.1415926}
\uput[0](-0.44,3.88){\small $y$}
\uput[0](-2.37,1.88){\small $f$}
\uput[0](2.74,-0.22){\small $\pi$}
\uput[0](-3.9,-0.22){\small $-\pi$}
\pscircle*(0,1){0.038}
\pscircle*(0,2){0.038}
\pscircle*(0,3){0.038}
\uput[0](-0.62,1){\small $1$}
\uput[0](-0.62,2){\small $2$}
\uput[0](-0.62,3){\small $3$}
\uput[0](-0.78,-0.26){\small $O$}
\pscircle*(3.1415926,0){0.038}
\pscircle*(-3.1415926,0){0.038}
\end{pspicture}%
\hspace*{0.9cm}
\psset{yunit=0.7cm,xunit=2.7}
\hspace*{-0.05cm}\begin{pspicture}(-1.18,-0.6)(1.40,4.05)
\psaxes[ticks=none,labels=none,linewidth=0.6pt]{->}(0,0)(-1.2,-0.6)(1.3,3.67)%
[$\!\!x$,0][$ $,90]
\psset{plotpoints=10000}
\psPolynomial[coeff=0.27430556 -1.11111111111 1.38888889 -0.444444444, linewidth=0.5pt,linecolor=blue]{-1}{1}
\uput[0](-0.77,1.08){\small $p$}
\uput[0](-0.17,3.88){\small $y$}
\uput[0](0.86,-0.27){\small $1$}
\uput[0](-1.2,-0.27){\small $-1$}
\pscircle*(0,1){0.038}
\pscircle*(0,2){0.038}
\pscircle*(0,3){0.038}
\uput[0](-0.22,1){\small $1$}
\uput[0](-0.22,2){\small $2$}
\uput[0](-0.22,3){\small $3$}
\uput[0](-0.28,-0.27){\small $O$}
\pscircle*(1,0){0.038}
\pscircle*(-1,0){0.038}
\end{pspicture}%
\caption{The graphs of the function $f$ and the polynomial $p$ of \eqref{f} and \eqref{p}.}
\label{Fig:f}
\end{figure}
\subsection{On the conditions \eqref{pos-prop} and \eqref{NO-multiplier}}\label{SSe:discrepancy}
We briefly comment on the discrepancy between the conditions \eqref{pos-prop} and \eqref{NO-multiplier}.
Obviously, \eqref{NO-multiplier} implies \eqref{pos-prop}.
Let $S_q\subset {\mathbb R}^q$ denote the region of the points $(\mu_1,\dotsc,\mu_q)$ satisfying the
positivity condition \eqref{pos-prop}. Since \eqref{NO-multiplier} and \eqref{pos-prop} are obviously equivalent
for $q$-tuples $(\mu_1,\dotsc,\mu_q)$ with only one nonvanishing component,
the intersection of $S_q$ with each coordinate axis is an interval of the form $(-1,1).$
Let us next focus on the instrumental case of the intersection of $S_q$ with the $\mu_1\mu_2$ plane, i.e.,
consider the set of points $(\mu_1,\dotsc,\mu_q)\in S_q$ with $\mu_3=\dotsb=\mu_q=0.$
Then, the positivity condition reads
\begin{equation}
\label{p-discrep1}
p(x):=1-\mu_1 x-\mu_2 (2x^2-1)>0,\quad x\in [-1,1].
\end{equation}
For $\mu_2=0,$ this condition is satisfied if and only if $|\mu_1|<1.$
For nonvanishing $\mu_2,$ the derivative of $p$ vanishes at
$x^\star=-\mu_1/(4\mu_2)$ and
%
\begin{equation}
\label{p-discrep2}
p(x^\star)=1+\mu_2 +\frac 18 \frac {\mu_1^2} {\mu_2}.
\end{equation}
For positive $\mu_2$, this is a positive global maximum of $p.$
Therefore, in this case \eqref{p-discrep1} is satisfied if and only if $p(-1)$ and $p(1)$ are positive,
whence
\begin{equation}
\label{p-discrep3}
\mu_2 < 1-|\mu_1|.
\end{equation}
For negative $\mu_2$, the expression in \eqref{p-discrep2} is a global minimum of $p.$
Now, we distinguish two subcases. If $|\mu_2|\leqslant |\mu_1|/4,$ then the minimum is attained
at a point $|x^\star|\geqslant 1,$ whence \eqref{p-discrep3} suffices for \eqref{p-discrep1}.
If, on the other hand, $|x^\star|< 1,$ then \eqref{p-discrep1} is satisfied if and only
if the expression on the right-hand side of \eqref{p-discrep2} is positive, i.e.,
\[4\Big (\mu_2+\frac12 \Big )^2+\frac 12 \mu_1^2 <1;\]
that is, $(\mu_1,\mu_2)$ lies in the interior of an ellipse.
Summarizing, \eqref{p-discrep1} is satisfied if and only if $(\mu_1,\mu_2)$ lies in the region
\[S:=\big \{(\mu_1,\mu_2): -\frac {|\mu_1|}4\leqslant \mu_2 < 1-|\mu_1|\big \}
\cup \big \{(\mu_1,\mu_2): 4\Big (\mu_2+\frac12 \Big )^2+\frac 12 \mu_1^2 <1 \text{ and } |\mu_2|>\frac {|\mu_1|}4\big \}.\]
Notice that the lines $\mu_2=\pm(1- \mu_1)$ are tangent to the ellipse at their intersection points
with the lines $\mu_2=\mp \mu_1/4$, respectively, i.e., at the points $(\pm 4/3,-1/3)$.
This is, of course, due to the fact that for these values the global minimum in \eqref{p-discrep2}
is attained at the points $x^\star=\pm 1.$ Therefore, the intersection $S$ of $S_q$ with the
$\mu_1\mu_2$ plane is the union of two overlapping simple sets, a triangle and an ellipse,
\begin{equation}
\label{p-discrep4}
S=\big \{(\mu_1,\mu_2): -\frac 13\leqslant \mu_2 < 1-|\mu_1|\big \}
\cup \big \{(\mu_1,\mu_2): 4\Big (\mu_2+\frac12 \Big )^2+\frac 12 \mu_1^2 <1 \big \};
\end{equation}
see Figure \ref{Fi:NO-posit}, right. Notice, in particular, that
\begin{equation}\label{eqn:mu1-mu2}
|\mu_1|<\sqrt{2} \quad \text{and}\quad |\mu_2|<1.
\end{equation}
Replacing $x$ by $x/2$ and by $x/3$, respectively, in the positivity condition \eqref{pos-prop},
it is obvious that the intersection of $S_q$ with the $\mu_2\mu_4$ plane, for $q\geqslant 4$,
and with the $\mu_3\mu_6$ plane, for $q=6$, respectively, is of the form \eqref{p-discrep4}
with $(\mu_1,\mu_2)$ replaced by $(\mu_2,\mu_4)$ and by $(\mu_3,\mu_6)$, respectively.
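The characterization \eqref{p-discrep4} can be spot-checked numerically; the sketch below (illustrative sample points, grid minimum of $p$) compares the closed-form membership test with direct positivity of $p$ on $[-1,1]$:

```python
# Spot check of the region S in \eqref{p-discrep4} against a brute-force
# minimum of p(x) = 1 - mu1*x - mu2*(2x^2 - 1) over [-1, 1].
def in_S(m1, m2):
    triangle = -1 / 3 <= m2 < 1 - abs(m1)
    ellipse = 4 * (m2 + 0.5) ** 2 + 0.5 * m1 * m1 < 1
    return triangle or ellipse

def p_positive(m1, m2, M=20_001):
    xs = (-1 + 2 * i / (M - 1) for i in range(M))
    return min(1 - m1 * x - m2 * (2 * x * x - 1) for x in xs) > 0

# interior/exterior sample points, chosen away from the boundary of S
samples = [(0.0, 0.0), (1.3, -0.32), (0.0, -0.95), (-1.2, -0.5),
           (1.5, 0.0), (0.0, 1.1), (0.5, 0.6), (-0.9, -0.9)]
agreement = all(in_S(m1, m2) == p_positive(m1, m2) for m1, m2 in samples)
print(agreement)  # True
```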
\begin{figure}[!ht]
\centering
\psset{yunit=1.7cm,xunit=1.7}
\begin{pspicture}(-1.6,-1.43)(1.98,1.63)
\psset{plotpoints=10000}
\pspolygon[linewidth=0.4pt,linecolor=black,fillstyle=solid,fillcolor=lightblue](1,0)(0,1)(-1,0)(0,-1)
\psaxes[ticks=none,labels=none,linewidth=0.6pt]{->}(0,0)(-1.6,-1.4)(1.7,1.5)%
[$\!\!\mu_1$,0][$ $,90]
\uput[0](-0.18,1.6){\small $\mu_2$}
\uput[0](-0.27,1.03){\small $1$}
\uput[0](-0.40,-1.12){\small $-1$}
\uput[0](0.9,0.12){\small $1$}
\uput[0](-1.42,0.12){\small $-1$}
\pscircle*(0,1){0.038}
\pscircle*(0,-1){0.038}
\pscircle*(1,0){0.038}
\pscircle*(-1,0){0.038}
\end{pspicture}%
\quad
\begin{pspicture}(-1.9,-1.43)(2.03,1.63)
\psset{plotpoints=10000}
\psellipse[linewidth=0.4pt,linecolor=black,fillstyle=solid,fillcolor=lightblue](0,-0.5)(1.41421356237,0.5)
\pspolygon[linestyle=none,linecolor=black,fillstyle=solid,fillcolor=lightblue](0,1)(-1.3333333333,-0.3333333333)(1.3333333333,-0.3333333333)
\psaxes[ticks=none,labels=none,linewidth=0.6pt]{->}(0,0)(-1.9,-1.4)(1.9,1.5)%
[$\!\!\mu_1$,0][$ $,90]
\psline[linewidth=0.4pt](0,1)(1.3333333333,-0.3333333333)
\psline[linewidth=0.4pt](0,1)(-1.3333333333,-0.3333333333)
\pscircle*(0,1){0.038}
\pscircle*(0,-1){0.038}
\pscircle*(1,0){0.038}
\pscircle*(-1,0){0.038}
\uput[0](-0.18,1.6){\small $\mu_2$}
\uput[0](-0.27,1.03){\small $1$}
\uput[0](-0.40,-1.12){\small $-1$}
\uput[0](0.9,0.12){\small $1$}
\uput[0](-1.42,0.12){\small $-1$}
\uput[0](0.52,-0.35){\small $S$}
\end{pspicture}
\caption{Illustration of the conditions \eqref{NO-multiplier} and \eqref{pos-prop},
left and right, respectively, for $\mu_3=\dotsb=\mu_6=0$; cf.\ \eqref{p-discrep4}.}
\label{Fi:NO-posit}
\end{figure}
\subsection{On the construction of multipliers}\label{SSe:construction}
In this part, we derive some necessary conditions for multipliers satisfying
the A-stability condition \eqref{A} and the relaxed positivity condition \eqref{pos-prop}.
To begin with, we show that no multiplier with $\mu_3=\dotsb=\mu_6=0$ exists.
\begin{proposition}\label{prop:no-2m}
There is no multiplier with $\mu_3=\dotsb=\mu_6=0$ satisfying \eqref{A}
and \eqref{pos-prop}.
\end{proposition}
\begin{proof}
The positivity condition \eqref{pos-prop} is satisfied if and only if
\begin{equation}
\label{multiplier-2}
1-\mu_1x-\mu_2(2x^2-1)>0\quad \forall x\in [-1,1];
\end{equation}
see \eqref{p-discrep1}.
The A-stability condition \eqref{A} is in this case equivalent to
\eqref{Real5} with
\begin{equation}
\label{multiplier-4}
\begin{aligned}
P(x)
&=(8+59x-157x^2)\big (4x^3-\mu_1(2x^2-1)-3x-\mu_2x\big )\\
&\quad+(1+x)(137x^2-144x+22)(4x^2-2\mu_1x-\mu_2-1).
\end{aligned}
\end{equation}
First, the estimate $|\mu_1|<\sqrt{2}$ in \eqref{eqn:mu1-mu2} and the nonnegativity of
\[P(-4/25)=-41.65312\mu_2+7.86979\mu_1-39.13478\]
lead to
\begin{equation}
\label{multiplier-4n}
\mu_2 < \frac {7.86979\sqrt{2}-39.13478}{41.65312}=-0.672343782385853.
\end{equation}
On the other hand, for $\mu_2<-0.672343782385853,$ we have $|\mu_2|>|\mu_1|/4,$ and thus $(\mu_1,\mu_2)$ must lie
in the interior of the ellipse in \eqref{p-discrep4}. Now, $P(0.99)=a\mu_2+b\mu_1+c$ with
\[a=\frac {2086460708677967}{35184372088832},\quad b=\frac{1053766469372221}{35184372088832},\quad c=\frac{9685378027}{109951162777600},\]
and the intersection points of the line $P(0.99)=0$ and the ellipse $4 (\mu_2+1/2 )^2+ \mu_1^2/2 =1$ are
\begin{equation}
\label{multiplier-AB}
\left\{\begin{aligned}
&A=(2.941186035762484\cdot 10^{-6}, -1.08131109678632\cdot 10^{-12}),\\
&B=(1.328818676149621, -0.671118740185537).
\end{aligned}
\right.
\end{equation}
It is easily seen that $P(0.99)$ is nonnegative only in the part of the interior of the ellipse
to the right of the segment $AB;$ cf.\ Figure \ref{Fi:mult-2}.
Therefore, $P(0.99)\geqslant 0$ implies
\[\mu_2\geqslant -0.671118740185537.\]
This together with \eqref{multiplier-4n} leads to a contradiction;
hence, no multiplier of
the form $(\mu_1,\mu_2,0,$ $\dotsc,0)$ exists for the six-step BDF method.
\end{proof}
\begin{figure}[!ht]
\centering
\psset{yunit=1.7cm,xunit=1.7}
\begin{pspicture}(-2,-1.5)(2,0.6)
\pscustom[linewidth=0.01pt,fillstyle=solid,fillcolor=lightblue]{
\pscurve[linewidth=0.01pt](1.328818676149621,-0.671118740185535)(1.41421356237,-0.5)
(1.3333333333,-0.3333333333)(1.1,-0.185754872750587)(1,-0.146446609406726)
(0.9,-0.114318784486462)(0.6,-0.047230743093129)(0.3,-0.011379492857698)
(0.000002941186035762484,-0.000000000001081311309678635)
\psline[linewidth=0.0pt](0.000002941186035762484,-0.000000000001081311309678635)(1.328818676149621,-0.671118740185535)}
\psaxes[ticks=none,labels=none,linewidth=0.6pt]{->}(0,0)(-1.9,-1.4)(1.9,0.5)%
[$\!\!\mu_1$,0][$ $,90]
\psset{plotpoints=10000}
\psellipse[linewidth=0.4pt,linecolor=black,fillcolor=lightblue](0,-0.5)(1.41421356237,0.5)
\pscircle*(0,-1){0.038}
\pscircle*(0,-0.5){0.038}
\pscircle*(1,0){0.038}
\pscircle*(-1,0){0.038}
\pscircle*(1.328818676149621,-0.671118740185535){0.038}
\pscircle*(0.000002941186035762484,-0.000000000001081311309678635){0.038}
\psline[linewidth=0.4pt](0.000002941186035762484,-0.000000000001081311309678635)(1.328818676149621,-0.671118740185535)
\uput[0](0.83,0.12){\small $1$}
\uput[0](-1.35,0.12){\small $-1$}
\uput[0](-0.435,-0.50){\small $-\frac 12$}
\uput[0](-0.40,-1.12){\small $-1$}
\uput[0](1.22,-0.78){\small $B$}
\uput[0](-0.3,0.13){\small $O$}
\uput[0](-0.05,0.13){\small $A$}
\uput[0](-0.18,0.6){\small $\mu_2$}
\end{pspicture}
\caption{Among the interior points $(\mu_1,\mu_2)$ of the ellipse, $P(0.99)$, see \eqref{multiplier-4}, is nonnegative only
in the blue region; in the blue region, $\mu_2\geqslant -0.671118740185535$. The points $A$ and $B$
are given in \eqref{multiplier-AB}. At this scale, $A$ is indistinguishable from $O=(0,0)$.}
\label{Fi:mult-2}
\end{figure}
Our next attempt was to seek a multiplier with $\mu_4=\mu_5=\mu_6=0$.
In this case, the A-stability condition \eqref{A} and the positivity condition \eqref{pos-prop}
lead, respectively, to the conditions
\begin{equation}
\label{multiplier-7}
\begin{aligned}
P(x)
&=(8+59x-157x^2)\big (4x^3-\mu_1(2x^2-1)-3x-\mu_2x-\mu_3\big )\\
&\quad+(1+x)(137x^2-144x+22)(4x^2-2\mu_1x-\mu_2-1)\geqslant 0 \quad \forall x\in [-1,1]
\end{aligned}
\end{equation}
and
\begin{equation}
\label{multiplier-8}
f(x):=1-\mu_1\cos x-\mu_2\cos (2x)-\mu_3\cos (3x) >0 \quad \forall x \in {\mathbb R}.
\end{equation}
Necessary conditions for \eqref{multiplier-7} and \eqref{multiplier-8} can be derived
by evaluating $P$ and $f$ at suitable points.
For instance, we claim the following necessary condition,
which helps us to construct multipliers.
\begin{proposition}\label{prop:nec-cond}
If $(\mu_1,\mu_2,\mu_3,0,0,0)$ is a multiplier of the six-step BDF method, then there holds
\begin{equation*}
0.41990729<\mu_1<\sqrt{3}, \ -1<\mu_2< -0.58852878,\ 0<\mu_3<1, \ |\mu_1|+|\mu_2|+|\mu_3|>1.
\end{equation*}
\end{proposition}
\begin{proof}
First, $|\mu_2|<1$ follows immediately from the positivity of
$f(\pi/2)$ and of $f(0)$ and $f(\pi).$ Furthermore,
\[2f(2\pi/3)+f(0)=3(1-\mu_3)\quad\text{and}\quad 2f(\pi/3)+f(\pi)=3(1+\mu_3),\]
whence $|\mu_3| <1.$ In view of
\[f(\pi/6)=\frac{1}{2}\big(-\sqrt{3}\mu_1-\mu_2+2\big)\quad\text{and}\quad f(5\pi/6)=\frac{1}{2}\big(\sqrt{3}\mu_1-\mu_2+2\big),\]
we have $\sqrt{3}|\mu_1|<2-\mu_2,$ and, in combination with $\mu_2>-1,$ infer that
$|\mu_1|<\sqrt{3}.$
Up to this point, we did not use the nonnegativity of $P$. Now we check $P(0)\geqslant 0$, i.e.,
\begin{equation}
\label{multiplier-10}
P(0)=2\big [4(\mu_1-\mu_3)-11(1+\mu_2)\big ] \geqslant 0.
\end{equation}
Since $1+\mu_2>0,$ we infer that $\mu_3<\mu_1$. Furthermore, since $\mu_1<\sqrt{3}$ and $|\mu_3| <1,$
\[11\mu_2< 4\big (\sqrt{3}+1\big ) -11<-0.07179,\quad\text{whence}\quad \mu_2<-0.65263636\cdot 10^{-2}.\]
Meanwhile, since $274/625+1154\mu_2/25<0,$ the nonnegativity of
\begin{equation}
P(0.8)=\frac {274}{625}+\frac {1154}{25}\mu_2+\frac {3572}{125}\mu_1+\frac {1132}{25}\mu_3
\end{equation}
yields $3572 \mu_1/125+1132\mu_3/25> 0$, which together with $\mu_3<\mu_1$
leads to
\[\frac {3572}{125}\mu_1+\frac {1132}{25}\mu_1>\frac {3572}{125}\mu_1+\frac {1132}{25}\mu_3> 0,\]
i.e., $\mu_1>0.$ Therefore, we arrive at
\begin{equation*}
0<\mu_1<\sqrt{3},\quad -1<\mu_2<-0.65263636\cdot 10^{-2}\quad\text{and}\quad0<|\mu_3|<1.
\end{equation*}
Next, we prove $\mu_3>0$ by contradiction. If $\mu_3\leqslant 0$, then the positivity of $f(\pi/4)$ yields
\[ f(\pi/4)=1- \frac{\sqrt{2}}{2 } (\mu_1-\mu_3)>0
\implies\mu_1<\sqrt{2}.\]
%
This and the nonnegativity of $P(-4/25)$ imply $\mu_2<-0.672$. Then, we can derive a lower bound $\mu_1>1.3426$
by examining $P(0.999)\geqslant 0$.
However, with $\mu_1>1.3426$, $\mu_2<-0.672$ and $\mu_3\leqslant 0$, it is easy to observe that
\begin{equation*}
2f(\pi/3)=-\mu_1+\mu_2+2\mu_3+2< -1.3426-0.672+2=-0.0146<0,
\end{equation*}
which violates the positivity condition \eqref{multiplier-8}. Therefore, we conclude that $\mu_3>0$.
Moreover, from $\mu_1<\sqrt3, \mu_3>0$ and the nonnegativity of
\[P(- 66/625)=7.33518936\mu_1 -34.64182239\mu_2 -0.01883648\mu_3-33.09263039,\]
we infer that
\begin{equation*}
\mu_2<
\frac {7.33518936\sqrt{3}-33.09263039}{34.64182239}<-0.58852878.
\end{equation*}
Then, the nonnegativity of $P(27/125)$ yields $ \mu_1> 0.41990729$. Thus, we arrive at
\begin{equation*}
0.41990729<\mu_1<\sqrt{3},\quad-1<\mu_2<-0.58852878\quad \text{and}\quad 0<\mu_3<1.
\end{equation*}
Finally, the property $|\mu_1|+|\mu_2|+|\mu_3|>1$ is a special case
of the more general result of the next Remark.
\end{proof}
\begin{remark}[Nonexistence of Nevanlinna--Odeh multipliers for the six-step BDF method]\label{rem:mult-general}
The multiplier \eqref{mu2} is not unique. In general, the A-stability condition \eqref{A}
and the positivity condition \eqref{pos-prop} lead to the conditions
\begin{equation*}
\begin{aligned}
P(x)
={}&(-80x^5+208x^4-122x^3-82x^2+98x-22)+(40x^4-104x^3+71x^2+15x+8)\mu_1 \\
&+(20x^3-52x^2+114x-22)\mu_2-(8+59x-157x^2)\mu_3\\
& +(294x^3-66x^2-130x+22)\mu_4+(588x^4-132x^3-417x^2+103x+8)\mu_5\\
&+(1176x^5-264x^4-1128x^3+272x^2+146x-22)\mu_6\geqslant 0
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
p(x)
={}&1-x\mu_1-(2x^2-1)\mu_2-(4x^3-3x)\mu_3-(8x^4-8x^2+1)\mu_4\\
&-( 16x^5-20x^3+5x )\mu_5-( 32x^6-48x^4+18x^2-1 )\mu_6>0,
\end{aligned}
\end{equation*}
respectively, for all $x\in [-1,1]$. In Table \ref{Ta:mult-four},
we list several multipliers satisfying these conditions.
Furthermore, evaluating $P$ at $x=3/40,$ we have
\[P\big (3/40\big )< -15.1563+13.7341 \sum_{i=1}^6|\mu_i|.\]
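For the reader's convenience, at $x=3/40$ the constant term of $P$ and the coefficients of $\mu_1,\dotsc,\mu_6$
evaluate, rounded to four decimal places, to
\begin{equation*}
-15.1563,\quad 9.4818,\quad -13.7341,\quad -11.5419,\quad 12.0028,\quad 13.3423,\quad -10.0014,
\end{equation*}
respectively; $13.7341$ bounds the absolute values of all six coefficients of the $\mu_i$.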
Assuming $|\mu_1|+\dotsb +|\mu_6|\leqslant 1,$ we observe that
\[P\big (3/40\big )< -1.4222 < 0,\]
and infer that no Nevanlinna--Odeh multiplier exists for the six-step BDF method.
\end{remark}
\begin{table}[!ht]
\begin{center}
\caption{Multipliers for the six-step BDF method;
see also \eqref{mu2}.}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\mu_1$ & $\mu_2$ & $\mu_3$ & $\mu_4$ & $\mu_5$ & $\mu_6$\\
\hhline{|======|}
$1.6$ & $-0.92$ & $0.3$ & $0$ & $0$ & $0$ \\
\hhline{|------|}
$0.8235$ & $-0.855$ & $0.38$ & $0$ & $0$ & $0$ \\
\hhline{|------|}
$1.67$ & $-1$ & $0.4$ & $-0.1$ & $0$ & $0$ \\
\hhline{|------|}
$0.8$ & $-0.7$ & $0.2$ & $0.1$ & $0$ & $0$ \\
\hhline{|------|}
$1.118$ & $-1$ & $0.6$ & $-0.2$ & $0.2$ & $0$ \\
\hhline{|------|}
$0.6708$ & $-0.2$ & $-0.2$ & $0.6$ & $-0.2$ & $0$ \\
\hhline{|------|}
$0.735$ & $-0.2$ & $-0.4$ & $0.8$ & $-0.4$ & $0.2$ \\
\hhline{|------|}
\end{tabular}
\label{Ta:mult-four}
\end{center}
\end{table}
\section{Stability}\label{Se:stab}
In this section we prove stability of the six-step BDF method \eqref{ab} by the energy
technique. The result is well known; the novelty is in the simplicity of the proof,
the main advantage of the energy technique.
Proofs by other stability techniques are significantly more involved.
For a proof by a spectral technique in the case of selfadjoint operators, we refer to \cite[chapter 10]{T};
for a proof in the general case, under a sharp condition on the nonselfadjointness of the operator
as well as for nonlinear parabolic equations,
by a combination of spectral and Fourier techniques, see, e.g., \cite{A3} and references therein.
For a long-time estimate in the case of selfadjoint operators and an application to the Stokes--Darcy problem, see \cite{LiWangZhou:2020}.
For simplicity, we denote by $\langle\cdot,\cdot\rangle$ the inner product on $V,$ $\langle v, w\rangle:=(A^{1/2}v, A^{1/2}w).$
Before we proceed, for the reader's convenience, we recall the notion of the generating function of
an $n\times n$ Toeplitz matrix $T_n$ as well as an auxiliary result, the Grenander--Szeg\"o theorem.
\begin{definition}[{\cite[p.\ 13]{Chan:07}}; the generating function of a Toeplitz matrix]\label{De:gen-funct}
Consider the $n \times n$ Toeplitz matrix $T_n=(t_{ij})\in {\mathbb C}^{n,n}$ with diagonal entries $t_0,$ subdiagonal entries
$t_1,$ superdiagonal entries $t_{-1},$ and so on, and $(n,1)$ and $(1,n)$ entries
$t_{n-1}$ and $t_{1-n}$, respectively, i.e., the entries $t_{ij}=t_{i-j}, i,j=1,\dotsc,n,$ are constant along the diagonals of $T_n.$
Let $t_{-n+1},\dotsc, t_{n-1}$ be the Fourier coefficients of the trigonometric polynomial $f$, i.e.,
\begin{equation*}
t_k=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x){\rm e}^{-{\rm i} kx}\, \mathrm{d} x,\quad k=1-n,\dotsc,n-1.
\end{equation*}
Then, $f, f(x)=\sum_{k=1-n}^{n-1} t_k{\rm e}^{{\rm i} kx},$ is called the \emph{generating function} of $T_n$.
\end{definition}
If the generating function $f$ is real-valued, then the matrix $T_n$ is Hermitian; if $f$ is real-valued and even, then $T_n$ is symmetric.
Notice, in particular, that the generating function of a symmetric band Toeplitz matrix of bandwidth $2m+1,$
i.e., with $t_{m+1}=\dotsb=t_{n-1}=0,$ is a real-valued, even trigonometric polynomial, $f(x)= t_0+2t_1 \cos x+\dotsb+2t_m \cos (mx),$
for all $n\geqslant m+1.$
\begin{lemma}[{\cite[pp.\ 13--14]{Chan:07}}; the Grenander--Szeg\"o theorem]\label{Le:GS}
Let $T_n$ be a symmetric Toeplitz matrix as in Definition \ref{De:gen-funct} with generating function $f$.
Then, the smallest and largest eigenvalues $\lambda_{\min}(T_n)$ and $\lambda_{\max}(T_n)$, respectively, of $T_n$
are bounded as follows
\begin{equation*}
f_{\min} \leqslant \lambda_{\min}(T_n) \leqslant \lambda_{\max}(T_n) \leqslant f_{\max},
\end{equation*}
with $f_{\min}$ and $f_{\max}$ the minimum and maximum of $f$, respectively.
In particular, if $f_{\min}$ is positive, then the symmetric matrix $T_n$ is positive
definite.\footnote{For real-valued $f$ and $z=(z_0,\dotsc,z_{n-1})^\top\in {\mathbb C}^n,$ we have
$(T_nz,z)=\frac{1}{2\pi}\displaystyle{\int_{-\pi}^{\pi}f(x)\Big |\sum_{k=0}^{n-1} z_k{\rm e}^{{\rm i} kx}\Big |^2\, \mathrm{d} x}$
and $(z,z)=\frac{1}{2\pi}\displaystyle{\int_{-\pi}^{\pi}\Big |\sum_{k=0}^{n-1} z_k{\rm e}^{{\rm i} kx}\Big |^2\, \mathrm{d} x}$,
and the result is evident.}
\end{lemma}
\begin{theorem}[Stability of the six-step BDF method]\label{Th:stab}
The six-step BDF method \eqref{ab} is stable in the sense that
\begin{equation}
\label{stab-abg2}
|u^n|^2 + \tau \sum_{\ell=6}^n \|u^\ell\|^2
\leqslant C\sum_{j=0}^5 \big (|u^j|^2+\tau \|u^j\|^2\big ),\quad n=6,\dotsc,N,
\end{equation}
with a constant $C$ independent of $\tau$ and $n$.
\end{theorem}
\begin{proof}
Taking in \eqref{ab} the inner product with
$u^{n+6}-\frac {13}9u^{n+5}+\frac {25}{36}u^{n+4}-\frac 19u^{n+3}$, cf.\ \eqref{ab-energy}, we have
\begin{equation}
\label{abg31}
\Big (\sum\limits^6_{i=0}\alpha_i u^{n+i},u^{n+6}-\sum_{j=1}^3\mu_j u^{n+6-j}\Big )+\tau I_{n+6}=0
\end{equation}
with
\begin{equation}\label{2.a7}
I_{n+6}:=\Big \langle u^{n+6},u^{n+6}-\sum_{j=1}^3\mu_j u^{n+6-j}\Big \rangle.
\end{equation}
With the notation $\mathcal{U}^n:=(u^{n-5},u^{n-4},u^{n-3},u^{n-2},u^{n-1},u^{n})^\top$ and the norm $|\mathcal{U}^n|_G$ given by
%
\begin{equation*}
|\mathcal{U}^n|_G^2=\sum_{i,j=1}^6g_{ij}\left(u^{n-6+i},u^{n-6+j}\right),
\end{equation*}
using \eqref{G}, we have
\begin{equation}
\Big (\sum\limits^6_{i=0}\alpha_i u^{n+i},u^{n+6}-\sum_{j=1}^3\mu_j u^{n+6-j}\Big )\geqslant |\mathcal{U}^{n+6}|_G^2-|\mathcal{U}^{n+5}|_G^2.
\end{equation}
Thus, \eqref{abg31} yields
\begin{equation}\label{2.87}
|\mathcal{U}^{n+6}|_G^2-|\mathcal{U}^{n+5}|_G^2+\tau I_{n+6}\leqslant 0.
\end{equation}
Summing in \eqref{2.87} from $n=0$ to $n=m-6$, we obtain
\begin{equation}\label{2.88}
|\mathcal{U}^{m}|_G^2-|\mathcal{U}^{5}|_G^2+\tau \sum_{n=6}^mI_n\leqslant 0.
\end{equation}
It remains to estimate the sum $\sum_{n=6}^mI_n$ from below; we have
%
\begin{equation}
\label{abg36}
\sum_{n=6}^mI_n=\sum_{n=6}^m\Big \langle u^n,u^n-\sum_{j=1}^3\mu_j u^{n-j}\Big \rangle.
\end{equation}
First, motivated by the positivity of the function $f$ of \eqref{f}, to take advantage of the
positivity property \eqref{pos-prop}, we introduce the notation $\mu_0:=-31/32,$
and rewrite \eqref{abg36} as
\begin{equation}
\label{abg37}
\sum_{n=6}^mI_n=\frac 1{32}\sum_{n=6}^m\|u^n\|^2+J_m\ \text{ with }\
J_m:=-\sum_{j=0}^3\mu_j \sum_{i=1}^{m-5}\langle u^{5+i},u^{5+i-j} \rangle.
\end{equation}
Our next task is to rewrite $J_m$ in a form that will enable us to estimate
it from below in a desired way.
To this end, we introduce the lower triangular Toeplitz matrix $L=(\ell_{ij})\in {\mathbb R}^{m-5,m-5}$ with entries
\[\ell_{i,i-j}=-\mu_j, \quad j=0,1,2,3, \quad i=j+1,\dotsc,m-5,\]
and all other entries equal zero.
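For instance, for $m-5=5,$ the matrix $L$ takes the form
\begin{equation*}
L=\begin{pmatrix}
-\mu_0 & 0 & 0 & 0 & 0\\
-\mu_1 & -\mu_0 & 0 & 0 & 0\\
-\mu_2 & -\mu_1 & -\mu_0 & 0 & 0\\
-\mu_3 & -\mu_2 & -\mu_1 & -\mu_0 & 0\\
0 & -\mu_3 & -\mu_2 & -\mu_1 & -\mu_0
\end{pmatrix}.
\end{equation*}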
With this notation, we have
\[\sum_{i,j=1}^{m-5}\ell_{ij} \langle u^{5+i},u^{5+j} \rangle=-\sum_{j=0}^3\mu_j \sum_{i=j+1}^{m-5}\langle u^{5+i},u^{5+i-j} \rangle,\]
i.e.,
\begin{equation}
\label{abg38}
\sum_{i,j=1}^{m-5}\ell_{ij} \langle u^{5+i},u^{5+j} \rangle=J_m+\langle u^6,\mu_1u^5+\mu_2u^4+\mu_3u^3 \rangle
+\langle u^7,\mu_2u^5\!+\!\mu_3u^4 \rangle+\langle u^8,\mu_3u^5 \rangle.
\end{equation}
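Note that the symmetric part $(L+L^\top)/2$ of $L$ is a symmetric band Toeplitz matrix with diagonal entries
$-\mu_0$ and entries $-\mu_j/2$ on the $j$th sub- and superdiagonals, $j=1,2,3;$ according to
Definition \ref{De:gen-funct}, its generating function is the real-valued, even trigonometric polynomial
\begin{equation*}
-\mu_0-\mu_1\cos x-\mu_2\cos (2x)-\mu_3\cos (3x),
\end{equation*}
which is the function $f$ of \eqref{f}.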
Now, in view of the positivity of the generating function $f,$ see \eqref{f}, of the symmetric part
$L_s:=(L+L^\top)/2$ of the matrix $L,$ the Grenander--Szeg\"o theorem, see Lemma \ref{Le:GS},
ensures positive definiteness of $L_s,$ and thus also of $L$ itself, since $(Lx,x)=(L_sx,x)$ for $x\in {\mathbb R}^{m-5}.$
Therefore, the expression on the left-hand side of \eqref{abg38} is nonnegative; hence,
\eqref{abg38} yields the desired estimate for $J_m$ from below,
i.e.,
\begin{equation}
\label{abg39}
J_m\geqslant -\langle u^6,\mu_1u^5+\mu_2u^4+\mu_3u^3 \rangle
-\langle u^7,\mu_2u^5\!+\!\mu_3u^4 \rangle-\langle u^8,\mu_3u^5 \rangle.
\end{equation}
From \eqref{2.88}, \eqref{abg37} and \eqref{abg39}, we obtain
\begin{equation}\label{abg40}
\begin{aligned}
|\mathcal{U}^{m}|_G^2+\frac 1{32}\sum_{n=6}^m\|u^n\|^2&\leqslant
|\mathcal{U}^{5}|_G^2 +\tau\langle u^6,\mu_1u^5+\mu_2u^4+\mu_3u^3 \rangle\\
&+\tau \langle u^7,\mu_2u^5+\mu_3u^4 \rangle+\tau \langle u^8,\mu_3u^5 \rangle.
\end{aligned}
\end{equation}
Now, with $c_1$ and $c_2$ the smallest and largest eigenvalues of the matrix $G,$ we have
\[ |\mathcal{U}^{m}|_G^2\geqslant c_1|u^m|^2\quad\text{and}\quad |\mathcal{U}^{5}|_G^2\leqslant
c_2\sum_{j=0}^5 |u^j|^2;\]
furthermore, the terms $|\langle u^i,u^j \rangle|$ with $i>j$ can be estimated in the form
$|\langle u^i,u^j \rangle|\leqslant \varepsilon \|u^i\|^2+\|u^j\|^2/(4\varepsilon)$
with $\varepsilon<1/32.$ This leads then to the desired stability estimate
\eqref{stab-abg2}.
Let us also note that, due to the fact that $\mu_4=\mu_5=\mu_6=0,$
the terms $\|u^2\|^2, \|u^1\|^2$ and $\|u^0\|^2$ are actually not needed
on the right-hand side of \eqref{stab-abg2}.
\end{proof}
\subsection{Time-dependent operators}\label{SSe:t-dep}
In this section we use a perturbation argument to extend the stability result to the case
of time-dependent selfadjoint operators $A (t) : V \to V', t\in [0,T].$
We fix an $s\in [0,T]$ and define the norm on $V$ in terms of $A (s), \|v\|:=|A (s)^{1/2}v|.$
Our structural assumptions are that all operators $A(t), t\in [0,T],$ share the same domain,
produce equivalent norms on $V,$
\[|A(t)^{1/2}v|\leqslant c |A(\tilde t)^{1/2}v|\quad \forall t,\tilde t\in [0,T]\ \, \forall v\in V,\]
and $A(t): V\to V'$ is of bounded variation with respect to $t,$
\begin{equation}
\label{BV-A}
\|\big (A(t)- A(\tilde t)\big )v\|_\star \leqslant [\sigma(t)-\sigma(\tilde t) ] \|v\|,\quad
0\leqslant \tilde t\leqslant t\leqslant T, \quad \forall v\in V,
\end{equation}
with an increasing function $\sigma : [0,T]\to {\mathbb R}.$
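A simple sufficient condition for \eqref{BV-A} is Lipschitz continuity of $A$ in time,
\begin{equation*}
\|\big (A(t)- A(\tilde t)\big )v\|_\star \leqslant L (t-\tilde t) \|v\|,\quad
0\leqslant \tilde t\leqslant t\leqslant T, \quad \forall v\in V,
\end{equation*}
in which case \eqref{BV-A} holds with the increasing function $\sigma(t)=Lt.$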
First, for given perturbation terms $v^6,\dotsc,v^N\in V',$ we let $u^6,\dotsc,u^N$
satisfy the perturbed six-step BDF method
\begin{equation}
\label{ab-v}
\sum_{i=0}^6 \alpha_i u^{n+i}+\tau A u^{n+6}=\tau v^{n+6},\quad n=0,\dotsc,N-6,
\end{equation}
i.e., the scheme \eqref{ab} for $q=6$ with perturbed right-hand side,
assuming that starting approximations $u^0, \dotsc, u^5$ are given.
Then, it is easily seen that we have the following stability result
\begin{equation}
\label{stab-v}
|u^n|^2 + \tau \sum_{\ell=6}^n \|u^\ell\|^2
\leqslant C\sum_{j=0}^5 \big (|u^j|^2+\tau \|u^j\|^2\big ) + C\tau \sum_{\ell=6}^n \|v^\ell\|^2_\star,\quad n=6,\dotsc,N,
\end{equation}
with a constant $C$ independent of $\tau$ and $n$. Indeed, the terms that are due to the perturbation $v^{n+6},$
namely, $\big ( v^{n+6}, u^{n+6}-\sum_{j=1}^3\mu_j u^{n+6-j}\big )$,
can be easily estimated in the form
\[\big ( v^{n+6}, u^{n+6}-\sum_{j=1}^3\mu_j u^{n+6-j}\big ) \leqslant C_\varepsilon \|v^{n+6}\|^2_\star+\varepsilon \sum_{j=1}^3|\mu_j|\, \|u^{n+6-j}\|^2,\]
with sufficiently small $\varepsilon$ such that the terms involving $\|u^i\|^2$ can be absorbed in the corresponding
sum on the left-hand side.
We shall next use \eqref{stab-v} to extend the stability result \eqref{stab-abg2} to the case of
time-dependent operators. Before that, let us note that with $v^\ell=f(t^\ell),$ \eqref{stab-v} is a stability
result for the inhomogeneous equation $u' (t) + Au(t) = f(t)$ with respect to both the starting approximations
and the forcing term. Furthermore, with $v^\ell$ the consistency error of the method,
i.e., the amount by which the exact solution $u$ misses satisfying the numerical method \eqref{ab},
with $q=6,$
\begin{equation}
\label{ab-v-cons}
\tau v^{n+6}=\sum_{i=0}^6 \alpha_i u(t^{n+i})+\tau A u(t^{n+6})-\tau f(t^{n+6}),\quad n=0,\dotsc,N-6,
\end{equation}
the error $e^\ell:=u(t^\ell)-u^\ell, \ell=0,\dotsc,N,$ satisfies \eqref{ab-v}. In this case, the stability
result \eqref{stab-v}, in combination with the trivial estimate of the consistency error,
leads to optimal order error estimates.
Now, the six-step BDF method for the initial value problem \eqref{ivp} with time-dependent
operator $A(t)$ is
\begin{equation}
\label{ab-time}
\sum_{i=0}^6 \alpha_i u^{n+i}+\tau A(t^{n+6}) u^{n+6}=0,\quad n=0,\dotsc,N-6,
\end{equation}
assuming that starting approximations $u^0, \dotsc, u^5$ are given.
Let us now fix an $m$ with $6\leqslant m\leqslant N.$ From \eqref{ab-time}, we obtain
\begin{equation}
\label{ab-time2}
\sum_{i=0}^6 \alpha_i u^{n+i}+\tau A(t^m) u^{n+6}=\tau \big [A(t^m)-A(t^{n+6})\big ] u^{n+6},\quad n=0,\dotsc,m-6.
\end{equation}
Since the time $t$ is frozen at $t^m$ in the operator $A(t^m)$ on the left-hand side,
we can apply the already-established stability estimate \eqref{stab-v}
with perturbation terms $v^\ell:= [A(t^m)-A(t^\ell) ] u^\ell$ and obtain
\begin{equation}
\label{ab-time3}
|u^m|^2 + \tau \sum_{\ell=6}^m \|u^\ell\|^2
\leqslant C\sum_{j=0}^5\big (|u^j|^2+\tau \|u^j\|^2\big ) + C M^m
\end{equation}
with a constant $C$ independent of $\tau$ and $m$, and
%
\begin{equation}
\label{ab-time4}
M^m:=\tau\sum_{\ell=6}^m \| [A(t^m)-A(t^\ell) ] u^\ell\|^2_\star.
\end{equation}
Now, with
\[E^\ell:=\tau \sum_{j=6}^\ell \|u^j\|^2,\quad \ell=6,\dotsc,m,\quad E^5:=0,\]
estimate \eqref{ab-time3} yields
\begin{equation}
\label{ab-time5}
E^m \leqslant C\sum_{j=0}^5 \big (|u^j|^2+\tau \|u^j\|^2\big ) + C M^m.
\end{equation}
Furthermore, in view of the bounded variation condition \eqref{BV-A},
\[M^m\leqslant \tau\sum\limits^{m-1}_{\ell=6}\big [\sigma(t^m)-\sigma(t^\ell)\big ]^2\|u^\ell\|^2
=\sum\limits^{m-1}_{\ell=6}\big [\sigma(t^m)-\sigma(t^\ell)\big ]^2(E^\ell-E^{\ell-1}),\]
whence, by summation by parts, we have
\begin{equation}
\label{ab-time6}
M^m\leqslant \sum\limits^{m-1}_{\ell=6}a_\ell E^\ell,
\end{equation}
with $a_\ell:=\big [\sigma(t^m)-\sigma(t^\ell)\big ]^2-\big [\sigma(t^m)-\sigma(t^{\ell+1})\big ]^2,$
and \eqref{ab-time5} yields
\begin{equation}
\label{ab-time7}
E^m \leqslant C\sum_{j=0}^5\big (|u^j|^2+\tau \|u^j\|^2\big )+C\sum\limits^{m-1}_{\ell=6}a_\ell E^\ell.
\end{equation}
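The summation by parts leading to \eqref{ab-time6} is the elementary identity, with
$b_\ell:=\big [\sigma(t^m)-\sigma(t^\ell)\big ]^2,$
\begin{equation*}
\sum\limits^{m-1}_{\ell=6}b_\ell \big (E^\ell-E^{\ell-1}\big )=\sum\limits^{m-1}_{\ell=6}\big (b_\ell -b_{\ell+1}\big )E^\ell
=\sum\limits^{m-1}_{\ell=6}a_\ell E^\ell,
\end{equation*}
which holds here since $E^5=0$ and $b_m=0.$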
Since the sum $\sum_{\ell=6}^{m-1} a_\ell$ is uniformly bounded by a constant independent
of $m$ and the time step $\tau,$
\[\sum_{\ell=6}^{m-1} a_\ell=\big [\sigma(t^m)-\sigma(t^6)\big ]^2\leqslant \big [\sigma(T)-\sigma(0)\big ]^2,\]
a discrete Gronwall-type argument applied to \eqref{ab-time7}
leads to
\begin{equation}
\label{ab-time8}
E^m\leqslant C\sum\limits^5_{j=0} \big (|u^j|^2+\tau \|u^j\|^2\big ) .
\end{equation}
Combining \eqref{ab-time3} with \eqref{ab-time6} and \eqref{ab-time8},
we obtain the desired stability estimate \eqref{stab-abg2} for the case
of time-dependent operators.
\bibliographystyle{amsplain}
\section{Introduction}
\IEEEPARstart{P}{ower} line inspection for uninterrupted supply has become an important topic due to the increasing dependency of modern-day societies on electricity.
A power line is composed of several types of components with different functions, including insulators, towers, conductors and fittings.
Due to outdoor exposure to complex terrain and volatile weather, power line components are frequently damaged.
One faulty component (e.g., a conductor fault), or more generally a combination of multiple damaged components (e.g., fitting faults), can cause a power outage.
Once the power lines in one region malfunction, supra-regional blackouts may follow, and even catastrophic accidents such as forest fires may occur \cite{wang2019danerous}.
The objective of power line inspection is to check the condition of the power line components;
the inspection result then serves as a guide for power companies to decide which components should be maintained or replaced.
A fast and accurate inspection can greatly increase the efficiency of maintenance decision-making, and further reduce the possibility of power line failures, which guarantees a safe and reliable power supply \cite{qin2017fault_causes}.
However, power line inspection faces several challenges, such as the extensive coverage area, the large number of components, and complex natural environments.
Traditional inspection methods include manual ground surveys and helicopter-assisted patrols, which have been used for decades \cite{liu2019ground_aerial_inspection}.
Both methods inspect the power lines by human visual observation, which is costly, risky, inefficient, and time-consuming \cite{matikainen2016review_remote}.
In recent years, the development of Unmanned Aerial Vehicle (UAV) and digital imaging technologies has provided a new platform for power line inspection \cite{shakhatreh2019uav_review}.
The UAV inspection method separates the traditional inspection into two parts: data collection and data analysis.
An inspector remotely operates the UAV to collect images of the inspection targets, and the captured images or videos are then sent to workers with professional skills for data analysis.
Owing to its advantages of low cost, high safety and high efficiency, deploying UAV inspection to replace traditional manual-labor-based methods has been tried extensively.
UAV inspection greatly reduces the workload of inspectors and improves the efficiency of power line inspection, but it also produces massive amounts of data.
In addition, these images and videos are usually analyzed manually, a time-consuming approach that is expensive, potentially dangerous and not sufficiently accurate \cite{martinez2018big_data}.
In the past few years, many researchers have sought to develop fast and accurate analysis methods to automatically recognize the condition of power lines in aerial images \cite{nguyen2018review_dl_inspection}.
These studies cover a wide range of power line components and their faults with various image processing technologies.
Moreover, most of them are task-specific, focusing on one particular component or fault.
The main objective of this paper is to present the state of the art in vision-based inspection of power line components in the research literature, and to provide a taxonomy that gives readers an accessible understanding of the similarities and differences between a wide variety of studies.
We aim to offer an overview of the possibilities and challenges provided by modern computer vision technology from the perspective of inspection data analysis, and to discuss the potential and limitations of different analysis methods.
Note that visible images captured from UAVs are the most commonly used data in power line inspection due to their low cost, intuitive visual content and detailed information.
Therefore, in this review paper, we only consider analysis methods for visible images; works on other data sources and on the data collection procedure are not included.
In this paper, we first provide related works on vision-based power line inspection from the perspective of data analysis.
A bibliometric analysis of the literature, relevant review articles, publicly available datasets, and the taxonomy used in this paper are included.
Next, we introduce several basic concepts in power line inspection.
These concepts cover inspection methods and data sources, with special attention paid to UAV inspection and visible images, and the main components with their roles and common faults.
These research articles mainly published in the past five years, are summarized into two categories including component detection and fault diagnosis.
The main ideas of the analysis method, description of the dataset, and some representative quality analysis results are presented to understand the capabilities of various analytic approaches in different applications.
Based on that, we propose an in-depth discussion of deep-learning-related methods in the researches reviewed above.
A brief introduction of fundamental deep learning technologies, the exploration of analysis methods related to deep learning, and a basic conception of inspection data analysis system using several alternative image processing approaches are presented.
Finally, we discuss open research questions for future research directions.
The remainder of this paper is organized as follows.
Section II provides the related works.
Section III offers a brief introduction of power lines inspection including inspection method, data source, and power line components with their common faults.
Section IV conducts the survey on inspection data analysis from the perspective of component detection and fault diagnosis.
Section V presents the in-depth discussion of the analysis methods reviewed in Section IV that are deep-learning-related.
Section VI discusses the open research issues.
Section VII draws the conclusion.
\section{Related works}
\subsection{Bibliometric Analysis}
In order to provide an overview of the existing research in vision-based inspection of power lines, a bibliometric analysis was conducted on 9 December 2019 using the acknowledged database Google Scholar.
The query for Google Scholar is as follows: power AND (visual OR image* OR vision) AND (aerial OR UAV* OR overhead) AND "power line *".
To further screen out research related to deep learning, an extended query was applied: power AND (visual OR image* OR vision) AND (aerial OR UAV* OR overhead) AND "power line*" AND "deep learning".
\mbox{Fig. \ref{fig:num_pub_w_DL}} illustrates the number of publications indexed by Google Scholar from 2009 to 2019.
In total, 477 research articles were found, including 84 publications related to deep learning.
Before 2015, the number of total publications was at a relatively low and stable level.
Since 2016, research in vision-based power lines inspection has increased yearly, reaching 92 publications in 2019.
As early as 2013, one publication mentioned "deep learning", but deep learning technology was not actually applied until 2016.
Deep-learning-based publications have also increased year by year since 2016, reaching 37 in 2019.
This result should come as no surprise, since aerial inspection has only been widely applied by power companies in recent years, with the development of UAV and deep learning technologies.
It takes time for power companies to collect inspection data and for researchers to design and evaluate their methods in a specific real-world application.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{num_pub_w_DL.jpg}
\caption{Number of publications indexed by Google Scholar.}\label{fig:num_pub_w_DL}
\end{figure}
\subsection{Relevant Review Articles}
Several review articles related to power line inspection have been published in the past decade.
Some of them focused on inspection platforms.
Katrasnik et al. \cite{katrasnik2009survey_robots} presented the achievements in power line inspection by mobile robots including flying robots and climbing robots.
Toussaint et al. \cite{toussaint2009review_robots} conducted a review of power line inspection and maintenance which focused on climbing robots designed to cross obstacles.
Tong et al. \cite{Tong2010review_helicopter} summarized the image processing based applications in power line inspection by helicopter.
A few review articles discussed the specific application of power line inspection, which focused on one kind of the component or fault.
Ahmad et al. \cite{ahmad2013review_vegetation} proposed a review of the advantages and limitations of vegetation encroachment monitoring of power lines.
Prasad et al. \cite{prasad2016review_insulator} discussed the vision-based techniques for insulator monitoring of power lines.
With the development of sensor technique, a number of remotely sensed data sources were applied in power line inspection.
Mirall{\`e}s et al. \cite{miralles2014review_cv_inspection} conducted a review of several vision-based applications in the management of power lines with respect to different vision sensors.
Matikainen et al. \cite{matikainen2016review_remote} presented a remote sensing-based survey of power lines and their surroundings in research literature.
A wide range of data sources was discussed from coarse satellite images to detailed visible images.
Deep learning has achieved great success in computer vision since 2012, but deep-learning-based applications in power lines inspection were not reported until 2016.
Nguyen et al. \cite{nguyen2018review_dl_inspection} conducted a literature review of automatic vision-based inspection of power lines which aimed to discuss the role and possibilities of deep learning techniques.
They summarized the existing research on vision-based power line inspection from the perspectives of UAV navigation and inspection tasks.
However, the research reviewed in \cite{nguyen2018review_dl_inspection} is mostly from before 2018, when deep learning was rarely applied to power lines inspection.
Hence, they proposed a potential concept of an automatic power line inspection system based on deep learning rather than reviewing the research articles.
The review papers mentioned above summarized the research on power lines inspection from different aspects including inspection platforms \cite{katrasnik2009survey_robots, toussaint2009review_robots,Tong2010review_helicopter}, specific inspection applications \cite{ahmad2013review_vegetation, prasad2016review_insulator}, inspection data sources \cite{miralles2014review_cv_inspection, matikainen2016review_remote} and automatic inspection systems \cite{nguyen2018review_dl_inspection}.
Our paper differs from the above reviews, especially reference \cite{nguyen2018review_dl_inspection}, by focusing only on the component inspection task of power lines rather than also covering data collection.
Special attention is paid to visible image analysis based on deep learning.
In addition, an in-depth exploration of the analysis methods aimed at component detection and fault diagnosis is provided.
Besides that, after years of development, more novel methods have been proposed and more challenges have been defined.
The existing reviews, published prior to the recent striking successes, are not as up-to-date as this paper.
We put more emphasis on the research of the past five years, while typical works that were published earlier are also included.
\subsection{Publicly Available Datasets}
Due to the confidentiality of the inspection data of power lines, most power companies are hesitant to make their data publicly available.
This results in research challenges such as data insufficiency and a missing evaluation baseline.
Nevertheless, there are several datasets, released by individual researchers, that have been made public over the past few years.
Here, we summarize these datasets with brief descriptions and provide their websites (see \mbox{Table. \ref{tab:open_dataset}}).
\begin{itemize}
\item
\textbf{Insulator dataset in reference \cite{tao2018ILN_DDN_ins}}:
The dataset consists of 848 aerial images in which the main object is the insulator in power lines.
In total, 600 of them are captured in the real world and labeled with insulators.
The remaining images are synthesized by hand and labeled with an insulator fault, in particular the missing-cap fault.
\item
\textbf{Tower dataset in reference \cite{bian2019Fst_tower}}:
There are 1300 images in this dataset, and the major object is the electrical tower.
Most of the images are collected from inspection videos and the internet.
Various kinds of tower with different backgrounds are included.
\item
\textbf{Conductor dataset in reference \cite{lee2017weakly_line}}:
This dataset contains a total of 8400 images collected in equal quantity from visible and infrared cameras.
To achieve multi-scale recognition, images with close and far scenes are included.
In addition, the dataset is separated into two sub-sets with different annotations for weakly supervised learning.
Sub-set 1 is labeled with image-level annotations and contains 8000 images, while the other is labeled with pixel-level annotations.
\end{itemize}
\begin{table*}[ht]
\centering
\caption{Basic information of several open inspection datasets}
\label{tab:open_dataset}
\renewcommand\arraystretch{1.2}
\begin{tabular}{ c l c m{7cm}}
\hline
\textbf{Dataset} & \textbf{~~~~~~~~~~~~~~~Brief Description} & \textbf{Quantity} & \textbf{~~~~~~~~~~~~~~~~~~~~~~~~~~~Website} \\
\hline
Insulator \cite{tao2018ILN_DDN_ins} & \tabincell{l}{Real-world images labeled with insulator \\ Synthetic images labeled with defect (missing-cap)} & 848 & \url{https://github.com/InsulatorData/InsulatorDataSet} \\
\hline
Tower \cite{bian2019Fst_tower} & \tabincell{l}{Collected from internet and inspection videos \\ Various types of towers and backgrounds} & 1300 & \url{https://drive.google.com/drive/folders/1UyP0fBNUqFeoW5nmPVGzyFG5IQZcqlc5} \\
\hline
Conductor \cite{lee2017weakly_line} & \tabincell{l}{Captured by visible and infrared cameras \\ Sub-set1 labeled with image level annotations \\
Sub-set2 labeled with pixel level annotations} & 8400 & \tabincell{l}{Dataset1:\url{https://data.mendeley.com/datasets/n6wrv4ry6v/3} \\ Dataset2:\url{https://data.mendeley.com/datasets/twxp8xccsw/6}} \\
\hline
\end{tabular}
\end{table*}
\subsection{Taxonomy}
The purpose of component inspection is to identify the condition of power lines and use it as the basis for maintenance decision-making.
\mbox{Fig. \ref{fig:inspection_system}} depicts a fundamental power lines inspection system based on UAVs.
The UAV captures images of power line components and then sends them to the ground monitoring center by wireless communication for further analysis.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{inspection_system.jpg}
\caption{Basic inspection system in power lines.}\label{fig:inspection_system}
\end{figure}
According to the content of the captured aerial images, the main inspection items can be taxonomically classified into four categories: insulator, tower, conductor and fitting.
In addition, each kind of component has several common fault types.
The detailed taxonomy of the researches reviewed in this paper is illustrated in \mbox{Fig. \ref{fig:taxonomy}}.
The analysis methods for inspection images are classified into component detection and fault diagnosis according to their research objectives.
Detection of power line components belongs to the object detection task.
In this kind of research, several image features including color, shape, texture and deep features are utilized to locate and classify the component.
Another kind focuses on the diagnosis of faults belonging to components.
Due to the diversity and data scarcity of component faults, the fault diagnosis methods differ considerably between faults in terms of analytic procedure, applied approach and research popularity.
Therefore, the review of such studies is fault-specific.
Finally, based on these analysis methods, an in-depth discussion of the deep-learning-related research is provided.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{taxonomy.jpg}
\caption{Taxonomy for inspection data analysis in power lines.}\label{fig:taxonomy}
\end{figure}
\section{A brief introduction of power lines inspection}
In this section, we first introduce the typical inspection methods (the ways to inspect power lines), with special attention paid to UAV inspection.
Then, we summarize the data sources that have been applied in power lines inspection and point out the reasons why visible images are the most widely used.
Finally, we survey the main components and their common faults in power lines, highlighting their function, appearance, and potential fault causes.
\subsection{Inspection method}
Conventional power lines inspection methods involve ground inspection and airspace inspection.
Both methods typically identify the condition of power lines by visual observation \cite{liu2019ground_aerial_inspection}.
The ground inspection is conducted by a team traveling along the power line corridor on foot or by off-road vehicle \cite{aracil2002telerobotic}.
During this procedure, inspectors visually inspect the power lines by using observation tools such as binoculars, infrared cameras and corona detection cameras.
Although ground inspection has been widely applied for decades due to its high accuracy, problems including labour intensity, low efficiency, and extremely complex landforms and weather have caused it to be gradually replaced by airspace inspection.
The airspace inspection is typically performed by a climbing system or an aerial system.
The former applies a mobile robot to cross obstacles found on power lines and inspects the passing components along the line \cite{fan2018climbing_inspection}.
Climbing robots can obtain high quality images due to their proximity to the conductors.
However, several disadvantages limit the application of the climbing system, including damage to lines, low efficiency, incomplete inspection, and obstruction by obstacles.
The aerial system inspects power lines based on aerial vehicles such as helicopter, multi-rotor UAV and fixed wing UAV \cite{deng2014aerial_platform}.
The aerial vehicle travels along the power line, either controlled by a human operator or flying automatically.
During this procedure, multiple sensors on-board the aerial vehicle are utilized for visual observation and data collection.
Several advantages of the aerial system make it a routine inspection method:
1) Access to hard-to-reach locations, which means high flexibility in data acquisition.
2) The capability of carrying multiple sensing devices for inspection.
3) The ability to address the problems of low efficiency and damage to lines.
Among aerial inspection systems, multi-rotor UAV inspection offers a further level of superiority over other inspection methods \cite{menendez2019position}.
The reasons are as follows:
The multi-rotor UAV can fly relatively close to power lines to capture detailed images of power line components.
In addition, it is much cheaper than other aerial vehicles with low operation cost.
Therefore, power lines inspection based on multi-rotor UAV has become the mainstream inspection method.
\subsection{Data sources}
The inspection data acquired by different inspection methods should be analyzed by humans or computers to identify the condition of power lines.
Different types of data (or different data sources) have different data analysis methods.
Hence, it is important to determine the data source in a power lines inspection system.
In this paper, we summarize the data sources into two main categories: image data and non-image data.
The non-image data mainly refers to the airborne laser scanner (ALS) data which is also known as georeferenced point cloud data \cite{chen2018LiDAR_inspection}.
It can generate detailed 3D data with the coordinate information of objects and has been applied in the mapping and 3D reconstruction of the power line corridor.
Besides that, text data such as inspection records and flight logs also belong to the non-image category, but practical applications based on them are rare.
Image data is the major data source in power lines inspection because most conditions of the power lines can be identified through visual observation.
The image data mainly includes visible images \cite{jenssen2019ssd_multi}, infrared images \cite{zhao2016infrard_ins}, ultraviolet images \cite{chen2019ultraviolet_ring}, synthetic aperture radar images \cite{wang2019radar_inspection}, and optical satellite images\cite{michalski2019satellite_tower}.
The infrared image reflects the temperature of objects that can be applied to detect the abnormal heat.
The ultraviolet image is typically used to detect corona discharges of the power lines.
The synthetic aperture radar image and optical satellite image provide large-area coverage that have been used in vegetation monitoring near the power lines.
Among data sources belonging to image data, the visible image is the most widely used data source in power lines inspection due to the following advantages:
1) The vast majority of the faults have visible characteristics and can be diagnosed by visual observation.
2) The visible image matches the intuitive observation habits of humans.
3) The acquisition of visible images is flexible, low-cost and high-quality, benefiting from well-developed visible cameras and aerial photography technology.
\subsection{Power line components and their common faults}
The inspection of power line components is a fundamental task and among the most popular research topics in the field of power lines inspection.
The objective of this task is to identify the condition of these components and check for faults that should be maintained.
There are many types of components including tower, conductor and accessories (e.g., insulator and fitting) attached to them, and each component type has various faults.
In this paper, we summarize the power line components into four categories including insulator, tower, conductor and fitting \cite{lan2018rcnn_multi}.
\subsubsection{Insulator}
The insulator is an essential component with the dual function of electrical insulation and mechanical support in power lines.
As can be seen in \mbox{Fig. \ref{fig:component_samples} (a)}, the insulator has a repetitive geometric structure with stacked caps.
Depending on the voltage level and nearby environment of the power line, the appearance of insulators is different in color, size and string number (e.g., single string and double strings).
Due to the outdoor working environment, insulators are exposed to the weather, especially lightning strikes and icing, which can make them malfunction.
The common faults of insulators are missing-cap and surface fault.
The missing-cap refers to one or more caps falling off the insulator that can be seen in \mbox{Fig. \ref{fig:fault_samples} (a)}.
Surface faults, which occur on the surface of insulator caps and reduce the insulation ability, include flashover (see \mbox{Fig. \ref{fig:fault_samples} (b)}), icing and pollution.
\subsubsection{Tower}
The role of towers is to support power lines for maintaining the safety distance between conductors and the ground.
There are two forms of tower appearance, lattice-like structure and pole-like structure, which can be seen in \mbox{Fig. \ref{fig:component_samples} (b)}.
Generally, the former is made of lattice steel with a metallic surface, while the latter is constructed from reinforced concrete.
Two common faults of towers that should be taken into consideration in the inspection are corrosion and bird's nests.
As can be seen in \mbox{Fig. \ref{fig:fault_samples} (c)}, the corrosion (also known as deterioration) occurs on the surface of tower materials that would shorten the service life of towers.
Bird encroachment is another tower fault threatening the safety of power lines, which can be seen in \mbox{Fig. \ref{fig:fault_samples} (d)}.
Birds nesting on towers can affect a tower's insulation performance and cause trip accidents.
\subsubsection{Conductor}
Conductors, generally made of copper or aluminum, are utilized to transport electrical energy.
Depending on the photography distance, conductors have different appearances in aerial images, as shown in \mbox{Fig. \ref{fig:component_samples} (c)}.
At long distances, the conductors can be treated as slender parallel lines.
When the camera is close to the conductors, they present the appearance of spiral strips.
The conductor faults that occur frequently are vegetation encroachment, broken strands and foreign bodies.
Power lines cover a wide area and sometimes cross forests, where nearby growing trees may touch the conductors and cause short-circuit accidents.
An example of vegetation encroachment is shown in \mbox{Fig. \ref{fig:fault_samples} (e)}.
The broken strand is generally caused by conductor galloping and heating that can be seen in \mbox{Fig. \ref{fig:fault_samples} (f)}.
Foreign bodies such as kites, balloons and plastic bags blown onto conductors by the wind threaten the safety of the power system.
\subsubsection{Fitting}
The role of fittings is to reinforce and protect other components such as insulators and conductors.
Due to the variety of fittings, the category of fitting has several subclasses including damper, clamp, arcing ring, spacer, and fastener, which are shown in \mbox{Fig. \ref{fig:component_samples} (d)}.
With the increasing service life of power lines, some fittings fail, causing other components to loosen or even fall off.
Broken fitting is a common fault in which fittings show signs of corrosion, wear, cracking, and loosening.
\mbox{Fig. \ref{fig:fault_samples} (g)} shows a broken damper with half of its body missing.
Fasteners are widely used fittings in power lines for mechanical reinforcement which are composed of bolt, nut and pin.
Missing pin is another common fault of fittings, which can be seen in \mbox{Fig. \ref{fig:fault_samples} (h)}.
The left one is a normal fastener, while the right fastener in the red bounding box has lost its pin.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{components.jpg}
\caption{Samples of power line components.}\label{fig:component_samples}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{fault_samples.jpg}
\caption{Samples of common faults in power lines.}\label{fig:fault_samples}
\end{figure}
\section{Literature review of data analysis in power lines inspection}
In this section, the works on inspection data (mostly visible images) analysis are reviewed from two perspectives.
The first is component detection.
It is important not only for further fault identification, but also for other practical applications such as UAV navigation, resource management, and video tracking.
Research on component detection is divided into five groups according to the image features used: color, shape, texture, fusion and deep.
The second is fault diagnosis which is equally important for determining the condition of power lines.
The works on fault diagnosis are summarized from the perspective of different fault types including surface fault of insulator, missing-cap of insulator, corrosion of tower, bird's nest, broken strand of conductor, foreign body, vegetation encroachment, broken fitting, and missing pin of fitting.
To elaborate on the characteristics of the literature reviewed in this section, two tables (\mbox{Table. \ref{tab:summary_obj}} and \mbox{Table. \ref{tab:summary_diag}}) are provided which summarize the methods, data and performance.
Some valuable details in the researches are also provided such as classifier, image preprocessing approach, and main image features.
Finally, two main limitations of the current literature are introduced: insufficient research on some components and their faults, and the lack of practical engineering applications.
\subsection{Component detection}
The detection of power line components is the key prerequisite for further analysis.
The number of research articles dealing with component detection has significantly increased in the last few years.
As can be seen in \mbox{Fig. \ref{fig:det_procedure}}, the common detection procedure can be divided into two stages: feature extraction and feature classification.
Features are extracted from images and then input to a classifier to identify whether they belong to the component.
In this paper, the extracted features are grouped into five major categories: color feature, shape feature, texture feature, fusion feature and deep feature.
The features other than the deep feature are also referred to as hand-crafted features or shallow features.
As for the classification stage, learning-based algorithms such as SVM, ANN, and AdaBoost are frequently used as feature classifiers.
Besides that, some hand-crafted rules based on the characteristics of power line components are also used for classification.
For instance, the insulator has a repetitive geometric structure with multiple caps that have a distinctive circular shape.
According to this rule, insulators can be detected by searching for ellipses in the image.
In the following content, we summarize the current literature based on the different image features, with special attention to the core method, component types, image preprocessing approaches, classifier, data for training and testing, and method performance.
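To make the common two-stage procedure concrete, the following minimal sketch runs a sliding window over an image, extracts a toy feature from each window, and hands it to a pluggable classifier. The window size, the mean-color feature, and the threshold rule are illustrative assumptions, not choices taken from any of the reviewed papers.

```python
import numpy as np

def sliding_windows(image, size, stride):
    """Yield (top-left corner, patch) for every window position."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (y, x), image[y:y + size, x:x + size]

def color_feature(patch):
    """Toy feature extraction stage: mean value of each color channel."""
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)

def detect(image, classify, size=8, stride=4):
    """Two-stage pipeline: feature extraction, then feature classification."""
    return [pos for pos, patch in sliding_windows(image, size, stride)
            if classify(color_feature(patch))]

# Toy usage: a dark image containing one bright 8x8 "component".
img = np.zeros((32, 32, 3))
img[8:16, 16:24] = 1.0
hits = detect(img, classify=lambda f: f.mean() > 0.5)
```

In the reviewed works, the toy threshold rule would be replaced by a trained SVM, ANN, or AdaBoost classifier, and the mean-color feature by the color, shape, texture, fusion or deep features discussed below.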
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{det_procedure.jpg}
\caption{The common procedure of component detection in power lines.}\label{fig:det_procedure}
\end{figure}
\begin{table*}[ht]
\caption{Summary of the related work for components detection.}
\label{tab:summary_obj}
\centering
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{2.2mm}{
\begin{tabular}{ c l l l l m{2.5cm} m{2.5cm}}
\hline
\textbf{Features} & \textbf{~~~~~~Method} & \textbf{Component} & \textbf{Image preprocessing} & \textbf{~~Classifier} & \textbf{~~~~~~~~~Data} & \textbf{~~~~~Performance}\\
\hline
\multirow{7}{*}{\textbf{Color}} & Color model \cite{zhang2010simple} & Insulator & \tabincell{l}{RGB to HSI \\ Morphological filter} & Thresholding & Test: 2 & ----\\
\cline{2-7}
& Color model \cite{yao2012Color_ins} & Insulator & \tabincell{l}{RGB to HSI \\ Morphological filter} & Rules & Test: 50 & Complete: 50, incomplete: 42\\
\cline{2-7}
& Color model \cite{reddy2013condition} & Insulator & \tabincell{l}{RGB to Lab \\ K-means cluster} & SVM & Test: 33 & Recall: 100\% \\
\cline{2-7}
& Color model \cite{castellucci2013Color_ANN_tower} & Tower & \tabincell{l}{RGB to HSI \\ RGB to YCbCr} & ANN & Train: 350, Test:350 & Hit rate: 70\% \\
\hline
\multirow{7}{*}{\textbf{Shape}} & OAD-BSPK \cite{zhao2015OAD-BSPK} & Insulator & \tabincell{l}{RGB to Gray \\ Morphological filter} & Rules & Test:4 & Positioning accuracy: 58.4\% \\
\cline{2-7}
& Canny \cite{tragulnuch2018OBIC_tower} & Tower & \tabincell{l}{RGB to Gray \\ Gaussian filter} & Rules & Test: 2 videos with 25 FPS & Recall: 100\% \\
\cline{2-7}
& PLineD \cite{Santos2017PLineD_line} & Conductor & RGB to Gray & Rules & Test: 82 & ----\\
\cline{2-7}
& MLP \cite{liu2017MLP_fitting} & Fitting & ---- & Rules & Test: 2000 & Correct recognition rate: 80.42\% \\
\cline{2-7}
& \tabincell{l}{Profile projection\\ + SVM \cite{li2012PP_SVM_ins}} & Insulator & \tabincell{l}{RGB to HSI \\ Morphological filter} & SVM & Test: 637 & Correct rate: 95.01\% \\
\hline
\multirow{6}{*}{\textbf{Texture}} & GLCM-GMACM \cite{wu2012GMAC} & Insulator & RGB to Gray & K-means & Test: 100 & False alarm rate: 5\% \\
\cline{2-7}
& LDP+SVM \cite{jabid2016rotation} & Insulator & ---- & SVM & Test: 325 & Recall: 94.24\% \\
\cline{2-7}
& RI-LDP+SVM \cite{jabid2018RI-LDP_SVM_ins} & Insulator & ---- & SVM & Test: 395 & Recall: 95.74\% \\
\cline{2-7}
& Harr+AdaBoost \cite{Jin2012Haar_damper} & Fitting & \tabincell{l}{RGB to Gray \\ Smoothing filter} & AdaBoost & Train: 4517, test: 100& True positive rate: 92.48\% \\
\cline{2-7}
& HM-LA \cite{Fu2017Haar_fitting} & Fitting & RGB to Gray & AdaBoost & Test: 21 & Detection rate: 90\% \\
\hline
\multirow{4}{*}{\textbf{Fusion}} & HOG-LBP+SVM \cite{tiantian2017HOG_LBP_ins} & Insulator & \tabincell{l}{Otsu thresholding \\Morphological filter} & SVM & Test: 500 & Right detection rate: 89.1\% \\
\cline{2-7}
& CGT-LBP-HSV \cite{wang2016CGT_LBP_HSV_ins} & Insulator & ---- & Rules & Test: 100 & Recall: 88.9\% \\
\cline{2-7}
& ACF+Boost \cite{han2016ACF_tower} & Tower & ---- & Boost & Train: 600, test: 400 & Test error: 3.25\% \\
\hline
\multirow{12}{*}{\textbf{Deep}} & CNN+SW \cite{liu2016CNN_softmax_SW_ins} & Insulator & \tabincell{l}{Augmentation\\Resize} & Softmax & Train: 3000, test: 341 & True positive rate: 90.9\% \\
\cline{2-7}
& Faster R-CNN \cite{liu2018frcnn_ins} & Insulator & \tabincell{l}{Augmentation\\Resize} & Softmax & Train: 3000, test: 1500 & Recall: 87.53\% \\
\cline{2-7}
& Faster R-CNN \cite{Wang2017RCNN_fitting} & Fitting & Resize & Softmax & Train: 4500, test: 1500 & Recall: 84.03\% \\
\cline{2-7}
& SSD \cite{xu2018SSD_for_ins} & Insulator &\tabincell{l}{Augmentation\\Resize} & Softmax & Train: 2000, test: 500 & Mean average precision: 94.7\% \\
\cline{2-7}
& YOLOv2 \cite{wang2018YOLO_for_ins} & Insulator & \tabincell{l}{RGB to Gray\\Resize} & Softmax & Train: 800, test: 200 & Recognition accuracy: 83.5\% \\
\cline{2-7}
& YOLOv3 \cite{chen2019YOLO_tower} & Tower & \tabincell{l}{Augmentation\\Resize} & Logistic & Train: 11951, test: 1478(mixing with simulated and actual images) & Mean average precision: 90.45\% \\
\cline{2-7}
& FCNs \cite{hui2018Fst_FCNs} & Conductor & ---- & Softmax & Train: 400, test: 200 & Accumulative pixel errors: 450 pixels \\
\cline{2-7}
& cGAN \cite{chang2018cGAN_for_line} & Conductor & \tabincell{l}{Augmentation\\Resize} & Discriminator & Train: 5000, test: 1000 & Accuracy rate: 94.8\% \\
\hline
\end{tabular}
}
\end{table*}
\subsubsection{Color feature}
Detection of power line components has been investigated in a few studies based on color features.
In all these studies, the images were converted to a specific color space, and most of them concentrated on the HSI (hue, saturation, intensity) color space.
Zhang et al. \cite{zhang2010simple} obtained the intensity image by converting the aerial image from the RGB color space into the HSI color space.
Then, a morphological filter was utilized to denoise, and connected components analysis was applied to locate the possible areas of insulators.
Finally, the glass insulator was detected by screening these areas with color thresholding.
Some images describing the detection process are presented as the results of the research.
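A minimal sketch of this kind of color-based screening is given below, using the standard HSI saturation and intensity formulas; the threshold values are assumptions for illustration, and the morphological filtering and connected components steps of \cite{zhang2010simple} are not reproduced.

```python
import numpy as np

def rgb_to_si(rgb):
    """Saturation and intensity channels of the HSI color model.

    rgb: float array in [0, 1] with shape (..., 3).
    """
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1)
    intensity = total / 3.0
    # S = 1 - 3*min(R,G,B)/(R+G+B); defined as 0 for black pixels.
    sat = np.where(total > 0, 1.0 - 3.0 * rgb.min(axis=-1)
                   / np.where(total > 0, total, 1.0), 0.0)
    return sat, intensity

def color_mask(rgb, s_min=0.3, i_min=0.2):
    """Binary mask of pixels passing assumed saturation/intensity thresholds."""
    sat, intensity = rgb_to_si(rgb)
    return (sat > s_min) & (intensity > i_min)

# A saturated red pixel passes; a gray pixel (zero saturation) does not.
mask = color_mask(np.array([[[0.9, 0.1, 0.1], [0.5, 0.5, 0.5]]]))
```

The hue channel could be added via the usual arccos formula; in the reviewed works the resulting mask is typically cleaned up with morphological filtering before the candidate insulator regions are extracted.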
Yao et al. \cite{yao2012Color_ins} also converted the aerial image into HSI color space and the saturation image was used to recognize insulators.
The morphological filter and Optimal Entropic Threshold (OET) were applied for contour extraction.
Contours belonging to insulators were identified according to the factors in hand-craft rules (e.g., circularity, duty-factor, Hu-moment Invariant).
The method was tested in 50 inspection images.
They found that all the complete insulators were correctly detected, while 8 of the incomplete insulators were missed.
Some studies concentrated on the Lab color space for insulator detection.
Reddy et al. \cite{reddy2011dost,reddy2013condition} converted the RGB image to the Lab color space and obtained the required clusters by applying K-means.
Potential bounding boxes that may contain the insulator were drawn by thresholding.
Then, the color feature of each candidate box was fed into a trained ANFIS \cite{reddy2011dost} or SVM \cite{reddy2013condition} to identify the correct box.
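The K-means clustering step can be illustrated with the following minimal implementation; the toy 2-D points stand in for Lab color features, and the iteration count and random seed are arbitrary choices, not parameters from \cite{reddy2013condition}.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: returns (centers, labels) for an (n, d) point array."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct input points.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its assigned points.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return centers, labels

# Two well-separated color clusters (e.g., insulator vs. background pixels).
pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
centers, labels = kmeans(pts, k=2)
```

In practice a library implementation (e.g., scikit-learn's KMeans) would normally be used; this sketch only shows the assign-and-update loop behind the clustering.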
The combination of different color spaces was also discussed by Castellucci et al. \cite{castellucci2013Color_ANN_tower}.
They investigated a tower detection approach based on color features of the HSI and YCbCr (luma, blue-difference, red-difference) color spaces.
Color maps were obtained by converting the aerial images into HSI and YCbCr color space respectively.
Then, channels B, S and Cr from these color maps were utilized to compose the input vector of the ANN.
The 3-layer ANN classified the color features into four classes: pole, crossarm, vegetation and others.
In this research, a transmission tower consisted of a pole and a crossarm.
Therefore, the tower can be detected once the pole and crossarm are found.
In total, 700 images were utilized in this research, and a hit rate of 70\% was achieved.
To summarize, the color feature represents global information rather than local information, which limits its practical application.
Further, how to determine the range of color values is a challenging problem given the complex backgrounds of power lines.
Hence, most of the studies based on color features are early works (before 2013) in the field of power lines inspection.
\subsubsection{Shape feature}
Compared to the color feature, the shape feature shows better representation of power line components due to their line-based structure.
In most studies, the contours or edges were extracted for further classification by using sharpening edges \cite{zhao2015OAD-BSPK}, the Canny edge detector \cite{tragulnuch2018OBIC_tower}, edge drawing \cite{Santos2017PLineD_line} and the crossing gradient template \cite{liu2017MLP_fitting}.
Zhao et al. \cite{zhao2015OAD-BSPK} proposed an insulator detection method based on Orientation Angle Detection and Binary Shape Prior Knowledge (OAD-BSPK).
During image preprocessing, binarization and morphological filtering were performed to obtain a binary image.
Then, the orientation angle was computed by using sharpening edges and used to rotate the binary image so that the insulator was vertical.
According to the binary shape prior knowledge of insulators and the possible orientation angles, small regions were removed and thus the insulator was detected.
Four real-world aerial images were used to evaluate the proposed method.
Instead of sharpening edges, Tragulnuch et al. \cite{tragulnuch2018OBIC_tower} detected power towers based on a commonly used edge detector called Canny.
At first, Canny edge detector was utilized to extract the contours.
Then, the image was separated into 10 $\times$ 10 pixel boxes and Hough line transformations was applied to obtain straight-line.
The box that have long straight-line pass through it was marked as the candidate box.
Finally, the hand-craft rules such as the length and number of the straight-line were used to remove the false box and classify the power tower.
The method was tested in two inspection videos that have 1920$\times$1080 pixels resolution with 25 frames per second.
Results showed that all the towers appeared in videos were correctly detected.
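The edge-plus-Hough idea behind such detectors can be sketched with a minimal Hough transform in numpy; the one-column synthetic edge map below stands in for a tower edge, and in practice one would call cv2.Canny and cv2.HoughLines instead:

```python
import numpy as np

def hough_peak(edge, n_theta=90):
    """Strongest straight line (rho, theta) in a binary edge map via a
    minimal Hough transform (cv2.Canny + cv2.HoughLines in practice)."""
    ys, xs = np.nonzero(edge)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge.shape)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for t_idx, t in enumerate(thetas):
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (rhos, t_idx), 1)          # every edge pixel votes
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]

# synthetic vertical "tower edge" at column 20
edge = np.zeros((40, 40), dtype=np.uint8)
edge[:, 20] = 1
rho, theta = hough_peak(edge)
```

The peak accumulator cell recovers the line x = 20 with orientation 0, which is the kind of long straight line the candidate-box rule looks for.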
Using Edge Drawing, Santos et al. \cite{Santos2017PLineD_line} studied the detection of power conductors.
First, straight line segments were extracted through Edge Drawing.
Then, hand-crafted rules consisting of four steps were designed to identify these segments.
Step 1 cut the bending segments into horizontal and vertical segments.
In step 2, short segments were removed according to the covariance between segments.
The remaining segments were grouped on the basis of line spacing in step 3.
Finally, in step 4, the segments belonging to conductors were picked out according to the number of parallel lines in each group.
In the experiment, they extracted all the conductors in 82 real-world aerial images.
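Step 4's parallel-line rule can be sketched in a few lines of numpy; the angle tolerance and minimum group size below are illustrative values, not those of the paper:

```python
import numpy as np

def group_parallel(segments, angle_tol=np.deg2rad(5), min_group=3):
    """Group segments (x1, y1, x2, y2) by orientation and keep groups with
    at least `min_group` near-parallel members (a sketch of the step-4 rule)."""
    angles = np.array([np.arctan2(y2 - y1, x2 - x1) % np.pi
                       for x1, y1, x2, y2 in segments])
    groups, used = [], np.zeros(len(segments), dtype=bool)
    for i in range(len(segments)):
        if used[i]:
            continue
        # angular difference on the half-circle, so 179 deg is close to 1 deg
        diff = np.abs((angles - angles[i] + np.pi / 2) % np.pi - np.pi / 2)
        close = (diff < angle_tol) & ~used
        if close.sum() >= min_group:
            groups.append(np.nonzero(close)[0].tolist())
        used |= close
    return groups

# three near-parallel "conductor" segments plus one stray vertical segment
segments = [(0, 0, 10, 0), (0, 1, 10, 1), (0, 2, 10, 2), (0, 0, 0, 10)]
groups = group_parallel(segments)
```

Only the three parallel segments survive as a conductor candidate; the stray segment is discarded.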
A crossing gradient template was applied for damper detection in the research of Liu et al. \cite{liu2017MLP_fitting}.
The detection scheme, called multi-level perception, consisted of three perception levels: low, middle and high.
The low-level perception adopted a crossing gradient template for segment extraction.
In the middle-level perception, the aerial image was first divided into multiple blocks, and then parallel lines and crossing lines were used to define the conductor area and the tower area, respectively.
Finally, in the high-level perception, the power line components were recognized according to designed hand-crafted rules.
The rules were based on the local contour feature of the damper and the positional relations between dampers, towers and conductors.
The algorithm was evaluated on real-world images, where 1608 of the 2000 dampers in the dataset were correctly detected.
The aforementioned studies used hand-crafted rules as the classifier.
The reason is as follows: power line components such as towers and conductors have an obvious linear structure compared to the background in aerial images.
Once shape features such as contours and edges are obtained, simple rules on, for example, the length, number or positional relationship of the segments can be designed to filter the extracted features.
The components can then be detected after several filtering operations.
However, beyond the segments themselves, deeper information in the shape feature is worth studying, and learning-based methods are another good choice for feature classification.
Li et al. \cite{li2012PP_SVM_ins} provided such an example, introducing a profile projection method to locate the potential area of insulators.
Next, principal component analysis was introduced for tilt correction of the potential area.
After that, the shape feature was derived from the vertical profile projection curve.
Finally, a trained SVM was used to classify the extracted insulator features.
In the experiments, 637 cropped images were used to test the proposed method, and a correct rate of 95.01\% was obtained.
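The vertical profile projection at the core of this method is straightforward to sketch; the SVM stage is omitted, and the toy binary image is ours (its alternating columns mimic the periodic caps of an insulator):

```python
import numpy as np

def vertical_profile(binary):
    """Vertical profile projection: column-wise foreground counts, normalised.
    The periodic caps of an insulator give this curve a regular ripple."""
    p = binary.sum(axis=0).astype(float)
    return p / (p.max() + 1e-9)

# toy "insulator": alternating filled columns produce a periodic projection
img = np.zeros((8, 12), dtype=np.uint8)
img[:, ::2] = 1
profile = vertical_profile(img)
```

The resulting curve (here alternating between 1 and 0) is the shape feature that would be fed to the classifier.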
\subsubsection{Texture feature}
The following studies discuss the detection of power line components based on texture features, and most of them concentrate on insulators \cite{wu2012GMAC,jabid2016rotation,jabid2018RI-LDP_SVM_ins} and fittings \cite{Jin2012Haar_damper,Fu2017Haar_fitting}.
In contrast to the color feature, the texture feature better characterizes local information, which is appropriate for detecting components with repetitive geometric structure (e.g., insulators, dampers, and spacers).
Wu et al. \cite{wu2012GMAC} introduced a texture segmentation algorithm for insulator detection.
The texture feature was extracted with the Gray Level Co-occurrence Matrix (GLCM) and classified into two classes by K-means.
Then, insulators were recognized by means of the Global Minimization Active Contour Model (GMACM).
Experiments on 100 aerial images with a 5\% false alarm rate demonstrated the performance of the proposed algorithm.
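A minimal numpy GLCM with one derived Haralick statistic (contrast) gives a feel for this texture feature; production code would use skimage.feature.graycomatrix, and the K-means and GMACM stages are omitted:

```python
import numpy as np

def glcm(img, dx=1, levels=4):
    """Normalised Gray Level Co-occurrence Matrix for a horizontal offset dx."""
    i = img[:, :-dx].ravel()
    j = img[:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (i, j), 1)          # count co-occurring gray-level pairs
    return m / m.sum()

def contrast(m):
    """Haralick contrast: large when neighbouring pixels differ strongly."""
    idx = np.arange(m.shape[0])
    return float((m * (idx[:, None] - idx[None, :]) ** 2).sum())

flat = np.zeros((4, 4), dtype=int)                 # uniform texture
checker = np.indices((4, 4)).sum(axis=0) % 2       # high-frequency texture
```

A uniform patch yields zero contrast while a checkerboard yields a high value, which is the kind of separation the K-means step exploits.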
The Local Directional Pattern (LDP) is a commonly used method for texture feature extraction and has been applied in several studies on insulator detection.
Jabid et al. \cite{jabid2016rotation} dealt with the orientation variation problem in insulator detection.
The proposed method consists of three steps: correcting the orientation of insulators to horizontal, performing LDP to extract the texture feature, and classifying the texture feature with an SVM.
They established an evaluation set containing 325 images to verify the presented algorithm and achieved a recall rate of 94.24\%.
In later research \cite{jabid2018RI-LDP_SVM_ins}, they improved the LDP method to solve the orientation variation issue, calling the result Rotation Invariant LDP (RI-LDP).
Thus, step 1 of the detection scheme in \cite{jabid2016rotation}, which corrects the insulator orientation, could be removed.
The SVM was still applied as the feature classifier.
The evaluation set grew to 395 images with 722 labeled insulators, and the improved method achieved 95.74\% recall.
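The rotation-invariance trick can be illustrated with plain LBP: threshold the 8 neighbours against the centre and take the minimum code over all circular bit rotations, so rotating the patch leaves the code unchanged. This is the classic rotation-invariant LBP construction, used here only as a stand-in for RI-LDP:

```python
import numpy as np

def ri_lbp_code(patch):
    """Rotation-invariant LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1, 1]
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]  # circular order
    bits = [int(v >= c) for v in nb]
    # minimum over all circular rotations makes the code rotation-invariant
    return min(sum(b << k for k, b in enumerate(bits[r:] + bits[:r]))
               for r in range(8))

patch = np.array([[9, 1, 1],
                  [1, 5, 1],
                  [1, 1, 1]])
code = ri_lbp_code(patch)
code_rot = ri_lbp_code(np.rot90(patch))   # same patch, rotated 90 degrees
```

Both orientations produce the same code, which is why the orientation-correction step becomes unnecessary.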
Besides insulators, some studies focused on fitting detection based on Haar-like features.
Jin et al. \cite{Jin2012Haar_damper} extracted Haar-like features to detect dampers.
A cascade AdaBoost classifier was used to identify the features from sliding windows over the original image.
A total of 4517 images, comprising 1518 damper images and 2999 background images, were collected for training the classifier, and 100 images were used for testing.
Results showed the effectiveness of the proposed method, with a 92.48\% true positive rate.
Fu et al. \cite{Fu2017Haar_fitting} also concentrated on the detection of fittings such as dampers and fasteners.
Instead of detecting the entire component, they decomposed it into multiple sub-components and detected them separately.
The combination of Haar-like features and an AdaBoost classifier was used to recognize these sub-components.
Then, the damper or nut could be detected according to the positional relationship of the sub-components.
The method was evaluated on 21 images and achieved over a 90\% detection rate in a simple photography situation.
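Haar-like features are cheap to evaluate because each reduces to a handful of lookups in an integral image; the sketch below computes a vertical two-rectangle feature (the AdaBoost cascade itself is omitted):

```python
import numpy as np

def integral(img):
    """Integral image: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    s = ii[r1 - 1, c1 - 1]
    if r0: s -= ii[r0 - 1, c1 - 1]
    if c0: s -= ii[r1 - 1, c0 - 1]
    if r0 and c0: s += ii[r0 - 1, c0 - 1]
    return s

def haar_two_rect(ii, r, c, h, w):
    """Vertical two-rectangle Haar feature: left half minus right half."""
    return (box_sum(ii, r, c, r + h, c + w // 2)
            - box_sum(ii, r, c + w // 2, r + h, c + w))

img = np.zeros((4, 4)); img[:, :2] = 1.0   # bright left half, dark right half
feature = haar_two_rect(integral(img), 0, 0, 4, 4)
```

Sliding such features over the image and feeding their responses to a boosted cascade is the scheme these papers build on.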
\subsubsection{Fusion Feature}
A few attempts have been made to detect power line components based on fusion features.
In the following studies, multiple types of features (e.g., shape, color, and texture) were combined for components detection.
Yan et al. \cite{tiantian2017HOG_LBP_ins} discussed the use of fusion feature for insulator detection.
The Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features were extracted and then classified by SVM.
The SVM classifier was trained with 700 local sub-insulator images from aerial videos.
The proposed method was evaluated on 500 images with an 89.1\% detection rate.
The authors also discussed the benefit of the fusion feature compared to single-feature methods.
The HOG-based and LBP-based methods achieved 85.1\% and 81.8\% detection rates, respectively.
The results illustrated that the fusion feature has more capacity for representing insulators.
As the authors of \cite{tiantian2017HOG_LBP_ins} noted, fusion features can achieve higher accuracy than single features.
Wang et al. \cite{wang2016CGT_LBP_HSV_ins} proposed an insulator detection method that merged shape, color and texture features.
For the shape feature, edges were extracted using gradient operators in different directions.
Then candidate regions were produced by parallel-line clustering.
For the color and texture features, HSV color space conversion and LBP were performed on the candidate regions.
Finally, insulators could be detected by similarity calculation based on the Euclidean distance of the HSV and LBP features.
Experiments were implemented on 100 images and an 88.9\% detection rate was achieved.
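The final matching step can be sketched as a Euclidean distance between concatenated, per-block L2-normalised descriptors; the HSV and LBP histograms themselves are abstracted away here:

```python
import numpy as np

def fused_distance(feats_a, feats_b):
    """Euclidean distance between fused descriptors: each block (e.g. an HSV
    histogram and an LBP histogram) is L2-normalised, then concatenated."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / (np.linalg.norm(v) + 1e-9)
    a = np.concatenate([unit(f) for f in feats_a])
    b = np.concatenate([unit(f) for f in feats_b])
    return float(np.linalg.norm(a - b))

same = fused_distance([[1, 2], [3, 0]], [[1, 2], [3, 0]])
diff = fused_distance([[1, 0]], [[0, 1]])
```

Per-block normalisation keeps one modality (say, a long LBP histogram) from dominating the other in the distance.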
The methods mentioned above classified different features separately; the following study aggregated different features into a multi-channel feature map for classification.
Han et al. \cite{han2016ACF_tower} described a process for tower detection based on a fusion feature with 10 channels.
The Aggregate Channel Features (ACF) comprise 1 channel of normalized gradient magnitude, 6 channels of histograms of oriented gradients and 3 channels of the LUV color space.
After feature extraction, an AdaBoost classifier was used to distinguish towers from the background.
The proposed method was tested on 200 images and attained 96.75\% accuracy.
Although applications of fusion features to power line component detection are rare, they show considerable potential when data is insufficient.
Compared with single-feature methods, fusion features describe the components more comprehensively, which means higher accuracy can be obtained.
However, this improvement comes at the cost of detection speed, since multiple features must be extracted.
\subsubsection{Deep Feature}
The number of research articles dealing with power line component detection based on deep learning has increased significantly in the last few years, especially since 2016.
These studies extract deep features from aerial images for component detection, and most of them achieve better performance than the hand-crafted-feature-based studies mentioned above.
Comparative experiments can be found in \cite{liu2016CNN_softmax_SW_ins,hui2018Fst_FCNs,chang2018cGAN_for_line}.
In deep learning approaches, data quantity is an important factor for performance.
Thus, data augmentation was applied to address data insufficiency in \cite{liu2016CNN_softmax_SW_ins,liu2018frcnn_ins,xu2018SSD_for_ins,chen2019YOLO_tower,chang2018cGAN_for_line}.
Resizing the images is also a common preprocessing step, mentioned in \cite{liu2016CNN_softmax_SW_ins,liu2018frcnn_ins,Wang2017RCNN_fitting,xu2018SSD_for_ins,wang2018YOLO_for_ins,chen2019YOLO_tower,chang2018cGAN_for_line}.
There are two main reasons for resizing: on the one hand, some deep learning frameworks require fixed-size input;
on the other hand, aerial images collected from UAVs have high resolution, and resizing them to a smaller size saves considerable computational resources.
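These two preprocessing steps can be sketched in plain numpy; real pipelines would use cv2.resize and a richer augmentation set, and the sizes here are arbitrary:

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize to a fixed input size."""
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[rows[:, None], cols]

def augment(img):
    """Minimal augmentation: horizontal/vertical flips and a 90-degree turn."""
    return [img, img[:, ::-1], img[::-1, :], np.rot90(img)]

img = np.arange(6).reshape(2, 3)
resized = resize_nn(img, 4, 6)
```

Each original image thus yields a fixed-size input plus several label-preserving variants for training.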
In early research on component detection based on deep features, simple CNNs combined with sliding windows were introduced.
Liu et al. \cite{liu2016CNN_softmax_SW_ins} introduced a deep-learning-based method for insulator recognition.
A six-layer convolutional neural network combined with a sliding window scheme was applied for the detection of insulators.
They evaluated the method on 341 images and achieved a 90.9\% true positive rate.
Comparative experiments were also conducted with Bag of Words (BoW) and the Deformable Parts Model (DPM with HOG features); the results demonstrated the improvement of the proposed method over these shallow-feature-based methods.
With the development of deep learning technology, a large number of well-known object detection frameworks have emerged in recent years.
Researchers in the field of power line inspection have attempted to apply these existing frameworks to component detection.
For example, Liu et al. \cite{liu2018frcnn_ins} applied the Faster Region-based Convolutional Neural Network (Faster R-CNN) to detect insulators in aerial images.
Wang et al. \cite{Wang2017RCNN_fitting} also employed Faster R-CNN for fitting detection, including dampers, spacers and arcing rings.
Both studies cropped the aerial image so that the object was centered, and then resized this sub-window to 500 $\times$ 500 resolution.
Insulator detection was also investigated using the Single Shot multi-box Detector (SSD) in the paper of Xu et al. \cite{xu2018SSD_for_ins}, and You Only Look Once v2 (YOLOv2) in the article of Wang et al. \cite{wang2018YOLO_for_ins}.
The aerial images were resized to 512$\times$512 pixels for SSD and 448$\times$448 for YOLOv2.
For tower detection, Chen et al. \cite{chen2019YOLO_tower} trained five YOLOv3 models with input sizes of 288$\times$288, 352$\times$352, 416$\times$416, 480$\times$480 and 544$\times$544 pixels.
Due to the lack of real-world inspection data, they generated 13,429 simulated images for training and testing.
The results showed that the model trained at 352$\times$352 achieved 90.45\% mean Average Precision (mAP).
The process of detecting conductors with deep features is quite different from that of other components.
Instead of region-based frameworks, researchers are more inclined to use pixel-wise frameworks due to the slender line structure of conductors.
Hui et al. \cite{hui2018Fst_FCNs} employed Fully Convolutional Networks (FCNs) to detect transmission conductors in aerial images.
A sequence of images from aerial videos was used to evaluate the proposed method.
Results showed the improvement of the deep-feature-based method over the edge-based method.
Chang et al. \cite{chang2018cGAN_for_line} utilized conditional Generative Adversarial Nets (cGANs) to detect conductors.
For model training, they constructed a specific dataset including four types of conductor images: normal (clear strip texture), linear (slightly farther away than the normal ones), quadrangular (emphasizing the strip texture through close observation), and noWire (background only).
Meanwhile, data augmentation was applied and all images were resized to 256$\times$256.
The proposed method was tested on 1000 images (500 simple samples and 500 complex samples) and achieved 94.8\% average accuracy.
Comparison experiments were conducted with shallow-feature-based methods such as the Line Segment Detector (25.2\%) and HOG (19.4\%), and other deep-feature-based methods such as PCANet (86.8\%) and ENet (95.4\%).
The results illustrate the high efficiency of the deep feature.
In this section, we have only introduced several representative works that use deep features for component detection.
Other studies that apply deep learning to analyze inspection data are reviewed in Section V.B, where a detailed and in-depth discussion with special attention to deep learning is provided.
\subsubsection{Remarks}
\mbox{Table \ref{tab:summary_obj}} summarizes the studies on power line component detection, including the main image features used in each proposed method, the inspected component, the image preprocessing operations, the classifier for the extracted features, a brief description of the data, and the method performance.
Component detection is a relatively mature area, as it has many applications and a large amount of available data.
In the majority of existing works, the image feature extractor is manually designed according to the characteristics of the components, while feature classification is mainly implemented with hand-crafted rules and shallow learning models.
There have been some attempts to apply deep learning models for end-to-end component detection, but such investigations are still limited.
To improve the performance of component detection, there are at least two ways:
1) using refined aggregated features instead of single features;
2) improving deep learning networks based on the characteristics that distinguish the different components from other generic objects.
Among the detected component categories, the insulator has received the most attention.
To fully monitor the condition of power lines, other component types, especially fittings, need further attention.
In addition, we find that the description of the experimental data is unclear in part of the literature.
Data quality is an important factor that greatly influences the evaluation of a proposed method.
Information such as the data size, image resolution, data collection approach and visualized samples should be well documented.
Furthermore, the evaluation metrics used in current works are inconsistent.
Many metrics have been applied to illustrate method performance, such as recall, precision, accuracy, true positive rate, and average precision.
Even the same metric may be defined differently in different studies.
Besides, we notice that in the existing literature, authors evaluate their methods on their own private datasets, and comparative experiments are quite limited.
Without common evaluation metrics and datasets, the superiority of a given method cannot be established.
A standard evaluation baseline including metrics and open dataset will promote the research in the whole area of inspection data analysis.
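To make the point concrete, the sketch below fixes one common set of definitions: IoU for matching predicted boxes to ground truth, and precision/recall computed from the resulting counts. These particular formulas are our illustration of what a standard baseline could specify, not ones taken from any single surveyed paper:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(tp, fp, fn):
    """Precision and recall from raw detection counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))     # partially overlapping boxes
p, r = precision_recall(9, 1, 3)
```

Stating the matching threshold (e.g. IoU > 0.5 counts as a true positive) alongside such definitions would make the reported numbers directly comparable.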
\subsection{Fault diagnosis}
Here, we consider the fault diagnosis of power line components by using visible inspection images.
Studies on fault diagnosis are far fewer than those on component detection, for the following reasons:
1) faulty components harm the power system, but they are relatively rare compared to normal components;
2) multiple types of faults occur in the same component;
3) the same fault type has many manifestations in images.
These factors lead to a lack of fault data that limits the use of learning-based approaches, while hand-crafted methods struggle to cover such a variety of component faults.
As can be seen in \mbox{Fig. \ref{fig:dia_procedure}}, the typical fault diagnosis procedure is composed of two stages: detecting the component and identifying the fault.
In the first stage, the component region is detected and cropped as the Region of Interest (RoI) in order to filter out the background for further analysis.
In the second stage, the fault identification method is applied to the RoI.
Note that in a few studies (e.g., \cite{yang2017IULBP, maeda2017DELM-LRF}), the component detection stage was not needed, since the component was already the principal part of the image.
On the other hand, the existence of certain objects is itself a fault, such as bird's nests \cite{Xu2017HSV_GLCM_tower,Lu2018CF-CC_tower} and foreign bodies \cite{Song2015CED-HT_fitting,Tang2018Fst_fitting}; these faults are obvious enough to be analyzed directly without a component detection stage.
In the following, the literature is summarized by fault category, with special attention to the fault identification stage; the image features, data, and performance are also covered.
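The two-stage procedure of \mbox{Fig. \ref{fig:dia_procedure}} can be written as a small pipeline; detect_roi and identify_fault are hypothetical stand-ins for whichever detector and fault identifier a given paper uses:

```python
import numpy as np

def diagnose(image, detect_roi, identify_fault):
    """Two-stage diagnosis: crop each component RoI, then identify its fault."""
    results = []
    for (x0, y0, x1, y1) in detect_roi(image):
        crop = image[y0:y1, x0:x1]          # filter out the background
        results.append(bool(identify_fault(crop)))
    return results

# toy stand-ins: one fixed box; a "fault" is a crop with low mean intensity
img = np.ones((10, 10)); img[2:4, 2:4] = 0.0
faults = diagnose(img,
                  lambda im: [(2, 2, 4, 4)],
                  lambda crop: crop.mean() < 0.5)
```

Skipping the first stage, as in \cite{yang2017IULBP, maeda2017DELM-LRF}, amounts to passing a detector that returns the whole image as the only RoI.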
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{dia_procedure.jpg}
\caption{The common procedure of fault diagnosis}\label{fig:dia_procedure}
\end{figure}
\begin{table*}[ht]
\caption{Summary of the related work of fault diagnosis}
\label{tab:summary_diag}
\centering
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{ m{1.7cm} l l m{2cm} l m{2.5cm} m{2.5cm} }
\hline
\textbf{~~~~~Fault} & \textbf{~~~~~~Method} & \textbf{~~~~Detection} & \textbf{Identification} & \textbf{Main features} & \textbf{~~~~~~~~~Data} & \textbf{~~~~~Performance}\\
\hline
\multirow{6}{1.7cm}{Surface fault of insulator} & IULBP \cite{yang2017IULBP} & ---- & IULBP+Rules & Texture & ---- & ----\\
\cline{2-7}
& GSS-GSO \cite{Hao2018GSS-GSO_ins_ice} & GrabCut & Rules & Shape & ---- & ---- \\
\cline{2-7}
& M-SA \cite{Zhai2018M-SA_ins_flash} & F-PISA & Color model & Color & Test: 100 & Detection rate: 92.7\% \\
\cline{2-7}
& CGL-EGL \cite{oberweger2014CGL-EGL_ins} & CGL & EGL & Shape & Test: 20 instances & True positive rate: 95\%\\
\cline{2-7}
& M-PDF \cite{zhao2016M-PDF_ins} & OAD-BSPK \cite{zhao2015OAD-BSPK} & AlexNet & Deep & Train: 300, test: 700 & Mean average precision: 98.71\% \\
\hline
\multirow{12}{1.7cm}{Missing-cap of insulator} & GLCM \cite{wang2016CGT_LBP_HSV_ins} & CGT-LBP-HSV & GLCM+Rules & Texture & ---- & ---- \\
\cline{2-7}
& S-AM \cite{zhai2017S-AM_ins_drop} & Saliency detection & Adaptive morphology & Fusion & Test: 100 & Detection rate: 92.4\% \\
\cline{2-7}
& SMF \cite{zhai2018SMF_ins} & Color model & Morphology & Fusion & Test: 74 & Detection success rate: 91.7\% \\
\cline{2-7}
& M-YOLO+AM \cite{Han2019MYOLO_AM_ins} & M-YOLO & Adaptive morphology & Shape & Test: 42 & Recall: 93.3\% \\
\cline{2-7}
& \tabincell{l}{Faster R-CNN \\+ U-net\cite{ling2018Fst-Unet}}& Faster R-CNN & U-net & Deep & Train: 165, test: 55 & Recall: 95.5\% \\
\cline{2-7}
& \tabincell{l}{R-FCN \cite{li2018insulator_rfcn}}& ---- & R-FCN & Deep & Train: 2626, test: 500 & Mean average precision: 90.5\% \\
\cline{2-7}
& \tabincell{l}{Up-Net+CNN \cite{sampedro2019UpNet_CNN_ins}}& Up-Net & CNN & Deep & Train: 2400, test: 400 (synthetic images) & Accuracy rate: 98.8\% \\
\hline
\multirow{3}{1.7cm}{Corrosion of tower} & DELM-LRF \cite{maeda2017DELM-LRF} & ---- & DELM-LRF & Deep & Train: 2237, test: 560 & F-measure: 79.6\% \\
\cline{2-7}
& CMDELM-LRF \cite{maeda2018CMDELM-LRF} & ---- & CMDELM-LRF & \tabincell{l}{Deep\\(visual+text)} & Train: 2414, test: 603 & F-measure: 88.8\% \\
\hline
\multirow{2}{1.7cm}{Bird's nest of tower} & HSV-GLCM \cite{Xu2017HSV_GLCM_tower} & PED & HSV-GLCM & Fusion & Test: 50 & Accuracy rate: 87.5\% \\
\cline{2-7}
& CF-CC \cite{Lu2018CF-CC_tower} & ---- & CF-CC & Fusion & Train: 2972, test: 200 & Accuracy rate: 97.33\% \\
\hline
\multirow{4}{1.7cm}{Broken strand of conductor} & CED-IFR \cite{Liu2012IFR_line} & CED & IFR & Shape & Test: 100 (10 fault images) & Recognition rate: 100\% \\
\cline{2-7}
& LED-HT \cite{Yin2016LED-HT_line} & LED-HT & Rules & Shape & ---- & ----\\
\cline{2-7}
& CT \cite{Wang2015CT} & Gestal & Rules & Shape & ---- & ---- \\
\cline{2-7}
& GVN-SWT \cite{zhang2019GVN_line} & GVN & SWT & Texture & Test: 400 & Accuracy rate: 85.5\% \\
\hline
\multirow{3}{1.7cm}{Foreign body of conductor} & DAG-SVM \cite{mao2019HOG_SVM_line} & ---- & DAG-SVM & Shape & Train: 301, test: 34 & Accuracy rate: 84.3\% \\
\cline{2-7}
& SSD \cite{Wang2018SSD_line} & ---- & SSD & Deep & Train: 4500, test: 1500 & Mean average precision: 85.2\% \\
\hline
\multirow{2}{1.7cm}{Vegetation encroachment} & PCNN \cite{Mills2010PCNN_line} & ---- & PCNN & ---- & Test: 10 & Detection rate: 96\% \\
\cline{2-7}
& CNN-SM \cite{Qayyum2018Deep_stereo_line} & ---- & CNN-SM & Deep & Test: 40 instances & Accuracy rate: 90\% \\
\hline
\multirow{2}{1.7cm}{Broken of fitting} & CED+HT \cite{Song2015CED-HT_fitting} & CED+HT & Rules & Shape & ---- & ---- \\
\cline{2-7}
& Faster R-CNN \cite{Tang2018Fst_fitting} & ---- & Faster R-CNN & Deep & Train: 1000, test: 500 & Recall: 83.4\% \\
\hline
\multirow{3}{1.7cm}{Missing pin of fitting} & HM-LA \cite{Fu2017Haar_fitting} & Haar+Adaboost & HT+LSD & Shape & ---- & ---- \\
\cline{2-7}
& CNN \cite{Wang2018CNN_fitting} & ACF+Adaboost & CNN & Deep & Train: 1900, test: 752 & Accuracy rate: 96.54\% \\
\hline
\end{tabular}
}
\end{table*}
\subsubsection{Surface fault of Insulator}
Some studies concentrated on a single surface fault of insulators, such as icing or flashover.
Yang et al. \cite{yang2017IULBP} presented a classification method for ice types on insulators based on a texture feature descriptor.
According to severity, they categorized the ice types as free of ice, glaze ice, heavy rime, medium rime, slight rime, partial rime and snow.
An improved uniform LBP (IULBP) was proposed for feature extraction.
Then, the extracted features were compared with a predetermined template for each ice type.
Thus, the ice type of the insulator could be classified according to the similarity between the extracted feature and the predetermined templates.
The authors evaluated their method on a few images that were cropped to focus on the icing part.
Therefore, they excluded the insulator detection stage from the general fault diagnosis framework.
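A minimal sketch of such template matching, using a chi-square histogram distance as the similarity measure; the paper's exact similarity function and templates are not reproduced here, so both are our assumptions:

```python
import numpy as np

def chi2_distance(hist, template):
    """Chi-square distance between two (texture) histograms."""
    h = np.asarray(hist, dtype=float)
    t = np.asarray(template, dtype=float)
    return float(((h - t) ** 2 / (h + t + 1e-9)).sum())

def classify_ice(hist, templates):
    """Pick the ice-type template closest to the extracted histogram."""
    return min(templates, key=lambda name: chi2_distance(hist, templates[name]))

# two hypothetical per-type templates and one observed IULBP-style histogram
templates = {"glaze ice": [1.0, 0.0], "slight rime": [0.0, 1.0]}
label = classify_ice([0.9, 0.1], templates)
```

The nearest-template rule generalizes to the seven ice types simply by adding one template per type.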
Hao et al. \cite{Hao2018GSS-GSO_ins_ice} assessed the icing condition of insulators based on the geometric structure of the iced insulator.
GrabCut was employed to segment the insulator from the images.
Hand-crafted rules were designed to classify the icing condition based on distance properties between neighbouring insulator caps.
These distance properties were defined as Graphical Shed Spacing (GSS) and Graphical Shed Overhang (GSO).
The method was tested on 8 images, and the results showed that it could recognize icing conditions quantitatively.
Zhai et al. \cite{Zhai2018M-SA_ins_flash} applied Faster Pixel-wise Image Saliency Aggregating (F-PISA) to detect insulators.
The flashover area in the detected insulator was then extracted by color determination in the Lab color space.
The method was evaluated on 100 insulator images with flashover faults and achieved a 92.7\% detection rate.
A few researchers introduced fault diagnosis schemes to determine multiple surface faults of insulators, and most followed the same basic procedure: detect the insulator, divide the insulator region into several parts, and calculate the similarity between the parts.
Oberweger et al. \cite{oberweger2014CGL-EGL_ins} extracted Difference of Gaussian key-points and calculated a Circular GLOH-like (CGL) descriptor at each key-point.
The descriptors were reduced with Principal Component Analysis (PCA) and then classified using a RANSAC-based clustering approach to identify the insulator.
Once the insulator region was detected, each cap could be separated from it by means of GrabCut segmentation and Canny edge detection.
Then, an Elliptical GLOH-like (EGL) descriptor was computed for every individual cap.
Finally, the faulty cap could be determined according to the Local Outlier Factor (LOF) between the caps.
The method was tested on 400 aerial images with 20 faulty caps, comprising 16 cracked caps and 4 flashover caps.
It achieved a 95\% true positive rate, outperforming the GLCM-based method introduced in \cite{Zhang2010GLCM_ins_fault}.
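The outlier step can be illustrated with a simplified stand-in for LOF that flags the cap whose descriptor is, on average, farthest from the others; sklearn's LocalOutlierFactor would be the closer match in practice:

```python
import numpy as np

def outlier_cap(descriptors):
    """Index of the cap whose descriptor deviates most from the other caps."""
    d = np.asarray(descriptors, dtype=float)
    # pairwise Euclidean distances between all cap descriptors
    dist = np.linalg.norm(d[:, None, :] - d[None, :, :], axis=-1)
    return int(dist.mean(axis=1).argmax())

# three similar cap descriptors and one anomalous (e.g. cracked) cap
caps = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
faulty = outlier_cap(caps)
```

Because healthy caps of one insulator look alike, the faulty cap stands out without any labelled fault data, which suits the data-scarcity problem noted above.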
Zhao et al. \cite{zhao2016M-PDF_ins} presented a deep-learning-based method for classifying insulator status as normal, damaged, dust contaminated or missing caps.
The insulator was detected with OAD-BSPK, which was proposed in \cite{zhao2015OAD-BSPK}.
After insulator detection, the insulator region was divided into several parts.
Then, these sub-images were resized to 256$\times$256 as image patches and input to a pre-trained AlexNet (a CNN framework for classification) for feature extraction.
Finally, the 4096-dimensional feature vector obtained from AlexNet was classified by a trained SVM.
Experiments were conducted on 1000 samples with 98.71\% mAP.
\subsubsection{Missing-cap of Insulator}
The diagnosis of insulator missing-caps is a popular research issue in the power line inspection domain.
The amount of relevant literature is also the largest compared with other inspection tasks.
The main reasons for this can be attributed to the following points:
1) the insulator is widely used in power lines and plays a significant role in mechanical support and electrical insulation;
2) the missing-cap fault of insulators occurs frequently;
3) the appearance of the missing-cap fault in aerial images is invariable and obvious.
The missing-cap can be detected through a partition-based procedure that separates the insulator region into several parts and calculates their similarities.
Wang et al. \cite{Wang2014ins_drop,wang2016CGT_LBP_HSV_ins} located the insulator with the fusion-feature-based method introduced in Section IV.A.
Then, the insulator region was rotated to horizontal and divided into 23 blocks.
The GLCM texture feature was extracted from each block and used for similarity calculation.
Finally, the anomalous block was identified as the missing-cap region.
A sequence of images of the diagnosis process demonstrated the performance of the proposed method.
The partition-based procedure is limited by the size setting of the parts and the repeated similarity calculations.
Therefore, some attempts have been made to detect missing-caps by applying morphological operations to the whole insulator region to highlight the faulty area.
Zhai et al. \cite{zhai2017S-AM_ins_drop} detected the missing-cap of insulators based on saliency and adaptive morphology (S-AM).
The insulator region was located using saliency detection that combined color and gradient features.
A color model was used to segment the insulator from the located region for fault analysis.
The missing-cap fault could be highlighted after the adaptive morphology operation.
In experiments, the proposed method achieved a 92.4\% detection rate on 100 aerial images and was compared with other competitive approaches (\cite{Wang2014ins_drop} with 65.4\%, \cite{Zhang2014ins_drop} with 85.7\%).
However, S-AM could only deal with faults of glass insulators.
To this end, the authors improved the S-AM method in \cite{zhai2018SMF_ins} to handle both glass and ceramic insulators.
They located the insulator using a color model and rotated it to horizontal.
Then, a morphological operation was performed to obtain the projected curve of the fault features.
Finally, according to hand-crafted rules, the fault position could be determined.
Experimental results demonstrated the ability of the proposed method (92.8\%) compared with S-AM (92.4\%) \cite{zhai2017S-AM_ins_drop}.
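The highlighting idea can be sketched with basic binary morphology: closing (dilation followed by erosion) fills small gaps, and subtracting the original image leaves exactly the filled gap, i.e. the candidate missing-cap region. This is a generic construction, not the specific adaptive morphology of the papers above:

```python
import numpy as np

def dilate(b):
    """3x3 binary dilation (cv2.dilate in practice)."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= p[1 + dr:1 + dr + b.shape[0], 1 + dc:1 + dc + b.shape[1]]
    return out

def erode(b):
    """3x3 binary erosion; the border is padded with foreground."""
    p = np.pad(b, 1, constant_values=True)
    out = np.ones_like(b)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out &= p[1 + dr:1 + dr + b.shape[0], 1 + dc:1 + dc + b.shape[1]]
    return out

def highlight_gap(b):
    """Morphological closing minus the original: highlights small gaps."""
    return erode(dilate(b)) & ~b

# a cap row with one missing cap (the gap at column 3)
row = np.ones((1, 7), dtype=bool); row[0, 3] = False
gap = highlight_gap(row)
```

The residue contains only the gap pixel, so no block partitioning or repeated similarity computation is needed.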
Han et al. \cite{Han2019MYOLO_AM_ins} also diagnosed missing-caps by means of morphological operations.
A modified YOLOv2 detection framework was introduced to detect the insulators.
Similar to the research in \cite{zhai2017S-AM_ins_drop}, they used adaptive morphology to highlight the missing-cap fault region.
However, for insulator segmentation, a color model combined with GrabCut was applied rather than the color model alone.
A total of 120 images (42 original images augmented to 120 processed images) with insulator missing-caps were used to test the proposed method.
In comparative experiments, the researchers compared their method with the S-AM \cite{zhai2017S-AM_ins_drop} and SMF \cite{zhai2018SMF_ins} methods mentioned above and achieved the best performance, with 96.3\% precision and 93.4\% recall.
Recently, deep learning has attracted considerable interest in power line inspection, and most studies concentrate on the detection of insulators and their faults \cite{tao2018ILN_DDN_ins, ling2018Fst-Unet, li2018insulator_rfcn, sampedro2019UpNet_CNN_ins, liu2018frcnn_ins, yang2019DCNN_ins, gao2017Fst_FCN_for_ins, tian2018Parallel_ins, jiang2019EL-MLP, chen2019SOFCN}.
For example, Ling et al. \cite{ling2018Fst-Unet} applied Faster R-CNN to detect the insulator and employed U-Net to segment the missing-cap fault area within the detected region.
The method was evaluated on 55 faulty images and achieved 95.1\% precision and 95.5\% recall.
Li et al. \cite{li2018insulator_rfcn} detected missing-caps using a Region-based Fully Convolutional Network (R-FCN).
The training and testing sets comprised 2626 and 500 images respectively, and the method achieved 90.5\% AP.
Sampedro et al. \cite{sampedro2019UpNet_CNN_ins} proposed an Up-Net to segment the insulator and constructed a 10-layer CNN to determine the missing-cap fault.
For training and testing the diagnosis model, 2400 and 400 images were used, and the method obtained an accuracy of 98.8\%.
More details about these deep-learning-based approaches are discussed in Section V.B.
\subsubsection{Corrosion of tower}
A few examples can be found of the corrosion determination of power towers using closely photographed images.
Maeda et al. \cite{maeda2017DELM-LRF} estimated the corrosion level of the transmission tower based on Local Receptive Field (LRF) and Deep Extreme Learning Machine (DELM).
The research focused on surface images of the tower and applied LRF to extract features for further diagnosis.
The LRF functioned like a CNN, performing convolution and pooling on the input image.
Then, the DELM was utilized to classify the extracted features into three corrosion levels.
A total of 2797 images with 5-fold cross-validation were utilized in the experiment, and an F-measure of 79.6\% was achieved.
In \cite{maeda2018CMDELM-LRF}, the authors modified DELM-LRF \cite{maeda2017DELM-LRF} by incorporating text features.
Text information such as the type of tower, height of tower, voltage level and coating year was translated into a feature vector, which was then input to the DELM-LRF framework together with the visual features.
In the experiment, a total of 3017 samples with 5-fold cross-validation were utilized.
The modified DELM-LRF (named Correlation-Maximizing DELM-LRF) achieved an F-measure of 88.8\%, a great improvement over DELM-LRF.
\subsubsection{Bird's nest of tower}
The studies presented in the following discuss the detection of bird's nests on power towers, which resembles a common object detection task.
Xu et al. \cite{Xu2017HSV_GLCM_tower} presented a bird's nest detection method for transmission towers.
In the detection stage, the tower region was located by using the Prewitt direction operator and hand-crafted rules.
For fault diagnosis, the image region of the tower was converted to the HSV color space and the candidate regions were identified based on a color model.
The GLCM was calculated for each candidate region to analyze the texture features of the bird's nest, and the fault could then be detected.
Experiments were conducted on 50 aerial images and an 87.5\% detection rate was achieved.
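As an illustrative sketch (not the authors' code), the GLCM texture statistic used above can be computed as follows; the quantization to 8 gray levels and the horizontal $(1,0)$ pixel offset are assumptions made for brevity:

```python
import numpy as np

def glcm_contrast(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for pixel pairs offset by (dx, dy),
    plus the contrast statistic, a common GLCM texture feature."""
    q = (img.astype(int) * levels) // 256              # quantize to `levels` gray levels
    h, w = q.shape
    a = q[:h - dy, :w - dx]                            # reference pixels
    b = q[dy:, dx:]                                    # neighbour pixels
    m = np.zeros((levels, levels), dtype=float)
    np.add.at(m, (a.ravel(), b.ravel()), 1)            # count co-occurrences
    m /= m.sum()                                       # normalize to probabilities
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)
    return m, contrast

flat = np.full((16, 16), 128, dtype=np.uint8)          # uniform region: no texture
stripes = np.tile([0, 255], (16, 8)).astype(np.uint8)  # strong horizontal variation
_, c_flat = glcm_contrast(flat)                        # 0.0
_, c_stripes = glcm_contrast(stripes)                  # 49.0
```

A textured region such as a nest yields a high contrast value, while the smooth tower surface yields a low one, which is what the rule-based fault decision exploits.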
In contrast to \cite{Xu2017HSV_GLCM_tower}, the research in \cite{Lu2018CF-CC_tower} removed the detection stage and directly located the bird's nest.
The nest suspected region can be identified by using local adaptive binarization and template convolution.
Then, a cascade classifier was established to determine the correct nest region.
This cascade classifier was constructed from 3 SVMs: SVM-1 with the trunk feature, SVM-2 with projection features, and SVM-3 with the improved burr feature.
In the comparison experiments, 2972 and 200 images were used for training and evaluation.
Results indicated an obvious improvement of the proposed method (97.33\% accuracy) over the HSV-GLCM method \cite{Xu2017HSV_GLCM_tower} mentioned above (61.85\%).
\subsubsection{Broken strand of conductor}
A few attempts have been made to detect broken strands of conductors, and most of them follow a similar framework: extract line segments and then determine the abnormal segments using hand-crafted rules.
Liu et al. \cite{Liu2012IFR_line} dealt with broken strand of the transmission conductor based on Improved Freeman Rule (IFR).
The Canny edge detector was applied to extract segments in the input image.
According to the extracted segments and the characteristics of the end points, the conductor could be rotated to horizontal.
Then, the IFR was used to determine whether a broken strand existed in the detected conductor.
The proposed method was tested using 100 images and all broken strand faults were correctly recognized.
Yin et al. \cite{Yin2016LED-HT_line} applied the Laplacian Edge Detector (LED) combined with the Hough Transform (HT) to extract lines.
Based on the extracted lines, the region of the conductors could be located by employing Region Growing.
As for fault identification, hand-crafted rules were designed on the basis of the width change of the detected conductor.
Results on a few images illustrated the performance of the proposed method.
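To make the line-extraction step concrete, the following minimal sketch (an assumption for illustration, not code from the cited works) implements the Hough transform voting scheme on a binary edge map and recovers a synthetic horizontal conductor:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for all (rho, theta)
    lines passing through it; peaks in the accumulator are detected lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        # line model: rho = x*cos(theta) + y*sin(theta)
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

edges = np.zeros((32, 32), dtype=bool)
edges[10, :] = True                                      # a horizontal "conductor"
acc, rhos, thetas = hough_lines(edges)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
best_rho = int(rhos[r_idx])                              # 10
best_theta_deg = int(round(np.rad2deg(thetas[t_idx])))   # 90
```

The accumulator peak at $\theta = 90^{\circ}$, $\rho = 10$ corresponds to the horizontal line $y = 10$, mirroring how the cited studies extract conductor candidates before applying their hand-crafted fault rules.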
Wang et al. \cite{Wang2015CT} employed Cross Template to detect vertical and horizontal lines.
The extracted lines were grouped based on the Gestalt perception theory.
According to the different perceptual contours, the fittings such as dampers and spacers installed at the conductor can be filtered out.
Finally, similar to the research in \cite{Yin2016LED-HT_line}, hand-crafted rules were established to recognize the broken strand.
Besides the width change, the proposed rules contained more parameters, such as the absolute gray difference and relative gray difference.
The presented method was evaluated on several images and its performance was demonstrated by some visualized results.
Unlike the aforementioned studies, Zhang et al. \cite{zhang2019GVN_line} established a monitoring system for transmission conductors based on the texture structure of the conductor surface.
In the image analysis algorithm of the monitoring system, the aerial image is first converted to gray scale by using Gray-scale Variance Normalization (GVN).
Then, the conductor can be extracted based on adaptive threshold segmentation with morphological processing.
The gray value distribution of the conductor region can be represented based on the Square Wave Transformation (SWT).
Owing to the structure of the conductor, a broken strand breaks the repeated helical pattern of a normal conductor.
Thus, the broken strand can be identified by analyzing the Z-shaped waveform from the SWT.
The proposed method achieved 90.5\% accuracy in 400 aerial images with simple background, and 85.5\% in 400 images with complex background.
\subsubsection{Foreign body of conductor}
The procedure of foreign body detection is similar to the bird's nest detection task.
Mao et al. \cite{mao2019HOG_SVM_line} detected the foreign body of the conductor based on HOG and SVM.
Firstly, the aerial image was converted to gray scale and processed by a median filter for further analysis.
Then, the HOG feature was extracted and classified by Directed Acyclic Graph (DAG) multi-classifiers, defined as DAG-SVM.
The DAG-SVM consisted of three SVM classifiers responsible for different categories.
Classifier 1 distinguished unusual images from non-foreign-body images and fed them to the next two classifiers, respectively.
Classifier 2 was utilized to determine whether an unusual image contained a foreign body or a broken strand.
The remaining classifier categorized the non-foreign-body images into two classes: broken strand and normal.
Finally, the condition of the transmission conductor could be obtained.
In the experiments, 335 images were utilized with 10-fold cross-validation for training and testing.
The recognition accuracy of 84.3\% illustrated the effectiveness of the proposed method.
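The descriptor behind this pipeline can be illustrated with a simplified, single-cell HOG-style sketch (the cited work uses the full block-normalized HOG plus the DAG of SVMs; this reduced version is an assumption for clarity):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Simplified HOG-style descriptor: one histogram of gradient
    orientations over the whole patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180         # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-8)                  # L1-normalized feature

vert_grad = np.tile(np.arange(8.0), (8, 1))            # intensity varies along x
horiz_grad = vert_grad.T                               # intensity varies along y
h_v = orientation_histogram(vert_grad)                 # peak in the 0-degree bin
h_h = orientation_histogram(horiz_grad)                # peak in the 90-degree bin
```

Such orientation histograms distinguish the regular, directional texture of a conductor from the irregular gradients of a foreign body, and the resulting feature vectors are what the SVMs in the DAG classify.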
Tang et al. \cite{Wang2018SSD_line} presented a deep-learning based method for foreign body detection.
The object detection framework SSD, which employed VGG (a CNN for classification) as the basic network, was applied to detect kites, balloons and bird's nests in the power lines.
Each type of foreign body had 1500 training samples and 500 testing samples with 300$\times$300 resolution.
The authors discussed the parameter settings of the detection method; results showed that a box ratio of \{1/2, 2\} and a training batch size of 4 achieved better performance, with 85.2\% mAP.
Comparative experiments were also conducted with a shallow-feature-based detection framework, DPM, which achieved 54.8\% mAP.
This demonstrated the powerful capability of the proposed deep-learning-based method.
\subsubsection{Vegetation encroachment of conductor}
The visible image analysis of vegetation encroachment is quite different from the other inspection items: it must be combined with distance measurement instead of object detection or classification alone.
The commonly used approach for distance measurement in optical aerial inspection is binocular stereo vision.
To determine vegetation encroachment, the vegetation (trees) and transmission conductors should first be located, manually or automatically, and then the distance between them can be estimated.
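For reference, the depth computation underlying these stereo approaches follows the standard pinhole relation (a textbook formula, not specific to the cited works):
\begin{equation}
Z = \frac{fB}{d},
\end{equation}
where $Z$ is the depth of a matched point, $f$ the focal length, $B$ the baseline between the two views, and $d$ the disparity between the matched pixels. Once the 3D positions of the conductor and the vegetation are recovered, the clearance between them follows directly.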
Mills et al. \cite{Mills2010PCNN_line} segmented the crown of trees in the multi-spectral image by using Pulse-Coupled Neural Network (PCNN) and morphological operation.
The horizontal distance between the conductor and trees along with the height of trees and towers can be estimated by stereo vision.
The stereo image was obtained from subsequent frames of a single camera, which has the same effect as a binocular camera.
To obtain depth information from the stereo image, a stereo matching algorithm was proposed based on dynamic programming.
In the experiment, the detection rate of trees reached 96\% on 10 images containing a total of 129 trees.
The average error in the estimation of the tree-line distance was 0.7 m.
The height estimation of trees and towers attained 1.8 m and 1.1 m average error, respectively.
Qayyum et al. \cite{Qayyum2018Deep_stereo_line} also applied the stereo image for monitoring vulnerable zones near transmission conductors.
However, automatic detection of the trees was not the objective of this research; the authors paid more attention to height estimation based on stereo vision.
For obtaining the stereo image, a binocular camera was installed on a fixed-wing UAV.
In order to calculate the height of objects proximal to transmission conductors, they presented an 8-layer CNN for Stereo Matching (CNN-SM).
The experiment was implemented in a 500 kV power corridor which comprised 20 towers.
The proposed method was compared with existing algorithms such as dynamic programming and graph cut, and achieved a higher accuracy of 90\%.
\subsubsection{Broken of fitting}
A few examples can be found on the detection of broken fittings.
Song et al. \cite{Song2015CED-HT_fitting} applied the Canny edge detector combined with the Hough transform to extract the edges of the conductor.
Next, a scanning window was established along the direction of the conductor.
Then, the candidate regions of spacers could be recognized by finding the minimum white area among all the windows that slid along the conductor.
Finally, hand-crafted rules were designed based on connected-component calculation to identify whether the detected spacer was broken.
If the number of connected components was larger than 1, the spacer was recognized as broken.
As a result, a sequence of visualized images of the algorithm procedure illustrated the effectiveness of the proposed method.
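The connected-component rule above can be sketched as follows (a minimal 4-connectivity labeling in plain Python, assuming a binary spacer mask as input; the cited work's exact segmentation is not reproduced):

```python
import numpy as np
from collections import deque

def count_components(mask):
    """Count 4-connected foreground components in a binary mask."""
    seen = np.zeros_like(mask, dtype=bool)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        n += 1                                         # found a new component
        seen[sy, sx] = True
        q = deque([(sy, sx)])
        while q:                                       # flood-fill it
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return n

def is_broken(spacer_mask):
    """The rule from the paper: more than one component means broken."""
    return count_components(spacer_mask) > 1

intact = np.ones((3, 6), dtype=bool)                   # one solid blob
broken = intact.copy()
broken[:, 2:4] = False                                 # a gap splits it in two
flags = (is_broken(intact), is_broken(broken))         # (False, True)
```

An intact spacer forms a single blob, while a fracture splits the mask into two or more components, triggering the fault decision.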
In contrast to \cite{Song2015CED-HT_fitting}, Tang et al. \cite{Tang2018Fst_fitting} treated broken fitting detection as a conventional detection task.
They employed Faster R-CNN to detect broken dampers and other normal fittings.
Inspection images with 5 categories were prepared for training and validation including: two types of the spacer, normal damper, broken damper and bird's nest.
There were 1000 training samples and 500 testing samples for each category in the experiments.
The authors discussed the performance of the proposed method under different situations.
A recall of 83.4\% demonstrated that the ResNet basic network with a 9$\times$9 convolutional kernel performed better.
\subsubsection{Missing pin of fitting}
The challenging inspection task of missing-pin diagnosis has been investigated in only a few studies due to the extremely small size of the pin.
The detection of small fittings such as pins and nuts is still an open issue; thus, these studies analyzed the missing pin based on aerial images captured close to the fitting, or even cropped the fitting region from the original image manually.
Fu et al. \cite{Fu2017Haar_fitting} introduced a hierarchical model with learning algorithm to identify the missing pin.
According to the And-or Graph (AoG), the fitting can be represented by the combination of several parts.
For example, the fastener can be divided into two parts: pin and nut.
In order to detect each part of the fitting, the Haar-like feature and Adaboost classifier were applied.
For missing-pin identification, the detected fitting region was processed with the LSD and the Hough transform to extract segments and circles, respectively.
Then, the missing-pin fault could be identified based on the distance constraint between the center of the circle and the segment of the pin.
This method was tested using 42 images of fitting regions; 5 images were considered to have pins, while only one of them was correct.
Wang et al. \cite{Wang2018CNN_fitting} proposed a CNN based method for missing pin diagnosis.
The fitting region was located by using Aggregate Channel Features and Adaboost classifier.
Then, an 8-layer convolutional neural network was established to extract deep features of the fitting region and classify it into three categories: normal fitting, fitting with a missing pin, and background.
The diagnosis method was trained on 1900 images, evaluated on 752 images, and achieved 96.54\% recall.
However, the faulty images were fitting regions already cropped by hand, which means that no joint experiment performing detection and diagnosis in sequence was conducted in this research.
\subsubsection{Remarks}
\mbox{Table \ref{tab:summary_diag}} provides valuable information on the research in power line fault diagnosis, including the fault category, the proposed method, the approach used in the component detection stage, the approach used in the fault identification stage, the main image features, a brief description of the data, and the method performance.
The fault diagnosis of power lines is a rarely touched area in the literature compared with component detection.
The problems in this area are similar to component detection to some extent, but there are still several nuances that should be considered.
Current research mainly treats fault diagnosis as an object detection task (e.g., missing-cap of insulator) or a classification task (e.g., corrosion of tower).
In reality, one fault has various forms, which leads to difficulties in robust algorithm design.
It is worth trying to identify the fault from the perspective of abnormal image detection.
There is a primary attempt in \cite{sampedro2019UpNet_CNN_ins} to classify abnormal images.
As for fault types, the missing-cap of the insulator has received most of the attention, while works on other faults are limited.
In addition, we find that in most cases one paper focuses on only one fault of a specific component.
With the wide application of aerial inspection and the accumulation of inspection data, more fault types need to be considered.
As for image features, shape and deep features are most frequently used in the existing literature.
Since fault data are relatively rare, hand-crafted extractors for fusion features deserve further attention.
Moreover, using multi-modal learning to leverage the rich information of text data is another good choice, which has been preliminarily tried in \cite{maeda2018CMDELM-LRF}.
To identify the fault, most studies need to detect the component region first.
However, few researchers consider how to achieve fault identification when the component is missed by the detector.
Fault diagnosis without the stage of component detection deserves further investigation.
In addition, we also find that most existing works are evaluated in the laboratory.
More real-world experimental results from practical aerial inspection of power lines would be welcome in this research area.
\subsection{Main limitations of current researches}
Although power line inspection has developed rapidly in recent years, there are still two main limitations in the existing literature that need further attention.
The first is the insufficient research on some power line components and their faults.
As can be seen from the previous review, most of the research focuses on the insulator and its faults, while other components have received only rare attention.
The reasons for this phenomenon are as follows:
Among the four categories of crucial components, the insulator has the fewest variants in images due to its standardized shape, which makes it easier for algorithms to generalize in real-world applications.
Furthermore, the insulator has a moderate size in aerial images, while the tower is too large, the conductor is overly thin, and the fitting is excessively small.
This factor makes it convenient for UAVs to capture more images of insulators.
In addition, the moderate size and standardized shape also reduce the difficulty of method design.
Finally, components apart from insulators have many variants, subcategories and scales in aerial images.
For instance, the damper, fastener and spacer all belong to fittings, and they are very different in size and shape.
The variants, insufficient data and inappropriate scale make research on other components a rarely touched area in the literature.
The second is that most methods in current works have not been tested in actual engineering.
In the laboratory, the data collected from aerial inspection are separated into a training set and a testing set, which means they are identically distributed.
But in reality, this precondition cannot be guaranteed.
Generally, the appearance of components, faults and background has many variants in real-world inspection images, and some variants are not included in the experimental data.
Moreover, the image differences between the lines in different regions are even greater.
This challenging problem places higher requirements on the robustness and generalization capability of both hand-crafted methods and learning-based methods.
Nevertheless, research on effective evaluation of the robustness and generalization of analysis methods is still limited.
Another factor which limits the practical application of inspection data analysis is the computation cost.
Massive numbers of images and videos with high pixel resolution need to be analyzed within an inspection period.
Under the situation of limited computing resources, the analysis method should achieve highly efficient computation.
However, research on accelerating analysis models for inspection data is quite rare.
In addition, the computation time of the analysis method is rarely introduced in the existing literature.
\section{Deep-learning-related analytic methods in power lines inspection}
Deep learning has been widely used in generic tasks such as car detection and face recognition, and its application in power line inspection has become a research hotspot in the past two years.
In this section, with the objective of offering an in-depth discussion of current deep-learning-related research in power line inspection, we summarize these works (some of them are briefly mentioned above) with special attention paid to their method characteristics, research issues and core ideas.
Firstly, we provide a brief introduction to some fundamental deep learning approaches for a better understanding of the deep-learning-based research in the field of power line inspection.
Then, the exploration and taxonomy of current deep-learning-related methods for the inspection of power lines are introduced from five aspects: using existing frameworks, extracting deep features, network cascading, aiming at data insufficiency, and improving methods based on domain knowledge.
Valuable information from the literature is listed in \mbox{Table \ref{tab:summary_DL}}.
Finally, we propose a basic conception of how to construct an intelligent analysis system for inspection data by using deep learning technology and several novel image processing approaches; some alternative methods are also provided for each stage of the system.
\subsection{A brief introduction to fundamental deep learning approaches}
\subsubsection{Deep convolutional neural network}
Deep convolutional neural network (DCNN) has the capability to extract high quality features and is widely used in a variety of tasks.
It has made great achievements in the field of computer vision (e.g., image classification), and outperforms other non-DCNN-based algorithms.
A typical DCNN consists of multiple layers which aim to learn representations of the input data.
Most layers of a DCNN are composed of a number of feature maps, within which each unit acts like a neuron.
There are three major types of layers in a DCNN: the convolutional layer, the pooling layer and the fully connected layer.
In the convolutional layer, units of a feature map are connected to local patches in the feature maps of the previous layer through a 2D convolutional kernel (also called a filter or weights).
The role of the pooling layer is to downsample the feature maps.
The fully connected layer provides the feature vector for classifiers (e.g., SVM or Softmax).
A typical CNN is composed of several stacked convolutional and pooling layers, followed by fully connected layers.
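The stacked structure described above can be illustrated with a minimal NumPy sketch of one convolution--ReLU--pooling stage (the input size and averaging kernel are chosen for illustration only):

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2D convolution (cross-correlation, as used in DCNNs)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling (downsampling)."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

x = np.random.default_rng(0).random((8, 8))    # toy single-channel input
k = np.ones((3, 3)) / 9.0                      # one 3x3 averaging kernel
feat = max_pool(np.maximum(conv2d(x, k), 0))   # conv -> ReLU -> pool, shape (3, 3)
```

Each such stage shrinks the spatial resolution while (in a real DCNN, with many learned kernels) increasing the number of feature maps; the final maps are flattened and passed to the fully connected layers.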
Since the appearance of AlexNet \cite{krizhevsky2012alexnet}, a lot of novel DCNN architectures have been proposed by restructuring the processing unit and designing the new block.
ZF-Net \cite{zeiler2014ZFNet} and VGG-Net \cite{simonyan2014vgg} increased the depth of the DCNN by reducing the size of the filters.
GoogleNet \cite{szegedy2015googlenet} reduced the computational cost through inception block.
In 2015, the residual block (or skip connection) was proposed in the famous ResNet \cite{He2015ResNet}.
This concept of skip connections was utilized by many succeeding DCNN architectures such as Inception-ResNet \cite{szegedy2017inc-res} and ResNeXt \cite{xie2017ResNext}.
Some researchers concentrated on lightweight DCNNs for mobile devices, such as MobileNet \cite{howard2017mobilenet}, Xception \cite{chollet2017xception}, and ShuffleNet \cite{zhang2018shufflenet}.
Recently, some attempts have been made to automatically design DCNN architectures (also known as Neural Architecture Search), such as NASNet \cite{zoph2018nasnet}, MnasNet \cite{tan2019mnasnet}, and ENAS \cite{pham2018enas}.
\subsubsection{Deep Learning Based Object Detection and Segmentation}
The object detection method based on deep learning consists of two parts: a DCNN (also defined as backbone or basic network) for feature extraction and a detecting scheme for object classification and location.
According to the detecting scheme, the DL-based detection method can be summarized into two major categories \cite{liu2018deep_OD_survey}:
(1) Two-stage detection method which needs to generate proposals of possible objects in an independent stage.
A proposal can be regarded as a bounding box, generated from the image, that may contain an object.
In a two-stage detection method, deep features are extracted from these proposals, and then classified by category-specific classifiers.
The classic and probably the most commonly used method is Faster R-CNN, introduced by Ren et al. \cite{ren2015faster-rcnn} in 2015.
Many remarkable methods have emerged in the same period such as R-FCN \cite{dai2016rfcn}, Cascade R-CNN \cite{cai2018cascade_rcnn}, and Light Head R-CNN \cite{li2017Light_head_r-cnn}.
(2) One-stage detection method which does not contain the generation of proposals.
To adapt to mobile devices that have limited storage and computational capability, the one-stage detection method removes the procedure of proposal generation and its subsequent feature processing operations (e.g., classification).
As an alternative, the method directly obtains the category and position information from preset grids of the full image with a single DCNN.
Commonly used methods are SSD \cite{liu2016ssd}, YOLO\cite{redmon2016YOLO,redmon2018yolov3}, and RetinaNet \cite{lin2017retinanet}.
Recently, some studies opened up a new direction of DL-based object detection method which is called anchor-free detection method.
These methods utilized a key-point-like approach to represent the position of objects instead of a traditional bounding box or anchor.
The popular anchor-free methods are CornerNet \cite{law2018cornernet}, ExtremeNet \cite{zhou2019ExtremeNet}, CenterNet \cite{duan2019centernet}, and FCOS \cite{tian2019fcos}.
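Whether anchor-based or anchor-free, these detectors rely on box overlap for training-label assignment, evaluation, and non-maximum suppression; the standard intersection-over-union measure can be sketched as:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

For example, boxes $(0,0,2,2)$ and $(1,1,3,3)$ overlap in a unit square, giving an IoU of $1/7$; a detection is typically counted as correct (e.g., in the AP metrics quoted throughout this survey) when its IoU with the ground truth exceeds a threshold such as 0.5.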
In addition to the aforementioned object detection methods, segmentation methods can also detect objects in an image.
In segmentation methods, each pixel is classified with the category of its enclosing object.
There are some commonly used methods that have become widely known standards such as FCN \cite{long2015FCN}, U-Net \cite{ronneberger2015UNet}, and SegNet \cite{badrinarayanan2017segnet}.
Recently, some attempts (e.g., Mask R-CNN \cite{he2017mrcnn} and HTC \cite{chen2019HTC}) have been made to combine object detection and segmentation, which is called instance segmentation.
These methods achieve label separation for different instances of the same category.
In other words, they can achieve pixel-wise classification in each bounding box that contains the object.
\subsubsection{Generative Adversarial Networks}
Generative Adversarial Networks (GANs), proposed by Goodfellow et al. \cite{goodfellow2014GAN} in 2014, have attracted widespread attention, especially in the computer vision field.
GANs consist of two networks trained in competition with each other: a network called the generator is utilized to generate synthetic data samples, while another network called the discriminator is used to distinguish real data samples from synthesized ones.
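This competition is formalized in the original paper \cite{goodfellow2014GAN} as a two-player minimax game:
\begin{equation}
\min_{G}\max_{D}\; \mathbb{E}_{x\sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z\sim p_{z}}\left[\log\left(1-D(G(z))\right)\right],
\end{equation}
where the generator $G$ maps a noise vector $z$ to a synthetic sample and the discriminator $D$ outputs the probability that its input is real.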
Due to the capacity to generate new data from the learned statistical distribution of the training data, GANs have achieved state-of-the-art performance in various vision applications including image synthesis, segmentation, style transfer, and image super-resolution.
Since the original GANs, many variants in different fields have been proposed.
Some studies focus on generating high-quality samples such as CGAN\cite{mirza2014CGAN}, DCGAN \cite{radford2015DCGAN}, and WGAN\cite{arjovsky2017WGAN}.
A few attempts have been made at image style transfer, i.e., converting images from one style to another, such as day to night.
The typical researches include Pix2Pix \cite{isola2017Pix2Pix}, and CycleGAN \cite{zhu2017cycleGAN}.
GANs are also widely used in image restoration; well-known works are DeblurGAN \cite{kupyn2018DeblurGAN} and SRGAN \cite{ledig2017SRGAN}.
\subsection{An exploration of current deep-learning-based approaches for the inspection of power line components}
\begin{table*}[ht]
\caption{Summary of the related work of deep-learning-based approaches for the inspection of power line components.}
\label{tab:summary_DL}
\centering
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{m{1.8cm} l l c c m{5cm}}
\hline
\textbf{Characteristic} & \textbf{~~~Inspection item} & \textbf{~~~~~~Method} & \textbf{Data size}& \textbf{Pixel size} & \textbf{~~~~~~~~~~~~~~~~~~Core idea} \\
\hline
\multirow{9}{1.8cm}{Existing frameworks} & Missing-cap of insulator & Faster R-CNN \cite{liu2018frcnn_ins} & 4500 & 500$\times$500 & Utilize Faster R-CNN to detect insulator and its fault \\
\cline{2-6}
& Tower detection & Faster R-CNN \cite{bian2019Fst_tower} & 1300 & 640$\times$480 & Utilize Faster R-CNN to detect tower \\
\cline{2-6}
& Conductor detection & FCNs \cite{hui2018Fst_FCNs} & 600 & 1280$\times$720 & Utilize FCNs to detect power line \\
\cline{2-6}
& Fitting detection & Faster R-CNN \cite{wang2017Fst_for_fitting} & 6000 & 500$\times$500 & Utilize Faster R-CNN to detect fittings\\
\cline{2-6}
& Insulator detection & YOLO \cite{wang2018YOLO_for_ins} & 1000 & 448$\times$448 & Utilize YOLO to detect insulator \\
\cline{2-6}
& Tower detection & YOLOv3 \cite{chen2019YOLO_tower} & 13429 & 352$\times$352 & Utilize YOLO to detect tower \\
\cline{2-6}
& Insulator detection & SSD \cite{xu2018SSD_for_ins} & 2500 & 512$\times$512 & Utilize SSD to detect insulator \\
\cline{2-6}
& Conductor detection & cGAN \cite{chang2018cGAN_for_line} & 5500 & 256$\times$256 & Utilize cGAN to detect conductor \\
\cline{2-6}
& Insulator detection & cGAN \cite{chang2018cGAN_for_ins} & 3000 & 256$\times$256 & Utilize cGAN to detect insulator \\
\hline
\multirow{5}{1.8cm}{Extracting deep features} & Surface-fault of insulator & M-PDF \cite{zhao2016M-PDF_ins} & 1000 & 227$\times$227 & Extract features by CNN in multiple image patches\\
\cline{2-6}
& Corrosion of tower & CMDELM-LRF \cite{maeda2018CMDELM-LRF} & 3017 & 50$\times$50 & Extract features by CNN in image and text\\
\cline{2-6}
& Missing-cap of insulator & DCNN \cite{yang2019DCNN_ins} & 2951 & 256$\times$256 & Extract features by CNN in sub-windows of aerial image\\
\hline
\multirow{5}{1.8cm}{Network cascading} & Missing-cap of insulator & \tabincell{l}{Faster R-CNN \\+ U-net\cite{ling2018Fst-Unet}} & 620 & 1024$\times$1024 & Utilize Faster R-CNN to detect insulator and U-net to detect the fault \\
\cline{2-6}
& Missing-cap of insulator & \tabincell{l}{Faster R-CNN \\ + FCN \cite{gao2017Fst_FCN_for_ins}} & 3650 & 1215$\times$1048 & Utilize Faster R-CNN to detect insulator and FCN to filter out background \\
\cline{2-6}
& Missing-cap of insulator & ILN + DDN \cite{tao2018ILN_DDN_ins} & 1956 & ---- & Propose an Insulator localizer network and a Defect detector network\\
\hline
\multirow{7}{1.8cm}{Aiming at data insufficiency} & Insulator detection & \tabincell{l}{Synthetic method \\ + cGAN \cite{chang2018Synthetic_ins}} & 265 & 512$\times$512 & Propose a synthetic method to synthesize training samples \\
\cline{2-6}
& Missing-cap of insulator & PPM \cite{tian2018Parallel_ins} & ---- & ---- & Introduce a preprocessed parallel method by data augmentation\\
\cline{2-6}
& Surface fault of insulator & SPPNet-TL \cite{bai2018SPPNet-TL_ins} & 278 & 227$\times$227 & Training on a small dataset based on transfer learning\\
\cline{2-6}
& Insulator detection & SSD + TS-FT \cite{miao2019TS-FT_ins} & 8005 & 300$\times$300 & Introduce a two-stage fine-tune strategy for training on the small dataset \\
\cline{2-6}
& Conductor detection & WSL-CNN \cite{lee2017weakly_line} & 8400 & 512$\times$512 & Apply weakly supervised learning to train the conductor detection model\\
\hline
\multirow{5}{1.8cm}{Improving by domain knowledge} & Missing-cap of insulator & EL-MLP \cite{jiang2019EL-MLP} & 485 & 300$\times$300 & Aggregate deep learning models in perception levels based on ensemble learning \\
\cline{2-6}
& Missing-cap of insulator & SO-FCN \cite{chen2019SOFCN} & 300 & 400$\times$600 & Introduce a mathematical morphology operation to optimize the detection procedure \\
\cline{2-6}
& Missing-cap of insulator & Up-Net + CNN \cite{sampedro2019UpNet_CNN_ins} & 2800 & 256$\times$256 & Propose a diagnosis strategy for missing-cap detection based on semantic segmentation\\
\cline{2-6}
& External force damage & \tabincell{l}{Modified\\Faster R-CNN \cite{xiang2018Modified_Fst_ins}} & 2199 & 600$\times$1000 & Improve Faster R-CNN to detect engineering vehicles based on their characteristics \\
\hline
\end{tabular}
}
\end{table*}
\subsubsection{Direct use of existing frameworks}
Faster R-CNN is a commonly used framework in power line inspection for insulator fault detection \cite{liu2018frcnn_ins}, tower detection \cite{bian2019Fst_tower,hui2018Fst_FCNs}, and fitting detection \cite{wang2017Fst_for_fitting}.
Liu et al. \cite{liu2018frcnn_ins} applied Faster R-CNN to detect the insulator and the missing cap fault separately.
They tested the method with insulator images at three different voltage levels and prepared 1000 training samples and 500 testing samples for each level.
For the diagnosis of the missing-cap fault, only 120 images (80 for training) were utilized for evaluation.
In the experiment, all the images were resized to 500$\times$500 pixel resolution and data augmentation including flipping and cropping was applied to extend the dataset.
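A flip-and-crop augmentation of this kind can be sketched as follows (the crop size and fixed seed are illustrative assumptions, not the parameters used in \cite{liu2018frcnn_ins}):

```python
import numpy as np

def flip_and_crop(img, crop=400, seed=0):
    """Return the original image, its horizontal flip, and one random crop."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    return [img, img[:, ::-1], img[y:y + crop, x:x + crop]]

samples = flip_and_crop(np.zeros((500, 500, 3), dtype=np.uint8))
shapes = [s.shape for s in samples]   # [(500,500,3), (500,500,3), (400,400,3)]
```

Applying such transformations multiplies the number of training samples without new flights, which is why augmentation recurs throughout the deep-learning studies in this section.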
Bian et al. \cite{bian2019Fst_tower} used Faster R-CNN for tower detection.
A total of 1300 aerial images were prepared for the experiments, and 10-fold cross-validation was applied to find the best model.
Hui et al. \cite{hui2018Fst_FCNs} also employed Faster R-CNN to locate towers.
Furthermore, the conductors were extracted by using FCNs.
A dataset with 1280 tower images (1000 for training) and 600 conductor images (400 for training) was used in the experiments.
Wang et al. \cite{wang2017Fst_for_fitting} applied Faster R-CNN to detect fittings including spacers, dampers and arcing rings.
For each type of fitting, 1500 training samples and 500 testing samples were prepared, and all images were resized to 500$\times$500 pixel resolution.
In order to achieve high computation speed, one-stage detection frameworks such as YOLO \cite{wang2018YOLO_for_ins,chen2019YOLO_tower} and SSD \cite{xu2018SSD_for_ins} were applied in some studies.
Wang et al. \cite{wang2018YOLO_for_ins} employed YOLO to detect insulators in grayscale images.
The data, comprising 1000 images (800 for training), were collected in the laboratory and on outdoor power lines.
All the images were resized to 448$\times$448 to match the input size of the network.
Chen et al. \cite{chen2019YOLO_tower} utilized the improved YOLO (also denoted as YOLOv3) to detect towers.
On account of the data insufficiency, they constructed a dataset by generating simulated images.
A total of 13,429 images were used in the experiment, of which 11,951 were for training and 1478 for testing; the pixel resolution of the network input was 352$\times$352.
The authors discussed that the pixel size of the input image was an important factor influencing the method performance.
Xu et al. \cite{xu2018SSD_for_ins} proposed an SSD-based method for insulator detection.
A total of 2000 images were augmented by rotation and extended to 6000 (500 for validation).
In the experiments, the pixel resolution of 512$\times$512 showed higher accuracy compared to 300$\times$300, while both met the requirement of real-time detection.
A few studies applied an unconventional detection framework (cGAN) to detect power line components \cite{chang2018cGAN_for_line,chang2018cGAN_for_ins}.
Chang et al. \cite{chang2018cGAN_for_line} recognized power conductors by using cGAN.
The aerial image was fed into the cGAN, and a mask image that only contained the conductor was generated.
The pixel resolutions of the input and the output were 256$\times$256 and 128$\times$128, respectively.
Three datasets were prepared for the experiments: a training set with 5000 images, a simple testing set with 500 images and a difficult testing set with 500 images.
The authors also employed the cGAN for insulator detection \cite{chang2018cGAN_for_ins}.
A two-stage training strategy was proposed to obtain a more accurate cGAN model.
In training stage 1, the model was trained by using the position samples with coarse annotations.
Then, the same model was further trained by utilizing the segmentation samples with fine annotations.
A total of 3000 images collected from the Internet were used for evaluation.
The input and output of the cGAN had the same pixel resolution of 256$\times$256.
\subsubsection{Extracting deep features for classification or detection}
A few examples can be found on the use of deep feature extraction for classification \cite{zhao2016M-PDF_ins,maeda2018CMDELM-LRF} or detection \cite{yang2019DCNN_ins}.
Zhao et al. \cite{zhao2016M-PDF_ins} classified the condition of insulators by means of AlexNet.
The deep features were extracted by an AlexNet pre-trained on the ImageNet dataset without further training.
Then, the extracted deep features were fed to an SVM for final classification.
A total of 1000 images with 256$\times$256 pixel resolution were used for training (70\%) and testing the proposed method.
For the purpose of deterioration level estimation of towers, Maeda et al. \cite{maeda2018CMDELM-LRF} extracted visual features by using LRF, which performs convolution and pooling similar to a CNN.
Different from traditional image-based research, they combined these with a text feature extracted by a hidden layer.
The two kinds of features were further extracted and classified by DELM.
In the experiment, 3107 images with 50$\times$50 pixel resolution were used, and 5-fold validation was applied as the verification method.
Yang et al. \cite{yang2019DCNN_ins} established a 9-layer CNN to extract deep features from sub-windows of the original image for insulator fault detection.
For example, an aerial image with 1280$\times$720 pixel resolution can be divided into 15 small images by the adaptive sliding window.
Then, each sub-window can be classified by the CNN into two classes: normal and abnormal.
The CNN model was trained on 2610 sub-windows obtained from 205 raw images and tested on 341 real-world images.
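The adaptive sliding-window mechanism itself is not detailed in \cite{yang2019DCNN_ins}; as an illustration only, a uniform tiling that reproduces the 15-sub-window example above can be sketched as follows (the 256$\times$240 window size is our assumption, not a value from the paper):

```python
def tile_image(width, height, win_w, win_h):
    """Divide an image plane into sub-windows (uniform tiling).

    Returns (x, y, w, h) boxes covering the whole image; windows at the
    right/bottom border are clipped.  The adaptive window of the original
    paper presumably chooses win_w/win_h so the grid fits exactly.
    """
    boxes = []
    for y in range(0, height, win_h):
        for x in range(0, width, win_w):
            boxes.append((x, y, min(win_w, width - x), min(win_h, height - y)))
    return boxes

# A 1280x720 aerial image tiled with (assumed) 256x240 windows yields
# 5 x 3 = 15 sub-windows, matching the example above; each sub-window
# would then be classified as normal or abnormal by the CNN.
windows = tile_image(1280, 720, 256, 240)
```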
\subsubsection{Network cascading for fault diagnosis}
The studies presented in the following have discussed the structure of network cascading and have concentrated on fault diagnosis.
The network cascading was generally composed of two sequential deep learning networks, such as the combination of Faster R-CNN and U-net \cite{ling2018Fst-Unet}, the combination of Faster R-CNN and FCN \cite{gao2017Fst_FCN_for_ins}, and the combination of the Insulator localizer network (ILN) and Defect detector network (DDN) \cite{tao2018ILN_DDN_ins}.
The procedure of network cascading greatly narrows the scope of fault analysis: the former network is responsible for component detection, and the latter identifies the fault on the located component region.
Ling et al. \cite{ling2018Fst-Unet} detected insulators by using Faster R-CNN.
Then, the insulator region was cropped from the original image and fed into the U-net for locating the missing cap.
In the experiment, 620 aerial images of 1024$\times$1024 pixel resolution were utilized for Faster R-CNN with 3-fold validation.
For training and testing of the U-net, 220 insulator images containing missing-cap faults were cropped from the original images, and 4-fold validation was performed.
The pixel resolution of the cropped images varied depending on the size of the insulator.
Gao et al. \cite{gao2017Fst_FCN_for_ins} also applied Faster R-CNN to detect insulators.
However, instead of detecting the fault by U-net, they employed FCN to segment the insulator from the detected region.
Then, each cap of the insulator can be recognized for further fault identification.
A total of 3000 aerial images with 1215$\times$1048 pixel resolution were utilized to train Faster R-CNN, and 100 images were used for evaluation.
Due to the labeling cost, only 450 and 100 insulator images with 500$\times$500 pixel resolution were prepared for FCN training and testing, respectively.
Tao et al. \cite{tao2018ILN_DDN_ins} proposed the ILN and DDN to detect insulators and their faults based on different backbones (VGG and ResNet, respectively).
The ILN first detects all the insulators in the aerial image, and then the detected regions are cropped and fed into the DDN for locating the missing-cap fault.
For the experiment, a total of 900 normal images and 60 faulty images were acquired from the UAV.
Due to data insufficiency, an image synthesis algorithm and a data augmentation process were applied.
The image synthesis algorithm employed U-net to segment the insulator and then pasted it into other images with various backgrounds.
The data augmentation contained 7 image processing operations such as rotation, shift, and shear.
Eventually, 1956 images (1186 for training) were prepared for the ILN, and 1056 images (792 for training) with the missing-cap fault were prepared for the DDN.
\subsubsection{Approaches to solving data insufficiency}
Data insufficiency is a challenging problem in the data analysis of power line inspection.
Some attempts have been made in the previous articles, such as image synthesis (e.g., \cite{tao2018ILN_DDN_ins}) and data augmentation (e.g., \cite{liu2018frcnn_ins,xu2018SSD_for_ins}).
The following studies further investigated the problem of data insufficiency.
Chang et al. \cite{chang2018cGAN_for_ins} employed the cGAN for insulator detection.
Due to the difficulty of obtaining real-world aerial images, they proposed a method to generate synthetic images from 65 real-world insulator images.
The insulator region was overlaid on various background images with different parameters such as Gaussian noise and transparency.
The synthetic dataset included three sample categories: samples with insulators, samples without insulators and samples with pseudo targets.
The cGAN model was trained by using 8000 synthetic images and tested on 200 real-world insulator images.
Both the input and output of the model had the same pixel resolution of 512$\times$512.
Tian et al. \cite{tian2018Parallel_ins} proposed a parallel method to address the insufficient diversity of the acquired inspection data.
The original input image was processed with different operations (e.g., rotation, mirroring, and defogging) and then concurrently fed into a cascading network for fault diagnosis.
After inputting the parallel images, parallel results are generated, and a voting decision mechanism was designed to determine the final result.
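The exact voting rule is not specified in \cite{tian2018Parallel_ins}; a minimal majority-vote sketch over the decisions of the parallel branches might look like this:

```python
from collections import Counter

def vote(branch_results):
    """Majority vote over the fault decisions of the parallel branches.

    `branch_results` is a list of class labels, one per parallel branch;
    ties fall back to the earliest-seen label (Counter preserves insertion
    order), which is an arbitrary choice made for this sketch.
    """
    return Counter(branch_results).most_common(1)[0][0]
```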
In addition to increasing the data diversity, some studies discussed the use of transfer learning.
Bai et al. \cite{bai2018SPPNet-TL_ins} determined the surface faults of insulators based on Spatial Pyramid Pooling networks (SPP-Net) with transfer learning.
In the experiment, the model was first trained on the ImageNet dataset, which contains about 1.2 million training samples.
Then, the same model was further trained (also denoted as fine-tuning) on the small dataset with insulator faults.
Miao et al. \cite{miao2019TS-FT_ins} introduced a two-stage fine-tuning strategy in the SSD network to detect insulators.
Two kinds of insulator datasets were prepared in the proposed method: a basic dataset and a specific dataset.
The former contained aerial images with various types of insulators in different backgrounds and was large in quantity.
The latter comprised images with a specific insulator in a specific background (e.g., porcelain insulators in a forest background) and contained few images.
The implementation of fine-tuning stage 1 was similar to \cite{bai2018SPPNet-TL_ins}.
However, instead of the ImageNet dataset and the small insulator dataset mentioned in \cite{bai2018SPPNet-TL_ins}, they used the COCO dataset and the basic dataset.
In fine-tuning stage 2, the detection model was further trained by using the specific dataset.
Furthermore, the specific dataset can be replaced according to different engineering applications.
The experiments illustrated the enhanced performance of the proposed strategy compared to traditional fine-tuning.
Recently, a technology called weakly supervised learning was applied to combat data insufficiency, which opens a new research issue in inspection image analysis.
Lee et al. \cite{lee2017weakly_line} segmented power conductors at pixel level by using data with image-level annotations.
A sliding window combined with a CNN was utilized to classify each sub-window of an aerial image into two image-level categories: conductor and background.
If the sub-window was classified as conductor, bilinear interpolation was applied to up-sample this sub-window to obtain the area of the conductors.
In the experiment, 4000 images with 128$\times$128 pixels (the size of the sub-window) and 200 images with 512$\times$512 pixels were used to train and test the proposed method.
Note that a real-world image can be separated into several sub-images as training samples.
Results with an 81.82\% recall rate illustrated the effectiveness of this weakly supervised learning method.
\subsubsection{Improving deep learning methods based on domain knowledge of power line inspection}
The detection and diagnosis tasks of power line components have some differences compared to common vision tasks (e.g., some faults need to be identified by two-stage object detection).
These unique characteristics can also be denoted as domain knowledge of power line inspection.
In recent years, a few attempts have been made to improve existing deep learning methods based on this domain knowledge, making them more suitable for the data analysis of power line inspection.
Jiang et al. \cite{jiang2019EL-MLP} concentrated on the detection procedure of insulator faults.
The traditional fault diagnosis algorithm is usually a two-stage object detection procedure, which first detects the component and then detects the fault on the component region.
The authors pointed out that the performance of the traditional procedure depends on the effectiveness of component detection; for example, once the component is missed in detection, the fault identification cannot be achieved.
Therefore, they improved the procedure and proposed a fault diagnosis method based on ensemble learning with multi-level perception.
They applied SSD to detect the missing cap in three different input images: the original aerial image, a multi-insulator image and a single-insulator image.
Then, the final result was filtered by using an improved ensemble learning method.
In the experiment, the improved procedure showed higher accuracy (92.3\%) compared to the traditional procedure (89.1\%), which verified the effectiveness of the proposed method.
Similar to \cite{jiang2019EL-MLP}, Chen et al. \cite{chen2019SOFCN} also discussed the improvement of the fault diagnosis procedure.
A fault detection method for insulators was proposed based on the Second-order Fully Convolutional Network (SO-FCN).
They inserted an image filtering operation into the traditional two-stage detection procedure.
The improved procedure consists of three main steps: the first-order FCN is applied to obtain the initial segmentation result of the insulator region, morphological reconstruction filtering is performed to remove false identifications, and the second-order FCN is employed to detect the missing-cap fault.
Recently, Sampedro et al. \cite{sampedro2019UpNet_CNN_ins} introduced a novel strategy for missing-cap detection, which transferred the object detection problem into a semantic segmentation problem.
The insulator is composed of two elements, caps and connectors, which are tightly interlocked.
The authors segmented the caps and connectors from an insulator string and generated a mask image in which the pixels belonging to caps were changed to green and the regions of connectors were changed to red.
In this mask image, the detection of a missing cap was transferred to detecting the absent green region.
Moreover, a large number of fault samples can be synthetically produced by randomly removing green regions in the mask image.
In the experiments, a total of 2400 training samples were generated from 160 original images.
In addition to the optimization of the fault diagnosis scheme, Xiang et al. \cite{xiang2018Modified_Fst_ins} improved the deep learning network itself.
They proposed a modified Faster R-CNN to detect external force damage (e.g., engineering vehicles) of power lines.
According to the characteristics of engineering vehicle images, for example, the object size, object shape and background, the authors modified the Faster R-CNN structure in the feature extraction and classification parts.
In feature extraction, a shallower convolutional neural network was utilized for extracting high-resolution features.
In feature classification, one convolutional layer was added after the Region of Interest (RoI) pooling layer in order to learn region-wise features suitable for the RoI.
These improvements enhanced the ability of the detection network, and the advantage of the proposed method (89.93\%) was verified compared to the traditional Faster R-CNN (89.12\%).
\subsubsection{Remarks}
\mbox{Table \ref{tab:summary_DL}} provides valuable information on deep-learning-based research in inspection data analysis, including the literature characteristic, inspection item, method, size of total data, pixel size of the images, and the core idea of the research.
Although a number of works have utilized deep learning methods to analyze inspection data in the past two years, research in this area is still in its early stages.
These works mainly applied existing deep learning frameworks (e.g., Faster R-CNN, SSD and YOLO) to a specific inspection item.
More attention should be paid to the improvement of deep learning methods for inspection data analysis instead of direct utilization.
Some primary attempts have been made in this area, and several issues arise as follows:
For deep feature extraction, deciding which data the features should be extracted from is an important question.
Text with rich information on power line inspection deserves further attention.
For network cascading, it is worth studying how to solve the coupling problem between the object detection stage and the fault identification stage, especially the situation in which the fault identification cannot be completed when the object detection fails.
A meticulously designed procedure may be helpful when the object region is missed or wrongly detected.
Regarding data insufficiency, as can be seen in \mbox{Table \ref{tab:summary_diag}}, a majority of studies used hundreds or thousands of samples for experiments, which is typically not enough to train a high-performance deep learning model.
Few-shot learning is a research hotspot, and some novel methods have been proposed outside the area of power line inspection.
It is worth trying to apply these state-of-the-art methods to solve the lack of data.
To improve deep learning methods based on domain knowledge of power line inspection, the characteristics of inspection items need to be further investigated.
Not only the inspection image should be considered; the information in the whole inspection procedure is also valuable, such as the landform, date, weather, and flight records.
Multi-modal learning may be a good choice to handle such complex information.
In addition, we also find that even though the camera on the UAV can capture high-resolution images (e.g., 4000$\times$3000), the pixel size used for deep learning model training and testing is still small (e.g., 300$\times$300).
How to effectively employ deep learning methods in situations with large pixel sizes or limited computation resources is another issue that needs to be further addressed.
Research in this area would be helpful to bridge the gap between laboratory work and real-world application.
Besides, research on deep learning applications in power line inspection will be promoted if open datasets are provided.
\subsection{A basic conception of an inspection data analysis system based on deep learning}
To build an intelligent inspection analysis system, the following steps should be considered:
1) process the inspection data for storage and model training using three main approaches, namely data cleaning, data labeling and data augmentation (Section V.C.1$\sim$V.C.3);
2) design the component detection method (Section V.C.4);
3) design the fault identification method (Section V.C.5);
4) train and optimize the deep learning models in the detection and identification stages by applying cross-validation, model pruning, and model ensembling (Section V.C.6).
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{DL_framework.jpg}
\caption{Basic conception of inspection data analysis system based on deep learning}\label{fig:DL_framework}
\end{figure*}
\subsubsection{Data Cleaning}
The aerial images and videos captured from UAVs contain redundant information such as duplicate data, irrelevant data and corrupt data.
These invalid or even harmful data should be filtered out in order to guarantee the model performance and save computation resources.
To achieve this objective, one possible solution is to establish a quality evaluation framework for the inspection data.
Then, the invalid data with low quality can be eliminated.
In order to remove duplicate data, similarity comparison methods can be applied.
The commonly used approach to compare images is to extract features by descriptors and calculate the squared Euclidean distance between these features.
Some hand-crafted descriptors can be used, such as SIFT \cite{Lowe2004SIFT} and DAISY \cite{Tola2008DAISY}.
There are also deep learning based methods for similarity comparison, for instance, the Siamese Network \cite{Chopra2005Siamese} and the 2-Channel Network \cite{Zagoruyko2015Channel2}.
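As a minimal illustration of descriptor-based duplicate filtering, the sketch below compares feature vectors by squared Euclidean distance; the vectors and the threshold are placeholders, since a real pipeline would use SIFT or DAISY descriptors:

```python
def squared_euclidean(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def find_duplicates(features, threshold):
    """Return index pairs of images whose descriptor distance is below
    `threshold`; one image of each pair can then be discarded.

    `features` would hold descriptor vectors (e.g., from SIFT or DAISY);
    the threshold value is application dependent.
    """
    pairs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if squared_euclidean(features[i], features[j]) < threshold:
                pairs.append((i, j))
    return pairs
```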
For irrelevant data, we can apply an object detection model to detect the power line components.
An aerial image or video without any component is regarded as irrelevant data.
The detection method will be further discussed in a subsequent section.
Corrupt data refers to distorted images caused by UAV motion, digital compression, and noise interference.
The conventional procedure for filtering corrupt data is to extract features from the aerial image and then regress these features to a quality score. (The quality here focuses on the degree of image distortion, which should be distinguished from the quality of the inspection data.)
There are some CNN-based approaches that can be applied, such as IQA-CNN \cite{2014KangIQA-CNN}, RankIQA \cite{liu2017RankIQA}, BIECON (Blind Image Evaluator based on a Convolutional Neural Network) \cite{kim2016BIECON}, and DIQA (Deep Image Quality Assessor) \cite{kim2018DIQA}.
Once the distorted images are identified, we can remove or restore them according to the application scenario.
For example, not every aerial image in a periodic inspection needs to be analyzed, so we can remove the distorted images under this condition.
However, in an emergent post-disaster inspection, every image is important, and the distorted images should be restored by using CNN-based \cite{sun2015LCNN,nah2017DMsCNN} or GAN-based \cite{kupyn2018DeblurGAN,liu2018XGAN} image restoration approaches.
\subsubsection{Data Labeling}
In order to train and test the deep learning model, the inspection data should be labeled.
The common labeling procedure is to write the image information (e.g., pixel size, object coordinates, and object class) to a file that is independent of the image.
This file is called the annotation file, and its format includes TXT, XML, and JSON.
Data labeling can be accomplished manually or semi-automatically.
For manual labeling, two commonly used graphical image annotation tools can be applied: LabelImg \cite{LabelImg} and Labelme \cite{labelme2016}.
In LabelImg, we can click and release the left mouse button to select a region and annotate the rectangular box that contains the object.
Then, we enter the category of the object that exists in the rectangular box.
Finally, the annotation files are saved as XML files in PASCAL VOC format, which is commonly used in many datasets (e.g., ImageNet).
The operation of Labelme is similar to LabelImg, but it can achieve more labeling tasks.
Besides the rectangular box, many other shapes can be used for image annotation, including polygons, circles, lines and points.
It is worth noting that the polygon annotation provides a more detailed contour of the object, which can be used for image segmentation tasks.
The annotation files are saved as JSON files in VOC format or COCO format (for the COCO dataset).
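For illustration, a minimal VOC-style XML annotation, restricted to the core fields (real LabelImg output carries additional tags such as folder, path and difficult), can be generated with the Python standard library:

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, objects):
    """Build a minimal PASCAL VOC style annotation string.

    `objects` is a list of (name, xmin, ymin, xmax, ymax) tuples; only the
    core fields are emitted here, so this is a sketch of the format rather
    than a drop-in replacement for LabelImg.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, value in zip(("xmin", "ymin", "xmax", "ymax"),
                              (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")
```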
The procedure of semi-automatic labeling consists of two parts: automatic detection by deep learning models and adjustment by humans.
In the first part, a coarse detection model is trained by using a small part of the entire dataset (labeled manually).
Then, initial annotation files can be obtained by applying the detection model to the rest of the inspection data.
In the second part, the initial annotation files are adjusted and corrected manually.
A semi-automatic image annotation tool called Anno-Mage \cite{Anno-Mage} can make this procedure easier.
A real-time detection model should be prepared, and then the images can be detected and adjusted sequentially and interactively.
\subsubsection{Data Augmentation}
Data augmentation is a commonly used technique in deep learning for promoting the performance of the model.
The quantity and diversity of the training data can be augmented by the following approaches: image transformation, image synthesis and GAN-based image generation.
In image transformation, a training sample can be transformed into a new sample by using various image processing operations such as rotating, cropping, resizing, shifting, and adding noise.
These operations can be applied alone or in combination.
Different augmentation strategies will result in different model performance.
To this end, AutoAugment \cite{cubuk2018autoaugment} can be employed to search for the optimal strategy.
In addition, there are two implementations of image transformation: before the training and during the training.
The former transforms all the images before model training and stores them together with the real-world data.
The latter is more resource efficient, transforming the images in each iteration during model training.
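The during-training variant can be sketched as a generator that transforms a randomly chosen sample at each iteration; the toy flip/rotate operations on nested lists below stand in for a full augmentation library:

```python
import random

def hflip(img):
    """Horizontal flip of a nested-list 'image'."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a nested-list 'image' 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment_stream(images, rng):
    """On-the-fly augmentation: instead of materialising an enlarged
    dataset on disk, yield a randomly transformed copy of a randomly
    chosen sample at every training iteration."""
    ops = [hflip, rot90, lambda img: img]  # identity keeps originals in play
    while True:
        yield rng.choice(ops)(rng.choice(images))
```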
A synthetic image is generated from real-world images by combining an instance image and a background image.
The instance image represents the polygonal image area of the power line component, where the polygon is the contour of the object.
It can be obtained from polygon annotations labeled by hand, or extracted by applying object segmentation methods such as FCN \cite{long2015FCN}, U-Net \cite{ronneberger2015UNet} and Mask R-CNN \cite{he2017mrcnn}.
The background image can be captured from the aerial inspection video of the power line corridor.
By adding the instance image to the background image, a large number of high-quality synthetic images can be obtained.
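The pasting step can be sketched as follows; toy nested-list "images" are used, and a real pipeline would paste only the pixels inside the object's polygon contour and blend the edges rather than overwrite a rectangle:

```python
def paste(background, instance, x, y):
    """Synthesize a training image by pasting an instance patch onto a
    background with its top-left corner at (x, y).

    Nested lists stand in for pixel arrays; the background is copied so
    the original image is left untouched.
    """
    out = [row[:] for row in background]
    for dy, row in enumerate(instance):
        for dx, value in enumerate(row):
            out[y + dy][x + dx] = value
    return out
```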
In addition, some automatic approaches can also generate synthetic images.
For example, we can apply re-sampling methods to synthesize new samples, such as the Synthetic Minority Over-sampling Technique (SMOTE) \cite{chawla2002smote}, SamplePairing \cite{inoue2018SamplePairing}, and Mixup \cite{zhang2017mixup}.
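As an example, Mixup convex-combines two samples and their one-hot labels with a Beta-distributed weight; the sketch below follows the formulation of Zhang et al. but operates on plain Python lists for brevity (alpha=0.2 is a common default, not a value taken from the surveyed papers):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Mixup: draw lam ~ Beta(alpha, alpha) and return the convex
    combination lam * sample1 + (1 - lam) * sample2 for both the inputs
    and the one-hot labels."""
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```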
Recently, GAN-based image-to-image translation methods have opened up new possibilities for data augmentation.
The generative model of a GAN can generate a new image from an original input image.
The image can be transformed into another style such as day-to-night, summer-to-winter, and sunny-to-foggy.
The objects in the image will also be transformed in color, size, and orientation.
With respect to training the generative model, several GAN architectures can be used, such as Pix2Pix \cite{isola2017Pix2Pix}, CycleGAN \cite{zhu2017cycleGAN}, and AugGAN \cite{huang2018AugGAN}.
\subsubsection{Component detection}
Component detection in inspection data analysis refers to obtaining the position and category of the power line components in the aerial image.
The position information can be represented by the coordinates of a rectangular box or a polygonal box.
The polygonal box is used in the segmentation task, which acquires the location of the component in a more meticulous way.
There are two major goals for component detection:
1) collect key frames that contain power line components from aerial videos;
2) crop the component regions from the original image for further fault identification.
Given the tremendously rapid evolution of deep learning, there are many successful detection networks that can be applied in inspection data analysis.
Two main indicators should be considered when selecting these networks for different applications: accuracy and speed.
For instance, detection speed is the most important performance indicator in post-disaster inspection.
In contrast, for long-period inspection tasks, the electrical company concentrates more on detection accuracy.
Every detection network aims at detecting objects fast and precisely.
However, accuracy and speed are generally contradictory; for example, most high-accuracy networks have a correspondingly high computational cost.
In most cases, the two-stage DL-based detection method has higher accuracy than the one-stage method but, in contrast, lower detection speed.
Another factor that affects the performance of a DL-based detection method is the basic network.
Therefore, different combinations of the detection scheme and the basic network have diverse performances.
Recently, a guide was presented by Huang et al. \cite{huang2016modelzoo} for selecting a DL-based detection method that achieves the appropriate performance for a given application.
We can also refer to the leaderboards of several large-scale datasets (e.g., COCO and ImageNet) to look for favorable detection methods.
For reference in this paper, two suggested combinations can be applied to the object detection of inspection data.
Concerning applications with high accuracy requirements, the combination of Faster R-CNN with NasNet \cite{pham2018enas} is an exceptional selection.
With respect to applications that require low computation cost, SSD with ResNet-FPN \cite{lin2017retinanet} can be applied, as it can run at high speed with acceptable accuracy.
Furthermore, DL-based segmentation methods (e.g., FCN \cite{long2015FCN}) and DL-based multi-task methods (e.g., Mask R-CNN \cite{he2017mrcnn}) can play the same role as detection methods.
However, they require pixel-level annotations, which means more labor costs for labeling.
\subsubsection{Fault identification}
In fault identification, the component region should first be cropped from the original aerial image based on the results of the component detection stage.
Then, the identification method can be performed on the cropped image.
This two-stage pipeline has the following main advantages:
1) it reduces the search range, which can improve the accuracy and speed;
2) component-specific identification methods can be designed and performed on the corresponding component regions, which means there is no need to perform all identification methods on an input image.
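The two-stage pipeline can be sketched as follows; `detect_components` and the per-component identifiers are placeholders for trained models, and nested lists stand in for pixel arrays:

```python
def diagnose(image, detect_components, identifiers):
    """Two-stage pipeline: detect components, crop each region from the
    image, then run only the component-specific identifier on the crop.

    `detect_components` returns (class_name, (x, y, w, h)) pairs and
    `identifiers` maps class names to identification functions; both are
    placeholders for trained models.
    """
    results = []
    for comp_class, (x, y, w, h) in detect_components(image):
        crop = [row[x:x + w] for row in image[y:y + h]]
        identify = identifiers.get(comp_class)
        if identify is not None:  # skip components without a matching method
            results.append((comp_class, (x, y, w, h), identify(crop)))
    return results
```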
The fault identification tasks in power lines can be summarized into three categories: generic object detection tasks, generic classification tasks, and fault-specific tasks.
In a generic object detection task, the fault identification can be regarded as the localization of fault regions.
For example, the missing-cap fault of insulators is determined by detecting the disappeared part of the insulator string.
Similar to the missing cap, many other faults can also be identified by means of object detection, such as bird's nests and foreign bodies.
Therefore, the DL-based detection methods mentioned above (e.g., Faster R-CNN with NasNet) can be utilized to accomplish these tasks.
Most identification tasks of surface faults are generic classification tasks due to their irregular fault range and degree.
By using a DCNN (e.g., ResNet), we can classify different surface faults such as flashover of insulators, icing of insulators and corrosion of towers.
Compared to detection tasks, which identify the fault by determining the presence or absence of the fault region, the classification task aims at identifying the fault condition level of components.
The remaining faults, such as broken strands of conductors and vegetation encroachment, are difficult to identify by direct detection or classification.
Identification tasks for these faults are summarized as fault-specific tasks.
In order to accomplish these tasks, it is necessary to design special identification methods for different faults.
For instance, a possible solution for broken strand diagnosis is as follows:
first, segment the contours of the conductor in the detected conductor region obtained from the detection stage;
then, design hand-crafted rules to determine which contour is the broken strand according to the characteristics of the fault.
\subsubsection{Model training and optimization}
After the data are processed and the detection and identification methods are configured, it is essential to obtain available models for real-world applications by model training and optimization.
In this paper, the DL method refers to the conceptual network architecture while the DL model represents the network that has actual parameters (e.g., weights of the neuron) and can be implemented on a real-world platform.
Model training is defined as a procedure of updating parameters with back propagation, given an initial model with initial parameters.
There are two frequently-used techniques for model training: fine-tuning and cross-validation.
Fine-tuning is an implementation of transfer learning, where a previously-trained model is utilized as an initial model and then its parameters are adjusted for a new dataset \cite{sze2017survey_DL}.
Compared with learning from scratch (i.e., training without fine-tuning), where a model with random parameters is used as the initial model, fine-tuning makes the training process of learning representations simpler and achieves higher accuracy \cite{yosinski2014transferable}.
The cross-validation is a widespread technique for combating the over-fitting and making full use of the available data \cite{domingos2012Tips_ML}.
In real-world applications, it is common to split the data into two parts: a part for training and another for validation.
These two subsets of data can be denoted as training set and validation set respectively.
The basic form of cross-validation is k-fold cross-validation, where the data is first split into $k$ equally-sized subsets.
Then, $k$ iterations of training and validation are performed.
In each iteration, one subset is held out as the validation set and the remaining $k-1$ subsets are used to train the model.
Finally, we can obtain $k$ well-trained models and corresponding results.
The reliable performance of the method can be acquired by averaging these results and then guides the adjustment of method settings.
In addition, as noted by Arlot et al. \cite{arlot2010survey_cross-validation}, the optimal $k$ in k-fold cross-validation is between 5 and 10.
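The k-fold procedure described above can be sketched in a few lines of Python (the function names are ours; `train_fn` and `eval_fn` stand for any training routine and evaluation metric):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal folds after shuffling."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(train_fn, eval_fn, data, k=5):
    """Run k iterations; in each, one fold is held out for validation and
    the remaining k-1 folds are used for training. Returns the mean score."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i in range(k):
        val = [data[j] for j in folds[i]]
        train = [data[j] for f, fold in enumerate(folds) if f != i for j in fold]
        model = train_fn(train)
        scores.append(eval_fn(model, val))
    return sum(scores) / k
```

The averaged score can then guide the adjustment of method settings, as discussed above.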
With respect to model optimization, there are two widespread optimized directions: saving the computational cost by model pruning and improving the accuracy by model ensemble.
It is widely-recognized that DL methods are typically over-parameterized in order to train a high performance model with stronger representation power, which leads to high computational cost \cite{sze2017survey_DL}.
As a remedy, the model pruning can remove a set of redundant parameters and its procedure consists of three stages:
1) train an over-parameterized model,
2) prune the redundant parameters of the model according to a certain criterion,
3) fine-tune the pruned model to maintain the original accuracy \cite{Li2017Pruning,luo2017thinet,liu2018rethinking,huang2018sparse_structure_selection}.
In real-world applications, there are many remarkable methods that can be employed, which makes selection difficult.
To this end, the model ensemble provides a solution of combining multiple methods to obtain better performance \cite{sagi2018ensemble_survey}.
There are two mainstream ensemble methods that have been widely used in classification and object detection tasks: boosting and bagging.
In boosting, the learner (e.g., ResNet) is trained in sequence that each learner depends on the previous learner.
Particularly, each new learner focuses on samples that the previous ones tended to get wrong.
In bagging, learners are trained independently and parallel, and then the predictions of all learners are combined according to a deterministic average process (e.g., voting) \cite{domingos2012Tips_ML}.
Recently, model ensembling has been applied to the inspection task: Jiang et al. \cite{jiang2019EL-MLP} introduced a bagging-like ensemble method to detect insulator faults.
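As an illustration of the bagging side, the sketch below (ours; the class labels are placeholders) draws bootstrap samples for independent training and combines the learners' predictions by simple voting:

```python
import random
from collections import Counter

def bagging_sample(data, seed):
    """Bootstrap sample: draw len(data) items with replacement, so each
    learner sees a slightly different training set."""
    rng = random.Random(seed)
    return [data[rng.randrange(len(data))] for _ in range(len(data))]

def majority_vote(predictions):
    """Combine the class predictions of independently trained learners by
    voting (ties broken by first-seen order via Counter)."""
    return Counter(predictions).most_common(1)[0][0]
```

In a bagging ensemble, `bagging_sample` would be called once per learner and `majority_vote` applied to the per-image predictions at inference time.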
\section{Challenges and open research issues}
Despite the recent promising results reported in the literature, the adoption of deep learning in image analysis of power lines inspection is still in its infancy and cannot yet satisfactorily address several long-standing challenges.
In this section, we discuss some crucial issues and promising research directions with special attention paid to highlight their challenges and potential opportunities.
\subsection{Data quality problems}
Although the application of UAVs has greatly reduced the workload of inspectors, it has also brought huge amounts of daily data.
It is an emerging issue to make full use of these inspection data to achieve automatic data analysis.
High-quality data is the guarantee of high performance of analysis methods which are based on machine learning technology.
However, there are four main problems in current inspection data:
\begin{itemize}
\item \textbf{High labor-cost of data labeling}.
Until now, most analysis methods are based on supervised learning and rely heavily on manual annotations.
However, labeling the large amount of data in power lines inspection requires professionals to spend a lot of time.
As mentioned by Nguyen et al. \cite{2018DL_inspection_review}, a person needs almost one hour to label 40 images.
\item \textbf{Class imbalance}.
Different components in power lines have different quantities and their faults occur with different frequency.
For instance, the number of fittings is much larger than the tower and the possibility of failure is also higher.
In addition, the accumulated data are not yet sufficient, since UAV inspection has only been developed for a few years.
In extreme cases, some categories even have no training data such as tower collapse for specific area.
These factors lead to class imbalance (also known as long-tailed distribution) that makes the model perform poorly on those categories with insufficient data.
\item \textbf{Intra-class variations}.
The problem of intra-class variations is similar to class imbalance in the sense that it also affects model performance.
In real-world applications, each category of power line components can have many different object instances, which possibly exhibit diverse combinations of characteristics such as color, shape, texture, size, and material.
Furthermore, the various imaging conditions caused by the changing environment impact the object appearance, even for the same instance.
The changing environment includes day-to-night changes, weather conditions, photographing orientation and distance, background, occlusion, etc.
In other words, intra-class variations have two manifestations: diverse object instances and complex backgrounds.
\item \textbf{Multiple data sources}.
In this paper, we only focus on the visible image that is widely used in power lines inspection.
There are also data from some other sources such as thermal images, ultraviolet images, laser scanner data, and text data which contains flight information.
How to effectively use these multi-modal data to accomplish the condition identification of the power lines is a challenging problem.
\end{itemize}
These problems in inspection data limit the application of analysis methods to power lines inspection.
In order to offer potential solutions for the aforementioned challenges, we suggest the following research directions.
\subsubsection{\textbf{Weakly supervised object detection}}
Weakly supervised object detection (WSOD) plays a crucial role in relieving human involvement from object-level annotations, and aims at using image-level labels to train an object detector.
If the labeler only needs to note which objects appear in an image, without marking their positions, the labeling process is greatly accelerated and much labor cost is saved.
Until now, there is only one work concerned with applying weakly supervised learning to the power lines inspection field, which attempts to detect conductors by using image-level class labels \cite{lee2017weakly_line}.
There are many other novel WSOD methods \cite{bilen2016WSDDN,tang2018PCL,tang2017OICR,wei2018ts2c} worth trying in components detection and fault identification.
\subsubsection{\textbf{Automatic image generation}}
To deal with the problems of class imbalance and intra-class variations, automatic image generation is a very promising approach.
This approach generates rare data by pasting or converting.
In pasting, the desired object is first extracted by a segmentation network (e.g., Mask R-CNN\cite{he2017maskrcnn}, U-Net\cite{ronneberger2015UNet}, and DeepLab\cite{chen2018deeplabv3}), and then the object region is pasted onto a background image.
A few works utilize pasting to generate inspection data, including insulators \cite{chang2018Synthetic_ins} and their faults \cite{tao2018ILN_DDN_ins,sampedro2019unet_ins}.
It should be noted that the pasting rule needs careful design in order to obtain realistic data.
In converting, the new image is generally converted from the old one by using Generative Adversarial Networks (GANs) \cite{goodfellow2014GAN}.
An example is shown in \cite{lu2019trasfer_ins}, which realizes the mutual conversion of normal and fault images by means of CycleGAN \cite{zhu2017cycleGAN}.
There are some other powerful GANs that can be used for image generation, such as Pix2Pix \cite{isola2017Pix2Pix} and AugGAN \cite{huang2018AugGAN}.
In this direction, how to bridge the reality gap is important for generating the high quality and realistic synthetic data\cite{cong2019_Image_Harmonization, tremblay2018DR}.
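A minimal sketch of the pasting step is given below, on toy grayscale images represented as nested lists; in a real pipeline the binary mask would come from a segmentation network and additional harmonization would be needed to bridge the reality gap.

```python
def paste_object(background, obj, mask, top, left):
    """Paste `obj` (a 2-D list of pixel values) onto a copy of `background`
    at position (top, left), using a binary `mask` of the same shape as
    `obj` so that only object pixels (mask == 1) overwrite the background."""
    out = [row[:] for row in background]  # copy; the background is untouched
    for i in range(len(obj)):
        for j in range(len(obj[0])):
            if mask[i][j]:
                out[top + i][left + j] = obj[i][j]
    return out
```

Varying the paste position, scale, and background yields new rare-fault samples from a handful of segmented objects.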
\subsubsection{\textbf{Multi-modal object detection}}
To take advantage of multiple data sources, the technology of multi-modal object detection can be applied.
The objective is to fuse the information from different modalities to achieve a more discriminant detection method.
In the existing works of power lines inspection, few researchers attempt to make use of multiple data sources.
Zhao et al. \cite{zhao2019frcnn_ins} discussed that it is feasible to detect insulators in visible images by using the model trained from infrared images.
Maeda et al. \cite{maeda2018CMDELM-LRF} extracted deep features from visible images and text to identify the deterioration level of tower.
Jalil et al. \cite{jalil2019multimodal} applied multi-modal imaging which integrated the infrared and visible images for fault identification of the power line component.
Nevertheless, component and fault detection based on multi-modal data is still at an early stage.
In this direction, the questions of ``what sources to fuse'' and ``how to fuse'' are important for designing multi-modal based methods.
Some works on generic tasks, such as pedestrian detection \cite{guan2019_pedestrian_det}, car detection \cite{chen2017_car_det} and medical image analysis \cite{xu2019_medical}, can be used as references.
\subsection{Small object detection}
There are many small components in the inspection image such as the fitting and conductor.
\mbox{Fig. \ref{fig:challenge_small_obj}} illustrates an example of the small object.
The fault in this sample is missing pin of fittings.
As can be seen in the image, the pixel resolution of the component region is merely 60$\times$40 within the whole 6000$\times$4000 image.
The object is already very small; moreover, the high-resolution image must be resized to a smaller resolution (e.g., 300$\times$300) during training, which makes many features disappear.
Unfortunately, the pooling and down-sampling operations in deep networks make this problem worse.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{challenge_small_obj.jpg}
\caption{An example of small object. The fault in the image is missing pin of fittings.}\label{fig:challenge_small_obj}
\end{figure*}
To detect small objects in aerial inspection images, there are three potential solutions:
The first is to directly enlarge images to different scales.
An example is provided by Bai et al. \cite{bai2019frcnn_multi} for insulators and dampers detection.
The second is multi-stage detection strategy which utilizes the contextual information.
Large components are first detected and cropped as ROIs, and then small objects are located within these ROIs.
This solution also has been applied for the identification of insulator fault \cite{jiang2019EL-MLP,ling2018Fst-Unet,chen2019SOFCN}.
The third is to improve the deep neural network by fusing the features in different scales.
Han et al. \cite{Han2019MYOLO_AM_ins} added three branches to the YOLOv3 network for insulator detection.
Besides that, some other generic fusion methods can be applied, such as Feature Pyramid Networks (FPN) \cite{lin2017fpn}, Top-Down Modulation (TDM) \cite{shrivastava2016tdm}, and Reverse connection with objectness prior networks (RON) \cite{kong2017ron}.
The key in this solution is how to make use of the rich information of small objects in low-level feature maps of shallow convolutional layers.
\subsection{Embedded application}
Due to the increasing demands of high performance computation, reduced data transmission, and highly efficient inspection, it is necessary to accomplish some processes of the analytic system on site (i.e., on-board the UAVs).
Even though some current embedded computing devices, such as the NVIDIA Jetson TX2, can undertake complex image processing tasks including light DCNNs, they still cannot handle high performance analysis methods.
Therefore, how to make inspection data analysis more efficient with short computing time and small memory usage is an important issue for practical engineering.
In this research direction, the technologies of model compression and acceleration \cite{cheng2017comprossion_survey} can be applied that include model pruning \cite{liu2017model_pruning}, network quantization \cite{hubara2017network_quantized}, network decomposition \cite{liu2015network_decomposition}, knowledge distillation \cite{hinton2015knowledge_distillation}, and lightweight network design \cite{howard2019mobilenetv3}.
\subsection{Evaluation baseline}
The evaluation baseline refers to the evaluation metrics and the standard dataset, which can offer a public platform for researchers and facilitate related practice.
Currently, the evaluation metrics used in research on inspection image analysis are diverse, for instance accuracy rate, precision, true positive rate, false alarm rate, and detection rate.
Even for the same evaluation metric, the definition may differ, especially for the accuracy rate.
In addition, the available public datasets of power lines inspection are not enough to build a comprehensive standard dataset that can well evaluate the performance of an analysis system.
As for building an evaluated baseline, the generic and successful dataset such as ImageNet \cite{russakovsky2015ILSVRC} and COCO \cite{lin2014cocodataset}, can provide some experience.
When constructing the dataset, many factors should be considered, for instance the component category, fault type, labeled rule, flight environments, and size of samples.
We deem that a successful evaluation baseline can facilitate the studies and applications of power lines inspection.
\section{Conclusion}
In this paper, we have provided a comprehensive review of inspection data analysis in power lines.
The latest developments have been summarized and the key characteristics of these researches have been discussed.
Firstly, studies on power line component detection in inspection images are reviewed from the perspective of insulator, tower, conductor, and fitting.
Then, the literature survey of power line fault identification is conducted in a fault-specific way including surface fault of insulator, missing-cap of insulator, tower corrosion, bird's nest, broken strand, foreign body, vegetation encroachment, broken fitting, and missing-pin of fitting.
Next, a thorough review about deep learning related works in the area of data analysis of power lines inspection is introduced.
These articles are categorized into five groups including direct utilization of existing frameworks, deep feature extraction, network cascading, data insufficiency issue, and improvement based on domain knowledge.
Further, a basic conception of an inspection data analysis system, mainly based on deep learning technology, is proposed.
This system consists of four parts: data preprocessing, component detection, fault diagnosis, and model training and optimization.
Finally, we discuss the challenges and propose future research directions from the perspectives of data quality, small object detection, embedded application, and evaluation baseline.
Inspection data analysis in power lines is still an emerging and promising research area.
We hope that this review can provide a complete picture and deep insights into this area for researchers who are interested in developing an automatic analysis system for power line inspection data using deep learning technology.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{\uppercase{Introduction}}
\noindent View materialization is presumably one of the most effective
techniques to improve the query performance of data warehouses. Materialized views
are physical structures that improve data access time by precomputing
intermediary results.
Typical OLAP queries defined on data warehouses consist in selecting
and aggregating data with queries such as
grouping sets (\textsc{group by} clauses). By precomputing
many plausible groupings, we can avoid computing aggregates over large tables.
However,
materializing views requires additional storage space and induces maintenance
overhead when refreshing the data warehouse.
One of the most important issues in data warehouse physical design is to select
an appropriate configuration of materialized views. Several heuristics and
methodologies were proposed for the materialized view selection problem, which
is NP-hard~\cite{gup97sel}. Most of these techniques exploit cost models to
estimate the data access cost using materialized views, their maintenance and
storage cost. This cost estimation mostly depends on view-size estimation.
Several techniques have been proposed for view-size estimation:
some requiring assumptions about the data
distribution and others that are ``unassuming.''
A common statistical assumption is uniformity~\cite{gol98met}, but any
skew in the data leads to an overestimate of the size of the view.
Generally, statistically assuming estimators are computed quickly, the most expensive
step being the random sampling, but their error
can be large and cannot be bounded a priori.
In this paper, we consider several state-of-the-art statistically unassuming
estimation techniques: \textsc{Gibbons-Tirthapura}~\cite{Gibbons2001},
probabilistic counting~\cite{flajolet1985pca}, and \textsc{LogLog}
probabilistic counting~\cite{durand2003lcl}. While relatively expensive,
unassuming estimators tend to provide a good accuracy. To our knowledge, this
is the first experimental comparison of unassuming view-size estimation
techniques in a data warehousing setting.
\section{\uppercase{Related Work}}\label{sec:RealtedWork}
\noindent Haas~et~al.~\cite{haas1995sbe} estimate the view-size from the histogram of a
sample: adaptively, they choose a different estimator based on the skew of
the distribution. Faloutsos~et~al.~\cite{faloutsos1996msd} obtain results nearly as accurate as
Haas et al., that is, an error of approximately 40\%, but they only need the
dominant mode of the histogram, the number of distinct elements in the sample,
and the total number of elements. In sample-based estimations, in the
worst-case scenario, the histogram might be as large as the view size we are
trying to estimate. Moreover, it is difficult to derive unassuming accuracy
bounds since the sample might not be representative. However, a sample-based
algorithm is expected to be an order of magnitude faster than an algorithm
which processes the entire data set.
Probabilistic counting~\cite{flajolet1985pca} and
\textsc{LogLog} probabilistic counting (henceforth \textsc{LogLog})~\cite{durand2003lcl} have been
shown to provide very accurate unassuming view-size estimations quickly,
but their estimates assume we have independent hashing. Because of this
assumption, their theoretical bound may not hold in practice. Investigating whether
this is a problem in practice is one of the contributions of this paper.
Gibbons and Tirthapura~\cite{Gibbons2001} derived an unassuming bound
(henceforth \textsc{Gibbons-Tirthapura})
that only requires pairwise independent hashing. It has been shown
recently that with $k$-wise independent hashing for $k>2$, the
theoretical bound can be improved substantially~\cite{viewsizetechreport}.
The benefit of \textsc{Gibbons-Tirthapura} is that as long as the random number generator
is truly random, the theoretical bounds have to hold irrespective of
the size of the view or of other factors.
All unassuming estimation techniques in this paper (\textsc{LogLog},
probabilistic counting and \textsc{Gibbons-Tirthapura}), have an accuracy proportional
to $1/\sqrt{M}$, where $M$ is a parameter determining the memory usage.
\section{\uppercase{Estimation by Multifractals}}\label{sec:multi}
\noindent We implemented the statistically assuming algorithm by Faloutsos~et~al. based on a multifractal model~\cite{faloutsos1996msd}.
Nadeau and Teorey~\cite{nadeau2003pmo}
reported competitive results for this approach.
Maybe surprisingly, given a sample, all that is required to learn the multifractal model
is the number of distinct elements in the sample $F_0$, the number of elements
in the sample $N'$, the total number of elements $N$, and the number of occurrences
of the most frequent item in the sample $m_\textrm{max}$.
Hence, a very simple
implementation is possible (see Algorithm~\ref{algo:multifractal}).
Faloutsos et al. erroneously introduced a tolerance factor $\epsilon$ in their algorithm:
unlike what they suggest, it is not possible, unfortunately, to adjust the model
parameter for an arbitrarily
good fit; instead, we have to be content with the best possible fit (see line~9
and following).
\begin{algorithm}
\begin{small}\begin{algorithmic}[1]
\STATE \textbf{INPUT:} Fact table $t$ containing $N$ facts
\STATE \textbf{INPUT:} \textsc{group by} query on dimensions $D_1, D_2, \ldots, D_d$
\STATE \textbf{INPUT:} Sampling ratio $0<p<1$
\STATE \textbf{OUTPUT:} Estimated size of \textsc{group by} query
\STATE Choose a sample $t'$ of $t$ of size $N'=\lfloor pN \rfloor$
\STATE Compute $g$=\textsc{group by}($t'$)
\STATE Let $m_{\textrm{max}}$ be the number of occurrences of the most frequent tuple $x_1,\ldots, x_d$ in $g$
\STATE Let $F_0$ be the number of tuples in $g$
\STATE $k \leftarrow \lceil\log F_0 \rceil$
\STATE $F \leftarrow 0$
\WHILE{$F<F_0$}
\STATE $p\leftarrow (m_\textrm{max}/{N'})^{1/k}$
\STATE $F\leftarrow \sum_{a=0}^k {k\choose a} (1-(p^{k-a}(1-p)^a)^{N'})$
\STATE $k \leftarrow k+1$
\ENDWHILE
\STATE $p\leftarrow (m_\textrm{max}/N)^{1/k}$
\STATE \textbf{RETURN:} $\sum_{a=0}^k {k\choose a} (1-(p^{k-a}(1-p)^a)^N)$
\end{algorithmic}
\end{small}
\caption{\label{algo:multifractal}View-size estimation using a multifractal distribution model.}
\end{algorithm}
\section{\uppercase{Unassuming View-Size Estimation}}
\subsection{Independent Hashing}\label{sec:hashing}
\noindent Hashing maps objects to values in a nearly random way. It has been
used for efficient data structures such as hash tables and in cryptography. We are interested in
hashing functions from tuples to $[0,2^L)$ where $L$ is fixed ($L=32$ in this
paper). Hashing is uniform if $P(h(x)=y)=1/2^L$ for all $x,y$, that is, if all
hashed values are equally likely. Hashing is \textit{pairwise independent} if
$P(h(x_1)=y \land h(x_2)=z)= P(h(x_1)=y) P(h(x_2)=z)=1/4^L$ for all
$x_1,x_2,y,z$. Pairwise independence implies uniformity. Hashing is $k$-wise
independent if $P(h(x_1)=y_1 \land \cdots \land h(x_k)=y_k)=1/2^{kL}$ for all
$x_i,y_i$. Finally, hashing is (fully) independent if it is $k$-wise
independent for all $k$. It is believed that independent hashing is unlikely
to be possible over large data sets using a small amount of
memory~\cite{durand2003lcl}.
Next, we show how 3-wise independent hashing is easily achieved in a
multidimensional data warehousing setting. For each dimension $D_i$, we build a
lookup table $T_{i}$, using the attribute values of $D_i$ as keys.
Each time we meet a new key,
we generate a random number in $[0, 2^L)$ and store it in the lookup table $T_i$.
This random number is the hashed value of this key.
This table generates (fully)
independent hash values in amortized constant time.
In a data warehousing context, whereas dimensions are
numerous, each dimension will typically have few distinct values: for example,
there are only 8,760 hours in a year. Therefore, the lookup table will often use a
few MiB or less. When hashing a tuple $x_1,x_2,\ldots,x_k$ in $D_1\times
D_2\times \ldots D_k$, we use the value $T_1(x_1) \oplus T_2(x_2) \oplus \cdots
\oplus T_k(x_k)$ where $ \oplus$ is the \textsc{exclusive or} operator. This hashing
is 3-wise independent and requires amortized constant time.
Tables $T_i$ can be reused for several estimations.
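The construction above can be sketched in Python as follows; the function names are ours, `make_dimension_table` builds the lazy lookup table $T_i$, and `hash_tuple` computes the \textsc{exclusive or} of the per-dimension values:

```python
import random

L = 32  # hashed values lie in [0, 2**L)

def make_dimension_table(seed):
    """Lazy lookup table T_i: the first time a key is seen, it is assigned
    a random L-bit value, which is returned on every later access."""
    rng = random.Random(seed)
    table = {}
    def lookup(key):
        if key not in table:
            table[key] = rng.getrandbits(L)
        return table[key]
    return lookup

def hash_tuple(tables, tup):
    """Hash a d-dimensional tuple as the XOR of the per-dimension values;
    with fully random tables this hashing is 3-wise independent."""
    h = 0
    for t, x in zip(tables, tup):
        h ^= t(x)
    return h
```

Note the XOR structure: changing a single attribute value shifts every hash containing it by the same XOR difference, which is exactly what limits the scheme to 3-wise independence.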
\subsection{Probabilistic Counting}\label{sec:probacounting}
\noindent Our implementation of (stochastic) probabilistic counting ~\cite{flajolet1985pca}
is given in Algorithm~\ref{algo:stoch}. Recently, a variant of this algorithm,
\textsc{LogLog}, was proposed~\cite{durand2003lcl}.
Assuming independent hashing, these algorithms have standard error
(or the standard deviation of the error) of
$0.78/\sqrt{M}$ and $1.3/\sqrt{M}$ respectively\danielcut{ (see
Figure~\ref{theorycounting})}. These theoretical results assume independent
hashing which we cannot realistically provide. Thus, we do not expect these
theoretical results to be always reliable.
\begin{algorithm}
\begin{small} \begin{algorithmic}[1]
\STATE \textbf{INPUT:} Fact table $t$ containing $N$ facts
\STATE \textbf{INPUT:} \textsc{group by} query on dimensions $D_1, D_2, \ldots, D_d$
\STATE \textbf{INPUT:} Memory budget parameter $M=2^k$
\STATE \textbf{INPUT:} Independent hash function $h$ from $d$ tuples to $[0,2^L)$.
\STATE \textbf{OUTPUT:} Estimated size of \textsc{group by} query
\STATE $b\leftarrow$ $M\times L$ matrix (initialized at zero)
\FOR{tuple $x\in t$}
\STATE $x'\leftarrow \pi_{D_1, D_2, \ldots, D_d} (x)$ \COMMENT{projection of the tuple}
\STATE $y\leftarrow h(x')$ \COMMENT{hash $x'$ to $[0,2^L)$}
\STATE $\alpha \leftarrow y \bmod{M}$
\STATE $i\leftarrow$ position of the first 1-bit in $\lfloor y/M\rfloor$
\STATE $b_{\alpha,i}\leftarrow 1$
\ENDFOR
\STATE $A \leftarrow 0$
\FOR{$\alpha \in \{0,1,\ldots, M-1\}$}
\STATE increment $A$ by the position of the first zero-bit in $b_{\alpha,0},b_{\alpha,1},\ldots$
\ENDFOR
\STATE \textbf{RETURN:} $(M/\phi)\, 2^{A/M}$ where $\phi\approx 0.77351$
\end{algorithmic}\end{small}
\caption{\label{algo:stoch}View-size estimation using (stochastic) probabilistic counting.}
\end{algorithm}
\begin{algorithm}
\begin{small} \begin{algorithmic}[1]
\STATE \textbf{INPUT:} fact table $t$ containing $N$ facts
\STATE \textbf{INPUT:} \textsc{group by} query on dimensions $D_1, D_2, \ldots, D_d$
\STATE \textbf{INPUT:} Memory budget parameter $M=2^k$
\STATE \textbf{INPUT:} Independent hash function $h$ from $d$ tuples to $[0,2^L)$.
\STATE \textbf{OUTPUT:} Estimated size of \textsc{group by} query
\STATE $\mathcal{M}\leftarrow \underbrace{0,0,\ldots,0}_M$
\FOR{tuple $x\in t$}
\STATE $x'\leftarrow \pi_{D_1, D_2, \ldots, D_d} (x)$ \COMMENT{projection of the tuple}
\STATE $y\leftarrow h(x')$ \COMMENT{hash $x'$ to $[0,2^L)$}
\STATE $j\leftarrow$ value of the first $k$ bits of $y$ in base 2
\STATE $z\leftarrow$ position of the first 1-bit in the remaining $L-k$ bits of $y$ (count starts at 1)
\STATE $\mathcal{M}_j \leftarrow \max (\mathcal{M}_j,z)$
\ENDFOR
\STATE \textbf{RETURN:} $\alpha_M M 2^{\frac{1}{M}\sum_j \mathcal{M}_j}$ where $\alpha_M \approx 0.39701-(2\pi^2+\ln^2 2)/(48M)$.
\end{algorithmic}\end{small}
\caption{\label{algo:loglog}View-size estimation using \textsc{LogLog}.}
\end{algorithm}
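For concreteness, Algorithm~\ref{algo:loglog} can be rendered as the following Python sketch; the independent hash function is emulated by memoizing a random $L$-bit value per distinct key, in the spirit of the lookup tables of the previous section, and the parameter choices are illustrative only:

```python
import math
import random

def loglog_estimate(stream, k=8, seed=7):
    """LogLog distinct-count estimate with M = 2**k buckets. Each item is
    hashed to L bits; the first k bits pick a bucket, and the position of
    the first 1-bit of the remaining bits (counting from 1) updates it."""
    M, L = 1 << k, 32
    rng = random.Random(seed)
    hashed = {}  # memoized random values stand in for an independent hash
    buckets = [0] * M
    for item in stream:
        if item not in hashed:
            hashed[item] = rng.getrandbits(L)
        y = hashed[item]
        j = y >> (L - k)                 # bucket index from the first k bits
        rest = y & ((1 << (L - k)) - 1)  # remaining L - k bits
        z = (L - k) - rest.bit_length() + 1 if rest else (L - k) + 1
        buckets[j] = max(buckets[j], z)
    alpha = 0.39701 - (2 * math.pi ** 2 + math.log(2) ** 2) / (48 * M)
    return alpha * M * 2 ** (sum(buckets) / M)
```

Since each bucket stores only a small integer, the memory footprint is a few bits per bucket, which is what makes the $1/\sqrt{M}$ accuracy so cheap.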
\subsection{\textsc{Gibbons-Tirthapura}}\label{sec:gibbons}
\noindent Our implementation of the \textsc{Gibbons-Tirthapura} algorithm (see
Algorithm~\ref{algo:gibbons}) hashes each tuple only once unlike the original
algorithm~\cite{Gibbons2001}. Moreover, the independence of the hashing depends
on the number of dimensions used by the \textsc{group by}. If the view-size is
smaller than the memory parameter ($M$), the view-size estimation is without
error. For this reason, we expect \textsc{Gibbons-Tirthapura} to perform well
when estimating
small and moderate view sizes.
\begin{algorithm}
\begin{small} \begin{algorithmic}[1]
\STATE \textbf{INPUT:} Fact table $t$ containing $N$ facts
\STATE \textbf{INPUT:} \textsc{group by} query on dimensions $D_1, D_2, \ldots, D_d$
\STATE \textbf{INPUT:} Memory budget parameter $M$
\STATE \textbf{INPUT:} $k$-wise independent hash function $h$ from $d$ tuples to $[0,2^L)$.
\STATE \textbf{OUTPUT:} Estimated size of \textsc{group by} query
\STATE $\mathcal{M}\leftarrow$ empty lookup table
\STATE $\ell\leftarrow 0$
\FOR{tuple $x\in t$}
\STATE $x'\leftarrow \pi_{D_1, D_2, \ldots, D_d} (x)$ \COMMENT{projection of the tuple}
\STATE $y\leftarrow h(x')$ \COMMENT{hash $x'$ to $[0,2^L)$}
\STATE $j\leftarrow$ position of the first 1-bit in $y$ (count starts at 0)
\IF{$j \geq \ell$}
\STATE $\mathcal{M}_{x'}\leftarrow j$
\WHILE{$\textrm{size}(\mathcal{M})>M$}
\STATE $\ell\leftarrow \ell+1$
\STATE prune all entries in $\mathcal{M}$ having value less than $\ell$
\ENDWHILE
\ENDIF
\ENDFOR
\STATE \textbf{RETURN:} $2^\ell\, \textrm{size}(\mathcal{M})$
\end{algorithmic}\end{small}
\caption{\label{algo:gibbons}\textsc{Gibbons-Tirthapura} view-size estimation.}
\end{algorithm}
\begin{figure}
\begin{center}
\includegraphics[width=0.6\columnwidth,angle=270]{\myfig{epsilon-vs-M}}
\end{center}
\caption{
\label{theory}Bound on the estimation error (19 times out of 20) as a function
of the number of tuples kept in memory ($M\in [128,2048]$) according to Proposition~\ref{countprop}
for \textsc{Gibbons-Tirthapura} view-size estimation with $k$-wise independent hashing.
}
\end{figure}
The theoretical bounds given in~\cite{Gibbons2001} assumed
pairwise independence. The generalization below is from~\cite{viewsizetechreport} and is illustrated by Figure~\ref{theory}.
\begin{proposition}\label{countprop}Algorithm~\ref{algo:gibbons} estimates the number of distinct tuples
within relative precision $\epsilon$,
with a $k$-wise independent hash for $k\geq 2$ by storing $M$ distinct tuples ($M\geq 8 k$)
and with reliability $1-\delta$ where $\delta$ is given by
\[\delta= \frac{k^{k/2}}{ e^{k/3} M^{k/2}} \left ( 2^{k/2}
+\frac{ 8^{k/2}}{ \epsilon^k (2^{k/2}-1)} \right ).\]
More generally, we have
\newcommand{\frack}[2]{{{#1}/{#2}}}
\begin{eqnarray*}\delta
&\leq & \frac{k^\frack{k}{2}}{ e^{\frack{k}{3}} M^{\frack{k}{2}} } \left ( \frac{\alpha^{\frack{k}{2}}}{(1-\alpha)^k}+ \frac{4^{\frack{k}{2}}}{\alpha^{\frack{k}{2}} \epsilon^k (2^{\frack{k}{2}}-1)}\right ) .
\end{eqnarray*}
for $4k/M\leq \alpha<1$ and any $k,M>0$.
\end{proposition}
In the case where hashing is 4-wise independent,
we derive a more concise bound.
\begin{corollary}\label{corollary1}
With 4-wise independent hashing, Algorithm~\ref{algo:gibbons} estimates
the number of distinct tuples
within relative precision $\epsilon\approx 5/\sqrt{M} $, 19 times out of 20
for $\epsilon$ small.
\end{corollary}
\begin{proof}\newcommand{\frack}[2]{{{#1}/{#2}}}
We start from the second inequality of Proposition~\ref{countprop}.
Differentiating $\frac{\alpha^{\frack{k}{2}}}{(1-\alpha)^k}+\frac{4^{\frack{k}{2}}}{\alpha^{\frack{k}{2}} \epsilon^k (2^{\frack{k}{2}}-1)}$
with respect to $\alpha$ and setting the result to zero, we get
$3\alpha^4 \epsilon^4+16 \alpha^3-48 \alpha^2-16=0$ (recall that $4k/M\leq \alpha<1$).
By multiscale analysis, we seek a solution of the form $\alpha = 1-a\epsilon^r+o(\epsilon^r)$ and we have that $\alpha\approx 1- \frack{1}{2}\sqrt[3]{\frack{3}{2}} \epsilon^{4/3}$ for $\epsilon$ small. Substituting this value of $\alpha$, we have $\frac{\alpha^{\frack{k}{2}}}{(1-\alpha)^k}+ \frac{4^{\frack{k}{2}}}{\alpha^{\frack{k}{2}} \epsilon^k (2^{\frack{k}{2}}-1)}\approx \frack{128}{24 \epsilon^4}$. The result follows by substituting in the second inequality.
\end{proof}
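As a quick numerical sanity check (illustrative Python, not part of the original analysis), solving the approximate bound from the proof, $\delta \approx \frac{k^{k/2}}{e^{k/3} M^{k/2}} \cdot \frac{128}{24\,\epsilon^4}$, for $\epsilon$ at $k=4$ and $\delta = 1/20$ recovers the constant of Corollary~\ref{corollary1}:

```python
import math

k, delta = 4, 1 / 20
prefactor = k ** (k / 2) / math.exp(k / 3)   # k^{k/2} / e^{k/3}
# With eps = c / sqrt(M), the M dependence cancels and we can solve for c:
c = (128 / 24 * prefactor / delta) ** 0.25
print(round(c, 2))            # ~4.6, i.e. eps ≈ 5/sqrt(M) as stated
for M in (128, 2048):
    print(M, c / math.sqrt(M))
```

For the memory range of Figure~\ref{theory}, this gives a 19/20 error between roughly 41\% ($M=128$) and 10\% ($M=2048$).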
\section{\uppercase{Experimental Results}}\label{sec:Experiment}
\noindent To benchmark the quality of the view-size estimation against memory
usage and speed, we ran tests over the US~Census~1990 data
set~\cite{KDDRepository} as well as on synthetic data produced by
DBGEN~\cite{DBGEN}. The synthetic data was produced by running the DBGEN
application with a scale factor parameter of 2. The characteristics of
the data sets are detailed in Table~\ref{tab:caractDataSet}. We selected
20 and 8 views respectively from these data sets: all views in US~Census~1990
have at least 4~dimensions whereas only 2 views have at least 4~dimensions in the synthetic
data set.
\begin{table}
\centering
\begin{tabular}{ccc} \hline
& \textbf{US~Census~1990} & \textbf{DBGEN} \\ \hline
\# of facts& 2458285 & 13977981 \\
\# of views& 20 & 8 \\
\# of attributes& 69 & 16 \\ \hline
Data size& 360\,MB & 1.5\,GB\\ \hline
\end{tabular}
\caption{Characteristics of the data sets.}\label{tab:caractDataSet}
\end{table}
We used the GNU C++ compiler version~4.0.2 with the ``-O2'' optimization flag on a Centrino Duo 1.83\,GHz machine with 2\,GB of
RAM running Linux kernel 2.6.13--15. No thrashing was observed. To ensure
reproducibility, C++ source code is available freely from the authors.
For the US~Census~1990 data set, the hashing look-up table is a simple array
since there are always fewer than 100~attribute values per dimension.
Otherwise, for the synthetic DBGEN data, we used the GNU/SGI STL extension
\texttt{hash\_map}, which is to be integrated into the C++ standard as
\texttt{unordered\_map}: it provides amortized $O(1)$ inserts and queries. All
other look-up tables are implemented using the STL \texttt{map} template, which
has the same performance characteristics as a red-black tree. We used
comma-separated (CSV) text files (and pipe-separated files for DBGEN) and wrote
our own C++ parsing code.
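The per-dimension look-up tables can be sketched as follows; this Python snippet is illustrative only (the actual implementation uses C++ arrays and \texttt{hash\_map}s), showing how attribute values are mapped to small integer codes before the tuples are hashed:

```python
class DimensionEncoder:
    """Per-dimension look-up table mapping attribute values to small
    integer codes, in the spirit of the tables described above."""
    def __init__(self):
        self.table = {}

    def code(self, value):
        # Assign codes 0, 1, 2, ... in order of first appearance.
        return self.table.setdefault(value, len(self.table))

# One encoder per dimension of the view (three dimensions here).
encoders = [DimensionEncoder() for _ in range(3)]
encoded = tuple(enc.code(v) for enc, v in zip(encoders, ("CA", "male", "1990")))
```

For the US~Census~1990 data, where each dimension has fewer than 100 values, the dictionary can be replaced by a plain array indexed by the raw attribute value.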
The test protocol we adopted (see Algorithm~\ref{algo:protocol}) has been
executed for each estimation technique (\textsc{LogLog}, probabilistic counting
and \textsc{Gibbons-Tirthapura}), \textsc{group by} query, random seed and
memory size. At each step corresponding to those parameter values, we compute
the estimated-size values of \textsc{GROUP BY}s and time required for their
computation.
For the multifractal estimation technique, we computed the time
and estimated size in the same way for each \textsc{GROUP BY}, sampling ratio
value and random seed.
\begin{algorithm}
\begin{small}\begin{algorithmic}[1]
\FOR{\textsc{group by} query $q\in Q$}
\FOR{memory budget $m\in M$}
\FOR{random seed value $r\in R$}
\STATE Estimate the size of \textsc{group by} $q$ with $m$ memory budget and $r$
random seed value
\STATE Save estimation results (time and estimated size) in a log
file
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}\end{small}
\caption{\label{algo:protocol}Test protocol.}
\end{algorithm}
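The protocol of Algorithm~\ref{algo:protocol} is a plain parameter sweep; a minimal Python sketch follows (the function and file names are illustrative, not from the original code):

```python
import csv
import itertools
import time

def run_protocol(queries, budgets, seeds, estimate, log_path="results.csv"):
    """Sweep every (query, memory budget, seed) combination and log the
    wall time and estimated size, as in the test protocol above."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["query", "memory", "seed", "seconds", "estimate"])
        for q, m, r in itertools.product(queries, budgets, seeds):
            t0 = time.perf_counter()
            est = estimate(q, m, r)
            writer.writerow([q, m, r, time.perf_counter() - t0, est])
```

Each estimation technique supplies its own `estimate` callable, so the same sweep driver serves all four techniques.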
\noindent \textbf{US Census 1990.} Figure~\ref{fig:expUscensus19Out20} plots the largest
$95^{\textrm{th}}$-percentile error observed over 20 test estimations for
various memory sizes $M \in \{16, 64, 256, 2048\}$. For the
multifractal estimation technique, we represent the error for each sampling
ratio $p \in \{0.1\%, 0.3\%, 0.5\%, 0.7\%\}$. The X axis represents the size of the
exact \textsc{GROUP BY} values. This
$95^{\textrm{th}}$-percentile error can be related to the theoretical bound for
$\epsilon$ with 19/20 reliability for \textsc{Gibbons-Tirthapura} (see
Corollary~\ref{corollary1}): we see that this upper bound is verified experimentally.
However, the error on ``small'' view sizes can exceed 100\% for probabilistic counting
and \textsc{LogLog}.
\begin{figure*}[htb]
\begin{center}
\subfigure[\textsc{Gibbons-Tirthapura}]{\includegraphics[width=0.8\columnwidth]{\myfig{gibbons_error_19out20}}\label{fig:expUscensus19Out20Gibbons}}\quad
\subfigure[Probabilistic counting]{\includegraphics[width=0.8\columnwidth]{\myfig{counting_error_19out20}}\label{fig:expUscensus19Out20Counting}}
\\ [1pt]
\subfigure[\textsc{LogLog}]{\includegraphics[width=0.8\columnwidth]{\myfig{loglog_error_19out20}}\label{fig:expUscensus19Out20LogLog}}\quad
\subfigure[Multifractal]{\includegraphics[width=0.8\columnwidth]{\myfig{multifractal_error_19out20}}\label{fig:expUscensus19Out20Multi}}
\end{center}
\caption{$95^{\textrm{th}}$-percentile error 19/20 $\epsilon$ as a function of
exact view size for increasing values of $M$ (US Census 1990)} \label{fig:expUscensus19Out20}
\end{figure*}
\noindent \textbf{Synthetic data set.} Similarly, we computed
the 19/20 error of each technique over the DBGEN data set.
We observed that the four techniques exhibit the same behaviour as on the US
Census data set. This time, however, the theoretical bound for the 19/20 error is larger
because the synthetic data set has many views with fewer than 2~dimensions.
\noindent \textbf{Speed.} We also computed the time needed by each
technique to estimate the view sizes. We do not plot these timings because they are
similar for all techniques, except for the multifractal technique, which is the
fastest one. In addition, we observed that the time does not depend on the memory budget
because most of the time is spent streaming and hashing the data.
For the multifractal technique, the processing time increases with the
sampling ratio.
The time needed to estimate the sizes of all the views by \textsc{Gibbons-Tirthapura}, probabilistic counting and
\textsc{LogLog} is about 5 minutes for the US~Census~1990 data set and 7 minutes for the synthetic
data set. For the multifractal technique, all the estimates are done in roughly
2 seconds. This time does not include the time needed for sampling the data, which can
be significant: it takes 1~minute (resp. 4~minutes) to sample 0.5\%
of the US Census data set (resp. the synthetic data set -- TPC~H)
because the data is not stored in a flat file.
\section{\uppercase{Discussion}}
\noindent Our results show that probabilistic counting and \textsc{LogLog} do
not entirely live up to their theoretical promise. For small view sizes, the
relative accuracy can be very low.
When comparing the memory usage of the various techniques,
we have to keep in mind that the memory parameter $M$
can translate into different memory usages. The memory usage also depends
on the number of dimensions of each view. Generally, \textsc{Gibbons-Tirthapura}
will use more memory for the same value of $M$ than either probabilistic counting or
\textsc{LogLog}, though all of these can be small compared to the memory
usage of the lookup tables $T_i$ used for 3-wise independent hashing.
In this paper, the memory usage was always of the order of a few MiB which
is negligible in a data warehousing context.
View-size estimation by sampling can take minutes when the data is not laid out
in a flat file or indexed, but
the time required for an unassuming estimation is even higher.
Streaming and hashing the tuples accounts for most of the processing time, so
for faster estimates, we could store all hashed values in a bitmap (one per dimension).
\section{\uppercase{Conclusion and future work}}\label{sec:Conclusion}
\noindent In this paper, we have provided unassuming techniques for view-size
estimation in a data warehousing context. We adapted an estimator due to
Gibbons and Tirthapura. We compared this technique experimentally with
stochastic probabilistic counting, \textsc{LogLog}, and
multifractal statistical models. We have demonstrated that among these
techniques, only \textsc{Gibbons-Tirthapura} provides stable estimates irrespective of
the size of views. However, (stochastic) probabilistic counting has a small edge
in accuracy for relatively large views, whereas the competitive sampling-based
technique (multifractal) is an order of magnitude faster but can
provide crude estimates. According to our experiments, \textsc{LogLog} was not faster than either \textsc{Gibbons-Tirthapura}
or probabilistic counting, and since it is less accurate than probabilistic counting, we
cannot recommend it.
There is ample room for future work. Firstly,
we plan to extend
these techniques to other types of aggregated views (for
example, views including \textsc{Having} clauses). Secondly,
we want to precompute the hashed values for very fast view-size estimation.
Furthermore, these
techniques should be tested in a materialized view selection
heuristic.
\section*{\uppercase{Acknowledgements}}
\noindent The authors wish to thank Owen Kaser for hardware and software.
This work is supported by NSERC grant 261437 and by FQRNT grant 112381.
\renewcommand{\baselinestretch}{0.5}
\bibliographystyle{apalike}
{\small
\section{Introduction}
Machine learning (ML)-based algorithms have been widely used in the field of neutrino physics, for applications ranging from data acquisition to data reconstruction and analysis~\citep{Psihas:2017yuc, MINERvA:2018smv, Racah:2016gnm, Abbasi:2021ryj}. A detector technology ideally suited for computer vision applications in neutrino physics is that of liquid argon time projection chambers (LArTPCs), which are employed by the Deep Underground Neutrino Experiment (DUNE)~\citep{DUNE:2020lwj} and Short-Baseline Neutrino~\citep{MicroBooNE:2015bmn} experiments. ML applications are now deeply integrated into the event reconstruction and data analyses for the LArTPC experiments~\citep{MicroBooNE:2021bcu, ArgoNeuT:2021xtd, DUNE:2022fiy}.
Event record sizes for the current generation of LArTPC experiments are typically ${\leq}1$~GB and are expected to increase in the next few years.
With increased event size, the event reconstruction, especially the inference of ML algorithms, will become a challenge. Additionally, neutrino detectors are sensitive to neutrinos from a core-collapse supernova in or near the Milky Way. One of DUNE's physics goals is to rapidly reconstruct detector trigger records from such a supernova to provide rapid localization information to optical telescopes, placing a premium on short event reconstruction times. We have demonstrated GPU-accelerated ML inference as a service, which significantly reduced the reconstruction time for simulated neutrino events in the ProtoDUNE experiment~\citep{Wang:2020fjr}. Later, we tested the same GPU-as-a-Service (GPUaaS) approach to process the entire ProtoDUNE Run I dataset to demonstrate the scalability of this method. This paper reports the results of those tests.
\section{Infrastructure setup and methods}
\subsection{ProtoDUNE background}
The ProtoDUNE single phase detector (ProtoDUNE-SP)~\citep{DUNE:2020cqd,DUNE:2021hwx} is a liquid argon time projection chamber (LArTPC) that serves as a prototype for the first far detector module of DUNE~\citep{DUNE:2020lwj}. The ProtoDUNE-SP is installed at the CERN Neutrino Platform~\citep{Pietropaolo:2017jlh}. It has an active volume of $7.2\times6.1\times7.0$ m$^{3}$. The TPC wires are read out by 15,360 electric channels at a rate of 2 MHz. A typical event record consists of 6000 time samples, corresponding to a 3\,ms time window. Between October 10 and November 11, 2018, ProtoDUNE-SP was exposed to a beam that delivers charged pions, kaons, protons, muons and electrons with momenta in the range 0.3 GeV/$c$ to 7 GeV/$c$. After the beam runs ended, ProtoDUNE-SP continued to collect cosmic ray and calibration data until July 20, 2020, after which the detector decommissioning started. The total number of trigger records (also called ``events'') during the beam period, which consist of both beam interactions and non-beam interactions such as cosmic rays, is approximately 7.2 million.
A ProtoDUNE-SP TPC waveform recorded by a single electric channel consists of both signals and noise. There are typically three sources of signals. During the beam runs, the beam particles can interact with the liquid argon inside the TPC and produce both ionization electrons and scintillation light. Since ProtoDUNE-SP is located on the Earth’s surface, it is subject to a large flux of cosmic ray muons, which induce signals over the entire detector. There are also radioactive backgrounds such as $^{39}$Ar that generate low energy signals on the scale of a few hundred keV to a few MeV. Figure~\ref{fig:r5772_e15132} shows the event display of a 6 GeV/$c$ pion interaction in the ProtoDUNE-SP detector.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=0.95\textwidth]{figures/R5772_E15132_T1T5T9_w0_480_t3500_5200_sc15.pdf}
\caption{A 6 GeV/$c$ beam $\pi^+$ interaction in the ProtoDUNE-SP detector~\cite{DUNE:2020cqd}. The x axis shows the wire number. The y axis shows the time tick in the unit of 0.5 $\mu$s. The color scale represents the charge deposition.}
\label{fig:r5772_e15132}
\end{figure*}
The first step in the reconstruction of events in the TPC is the signal processing. The goal of this stage is to produce distributions of charge arrival times and positions given the input TPC waveforms. The effects of induced currents
due to drifting and collecting charge, as well as the response of the front-end electronics, are removed through de-convolution. The charge arrival distributions are
used in subsequent reconstruction steps, starting with hit finding. The hit finding algorithm fits peaks in the wire waveforms, where a hit represents a charge deposition on a single wire at a given time. Each hit corresponds to a fitted peak. The hits are input to pattern recognition algorithms such as Pandora~\citep{pandorasdk,pandorauboone,DUNE:2022wlc}. This stage finds the high-level objects associated with particles, like tracks, showers, and vertices, and assembles them into a hierarchy of parent-daughter nodes that ultimately point back to the candidate neutrino interaction. More details on the reconstruction workflow are described in Ref.~\cite{DUNE:2020cqd}.
In ProtoDUNE-SP, a novel algorithm is developed based on a convolutional neural network (CNN) to perform the classification of each reconstructed hit as track-like or arising from electromagnetic cascades~\citep{DUNE:2022fiy}. These hit-level classifications can be used alongside pattern recognition based reconstruction algorithms such as Pandora to refine the track or shower classification of reconstructed particles. The CNN model was trained using TensorFlow~\citep{tensorflow2015-whitepaper}. Hereafter, we call this algorithm EmTrkMichelId.
In order to improve the efficiency and speed of the inference of ML algorithms in a large-scale data processing, GPU acceleration specifically for the ProtoDUNE reconstruction chain has been integrated without disrupting the native computing workflow using the services for optimized network inference on coprocessors (SONIC) approach~\citep{larrecodnn,Wang:2020fjr}. With the integrated framework, the most time-consuming task, track and particle shower hit identification, runs faster by a factor of 17. This results in a factor of 2.7 reduction in the total processing time when compared with CPU-only production. This initial test using a small number of simulated ProtoDUNE events showed a viable, cost-effective way to solve the computing challenge facing the neutrino experiments. In this work, we report the results of reprocessing the entire 7 million ProtoDUNE events taken during the test beam runs with the SONIC-enabled framework.
\subsection{Inference server setup}
The Nvidia\xspace Triton\textsuperscript{\texttrademark}\xspace Inference Server is an open-source inference serving software that helps standardize model deployment and execution; its goal is to deliver fast and scalable AI in production~\citep{NVIDIA_triton_rt}. NVIDIA provides multiple ways to deploy the inference server on different cloud providers and infrastructure types, including both bare metal and containerized workloads.
This study uses a cloud-based deployment of Nvidia\xspace Triton\textsuperscript{\texttrademark}\xspace Inference Server within a Google Cloud Kubernetes Engine~\citep{Google_KubernetesE} cluster on virtual infrastructure provided by Google Cloud Platform. The use of this technology enables us to deploy a flexible GPUaaS model where a public endpoint takes remote inference requests from various geographically distributed sources as depicted in Figure~\ref{fig:ProtoDUNEGPUaaScloud}. The Triton\textsuperscript{\texttrademark} server running on the Google cloud supports different backends. We use the TensorFlow (version 1.15.5) backend for the inference of the EmTrkMichelId algorithm.
\begin{figure*}[htp]
\centering\centering
\includegraphics[width=0.6\textwidth]{figures/ProtoDUNEGPUaaS-Page-1-2.pdf}
\caption{ProtoDUNE GPUaaS component diagram depicting local and remote batch inference runs submitted from Fermilab and OSG Grid sites.}
\label{fig:ProtoDUNEGPUaaScloud}
\end{figure*}
As in Ref.~\citep{Wang:2020fjr}, this study uses several Triton\textsuperscript{\texttrademark}\xspace servers split into separate Kubernetes deployments with common services for networking and external load balancing in the form of ingress objects~\citep{Google_KubernetesE_ingress}. One significant improvement in the current study is the deployment of metrics and monitoring, which provides observability into the system in its different states. In IT and cloud computing, observability is the ability to measure a system's current state based on the data it generates, such as logs, metrics, and traces. It relies on telemetry derived from instrumentation that comes from the endpoints and services in computing environments. Triton\textsuperscript{\texttrademark}\xspace provides a built-in metrics endpoint~\citep{NVIDIA_triton_rt_metrics} that publishes plain-text data in Prometheus format. Prometheus collects and stores the data, which is then displayed by Grafana, as seen in Figure~\ref{fig:ProtoDUNEGrafana100GPU}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.8\textwidth]{figures/ProtoDUNEGrafana100GPU.png}
\caption{A real-time monitoring view of a 100-GPU cluster run for ProtoDUNE (2021).}
\label{fig:ProtoDUNEGrafana100GPU}
\end{figure*}
\subsection{Methods}
The DUNE collaboration undertook a production campaign in 2021 to process ProtoDUNE-SP data using the LArSoft toolkit~\citep{Snider:2017wjd} version v09\_30\_00. Each production run during the beam period comprises several data files, each containing between 100 and 150 data records. In contrast to the previous work, in which DUNE simulation events were processed by submitting jobs locally to a dedicated queue, we submit jobs to process each file via the current standard DUNE workflow management and job submission systems~\citep{duneprod,POMSCHEP}, thus requiring no special treatment. Jobs may run either at Fermilab or one of several remote sites that we reach with opportunistic access enabled by the OSG Consortium~\citep{OSG2007}.
We begin from the existing reconstructed outputs and apply the updated EmTrkMichelId algorithm to produce new outputs. Of the 7.2 million ProtoDUNE events during the 2018 beam period, we process 6.4 million through the SONIC infrastructure, and 800k with the CPU-only version of the same algorithm for comparison. The OSG sites included in the SONIC runs were chosen to be geographically proximate to the location of the Google Cloud GPU servers (which were in Iowa, USA at the time) in order to minimize the latency in data transmissions.
The difference in the time spent in the inference step is the primary metric with which we assess the advantage of GPUaaS over traditional CPU processing. Each job produces a log file that statistically summarizes the time spent on each stage of the event reconstruction for the job as a whole. The log has no record of per-stage processing time at the individual event level, but we can closely approximate it by taking the difference between the start times of consecutive events. We estimate the per-event EmTrkMichelId duration by subtracting the median non-EmTrkMichelId duration from the total event duration, as the non-EmTrkMichelId stages display very little time variation across events. The CNN-based hit classification occurs in the EmTrkMichelId stage and is the most time-consuming step in the event reconstruction, typically accounting for more than 90\% of the processing time.
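The per-event estimate described above can be written compactly; the following Python sketch is illustrative (the names are ours, not part of the production code):

```python
from statistics import median

def emtrk_durations(event_starts, other_stage_durations):
    """Estimate the per-event EmTrkMichelId time: the difference between
    consecutive event start times approximates the total per-event time,
    and the median duration of the non-EmTrkMichelId stages (which vary
    little from event to event) is subtracted as a baseline."""
    totals = [b - a for a, b in zip(event_starts, event_starts[1:])]
    baseline = median(other_stage_durations)
    return [t - baseline for t in totals]
```

For example, start times of 0, 30, and 55 s with a 5 s median baseline yield EmTrkMichelId estimates of 25 and 20 s for the two fully bracketed events.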
\section{Results}
\subsection{CPU-only runs}
We process a set of 13 runs using CPU-based Tensorflow both at Fermilab and several off-site locations. The off-site locations are the University of Notre Dame, the University of Victoria, and the high performance computing center at Wayne State University. The TensorFlow version used in the CPU-only runs is 2.3.1. Table~\ref{tab:cpu_offsite} summarizes the number of events processed at each site and the median processing times. We did not request any specific CPU type when submitting these jobs since typical DUNE practice is to use any and all available CPU types.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{List of CPU-only run sites and median processing time}
\centering
\begin{tabular}{crr}
OSG Site & N samples & Median processing time (s) \\\hline
FermiGrid & 746603 & 79 \\
Notre Dame & 36082 & 68 \\
Victoria & 10944 & 52 \\
Wayne State & 4242 & 45 \\
\end{tabular}
\label{tab:cpu_offsite}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{figures/processing_cpu_1D_CPU.pdf}
\caption{Timing distributions for CPU-only runs, broken down by CPU type.}
\label{fig:cpu_series_timing}
\end{figure}
There is a clear dependence on processor type in the EmTrkMichelId processing time distribution. In general, more recent CPUs process events faster. Figure~\ref{fig:cpu_series_timing} shows the CPU-based EmTrkMichelId timing for each of the CPU types currently available on the Fermilab general purpose batch farm. We do not have access to CPU type information outside of Fermilab and thus group them together.
\subsection{GPU runs}
Our main processing effort uses the GPUaaS infrastructure as described. Figure~\ref{fig:GPUall} shows the average EmTrkMichelId processing time when using the GPUaaS infrastructure for our entire running period. The first peak at approximately 20 s represents a factor of two improvement with respect to the fastest CPU-only runs, and a factor of roughly 11 over the slowest CPU runs. It is important to note that the EmTrkMichelId times we report here are wall times measured within the job, and thus include contributions from network latency to and from the server. There is another peak in the distribution with a median of over 100 s, to which we now turn.
\begin{figure}[htp!]
\centering
\includegraphics[width=0.5\textwidth]{figures/alltimes.pdf}
\caption{Average EmTrkMichelId times for GPU runs during the period September 30, 2021 to October 20, 2021. The double peak structure arises from periods during which the outbound network connection from the Fermilab grid processing center was saturated.}\label{fig:GPUall}
\end{figure}
\subsubsection{Outbound network saturation}
During the first period of GPU running we averaged between 200 and 2000 concurrent jobs. Figure~\ref{fig:nevt_traffic_1D} shows the overlay of network traffic and event processing start rate during the period of September 30, 2021 to October 6, 2021.
As the event start rate increases because of the rise in the number of concurrent jobs, we see that the 100 Gb/s outbound network connection used by the Fermilab data center where the jobs run becomes saturated. While our jobs were not solely responsible for the saturation (the connection serves the entire cluster), the saturation did result in a significant increase in the average EmTrkMichelId processing time as shown in Figure~\ref{fig:michel_traffic}. The highest job concurrency levels were on October 5, when unusually low demand for computing resources from other Fermilab experiments resulted in a large number of opportunistic job slots being available at Fermilab. We were, without any direct intervention, thus able to scale up to approximately 6,000 concurrent jobs. The monitoring does show switch saturation as early as October 1, however. After learning of the network saturation we implemented a concurrency limit on jobs of approximately 600; thereafter the jobs ran without incident and the EmTrkMichelId times returned to pre-saturation levels (see Figure~\ref{fig:GPUafterOct8}).
\begin{figure}[htp!]
\centering
\includegraphics[width=0.5\textwidth]{figures/nevt_traffic_1D_start_rate.pdf}
\caption{Overlay of network traffic and event processing start rate at FermiGrid as a function of time, which is a proxy for the number of concurrent jobs. The origin day is September 30, 2021. The solid line is the event start rate, the blue dot-dash line is the outbound network traffic rate through the 100 Gb/s switch at Fermilab used by the batch processing cluster, and the black dashed line is the ingress rate to the Google cloud server.
We are unable to disambiguate traffic sources through the switch, so the blue dot-dash line represents the total traffic as opposed to only traffic generated by our processing campaign. We see that the network switch was effectively saturated in multiple instances, though Google ingress was not.}\label{fig:nevt_traffic_1D}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=0.45\textwidth]{figures/michel_traffic_out_noNorm.pdf}
\includegraphics[width=0.45\textwidth]{figures/michel_traffic_out_maxNorm.pdf}
\caption{The average EmTrkMichelId duration before Oct. 7 as a function of the total network traffic through the 100 Gb/s network switch at Fermilab used by the batch processing cluster. The top plot shows the real event rate. The bottom plot is the same as the left one, with each column scaled separately so the maximum amplitude is 1 for each column.}\label{fig:michel_traffic}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics[width=0.5\textwidth]{figures/all_after20211008.pdf}
\caption{The average time spent in the EmTrkMichelId task for all GPU jobs after October 8, when the network saturation had subsided.}\label{fig:GPUafterOct8}
\end{figure}
\section{Discussion}
In order to understand the impact of the ProtoDUNE jobs on the Fermilab network traffic, we plot the distribution of event processing start rate versus network traffic in Figure~\ref{fig:ratevstraffic}. Even though the network traffic has contributions from all grid jobs at Fermilab, there is a clear correlation between the number of concurrent ProtoDUNE jobs and the increase in network traffic. We fit a straight line to the data points below a network traffic of 80 Gb/s. The slope of the best-fit line is $4.2\pm0.2$ Gb, which is the average outbound data transmission per event. The intercept is $44\pm2$ Gb/s, which is the average traffic from non-ProtoDUNE grid jobs. Based on the discussion of transmission time in Ref.~\citep{Wang:2020fjr}, for 55,000 inferences per event, with each input a $48 \times 48$ image at 32 bits, the total amount of data transmitted is about 4.1 Gigabits per event. This is consistent with the slope of the best-fit line. The spread in the data with respect to the straight line could be caused by the variation in the number of non-ProtoDUNE grid jobs during this period.
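The consistency of the fitted slope with the expected per-event payload is easy to verify (illustrative Python):

```python
inferences_per_event = 55_000
bits_per_input = 48 * 48 * 32          # one 48x48 input image at 32 bits
gbits_per_event = inferences_per_event * bits_per_input / 1e9
# ~4.1 Gb per event, consistent with the fitted slope of 4.2 +/- 0.2 Gb
print(gbits_per_event)
```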
\begin{figure*}[!htp]
\centering
\includegraphics[width=1\textwidth]{figures/Traffic_NEvt_MichelDuration_5_7.pdf}
\caption{The outbound network traffic vs. the average event start rate per second in 2-minute sliding windows, on October 5 and October 6. Data from each day is denoted with a different marker type. The color coding corresponds to the median EmTrkMichelId time for events in each sliding window. The linear fit to the traffic below $80$ Gb/s indicates that each event sends $4.2\pm 0.2$ Gb of outbound traffic, on top of $44\pm2$ Gb/s of baseline traffic from non-ProtoDUNE sources. }\label{fig:ratevstraffic}
\end{figure*}
Figure~\ref{fig:GPUafterOct8} indicates that the average processing time is roughly 25 s/event for the GPU jobs. Assuming the entire 100 Gb/s bandwidth is available to the ProtoDUNE jobs, the maximum number of concurrent ProtoDUNE jobs we can run without saturating the network is $(100~\text{Gb/s})/(4.1~\text{Gb/event})\cdot(25~\text{s/event}) \simeq 600$. This is consistent with the concurrency limit of 600 jobs that we implemented after October 7.
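The concurrency estimate is a one-line calculation; a sketch using the figures quoted in the text:

```python
# Maximum ProtoDUNE job concurrency before saturating the 100 Gb/s switch,
# using the numbers quoted in the text.
bandwidth_gbps = 100      # switch capacity, Gb/s
gb_per_event = 4.1        # outbound data per event, Gb
seconds_per_event = 25    # average GPU-job processing time, s/event

# Each job injects gb_per_event / seconds_per_event Gb/s on average.
max_jobs = bandwidth_gbps / gb_per_event * seconds_per_event
print(round(max_jobs))    # ~610, consistent with the 600-job limit adopted
```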
Based on the above discussions, we conclude that, while overall computational time clearly decreases using GPUaaS, one does have to take particular care to
understand what the expected data movement requirements will be for jobs using this architecture, and to set job concurrency limits appropriate to the capabilities of each local computing site and input data source. HTCondor~\cite{condorgrid,BOCKELMAN2020101213} in particular has the ability to define an arbitrary kind of resource that each job requires; one could define a ``bandwidth'' resource for these jobs, for example. HTCondor additionally allows configuring job submission to prevent more jobs from starting at a given site once the sum of the resources consumed by running jobs at that site reaches a certain threshold. Therefore, if one knows the total network capacity of each site hosting jobs, one can configure per-site job limits and prevent network saturation in an automated way.
\subsection{Future improvements}
A number of improvements to overall scalability and ease of use are possible. In addition to automatic job concurrency limits to prevent network saturation as previously described, we are exploring the possibility of compressing the data sent to the GPU server to reduce the overall bandwidth requirements. While a reduced payload would obviously increase job concurrency limits, that must be balanced against the additional run time that would be introduced in compressing and decompressing the data on the worker node and server, respectively. Another desirable area of improvement is in overall ease of use and human effort requirements. In the current setup we make use of the standard DUNE Production job submission infrastructure, which allows for a high degree of automated job submission, but due to the current nature of the cloud server it requires an authorized individual to manually instantiate the GPU inference server before we submit jobs. Establishing a method of automatically instantiating the server at job submission time and automatically ramping it down when the associated jobs are complete would avoid a clear possible failure point should no authorized individuals be available when the infrastructure is needed.
A second option to study is to use several geographically distributed inference servers instead of a single server, while also spreading the job workload over a much broader range of sites. Expanding the site pool has the advantage of making it much less likely that any single site would get enough work assigned to saturate its external connectivity, and using several inference servers spread around the world would help to mitigate the potential problem of network latency becoming comparable to the inference time. The cost changes in this scenario (for example, the relative cost of three cloud servers versus a single server three times the size) must be assessed and taken into account. Another consideration is how the overall event processing times would change if the worker nodes were much more geographically diffuse than they were for this study. Since we stream the input data over the network, longer network paths between the worker nodes and input data sources may lead to the non-EmTrkMichelId portions of the event processing taking longer, which in turn affects the total event processing time. DUNE is able to distribute data to various storage elements around the world via the Rucio framework~\citep{Barisits2019}, and pre-placing the data of interest at storage elements close to the sites to be used for processing may mitigate such concerns, though it is not required.
Another potential avenue is to use the GPU server infrastructure, but to use sites with GPUs available on the worker nodes, and run an independent server on each worker node.
Several high-performance computing sites have built or are building clusters with readily available GPUs, and in some cases with multiple GPUs on each worker node, that would naturally lend themselves to such a setup.
If the jobs run on worker nodes with local GPUs, external network connectivity limitations become unimportant for carrying out the inference calculations.
In fact, Triton\textsuperscript{\texttrademark}\xspace allows the use of shared memory for direct data transfer between CPU and GPU when the GPU is local. While it may not be necessary to retain the server infrastructure in these cases, the advantage of doing so is that the experiment software does not have to be modified to directly access the GPU, making it maximally portable and easier to maintain. We plan to conduct a similar study using this type of setup in the future.
\section{Summary}
We have reprocessed approximately seven million data events from the ProtoDUNE detector installed at CERN. We use an Nvidia\xspace Triton\textsuperscript{\texttrademark}\xspace inference server hosted on the Google Cloud Platform to run the most computationally expensive step of the workflow on a GPU, speeding up the required processing time by more than a factor of two, even compared to the fastest CPU runs. Running at a scale similar to that expected during regular ProtoDUNE-II and DUNE operations, we see the expected performance improvement until the network switch through which the majority of our jobs communicate becomes saturated. Despite that, the cloud infrastructure easily kept up with demand and demonstrates the viability of the GPUaaS model at a level sufficient for current and future high-energy physics experiments, as long as the job concurrency levels at each site respect the site's network resource limits. With several promising avenues of improvement to explore, we expect that this computing model will become even more capable and easier to use in the future.
\section*{Author Contributions}
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Tejin Cai, Kenneth Herner, and Tingjun Yang. The first draft of the manuscript was prepared by Tejin Cai, Maria Acosta Flechas, Kenneth Herner, Kevin Pedro, Nhan Tran, and Tingjun Yang. All authors read and approved the final manuscript.
\section*{Acknowledgments}
We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community was important for the development of this project. We acknowledge the DUNE collaboration for providing the ProtoDUNE-SP code base and data samples. The analysis is enabled in part by the Digital Research Alliance of Canada.
\section*{Declarations}
\subsection*{Competing Interests}
The authors have no competing interests to declare that are relevant to the content of this article.
\subsection*{Data Availability}
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
\subsection*{Funding}
MF, KH, BH, KP, NT, MW, and TY are supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. NT is partially supported by the U.S. Department of Energy Early Career Award. KP is partially supported by the High Velocity Artificial Intelligence grant as part of the U.S. Department of Energy High Energy Physics Computational HEP program. PH is supported by NSF grants \#1934700, \#193146. Cloud credits for this study were provided by Internet2 managed Exploring Cloud to accelerate Science (NSF grant PHY-190444). TC is supported by NSERC Canada.
\section{Introduction}\label{sec:introduction}
Traditionally, the field of Automated Planning deals with
the problem of generating a sequence of actions---a plan---that transforms an
initial state of the environment to some goal state, see for instance \cite{0014222,GNT2016}. Actions, in plain words,
stand for modifiers of the environment. One interesting question is whether the
effects of an action are reversible (by other actions), or in other words,
whether the action effects can be undone. Notions of reversibility have
previously been investigated, most notably by \cite{japll:EiterEF08} and by \cite{icaps:DaumT0HW16}.
Studying action reversibility is important for several reasons. Intuitively,
actions whose effects cannot be reversed might lead to dead-end states from
which the goal state is no longer reachable. Early detection of a dead-end state
is beneficial in a plan generation process, as shown by \cite{LipovetzkyMG16}. Reasoning in
more complex structures such as Agent Planning Programs~\citep{GiacomoGPSS16},
which represent networks of planning tasks where a goal state of one task is an
initial state of another task, is even more prone to dead-ends, as shown by~\cite{ChrpaLS17}.
Concerning non-deterministic planning, for instance Fully Observable
Non-Deterministic (FOND) Planning, where actions have non-deterministic effects,
determining reversibility or irreversibility of each set of effects of the
action can contribute to early dead-end detection, or to generalise recovery
from undesirable action effects, which is important for efficient computation of
strong (cyclic) plans, cf.~\citep{CamachoMM16}. Concerning online planning, we can
observe that applying reversible actions is safe and hence we might not need to
explicitly provide the information about safe states of the
environment~\citep{CsernaDRR18}. Another, although not very obvious, benefit of
action reversibility is in plan optimization. If the effects of an action are
later reversed by a sequence of other actions in a plan, these actions might be
removed from the plan, potentially shortening it significantly. It has been
shown by \cite{ChrpaMO12} that under given circumstances, pairs of inverse actions, which are a
special case of action reversibility, can be removed from
plans.
\cite{MorakCFF20} introduced a general framework for action reversibility
that offers a broad definition of the term, and generalises many of the already
proposed notions of reversibility, like ``undoability'' proposed by
\cite{icaps:DaumT0HW16}, or the concept of
``reverse plans'' as introduced by \cite{japll:EiterEF08}. The concept of reversibility of \cite{MorakCFF20}
directly incorporates the set of states in which a given action should be
reversible. We call these notions $S$-reversibility and $\phi$-reversibility,
where the set $S$ contains states, and the formula $\phi$ describes a set of
states in terms of propositional logic. These notions are then further
refined to universal reversibility (referring to the set of all states) and to
reversibility in some planning task $\Pi$ (referring to the set of all reachable
states w.r.t.\ the initial state specified in $\Pi$). These last two versions
match the ones proposed by \cite{icaps:DaumT0HW16}. Furthermore, these notions can be
restricted to require that an action is reversible by a single ``reverse
plan'' that is independent of the state for which the action is reversible. For
single actions, this matches the concept of the same name proposed by
\cite{japll:EiterEF08}.
The complexity analysis of \cite{MorakCFF20} indicates that some of these problems can be addressed by means of Answer Set Programming (ASP), but also by means of Epistemic Logic Programs (ELPs). In this paper, we
leverage the translations implemented in plasp~\citep{DimopoulosGLRS19}, and
produce ASP and ELP encodings to effectively solve some of the reversibility problems on PDDL domains,
restricted, for now, to the STRIPS fragment~\citep{Fikes71}. The encodings differ considerably in their generality and extensibility, and we discuss their respective advantages and disadvantages.
We also present preliminary experiments that compare the various encodings, highlighting a trade-off between extensibility and efficiency.
\section{Preliminaries}\label{sec:preliminaries}
\paragraph{STRIPS Planning.}
Let $\mathcal{F}$ be a set of \emph{facts}, that is, propositional variables describing
the environment, which can either be true or false. Then, a subset $s \subseteq
\mathcal{F}$ is called a \emph{state}, which intuitively represents a set of facts
considered to be true. An action is a tuple $a = \langle \precond{a},
\addeffects{a}, \deleffects{a} \rangle$, where $\precond{a} \subseteq \mathcal{F}$ is
the set of \emph{preconditions} of $a$, and $\addeffects{a} \subseteq \mathcal{F}$ and
$\deleffects{a} \subseteq \mathcal{F}$ are the add and delete effects of $a$,
respectively. W.l.o.g., we assume actions to be well-formed, that is,
$\addeffects{a} \cap \deleffects{a} = \emptyset$ and $\precond{a} \cap
\addeffects{a} = \emptyset$. An action $a$ is \emph{applicable} in a state $s$
iff $\precond{a} \subseteq s$. The result of applying an action $a$ in a state
$s$, given that $a$ is applicable in $s$, is the state $\applyactions{a}{s} = (s
\setminus \deleffects{a}) \cup \addeffects{a}$. A sequence of actions $\pi =
\langle a_1, \ldots, a_n \rangle$ is applicable in a state $s_0$ iff there is a
sequence of states $\langle s_1, \ldots, s_n \rangle$ such that, for $0 < i \leq
n$, it holds that $a_i$ is applicable in $s_{i-1}$ and
$\applyactions{a_i}{s_{i-1}} = s_i$. Applying the action sequence $\pi$ on $s_0$
is denoted $\applyactions{\pi}{s_0}$, with $\applyactions{\pi}{s_0} = s_n$. The
\emph{length} of action sequence $\pi$ is denoted $|\pi|$.
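The definitions above translate directly into code; the following is a minimal, illustrative rendering of STRIPS semantics (our own representation: states as frozen sets of fact names, actions as precondition/add/delete triples), not part of the encodings discussed later:

```python
from typing import FrozenSet, NamedTuple, Optional, Sequence

class Action(NamedTuple):
    pre: FrozenSet[str]   # preconditions
    add: FrozenSet[str]   # add effects
    dele: FrozenSet[str]  # delete effects ("del" is a Python keyword)

def applicable(a: Action, s: FrozenSet[str]) -> bool:
    return a.pre <= s

def apply_action(a: Action, s: FrozenSet[str]) -> FrozenSet[str]:
    """Compute a[s] = (s - del(a)) | add(a); requires a applicable in s."""
    assert applicable(a, s)
    return (s - a.dele) | a.add

def apply_seq(pi: Sequence[Action], s: FrozenSet[str]) -> Optional[FrozenSet[str]]:
    """Compute pi[s], or None if the sequence is not applicable in s."""
    for a in pi:
        if not applicable(a, s):
            return None
        s = apply_action(a, s)
    return s

# One-fact toy domain: an action deleting f and one adding it back.
del_f = Action(frozenset({"f"}), frozenset(), frozenset({"f"}))
add_f = Action(frozenset(), frozenset({"f"}), frozenset())
print(apply_seq([del_f, add_f], frozenset({"f"})) == frozenset({"f"}))  # True
```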
A \emph{STRIPS planning task}
$\Pi = \langle \mathcal{F}, \mathcal{A}, s_0, G \rangle$
is a four-element tuple consisting of a set of \emph{facts} $\mathcal{F} = \{f_1, \ldots,
f_n\}$, a set of \emph{actions} $\mathcal{A} = \{a_1, \ldots, a_m\}$, an
\emph{initial state} $s_0 \subseteq \mathcal{F}$, and a \emph{goal} $G \subseteq \mathcal{F}$. A state $s \subseteq \mathcal{F}$ is a
\emph{goal state (for $\Pi$)} iff $G \subseteq s$. An action sequence $\pi$ is
called a \emph{plan} iff $\applyactions{\pi}{s_0} \supseteq G$. We further
define several relevant notions w.r.t.\ a planning task $\Pi$. A state $s$ is
\emph{reachable from state $s'$} iff there exists an applicable action sequence
$\pi$ such that $\applyactions{\pi}{s'} = s$. A state $s \in 2^\mathcal{F}$ is simply
called \emph{reachable} iff it is reachable from the initial state $s_0$. The
set of all reachable states in $\Pi$ is denoted by $\reachablein{\Pi}$. An
action $a$ is \emph{reachable} iff there is some state $s \in \reachablein{\Pi}$
such that $a$ is applicable in $s$.
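Since states are finite subsets of $\mathcal{F}$, the reachable set $\reachablein{\Pi}$ can be enumerated by a standard breadth-first fixpoint; a brute-force sketch (our own code, practical only for very small fact sets):

```python
from collections import deque

def reachable(actions, s0):
    """Enumerate all states reachable from s0 by breadth-first search.
    Actions are (pre, add, delete) triples of frozensets of fact names."""
    seen = {s0}
    queue = deque([s0])
    while queue:
        s = queue.popleft()
        for pre, add, dele in actions:
            if pre <= s:                  # action applicable in s
                t = (s - dele) | add      # successor state a[s]
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return seen

# One-fact toy domain: from the empty initial state, both states are reachable.
f = frozenset({"f"})
del_f = (f, frozenset(), f)
add_f = (frozenset(), f, frozenset())
print(sorted(len(s) for s in reachable([del_f, add_f], frozenset())))  # [0, 1]
```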
Deciding whether a STRIPS planning task has a plan is known to be
\ensuremath{\textsc{PSpace}}-complete in general and it is \ensuremath{\textsc{NP}}-complete if the length of the plan is polynomially bounded~\citep{ai:Bylander94}.
\paragraph{Epistemic Logic Programs (ELPs) and Answer Set Programming (ASP).} We
assume the reader is familiar with ELPs and will only give a very brief overview
of the core language. For more information, we refer to the original paper
proposing ELPs~\citep{aaai:Gelfond91}, therein named \emph{Epistemic
Specifications}, whose semantics we will use in the present paper.
Briefly, ELPs consist of sets of \emph{rules} of the form
\[
a_1 \vee \ldots \vee a_n \gets \ell_1, \ldots, \ell_m.
\]
In these rules, all $a_i$ are \emph{atoms} of the form $p(t_1, \ldots, t_n)$,
where $p$ is a predicate name, and $t_1, \ldots, t_n$ are terms, that is, either
variables or constants. Each $\ell$ is either an objective or subjective
literal, where objective literals are of the form $a$ or $\neg a$ ($a$ being an atom), and subjective literals are of the form $\mathbf{K}\, l$ or $\neg \mathbf{K}\, l$, where $l$
is an objective literal. Note that often the operator $\mathbf{M}\,$ is also used, which
we will simply treat as a shorthand for $\neg \mathbf{K}\, \neg$.
The domain of constants in an ELP $P$ is given implicitly by the set of all
constants that appear in it. Generally, before evaluating an ELP program,
variables are removed by a process called \emph{grounding}, that is, for every
rule, each variable is replaced by all possible combination of constants, and
appropriate ground copies of the rule are added to the resulting program
$\mathit{ground}(P)$. In practice, several optimizations have been implemented
in state-of-the-art systems that try to minimize the size of the grounding.
The result of a (ground) ELP program $P$ is calculated as follows~\citep{aaai:Gelfond91}. An \emph{interpretation} $I$ is a set of ground atoms
appearing in $P$. A set of interpretations $\mathcal{I}$ satisfies a subjective
literal $\mathbf{K}\, l$ (denoted $\mathcal{I} \models \mathbf{K}\, l$) iff the objective literal $l$ is
satisfied in all interpretations in $\mathcal{I}$. The \emph{epistemic reduct}
$P^\mathcal{I}$ of $P$ w.r.t.\ $\mathcal{I}$ is obtained from $P$ by replacing all
subjective literals $\ell$ with either $\top$ in case where $\mathcal{I} \models
\ell$, or with $\bot$ otherwise. $P^\mathcal{I}$, therefore, is an ASP program, that
is, a program without subjective literals. The solutions to an ELP $P$ are
called \emph{world views}. A set of interpretations $\mathcal{I}$ is a world view of
$P$ iff $\mathcal{I} = \answersets{P^\mathcal{I}}$, where
$\answersets{P^\mathcal{I}}$ denotes the set of stable models (or answer sets) of the
logic program $P^\mathcal{I}$ according to the semantics of answer set programming~\citep{ngc:GelfondL91}. Checking whether a world view exists for an ELP is known
to be \SIGMA{P}{3}-complete in general, as shown by \cite{birthday:Truszczynski11}.
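To illustrate the reduct construction, the following toy snippet computes $P^\mathcal{I}$ for ground rules in a hand-rolled representation (heads, objective body, $\mathbf{K}\,$-literals, negated $\mathbf{K}\,$-literals; classical negation written as a \texttt{-} prefix). It is not an ELP solver, only the reduct step:

```python
def holds(lit, m):
    """Truth of an objective literal in interpretation m; classical negation
    is written as a '-' prefix on the atom name."""
    return lit[1:] not in m if lit.startswith("-") else lit in m

def epistemic_reduct(rules, interps):
    """Compute the epistemic reduct P^I of a ground program.
    A rule is (head, objective_body, k_literals, not_k_literals); a subjective
    literal K l is replaced by "true" iff l holds in every interpretation of
    the candidate world view `interps`, and a rule whose body became "false"
    is dropped entirely."""
    reduct = []
    for head, obj, k, not_k in rules:
        k_true = all(all(holds(l, m) for m in interps) for l in k)
        not_k_true = all(not all(holds(l, m) for m in interps) for l in not_k)
        if k_true and not_k_true:
            reduct.append((head, obj))  # subjective part vanished
    return reduct

# The classic program "a :- M a", i.e. "a :- not K -a":
prog = [(["a"], [], [], ["-a"])]
print(epistemic_reduct(prog, [{"a"}]))  # [(['a'], [])] -- the fact "a."
print(epistemic_reduct(prog, [set()]))  # []            -- the empty program
# Both candidates pass the fixpoint test under the 1991 semantics:
# AS({a.}) = {{a}} and AS({}) = {{}}, so {{a}} and {{}} are both world views.
```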
\section{Reversibility of Actions}\label{sec:reversibility}
In this section, we focus on the notion of uniform reversibility, which is a subclass of action reversibility as explained in detail by \cite{MorakCFF20}.
Intuitively, we call an action reversible if there is a way to undo all the
effects that this action caused, and we call an action \emph{uniformly
reversible} if its effects can be undone by a single sequence of actions
irrespective of the state where the action was applied.
While this intuition is fairly straightforward, when formally defining this
concept, we also need to take several other factors into account---in
particular, the set of possible states where an action is considered plays
an important role~\citep{MorakCFF20}.
\begin{definition}\label{def:uniformreversibility}
%
Let $\mathcal{F}$ be a set of facts, $\mathcal{A}$ be a set of actions, $S \subseteq 2^\mathcal{F}$ be
a set of states, and $a \in \mathcal{A}$ be an action. We call $a$
\emph{uniformly $S$-reversible} iff there exists a sequence of actions $\pi = \langle a_1,
\ldots, a_n \rangle \in \mathcal{A}^n$ such that for each $s \in S$ wherein $a$ is
applicable it holds that $\pi$ is applicable in $\applyactions{a}{s}$ and
$\applyactions{\pi}{\applyactions{a}{s}} = s$.
%
\end{definition}
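Definition~\ref{def:uniformreversibility} suggests an obvious (exponential) brute-force procedure: enumerate candidate reverse plans up to a length bound and test each against every state in $S$ where $a$ is applicable. An illustrative sketch (our own code, not one of the encodings discussed later):

```python
from itertools import product

def apply_seq(pi, s):
    """Apply an action sequence to state s; None if it is not applicable."""
    for pre, add, dele in pi:
        if not pre <= s:
            return None
        s = (s - dele) | add
    return s

def uniformly_reversible(a, actions, states, max_len):
    """Search for one reverse plan (up to length max_len) that undoes a
    in *every* state of `states` where a is applicable (the definition above)."""
    pre, _, _ = a
    relevant = [s for s in states if pre <= s]
    for n in range(max_len + 1):
        for pi in product(actions, repeat=n):
            if all(apply_seq(list(pi), apply_seq([a], s)) == s for s in relevant):
                return list(pi)
    return None  # no reverse plan within the length bound

# Toy domain: del-f is uniformly reversible via the one-step plan <add-f>,
# while add-f is not (states with and without f collapse to one successor).
f = frozenset({"f"})
del_f = (f, frozenset(), f)            # pre {f}, no add, del {f}
add_f = (frozenset(), f, frozenset())  # no pre, add {f}, no del
states = [frozenset(), f]              # S = 2^F here, i.e. universal reversibility
print(uniformly_reversible(del_f, [del_f, add_f], states, max_len=2) == [add_f])  # True
```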
The notion of uniform reversibility in the
most general sense does not depend on a concrete STRIPS planning task, but only
on a set of possible actions and states w.r.t.\ a set of facts. Note that the
set of states $S$ is an explicit part of the notion of uniform $S$-reversibility.
Based on this general notion, it is then possible to define several concrete
sets of states $S$ that are useful when asking whether an
action is reversible.
For instance, $S$ could be defined via a propositional formula over the
facts in $\mathcal{F}$.
Alternatively, we can consider the set of all possible states ($2^\mathcal{F}$), which gives us
a notion of universal reversibility that applies to all possible planning
tasks that share the same set of facts and actions (i.e., tasks that
differ only in the initial state or goals).
Finally, we can turn our attention to a specific STRIPS instance and ask whether
a certain action is uniformly reversible for all states reachable from the
initial state.
\begin{definition}\label{def:uniformreversibility:names}
%
Let $\mathcal{F}$, $\mathcal{A}$, $S$, and $a$ be as in
Definition~\ref{def:uniformreversibility}. We call the action $a$
%
\begin{enumerate}
%
\item \emph{uniformly $\phi$-reversible} iff $a$ is uniformly $S$-reversible in the set $S$ of
models of the propositional formula $\phi$ over $\mathcal{F}$;
%
\item \emph{uniformly reversible in $\Pi$} iff $a$ is uniformly $\reachablein{\Pi}$-reversible for a given
STRIPS planning task $\Pi$; and
%
\item \emph{universally uniformly reversible}, or, simply, \emph{uniformly reversible}, iff $a$
is uniformly $2^\mathcal{F}$-reversible.
%
\end{enumerate}
%
\end{definition}
Given the above definitions, we can already observe some interrelationships. In
particular, universal uniform reversibility (that is, uniform
reversibility in the set of all possible
states) is obviously the strongest notion, implying all the other, weaker
notions. It may be particularly important when one wants to establish
uniform reversibility
irrespective of the concrete STRIPS instance. On the other hand,
$\phi$-reversibility may be of particular interest when $\phi$ encodes the
natural domain constraints for a given planning task.
%
%
%
%
%
The notion of uniform reversibility naturally gives rise to the notion of the
reverse plan. We say that some action $a$ has an \emph{($S$-)reverse plan} $\pi$
iff $a$ is uniformly ($S$-)reversible using the sequence of actions $\pi$. It is
interesting to note that this definition of the reverse plan based on uniform
reversibility now coincides with the same notion as defined by
\cite{japll:EiterEF08}. Note, however, that
in that paper the authors use a much more general planning language.
Even if the length of the reverse plan is polynomially bounded, the problem of
deciding whether an action is uniformly ($\phi$-)reversible is intractable. In
particular, deciding whether an action is universally uniformly reversible
(resp.\ uniformly $\phi$-reversible) by a polynomial-length reverse plan is
\ensuremath{\textsc{NP}}-complete (resp.\ in $\SIGMA{P}{2}$)~\citep{MorakCFF20}.
\section{Methods}\label{sec:methods}
After reviewing the relevant features of \emph{plasp}, described by \cite{DimopoulosGLRS19}, in
Section~\ref{sec:plasp}, we present our encodings for determining reversibility
in Section~\ref{sec:asprev}.
\subsection{The \emph{plasp} Format}\label{sec:plasp}
The system \emph{plasp}, described by \cite{DimopoulosGLRS19}, transforms PDDL domains and
problems into facts. Together with suitable programs, plans can then be computed
by ASP solvers---and hence also by ELP solvers, since ELPs are a superset of ASP
programs.
Given a STRIPS domain with facts $\mathcal{F}$ and actions $\mathcal{A}$, the following relevant facts and rules will be created by \emph{plasp}:
\begin{itemize}
%
\item \verb!variable(variable("f")).! for all \verb!f! $\in \mathcal{F}$
%
\item \verb!action(action("a")).! for all \verb!a! $\in \mathcal{A}$
%
\item \verb!precondition(action("a"),variable("f"),value(variable("f"),true))!\\
\verb!:- action(action("a")).!\\ for each \verb!a! $\in \mathcal{A}$ and \verb!f! $\in \precond{\mbox{a}}$
\item \verb!postcondition(action("a"),effect(unconditional),variable("f"),!\\
\verb! value(variable("f"),true)) :- action(action("a")).!\\ for each \verb!a! $\in \mathcal{A}$ and \verb!f! $\in \addeffects{\mbox{a}}$
\item \verb!postcondition(action("a"),effect(unconditional),variable("f"),!\\
\verb! value(variable("f"),false)) :- action(action("a")).!\\ for each \verb!a! $\in \mathcal{A}$ and \verb!f! $\in \deleffects{\mbox{a}}$
\end{itemize}
In addition, a predicate \verb!contains! encodes all possible values for a given
variable (for STRIPS, this is either true or false).
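The mapping from a STRIPS action to these facts is mechanical; an illustrative emitter following the bullet items above (simplified: rule layout and whitespace differ from \emph{plasp}'s actual output):

```python
def plasp_facts(name, pre, add, dele):
    """Emit a simplified version of plasp's ground representation of one
    STRIPS action (rule layout/whitespace differ from plasp's real output)."""
    body = f':- action(action("{name}")).'
    rules = [f'action(action("{name}")).']
    for v in pre:
        rules.append(f'precondition(action("{name}"),variable("{v}"),'
                     f'value(variable("{v}"),true)) {body}')
    for v, val in [(v, "true") for v in add] + [(v, "false") for v in dele]:
        rules.append(f'postcondition(action("{name}"),effect(unconditional),'
                     f'variable("{v}"),value(variable("{v}"),{val})) {body}')
    return rules

for line in plasp_facts("del-f", pre=["f"], add=[], dele=["f"]):
    print(line)
```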
\begin{example}
The STRIPS domain with $\mathcal{F}=\{f\}$ and actions $del$-$f=\langle\{f\},\emptyset,\{f\}\rangle$ and $add$-$f=\langle\emptyset,\{f\},\emptyset\rangle$ is written in PDDL as follows:
{\small
\begin{verbatim}
(define (domain example1)
(:requirements :strips)
(:predicates (f) )
(:action del-f
:precondition (f)
:effect (not (f)))
(:action add-f
:effect (f)))
\end{verbatim}
}
\emph{plasp} translates this domain to the following rules (plus a few technical facts and rules):
{\small
\begin{verbatim}
variable(variable("f")).
action(action("del-f")).
precondition(action("del-f"), variable("f"), value(variable("f"), true)) :-
action(action("del-f")).
postcondition(action("del-f"), effect(unconditional), variable("f"),
value(variable("f"), false)) :- action(action("del-f")).
action(action("add-f")).
postcondition(action("add-f"), effect(unconditional), variable("f"),
value(variable("f"), true)) :- action(action("add-f")).
\end{verbatim}
}
\end{example}
\subsection{Reversibility Encodings using ASP and ELPs}\label{sec:asprev}
In this section, we present our ASP and ELP encodings for checking whether, in a
given domain, there is an action that is uniformly reversible. As we have seen
in Section~\ref{sec:plasp}, the \emph{plasp} tool is able to rewrite STRIPS
domains into ASP rules even when no concrete planning instance for that domain
is given. We will present two encodings, one for (universal) uniform
reversibility, and one that can be used for uniform $\phi$-reversibility.
Note that \emph{universal} uniform reversibility is computationally easier than
uniform $\phi$-reversibility (under standard complexity-theoretic assumptions).
For a given action (and polynomial-length reverse plans), the former can be
decided in \ensuremath{\textsc{NP}}, while the latter is harder (Theorem~18 and~20 in \cite{MorakCFF20}).
We will hence start with the encoding for the former problem, which follows a
standard guess-and-check pattern.
\subsubsection{Universal Uniform Reversibility}
The encodings are based on \verb!sequential-horizon.lp! in the \emph{plasp} distribution.
\paragraph{ELP Encoding.} As a ``database'' the encoding
takes the output of \emph{plasp}'s translate action (for details, see~\citep{DimopoulosGLRS19}). The
problem can be solved in \ensuremath{\textsc{NP}}\ due to the following Observation (*): in any
(universal) reverse plan for some action $a$, it is sufficient to consider only
the set of facts that appear in the precondition of $a$. If any action in a
candidate reverse plan $\pi$ for $a$ (resp.\ $a$ itself) contains any other fact
than those in $\precond{a}$, then $\pi$ cannot be a reverse plan for $a$ (resp.\
$a$ is not uniformly reversible), see Theorem~18 in \cite{MorakCFF20} or Theorem~3 in \cite{ChrpaFM21}. With this
observation in mind, we can now describe the (core parts of) our
encodings\footnote{The full encodings are available here:
\url{https://seafile.aau.at/d/cd4cb0d65d124a619920/}.}. We start with our ELP
encoding and will explain later how to modify it to obtain a plain ASP encoding. We should note that here the epistemic operators are used in the same way that choice rules are used in ASP. We did this both to gauge the computational overhead of using ELPs rather than plain ASP, and in preparation for the uniform $\phi$-reversibility encoding.
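Observation (*) is what the encoding exploits: for universal uniform reversibility it suffices to simulate a candidate plan on the single state $s_0 = \precond{a}$, restricting attention to actions that mention only facts from $\precond{a}$. A brute-force rendering of exactly this check (our own illustrative code, mirroring what the encoding does symbolically):

```python
from itertools import product

def apply_seq(pi, s):
    """Apply an action sequence to state s; None if it is not applicable."""
    for pre, add, dele in pi:
        if not pre <= s:
            return None
        s = (s - dele) | add
    return s

def universal_reverse_plan(a, actions, horizon):
    """Brute-force universal reverse-plan search using Observation (*):
    only facts in pre(a) matter, and testing the candidate plan on the
    single state s0 = pre(a) suffices."""
    pre, add, dele = a
    if (add | dele) - pre:   # a touches a fact outside pre(a):
        return None          # it cannot be universally uniformly reversible
    usable = [b for b in actions if not ((b[0] | b[1] | b[2]) - pre)]
    s0 = frozenset(pre)
    after_a = apply_seq([a], s0)
    for n in range(horizon + 1):
        for pi in product(usable, repeat=n):
            if apply_seq(list(pi), after_a) == s0:
                return list(pi)
    return None

f = frozenset({"f"})
del_f = (f, frozenset(), f)
add_f = (frozenset(), f, frozenset())
print(universal_reverse_plan(del_f, [del_f, add_f], horizon=2) == [add_f])  # True
print(universal_reverse_plan(add_f, [del_f, add_f], horizon=2))             # None
```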
The ELP encoding makes use of the following main predicates (in addition to
several auxiliary predicates, as well as those imported from \emph{plasp}):
\begin{itemize}
%
\item \verb!chosen/1! encodes the action to be tested for reversibility.
%
\item \verb!holds/3! encodes that some fact (or variable, as they are called
in \emph{plasp} parlance) is set to a certain value at a given time step.
%
\item \verb!occurs/2! encodes the candidate reverse plan, saying which action
occurs at which time step.
\end{itemize}
With the intuitive meaning of the predicates defined, we first choose a single
action from the available actions and set the initial state as the facts in the
precondition of the chosen action. The first two lines partition the actions into chosen and unchosen ones; since it is a ``modal guess,'' there will be one world view for each partition. The third line makes sure that there is at most one chosen action, and lines 4 and 5 enforce at least one chosen action. The last rule states, in line with Observation (*)
above, that only those variables in the precondition are relevant to check for a
reverse plan.
\small
\begin{verbatim}
chosen(A) :- action(action(A)), not &k{-chosen(A)}.
-chosen(A) :- action(action(A)), not &k{ chosen(A)}.
:- chosen(A), chosen(B), A!=B.
onechosen :- chosen(A).
:- not onechosen.
holds(V, Val, 0) :-
chosen(A), precondition(action(A), variable(V), value(variable(V), Val)).
relevant(V) :- holds(V, _, 0).
\end{verbatim}
\normalsize
These rules set the stage for the inherent planning problem to be solved to find
a reverse plan. In fact, from the initial state guessed above, we need to find a
plan $\pi$ that starts with action $a$ (the chosen action), such that after
executing $\pi$ we end up in the initial state again. Such a plan is a
(universal) reverse plan. This idea is encoded in the following:
\small
\begin{verbatim}
time(0..horizon+1).
occurs(A, 1) :- chosen(A).
occurs(A, T) :- action(action(A)),time(T), T > 1, not &k{-occurs(A, T)}.
-occurs(A, T) :- action(action(A)),time(T), T > 1, not &k{occurs(A, T)}.
:- occurs(A,T), occurs(B,T), A!=B.
oneoccurs(T) :- occurs(A,T), time(T), T > 0.
:- time(T), T>0, not oneoccurs(T).
caused(V, Val, T) :-
occurs(A, T), postcondition(action(A), _, variable(V), value(variable(V), Val)).
modified(V, T) :- caused(V, _, T).
holds(V, Val, T) :- caused(V, Val, T).
holds(V, Val, T) :- holds(V, Val, T - 1), not modified(V, T), time(T).
\end{verbatim}
\normalsize
The above rules guess a potential plan $\pi$ using the same technique as above, and then
execute the plan on the initial state (changing facts if this is caused by the
application of an action, and keeping facts unchanged if they were not modified).
Finally, we simply need to check that the plan is (a) executable, and (b) leads
from the initial state back to the initial state. This can be done with the
following constraints:
\small
\begin{verbatim}
:- occurs(A, T), precondition(action(A), variable(V), value(variable(V), Val)),
not holds(V, Val, T - 1).
:- occurs(A, T), precondition(action(A), variable(V), _), not relevant(V).
:- occurs(A, T), postcondition(action(A), _, variable(V), _), not relevant(V).
noreversal :- holds(V, Val, 0), not holds(V, Val, H+1), horizon(H).
noreversal :- holds(V, Val, H+1), not holds(V, Val, 0), horizon(H).
:- not &k{ ~ noreversal}.
\end{verbatim}
\normalsize
The first rule checks that the actions in the candidate plan are actually applicable.
The next two check that these actions do not contain any facts other than those that
are relevant (cf.\ observation (*) above). Finally, the last three rules make
sure that at the maximum time point (i.e.\ the one given by the externally
defined constant ``horizon'') the initial state and the resulting state of plan
$\pi$ are the same. It is not difficult to verify that any world view of the
above ELP (combined with the \emph{plasp} translation of a STRIPS problem
domain) will yield a plan $\pi$ (encoded by the \verb!occurs! predicate) that
contains the sequence of actions $a, a_1, \ldots, a_n$, where $a_1, \ldots, a_n$
is a (universal) reverse plan for the action $a$ (each world view consists of
precisely one answer set). Note that our encoding yields reverse plans of
exactly the length set in the ``horizon'' constant. One could, for instance,
employ an iterative deepening approach to determine the shortest reverse plan
in case the plan length is not known or fixed. This completes our ELP
encoding for the problem of deciding universal uniform reversibility.
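The search that this encoding performs, including the iterative deepening over the horizon, can also be sketched procedurally. The following Python fragment is our own illustration (the action representation and function names are assumptions, not part of \emph{plasp} or of the encoding); it exploits Observation (*) by starting from the precondition state of the chosen action and by restricting plan actions to the relevant variables:

```python
from itertools import product

def apply_action(state, action):
    """Apply a STRIPS action (pre, post) to a state (dict fact -> value).
    Returns the successor state, or None if the precondition is violated."""
    pre, post = action
    if any(state.get(v) != val for v, val in pre.items()):
        return None
    return {**state, **post}

def find_reverse_plan(actions, a, max_horizon):
    """Search for a universal uniform reverse plan for action `a` by
    iterative deepening over the plan length (the `horizon`).
    By Observation (*), it suffices to start from the precondition state
    of `a`, and plan actions may only touch `relevant` variables."""
    pre, post = actions[a]
    relevant = set(pre)
    # Simplifying assumption for this sketch: `a` only modifies
    # variables occurring in its own precondition.
    if not set(post) <= relevant:
        return None
    s0 = dict(pre)                       # the single state to consider
    s1 = apply_action(s0, actions[a])    # state after executing `a`
    candidates = [n for n, (p, q) in actions.items()
                  if set(p) <= relevant and set(q) <= relevant]
    for horizon in range(max_horizon + 1):   # iterative deepening
        for plan in product(candidates, repeat=horizon):
            s = s1
            for name in plan:
                s = apply_action(s, actions[name])
                if s is None:
                    break                    # plan not applicable
            if s == s0:
                return list(plan)
    return None
```

For instance, for the rev-2 domain shown in the example section, this search returns the plan consisting of \verb!add-f0! followed by \verb!add-f1! for the action \verb!del-all!.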
We can show that the encoding indeed leads to the correct result:
\begin{theorem}\label{thm:elpcorrect}
Given a STRIPS planning task $\Pi = \langle \mathcal{F}, \mathcal{A}, s_0, G \rangle$, the
ELP encoding in this section, when applied to $\Pi$, produces exactly one
world view for each universally uniformly reversible action $a \in \mathcal{A}$ and
reverse plan $\pi$ of length \verb!horizon! for $a$.
\end{theorem}
\begin{proof}[Proof (Sketch).]
We will show that, for each such action $a$ and reverse plan $\pi$, there
exists exactly one world view $\mathcal{I}$, such that every answer set in $\mathcal{I}$
contains the facts \verb!chosen(a)! and \verb!occurs(a', i)! for each action
$a' \in \pi$, where $a'$ is the $(i-1)$-th action in $\pi$. This follows by
construction:
The rules deriving the \verb!chosen! and \verb!occurs! predicates, together
with the constraints that follow, ensure that there is exactly one world view
candidate per choosable action and candidate reverse plan of length
\verb!horizon!. Because of Theorem~18 in \cite{MorakCFF20}, for universal
uniform reversibility we only need to check a single starting state, and hence
each world view candidate $\mathcal{I}$ has at most one answer set $M$, i.e.\ $\mathcal{I}
= \{ M \}$.
The rules deriving the \verb!holds! and \verb!caused! predicates then execute
action $a$ and the reverse plan, keeping track of which value each variable
has after each step (represented by time points \verb!T!). Finally, $M$ is
eliminated as an answer set in case some action $a'$ in the reverse plan
is not applicable or if $a'$ ``touches'' a variable that does not occur in the
precondition of the chosen action $a$ (encoded in the predicate
\verb!relevant!). The latter check is, again, correct because of Observation
(*). The final three rules ensure that, in any world view, no answer set can
contain the fact \verb!noreversal!, which is true if and only if some variable
in the initial state (time point \verb!0!) has a different value from the
final state (time point \verb!horizon + 1!).
Hence, in any remaining world view $\mathcal{I} = \{ M \}$, $M$ contains precisely a
chosen action $a$, a reverse plan $\pi$ of length \verb!horizon! inside
the \verb!occurs! predicate, and the intermediate states at each time
step after the successful and valid application of action $a$ or actions
from $\pi$, starting at some initial state that equals the final state. But
this is precisely a reverse plan for $a$ of length \verb!horizon!, as desired.
\end{proof}
\paragraph{ASP Encoding.} Now, to see how the same thing can be achieved using
ASP, we can modify the encoding above as follows, yielding an encoding that
guarantees that every answer set represents a possible uniform reverse plan.
Firstly, in order to choose the action to reverse, the first five rules of the
ELP encoding can be replaced by a simple choice rule:
\begin{verbatim}
1 {chosen(A) : action(action(A))} 1.
\end{verbatim}
Similarly, the rules that choose, for each time step, an action (via the
\verb!occurs! predicate) can be replaced with a choice rule as follows:
\begin{verbatim}
1 {occurs(A, T) : action(action(A))} 1 :- time(T), T > 1.
\end{verbatim}
Finally, the check that no reversal exists (represented by the \verb!noreversal!
atom in the ELP encoding) can be encoded in ASP using simple constraints:
\begin{verbatim}
:- holds(V, Val, 0), not holds(V, Val, horizon+1).
:- holds(V, Val, horizon+1), not holds(V, Val, 0).
\end{verbatim}
This completes the ASP encoding, which now does not contain any subjective
literals. It can be seen that, whereas the ELP encoding generates one world view
per uniform reverse plan (by doing all the guesses via subjective literals), the
ASP encoding will generate one answer set per such plan:
\begin{theorem}\label{thm:aspcorrect}
Given a STRIPS planning task $\Pi = \langle \mathcal{F}, \mathcal{A}, s_0, G \rangle$, the
ASP encoding in this section, when applied to $\Pi$, produces exactly one
answer set for each universally uniformly reversible action $a \in \mathcal{A}$ and
reverse plan $\pi$ of length \verb!horizon! for $a$.
\end{theorem}
\begin{proof}[Proof (Idea).]
The proof proceeds in a similar fashion to the proof of
Theorem~\ref{thm:elpcorrect}. In particular, now the actions are not guessed
via a world view, but directly inside the answer set, via the appropriate
choice rules. Hence, candidate answer sets contain all combinations of chosen
actions and reverse plan candidates. Via the constraints, any answer set where
the actions in the reverse plan do not follow the conditions of Observation
(*), or where they do not lead back to the original state, is eliminated,
leaving only answer sets that contain chosen actions together with valid
reverse plans for them, as desired.
\end{proof}
\paragraph{Comparison.} The ASP encoding is a fairly straightforward
guess-and-check program, as the underlying problem of deciding universal uniform
action reversibility is only \ensuremath{\textsc{NP}}-complete \citep{MorakCFF20}. In this case, it
could be argued that the choice rules employed there are a more natural encoding
than guessing via the modal operators of ELPs. However, in terms of contrasting
the expressiveness of the two languages, we still feel that it is interesting
to see how ``simple'' \ensuremath{\textsc{NP}}-complete problems can be encoded using the modal
operators of ELPs, as this may lead to further improvements of the modelling
capabilities of the ELP language in the future. It also stands to reason that,
in the future, ELP solvers should aim to provide some syntactic sugar for these
modal operators for guess-and-check programs, similar to how choice rules are
provided by modern ASP solvers.
\subsubsection{Other Forms of Uniform Reversibility}
\paragraph{ELP Encoding.} Using a similar
guess-and-check idea as in the previous encodings, we can also check uniform
reversibility for a specified set of states (that is, uniform
$S$-reversibility). Generally, the set $S$ of relevant states is encoded in some
compact form, and our encoding therefore, intentionally, does not assume
anything about this representation, but leaves the precise checking of the set
$S$ open for implementations of a concrete use case. The predicates used in this
more advanced encoding are similar to the ones used in the previous encoding for
the universal case above, and hence we will not list them here again. However, in
order to encode the for-all-states check (i.e.\ the check that the candidate
reverse plan works in \emph{all} states inside the set $S$), we now need our
world views to contain multiple answer sets: one for each state in $S$. We again
start off with the ELP encoding. This time, however, we will see afterwards that
there is no easy modification that immediately yields an ASP encoding; rather,
the two encodings differ substantially\footnote{The full encodings
can be found at \url{https://seafile.aau.at/d/cd4cb0d65d124a619920/}.}.
The ELP encoding starts off much like the previous one:
\small
\begin{verbatim}
chosen(A) :- action(action(A)), not &k{-chosen(A)}.
-chosen(A) :- action(action(A)), not &k{ chosen(A)}.
:- chosen(A), chosen(B), A!=B.
onechosen :- chosen(A).
:- not onechosen.
holds(V, Val, 0) :- chosen(A),
precondition(action(A), variable(V), value(variable(V), Val)).
\end{verbatim}
\normalsize
Note that we no longer need to keep track of any set of ``relevant'' facts,
since we now need to consider all the facts that appear inside the actions and
in the set $S$ of states. However, we need to open up several answer sets, one
for each state. This is done by guessing a truth value for each fact at time
step 0. Recall that \verb!contains! is part of the \emph{plasp} output, encoding all possible values for a given
variable.
\small
\begin{verbatim}
holds(V,Val,0) | -holds(V,Val,0) :-
variable(variable(V)), contains(variable(V),value(variable(V),Val)).
oneholds(V,0) :- holds(V,Val,0).
:- variable(variable(V)), not oneholds(V,0).
:- holds(V,Val,0), holds(V,Val1,0), Val != Val1.
\end{verbatim}
\normalsize
Next, we again guess and execute a plan, keeping track of whether the actions
were able to be applied at each particular time step:
\small
\begin{verbatim}
occurs(A, 1) :- chosen(A).
occurs(A, T) :- action(action(A)),time(T), T > 1, not &k{-occurs(A, T)}.
-occurs(A, T) :- action(action(A)),time(T), T > 1, not &k{occurs(A, T)}.
:- occurs(A,T), occurs(B,T), A!=B.
oneoccurs(T) :- occurs(A,T), time(T), T > 0.
:- time(T), T>0, not oneoccurs(T).
inapplicable :- occurs(A, T),
precondition(action(A), variable(V), value(variable(V), Val)),
not holds(V, Val, T - 1).
:- not &k{ ~ inapplicable}.
caused(V, Val, T) :- occurs(A, T),
postcondition(action(A), E, variable(V), value(variable(V), Val)).
modified(V, T) :- caused(V, _, T).
holds(V, Val, T) :- caused(V, Val, T).
holds(V, Val, T) :- holds(V, Val, T - 1), not modified(V, T), time(T).
\end{verbatim}
\normalsize
Again, the rules above choose a candidate reverse plan $\pi$, starting with the
action-to-be-checked $a$, as before. Furthermore, we check applicability: $\pi$
must be applicable (i.e.\ at each time step, the relevant action must have
been applied, encoded by the third block of rules above), and, in addition, only
modified facts (i.e.\ those affected by an action) can change their truth values
from time step to time step. Finally, we again need to make sure that the
guessed plan actually returns us to the original state at time step 0.
\small
\begin{verbatim}
noreversal :- holds(V, Val, 0), not holds(V, Val, H+1), horizon(H).
noreversal :- holds(V, Val, H+1), not holds(V, Val, 0), horizon(H).
:- not &k{ ~ noreversal}.
\end{verbatim}
\normalsize
This concludes the main part of our ELP encoding. In its current form, the encoding
given above produces exactly the same results as the first encoding given in
this section; that is, it checks for \emph{universal} uniform reversibility.
However, the second encoding can be easily modified in order to check uniform
$S$-reversibility. Simply add a rule of the following form to it:
\small
\begin{verbatim}
:- < check guessed state against set S >
\end{verbatim}
\normalsize
This rule should fire precisely when the current
guess (that is, the currently considered starting state) does not belong to the
set $S$. This can of course be generalized easily. For example, if set $S$ is
given as a formula $\phi$, then the rule should check whether the current guess
conforms to formula $\phi$ (i.e., encodes a model of $\phi$). Other compact
representations of $S$ can be similarly checked at this point. Hence, we have a
flexible encoding for uniform $S$-reversibility that is easy to extend with
various forms of representations of set $S$.
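Procedurally, the for-all-states check behind uniform $S$-reversibility can be sketched as follows. This Python fragment is our own illustration and assumes that $S$ is given as an explicit list of states rather than in a compact representation; it checks a fixed candidate plan against every state of $S$ in which the chosen action is applicable:

```python
def apply_action(state, action):
    """Apply a STRIPS action (pre, post); None if not applicable."""
    pre, post = action
    if any(state.get(v) != val for v, val in pre.items()):
        return None
    return {**state, **post}

def reverses_on_S(actions, a, plan, S):
    """True iff `plan` reverses action `a` in every state of S where
    `a` is applicable (S-reversibility for this candidate plan)."""
    for s0 in S:
        s = apply_action(s0, actions[a])
        if s is None:
            continue            # `a` not applicable here: no requirement
        for name in plan:
            s = apply_action(s, actions[name])
            if s is None:
                return False    # plan not applicable in this state
        if s != s0:
            return False        # plan does not lead back to s0
    return True
```

With $S$ the set of all states, this coincides with universal uniform reversibility. Note also that the empty plan reverses an add-style action on every state in which its effect already holds, but not universally, which illustrates why the choice of $S$ matters.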
\paragraph{ASP Encoding.} We now turn to the ASP encoding. As we will see, this
is substantially more involved than the ELP encoding, since we need to apply
an encoding technique called \emph{saturation}, cf.~\cite{amai:EiterG95},
allowing us to express a form of universal quantification. We can start off in
the same way as before, that is, with a choice rule:
\small
\begin{verbatim}
1 {chosen(A) : action(action(A))} 1.
holds(V, Val, 0) :- chosen(A),
precondition(action(A), variable(V), value(variable(V), Val)).
affected(A, V) :- postcondition(action(A), _, variable(V), _).
\end{verbatim}
\normalsize
We note the first difference compared to the ELP encoding: we need to keep track
of all STRIPS facts that are potentially affected by an action. We assume that a
predicate \verb!opposites/2! exists that holds, in both possible orders, the
values ``true'' and ``false''. This will later be used to find the opposite
value of some STRIPS fact at a particular time step.
Next, we again guess and execute a plan:
\small
\begin{verbatim}
occurs(A, 1) :- chosen(A).
1 {occurs(A, T) : action(action(A))} 1 :- time(T), T > 1.
applied(0).
applicable(A, T) :- occurs(A, T), applied(T - 1),
holds(V, Val, T - 1) :
precondition(action(A), variable(V), value(variable(V), Val)).
applied(T) :- applicable(_, T).
holds(V, Val, T) :- applicable(A, T),
postcondition(action(A), _, variable(V), value(variable(V), Val)).
holds(V, Val, T) :- holds(V, Val, T - 1), occurs(A, T), applied(T),
not affected(A, V).
\end{verbatim}
\normalsize
Note that we use the predicate \verb!affected! here to encode inertia for those
facts that are not affected by the applied action. From here on, we see a
major difference to the ELP encoding. We need to set up our goal conditions and
then encode the universal check for all states of set $S$. First, we check that
$\pi$ is applicable (i.e.\ at each time step, the relevant action must have
been applied) and, furthermore, that the state at the beginning is equal to
the state at the end.
\small
\begin{verbatim}
same(V) :- holds(V, Val, 0), holds(V, Val, horizon + 1).
samestate :- same(V) : variable(variable(V)).
planvalid :- applied(horizon + 1).
reversePlan :- samestate, planvalid.
\end{verbatim}
\normalsize
Finally, we need to ensure that the candidate reverse plan works for all the
states in the set $S$. This is done as follows:
\small
\begin{verbatim}
holds(V, Val1, 0) | holds(V, Val2, 0) :- variable(variable(V)),
opposites(Val1, Val2), Val1 < Val2.
holds(V, Val, T) :- reversePlan, contains(variable(V), value(variable(V), Val)),
time(T).
:- not reversePlan.
\end{verbatim}
\normalsize
As stated above, this is done using the technique of \emph{saturation}
\citep{amai:EiterG95}, allowing us to express a form of universal quantifier
that, in our case, checks that, for every state in the set $S$, we return to
the original state after applying the chosen action and the reverse plan. We
encourage the reader to refer to the relevant publication for more details on
the ``inner workings'' of this encoding technique. As is, the ASP
encoding above again checks \emph{universal} uniform reversibility. However, it
can be easily modified in order to check uniform $S$-reversibility. Simply add
a rule of the following form to it, analogously to what we had for the ELP
encoding:
\small
\begin{verbatim}
reversePlan :- < check guessed state against set S >
\end{verbatim}
\normalsize
This completes the overview of our ELP and ASP encodings for uniform
reversibility.
\paragraph{Comparison.} Looking at the structure of the ELP and ASP encodings,
it is not difficult to see that they share a certain common structure. This is
not surprising, since the underlying language is the same. However, it can also
be observed that the technique of saturation, which is required (in terms of
expressive power) to encode uniform $S$-reversibility in ASP, is somewhat
non-intuitive, as it is not immediately clear what the semantics of this
construction are. By contrast, the modal operators provided by ELPs make this
much more readable and declarative.
\subsection{Experiments}
We have conducted preliminary experiments with artificially constructed domains. The domains are as follows:
{\small
\begin{verbatim}
(define (domain rev-i)
(:requirements :strips)
(:predicates (f0) ... (fi))
(:action del-all
:precondition (and (f0) ... (fi) )
:effect (and (not (f0)) ... (not (fi))))
(:action add-f0
:effect (f0))
...
(:action add-fi
:precondition (fi-1)
:effect (fi)))
\end{verbatim}
}
The action \verb!del-all! has a universal uniform reverse plan $\langle$
\verb!add-f0!, \ldots, \verb!add-fi! $\rangle$. We have generated instances from
\verb!i! $=1$ to \verb!i! $=6$ and from \verb!i! $=10$ to \verb!i! $=200$ with
step 10. We have analyzed runtime and memory consumption of two problems: (a)
finding the unique reverse plan of size \verb!i! (by setting the constant
\verb!horizon! to \verb!i!) and proving that no other reverse plan exists, and
(b) showing that no reverse plan of length \verb!i-1! exists (by setting the
constant \verb!horizon! to \verb!i-1!). We compare the four encodings described
in Section~\ref{sec:asprev}, and refer to the first two as the \emph{simple
ELP/ASP encoding} and to the second two as the \emph{general ELP/ASP encoding}.
We used plasp 3.1.1 (\url{https://potassco.org/labs/plasp/}), eclingo 0.2.0
(\url{https://github.com/potassco/eclingo}), and clingo 5.4.0
(\url{https://potassco.org/clingo/}) on a computer with a 2.3 GHz AMD EPYC 7601
CPU with 32 cores and 500 GB RAM running CentOS 8. We have set a timeout of 20
minutes and a memory limit of 16GB (which was never exceeded).
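The rev-i instances follow a simple syntactic pattern and can be generated mechanically. A possible generator is sketched below in Python (our own sketch, not the script actually used for the experiments); here \verb!n! denotes the number of facts, and the rev-2 instance shown in the example section corresponds to \verb!rev_domain(2)!:

```python
def rev_domain(n):
    """Generate the PDDL text of the rev-n domain with facts f0 .. f(n-1):
    del-all deletes all facts, and add-fj (re)adds fj, requiring f(j-1)."""
    facts = [f"(f{j})" for j in range(n)]
    add_actions = ["  (:action add-f0\n    :effect (f0))"]
    for j in range(1, n):
        add_actions.append(
            f"  (:action add-f{j}\n"
            f"    :precondition (f{j - 1})\n"
            f"    :effect (f{j}))")
    return (f"(define (domain rev-{n})\n"
            "  (:requirements :strips)\n"
            f"  (:predicates {' '.join(facts)})\n"
            "  (:action del-all\n"
            f"    :precondition (and {' '.join(facts)})\n"
            f"    :effect (and {' '.join(f'(not {f})' for f in facts)}))\n"
            + "\n".join(add_actions) + ")\n")
```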
\begin{figure}
\begin{tikzpicture}[scale=0.60]
\begin{axis}[
xlabel=Number of facts,
ylabel=Runtime (s)]
\addplot +[unbounded coords=jump] table [y=time(asp.simple),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{simple ASP encoding}
\addplot +[unbounded coords=jump] table [y=time(asp.general),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{general ASP encoding}
\addplot +[unbounded coords=jump] table [y=time(elp.simple),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{simple ELP encoding}
\addplot +[unbounded coords=jump] table [y=time(elp.general),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{general ELP encoding}
\end{axis}
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.60]
\begin{axis}[
xlabel=Number of facts,
ylabel=Memory (MB)]
\addplot +[unbounded coords=jump] table [y=memory(asp.simple),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{simple ASP encoding}
\addplot +[unbounded coords=jump] table [y=memory(asp.general),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{general ASP encoding}
\addplot +[unbounded coords=jump] table [y=memory(elp.simple),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{simple ELP encoding}
\addplot +[unbounded coords=jump] table [y=memory(elp.general),
x=size]{encodings/experiment/experiments.plan.dat};
\addlegendentry{general ELP encoding}
\end{axis}
\end{tikzpicture}
\caption{Calculating the unique reverse plan (plan length equals number of facts)}
\label{fig:experiment.exists}
\end{figure}
The results for problem (a) are plotted in Figure~\ref{fig:experiment.exists}.
The general ELP encoding exceeded the time limit already at the problem with
seven facts, while the simple ELP encoding could solve all problems with up to
150 facts within the time limit. The general and simple ASP encodings perform
better than their ELP counterparts, but the simple ELP encoding performed much
better than the saturation-based general ASP encoding, even though ELP solvers
are in their infancy compared to the heavily optimized ASP solving systems. The
memory consumption increased with \verb!i! for all encodings, proportional to
the computation time.
\begin{figure}
\begin{tikzpicture}[scale=0.60]
\begin{axis}[
xlabel=Number of facts,
ylabel=Runtime (s)]
\addplot +[unbounded coords=jump] table [y=time(asp.simple),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{simple ASP encoding}
\addplot +[unbounded coords=jump] table [y=time(asp.general),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{general ASP encoding}
\addplot +[unbounded coords=jump] table [y=time(elp.simple),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{simple ELP encoding}
\addplot +[unbounded coords=jump] table [y=time(elp.general),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{general ELP encoding}
\end{axis}
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.60]
\begin{axis}[
xlabel=Number of facts,
ylabel=Memory (MB)]
\addplot +[unbounded coords=jump] table [y=memory(asp.simple),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{simple ASP encoding}
\addplot +[unbounded coords=jump] table [y=memory(asp.general),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{general ASP encoding}
\addplot +[unbounded coords=jump] table [y=memory(elp.simple),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{simple ELP encoding}
\addplot +[unbounded coords=jump] table [y=memory(elp.general),
x=size]{encodings/experiment/experiments.noplan.dat};
\addlegendentry{general ELP encoding}
\end{axis}
\end{tikzpicture}
\caption{Determining nonexistence of a reverse plan (plan length one step too short)}
\label{fig:experiment.notexists}
\end{figure}
The results for problem (b) are plotted in
Figure~\ref{fig:experiment.notexists}. Interestingly, compared to (a), all the
encodings performed significantly better. While the general encodings still hit
the time limit for seven facts (ELP) and 50 facts (ASP), the simple encodings
were able to solve all the instances up to our maximum of \verb!i! $=250$ (the
figure stops at \verb!i! $=150$), but at the expense of increasing memory usage.
In total, the general encodings, for both ASP and ELP, scale worse, as
expected, since the ELP solver needs to evaluate all answer sets inside each
possible world view, and the ASP solver needs to compute the result of the
saturation check. With the simple encodings, however, the task of testing for
non-reversibility in particular performed surprisingly well for both ASP and ELP.
From all of our results, however, we can see that ELP solving still severely
trails plain ASP encodings in terms of performance. This was somewhat
expected, since ELP solvers are nowhere near as optimized as modern ASP
systems. We hope, however, that our results encourage further improvements in
the area of ELP solvers, since matching the ASP results, at least in this
particular benchmark set, does not seem completely out of reach.
\section{Conclusions}\label{sec:conclusions}
In this paper, we have given a review of several notions of action reversibility
in STRIPS planning, as originally presented by \cite{MorakCFF20}. We then
proceeded, on the basis of the PDDL-to-ASP translation tool \emph{plasp}, described by
\cite{DimopoulosGLRS19}, to present two ELP encodings and two ASP encodings to
solve the task of universal uniform reversibility of STRIPS actions, given a
corresponding planning domain. When given to an appropriate solving system,
these encodings, combined with the ASP translation of STRIPS planning domains
produced by \emph{plasp}, then yield a set of world views (for ELP) or answer
sets (for ASP), each one representing a (universal) reverse plan for each action
in the domain, for which such a reverse plan could be found.
The four encodings use two different approaches. The first two, simpler,
encodings make use of a shortcut that allows them to focus only on those facts
that appear in the precondition of the action to check for reversibility, as
described by~\cite{MorakCFF20}.
The second two encodings make use of the power of world views containing
multiple answer sets in ELP, and of the encoding technique of saturation
of \cite{amai:EiterG95} in ASP, respectively, both of which allow for encoding
universal quantifiers. These two encodings try to directly represent the original
definition of uniform reversibility: for an action to be uniformly reversible,
there must exist a plan, and this plan must revert the action in all possible
starting states (where it is applicable). Hence, the two general encodings are
more flexible insofar as they also allow for the checking of non-universal
uniform reversibility (e.g.\ to check for uniform $\phi$-reversibility, where
the starting states are given via some formula $\phi$).
In order to compare the four encodings, we performed some benchmarks on
artificially generated instances by checking whether there is an action that is
universally uniformly reversible. For the ELP and ASP communities, it will not
come as a surprise that the ELP encodings perform worse than the ASP encodings.
We see this as a call-to-action to further optimize and improve ELP solvers.
From our experiments, it seems that the performance of ASP solvers, while
significantly better, is not out of reach for ELP systems.
For future work, we intend to optimize our ELP encodings further, and test them
with other established ELP solvers. There are several competing ELP semantics,
and several solvers are available. It would also be interesting to see
how the encodings perform when compared to a procedural implementation of the
algorithms proposed for reversibility checking by \cite{MorakCFF20}. We would also like to compare our approach to the existing tools
\emph{RevPlan}\footnote{\url{http://www.kr.tuwien.ac.at/research/systems/revplan/index.html}}
(implementing techniques of \cite{japll:EiterEF08}) and \emph{undoability}
(implementing techniques of \cite{icaps:DaumT0HW16}). Furthermore, we aim to
explore how our techniques can be extended to planning languages more expressive
than STRIPS. We envision various avenues for that: one is to deal with ``lifted representations'' (going beyond propositional atoms); another is to allow for non-deterministic action effects or exogenous events, for which ASP and ELP seem to be well-suited.
\section{Example}
As an example, consider the following domain, which follows the pattern of the experiments:
{\small
\begin{verbatim}
(define (domain rev-2)
(:requirements :strips)
(:predicates (f0) (f1) )
(:action del-all
:precondition (and (f0) (f1) )
:effect (and (not (f0)) (not (f1)) ) )
(:action add-f0
:effect (f0) )
(:action add-f1
:precondition (f0)
:effect (f1) )
)
\end{verbatim}
}
The tool \emph{plasp} translates it to the following ASP quasi-facts:
\scriptsize
\begin{verbatim}
boolean(true).
boolean(false).
type(type("object")).
variable(variable("f0")).
variable(variable("f1")).
contains(X, value(X, B)) :- variable(X), boolean(B).
action(action("del-all")).
precondition(action("del-all"), variable("f0"), value(variable("f0"), true))
:- action(action("del-all")).
precondition(action("del-all"), variable("f1"), value(variable("f1"), true))
:- action(action("del-all")).
postcondition(action("del-all"), effect(unconditional), variable("f0"), value(variable("f0"), false))
:- action(action("del-all")).
postcondition(action("del-all"), effect(unconditional), variable("f1"), value(variable("f1"), false))
:- action(action("del-all")).
action(action("add-f0")).
postcondition(action("add-f0"), effect(unconditional), variable("f0"), value(variable("f0"), true))
:- action(action("add-f0")).
action(action("add-f1")).
precondition(action("add-f1"), variable("f0"), value(variable("f0"), true))
:- action(action("add-f1")).
postcondition(action("add-f1"), effect(unconditional), variable("f1"), value(variable("f1"), true))
:- action(action("add-f1")).
\end{verbatim}
\normalsize
In the simple ELP encoding, the first rules will give rise to multiple possible world views, one that contains answer sets with \verb!chosen("del-all")!, \verb!occurs("del-all",1)!, \verb!holds("f0",true,0)!, \verb!holds("f1",true,0)!, \verb!relevant("f0")!, and \verb!relevant("f1")!, one world view that contains answer sets with \verb!chosen("add-f0")! and \verb!occurs("add-f0",1)!, and one with answer sets containing \verb!chosen("add-f1")!, \verb!occurs("add-f1",1)!, \verb!holds("f0",true,0)!, and \verb!relevant("f0")!.
When we set the constant \verb!horizon! to 2, more world views are created, based on the three mentioned above, one for each pair of actions out of the three available ones. Each world view will contain at most one answer set. Many of these answer sets turn out to be invalid immediately; for instance, any answer set containing \verb!occurs("add-f0",1)! and \verb!occurs("add-f1",2)! will violate a constraint, as the precondition of \verb!add-f1! is not relevant. Others are invalidated because the preconditions of actions are not met. A few others derive \verb!noreversal!, for instance \verb!occurs("del-all",1)!, \verb!occurs("add-f0",2)!, \verb!occurs("add-f0",3)!, as we have \verb!holds("f1",true,0)! but not \verb!holds("f1",true,3)!.
We can check that the only world view with an answer set, in which \verb!noreversal! is not derived, is the one in which \verb!occurs("del-all",1)!, \verb!occurs("add-f0",2)!, \verb!occurs("add-f1",3)! hold. Indeed, \verb!del-all! is the only universally uniformly reversible action, and its only reverse plan of length 2 is $\langle$ \verb!add-f0!, \verb!add-f1! $\rangle$.
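This uniqueness claim can also be double-checked by brute force outside of the logic-programming setting. The following self-contained Python sketch (our own illustration) simulates rev-2 from the precondition state of \verb!del-all!, which by Observation (*) is the only state that needs to be considered, and enumerates all nine length-2 continuations:

```python
# STRIPS actions of rev-2 as (precondition, effect) pairs over boolean facts.
ACTIONS = {
    'del-all': ({'f0': True, 'f1': True}, {'f0': False, 'f1': False}),
    'add-f0': ({}, {'f0': True}),
    'add-f1': ({'f0': True}, {'f1': True}),
}

def run(state, names):
    """Execute a sequence of actions; None as soon as one is inapplicable."""
    for n in names:
        pre, post = ACTIONS[n]
        if any(state[v] != val for v, val in pre.items()):
            return None
        state = {**state, **post}
    return state

s0 = {'f0': True, 'f1': True}  # precondition state of del-all
plans = [(a, b) for a in ACTIONS for b in ACTIONS]
good = [p for p in plans if run(s0, ('del-all',) + p) == s0]
print(good)  # [('add-f0', 'add-f1')]
```

Only the pair \verb!add-f0!, \verb!add-f1! survives, in line with the world-view analysis above.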
The simple ASP encoding works in a very similar way. Since the simple ELP encoding has at most one answer set per world view, we simply turn the ``epistemic guesses'' into ``standard guesses'', so instead of an answer set encapsulated in a world view, it is just a plain answer set; here, too, one answer set exists for the example, in which \verb!occurs("del-all",1)!, \verb!occurs("add-f0",2)!, \verb!occurs("add-f1",3)! hold.
Concerning the general ELP encoding, similar world views as above are created. But in that encoding, multiple answer sets can exist in a world view: for each variable not in the precondition of the chosen action, there will be answer sets in which the variable is true, and answer sets in which it is false. So, any world view in which \verb!chosen("del-all")!, \verb!occurs("del-all",1)!, \verb!holds("f0",true,0)!, \verb!holds("f1",true,0)! hold will still have at most a single answer set, as all variables occur in the precondition of \verb!del-all!. It is easy to see that the reverse plan is then in a single-answer-set world view similar to the one in the simple ELP encoding.
On the other hand, in world views containing \verb!chosen("add-f0")! and \verb!occurs("add-f0",1)! four potential answer sets can exist, one with \verb!holds("f0",true,0)! and \verb!holds("f1",true,0)!, one with \verb!holds("f0",false,0)! and \verb!holds("f1",true,0)!, one with \verb!holds("f0",true,0)! and \verb!holds("f1",false,0)!, and one with \verb!holds("f0",false,0)!, \verb!holds("f1",false,0)!.
Let us have a look at the world view containing \verb!occurs("add-f0",1)!, \verb!occurs("del-all",2)!, \verb!occurs("del-all",3)!. For this, \verb!inapplicable! will be derived because the preconditions for \verb!occurs("del-all",3)! are not met in any of the answer sets, and also because the precondition for \verb!occurs("del-all",2)! is not met in those answer sets in which \verb!holds("f1",false,0)! is true. Therefore, the constraint \verb!:- not &k{ ~ inapplicable}.! is violated for this world view.
Maybe more interesting is the world view containing \verb!occurs("add-f0",1)!, \verb!occurs("add-f1",2)!, \verb!occurs("del-all",3)!. Here, \verb!inapplicable! will not be derived, as the preconditions of the actions hold in all answer sets. But consider the answer set containing \verb!holds("f0",true,0)! and \verb!holds("f1",true,0)!: neither \verb!holds("f0",true,3)! nor \verb!holds("f1",true,3)! is derived, so \verb!noreversal! is derived. \verb!noreversal! is also true in the other answer sets except the one containing \verb!holds("f0",false,0)! and \verb!holds("f1",false,0)!. The constraint \verb!:- not &k{ ~ noreversal}.! is thus violated for this world view.
The general ASP encoding works in a rather different way. Here, one candidate answer set will be created for each combination of an action to be reversed, a completion of the initial state, and a candidate reverse plan.
So there is still only one answer set candidate containing \verb!chosen("del-all")!, \verb!occurs("del-all",1)!, \verb!holds("f0",true,0)!, \verb!holds("f1",true,0)!. For \verb!occurs("add-f0",1)!, \verb!occurs("del-all",2)!, \verb!occurs("del-all",3)! there will be four answer set candidates, similar to the four answer sets in the corresponding world view of the general ELP encoding; similarly, for \verb!occurs("add-f0",1)!, \verb!occurs("add-f1",2)!, \verb!occurs("del-all",3)! there will also be four answer set candidates.
Let us first see what happens with the reverse plan answer set candidate containing \verb!chosen("del-all")!, \verb!occurs("del-all",1)!, \verb!holds("f0",true,0)!, \verb!holds("f1",true,0)!. Eventually, \verb!samestate!, \verb!planvalid!, and \verb!reversePlan! are derived for this answer set, which means that \verb!holds(V,Val,T)! will be derived for all combinations of variables, values and times. It should be noted that doing this does not invalidate any earlier derivation, so it is an answer set.
Now consider the answer set candidates containing \verb!occurs("add-f0",1)!, \verb!occurs("del-all",2)!, \verb!occurs("del-all",3)!. In none of these will \verb!planvalid! be derived, as either the preconditions of \verb!occurs("del-all",2)! are not met (in answer set candidates having \verb!holds("f1",false,0)!) or the preconditions of \verb!occurs("del-all",3)! are not met. So \verb!reversePlan! will not hold in any of these answer set candidates, which means that they all violate the constraint \verb!:- not reversePlan.!
Now for the answer set candidates containing \verb!occurs("add-f0",1)!, \verb!occurs("add-f1",2)!, \verb!occurs("del-all",3)!, \verb!planvalid! will be derived in all of them, but \verb!samestate! only in the one containing \verb!holds("f0",false,0)! and \verb!holds("f1",false,0)!. This means that in the other three answer set candidates \verb!reversePlan! will not hold, violating the constraint \verb!:- not reversePlan.! For the remaining one, the constraint is satisfied; however, the saturation rule derives \verb!holds(V,Val,T)! for all combinations of variables, values and times. This ``inflated'' answer set candidate is not stable any longer: we can form a subset of it, in which exactly the atoms of one of the other three constraint-violating candidates are true plus \verb!reversePlan!; the obtained interpretation also satisfies the program and therefore is a counterexample for the stability of the saturated interpretation. One might object that \verb!reversePlan! is unsupported in the counterexample, but supportedness is not a requirement for countermodels. So none of the candidate answer sets containing \verb!occurs("add-f0",1)!, \verb!occurs("add-f1",2)!, \verb!occurs("del-all",3)! turned out to be answer sets (for quite different reasons).
\section*{Acknowledgements}
%
Supported by the S\&T Cooperation CZ 05/2019 ``Identifying Undoable
Actions and Events in Automated Planning by Means of Answer Set
Programming'', by the Czech Ministry of Education, Youth and Sports
under the Czech-Austrian Mobility programme (project no. 8J19AT025),
by the OP VVV funded project CZ.02.1.01/0.0/0.0/16\_019/0000765
``Research Center for Informatics'' and by the Czech Science
Foundation (project no. 18-07252S).
%
\bibliographystyle{acmtrans}
\section{Wynk Music Function PseudoCodes}
\label{pseudocode}
\subsection{Wynk v1}
\begin{algorithm}
\KwIn{$url$\tcc*{Resource Url}}
\KwOut{[$chunks$]\tcc*{Audio Chunk List}}
\Begin(Initialisation){
$deviceId$, $userAgent$ $\leftarrow$ Arbitrary strings \\
}
\Begin(Authentication){
\tcc{Authorisation with the Wynk Servers to enable authenticated retrieval requests}
$uid, token$ $\leftarrow$ register($deviceId, userAgent$)\\
search\_id $\leftarrow$ get\_search\_id($url$)\\
$C$ $\leftarrow$ request\_manifest($token$, search\_id)
}
\Begin(Retrieval){[$chunks$] $\leftarrow$ get\_song($C$)}
\caption{Client Side in Wynk 1.0}
\label{wynk1}
\end{algorithm}
\begin{algorithm}
\KwIn{$deviceId, userAgent$}
\KwOut{$uid, token$}
\KwData{Request Headers: $\mathcal{H}$}
\Begin{
$url$:= \url{"https://sapi.wynk.in/music/v3/account/login"}\\
payload := \texttt{\{"deviceId": "$deviceId$", "userAgent": "$userAgent$"\}}\\
$uid, token$ $\leftarrow$ POST($url$, $\mathcal{H}$, payload)\\
\Return $uid, token$
}
\caption{register() Function}
\label{reg}
\end{algorithm}
\begin{algorithm}
\KwIn{$url$}
\KwOut{$search\_id$}
\KwData{\texttt{STATIC\_CPMAPPING[ ]}}
\Begin{
\tcc{Each URI on Wynk has the following format -
\texttt{\textbf{"producerid\_string"}}
}
Extract \texttt{$producerid$\_$string$} from $url$\\
$id$ $\leftarrow$ \texttt{STATIC\_CPMAPPING[$producerid$}] \\
\Return ($id$ $||$ \texttt{"\_"} $||$ \texttt{$string$})
}
\caption{search\_id() Function}
\label{searchid}
\end{algorithm}
\begin{algorithm}
\KwIn{$search\_id, token, uid$}
\KwOut{Signed CloudFront Parameters}
\KwData{Request Headers: $\mathcal{H'}$}
\Begin{
\tcc{Generating header x-bsy-utkn}
prefix := \url{"/streaming/v4/cscgw/"}\\
suffix := \url{".html?ets=true&hlscapable=1&sq=a&lang=en"}\\
subdomain := \url{"https://playback.wynk.in"}\\
$msg$ := $\texttt{"POST"}||$ prefix $||$ $search\_id$ $||$ suffix\\
$digest$ $\leftarrow$ SHA1\_HMAC($token$, $msg$)\\
$\mathcal{H'}$[x-bsy-utkn] $\leftarrow$ $uid||\texttt{":"}||\texttt{Base64Enc(}digest\texttt{)}$\\
$auth\_url$ $\leftarrow$ subdomain $||$ prefix $||$ $search\_id$ $||$ suffix\\
$C$ $\leftarrow$ POST( $auth\_url$, $\mathcal{H'}$, payload: $"\{\}"$ )\\
\Return $C$
}
\caption{request\_manifest() Function}
\label{reqman}
\end{algorithm}
\begin{algorithm}
\KwIn{$C$ \tcc*[]{Response Object Containing Authenticated URIs \& Signatures}}
\KwOut{[$chunks$]}
\Begin{
manifest\_url $\leftarrow$ Extract manifest file URI from $C$\\
manifest\_file $\leftarrow$ GET(manifest\_url)\\
index\_uri $\leftarrow$ Identify and extract highest quality \texttt{index.m3u8} URI\\
index\_file $\leftarrow$ GET(index\_uri)\\
chunks = [ ]\\
\ForAll{chunk\_url in index\_file}{
$tmp$ $\leftarrow$ GET($chunk\_url$)\\
chunks.push($tmp$)
}
\Return chunks
}
\caption{get\_song() Function}
\label{alg5}
\end{algorithm}
\clearpage
\subsection{Wynk v2}
\begin{algorithm}
\KwIn{$url$\tcc*{Resource Url}}
\KwOut{[$chunks$]\tcc*{Audio Chunk List}}
\Begin(Initialisation){
$BK$ $\leftarrow$ gen\_bk($\mathcal{T}$,$\mathcal{R}$) \\
$deviceId$ $\leftarrow$ gen\_random\_id($\mathcal{R}$)\\
$pk$ = \texttt{Base64enc(https://sapi.wynk.in/music)}\\
$sk$ = \texttt{51ymYn1MS}
}
\Begin(Authentication){
\tcc{Authorisation with the Wynk Servers to enable authenticated retrieval requests}
spit\_out($BK$, $deviceId$)\\
$U:=\{k,n,y,w,m,z,a,p\}$ $\leftarrow$ check($BK$, $\mathcal{T}$)\\
$M:=\{dt,uid, token, kt, ...\}$ $\leftarrow$ login($U$, $\mathcal{T}$)\\
$C$ $\leftarrow$ request\_manifest($url$, $token$, $dt$, $sk$)\\
}
\Begin(Retrieval){[$chunks$] $\leftarrow$ get\_song($C$)}
\caption{Client Side in Wynk 2.0}
\label{algwynk2}
\end{algorithm}
\begin{algorithm}
\KwIn{$BK$, $deviceId$}
\Begin{
$d_1, d_2$ $\leftarrow$ $deviceId_{[0...36)}, deviceId_{[36...72)}$\\
In $d_1, d_2$ replace \text{"$-$" $\rightarrow$ ""}\\
$d_3, d_4$ $\leftarrow$ mix\_it($d_1$, $BK$), mix\_it($d_2$, $BK$)\\
$url$ := \url{"https://img.wynk.in/webassets/"} \\
GET($url || d_3 || \texttt{"\_1.jpg"}$)\\
GET($url || d_4 || \texttt{"\_2.jpg"}$)\\
}
\caption{spit\_out() Function}
\label{spitout}
\end{algorithm}
\begin{algorithm}
\KwIn{$BK$}
\KwOut{k,n,y,w,m,z,a,p}
\KwData{Request Header: $\mathcal{H}$}
\Begin{
$url$:= \url{"https://ping.wynk.in/health/check"}\\
$\mathcal{H}$[tk] $\leftarrow$ $\mathcal{T}$\\
$\mathcal{H}$[bk] $\leftarrow$ $BK_{[0, len(BK)/2)}$\\
$p$ $\leftarrow$ $BK_{[len(BK)/2, len(BK))}$\\
$U$ $\leftarrow$ POST($url$, $\mathcal{H}$, payload:$\{``pid":``p"\}$)\\
return $U$
}
\caption{check() Function}
\label{check}
\end{algorithm}
\begin{algorithm}
\KwIn{${k,n,y,w,m,z,a,p}$}
\KwOut{$dt,uid,token,kt,...$}
\KwData{Request Header: $\mathcal{H'}$}
\Begin{
$url$:= \url{"https://login.wynk.in/music/account/v1/login"} \\
$BS$ := $k||n||y||w||m||z||a||p$\\
$\mathcal{H'}$[x-bsy-ptot] $\leftarrow$ $\mathcal{T}$\\
\tcc{Generate \texttt{x-bsy-cip} from BS value}
a $\leftarrow$ [ ], b$\leftarrow$ 0, t$\leftarrow$ 0\\
\For(){$t \leq len(BS)-1$}{
$e \leftarrow 10 \cdot BS[t] + BS[t+1]$\\
\eIf{$e \leq 55$}{
\eIf{$2 \not | \; b$}{
a.push($200+e$)
}
{a.push($100+e$)}
b++
}{
a.push($100+e$)
}
}
$\mathcal{H'}$[x-bsy-cip] $\leftarrow$ concat($a$)\\
$C$ $\leftarrow$ POST($url$, $\mathcal{H'}$, payload: $\{\}$)\\
return $C$
}
\caption{login() Function}
\label{login}
\end{algorithm}
\begin{algorithm}
\KwIn{$\{ url, dt, uid, token, kt\}$}
\KwOut{Authenticated CloudFront Resource Parameters}
\KwData{Request Header: $\mathcal{H''}$}
\Begin{
$\mathcal{H''}$[x-bsy-uuid] $\leftarrow$ $dt$\\
\tcc{Generating header x-bsy-utkn}
suffix := \url{"/song/v4/stream?ets=true\&hlscapable=1\&sq=a\&lang=en\&id="}\\
search\_id $\leftarrow$ get\_search\_id($url$)\\
$msg$ := $\texttt{"POST"}||$ suffix $||$ search\_id $||\texttt{"\{\}"}$\\
$digest$ $\leftarrow$ SHA1\_HMAC($token$, $msg$)\\
$\mathcal{H''}$[x-bsy-utkn] $\leftarrow$ $uid||\texttt{":"}||\texttt{Base64Enc(}digest\texttt{)}$\\
\tcc{Generating x-bsy-t using Time Based OTPs and CryptoJS AES}
$\mathcal{H''}$[x-bsy-t] $\leftarrow$ AES($kt$, TOTP($dt||sk$, 600, 6))\\
\tcc{Send POST Request To Server}
$X$ $\leftarrow$ POST($url$, $\mathcal{H''}$, payload: $\{\}$)\\
return $X$
}
\label{requestmanifest}
\caption{request\_manifest() Function}
\end{algorithm}
\subsection{Digital Rights Management (DRM)}
\label{drm}
DRM can be thought of as a digital lock: a set of protections in place to ensure proper usage of proprietary technologies and copyrighted works\cite{DRMBook}. Although enforced in many countries through licensing agreements and laws such as the Digital Millennium Copyright Act (DMCA)\cite{DMCA}, which criminalise circumvention, their efficacy and ulterior motives have been the subject of constant debate. A discussion of these technicalities is, however, not relevant to this paper; we will instead focus on the DRM techniques used primarily by OTT service providers.
Most streaming services, such as Netflix, Hulu and Amazon Prime, require playback devices to support some form of DRM. Common choices for DRM schemes are Microsoft PlayReady\cite{playready}, Apple FairPlay\cite{fairplay}, Adobe PrimeTime\cite{primetime}, Marlin\cite{marlin} and Google Widevine\cite{widevine}. Most of these DRM schemes \textit{at least} provide browser support through Content Decryption Modules (CDMs), which follow the Encrypted Media Extensions (EME)\cite{EME} specification, which in turn is implemented by all major browsers today. This uniformity has made it remarkably easy and inexpensive to implement basic content protection. An example is Google's Shaka Player\cite{shaka}: an open-source player which can be integrated into a project with relative ease, uses the Widevine DRM scheme, and supports adaptive streaming over MPEG-DASH\cite{dash} and HLS.
\input{introduction/related-work.tex}
\subsection{HTTP Live Streaming (HLS)}
As the name suggests, HLS is an adaptive streaming protocol that delivers content over HTTP/HTTPS. The \textit{HLS Architecture} \cite{hls-arch} essentially involves three components:
\begin{enumerate}
\item The Streaming Server
\item The Distribution Component
\item The Streaming Client
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[width = 0.25\textwidth]{figures/hls_arch.png}
\caption{HLS Architecture}
\label{hls-arch}
\end{figure}
A typical configuration (Figure \ref{hls-arch}) consists of a hardware encoder that encodes Audio/Video input into MP3/H.264 and encapsulates it into an MPEG-2 transport stream. A software segmenter then divides the stream into chunks (.ts files) of equal duration and creates an index file \texttt{index.m3u8} that contains links to those chunks. This process is carried out for each encoding of the A/V stream, and a master index file, also called the \textit{manifest} and usually named \texttt{master.m3u8}, is generated. The manifest identifies and points to the different index files available for that particular A/V stream. It is served by the streaming server over HTTP to the client, which selects a suitable encoding based on the resources available and requests the index file of that encoding. Once the index is received, the client sequentially requests the chunks, enabling playback on its device. When network degradation is detected, the next chunk is retrieved from a lower-bitrate index file, keeping playback continuous.
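The client-side handling of an index file amounts to collecting its non-comment lines in order. A minimal sketch in Python (the function and argument names are ours; a real player would additionally honour tags such as \#EXT-X-TARGETDURATION):

```python
def parse_index(m3u8_text, base_url=""):
    """Collect the media-segment URIs listed in an index.m3u8 file.

    Lines beginning with '#' are tags or comments; every other
    non-empty line names a chunk (.ts file), in playback order.
    """
    segments = []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            segments.append(base_url + line)
    return segments
```

Fetching each returned URI in sequence and concatenating the responses reconstructs the full stream.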
\section{Background}
This section is provided as a primer for familiarising the reader with certain technologies that are heavily referred to in this work.
\input{background/adaptive-streaming.tex}
\input{background/hls.tex}
\input{background/drm.tex}
\subsection{Adaptive Streaming}
\label{adaptive-streaming}
Classical streaming protocols used a technique called \textit{progressive streaming} to deliver content. In this technique, a single file sitting on the vendor's server was delivered to the client requesting it. Though this method was simple, it had some obvious inefficiencies, which are demonstrated using a toy example below:
\begin{enumerate}
\item Consider two clients with two different displays, one having a 720p display and the other a 4K one. With the progressive streaming protocol, both clients would be delivered the \textit{same} content despite the differences in their hardware capabilities. If the content streamed were in 4K, it would not pose a problem for the second client; the first client, however, would receive 4K media that is eventually downscaled to 720p (or does not play at all, depending on the decoding hardware).
\item A problem would also arise if one of the clients had severely limited network bandwidth. This client would be unable to consume content meaningfully owing to its unnecessarily large size.
\end{enumerate}
The idea of \textit{adaptive streaming} aims at solving both these issues in real time. The first problem is solved by having encodings at multiple bitrates of the same media on the content-delivery servers, while the second problem is solved by providing the client with the ability to switch between encodings \textit{mid-stream} depending on its resources. This \textit{adaptiveness} is facilitated by dividing the source content into chunks and indexing it. Hence, if network degradation is detected by the client, the next chunk can be retrieved from a lower bitrate, thus maintaining the flow of content.
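The mid-stream switching logic amounts to picking, for each chunk, the highest-bitrate variant the current bandwidth can sustain. A hedged sketch in Python (the helper name and list-of-pairs representation are ours, not any particular player's API):

```python
def select_variant(variants, bandwidth_bps):
    """Pick an encoding for the next chunk.

    variants: list of (bitrate_bps, index_uri) pairs from the manifest.
    Returns the highest-bitrate variant not exceeding the measured
    bandwidth, falling back to the lowest variant if none fits.
    """
    affordable = [v for v in variants if v[0] <= bandwidth_bps]
    return max(affordable) if affordable else min(variants)
```

Re-running this selection before each chunk request is what lets playback degrade gracefully rather than stall.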
We will now describe the HTTP Live Streaming (HLS) protocol, an adaptive streaming protocol which is predominantly used by most streaming services including, in our case, Wynk Music, to serve content efficiently. An understanding of its architecture is necessary to grasp the underlying protocol. The HLS protocol was developed by Apple Inc.\ and released to the public in 2009. According to survey reports from 2019, HLS remains the most adopted protocol, with more than 45\% of broadcasters using it to provide streaming services to their clients \cite{hls-pop}.
\subsection{The Curious Case Of Wynk}
Airtel Wynk Music was the first service that we came across that had serious flaws in their content security mechanism. The flaws were such, that we were able to write scripts in order to automatically steal content at the highest available quality. The protocol diagram in Figure \ref{wynk1} describes the working of Wynk prior to our disclosure.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth, trim=0 30 0 0,clip]{figures/wynk1.pdf}
\caption{Content Retrieval Protocol - Wynk \textit{(Reconstructed)}}
\label{wynk1}
\end{figure}
\subsubsection*{The Protocol in A Nutshell}
\begin{itemize}
\item \textbf{Client Registration} The client is identified to the server using a \texttt{POST} request containing the \texttt{deviceId} and \texttt{userAgent} parameters in the payload. These parameters are set by the client and appear to be random in nature. Our observation was that persisting the values for these parameters had no effect on the execution of the protocol. As a response to this request, the server replies with values for \texttt{uid} and \texttt{token}.
\item \textbf{Compute Search Id For Resource} A \texttt{search\_id} was computed based on the song URL through a combination of string operations and table lookups.
\item \textbf{Acquire Authenticated URI} A \texttt{POST} request is made to retrieve the authenticated URI for content retrieval from the CDN.
Using \texttt{token} as the key, a \texttt{SHA1-HMAC} of a string containing the \texttt{search\_id} is generated. The Base64 encoded value of this HMAC is assigned to a request header \texttt{x-bsy-utkn} after appending it to the \texttt{uid}. The result of this request is a URL with a set of signed cookies which we will refer to as \textit{CloudFront Cookies}.
\item \textbf{Acquire Manifest} On making a request to the URI obtained as a response in previous request, the server responds with the manifest file which contains URIs to the various \texttt{index.m3u8} files available.
\item \textbf{Acquire \texttt{index.m3u8}} Using the URI for the index file of highest quality available, a request is made with query parameters being set using the \textit{CloudFront Cookies} obtained previously. A successful response from the server gives us the \texttt{index.m3u8} file of our choice.
\item \textbf{Getting Content} By making \texttt{GET} requests to the chunk URIs present in the index file and setting the appropriate query parameters, the client starts receiving \texttt{.ts} media files from the server. By appending those files in order, the complete audio file is obtained.
\end{itemize}
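The \texttt{x-bsy-utkn} computation described above is a plain \texttt{SHA1-HMAC}. The following Python sketch mirrors our reconstruction (the helper name is ours; it makes no claim about Wynk's actual server-side code):

```python
import base64
import hashlib
import hmac


def make_utkn(uid, token, search_id):
    """Reconstructed x-bsy-utkn header value for Wynk v1.

    The message is the request method plus the resource path; token
    (issued at registration) is the HMAC key.
    """
    prefix = "/streaming/v4/cscgw/"
    suffix = ".html?ets=true&hlscapable=1&sq=a&lang=en"
    msg = "POST" + prefix + search_id + suffix
    digest = hmac.new(token.encode(), msg.encode(), hashlib.sha1).digest()
    return uid + ":" + base64.b64encode(digest).decode()
```

Since the server can recompute the same HMAC from its copy of \texttt{token}, possession of \texttt{token} is the only secret involved, and it is handed out to any client that registers.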
Following our disclosure, Wynk made certain changes to their protocol that are listed below -
\begin{enumerate}
\item A code obfuscation scheme was introduced that replaced function/variable names and identifiers with what were essentially array lookups. The array used for lookup was included in the source code, which rendered the obfuscation useless.
\item The client registration process was redesigned and a time window was introduced using Time Based OTPs (TOTPs)\cite{totp}.
\end{enumerate}
However, the content retrieval part of the protocol remained the same. The introduced changes only served to complicate the process of obtaining the authenticated URIs for the CDN. Moreover, content was still being streamed without encryption\footnote{When we say \textit{without encryption}, we refer to the fact that after the decryption from the HTTPS layer and gzip unpacking has occurred, the audio content is \textbf{directly playable (unencrypted)}}. The revised protocol is described in Figure \ref{wynk2}. Needless to say, we did not face any difficulties in breaking Wynk once more.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/wynk2.pdf}
\caption{Revised Content Retrieval Protocol - Wynk \textit{(Reconstructed)}}
\label{wynk2}
\end{figure}
\subsubsection*{The New Protocol in a Nutshell}
\begin{itemize}
\item \textbf{Initialisation Of Identifiers} The first part of the protocol involves the generation of certain tokens, namely \texttt{BK, deviceId, pk \& sk}. \texttt{pk} and \texttt{sk} are values hardcoded into the source code, while \texttt{BK} and \texttt{deviceId} are generated using the \textit{epoch time} and a \textit{pseudorandom number}.
\item \textbf{\texttt{spit\_out(BK, deviceId)}} Two requests are made to the server using the outputs of this method, which does some intermixing of the strings \texttt{deviceId} and \texttt{BK}. The output strings are then appended with \texttt{"\_1.jpg"} and \texttt{"\_2.jpg"} and treated as endpoints for requests. We are not entirely sure why image extensions are used in particular; however, we can confidently say that the response to those requests has no further use. That being said, if either of those requests is not made, the protocol subsequently fails.
\item \textbf{\texttt{check() \& login()}} These functions are named after the endpoints to which requests are made. A successful response to the check endpoint returns several parameters which are used to compute the values of certain headers in the request to the login endpoint. A successful response from the \texttt{login} endpoint contains the parameters \texttt{dt, uid, token, kt} among others.
\item \textbf{Compute Search Id} This method did not change compared to the previous deployment of Wynk.
\item \textbf{Acquire Authenticated URI} The values received in the previous step are used to set the headers for another request as follows -
\begin{itemize}
\item \texttt{x-bsy-uuid} $\leftarrow$ \texttt{dt}
\item \texttt{x-bsy-utkn} similar computations as before\footnote{The changes can be observed in the Appendix}
\item \texttt{x-bsy-t} $\leftarrow$ \texttt{AES(kt, TOTP(dt||sk, 600, 6))}\footnote{\texttt{6 digit TOTP generated with a window of 600 seconds.} CryptoJS implementation of AES used}
\end{itemize}
This \texttt{POST} request if successful returns the \textit{CloudFront Cookies} with a URI. The rest of the protocol follows identically to the previous version of Wynk.
\end{itemize}
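The time window behind \texttt{x-bsy-t} is an ordinary RFC~6238 TOTP: 6 digits over a 600-second step, keyed with \texttt{dt||sk}. A self-contained Python sketch of the TOTP step (the CryptoJS-compatible AES wrapping is omitted, as it requires a third-party crypto library):

```python
import hashlib
import hmac
import struct
import time


def totp(secret, period=600, digits=6, now=None):
    """RFC 6238 TOTP over HMAC-SHA1, as used for Wynk's x-bsy-t window."""
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With \texttt{dt||sk} as the shared secret, client and server agree on the code for ten minutes at a time, which is the only freshness guarantee the revised protocol adds.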
It is evident from the analysis that Wynk went to greater lengths to complicate the retrieval mechanism post disclosure; however, they failed to address the crux of the problem.
\subsection{JioSaavn Joins The Jam}
With the findings from our work on Wynk, we were inspired to look into other platforms to test if the situation found with Wynk was a general norm among established players.
JioSaavn is the second most popular India-based music streaming service in terms of number of subscribers.
It took some vigilant effort to get to the media content but once the relevant execution path was found, piecing together the protocol was found to be extremely easy and straightforward.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/saa1.pdf}
\caption{Content Retrieval Protocol - JioSaavn \textit{(Reconstructed)}}
\label{saa1}
\end{figure}
\subsubsection*{The Protocol in A Nutshell}
\begin{itemize}
\item \textbf{Acquiring Song Info} Interestingly, the metadata related to the song is served within an HTML response.
It is found within the \emph{JavaScript} variable\\
\texttt{window.\_\_INITIAL\_DATA\_\_}.\\
The parameters essential for fetching the media content are \texttt{encrypted\_media\_url} and \texttt{perma\_url}.
\item \textbf{Generate Auth Token} A \texttt{GET} request is made to
\url{https://www.jiosaavn.com/api.php?call=song.generateAuthToken&url=<encrypted_media_url>&bit_rate=<bit_rate>}
to obtain the authorised URL that is used to fetch the media from the CDNs. The relevant parameters are
\texttt{url} which is the \texttt{encrypted\_media\_url} discussed above and \texttt{bit\_rate} which takes the
values \texttt{"128", "320", "64", "32", "16"}. The response contains \texttt{auth\_url} which is verified by the CDN to authorize a request.
\item \textbf{Downloading Media} A \texttt{GET} request is made to \texttt{auth\_url} to finally retrieve the relevant media.
This URL is sufficient for authorization.
\end{itemize}
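The \texttt{generateAuthToken} call above is plain URL construction. A Python sketch (the helper name is ours; endpoint and parameters are as reconstructed above):

```python
from urllib.parse import urlencode

API = "https://www.jiosaavn.com/api.php"


def auth_token_url(encrypted_media_url, bit_rate="320"):
    """Build the song.generateAuthToken request URL (reconstruction).

    bit_rate is one of "320", "128", "64", "32", "16".
    """
    query = urlencode({
        "call": "song.generateAuthToken",
        "url": encrypted_media_url,
        "bit_rate": bit_rate,
    })
    return API + "?" + query
```

A single \texttt{GET} to this URL yields \texttt{auth\_url}, which the CDN accepts without any further client authentication.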
\subsection{Getting Gaana}
Gaana was the third service that we looked at. Figuring out the protocol was easy as Gaana had no code obfuscation and at least for the non-logged in user, did not rely on cookies at all. Figure \ref{gaana} demonstrates the working of the protocol.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth,trim=0 30 0 0,clip]{figures/gaana.pdf}
\caption{Content Retrieval Protocol - Gaana (\textit{Reconstructed})}
\label{gaana}
\end{figure}
\begin{itemize}
\item \textbf{Getting Song Details} Where Wynk relied on interaction with the server to obtain an authorised resource URI, Gaana instead embeds all information as text in the \texttt{.html} code of the song's page. The current song information is a JSON string present in a \texttt{<span>} tag with \texttt{'data-type':'playSong'}. The \texttt{path} key contains \texttt{AES-CBC} encrypted URIs with \texttt{PKCS\#7} padding\cite{pkcs7} for various bitrates indexed as \texttt{high, medium, low}.
\item \textbf{Decrypting \texttt{path}} The \textit{key} and \textit{initialization vector} are \emph{hardcoded} in the JS files, and we use those values to decrypt and obtain the authorized URI.
\item \textbf{Acquire Manifest} A request to the authorised URI returns the \texttt{manifest.m3u8} which contains the URI for \texttt{index.m3u8}
\item \textbf{Playback} The \texttt{index.m3u8} file contains URIs for all chunks of the audio. After iterating through this file and making requests for all chunks (\texttt{.ts} files), we can append them together to obtain the complete audio.
\end{itemize}
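Since the key and IV are hardcoded, decrypting \texttt{path} reduces to a routine AES-CBC call followed by removal of the PKCS\#7 padding. The unpadding step, which is easy to get subtly wrong, can be sketched in pure Python (the AES-CBC decryption itself requires a crypto library and is omitted here):

```python
def pkcs7_unpad(data, block_size=16):
    """Strip and validate PKCS#7 padding after AES-CBC decryption.

    The last byte gives the pad length n; the final n bytes must all
    equal n, otherwise the ciphertext or key was wrong.
    """
    if not data or len(data) % block_size != 0:
        raise ValueError("length not a multiple of the block size")
    pad = data[-1]
    if pad < 1 or pad > block_size or data[-pad:] != bytes([pad]) * pad:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-pad]
```

A padding failure here is the quickest sign that the hardcoded key or IV extracted from the JS bundle is stale.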
\subsection{Comparative Analysis}
Among the case studies showcased in this work, reversing the protocol for Wynk was the most challenging in terms of effort, owing to the intricacies and complexity of the implementation.
Yet, in the end it turned out to be a matter of getting through different layers of obfuscation which offered no theoretical security guarantees. Compared to Wynk, the other three services, JioSaavn, Gaana and Hungama, had simpler content-serving mechanisms, which were reversed quite effortlessly. The patch implemented by Wynk after our disclosure shows that they are indeed concerned about protecting their content, and so it will be interesting to
see how these platforms embrace DRM in the future instead of working on stop-gap solutions which only delay the inevitable. \\
We present a summary of the best/worst practices in Table \ref{table-comparison} based on our investigations.
\begin{table}[htbp]
\resizebox{\linewidth}{!}{%
\renewcommand{\arraystretch}{1}
\begin{tabular}{lccccc}
\hline
\multicolumn{1}{c}{\textbf{Practice}} &
\textbf{Spotify} &
\textbf{Wynk} &
\textbf{JioSaavn} &
\textbf{Gaana} &
\textbf{Hungama} \\ \hline
\begin{tabular}[c]{@{}l@{}}Mandatory User\\ Identification\end{tabular} & \checkmark & & & & \\ \hline
\begin{tabular}[c]{@{}l@{}}Streamed Content\\ Encryption\end{tabular} & \checkmark & & & & \\ \hline
Hardcoded Keys & & \checkmark & & \checkmark & \\ \hline
DRM Scheme & \checkmark & & & & \\ \hline
\begin{tabular}[c]{@{}l@{}}Cookie Based \\ Authentication \\ Timeout\end{tabular} &
\checkmark &
\checkmark &
&
&
\\ \hline
\begin{tabular}[c]{@{}l@{}}Premium Content \\ Access Restrictions\end{tabular} & \checkmark & & & & \\ \hline
\begin{tabular}[c]{@{}l@{}}Obfuscation/\\ Minification\end{tabular} &
\checkmark &
\checkmark &
\checkmark &
\checkmark &
\checkmark \\ \hline
\end{tabular}%
}
\caption{A Comparative Analysis of Practices}
\label{table-comparison}
\end{table}
\subsection{Hunting Hungama}
Hungama is yet another popular music streaming service in India. We explored its content-serving mechanism, found it to be quite similar to JioSaavn's, and reverse engineered the following protocol.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,trim=0 30 0 0,clip]{figures/hungama.pdf}
\caption{Content Retrieval Protocol - Hungama \textit{(Reconstructed)}}
\label{hungama}
\end{figure}
\subsubsection*{The Protocol in A Nutshell}
\begin{itemize}
\item \textbf{Audio Player Data} A \texttt{GET} request is made to \url{https://www.hungama.com/audio-player-data/track/<song_id>} to fetch the metadata of the song. The \texttt{song\_id} parameter required for this request is obtained from the WebApp URL of the song, which is of the form \url{https://www.hungama.com/song/<name>/<song_id>/}.
The response contains the relevant values \texttt{media\_id} and \texttt{file}. The \texttt{file} URL contains the parameter \texttt{token}.
\item \textbf{Fetching Media Url} A \texttt{POST} request with an empty body is made to \url{http://www.hungama.com/mdnurl/song/<song_id>?token=<token>}.
The response contains \texttt{media\_url} which is the final URL that is used to retrieve the media. The bit rate can be chosen by setting the \texttt{hcom\_audio\_qty} parameter to one of \texttt{"high", "low", "medium"} in the \texttt{Cookie} header.
\item \textbf{Downloading Media} A \texttt{GET} request is made to \texttt{media\_url} to download the relevant media. This URL is sufficient for authorization.
\end{itemize}
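The whole flow reduces to two URL constructions and a cookie. A Python sketch (the helper name is ours; endpoints and the \texttt{hcom\_audio\_qty} cookie are as reconstructed above):

```python
from urllib.parse import quote


def media_request(song_id, token, quality="high"):
    """Build the mdnurl request and the quality-selecting Cookie header.

    quality is one of "high", "medium", "low".
    """
    url = ("http://www.hungama.com/mdnurl/song/"
           + quote(song_id) + "?token=" + quote(token, safe=""))
    headers = {"Cookie": "hcom_audio_qty=" + quality}
    return url, headers
```

The \texttt{media\_url} in the response can then be fetched with a plain \texttt{GET}; no further authorization is checked by the CDN.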
\section{Case Studies}
\label{case-study}
This section forms the basis of the work done in this paper. We present here a reconstruction of the streaming protocols used by the four biggest \textit{(by subscriber base)} music streaming services in India, with a view to formulating an exploit to steal their content. The reconstruction of these protocols involved reverse engineering the JavaScript modules executing on the client browser using static and dynamic techniques such as code de-obfuscation and debugging, observing network packets using Burp\cite{ref_burp}, and a fair amount of intuition. In all cases, we were able to \emph{completely replicate} the protocols in order to get access to the audio content using these standard reverse engineering techniques. Some code obfuscation aside, none of these services used industry-standard DRM, and all could be broken with minimal effort by a dedicated attacker.
Given below is a summarised analysis of Airtel Wynk, JioSaavn, Gaana and Hungama. Using this analysis we were able to write scripts to automatically steal content. In the interest of keeping the descriptions concise, we have deliberately sacrificed rigorous function definitions in favour of broad descriptions of what those functions do, while illustrating a protocol. The token/variable names mentioned are also similar to their names in the actual JS code. For a detailed description refer to Appendix~\ref{pseudocode}. The implementation details are furnished in Section \ref{impl}.
\input{case-studies/wynk.tex}
\input{case-studies/jiosaavn.tex}
\input{case-studies/gaana.tex}
\input{case-studies/hungama.tex}
\section{Conclusion}
\label{conclusions}
In this paper we surveyed various OTT music streaming services, particularly those of Indian origin, and analysed their attack surface. Given their growing popularity as the de facto standard of media consumption, our observations led us to conclude that most streaming services are highly vulnerable to basic stream ripping and reverse engineering attacks. To present our case, we analysed the four most popular music streaming apps in India, namely Wynk, Gaana, JioSaavn and Hungama, and were able to steal protected content without restrictions. In the case of Wynk, we would like to emphasise that even after the attacks had been disclosed, the patched version of their protocol was broken again using the same techniques, clearly showcasing the pitfalls of doing patchwork instead of adopting a systematic solution. A detailed comparative study is furnished to show the extent of deviation from the state of the art, and possible mitigation strategies are also proposed. Through our work, we hope that such platforms take cognizance of the lax security measures in place and improve upon them.
\subsection{An Inherent Problem of the Analog Hole~\cite{analoghole}}
\label{analog-hole}
For a considerable amount of time now, there has been a raging debate on the effectiveness of DRM solutions. The aptly named \textit{analog hole} is a problem inherent to the task of protecting audio/video content, and it is frequently invoked by DRM critics, for good reason. Put simply, the problem can be illustrated as follows.
Imagine an ideal system that is completely secure in terms of communication and implementation. The client receives the encrypted content from the server and the decryption process occurs securely, following which the content is displayed on the screen or played by the speakers. The analog hole problem states that human-perceivable analog signals can always be recaptured and re-encoded to a digital format, thereby nullifying all previous protection. As an example, a pirate can always re-capture video content by directing the output to a capture card instead of a screen, and similarly microphones or line taps can be used to re-capture audio content.
The proponents of DRM however argue that since all analog to digital conversions are \textit{lossy}, one can never actually retrieve the original content by exploiting the analog hole. We leave it to the readers to decide which side of the fence they lean on.
\subsection{Why No DRM?}
\label{drm-discussion}
We wonder what possible reasons could have led to such an oversight. DRM protection is not exactly a new or novel concept and has been a part of the industry for a fair amount of time. This would imply that deploying these services with such primitive security measures was a conscious decision at worst and a rushed one at best.
A possible reason that comes to mind is the shaky compatibility of the discussed DRM schemes with current playback devices which could result in the alienation of a large chunk of the subscriber base. In a massive, competitive market such as India, this could potentially spell the demise of a competing streaming service at the hands of its rivals.
\subsection{Attacks On Widevine}
Widevine is a DRM system developed by Google in order to protect content from misuse by the client. L3 level Widevine is supported by the latest versions of most major browsers and in our case, it is used by our benchmark - Spotify. When we chose to treat Spotify as the \textit{ideal} service with content protection we did so by blindly trusting Widevine based on its popularity. We attempt to rectify our assumptions in this section.
There have been multiple reports of researchers breaking L3 level Widevine. The first such claim was made by a security researcher \textit{David Buchanan} in a tweet~\cite{dbuchanan} in January 2019. This was followed by a blog from \textit{Fidus Information Security}~\cite{fidusinfosec} who claimed to have decrypted an episode of \textit{Stranger Things} from Netflix which used L3 Widevine. This group claims to have used a modified Differential Fault Analysis (DFA) approach to recover the keys used for decryption by the Widevine Module. Despite making claims, neither of the security researchers published working PoCs or exploits in order to prevent misuse. Speaking of misuse, a blackhat group called \textit{The Scene} also claims to have a working break~\cite{scene} which they use to pirate content off of Amazon Prime and Netflix.
\subsection{StreamRipping}
\label{streamripping}
In this section we discuss an alternate attack strategy that continues to be very popular among pirates~\cite{streamripinc}. This strategy, called \emph{StreamRipping}, exploits the \textit{un-encrypted} nature of the streamed content within a browser. Essentially, once decryption at the HTTPS layer has occurred, if the content itself is not encrypted, it is visible to the browser and hence to the attacker. This content can then simply be dumped to a file for later distribution, thus \textit{ripping from the stream}.
There are various tools and services which make it easy to \emph{StreamRip}.
An example would be the browser add-on \emph{Audio Downloader Prime}~\cite{audprime} for \emph{Mozilla Firefox}. We found this tool to be quite effective in seamlessly capturing the content streamed from JioSaavn and Hungama, where the media was being served in response to a single request. In the case of Wynk, the chunks of the stream were detected and were retrievable as \texttt{.mp3} files.
\subsection{Mitigation Strategies}
Following our analysis, we found that most of the streaming services were guilty of malpractice (refer to Table~\ref{table-comparison}). Further, our success in completely reverse engineering the protocols shows without a doubt that shallow patches will not prove secure in the long term. Short-term strategies might include stronger obfuscation, reliance on precompiled binaries, etc., but none of these techniques would stand the test of time. As of today, the best available protection is through DRM schemes like Widevine, PlayReady and FairPlay, and given the ease of setting up these schemes, they may turn out to be the best long-term strategy as well.
\section{Discussion}
\label{discussions}
\input{case-studies/analysis.tex}
\input{discussions/drm-possible.tex}
\input{discussions/analog-hole.tex}
\input{discussions/widevine-attacks.tex}
\input{discussions/streamripping.tex}
\section{Implementation Details}
\label{impl}
After much deliberation and thought over the fact that a working PoC could potentially be misused, we decided to exclude it from this work. As our experience with Wynk taught us, unless radical changes are made to those services, their protocols can potentially always be broken and data stolen on a massive scale. Our work illustrates this very fact. Ultimately we hope that these services deploy proper DRM measures and not a workaround patch that will perpetuate this game of cat-and-mouse.
\subsection*{Earlier Work on DRM}
Breaking DRM protection has been the focus of all kinds of hacker groups ever since the popularity of commercial software grew, giving rise to the so-called pirate hubs which are still popular today. Right from spoofing KMS systems for acquiring Windows licenses to patching AAA titles deploying the Denuvo Anti-Tamper~\cite{denuvo} system, the community has been witness to some rather creative albeit illegal ways of stealing content over the years.
Academic attention to the problem of breaking DRM systems~\cite{Wyang,Kindle,ACMDRM}, however, has been rather mild. To the best of our knowledge, our work is the first such analysis of OTT Indian music streaming webapps. We did, however, take inspiration for the subject from the work done by Wang et al.~\cite{Wyang} on automatically bypassing DRM systems, and for a way to present our findings we looked at Kumar et al.'s~\cite{UPI} work on analysing UPI apps in India.
\subsection{Responsible Disclosure}
\label{disclosure}
All the services mentioned here were contacted prior to the submission of this manuscript with reports on the vulnerabilities in their protocols and with offers to collaborate on the fix. It should be noted that none of these services have vulnerability disclosure programs, and hence finding a suitable point of contact was difficult. When informed of the break, Airtel Wynk was receptive to the idea of a collaborative fix but ended up deploying a haphazard patch without consultation or proper notice, which was eventually broken using the same techniques.
\section{Introduction}
\label{introduction}
OTT is an acronym for ``over-the-top'' and refers to the distribution of multimedia (audio, video) content over a public network. Recent trends have shown a mass adoption of smart mobile devices in the consumer market. This coupled with a higher penetration of high-speed, cheap Internet and the emergence of advanced technologies, such as 5G, 4G, developed online payment infrastructure and continual demand for content within the entertainment domain, projects the global OTT service market to grow from \$81.60 billion in 2019 to \$156.9 billion by 2024 exhibiting a CAGR (Compound Annual Growth Rate) of 14\% ~\cite{mnm20}. The Asia Pacific region is set to record the highest growth rate during the forecast period. According to a joint report published by the Indian Music Industry (IMI) and Deloitte India~\cite{ref_deloitte1}, the audio-video OTT market in India is valued at around US\$ 280 million with nearly 150 million monthly active users accessing soundtracks across various platforms.
\begin{table}[h]
\centering
\resizebox{\linewidth}{!}{
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{cccc}
\hline
\textbf{Service Name} & \textbf{Business Model} & \textbf{Origin} & \textbf{Reference} \\ \hline
Airtel Wynk & \begin{tabular}[c]{@{}c@{}}Bundle, \\ Ad Supported\end{tabular} & Domestic & \cite{wynk} \\ \hline
Apple Music & Paid & International & \cite{apple-music} \\ \hline
Amazon Music & Paid & International & \cite{amazon-music} \\ \hline
Gaana & Ad Supported & Domestic & \cite{gaana} \\ \hline
Hungama & Ad Supported & Domestic & \cite{hungama} \\ \hline
JioSaavn & \begin{tabular}[c]{@{}c@{}}Bundle,\\ Paid\end{tabular} & Domestic & \cite{jiosaavn} \\ \hline
Spotify & Ad Supported & International & \cite{spotify_ref} \\ \hline
Youtube Music & Subscription & International & \cite{youtube-music} \\ \hline
\end{tabular}
}
\caption{OTT music services currently operating in India}
\label{tab:my-table}
\end{table}
Revenue from digital means contributes nearly 78\% to the overall recorded music industry revenue in India, and 54\% globally~\cite{ifpi18}. A survey of India's audio streaming market reveals that it is primarily divided among the domestic players Wynk, Gaana, JioSaavn and Hungama and the global players Spotify, Amazon Music, Apple Music and, more recently, YouTube Music (Table~\ref{tab:my-table}). As per a consumer insights survey conducted by the IFPI in 2018~\cite{ifpi_mcir_18}, an average internet user in India spends 21.5 hours every week listening to music, higher than the global average of 17.8 hours. It is interesting to note that despite this popularity, contemporary literature lacks a security analysis of \emph{any} of the domestic OTT platforms, and this forms the primary motivation of this work.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/music-survey.pdf}
\caption{The dominance of streaming as the main source of revenue in the Indian music industry~\cite{ifpi_gmr_19}}
\label{fig:music-streaming-services-in-india}
\end{figure}
This easy and free access to content was thought to have solved several issues regarding unsanctioned sharing of media~\cite{spotify}, as it provided Music-as-a-Service, which was more lucrative to the consumer than content ownership~\cite{master_thesis}. However, with the consequential emergence of ``stream-ripping'', piracy has increasingly kept pace. The gravity of the situation is reflected in the numbers: estimates point to almost US\$ 250 million lost each year in India alone, while the estimated number of stream-rippers in the US has grown to an alarming 17 million~\cite{ref_streamrip}. The surging popularity of such platforms has also not been missed by the shadier sections of our society with more sinister agendas~\cite{ref_blackhat}. Coupled with the 40\%--60\% of revenue that is lost to pirates, there is hence a dire need to take a critical look at the security of such content-delivering platforms. A recent paper on bypassing DRM protection in online video streaming~\cite{DBLP:conf/uss/WangSKV13} is one of many research efforts highlighting the need for a deeper understanding of how OTT services should be deployed in practice.
In this work we systematically analyze the four leading OTT music service providers in India, namely Wynk, Gaana, JioSaavn and Hungama, comparing them to the best practices in the industry. To our great surprise, our research reveals that none of these platforms adopt \textit{any} state-of-the-art DRM protection. Instead, they attempt only a very rudimentary form of code obfuscation.
As a result, we were able to not only reverse engineer their protocols but also devise mechanisms leading to automated, unsupervised and uninterrupted download of music from their servers. We develop detailed Proof-of-Concepts for the same and illustrate case-studies on each of the platforms. To put things in context, we also investigate the Spotify web-application and find it adopting very standard DRM protection making it a benchmark in the comparative study that we furnish later. Finally, we discuss possible mitigation strategies to salvage the situation.
As part of responsible disclosure, we attempted to communicate this work to the concerned parties. With the exception of Wynk, responses from the others are awaited.
\input{introduction/contributions.tex}
\input{introduction/layout.tex}
\input{introduction/disclosure.tex}
\subsection*{Our Contributions}
Our contributions can be summed up as follows:
\begin{itemize}
\item We present a security analysis of the content protection systems in place for four of the biggest music streaming services (by subscriber base) in India.
\item We highlight the lax security protocols in place in \textbf{all} these services by attempting to steal content in an undetectable way and provide proof of concepts to automatically acquire content by reverse engineering their content delivery protocols.
\item We present a comparative study of these apps with the current state-of-the-art DRM systems.
\item We present a discussion on the design choices employed by these services and make recommendations to enhance their security.
\end{itemize}
\subsection*{Organisation Of The Paper}
The following sections contain the results and conclusions of our experiments in reverse engineering the said services. We first provide a primer on adaptive streaming in Section~\ref{adaptive-streaming}, which is used by most OTT streaming services and which helps us elucidate the protocols involved clearly. We follow this up in Section~\ref{drm} with a brief note on present-day DRM systems. Section~\ref{spotify} is dedicated to describing the Widevine DRM used by Spotify to protect its content, establishing a benchmark for comparing the other services. This leads us into the results of the reverse engineering in Section~\ref{case-study}, where we give reconstructions of the protocols used. Section~\ref{discussions} contains discussions on the flaws in current DRM systems and the design choices made by these services, followed by our conclusions in Section~\ref{conclusions}.
\section{Spotify: Demonstrating Widevine}
\label{spotify}
Having been baffled by the results of our preliminary investigations into Wynk, we were curious to see whether this trend was followed across the board, even by the big players in the game, and hence we decided to focus our attention on Spotify. We were quite satisfied to observe that Spotify proved resilient to the basic reversing techniques that had proved fatal for the other services in terms of content security.
However, we must clarify that we do not claim Spotify is infallible, just that breaking it would require more effort than what was put in for all the others \textit{combined}. Here we present a high-level overview explaining how Spotify protects its content while streaming, and we later use this analysis to highlight the deficiencies in the other protocols.
\subsection{Methodology}
We decided to target the \textit{Spotify Web Player} as it was clearly suited for comparison with the other services. By monitoring network requests made by the web player and using a combination of static and dynamic analysis of the client-side JavaScript modules, we were able to piece together its inner workings. The documented architecture~\cite{widevine-arch} for Widevine was also heavily referred to in order to establish context.
\subsection{Summary Of Findings}
Spotify\footnote{For future reference, unless explicitly stated otherwise, a reference to Spotify refers to the Spotify Web Player} currently uses Widevine Level L3 to implement DRM for its content, which is streamed to a modified version of the Shaka Player that uses the HTML5 Encrypted Media Extensions to interact with the CDM.\footnote{For a detailed explanation, refer to the EME documentation~\cite{EME}}
The CDM is a \textit{precompiled} binary\footnote{An open-source CDM, or OCDM, can be viewed in the Chromium project's source; the Chrome CDM, however, is closed source \cite{DRMSupport}}, implemented as a shared library (\texttt{libwidevinecdm.so} on Linux) for Google Chrome and as a plugin for Mozilla Firefox.
Coming to the retrieval, Figure~\ref{fig-spotify} depicts the protocol followed. We would like to point out that we have intentionally left out the exact details of some parts of the protocol in the interest of keeping the description brief.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/spotify.pdf}
\caption{Spotify's Content Retrieval Protocol - Widevine L3 \textit{(Reconstructed)}}
\label{fig-spotify}
\end{figure}
\begin{enumerate}
\item \textbf{Login} This is the first part of the protocol that a client encounters when trying to start playback. Streaming does \textit{not} start until user identification and authentication is done. There are numerous options for authentication using OAuth but all of them effectively end up setting identification cookies on the client. Let us denote these cookies by $\mathcal{C}$. These cookies are later used to setup and maintain a player \textit{state} which is used to track playback, sync multiple devices, gather insights etc.
\item \textbf{Acquire Access Token} An access token is requested from the Spotify servers using the cookies $\mathcal{C}$. As in the protocol, we shall refer to this token as \texttt{Bearer}. \texttt{Bearer} is an authenticated token that has a long expiration time and is required by, and subsequently used for, most operations of the Spotify client.
\item \textbf{Get Resource URI} To actually retrieve the media file from the Content Distribution Network (CDN), we require an authorized URI which permits access to the content on the server. This URI is obtained from the Spotify servers by making a request and leveraging the \texttt{Bearer} token as authorization. If the \texttt{Bearer} token is valid, the server responds with a list of multiple URIs (we assume for redundancy).
\item \textbf{Retrieving the First Chunk Of Data} According to the widevine specification, the first chunk of data is used to gather licensing information for subsequent decryption of content. Having obtained the CDN URI in the previous request, the player requests the first chunk of data of a certain size by setting the \texttt{Range} header in the request. The server response which contains a chunk of the media file (\textit{distinguished from its header}) is used to extract initial licensing information called \texttt{initData}. \texttt{initData} is then passed on to the Content Decryption Module (CDM).
\item \textbf{Obtaining the License} The fragmented\footnote{See \texttt{moof} boxes \cite{moofbox} for information on the format of the media fragments} media chunks retrieved from the servers are encrypted using \texttt{AES-128} in \texttt{CTR} mode. Hence, in order to initialise playback, these media chunks need to be decrypted. The information (keys, initialisation vectors) needed for decryption is included in the \textit{license}. Based on the \texttt{initData}, the CDM generates an \textit{encrypted} license request and passes it to the player. The player then relays this request to a license URI that was obtained asynchronously. If the request and its payload are valid, the server responds with an \textit{encrypted} response that is relayed by the player to the CDM. The CDM decrypts the response to obtain the license.
\item \textbf{Playback} Once the licensing information has been obtained, 10 second chunks are downloaded from the servers and passed to the CDM which decrypts those chunks and passes them to the Audio/Video Stack for playback.
\end{enumerate}
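Step 5 above mentions that the media arrives as fragmented MP4 (\texttt{moof} boxes). As a purely illustrative sketch, the top-level ISO BMFF box structure of such a fragment can be walked with a few lines of Python; note that the byte layout below is synthetic, constructed only to exercise the parser, and is not captured Spotify traffic.

```python
import struct

def parse_boxes(buf):
    """Parse top-level ISO BMFF (fMP4) boxes: each box starts with a
    4-byte big-endian size followed by a 4-byte ASCII type tag."""
    boxes, offset = [], 0
    while offset + 8 <= len(buf):
        size, = struct.unpack_from(">I", buf, offset)
        btype = buf[offset + 4:offset + 8].decode("ascii")
        boxes.append((btype, size))
        if size < 8:  # malformed or 64-bit size; stop in this sketch
            break
        offset += size
    return boxes

def make_box(btype, payload=b""):
    """Build a synthetic box for demonstration purposes."""
    size = 8 + len(payload)
    return struct.pack(">I", size) + btype.encode("ascii") + payload

# A synthetic fragment: a styp box, a moof box, and an mdat box.
fragment = make_box("styp") + make_box("moof", b"\x00" * 16) + make_box("mdat", b"\xAA" * 32)
print(parse_boxes(fragment))  # → [('styp', 8), ('moof', 24), ('mdat', 40)]
```

The same size-then-type framing is what a player (or the CDM's caller) relies on to locate the encrypted sample data inside each fragment.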
\subsection{Lessons Learnt}
Spotify does a few things very well for protecting its content as it is streamed to a client's device. For the sake of establishing standard practices, we highlight a few of them below:
\begin{enumerate}
\item \textbf{Mandatory User Identification} The login process forces a client to identify itself in order to use the services. Spotify, while providing flexibility of login options with OAuth also implements reCAPTCHA protection against bots. In addition to this, Spotify can track each user's activity which could potentially be used to recognise malicious use.
\item \textbf{Streamed Content is Encrypted} To prevent \emph{streamripping} \textit{(discussed in Section \ref{streamripping})}, content stored on the servers is encrypted.
\item \textbf{No Hardcoded Keys} The keys for decryption of the streamed content are not hardcoded in the files that the user has direct access to.
\item \textbf{License Information is Invisible to the Player} The license information passed between the CDM and the server is encrypted and hence is not accessible to the user.
\item \textbf{Content Decryption Module} The CDM is theoretically the weakest part of the protocol. However, in terms of \textit{usable}/\textit{practical} security, since it is a closed-source binary, it offers a basic level of protection against direct observation of the decrypted content, but it remains theoretically vulnerable to black-box cryptanalysis techniques and some implementation-level exploits. L2- and L1-level Widevine attempt to mitigate this vulnerability by having the decryption occur in a \textit{Trusted Execution Environment (TEE)}~\cite{DRMExplained}.
\end{enumerate}
Now that a benchmark has been established, we proceed to present an analysis of the four biggest OTT music streaming services in India, in the process highlighting security gaffes where DRM is concerned.
\section{Appendix A: Experimental procedure with example \label{appendixa}}
In this section, we describe in some detail the proposed protocol for
estimating the average fidelity of a noisy implementation $\tilde{\sop{U}}$ to an
ideal $n$-qubit Clifford gate $\sop{U}$. For illustrative purposes, we use
the encoding circuit~\cite{Got97a} of the five-qubit
code~\cite{LMPZ96} (shown in Fig.~\ref{fig:enc5}) as a running
example of the unitary process to be certified. As mentioned in the
main text, this protocol can be viewed as a variation on a
local-Cliffords-and-permutations twirling scheme described in
ref.~\cite{ESM+07}, and detailed in ref.~\cite{Silva08} as the parity
monitoring protocol.
\begin{figure}
\includegraphics[width=.5\textwidth]{enc5a.pdf}
\caption{Encoding network for the [[5,1,3]] stabilizer code~\cite{Got97a}.\label{fig:enc5}}
\end{figure}
In order to estimate the average fidelity between a physical
implementation $\tilde{\sop{U}}$ and the ideal encoding circuit $\sop{U}$, one can use
the following procedure.
\begin{enumerate}
\item Offline, for each $w\in\{1,\cdots,n\}$:
\begin{enumerate}
\item Choose an operator $\op{M}_k$ with weight $w$ from the Pauli
group on $n$-qubits. The overall sign of the Pauli operator should
also be chosen uniformly at random. This is a tensor product of $n$
single-qubit Pauli operators, where $n-w$ of them are the
identity. This step amounts to picking a random string
of $2w+1$ bits, and choosing $w$ qubits at random on which to act
with the non-identity Paulis.
\item For the running example, one such choice for, say, $w=3$ would
be $\op{M}_{1} = \op{I}\otimes \op{Y}\otimes \op{I}\otimes \op{X}\otimes \op{Z}$.
\item Repeat this procedure $k_w=O(1/\epsilon^2)$ times in order
to achieve a final accuracy of $\epsilon$~\cite{ESM+07}.
\end{enumerate}
\item In the laboratory:
\begin{enumerate}
\item For each choice of $\op{M}_k$, prepare a state $\op{\rho}_k$ such that
\begin{equation}
r_k:=\langle{\op{M}_k}\rangle_{\op{\rho}_k} = \tr \op{\rho}_k \op{M}_k\ne 0\,.
\end{equation}
This, for example, can always be achieved by applying local Cliffords
to the state $\ket{0}^{\otimes n}$. For $\op{M}_1=\op{I}\otimes \op{Y}\otimes \op{I}\otimes \op{X}\otimes \op{Z}$, one choice of local
operations which achieves this is $\op{C}_1 = \op{I}\otimes \op{P}\op{H}\otimes \op{I}
\otimes \op{H} \otimes \op{I} $, where
$\op{H}={1\over\sqrt{2}}\left(\begin{smallmatrix}1 & 1\\ 1 & -1\end{smallmatrix}\right)$
and $\op{P}=\left(\begin{smallmatrix}1 & 0\\ 0 &
i\end{smallmatrix}\right)$,
resulting in $r_1=1$.
\item Apply the noisy implementation $\tilde{\sop{U}}$ to the prepared state
$\op{\rho}_k$
\item Measure the expectation value
\begin{equation}
\begin{split}
t_k:=
&\expect{f(\op{M}_{k},C_i,\sop{U})}{\tilde{\sop{U}}(\op{\rho}_k)} \\
= &\expect{\sop{U}(C_i\op{M}_{k}C_i^\dagger)}{\tilde{\sop{U}}(\op{\rho}_k)} \\
= &\tr\left[C_i^\dagger{\mathcal E}(C_i\ \ketbra{0}{0}^{\otimes n}\ C_i^\dagger)C_i\op{M}_k\right].
\end{split}
\end{equation}
If, for example, one had access only to projective measurements in the
Pauli $Z$ eigenbasis on the individual qubits, a measurement of
$f(\op{M}_{k},C_i,\sop{U})$ can be accomplished by a basis change which is
a tensor product of single-qubit Clifford transformations. For the
running example, $f(\op{M}_{1},C_1,\sop{U}) = \op{Z}\otimes \op{Z}\otimes \op{I}\otimes \op{Y}\otimes \op{X}$, so that the
transformation needed to change Pauli $Z$ measurements into this
observable would be $\op{C}_1^{'}=\op{I}\otimes \op{I}\otimes
\op{I}\otimes \op{P}\op{H}\otimes \op{H}$.
\end{enumerate}
\item The average fidelity should be estimated as follows:
\begin{enumerate}
\item For each weight $w$, $\lambda_w$ is the average of the
ratio $t_k/r_k$ for all $\op{M}_k$ of weight $w$ ($\lambda_0$ is
taken to be $1$).
\item $\Pr(\text{no error})$ is the inner product of $\lambda_w$ and the
first row of $\Omega^{-1}$, which is a matrix described in
refs.~\cite{ESM+07,Silva08}. This results in
\begin{align}
\Pr(\text{no error}) & = \sum_{w=0}^n {3^{w} {n\choose w}\over 4^n}~\lambda_w .
\end{align}
\item The average fidelity $\overline{F}$ between $\sop{U}$ and $\tilde{\sop{U}}$ is
finally given by
\begin{equation}
\overline{F} = {2^n \Pr(\text{no error}) + 1 \over 2^n + 1}
\end{equation}
\end{enumerate}
\end{enumerate}
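The bookkeeping in step 3 is easy to get wrong in the laboratory, so a minimal sketch may help. The function below (a hypothetical helper, not part of the protocol itself) turns the weight-resolved averages $\lambda_0,\dots,\lambda_n$ into the average fidelity via the last two displayed equations:

```python
from math import comb

def avg_fidelity(lambdas):
    """lambdas = [lambda_0, ..., lambda_n], with lambda_0 = 1 by convention.
    Computes Pr(no error) = sum_w 3^w C(n, w) lambda_w / 4^n and then
    F = (2^n Pr(no error) + 1) / (2^n + 1)."""
    n = len(lambdas) - 1
    p_no_error = sum(3**w * comb(n, w) * lam
                     for w, lam in enumerate(lambdas)) / 4**n
    return (2**n * p_no_error + 1) / (2**n + 1)

# Ideal (noiseless) five-qubit implementation: every lambda_w equals 1,
# and since sum_w 3^w C(5, w) = 4^5, the average fidelity is exactly 1.
print(avg_fidelity([1.0] * 6))  # → 1.0
```

The noiseless check also serves as a quick consistency test of the combinatorial weights $3^w\binom{n}{w}/4^n$, which must sum to one.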
\section{Appendix B: Software}
A simple script which automates the computation of the transformed
Pauli operators given some Clifford operation can be found at
\url{http://github.com/marcusps/TransPauli}.
\textbf{Acknowledgements --} We thank J. Emerson, A. Blais and
J. Gambetta for comments on the manuscript. This work was funded by
the Natural Sciences and Engineering Research Council (NSERC) of
Canada.
\section{Introduction}
Compressed sensing aims to recover high dimensional sparse signals
based on considerably fewer linear measurements. Formally one
considers the following model:
\begin{equation}
\label{model} y = \Phi\beta + z
\end{equation}
where the matrix $\Phi \in {\mathbb R}^{n\times p}$ (with $n \ll p$) and $z\in
{\mathbb R}^n$ is a vector of measurement errors. The goal is to reconstruct
the unknown signal $\beta\in {\mathbb R}^p$ based on $y$ and $\Phi$. A
remarkable fact is that $\beta$ can be recovered exactly in the
noiseless case under suitable conditions, provided that the signal is sparse.
A na\"ive approach for solving this problem is to consider
$\ell_0$ minimization where the goal is to find the sparsest solution
in the feasible set of possible solutions. However this is NP hard and thus is
computationally infeasible. It is then natural to consider the
method of $\ell_1$ minimization which can be viewed as a convex
relaxation of $\ell_0$ minimization. The $\ell_1$ minimization method in
this context is
\begin{equation} (P_{{\cal B}}) \quad\quad
\hat \beta = \mathop{\rm arg\min}_{\gamma \in {\mathbb R}^p} \{\|\gamma\|_1 \; \mbox{
subject to } \; y - \Phi\gamma\in {\cal B}\}
\end{equation}
where ${\cal B}$ is a bounded set determined by the noise structure. For example,
${\cal B}=\{0\}$ in the noiseless case and ${\cal B}$ is the feasible set of the
noise in the case of bounded error. This method has
been successfully used as an effective way for reconstructing a
sparse signal in many settings. See, e.g., \cite{CanRomTao,CanTao05,CanTao06,CanTao07,Donoho,DonHuo,CaWaXu,CaWaXu1}.
One of the most commonly used frameworks for sparse recovery via $\ell_1$
minimization is the {\sl Restricted Isometry Property (RIP)}
introduced by Cand\`es and Tao \cite{CanTao05}.
RIP essentially requires that every subset of columns of $\Phi$ with
certain cardinality approximately behaves like an orthonormal system.
A vector $v=(v_i) \in
{\mathbb R}^p$ is {\it $k$-sparse} if $|\mbox{supp} (v)| \le k$, where $\mbox{supp} (v) = \{ i : v_i\neq
0\}$ is the support of $v$. For an $n\times p$ matrix $\Phi$ and an integer $k$, $1\le k \le p$,
the {\it $k$-restricted isometry constant} $\delta_k(\Phi)$ is the smallest constant such that
\begin{equation}\label{cond:2.1}
\sqrt{1-\delta_k(\Phi)}\|c\|_2 \le \|\Phi c\|_2 \le \sqrt{1+\delta_k(\Phi)}\|c\|_2
\end{equation}
for every $k$-sparse vector $c$. If $k+k'\le p$, the
{\it $k, k'$-restricted orthogonality constant} $\theta_{k,k'}(\Phi)$,
is the smallest number that satisfies
\begin{equation}\label{cond:2.2}
|\langle \Phi c, \Phi c'\rangle| \le \theta_{k, k'}(\Phi)\|c\|_2\|c'\|_2,
\end{equation}
for all $c$ and $c'$ such that $c$ and $c'$ are $k$-sparse and $k'$-sparse respectively, and have
disjoint supports. For notational simplicity we shall write $\delta_k$
for $\delta_k(\Phi)$ and $\theta_{k, k'}$ for $\theta_{k, k'}(\Phi)$
hereafter.
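As a quick illustration of these definitions, note that for $1$-sparse vectors the constants reduce to column norms and pairwise inner products: $\delta_1=\max_i\bigl|\|\phi_i\|_2^2-1\bigr|$ and $\theta_{1,1}=\max_{i\ne j}|\langle\phi_i,\phi_j\rangle|$, where $\phi_i$ denotes the $i$-th column of $\Phi$. The sketch below checks this on a toy $3\times 4$ matrix (our own illustrative choice):

```python
from itertools import combinations

# Toy 3x4 matrix, stored column-wise; all columns have unit l2 norm.
cols = [(1.0, 0.0, 0.0),
        (0.0, 1.0, 0.0),
        (0.0, 0.0, 1.0),
        (0.6, 0.0, 0.8)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# For a 1-sparse c = t e_i we have ||Phi c||_2^2 = t^2 ||phi_i||_2^2,
# so delta_1 is the worst deviation of a squared column norm from 1.
delta_1 = max(abs(dot(c, c) - 1.0) for c in cols)

# For 1-sparse c, c' with distinct singleton supports, theta_{1,1}
# is the largest inner product between distinct columns.
theta_11 = max(abs(dot(u, v)) for u, v in combinations(cols, 2))

print(round(delta_1, 12), round(theta_11, 12))  # → 0.0 0.8
```

For larger $k$ the constants require extremal eigenvalues of all $k$-column Gram submatrices, which is why the RIP is hard to verify for a given matrix.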
It has been shown that $\ell_1$ minimization can recover a sparse
signal with a small or zero error under various conditions on
$\delta_k$ and $\theta_{k,k'}$. For example, the condition
$\delta_k+\theta_{k,k}+\theta_{k,2k}<1$ was used in Cand\`es and Tao
\cite{CanTao05}, $\delta_{3k}+3\delta_{4k}<2$ in Cand\`es, Romberg and
Tao \cite{CanRomTao}, and $\delta_{2k}+\theta_{k,2k}<1$ in Cand\`es
and Tao \cite{CanTao07}. In \cite{CaXuZh}, Cai, Xu and Zhang proved
that stable recovery can be achieved when
$\delta_{1.5k}+\theta_{k,1.5k}<1$
\footnote{For a positive real number $\alpha$, $\delta_{\alpha k}$
and $\theta_{k,\alpha k}$ are understood as $\delta_{\lceil
\alpha k \rceil}$ and $\theta_{k,\lceil \alpha k \rceil}$. }.
In a recent paper, Cai, Wang and Xu \cite{CaWaXu1} further improve
the condition to $\delta_{1.25k}+\theta_{k,1.25k}<1$.
It is important to note that RIP conditions are difficult to verify
for a given matrix $\Phi$. A widely used technique for avoiding checking
the RIP directly is to generate the matrix $\Phi$
randomly and to show that the resulting random matrix satisfies the
RIP with high probability using the well-known Johnson-Lindenstrauss
Lemma. See, for example, Baraniuk, et al. \cite{BaDaDeWa}. This is
typically done for conditions involving only the restricted isometry
constant $\delta$. Attention has been focused on $\delta_{2k}$ as it
is obviously necessary to have $\delta_{2k}<1$ for model
identifiability. In a recent
paper, Davies and Gribonval \cite{DavGri} constructed examples which
showed that if $\delta_{2k}\ge \frac{1}{\sqrt{2}}$,
exact recovery of certain $k$-sparse signal can fail in the noiseless case.
On the other hand, sufficient conditions on $\delta_{2k}$ have been given.
For example, $\delta_{2k} < \sqrt{2} -1$ is used by Cand\`es
\cite{Candes} and $\delta_{2k} < 0.4531$ by Foucart and Lai \cite{FOLA}.
The results given in Cai, Wang and Xu \cite{CaWaXu1} imply that
$\delta_{2k}< 0.472$ is a sufficient condition for sparse signal recovery.
Among the conditions of the form $\delta_{ck} < \alpha$, the most
natural and desirable condition for recovering a $k$-sparse signal is
arguably
\[
\delta_{k} < \alpha,
\]
for some quantity $\alpha$.
The purpose of this paper is to establish, to the best of our
knowledge, the first such condition on $\delta_k$. To be more specific, we
show that under the condition
\begin{equation}
\label{k-bound}
\delta_{k} < 0.307,
\end{equation}
$k$-sparse signals are guaranteed to be recovered exactly via $\ell_1$
minimization when no noise is present and $k$-sparse signals can be
estimated stably in the noisy case. Although we are mainly interested
in recovering sparse signals, the results can be extended to the
general setting where the true signal is not necessarily $k$-sparse.
It is also shown in the present paper that the bound (\ref{k-bound}) cannot be
substantively improved. An upper bound for $\delta_{k}$ is also
given. An explicit example is constructed in which
$\delta_{k}=\frac{k-1}{2k-1} < 0.5$, but it is impossible to recover
certain $k$-sparse signals.
Our analysis is simple and elementary.
The main ingredients in proving the new RIP conditions are the {\sl norm-inequality for $\ell_1$ and $\ell_2$}, and the {\sl square root lifting
inequality} for the restricted orthogonality constant $\theta_{k,k'}$.
Let $x\in {\mathbb R}^n$. A direct consequence of the Cauchy-Schwarz inequality
is that $0\le \|x\|_2-\frac{\|x\|_1}{\sqrt{n}}$. Our norm-inequality for $\ell_1$ and $\ell_2$ gives an upper bound
for the quantity $\|x\|_2-\frac{\|x\|_1}{\sqrt{n}}$, namely
\begin{equation}\label{eq:1.1}
\|x\|_2 - \frac{\|x\|_1}{\sqrt{n}}\le\frac{\sqrt{n}}{4}\big(\max_{1\le i\le n} |x_i|-\min_{1\le
i\le n} |x_i|\big).
\end{equation}
This inequality is of independent interest. The square root lifting inequality is a result we developed in \cite{CaWaXu1}, which states
that if $a \ge 1$ and $k', ak'$ are positive integers, then
\begin{equation}\label{eq:1.2}
\theta_{k, ak'} \le \sqrt{a}\theta_{k, k'}.
\end{equation}
Indeed we derive a more general result on RIP and obtain
(\ref{k-bound}) as a special case.
The paper is organized as follows. After Section \ref{sec:ripproperies}, in
which some basic properties of restricted isometry constants are
discussed, we introduce in Section \ref{sec:nmineq} a norm inequality
for $\ell_1$ and $\ell_2$, which enables us to make finer analysis of
the sparse recovery problem. Our new RIP bounds are presented in
Section \ref{sec:newbds}. In Section \ref{sec:uppbds}, upper bounds
for RIP constants are given.
\section{Some Properties of Restricted Isometry Constants}
\label{sec:ripproperies}
We begin by introducing basic notations and definitions related to the RIP.
We also collect a few
elementary results needed for the later sections.
For a vector $v=(v_i) \in {\mathbb R}^p$, we shall denote by $v_{\max(k)}$ the vector $v$ with all but the
$k$ largest entries (in absolute value) set to zero and define $v_{-\max(k)} = v - v_{\max(k)}$,
the vector $v$ with the $k$ largest entries (in absolute value) set to zero. We use the standard
notation $\|v\|_q=(\sum_{i=1}^p |v_i|^q)^{1/q}$ to denote the $\ell_q$-norm of the vector $v$. We
shall also treat a vector $v=(v_i)$ as a function $v: \{1,2,\cdots, p\}\rightarrow {\mathbb R}$ by assigning
$v(i)=v_i$.
For a subset $T$ of $ \{1, \cdots, p\}$, we use $\Phi_T$ to denote the submatrix obtained by
taking the columns of $\Phi$ according to the indices in $T$. Let
\[
\mathcal{SSV}_T = \{\lambda
: \lambda \mbox{ an eigenvalue of }\Phi'_T\Phi_T \},
\]
and $\strut \displaystyle \Lambda_{\min}(k)=\min
\{\cup_{|T|\le k}\mathcal{SSV}_T\}$, $\strut \displaystyle \Lambda_{\max}(k)=\max\{\cup_{|T|\le k}\mathcal{SSV}_T\}$. It
can be seen that
\[
1-\delta_{k}\le \Lambda_{\min}(k)\le \Lambda_{\max}(k)\le 1+\delta_k.
\]
Hence the condition (\ref{cond:2.1}) can be viewed as a condition on
$\Lambda_{\min}(k)$ and $\Lambda_{\max}(k)$.
The following monotone properties can be easily checked,
\begin{eqnarray}\label{eq:2.3}
&&\delta_k \le \delta_{k_1}, \mbox{ if } k\le k_1\le p\\
\label{eq:2.4} &&\theta_{k,k'}\le \theta_{k_1, k_1'}, \mbox{ if } k\le k_1, k'\le k_1', \mbox{
and } k_1+k_1'\le p.
\end{eqnarray}
Cand\`es and Tao \cite{CanTao05} showed that the constants $\delta_k$ and $\theta_{k,k'}$ are
related by the following inequalities,
\begin{equation}\label{eq:2.5}
\theta_{k,k'} \le \delta_{k+k'}\le \theta_{k,k'} +\max(
\delta_{k},\delta_{k'}).
\end{equation}
In the following, we list several refinements to the inequalities (\ref{eq:2.5}) whose proofs will be
provided in the appendix.
\begin{lem}\label{lem:2.0}
For any positive integers $k$ and $k'$, we have
\begin{eqnarray}
&&\delta_{k+k'}\leq \theta_{k,k'}+\frac{k\delta_k+k'\delta_{k'}}{k+k'} \label{eqn2.0-1}\\
&&\delta_{k+k'}\leq \frac{2\sqrt{k k'
}}{k+k'}\theta_{k,k'}+\max\{\delta_k,\delta_{k'}\}\label{eqn2.0-2}
\end{eqnarray}
\end{lem}
The following properties for $\delta$ and $\theta$, developed by
Cai, Xu and Zhang in \cite{CaXuZh}, have been especially useful in producing simplified recovery conditions:
\begin{equation}\label{eq:2.6}
\theta_{k, \sum_{i=1}^l k_i}\le \sqrt{\sum_{i=1}^l \theta_{k, k_i}^2}
\le \sqrt{ \sum_{i=1}^l \delta_{k+k_i}^2}.
\end{equation}
It follows from (\ref{eq:2.6}) that for any positive integer $a$, we have $\theta_{k,ak'}\leq
\sqrt{a}\theta_{k,k'}$. This fact was further generalized by Cai, Wang and Xu in \cite{CaWaXu1} to the following
square root lifting inequality.
\begin{lem}
\emph{(Square root lifting inequality)}
\label{lem:2.1}
For any $a\ge 1$ and positive integers $k, k'$ such that $ak'$ is an integer,
\begin{equation}\label{eq:2.7}
\theta_{k,ak'}\leq \sqrt{a}\theta_{k,k'}.
\end{equation}
\end{lem}
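As a quick numerical sanity check of (\ref{eq:2.5}) and (\ref{eq:2.7}) (illustrative only; the brute-force helper functions below are ours, computing $\delta_k$ and $\theta_{k,k'}$ by enumerating supports of a small random matrix):

```python
import itertools
import numpy as np

def delta(Phi, k):
    # delta_k: max over |T| = k of max |eigenvalue(Phi_T' Phi_T) - 1|
    p = Phi.shape[1]
    return max(
        np.abs(np.linalg.eigvalsh(Phi[:, list(T)].T @ Phi[:, list(T)]) - 1).max()
        for T in itertools.combinations(range(p), k))

def theta(Phi, k, kp):
    # theta_{k,k'}: max over disjoint supports T, T' of the largest
    # singular value of Phi_T' Phi_{T'}
    p = Phi.shape[1]
    best = 0.0
    for T in itertools.combinations(range(p), k):
        rest = [j for j in range(p) if j not in T]
        for Tp in itertools.combinations(rest, kp):
            s = np.linalg.svd(Phi[:, list(T)].T @ Phi[:, list(Tp)],
                              compute_uv=False)
            best = max(best, s.max())
    return best

rng = np.random.default_rng(1)
Phi = rng.standard_normal((6, 9))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm columns

d1, d2, d3 = delta(Phi, 1), delta(Phi, 2), delta(Phi, 3)
t11, t12 = theta(Phi, 1, 1), theta(Phi, 1, 2)

assert d2 <= d3 + 1e-12                    # monotonicity (2.3)
assert t11 <= d2 + 1e-12                   # left inequality of (2.5)
assert d2 <= t11 + max(d1, d1) + 1e-12     # right inequality of (2.5)
assert t12 <= np.sqrt(2) * t11 + 1e-12     # square root lifting (2.7), a = 2
```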
Using the square root lifting inequality and other properties for
RIP constants we mentioned earlier, some interesting results can be
produced. For example,
\begin{cor}
\label{cor:2.1} For any integer $k\ge 1$,
\begin{eqnarray}
\delta_{4k} &\le& 3\delta_{2k}. \label{eq:2.8}\\
\delta_{3k} &\le& \frac{1}3\delta_{k} + (\sqrt{2}+\frac{2}3 )\delta_{2k}.
\label{eq:2.9}
\end{eqnarray}
\end{cor}
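For illustration, we indicate how these bounds follow from the tools
above. By Lemma \ref{lem:2.1} (applied twice with $a=2$), the symmetry of
$\theta$, and the left inequality in (\ref{eq:2.5}),
\[
\theta_{2k,2k}\le \sqrt{2}\,\theta_{2k,k}=\sqrt{2}\,\theta_{k,2k}\le
2\,\theta_{k,k}\le 2\delta_{2k},
\]
so the right inequality in (\ref{eq:2.5}) with $k=k'=2k$ gives
$\delta_{4k}\le \theta_{2k,2k}+\delta_{2k}\le 3\delta_{2k}$, which is
(\ref{eq:2.8}). Similarly, (\ref{eqn2.0-1}) with $(k,k')=(k,2k)$, together
with $\theta_{k,2k}\le \sqrt{2}\,\theta_{k,k}\le\sqrt{2}\,\delta_{2k}$,
yields (\ref{eq:2.9}).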
\section{A Norm Inequality for $\ell_1$ and $\ell_2$}
\label{sec:nmineq}
In this section, we will develop a useful inequality for achieving finer
conversion between $\ell_1$-norm and $\ell_2$-norm.
Let $x=(x_1,x_2,\cdots,x_n)\in {\mathbb R}^n$. A direct consequence of the
Cauchy-Schwarz inequality is that
\[
0\le \|x\|_2-\frac{\|x\|_1}{\sqrt{n}}
\]
and the equality holds if and only if $|x_1|=|x_2|=\cdots=|x_n|$. The next result reveals some information
about how large the quantity $\|x\|_2-\frac{\|x\|_1}{\sqrt{n}}$ can be.
\begin{prop}
\label{norm.ieq} For any $x \in {\mathbb R}^n$,
\[
\|x\|_2 - \frac{\|x\|_1}{\sqrt{n}}\le\frac{\sqrt{n}}{4}\big(\max_{1\le i\le n} |x_i|-\min_{1\le
i\le n} |x_i|\big).
\]
The equality is attained if and only if $|x_1|=|x_2|=\cdots=|x_n|$, or $n=4m$ for some positive
integer $m$ and $x$ satisfies $|x_{i_1}|=|x_{i_2}|=\cdots=|x_{i_m}|$
for some $1\le i_1 < i_2<\cdots < i_m\le n$ and
$x_{k}=0$ for $k\notin \{i_1, i_2, ..., i_m\}$.
\end{prop}
\begin{proof}
It is obvious that the result holds when the absolute values of all
coordinates are equal. Without loss of generality, we now assume that
$x_1\geq x_2\geq\cdots\geq x_n\geq 0$ and not all $x_i$ are equal. Let
$$f(x)=\|x\|_2-\frac{\|x\|_1}{\sqrt{n}}.$$
Note that for any $i\in \{2,3,\cdots,n-1\}$,
$$\frac{\partial f}{\partial
x_i}=\frac{x_i}{\|x\|_2}-\frac{1}{\sqrt{n}}.$$
This implies that when $x_i\leq \frac{\|x\|_2}{\sqrt{n}}$,
$f(x)$ is decreasing in $x_i$; otherwise $f(x)$ is increasing in $x_i$.
Therefore, if we fix $x_1$ and
$x_n$, when $f(x)$ achieves its maximum, $x$ must be of the form that $x_1=x_2=\cdots=x_k$ and
$x_{k+1}=\cdots=x_n$ for some $1\leq k < n$. Now
$$f(x)=\sqrt{k(x_1^2-x_n^2)+n
x_n^2}-\frac{k}{\sqrt{n}}(x_1-x_n)-\sqrt{n}x_n.$$
Treat this as a function of $k$ for $k\in (0,n)$:
$$g(k)=\sqrt{k(x_1^2-x_n^2)+n
x_n^2}-\frac{k}{\sqrt{n}}(x_1-x_n)-\sqrt{n}x_n.$$
By taking derivative, it is easy to see that
$$g(k)\leq
g(n\frac{(\frac{x_1+x_n}{2})^2-x_n^2}{x_1^2-x_n^2})=\sqrt{n}(x_1-x_n)(\frac{1}{2}-\frac{x_1+3x_n}{4(x_1+x_n)}).$$
Now, since $\frac{x_1+3x_n}{4(x_1+x_n)}\geq 1/4$, we have
\[
\|x\|_2 \le \frac{\|x\|_1}{\sqrt{n}}+\frac{\sqrt{n}}{4}\big(x_1-x_n\big).
\]
We can also see that the above inequality becomes equality if and only
if $x_{k+1}=\cdots =x_n=0$ and $k=n/4$.
\end{proof}
\begin{remark}{\rm
A direct consequence of Proposition \ref{norm.ieq} is that for any $x \in {\mathbb R}^n$,
\[
\|x\|_2 \le \frac{\|x\|_1}{\sqrt{n}} + \frac{\sqrt{n}\|x\|_{\infty}}{4}.
\]
}
\end{remark}
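As a quick numerical check of Proposition \ref{norm.ieq} (illustrative only; the script and helper names below are ours and not part of the analysis):

```python
import numpy as np

def gap(x):
    # ||x||_2 - ||x||_1 / sqrt(n)
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x) - np.abs(x).sum() / np.sqrt(x.size)

def bound(x):
    # (sqrt(n)/4) (max_i |x_i| - min_i |x_i|)
    a = np.abs(np.asarray(x, dtype=float))
    return np.sqrt(a.size) / 4 * (a.max() - a.min())

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(rng.integers(1, 30))
    assert gap(x) <= bound(x) + 1e-12

# Equality case: n = 4m, with m entries of equal magnitude and the rest zero
x = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 0])       # n = 8, m = 2
assert abs(gap(x) - bound(x)) < 1e-12
```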
\section{New RIP Bounds of Compressed Sensing Matrices}
\label{sec:newbds}
In this section, we consider new RIP conditions for sparse signal
recovery. However, the results can be easily extended to general
signals $\beta$ with error bounds involving $\beta_{-\max(k)}$, as
discussed in \cite{CaWaXu1, CaXuZh}.
Suppose
\[
y=\Phi\beta + z
\]
with $\|z\|_2\le \varepsilon$. Denote by $\hat\beta$
the solution of the following $\ell_1$ minimization problem,
\begin{equation}
\label{hat.beta}
\hat \beta=\mathop{\rm arg\min}_{\gamma}\{\|\gamma\|_1: \ \|\Phi\gamma-y\|_2 \le \varepsilon\}.
\end{equation}
\begin{theorem}
\label{thm:4.1}
Suppose $\beta$ is $k$-sparse. Let $k_1, k_2$ be positive integers
such that $k_1\ge k$ and $8(k_1-k) \le k_2$. Let
$$t=\sqrt{\frac{k_1}{k_2}}+\frac{1}{4}\sqrt{\frac{k_2}{k_1}}-\frac{2(k_1-k)}{\sqrt{k_1 k_2}}.$$
Then under the condition
\[
\delta_{k_1}+t\theta_{k_1 , k_2 } < 1,
\]
the $\ell_1$ minimizer $\hat \beta$ defined in (\ref{hat.beta}) satisfies
\[
\|\beta-\hat\beta\|_2\le \frac{2\sqrt{2} \sqrt{1+\delta_{k_1 }}}
{1-\delta_{ k_1 }-t\theta_{k_1, k_2}} \varepsilon.
\]
In particular, in the noiseless case where $y=\Phi\beta$, $\ell_1$
minimization recovers $\beta$ exactly.
\end{theorem}
\begin{remark}{\rm
Different choices of $k_1$ and $k_2$ can result
in different conditions. Here we list several of them which are of certain interest.\footnote{
Here we assume that the fractional multiples of $k$ are integers. Otherwise, we have to
use the ceiling notation. }
\begin{center}
\begin{tabular}{|l|l|c|}
\hline
$k_1$ & $k_2$ & Recovery condition \\
\hline
\hline
$k$ & $k$ & $\delta_{k}+1.25\theta_{k,k}<1$\\
\hline
$k$ & ${4\over 9}k$ & $\delta_k+\frac{5}{3}\theta_{k,\frac{4k}{9}}<1$\\
\hline
${9\over 8}k$ & $k$ &$\delta_{\frac{9k}{8}}+\sqrt{9\over 8}\theta_{k, \frac{9k}{8}}<1$\\
\hline
${8\over 7}k$ & ${8\over 7}k$ & $\delta_{\frac{8k}{7}}+\theta_{\frac{8k}{7}, \frac{8k}{7}} <1$\\
\hline
\end{tabular}
\end{center}
}
\end{remark}
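The table entries can be reproduced by evaluating $t$ directly (a convenience check; any $k$ divisible by $7$, $8$, and $9$ keeps the fractional multiples integral):

```python
from math import sqrt

def t(k, k1, k2):
    # t = sqrt(k1/k2) + (1/4) sqrt(k2/k1) - 2 (k1 - k) / sqrt(k1 k2)
    return sqrt(k1 / k2) + 0.25 * sqrt(k2 / k1) - 2 * (k1 - k) / sqrt(k1 * k2)

k = 504.0                                              # divisible by 7, 8, 9
assert abs(t(k, k, k) - 1.25) < 1e-12                  # first row
assert abs(t(k, k, 4 * k / 9) - 5 / 3) < 1e-12         # second row
assert abs(t(k, 9 * k / 8, k) - sqrt(9 / 8)) < 1e-12   # third row
assert abs(t(k, 8 * k / 7, 8 * k / 7) - 1.0) < 1e-12   # fourth row
```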
We now prove the theorem.
\begin{proof}
Let $h=\hat\beta - \beta$. For any subset $Q\subset \{1,2,\cdots, p\}$, we define
\[
h_Q = h{\mathbb I}_Q.
\]
Suppose $|h(1)|\ge |h(2)|\ge \cdots
\ge |h(k+1)|\ge |h(k+2)| \ge \cdots$. \\Let $T=\{1,2,\cdots, k\}$ and $\Omega$ be the
support of $\beta$. The following fact, which is based on the minimality of $\hat\beta$, has been
widely used, see \cite{CaWaXu1,CanRomTao,DonHuo}.
\[
\|h_{\Omega}\|_1 \ge \| h_{\Omega^c}\|_1.
\]
It is obvious that $\|h_{\Omega^c\cap T}\|_1 \ge \|h_{\Omega^c\cap T^c}\|_1$, so
we have
\[
\|h_T\|_1 \ge \|h_{T^{c}}\|_1.
\]
Partition $\{1,2,\cdots, p\}$ into the following sets:
\[
S_0=\{1,2,\cdots, k_1\}, S_1=\{k_1+1, \cdots, k_1+k_2\},
S_2=\{k_1+k_2+1, \cdots, k_1+2k_2\}, \cdots.
\]
Then it follows from Proposition~\ref{norm.ieq} that
\begin{eqnarray*}
\sum_{i\geq 1}\|h_{S_i}\|_2&\leq&
\sum_{i\ge 1}\frac{\|h_{S_i}\|_1}{\sqrt{k_2}}+\\
&&\frac{\sqrt{k_2}}{4}\big(|h(k_1+1)|- |h(k_1+k_2)|+|h(k_1+k_2+1)|-|h(k_1+2k_2)|+\cdots \big)\\
&\leq&\frac{1}{\sqrt{k_2}}(\|h_{T^{c}}\|_1-(k_1-k)|h(k_1+1)|)+\frac{\sqrt{k_2}}{4}|h(k_1+1)|\\
&\leq&\frac{1}{\sqrt{k_2}}(\|h_{T}\|_1-(k_1-k)|h(k_1+1)|)+\frac{\sqrt{k_2}}{4}|h(k_1+1)|\\
&\leq&\frac{1}{\sqrt{k_2}}(\|h_{S_0}\|_1-2(k_1-k)|h(k_1+1)|)+\frac{\sqrt{k_2}}{4}|h(k_1+1)|\\
&\leq&\sqrt{\frac{k_1}{k_2}}\|h_{S_0}\|_2+\left(\frac{\sqrt{k_2}}{4} - \frac{2(k_1-k)}{\sqrt{k_2}}\right)|h(k_1+1)|\\
&\leq&\sqrt{\frac{k_1}{k_2}}\|h_{S_0}\|_2+\left(\frac{\sqrt{k_2}}{4\sqrt{k_1}} - \frac{2(k_1-k)}{\sqrt{k_1
k_2}}\right)\|h_{S_0}\|_2=t\|h_{S_0}\|_2
\end{eqnarray*}
Now
\begin{eqnarray*}
|\langle \Phi h, \Phi h_{S_0}\rangle| &=& |\langle\Phi h_{S_0}, \Phi
h_{S_0}\rangle+\sum_{i\ge
1}\langle\Phi_{S_i}h_{S_i}, \Phi h_{S_0}\rangle|\\
&\ge& (1-\delta_{k_1})\|h_{S_0}\|_2^2 -\theta_{k_1, k_2}\|h_{S_0}\|_2\sum_{i\ge
1}\|h_{S_i}\|_2\\
&\ge& (1-\delta_{k_1}-t\theta_{k_1, k_2})\|h_{S_0}\|_2^2
\end{eqnarray*}
Note that
\[
\|\Phi h\|_2=\|\Phi(\beta-\hat{\beta})\|_2\le \|\Phi\beta-y\|_2+
\|\Phi\hat{\beta}-y\|_2\le 2\varepsilon.
\]
Also the next relation
\begin{eqnarray*}
\|h_{S_0^c}\|_2^2\leq \|h_{S_0^c}\|_1 \frac{\|h_{S_0}\|_1}{k_1}\leq
\frac{\|h_{S_0}\|_1^2}{k_1}\leq \|h_{S_0}\|_2^2
\end{eqnarray*}
implies
\[
\|h\|_2^2 = \|h_{S_0}\|_2^2+ \|h_{S_0^c}\|_2^2 \le 2\|h_{S_0}\|_2^2.
\]
Putting them together we get\footnote{If $h_{S_0}= 0$, then the theorem is trivially true.
So here we assume that $h_{S_0}\neq 0$.}
\begin{eqnarray*}
\|h\|_2 &\le& \sqrt{2}\|h_{S_0}\|_2\\
&\le& \frac{\sqrt{2}|\langle \Phi h, \Phi
h_{S_0}\rangle|}{(1-\delta_{k_1}-t \theta_{k_1,
k_2})\|h_{S_0}\|_2}\\
&\le&\frac{\sqrt{2}\|\Phi h\|_2\|\Phi h_{S_0}\|_2}
{(1-\delta_{k_1}- t \theta_{k_1, k_2})\|h_{S_0}\|_2}\\
&\le &
\frac{2\sqrt{2}\varepsilon \sqrt{1+\delta_{k_1}}\|h_{S_0}\|_2}
{(1-\delta_{k_1}-t\theta_{k_1, k_2})\|h_{S_0}\|_2}\\
&\le& \frac{2\sqrt{2}\sqrt{1+\delta_{k_1}}}
{1-\delta_{k_1}-t \theta_{k_1, k_2}}\varepsilon.
\end{eqnarray*}
\end{proof}
The following is the main result of the paper. It is a consequence
of Theorem~\ref{thm:4.1} and the square root lifting inequality.
\begin{theorem}
\label{thm:4.2}
Let $y=\Phi\beta + z$ with $\|z\|_2\le \varepsilon$. Suppose
$\beta$ is $k$-sparse with $k > 1$. Then under the condition
\[
\delta_{k} < 0.307
\]
the constrained $\ell_1$ minimizer $\hat \beta$ given in
(\ref{hat.beta}) satisfies
\[
\|\beta-\hat\beta\|_2\le \frac{\varepsilon}{0.307 - \delta_{k}}.
\]
In particular, in the noiseless case $\hat \beta$ recovers $\beta$ exactly.
\end{theorem}
To the best of our knowledge, this is the first sparse recovery result
whose condition involves only $\delta_k$.
\begin{proof}
We present the proof for the case $k\equiv 0 \pmod 9$, which can be
treated concisely and conveys the main ideas. The complete proof is
given in the appendix.
In Theorem \ref{thm:4.1}, set $k_1=k$ and $k_2=\frac{4}{9}k $. Let
\[
t = \sqrt{\frac{k}{k_2}} +\frac{1}4\sqrt{\frac{k_2}{k}}=\frac{5}3.
\]
Then under the condition
$$\delta_k+\frac{5}3\theta_{k,\frac{4}{9}k}<1$$
we have
\[
\|\beta-\hat\beta\|_2\le \frac{2\sqrt{2} \sqrt{1+\delta_{k }}}
{1-\delta_{ k }-\frac{5}3\theta_{k, \frac{4}{9}k}} \varepsilon.
\]
Using the square root lifting inequality, we get
\begin{eqnarray*}
\delta_{k} + \frac{5}3\theta_{k,\frac{4k}9} & = & \delta_{k} + \frac{5}3\theta_{\frac{9}5\frac{5k}9,\frac{4k}9}\\
&\le &\delta_{k} + \frac{5}3\sqrt{\frac{9}5} \theta_{\frac{5k}9,\frac{4k}9}\le (1+\sqrt{5})\delta_k\\
&<& 1.
\end{eqnarray*}
In this case,
\begin{eqnarray*}
\|\beta-\hat\beta\|_2 &\le & \frac{2\sqrt{2} \sqrt{1+\delta_{k }}}
{1-\delta_{ k }-t\theta_{k, \frac{4k}9}} \varepsilon \le \frac{2\sqrt{2} \sqrt{1+\delta_{k }}}{1-(1+\sqrt{5})\delta_k}\varepsilon\\
&\le & \frac{3.256}{1-3.256 \delta_k}\varepsilon
\le \frac{\varepsilon}{0.307 - \delta_{k}}.
\end{eqnarray*}
\end{proof}
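The final chain of numerical estimates can be spot-checked on a grid (illustrative only; this verifies the inequalities numerically, it is not a proof):

```python
import numpy as np

c = 1 + np.sqrt(5)                       # coefficient from the proof, ~3.236
for d in np.linspace(0.0, 0.30699, 2000):
    assert c * d < 1                                     # condition is met
    lhs = 2 * np.sqrt(2) * np.sqrt(1 + d) / (1 - c * d)  # error coefficient
    assert lhs <= 1 / (0.307 - d) + 1e-9                 # claimed bound
```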
\begin{remark}
{\rm
\begin{enumerate}
\item It can be seen from the proof that we actually have a slightly better estimate, namely
\[
\|\beta-\hat\beta\|_2\le \frac{2\sqrt{2} \sqrt{1+\delta_{k
}}}{1-C_0\cdot \delta_{k}} \varepsilon,
\]
where $C_0 = 1+\frac{23}{2\sqrt{26}}< 3.256.$
\item For simplicity, we have focused on recovering $k$-sparse signals in
the present paper. When $\beta$ is not $k$-sparse, $\ell_1$
minimization can also recover $\beta$ with accuracy if $\beta$ has
good $k$-term approximation. Similar to \cite{CaWaXu,CaXuZh}, this result can be extended to the general setting.
Under the condition $\delta_k < 0.307$,
Theorem \ref{thm:4.2} holds with the error bound
\[
\|\hat\beta - \beta\|_2 \le \frac{\varepsilon}{0.307 - \delta_{k}}+ \frac{1}{0.307 - \delta_{k}}\frac{\|\beta_{-\max(k)}\|_1}{\sqrt{k}}.
\]
\end{enumerate}
}
\end{remark}
We now consider stable recovery of $k$-sparse signals with error in a
different bounded set.
Cand\`es and Tao \cite{CanTao07} treated sparse signal recovery
in the Gaussian noise case by solving $(P_{{\cal B}})$ with
${\cal B} = {\cal B}^{DS} =\{z : \|\Phi' z\|_{\infty}\le \lambda\}$ and referred to the
solution as the {\sl Dantzig Selector}. The following result shows
that the condition $\delta_k < 0.307$ is also sufficient when
the error is in the bounded set ${\cal B}^{DS}$.
\begin{theorem}
\label{DS.thm}
Consider the model (\ref{model}) with $z$ satisfying
$\|\Phi' z\|_{\infty}\le \lambda$. Suppose $\beta$ is $k$-sparse and
$\hat\beta$ is the minimizer
$$\hat \beta =\mathop{\rm arg\min}_{\gamma\in{\mathbb R}^p}\{\|\gamma\|_1 : \|\Phi'
(y-\Phi\gamma)\|_{\infty}\le \lambda\}.$$
Then
\[
\|\hat\beta - \beta\|_2 \le \frac{\sqrt{k}}{0.307-\delta_k}\lambda.
\]
\end{theorem}
The proof of this theorem can be easily obtained based on a
minor modification of the proof of Theorem~\ref{thm:4.1}.
\section{Upper Bounds of $\delta_k$}
\label{sec:uppbds}
We have established the sparse recovery condition
\[
\delta_k < 0.307
\]
in the previous section. It is interesting to know the limit
of possible improvement within this framework. In this section,
we shall show that this bound cannot be substantively improved.
An explicit example is constructed in which
$\delta_{k}=\frac{k-1}{2k-1} < 0.5$, but it is impossible to recover
certain $k$-sparse signals. Therefore,
the bound for $\delta_k$ cannot go beyond $0.5$ in order to guarantee
stable recovery of $k$-sparse signals.
This question was considered for the case of $\delta_{2k}$. In \cite{CaWaXu1}, among a family of recovery
conditions, it is shown that
\[
\delta_{2k} < 0.472
\]
is sufficient for reconstructing $k$-sparse signals. On the other hand, the results of Davies and Gribonval
\cite{DavGri} indicate that $\frac{1}{\sqrt{2}}\approx 0.707$ is likely the upper bound for $\delta_{2k}$.
\begin{theorem}
\label{thm:5.1}
Let $k$ be a positive integer. Then there exists a $(2k-1)\times 2k$ matrix
$\Phi$ with the restricted isometry constant $\delta_k = \frac{k-1}{2k-1}$, and two nonzero
$k$-sparse vectors
$\beta_1$ and $\beta_2$ with disjoint supports such that
\[
\Phi \beta_1 = \Phi \beta_2.
\]
\end{theorem}
\begin{remark}{\rm This result implies that the model (\ref{model}) is
not identifiable in general under the condition
$\delta_k = \frac{k-1}{2k-1}$
and therefore not all $k$-sparse signals can be recovered exactly in
the noiseless case. In the noisy case, it is easy to see that
Theorem \ref{thm:4.2} fails because no estimator
$\hat\beta$ can be close to both $\beta_1$ and $\beta_2$ when the noisy level
$\varepsilon$ is sufficiently small.
}
\end{remark}
\begin{proof}
Let $\Gamma$ be a $2k\times 2k$ matrix such that each diagonal element
of $\Gamma$ is 1 and each off-diagonal element equals
$-\frac{1}{2k-1}$. Then it is easy to see that $\Gamma$ is a
positive-semidefinite matrix with rank $2k-1$.
Note that the symmetric matrix $\Gamma$ can be decomposed as $\Gamma=\Phi'\Phi$ where $\Phi$ is a
$(2k-1)\times 2k$ matrix with rank $2k-1$. More precisely, since $\Gamma$ has two distinct
eigenvalues $\strut \displaystyle \frac{2k}{2k-1}$ and $0$, with multiplicities $2k-1$ and $1$, respectively,
there is an orthogonal matrix $U$ such that
\[
\Gamma = U \mbox{Diag}\big\{\underbrace{\frac{2k}{2k-1},\frac{2k}{2k-1},\cdots,
\frac{2k}{2k-1}}_{2k-1}, 0\big\} U'.
\]
Define $\Phi$ as
\[
\Phi= \begin{pmatrix} \sqrt{\frac{2k}{2k-1}} & 0 & \cdots & 0 & 0 \\
0 & \sqrt{\frac{2k}{2k-1}} & \cdots & 0 & 0 \\
& & \ddots & &\\
0 & 0 &\cdots & \sqrt{\frac{2k}{2k-1}}&0\\
\end{pmatrix} U'.
\]
Let $T\subset \{1,2,\cdots, 2k\}$ with $|T|=k$. Then it can be verified that
\[
\Phi_T'\Phi_T = \begin{pmatrix} 1 & -\frac{1}{2k-1} & \cdots & -\frac{1}{2k-1} \\
-\frac{1}{2k-1}& 1 & \cdots & -\frac{1}{2k-1} \\
& & \ddots & &\\
-\frac{1}{2k-1} & -\frac{1}{2k-1} &\cdots & 1 \\
\end{pmatrix}_{k\times k}.
\]
The characteristic polynomial of $\Phi_T'\Phi_T$ is
\[
p(\lambda) = \left( \lambda-\frac{k}{2k-1}\right)\left(\lambda-\frac{2k}{2k-1}\right)^{k-1}.
\]
This shows that for $\Phi$,
\[
\delta_k(\Phi) =1- \frac{k}{2k-1}=\frac{k-1}{2k-1}.
\]
Since the rank of $\Phi$ is $2k-1$, there exists some $\gamma\in {\mathbb R}^{2k}$
such that $\gamma\neq 0$ and $\Phi\gamma=0$.
Suppose $\beta_1, \; \beta_2\in
{\mathbb R}^{2k}$ are given by
$$\beta_1=(\gamma(1),\gamma(2),\cdots,\gamma(k),\underbrace{0,0,\cdots,0}_{k})',$$
and
$$\beta_2=(\underbrace{0,0,\cdots,0}_{k},-\gamma(k+1),-\gamma(k+2),\cdots,-\gamma(2k))'.$$
Then both $\beta_1$ and $\beta_2$ are $k$-sparse vectors but $\Phi\beta_1=\Phi\beta_2$. This means
the model is not identifiable within the class of $k$-sparse signals.
\end{proof}
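The construction in the proof can be verified numerically (illustrative only; $k=3$ is an arbitrary choice, and the brute-force $\delta_k$ computation below is ours):

```python
import itertools
import numpy as np

k = 3
n = 2 * k
# Gram matrix: ones on the diagonal, -1/(2k-1) off the diagonal
Gamma = (1 + 1 / (2 * k - 1)) * np.eye(n) - 1 / (2 * k - 1) * np.ones((n, n))
w, U = np.linalg.eigh(Gamma)                   # ascending eigenvalues
assert abs(w[0]) < 1e-12                       # rank 2k - 1
Phi = np.diag(np.sqrt(w[1:])) @ U[:, 1:].T     # (2k-1) x 2k factor
assert np.allclose(Phi.T @ Phi, Gamma)         # Phi' Phi = Gamma

# delta_k over all supports of size k equals (k-1)/(2k-1)
dev = max(
    np.abs(np.linalg.eigvalsh(Phi[:, list(T)].T @ Phi[:, list(T)]) - 1).max()
    for T in itertools.combinations(range(n), k))
assert abs(dev - (k - 1) / (2 * k - 1)) < 1e-10

# A null vector of Phi splits into two k-sparse vectors with equal images
gamma = U[:, 0]                                # Gamma gamma = 0 => Phi gamma = 0
b1 = np.concatenate([gamma[:k], np.zeros(k)])
b2 = np.concatenate([np.zeros(k), -gamma[k:]])
assert np.allclose(Phi @ b1, Phi @ b2)
```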
\section{Introduction}
Neutron star (NS) mergers became the first confirmed cosmic site of $r$-process
element production, following the detection of a kilonova from GW170817
\cite{ligogw170817multi-messenger,coulter_2017,Cowperthwaite2017,drout2017,Tanaka2017,tanvir2017}.
Nucleosynthesis takes place in the expanding ejecta, which is neutron-rich and
therefore favors the operation of the $r$-process
\cite{Lattimer1974,Eichler+89,Freiburghaus+99}. The ejecta is made up of
multiple components launched over a range of timescales and by a variety of
mechanisms (e.g.,
\cite{FM16,baiotti_BinaryNeutronStar_2017,radice_DynamicsBinaryNeutron_2020}).
Of particular importance is matter unbound from the accretion disk formed after
the merger, which can dominate mass ejection in events like GW170817 (e.g.,
\cite{shibata_2017b}).
Transport of energy and lepton number by neutrinos is a key physical process in
the accretion disk, because the timescales associated with some of the ejection
mechanisms are comparable to or longer than the weak interaction timescale.
Neutrino transport heats or cools different parts of the disk and modifies the
electron fraction of the disk material (e.g, \cite{Ruffert+97}). Neutrinos can
also be involved in the launching of a gamma-ray burst jet, by clearing out
dense matter from the polar regions, or contributing energy through
neutrino-antineutrino pair annihilation (e.g.,
\cite{goodman_1987,Eichler+89,Richers2015,just_2016,foucart_2018,fujibayashi_PropertiesNeutrinodrivenEjecta_2017}).
Given the thermodynamic conditions reached in NS mergers, however, temperatures
are well below the muon and tau mass energies ($\sim 100$\,MeV and $\sim 1.8$\,GeV, respectively),
thus electron-type neutrinos and antineutrinos are the only species
that can exchange energy and lepton number with matter locally or non-locally
through charged current weak interactions, with heavy lepton neutrinos
fulfilling primarily a cooling role\footnote{The exception being neutrino pair
annihilation in low-density polar regions (e.g., \cite{sumiyoshi_2021}).}.
Flavor transformation due to non-zero neutrino mass and to interactions
with background matter (the MSW mechanism
\cite{mikheyev_ResonantNeutrinoOscillations_1989,wolfenstein_NeutrinoOscillationsMatter_1978})
have long been expected to occur at large distances from the merger, with
little impact on the dynamics or nucleosynthesis of the ejecta. However, neutrino-neutrino
interactions make the flavor transformation process nonlinear, leading to a
rich phenomenology (e.g.,
\cite{duan_2010_arnps,mirizzi_SupernovaNeutrinosProduction_2016,capozzi_NeutrinoFlavorConversions_2022}).
In the context of neutron star mergers, the matter-neutrino resonance was shown
to occur in the polar regions above the remnant, such that it could have significant impacts on
the nucleosynthesis in outflows along the rotation axis
\cite{malkus_2016,wu_msw_2016}. More recently, the so-called \emph{fast flavor
instability} (FFI) was shown to be ubiquitous in both neutron star mergers and
core-collapse supernovae, resulting in extremely fast (nanosecond) flavor
transformation both within and outside of the massive accretion disk
\cite{wu_tamborra_2017} and the hypermassive neutron star (HMNS) \cite{george_2020}.
Although local simulations of the FFI have been performed and can predict the
final flavor abundance following the instability
\cite{bhattacharyya_2020,padilla_2021,wu_2021_1d,richers_2021_pic,richers_2021_3d,duan_FlavorIsospinWaves_2021,martin_FastFlavorOscillations_2021,zaizen_NonlinearEvolutionFast_2021,xiong_PotentialImpactFast_2020,sigl_SimulationsFastNeutrino_2022}
(see also \cite{richers_code_comp} for a code comparison study), a general
description of the effects of the instability and a consistent inclusion in
global simulations is still lacking.
Assessment of the FFI in post-processing of time-dependent
simulations of NS merger remnants has confirmed the prevalence of the
instability outside the neutrino decoupling regions, with implications for the
composition of the disk outflow
\citep{wu_2017_trajectories,george_2020,richers_EvaluatingApproximateFlavor_2022}.
Effective inclusion of the FFI in global simulations of
post-merger black hole (BH) accretion disks has been achieved recently, first in
general-relativistic (GR) magnetohydrodynamic (MHD)
simulations over a timescale of $400$\,ms \cite{Li_Siegel_2021}, and
then also on viscous hydrodynamic simulations over the full
evolutionary time of an axisymmetric disk ($\sim 10$\,s, \cite{Just2022_FFI}, who
also performed 3D MHD simulations for $500$\,ms).
In both cases, a standard 3-species, 2-moment scheme (M1) was
modified based on a criterion indicating fast flavor instability,
along with an algebraic swapping scheme between species to mix the zeroth and first
moments. Both studies found that the FFI results in a
$\sim 10\%$ decrease in mass ejection, with the ejecta shifting toward
more neutron-rich values. The prevalence of the instability over the
entire disk system was confirmed, and the sensitivity to various
mixing prescriptions was found by \cite{Just2022_FFI} to be moderate.
Here we introduce a different method to include the effects of the FFI in
global simulations that employ a leakage-lightbulb-type neutrino scheme, in
order to enable parameter studies over a larger number of long-duration
accretion disk simulations. We employ an optical depth prescription to smoothly
activate the FFI in regions where neutrinos are out of thermal equilibrium, and
use algebraic expressions to parametrically mix the fluxes and energies of each
neutrino flavor absorbed by the fluid. The scheme relies on the very rapid
growth and saturation of the instability ($\sim $\,ns timescales over $\sim
$\,cm length scales) relative to the relevant evolutionary time- and spatial
scales of the system ($> $\,ms timescales over $\sim$\,km length scales). The
efficiency of our method allows for exploration of varying degrees of flavor
equilibration, as well as flavor mixing that respects lepton number
conservation in the neutrino self-interaction Hamiltonian.
We apply this method self-consistently to an axisymmetric viscous hydrodynamic
setup representative of a post-merger accretion disk, and explore the effects
of the instability on long-term mass ejection from disks around hypermassive
neutron stars (HMNSs) of variable lifetime. Viscous hydrodynamic simulations
that include neutrino emission and absorption as well as nuclear recombination
produce ejecta that is consistent with GRMHD simulations at late-time
($\gtrsim$\,s timescales), since viscous heating models dissipation of MHD
turbulence reasonably well, with the main difference being the lack of earlier
ejecta launched by magnetic stresses \cite{F19_grmhd}. Thus, our results
produce a lower limit to the quantity of ejecta from these systems, while also
focusing on the portion of the ejecta that is most affected by neutrinos.
The paper is structured as follows. Section \S\ref{s:methods} describes
the hydrodynamics simulations, the neutrino implementation, flavor
transformation prescription, and models evolved. Results are presented
in \S\ref{s:results}, including evolution without and with flavor transformation,
nucleosynthesis implications, and comparison with previous work. A summary
and discussion follow in \S\ref{s:summary}.
\section{Methods \label{s:methods}}
\subsection{Numerical Hydrodynamics \label{s:hydro}}
We solve the equations of time-dependent hydrodynamics in axisymmetry using
{\tt FLASH} version 3.2 \cite{fryxell00,dubey2009}, with the modifications
described in \cite{FM13,MF14,FKMQ14,lippuner_2017}. The code solves the
equations of mass, momentum, energy, and lepton number conservation in
spherical polar coordinates $(r,\theta)$, subject to the pseudo-Newtonian
potential of a spinning BH \cite{artemova1996} with no self-gravity, an
azimuthal viscous stress with viscosity coefficient $\alpha_{\rm v}$
\cite{shakura1973}, and the equation of state of \cite{timmes2000} with the
abundances of neutrons, protons, and alpha particles in nuclear statistical
equilibrium, accounting for nuclear binding energy changes.
Neutrino effects in the disk are included through a leakage scheme for cooling
and annular lightbulb irradiation with optical depth corrections for absorption
\cite{FM13,MF14,lippuner_2017}. The HMNS is modeled as a reflecting inner
radial boundary from which additional neutrino luminosities are imposed. In
\S\ref{s:leakage} we describe the baseline neutrino scheme, including upgrades
relative to versions used in previous work, and modifications to include flavor
transformation due to the FFI.
The initial condition is an equilibrium torus with constant angular momentum,
entropy, and composition \cite{FM13}. The disk configuration and central object
mass is the same in all the simulations, aiming to match the parameters of
GW170817 (c.f., \cite{fahlman_2018}) and to connect with previous long-term
post-merger disk calculations (e.g., \cite{fujibayashi2018,Just2022_FFI}). The
central object has a mass $2.65M_\odot$, spin $0.8$ if a BH, or otherwise a
radius $30$\,km and rotation period\footnote{The rotation
period of the HMNS affects the way in which the viscous stress is applied at the
surface, where rigid rotation is enforced. The pseudo-Newtonian potential is
set to have spin zero when the HMNS is present.} $3$\,ms if a HMNS.
The disk has a mass
$0.1M_\odot$, radius of maximum density $r_{\rm d}=50$\,km,
initial $Y_e = 0.1$, and entropy $8$\,k$_{\rm B}$ per baryon. The
viscosity parameter in all simulations is $\alpha_{\rm v}=0.03$. The computational
domain outside the torus is filled with an inert low-density ambient medium.
The initial ambient level and density floors are set as described in
\cite{Fernandez2020BHNS}.
The computational domain spans the range $\theta \in [0,\pi]$ in polar angle,
with reflecting boundary conditions at each end of the interval. The domain is
discretized with a grid equispaced in $\cos\theta$ using $112$ cells. In the
radial direction, the grid is logarithmically spaced with $128$ points per
decade in radius. This results in a resolution $\Delta r / r \simeq \Delta
\theta \simeq 0.02$ at the equator. When a BH sits at the center, the inner
radial boundary is set at $r\simeq 8.8$\,km, halfway between the innermost stable circular orbit (ISCO) and the horizon
of the BH, and the boundary condition is set to outflow. When a HMNS is
present, the inner radial boundary is reflecting and set at $r=30$\,km. The
outer radial boundary is a factor $10^5$ times larger than the inner radial
boundary, and the boundary condition is set to outflow.
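The quoted resolution follows from these grid choices (illustrative arithmetic only; the equatorial estimate uses $\Delta\theta \approx \Delta(\cos\theta)/\sin\theta$):

```python
# Radial grid: logarithmic, 128 cells per decade
dr_over_r = 10 ** (1.0 / 128) - 1          # ~0.018

# Angular grid: 112 cells equispaced in cos(theta) over [-1, 1];
# at the equator (sin theta = 1), dtheta ~ d(cos theta)
dtheta_eq = 2.0 / 112                      # ~0.018

# Both are ~2%, consistent with dr/r ~ dtheta ~ 0.02 at the equator
assert abs(dr_over_r - 0.0181) < 5e-4
assert abs(dtheta_eq - 0.0179) < 5e-4
```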
When a HMNS is transformed into a BH, the inner radial boundary is moved inward
(from $30$\,km to $8.8$\,km), the extension to the computational domain is
filled with values equal to the first active cell outside the HMNS prior to
collapse, the imposed HMNS luminosities are turned off, and the inner radial
boundary is set to outflow. The newly added cells are filled with inert
matter: no neutrino source terms are applied, and their angular momentum is set
to solid body rotation to eliminate viscous heating. The inert matter in these
new cells is quickly swallowed by the BH, with a minimal impact on the
evolution. This collapse procedure largely follows that of \cite{MF14} and
\cite{fahlman_2018}, allowing us to parameterize and isolate the HMNS lifetime
without needing to fine-tune many parameters in a microphysical EOS.
For each simulation, we add $10^4$ passive, equal-mass tracer particles in the disk,
following the density distribution, in
order to record thermodynamic and kinematic quantities as a function of time.
In models with a finite HMNS lifetime, particles are added upon BH formation;
no disk material has left the domain by that time, so all relevant matter is sampled.
We designate trajectories associated with the unbound disk outflow as those that reach an
extraction radius $r=10^9$\,cm and have positive Bernoulli parameter
\begin{equation}
\label{eq:bernoulli}
Be = \frac{1}{2}\mathbf{v}^2 + e_{\rm int} + \frac{P}{\rho} + \Phi,
\end{equation}
with $\mathbf{v}$ the total fluid velocity, $e_{\rm int}$ the specific internal
energy, $P$ the total gas pressure, $\rho$ the mass density, and $\Phi$ the
gravitational potential. These outflow trajectories are then post processed
with the nuclear reaction network code {\tt SkyNet} \cite{lippuner_skynet},
using the same settings as in \cite{lippuner_2017} and
\cite{Fernandez2020BHNS}. The network employs $\sim 7800$ isotopes and more
than $10^5$ reactions, including strong forward reaction rates from the REACLIB
database \cite{cyburt_2010}, with inverse rates computed from detailed balance;
spontaneous and neutron-induced fission rates from \cite{frankel_1947},
\cite{mamdouh_2001}, \cite{wahl_2002}, and \cite{panov_2010}; weak rates from
\cite{fuller_1982}, \cite{oda_1994}, \cite{langanke_2000}, and the REACLIB
database; and nuclear masses from the REACLIB database, which includes
experimental values where available, or otherwise theoretical masses from the
finite-range droplet macroscopic model (FRDM) of \cite{moeller_2016}.
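In schematic form, the unbound-outflow selection reads as follows (an illustrative Python sketch, not the actual post-processing pipeline; all tracer values are invented placeholders in CGS units):

```python
# Sketch of the unbound-outflow criterion: a tracer counts as ejecta if it
# reaches the extraction radius r = 10^9 cm with positive Bernoulli parameter
# Be = v^2/2 + e_int + P/rho + Phi. Illustrative only; values are placeholders.

R_EXT = 1.0e9  # extraction radius [cm]

def bernoulli(v, e_int, P, rho, Phi):
    """Bernoulli parameter [erg/g]."""
    return 0.5 * v**2 + e_int + P / rho + Phi

def is_unbound_outflow(r, v, e_int, P, rho, Phi):
    """True if the tracer has crossed R_EXT and is unbound (Be > 0)."""
    return r >= R_EXT and bernoulli(v, e_int, P, rho, Phi) > 0.0

# Example tracer snapshot (invented numbers, CGS):
unbound = is_unbound_outflow(r=1.2e9, v=3.0e8, e_int=1.0e17,
                             P=1.0e18, rho=1.0e2, Phi=-2.0e16)
```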
\subsection{Neutrino Leakage Scheme and Flavor Transformation \label{s:leakage}}
We introduce a prescription to account for some of the salient features of
neutrino flavor transformation via the FFI in neutron star mergers. While we
directly solve neither the quantum kinetic equations nor the Boltzmann equation
for neutrinos, the following prescription is constructed to only transform
neutrino flavor outside of regions where neutrinos are in thermodynamic
equilibrium, since the angular asymmetries needed to incite the FFI are weak in
near-equilibrium conditions. We also provide a means to respect the
conservation of net lepton number called for by the symmetries of the neutrino
self-interaction potential.
\subsubsection{Leakage scheme}
The baseline neutrino leakage scheme used here follows \cite{ruffert_1996}, and
in particular the specific implementation described in
\cite{FM13,MF14,lippuner_2017}. While various modifications to the leakage
approach have been proposed to enhance its ability to realistically replicate
true neutrino transport (e.g., \cite{perego_2016,ardevol_2019}), our purpose
here is only to assess the potential impact of neutrino flavor transformation
in a variety of scenarios, and the computational efficiency of the present
scheme enables a large number of inexpensive simulations. Nevertheless,
several upgrades have been made to the leakage implementation used in our
previous work (\cite{MF14}) in order to extend it to 3 species, borrowing from
the implementation in {\tt FLASH} reported in \cite{couch_2014}. First, a third
species (denoted by X) accounting for all heavy lepton species ($\nu_\mu,
\bar\nu_\mu, \nu_\tau, \bar\nu_\tau$) is now tracked. Second, emissivities due
to plasmon decay and electron-positron pair annihilation have been added for
all species, following \cite{ruffert_1996}. Third, we compute opacities for
number and energy transport accounting for neutrino-nucleon elastic scattering
for all species, in addition to charged-current interactions for electron-type
neutrinos and antineutrinos, again following \cite{ruffert_1996}. When
computing emissivities and opacities, the chemical potential for nucleons is
set to that of an ideal gas, for consistency with the equation of state used
(\S\ref{s:hydro}). Finally, the electron neutrino and antineutrino chemical
potentials entering the Fermi blocking factors are obtained, as in \cite{ruffert_1996},
by interpolating between the beta-equilibrium value in opaque regions and zero
in the transparent regime, but using the variable $\rho /
(10^{11}$\,g\,cm$^{-3})$ in lieu of the optical depth. This is done to avoid an
iteration, since the optical depth depends on the opacity, which has Fermi
blocking factors.
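Schematically, this interpolation might look as follows (illustrative Python; the exponential weighting function is an assumed form chosen in the spirit of the optical-depth interpolation of \cite{ruffert_1996}, not necessarily the exact one used in the code):

```python
import math

RHO_TRAP = 1.0e11  # g/cm^3, density scale standing in for unit optical depth

def nu_chemical_potential(eta_beta_eq, rho):
    """Interpolate the electron (anti)neutrino chemical potential between its
    beta-equilibrium value (opaque regime, rho >> RHO_TRAP) and zero
    (transparent regime, rho << RHO_TRAP), using x = rho / 10^11 g/cm^3 in
    lieu of the optical depth. The exponential weight is an assumed form."""
    x = rho / RHO_TRAP
    return eta_beta_eq * (1.0 - math.exp(-x))
```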
The hydrodynamic source terms accounting for neutrino absorption are obtained
from the local absorption opacity and the incident luminosity of electron
neutrinos and antineutrinos. In our implementation, luminosities used for
absorption are made up of a contribution from the disk and another from the
HMNS, when present. For disk luminosities, we use the annular lightbulb
prescription of \cite{FM13}, which heuristically accounts for neutrino
reabsorption by modeling incident radiation as originating from an equatorial ring with a
radius and luminosity representative of the net radiation produced by the disk.
In this prescription, the distribution function of emitted neutrinos is assumed
to have the form
\begin{equation}
\label{eq:distf_FM13}
f_{\nu_i} = e^{-\tau_{\mathrm{irr},i}}\frac{\mathcal{N}_{\nu_i}}{2\pi}\frac{\Theta(\cos\theta_k - \cos\theta_{k,{\rm min}})}{\exp(\epsilon/[kT_{\nu_i}])+1},
\end{equation}
with
\begin{equation}
\label{eq:N_nu}
\mathcal{N}_{\nu_i} = \frac{L^*_{\nu_i}}{(7/16)4\pi r_{\rm em, i}^2 \sigma T_{\nu_i}^4}\,\,.
\end{equation}
Here, $\Theta$ is the step function, $\cos\theta_k$ is the angle between the
propagation direction and the radial direction, and $\cos\theta_{k,{\rm
min}}\simeq 1 - 0.5(r_{\rm em,i}/d)^2$. The emission radius $r_{\rm em, i}$ is
an emissivity-weighted equatorial radius indicative of the point where most of
the neutrinos are emitted in the disk, while $d$ is the distance
between a point on this equatorial ring and the irradiated point. The neutrino
temperature $T_{\nu_i}$ is computed from the mean neutrino energy $\langle
\epsilon_{\nu_i}\rangle$, which in turn is obtained as in \cite{ruffert_1996}
by taking the ratio of the volume-integrated energy- to volume-integrated number emission rates
(we use the conversion $\langle \epsilon_{\nu_i}\rangle =
[\mathcal{F}_4(0)/\mathcal{F}_3(0)]\,kT_{\nu_i}\simeq 4\,kT_{\nu_i}$, with
$\mathcal{F}_i(\mu/kT)$ the Fermi-Dirac integral). This prescription yields a
neutrino distribution that follows a Fermi-Dirac spectrum with temperature
$T_{\nu_i}$ and zero chemical
potential, but normalized such that the luminosity of the ring is equal to the
\emph{net} disk luminosity leaving the computational domain
$L^*_{\nu_i}=L_{\nu_i}-L_{\mathrm{abs},i}$. Here, $L_{\nu_i}$ denotes the
volume integral of the neutrino emissivity, and $L_{\mathrm{abs},i}$ the
volume integral of the neutrino absorption power.\footnote{Absorption terms
are computed with the luminosity from the previous time step, and are set to
zero during the first time step after neutrino sources are turned on.} In
previous work, it was sufficient to assume $L_{\mathrm{abs},i}=0$, since the
reabsorption correction produces no major qualitative changes
on the dynamics and ejecta composition.
However, we find that accounting for $L_{\mathrm{abs},i}$ is needed to ensure
that the number luminosity of electron antineutrinos is higher than that of
electron neutrinos, as occurs for a leptonizing accretion disk, and the
relative number of different neutrino species does impact the effects of flavor
transformation.
The incident neutrino flux from the disk at any point $\mathbf{r}$ in the
computational domain is attenuated by a factor $\exp(-\tau_{\rm irr,i})$, where
\begin{equation}
\label{eq:tau_irr}
\tau_{\rm irr,i}(\mathbf{r}) = \max[\tau_{\nu_i}(\mathbf{r}_{\rm em,i}), \tau_{\nu_i}(\mathbf{r})]
\end{equation}
is the maximum between the local optical depths at the emission maximum (annular ring
$\mathbf{r}_{\rm em}$) and the irradiated point. The local optical depth at any location
is computed using the minimum of the vertical scale height, the horizontal
scale height, and the radius
\begin{equation}
\label{eq:tau_local}
\tau_{\nu_i} = \kappa^{\rm e}_{\nu_i} \min(H_{\rm vert}, H_{\rm horiz}, r)
\end{equation}
where $\kappa^{\rm e}_{\nu_i}$ is the neutrino opacity for energy transport,
$H_{\rm vert} = P/(\rho g |\cos\theta|)$ and $H_{\rm horiz} = P/(\rho
[g\sin\theta - a_{\rm cent}])$ are the vertical and horizontal scale heights,
respectively, with $g$ the
local acceleration of gravity, and $a_{\rm cent}$ the
centrifugal acceleration given the local specific angular momentum and
position. See, e.g. \cite{ardevol_2019,fahlman_2022} for a comparison of this
optical depth prescription with others used in the literature.
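In schematic form (illustrative Python; the handling of directions with vanishing or negative net acceleration, where the scale height formally diverges, is an assumption of this sketch):

```python
import math

def scale_height(P, rho, accel):
    """Pressure scale height P/(rho*a); treated as effectively infinite when
    the net acceleration is non-positive (assumption for this sketch)."""
    return P / (rho * accel) if accel > 0.0 else math.inf

def local_optical_depth(kappa_e, P, rho, g, theta, a_cent, r):
    """Local optical depth: the opacity for energy transport times the
    minimum of the vertical scale height P/(rho g |cos theta|), the
    horizontal scale height P/(rho [g sin theta - a_cent]), and the radius."""
    H_vert = scale_height(P, rho, g * abs(math.cos(theta)))
    H_horiz = scale_height(P, rho, g * math.sin(theta) - a_cent)
    return kappa_e * min(H_vert, H_horiz, r)
```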
The luminosity contribution from the HMNS, when present, is parametric and
imposed at the boundary. The following functional dependence is used (c.f.
\cite{MF14})
\begin{equation}
\label{eq:hmns_lum}
L^{\rm ns}_{\nu_e} = L^{\rm ns}_{\bar\nu_e} = L^{\rm ns}_{\nu_e,0}
\left[\frac{30\,\textrm{ms}}{\max(10\,\textrm{ms},t)}\right]^{1/2},
\end{equation}
with $L^{\rm ns}_{\nu_e,0} = 2\times 10^{52}$\,erg\,s$^{-1}$. The normalization
of this functional form compares favorably with results obtained using moment
transport on the combined HMNS and disk system (e.g., Figure~3 of
\cite{fujibayashi_2020}), and the time dependence corresponds to diffusive
cooling \citep{salpeter_shapiro_1981}. In our default setting, the heavy
lepton luminosity from the HMNS has the same time dependence and the same
normalization as the electron neutrinos and antineutrinos (i.e., $L^{\rm
ns}_{\rm X, 0} = L^{\rm ns}_{\nu_e,0}$). To test the effect of this choice on
our results, for each HMNS lifetime, we run an additional model that increases
the heavy lepton luminosity normalization to twice the default value ($L^{\rm
ns}_{\rm X, 0} = 2L^{\rm ns}_{\nu_e,0}$). The neutrino temperatures of HMNS
neutrinos are constant and set to $T^{\rm ns}_{\nu_e} = 4$\,MeV, $T^{\rm
ns}_{\bar{\nu}_e} = 5$\,MeV, and $T^{\rm ns}_{\nu_x} = T^{\rm ns}_{\bar{\nu}_x}
= 6$\,MeV. This choice is made following typical values in protoneutron stars
(e.g., \cite{janka2001}). As with disk neutrinos, the spectrum is assumed to
follow a Fermi-Dirac distribution with zero chemical potential.
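This parametric luminosity is straightforward to evaluate, e.g. (illustrative Python transcription of the imposed time dependence, with time in seconds):

```python
L0_NS = 2.0e52  # erg/s, electron neutrino/antineutrino normalization

def hmns_luminosity(t, L0=L0_NS):
    """Imposed HMNS luminosity: L = L0 * [30 ms / max(10 ms, t)]^(1/2).
    Constant for t < 10 ms, then decaying as t^(-1/2) (diffusive cooling)."""
    return L0 * (0.030 / max(0.010, t)) ** 0.5
```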
The local distribution function of neutrinos from the HMNS has a similar
functional form as equation (\ref{eq:distf_FM13}), with the following
differences \cite{MF14}: (1) there is no absorption correction to the
luminosity (i.e., $L_{\nu_i}^{*,\mathrm{ns}}=L_{\nu_i}^\mathrm{ns}$), (2)
the angular distribution is that of an emitting sphere, so we use the HMNS
radius instead of the ring radius and the factor $\cos\theta_{k,{\rm min}}$ is
computed analytically, (3) the neutrino temperatures are constant, and (4) the
attenuation factor uses the optical depth integrated along radial rays,
\begin{equation}
\label{eq:tau_ns}
\tau_{\nu_i}^{\rm ns}(\mathbf{r}) = \int_{r_{\rm ns}}^{r} \kappa_{\nu_i}^e(r,\theta) dr,
\end{equation}
with $r_{\rm ns}=30$\,km the stellar radius. The neutrino absorption
contribution from the HMNS is then added to that from the disk. The energy
absorbed from HMNS neutrinos enters the absorption luminosity $L_{{\rm abs},i}$
used to correct the disk luminosity.
\subsubsection{Implementation of the FFI}
In the neutrino leakage treatment, emission and absorption of neutrinos are
treated separately. Flavor transformation occurs after emission during
propagation, so \emph{the neutrino emission terms are unchanged by flavor
transformation}.
We include the effects of the FFI by modifying the incident neutrino fluxes and
neutrino temperatures for absorption. In order to restrict flavor
transformation to regions in the post-merger environment where we expect
instability (see, e.g., \cite{Just2022_FFI,richers_EvaluatingApproximateFlavor_2022}), we
control where flavor transformation occurs by interpolating between oscillated
and un-oscillated luminosities. At any point in the computational domain where
neutrino absorption takes place, the luminosity used in
equation~(\ref{eq:N_nu}) becomes
\begin{equation}
\label{eq:L_eff_heating}
L^*_{\nu_i}\to L^{\rm eff}_{\nu_i} = (1 - \eta_{\rm osc})L^*_{\nu_i} + \eta_{\rm osc}L^{\rm osc}_{\nu_i},
\end{equation}
where $L^*_{\nu_i}$ is the net un-oscillated luminosity, corrected for absorption,
and the superscript ``osc" indicates oscillated luminosities.
The activation parameter $\eta_{\rm osc}$ restricts flavor
transformation to regions where at least
one species is out of thermal equilibrium. Specifically, for disk luminosities we choose
\begin{equation}
\label{eq:eta_osc}
\eta_{\rm osc} = \exp({-\tau_{\bar{\nu}_e}}),
\end{equation}
where the local optical depth (equation~\ref{eq:tau_local}) to electron
antineutrinos is usually smaller than that to electron neutrinos, given the
lower proton fraction.
When a HMNS is present, the luminosities from the disk and the star are
oscillated separately, since in our formulation they originate from separate
locations. The oscillation parameter for the HMNS luminosities uses the same
radially-integrated optical depth used to attenuate it
(equation~\ref{eq:tau_ns}), i.e. $\eta_{\rm osc}^{\rm ns} = \exp(-\tau^{\rm
ns}_{\bar{\nu}_e})$. This working definition results in a simple linear
superposition in regions transparent to both disk and HMNS neutrinos (polar
regions), while ignoring flavor transformation for HMNS neutrinos in regions
where they are heavily attenuated anyway (equator to mid-latitudes). In
\S\ref{s:results}, we show that disk luminosities are much larger than HMNS
luminosities and hence more impactful.
We express the flavor-transformed luminosities themselves as a linear
combination of the un-transformed luminosities:
\begin{eqnarray}
\label{eq:a_osc}
L^{\rm osc}_{\nu_e} & = & (1 - a_{\rm osc})L^*_{\nu_e} + \,a_{\rm osc} L_{\nu_x}\\
\label{eq:b_osc}
L^{\rm osc}_{\bar{\nu}_e} & = & (1 - b_{\rm osc})L^*_{\bar{\nu}_e} + \,b_{\rm osc} L_{\bar{\nu}_x}.
\end{eqnarray}
We separate heavy lepton neutrinos from heavy lepton antineutrinos by evenly
splitting the total heavy lepton luminosity $L_{\rm X}$ produced by the leakage
scheme: $L_{\nu_x} = L_{\bar{\nu}_x} = (1/2) L_{\rm X}$. This is justified
because the mechanisms that produce heavy lepton neutrinos and antineutrinos
are symmetric. The electron neutrino and antineutrino temperatures for absorption
in equations (\ref{eq:distf_FM13})-(\ref{eq:N_nu}) are modified in the same way
as the luminosities
\begin{eqnarray}
\label{eq:Teff_nue}
kT^{\rm eff}_{\nu_e} & = & \left(1 - \eta_{\rm osc}a_{\rm osc}\right)\,kT_{\nu_e} + \eta_{\rm osc}a_{\rm osc}\,kT_{\nu_x}\\
\label{eq:Teff_nuebar}
kT^{\rm eff}_{\bar{\nu}_e} & = & \left(1 - \eta_{\rm osc}b_{\rm osc}\right)\,kT_{\bar{\nu}_e} + \eta_{\rm osc}b_{\rm osc}\,kT_{\bar{\nu}_x},
\end{eqnarray}
where $T_{\nu_x} = T_{\bar{\nu}_x} = T_{\rm X}$.
Reabsorption by heavy lepton neutrinos is neglected, since their absorption
opacities are much smaller than those of electron neutrinos and antineutrinos.
Equations (\ref{eq:a_osc})-(\ref{eq:Teff_nuebar}) are applied separately to
disk and HMNS luminosities.
The coefficients $a_{\rm osc}$ and $b_{\rm osc}$ in equations
(\ref{eq:a_osc})-(\ref{eq:Teff_nuebar}) are scalar quantities that allow us to
manually tune how much flavor change occurs. We test a variety of flavor
transformation assumptions:
\begin{enumerate}
\item \emph{Baseline:} $a_\mathrm{osc}=b_\mathrm{osc}=0$, which ensures no flavor transformation
and consistency with standard neutrino treatment.
\item \emph{Complete}: $a_\mathrm{osc}=b_\mathrm{osc}=1$, such that all neutrinos fully change flavor.
This is quite extreme and unrealistic.
\item \emph{Flavor Equilibration:} $a_\mathrm{osc}=b_\mathrm{osc}=2/3$ results in all neutrinos
and antineutrinos separately having equal abundances in all three flavors. This is still likely extreme.
\item \emph{Intermediate:} $a_\mathrm{osc}=b_\mathrm{osc}=1/2$ is a less extreme version of
the assumption of full Flavor Equilibration.
\item \emph{Asymmetric} (AS): The fast flavor instability is driven by the
neutrino self-interaction Hamiltonian alone, the symmetries of which imply that the
net lepton number cannot change. This requires that
$a_\mathrm{osc}(N_{\nu_e}-N_{\nu_x}) =
b_\mathrm{osc}(N_{\bar{\nu}_e}-N_{\bar{\nu}_x})$, with $N_{\nu_i}$ the local incident
number luminosity.
We fix either $a_\mathrm{osc}=2/3$ or $b_\mathrm{osc}=2/3$, and set the other
coefficient \textit{asymmetric}, with its value determined locally by this relationship. Given that
electron neutrinos are generally sub-dominant by number, and therefore more likely
to undergo flavor transformation, the case
\begin{equation}
\label{eq:b_AS}
a_{\rm osc} = \frac{2}{3} \qquad b_{\rm osc} = \frac{2}{3}\left(\frac{N_{\nu_e} - N_{\nu_x}}{N_{\bar\nu_e}-N_{\bar{\nu}_x}}\right)
\end{equation}
is expected to be the most realistic assumption. A related scheme was
proposed in \cite{Just2022_FFI}. In practice, we compute the asymmetric
coefficient in equation~(\ref{eq:b_AS}) by using the global number luminosity
attenuated with the appropriate optical depth (i.e., as in
equation~\ref{eq:distf_FM13} for disk neutrinos), as geometric dilution cancels
out. Also, the asymmetric coefficient is constrained to the range $[0,1]$.
\end{enumerate}
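The flavor-transformation bookkeeping above can be sketched as follows (illustrative Python; the helper names and test values are ours, not from the simulation code):

```python
def oscillated_luminosity(L_e, L_x, c):
    """Mix electron-type luminosity L_e with the heavy-lepton luminosity L_x
    using coefficient c = a_osc or b_osc."""
    return (1.0 - c) * L_e + c * L_x

def effective_luminosity(L_e, L_x, c, eta):
    """Interpolate between un-oscillated and oscillated luminosities with
    activation parameter eta = exp(-tau)."""
    return (1.0 - eta) * L_e + eta * oscillated_luminosity(L_e, L_x, c)

def b_osc_asymmetric(a, N_nue, N_nux, N_nuebar, N_nuxbar):
    """Asymmetric coefficient: choose b so the net lepton number is conserved,
    a*(N_nue - N_nux) = b*(N_nuebar - N_nuxbar), clamped to [0, 1]."""
    b = a * (N_nue - N_nux) / (N_nuebar - N_nuxbar)
    return min(max(b, 0.0), 1.0)
```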
\begin{table*}
\caption{Models evolved and summary of results. Columns from left to right show
model name, oscillation coefficients $a_{\rm osc}$ and $b_{\rm osc}$
(equations~\ref{eq:a_osc}-\ref{eq:Teff_nuebar}), lifetime $t_{\rm ns}$ of the HMNS,
ratio of HMNS heavy-lepton luminosity normalization to HMNS electron
neutrino luminosity normalization (equation~\ref{eq:hmns_lum}),
mass-outflow-averaged electron fraction and velocity at $r=10^9$\,cm
(equations~\ref{eq:ye_ave}-\ref{eq:vr_ave}), unbound mass ejected at
$r=10^9$\,cm, and unbound mass with $Y_e < 0.25$ that contributes to the red
kilonova component.
\label{t:models}}
\begin{ruledtabular}
\begin{tabular}{lcccccccc}
Model & $a_{\rm osc}$ & $b_{\rm osc}$ & $t_{\rm ns}$ & $L^{\rm ns}_{\rm X,0}$
 & $\langle Y_e\rangle$ & $\langle v_r\rangle$ & $M_{\rm ej}$ & $M_{\rm ej,red}$ \\
 & & & (ms) & ($L^{\rm ns}_{\nu_e,0}$)
 & & ($0.01$\,c) & ($10^{-2}\,M_\odot$) & ($10^{-2}\,M_\odot$) \\
\noalign{\smallskip}
\hline
BH-ab00 & 0 & 0 & 0 & --- & 0.29 & 2.8 & 2.4 & 0.08 \\
BH-ab05 & 1/2 & 1/2 & & & 0.28 & 3.1 & 1.9 & 0.09 \\
BH-ab07 & 2/3 & 2/3 & & & 0.27 & 3.2 & 1.9 & 0.18 \\
BH-ab10 & 1 & 1 & & & 0.26 & 3.4 & 1.8 & 1.09 \\
BH-aAS & AS & 2/3 & & & 0.27 & 3.1 & 2.0 & 0.29 \\
BH-bAS & 2/3 & AS & & & 0.28 & 3.1 & 1.8 & 0.15 \\
\noalign{\smallskip}
t010-ab00 & 0 & 0 & 10 & --- & 0.27 & 3.0 & 3.2 & 0.69 \\
t010-ab05 & 1/2 & 1/2 & & 1.0 & 0.26 & 3.2 & 2.9 & 0.54 \\
t010-ab07 & 2/3 & 2/3 & & & 0.26 & 3.2 & 2.4 & 1.29 \\
t010-ab10 & 1 & 1 & & & 0.24 & 3.1 & 2.3 & 1.72 \\
t010-aAS & AS & 2/3 & & & 0.25 & 2.9 & 2.6 & 1.16 \\
t010-bAS & 2/3 & AS & & & 0.26 & 3.1 & 2.5 & 0.62 \\
t010-L20 & 2/3 & AS & & 2.0 & 0.26 & 3.4 & 2.6 & 1.49 \\
\noalign{\smallskip}
t100-ab00 & 0 & 0 & 100 & --- & 0.31 & 4.5 & 4.2 & 0.53 \\
t100-ab05 & 1/2 & 1/2 & & 1.0 & 0.31 & 5.7 & 5.0 & 0.93 \\
t100-ab07 & 2/3 & 2/3 & & & 0.31 & 6.1 & 5.3 & 1.21 \\
t100-ab10 & 1 & 1 & & & 0.34 & 7.8 & 5.8 & 1.08 \\
t100-aAS & AS & 2/3 & & & 0.31 & 6.3 & 5.3 & 1.25 \\
t100-bAS & 2/3 & AS & & & 0.31 & 6.2 & 5.2 & 1.26 \\
t100-L20 & 2/3 & AS & & 2.0 & 0.31 & 6.1 & 5.1 & 1.23 \\
\noalign{\smallskip}
tinf-ab00 & 0 & 0 & $\infty$ & --- & 0.38 & 7.3 & 9.7 & 0.51 \\
tinf-ab05 & 1/2 & 1/2 & & 1.0 & 0.37 & 7.8 & 9.7 & 1.05 \\
tinf-ab07 & 2/3 & 2/3 & & & 0.38 & 8.2 & 9.6 & 1.10 \\
tinf-ab10 & 1 & 1 & & & 0.40 & 9.2 & 9.4 & 1.03 \\
tinf-aAS & AS & 2/3 & & & 0.38 & 8.2 & 9.6 & 1.31 \\
tinf-bAS & 2/3 & AS & & & 0.38 & 8.2 & 9.6 & 1.15 \\
tinf-L20 & & & & 2.0 & 0.38 & 8.2 & 9.7 & 1.17 \\
tinf-noT\footnote{This model does not mix temperatures (equations~\ref{eq:Teff_nue}-\ref{eq:Teff_nuebar}).}
& & & & 1.0 & 0.40 & 6.9 & 9.4 & 0.54 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Models Evolved \label{s:models_evolved}}
All of our models are shown in Table~\ref{t:models}. We evolve four groups of
simulations that differ in the lifetime of the HMNS: $t_{\rm ns} = \{0, 10, 100\}$\,ms,
plus a set with a HMNS surviving until the end of the
simulation (labeled $t_{\rm ns}=\infty$). All models are evolved for $17.735$\,s,
which corresponds to $5000$ orbits at $r=50$\,km (initial torus density maximum). By that
time, disks have lost at least 95\% of their initial mass to outflows and accretion.
For all four sets of models, we evolve neutrino flavor transformation cases
corresponding to \emph{Baseline}, \emph{Intermediate}, \emph{Complete},
\emph{Flavor Equilibration}, and \emph{Asymmetric} (see \S\ref{s:leakage} for
definitions). Table~\ref{t:models} uses AS to refer to the coefficient set to
asymmetric, with the other held constant (e.g., $b_{\rm osc}= \textrm{AS}$
corresponds to equation~\ref{eq:b_AS}). The model naming convention indicates
first either prompt BH formation or the HMNS lifetime, followed by the
value of the oscillation coefficients $a_{\rm osc}$ and $b_{\rm osc}$ if
symmetric (e.g., model t100-ab10 has $t_{\rm ns}=100$\,ms and $a_{\rm
osc}=b_{\rm osc}=1$) or by AS if one of them is set to asymmetric. For each
HMNS lifetime, we also evolve models with $b_{\rm osc}=\textrm{AS}$ that double the
normalization of the HMNS heavy lepton luminosity (equation~\ref{eq:hmns_lum}), labeled
`L20'. Additionally, we evolve a test model with $t_{\rm ns} = \infty$ and
$b_{\rm osc}=\textrm{AS}$ that includes transformation of neutrino fluxes
(Equations~\ref{eq:a_osc}-\ref{eq:b_osc}) but not neutrino mean energies
(Equations~\ref{eq:Teff_nue}-\ref{eq:Teff_nuebar}), denoted by tinf-noT.
\section{Results \label{s:results}}
\begin{figure*}
\centering
\includegraphics*[width=\linewidth]{f1.pdf}
\caption{Neutrino luminosities emitted by the disk (top) and associated mean
neutrino energies (bottom) in models without neutrino flavor transformation
($a_{\rm osc}=b_{\rm osc}=0$) and various HMNS lifetimes $t_{\rm ns}$, as
labeled (corresponding, from left to right,
to models BH-ab00, t010-ab00, t100-ab00, and tinf-ab00 in Table~\ref{t:models}).
The dashed lines show the imposed luminosity
(Equation~\ref{eq:hmns_lum}) and neutrino energies at the surface of the HMNS,
when present. Note that the disk luminosities used in Equation~(\ref{eq:N_nu})
are corrected for global absorption, and are thus lower than those shown here
(c.f. \S\ref{s:physical_origin}).
}
\label{fig:lum_ener_time}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[width=\linewidth]{f2.pdf}
\caption{Mass histograms of electron fraction for unbound ejecta reaching
$r=10^9$\,cm by the end of each simulation ($t\simeq 17.7$\,s). Rows from top
to bottom show groups of models with different HMNS lifetime $t_{\rm ns}$, as
labeled. The gray shaded histograms show models without flavor transformation,
while colored curves show different combinations of flavor transformation
coefficients $\{a_{\rm osc},b_{\rm osc}\}$ (see \S\ref{s:leakage} for
definitions): runs with symmetric coefficients ($a_{\rm osc}=b_{\rm osc}$) are
on the left column, and asymmetric combinations on the right column. The bin
width is $\Delta Y_e \simeq 0.017$.}
\label{fig:ye_histogram}
\end{figure*}
\begin{figure*}
\centering
\includegraphics*[width=\linewidth]{f3.pdf}
\caption{Activation parameter $\eta_{\rm osc}$ (Equation~\ref{eq:eta_osc}) that
describes where we assume that the FFI operates, shown at selected times as
labeled in the upper left corner of each panel (the rotation
axis is along the $z$-axis and the equatorial plane is at $z=0$). The top row
shows the prompt BH model with no flavor transformation (BH-ab00), while the
bottom row shows the long-lived HMNS model with no flavor transformation
(tinf-ab00). For the latter, we compute -- in post-processing -- an effective
activation parameter that combines the contribution of disk and HMNS
luminosities (Equation~\ref{eq:eta_osc_eff}). The solid lines show density
contours at $10^{8}$\,g\,cm$^{-3}$ (outer) and $10^{11}$\,g\,cm$^{-3}$ (inner).
The black and gray circles indicate the size of the BH (absorbing) or HMNS
(reflecting) boundary. The square-edged red region in the leftmost panels, and
the cavity around the z-axis for the top row, correspond to low-density
ambient material where neutrino source terms are suppressed and we set
$\eta_{\rm osc}=0$. }
\label{fig:eta-osc_snapshots}
\end{figure*}
\subsection{Overview of Evolution without Flavor Transformation \label{s:overview}}
In order to analyze the effects of the FFI on the disk outflow, we first
establish the baseline of comparison: accretion disks with variable HMNS lifetime
that evolve without flavor transformation effects (model names ending in `ab00').
The initial maximum temperature and density in the torus
are $7\times 10^{10}$\,K ($\sim 6$\,MeV) and $8\times 10^{10}$\,g\,cm$^{-3}$,
respectively, thus neutrino emission from the disk is significant, and the disk
is optically thick in its densest regions (e.g., \cite{Ruffert+97,DiMatteo+02}).
In the model with a promptly-formed BH (BH-ab00), the inner disk adjusts to a
near-Keplerian spatial distribution over a few orbits at $r=50$\,km (initial
density peak radius), with neutrino emission peaking at $t\sim 20$\,ms (top
left panel of Figure~\ref{fig:lum_ener_time}). The emitted electron
antineutrino luminosity is slightly larger than the electron neutrino
luminosity, and both are about an order of magnitude larger than the combined
heavy lepton luminosity. Neutrino emission evolves on a timescale set by
viscous angular momentum transport, with luminosities dropping by a factor
$\sim 100$ below their maximum at a time $t\sim 400$\,ms. Thereafter, the disk
is radiatively inefficient (e.g., \cite{Metzger+09a}).
When a HMNS is present, a boundary layer forms at the surface of the star, and
the disk can reach higher maximum densities and temperatures ($\sim
10^{12}$\,g\,cm$^{-3}$ and $\sim 10^{11}$\,K, respectively) than in the prompt
BH case. This results in electron neutrino and antineutrino luminosities from
the disk being higher by a factor of up to $\sim 2$ relative to the prompt BH
case. For a long-lived HMNS (model tinf-ab00, top right panel of
Figure~\ref{fig:lum_ener_time}), disk luminosities decay much more slowly with
time than both the prompt BH luminosities and the HMNS luminosities imposed at
the boundary. The heavy lepton neutrino/antineutrino luminosity from the disk
$L_{\rm X}$ is significantly higher in model tinf-ab00 than in model BH-ab00,
rising to values within a factor of a few of the emitted electron neutrino and
antineutrino luminosities from the disk at $t\sim 200$\,ms.
The intermediate cases of a HMNS lasting for 10\,ms (model t010-ab00) or
100\,ms (model t100-ab00) show neutrino luminosities intermediate between the
prompt BH and long-lived HMNS cases. In the model with $t_{\rm ns}=10$\,ms,
upon HMNS collapse, all luminosities drop sharply to a level below those of the prompt
BH case at the same time, and then recover over a timescale
$t\sim 100$\,ms until they approximately match those from model BH-ab00.
The model with $t_{\rm ns}=100$\,ms is such that upon BH formation, all luminosities
also drop sharply but never recover to the level of the prompt BH model. We
attribute this difference to transport of angular momentum by the boundary
layer when the HMNS is present. The chosen surface rotation period of $3$\,ms
corresponds to sub-Keplerian rotation at the stellar surface and
also at the ISCO radius of the BH, thus material co-rotating with the star at
its surface is not able to circularize upon BH formation, and the resulting
disk has less matter at the same time than a torus that began evolving around a BH.
Before BH formation (and for $t \lesssim 100$\,ms in models BH-ab00 and
t010-ab00), the mean energy of heavy lepton neutrinos emitted by the disk is
higher by up to a factor of $\sim 2$ than those of electron neutrinos and
antineutrinos (bottom row of Figure~\ref{fig:lum_ener_time}). This hierarchy
is due to the low opacity of heavy lepton neutrinos and the steeper temperature
dependence of the primary mechanism that produces them ($e^+e^-$ pair
annihilation). In all cases, the mean energies of electron antineutrinos
emitted by the disk are 20-50\% higher than the mean energies of electron
neutrinos, with values becoming close to one another only before a sharp drop
at $t\sim 0.5$\,s. The drop in mean energies is a consequence of energy
luminosities decreasing faster with time than number luminosities as the disk
transitions to a radiatively inefficient state with lower temperature and
density.
Due to enhanced neutrino irradiation and suppressed mass loss through the inner
boundary, a longer HMNS lifetime correlates with more mass ejected as well as
an overall higher average electron fraction and velocity of the unbound ejecta
\cite{MF14,Perego2014,fujibayashi2018,fahlman_2018,fujibayashi_2022}, which in
turn translates into a lower yield of heavy $r$-process elements
\cite{martin_2015,lippuner_2017,curtis_2021} (Table~\ref{t:models}). Our model
t010-ab00 has a slightly lower average $Y_e$ than the prompt BH model due to a
relative increase in ejected material with $Y_e < 0.25$
(Figure~\ref{fig:ye_histogram}).
Mass ejection in our models is driven by neutrino energy deposition, viscous
heating, and nuclear recombination. Neutrino-driven outflows operate on a
timescale of $\gtrsim 10$\,ms and are significant whenever a HMNS is present.
In pure BH models, and also in late-time HMNS disks, mass is primarily ejected
by a combination of viscous heating and nuclear recombination, operating on a
timescale of a few hundred ms. Simulations that include MHD effects have
additional mass ejection channels available in the form of magnetic stresses
(Lorentz force) that eject matter on a $\sim$\,ms timescale, providing a
distinct component (e.g.,
\cite{siegel_2018,F19_grmhd,miller2019,Just2022_Yeq,hayashi_2022}). The
composition of this prompt disk outflow is sensitive to that of the disk upon
formation (i.e., neutron-rich), since weak interactions do not operate for long
enough to bring $Y_e$ toward its equilibrium value. The properties of this
early magnetic-driven ejection component are also sensitive to the initial
field geometry (e.g., \cite{christie_2019,fahlman_2022}). In models with a
long-lived HMNS, magnetic stresses and/or neutrino absorption combine to launch
a fast outflow (e.g., \cite{Siegel_2014,ciolfi_2019,moesta_2020,shibata_2021,most_2021,sun_2022}).
\begin{figure*}
\includegraphics*[width=\textwidth]{f4.pdf}
\caption{Average outflow properties as a function of flavor transformation
intensity, parameterized through the FFI coefficients $\{a_{\rm osc},b_{\rm
osc}\}$ in symmetric combinations ($a_{\rm osc}=b_{\rm osc}$,
Table~\ref{t:models}). Shown are the total unbound ejected mass (left),
average electron fraction (middle), and average radial velocity (right), for
various HMNS lifetimes $t_{\rm ns}$, as labeled in the middle panel. }
\label{fig:table_results}
\end{figure*}
\subsection{Effect of Flavor Transformation on Outflow Properties \label{s:flavor_transformation}}
\subsubsection{Overall Trends}
Figure~\ref{fig:eta-osc_snapshots} shows the FFI activation parameter
$\eta_{\rm osc}$ (Equation~\ref{eq:eta_osc}) for the prompt BH and long-lived
HMNS models with $a_{\rm osc} = b_{\rm osc} = 0$ at various times in the
evolution. The BH disk starts optically thick in its denser regions and flavor
transformation operates outside these opaque regions, by construction. For as
long as neutrino emission remains significant, the disk retains a dense core
where flavor transformation does not operate, while $\eta_{\rm osc}\sim 1$ in
all the outflow material.
To diagnose the long-lived HMNS case, we compute an effective activation parameter
(weighted by attenuated luminosity) that combines disk
and stellar contributions (which are treated separately
in our formalism, see \S\ref{s:leakage}):
\begin{equation}
\label{eq:eta_osc_eff}
\eta_{\rm osc}^{\rm (eff)} \equiv \frac{L^*_{\bar{\nu}_e}\,\eta_{\rm osc}^2 + L^{\rm ns}_{\bar{\nu}_e}\,{\eta^{\rm ns}_{\rm osc}}^2 }
{L^*_{\bar{\nu}_e}\,\eta_{\rm osc} + L^{\rm ns}_{\bar{\nu}_e}\,\eta^{\rm ns}_{\rm osc} }.
\end{equation}
This formulation implicitly neglects the difference between the distance to the
HMNS surface and the disk emission ring, and assumes $\tau_{{\rm
irr},\bar{\nu}_e} = \tau_{\bar{\nu}_e}$ (Equation~\ref{eq:tau_local}). The
disk optical depth is initially the same as in the BH case, but as accretion
proceeds, a dense and neutrino-opaque boundary layer forms at the surface of
the HMNS. Figure~\ref{fig:eta-osc_snapshots} shows that most of the disk and
its outflow nevertheless have $\eta_{\rm osc}\sim 1$, owing to the
dominance of disk luminosities over HMNS luminosities (c.f.,
Figure~\ref{fig:lum_ener_time}). In fact, the opaque boundary layer prevents
neutrinos emitted from the HMNS surface from reaching the disk, from the
equator up to mid-latitude regions. Neutrino emission from the disk, on the
other hand, is optically thin everywhere except the disk midplane at early
times and the boundary layer, whenever present. This suggests that the effects of
the FFI manifest primarily through disk luminosities at equatorial
latitudes, while a mixture of both contributions acts along polar latitudes.
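As a concrete illustration, the luminosity weighting in Equation~(\ref{eq:eta_osc_eff}) can be sketched in a few lines of Python (a hedged toy example with hypothetical luminosities, not the simulation's diagnostic code):

```python
# Toy evaluation of the effective FFI activation parameter of
# Equation (eta_osc_eff): a luminosity-weighted combination of the
# disk ('*') and HMNS ('ns') contributions. All input values below
# are hypothetical.

def eta_osc_eff(L_disk, eta_disk, L_ns, eta_ns):
    num = L_disk * eta_disk**2 + L_ns * eta_ns**2
    den = L_disk * eta_disk + L_ns * eta_ns
    return num / den

# When the disk dominates the antineutrino luminosity (as in these
# models), the effective parameter tracks the disk value:
eta_eff = eta_osc_eff(L_disk=1.0e53, eta_disk=1.0, L_ns=1.0e52, eta_ns=0.1)
```

With these inputs $\eta_{\rm osc}^{\rm (eff)}\approx 0.99$, reflecting the dominance of the disk contribution discussed above.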
To quantitatively assess the effects of flavor transformation on our model set,
we describe global outflow properties through quantities measured at an
extraction radius $r_{\rm out}=10^9$\,cm. The total ejected mass with positive
Bernoulli parameter (Equation~\ref{eq:bernoulli}) reaching that radius over the
course of the simulation is denoted by $M_{\rm ej}$, and the subset of this
mass with $Y_e < 0.25$ by $M_{\rm ej,red}$. The average electron fraction and
radial velocity at $r_{\rm out}$ are weighted by the mass-flux (e.g.,
\cite{FM13})
\begin{eqnarray}
\label{eq:ye_ave}
\langle Y_e \rangle & = & \frac{\int r_{\rm out}^2 \rho v_r Y_e\, d\Omega\,dt}{\int r_{\rm out}^2 \rho v_r\, d\Omega\, dt}\\
\label{eq:vr_ave}
\langle v_r \rangle & = & \frac{\int r_{\rm out}^2 \rho v_r^2\, d\Omega\,dt}{\int r_{\rm out}^2 \rho v_r\, d\Omega\, dt},
\end{eqnarray}
where only matter with positive Bernoulli parameter is included in the
integral, and the time range includes the entire simulation.
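A minimal discretized version of these mass-flux-weighted averages (with synthetic stand-in data; the constant factor $r_{\rm out}^2$ cancels between numerator and denominator) might look like:

```python
import numpy as np

# Hedged sketch of the mass-flux-weighted averages <Y_e> and <v_r>
# (Equations ye_ave and vr_ave), evaluated over discrete samples of
# unbound material crossing r_out. The arrays are synthetic stand-ins
# for simulation output, not actual data.

def flux_weighted_averages(rho, v_r, ye, dOmega_dt):
    # weight ~ r_out^2 * rho * v_r * dOmega * dt; r_out^2 is constant
    # at the extraction radius and cancels in the ratio
    w = rho * v_r * dOmega_dt
    ye_avg = np.sum(w * ye) / np.sum(w)
    vr_avg = np.sum(w * v_r) / np.sum(w)
    return ye_avg, vr_avg

rho  = np.array([1e6, 5e5, 2e5])     # g/cm^3 (hypothetical)
v_r  = np.array([3e8, 5e8, 1e9])     # cm/s
ye   = np.array([0.20, 0.28, 0.35])
dOdt = np.array([1.0, 1.0, 1.0])     # solid-angle x time elements
ye_avg, vr_avg = flux_weighted_averages(rho, v_r, ye, dOdt)
```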
Table~\ref{t:models} shows that for each HMNS lifetime, the overall changes
introduced by neutrino flavor transformation on the ejecta properties are
moderate: at most $\sim 10\%$ in average electron fraction, up
to $\sim 40\%$ in total ejecta mass, and $\sim 10-40\%$ in average velocity,
except for the most extreme FFI case with $t_{\rm ns}=100$\,ms, for which
the average velocity increases by $73\%$. The mass with $Y_e < 0.25$ ($M_{\rm ej,red}$) can
change by a factor of up to a few.
The direction of these changes depends on the HMNS lifetime, as illustrated by
Figure~\ref{fig:table_results} for models with symmetric FFI coefficients
($a_{\rm osc}=b_{\rm osc}$). For $t_{\rm ns}\leq 10$\,ms, the average electron
fraction of models with flavor transformation is always lower than in the
un-oscillated case. Figure~\ref{fig:ye_histogram} illustrates the shift of the
electron fraction distribution to more neutron-rich values, with the $Y_e$ at the
peak of mass ejection typically decreasing by up to $0.05$.
An HMNS with lifetime $t_{\rm ns}\geq 100$\,ms, on the other hand, shows an
overall broadening of the $Y_e$ distribution when including flavor
transformation, with the average electron fraction staying constant
or increasing by at most $0.02$. The long-lived HMNS set shows
a peak $Y_e$ value shifting to higher, proton-rich values.
In all cases, the average outflow velocity stays nearly constant or increases
(most notably for $t_{\rm ns}=100$\,ms) when including flavor transformation
relative to the baseline case. Likewise, mass ejection decreases with
a more intense FFI in all cases except for the set with $t_{\rm ns}=100$\,ms.
For symmetric values of the oscillation coefficients (model names ending in
ab00, ab05, ab07, and ab10), the magnitude of the changes introduced by flavor
transformation generally varies monotonically with the value of these
coefficients. The case $a_{\rm osc} = b_{\rm osc}=1$ shows the strongest
effect, as expected. When using asymmetric coefficients, we find that the
ratio of number luminosities in equation~(\ref{eq:b_AS}) starts low, since
initially $N_{\nu_e}<N_{\bar{\nu}_e}$, but approaches values close to unity on a
timescale of $\sim 35$\,ms (10 orbits at $r=50$\,km). The magnitude of the
changes in average quantities is similar (but not always identical) to the case
$a_{\rm osc} = b_{\rm osc} = 2/3$, as expected. Differences between setting
either $a_{\rm osc}$ or $b_{\rm osc}$ to its asymmetric form are minor, as shown by the
right column of Figure~\ref{fig:ye_histogram}.
\begin{table*}
\caption{Average of quantities extracted from tracer particles. From left to
right, the first 9 columns show model name, net change in $Y_e$
(eq.~\ref{eq:dye_net}), change in $Y_e$ due to emission/absorption of electron
neutrinos/antineutrinos (eqns.~\ref{eq:dye_definition}-\ref{eq:ghep}, as
labeled), net energy change due to viscous heating, neutrino
emission/absorption, and alpha particle recombination
(eqns.~\ref{eq:dq_visc}-\ref{eq:dq_alpha}).
For models with $t_{\rm ns}=10$\,ms and $100$\,ms, direct time integration of
source terms (columns 2-9) is done from BH formation onward ($t_{\rm
min}=t_{\rm ns}$). For the long-lived HMNS models, integration begins
at $t_{\rm min}=0$ but stops at $t_{\rm max}=35$\,ms. Therefore, except for
the prompt BH series, the net change in $Y_e$ in the second column does not
capture the entire history, and comparisons should only be made between
simulations with the same $t_{\rm ns}$.
The last two columns show the
average mass fraction of Lanthanides ($X_{\rm La}$) and Actinides ($X_{\rm
Ac}$) obtained with {\tt SkyNet} including the entire simulation, and
extrapolating the trajectories to $30$\,yr.
\label{t:analysis}}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
Model & $\Delta \bar{Y}^{\rm net}_e$ & $\Delta \bar{Y}^{{\rm em},\nu_e}_e$ & $\Delta \bar{Y}^{{\rm em},\bar{\nu}_e}_e$ &
$\Delta \bar{Y}^{{\rm abs},\nu_e}_e$ & $\Delta \bar{Y}^{{\rm abs},\bar{\nu}_e}_e$ &
$\Delta \bar{q}_{\rm visc}$ & $\Delta \bar{q}_{\nu}$ & $\Delta \bar{q}_{\alpha}$ &
$\bar{X}_{\rm La}$ & $\bar{X}_{\rm Ac}$\\
& & & & & & \multicolumn{3}{c}{($10^{19}$\,erg\,g$^{-1}$)} & $(10^{-2})$ & $(10^{-2})$\\
\noalign{\smallskip}
\hline
BH-ab00 & 0.19 & 1.05 & 0.96 & 0.44 & 0.16 & 2.06 & -1.07 & 0.33 & 0.8 & 0.3\\
BH-ab05 & 0.18 & 1.03 & 0.99 & 0.34 & 0.12 & 2.13 & -1.15 & 0.32 & 0.8 & 0.2\\
BH-ab07 & 0.17 & 1.08 & 1.06 & 0.30 & 0.11 & 2.31 & -1.31 & 0.31 & 0.9 & 0.2\\
BH-ab10 & 0.16 & 1.09 & 1.12 & 0.20 & 0.08 & 2.60 & -1.58 & 0.30 & 1.9 & 0.3\\
BH-aAS & 0.17 & 1.04 & 1.06 & 0.25 & 0.11 & 2.43 & -1.34 & 0.31 & 1.0 & 0.3\\
BH-bAS & 0.17 & 1.08 & 1.07 & 0.30 & 0.13 & 2.36 & -1.30 & 0.31 & 1.1 & 0.2\\
\noalign{\smallskip}
t010-ab00\footnote{For this group of models, source terms are integrated over $t\geq t_{\rm ns}=10$\,ms.}
& 0.11 & 0.69 & 0.58 & 0.32 & 0.10 & 1.39 & -0.56 & 0.29 & 2.3 & 0.9\\
t010-ab05 & 0.10 & 0.63 & 0.57 & 0.22 & 0.07 & 1.50 & -0.65 & 0.28 & 1.5 & 1.3\\
t010-ab07 & 0.09 & 0.59 & 0.56 & 0.17 & 0.05 & 1.52 & -0.67 & 0.28 & 2.1 & 1.4\\
t010-ab10 & 0.07 & 0.57 & 0.58 & 0.08 & 0.03 & 1.67 & -0.83 & 0.26 & 3.0 & 1.1\\
t010-aAS & 0.09 & 0.54 & 0.54 & 0.14 & 0.05 & 1.44 & -0.66 & 0.27 & 2.1 & 1.4\\
t010-bAS & 0.09 & 0.61 & 0.59 & 0.18 & 0.06 & 1.63 & -0.70 & 0.28 & 1.8 & 1.3\\
t010-L20 & 0.09 & 0.65 & 0.62 & 0.19 & 0.06 & 1.63 & -0.76 & 0.28 & 1.8 & 1.4\\
\noalign{\smallskip}
t100-ab00\footnote{For this group of models, source terms are integrated over $t\geq t_{\rm ns}=100$\,ms.}
& 0.011 & 0.029 & 0.035 & 0.009 & 0.004 & 0.25 & -0.04 & 0.13 & 0.6 & 0.08 \\
t100-ab05 & 0.004 & 0.009 & 0.013 & 0.001 & 7E-4 & 0.14 & -0.01 & 0.10 & 1.1 & 0.2 \\
t100-ab07 & 0.002 & 0.006 & 0.008 & 6E-4 & 3E-4 & 0.09 & -0.01 & 0.09 & 1.1 & 0.3 \\
t100-ab10 & 0.001 & 0.004 & 0.005 & 3E-5 & 1E-5 & 0.06 & -8E-3 & 0.07 & 1.1 & 0.2 \\
t100-aAS & 0.003 & 0.009 & 0.011 & 8E-4 & 4E-4 & 0.09 & -0.02 & 0.09 & 1.2 & 0.3 \\
t100-bAS & 0.003 & 0.007 & 0.010 & 6E-4 & 4E-4 & 0.12 & -0.01 & 0.09 & 1.2 & 0.3 \\
t100-L20 & 0.003 & 0.010 & 0.013 & 9E-4 & 5E-4 & 0.11 & -0.02 & 0.09 & 1.3 & 0.3 \\
\noalign{\smallskip}
tinf-ab00\footnote{For this group of models, source terms are integrated over $0 \leq t\leq 35$\,ms.}
& 0.08 & 1.00 & 0.88 & 0.41 & 0.20 & 1.13 & -1.79 & 0.02 & 0.5 & 0.2\\
tinf-ab05 & 0.08 & 1.02 & 0.90 & 0.40 & 0.19 & 1.09 & -1.63 & 0.02 & 1.0 & 0.4\\
tinf-ab07 & 0.09 & 1.02 & 0.90 & 0.40 & 0.19 & 1.08 & -1.52 & 0.02 & 0.9 & 0.4\\
tinf-ab10 & 0.11 & 1.06 & 0.90 & 0.43 & 0.17 & 1.07 & -1.40 & 0.02 & 0.7 & 0.2\\
tinf-aAS & 0.09 & 1.04 & 0.92 & 0.41 & 0.19 & 1.08 & -1.47 & 0.02 & 1.0 & 0.4\\
tinf-bAS & 0.09 & 1.01 & 0.90 & 0.40 & 0.19 & 1.09 & -1.52 & 0.02 & 0.9 & 0.4\\
tinf-L20 & 0.09 & 1.05 & 0.91 & 0.44 & 0.21 & 1.09 & -1.49 & 0.02 & 0.9 & 0.4\\
tinf-noT & 0.07 & 1.00 & 0.87 & 0.37 & 0.17 & 1.15 & -1.91 & 0.01 & 0.4 & 0.1
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsubsection{Physical Origin of the Changes due to the FFI \label{s:physical_origin}}
The effect of the FFI on the disk outflow can be ultimately traced back to the
hierarchy of luminosities and energies shown in Figure~\ref{fig:lum_ener_time}.
For the prompt BH case, where only neutrinos from the disk exist, flavor
transformation through equations (\ref{eq:a_osc}), (\ref{eq:b_osc}),
(\ref{eq:Teff_nue}), and (\ref{eq:Teff_nuebar}) replaces a high-luminosity,
low-energy species ($\nu_e$, $\bar{\nu}_e$) with a low-luminosity, high-energy
species ($\nu_x$, $\bar{\nu}_x$). In the optically-thin limit, neutrino number
absorption is proportional to $\sim L_{\nu_i} \langle \epsilon_{\nu_i}\rangle$,
with energy absorption having an additional power\footnote{For simplicity,
we assume $\langle \epsilon_{\nu_i}^2\rangle=\langle \epsilon_{\nu_i}\rangle^2$
in the argument.} of $\langle \epsilon_{\nu_i}\rangle$.
In this simple picture, a complete flavor swap should decrease the
electron-flavor neutrino luminosity by an order of magnitude and increase the
average energy of electron-flavor neutrinos by a factor of up to $\sim 2$, for
an overall decrease in number absorption of a factor of $\sim 2$.
\begin{figure*}
\includegraphics*[width=\linewidth]{f5.pdf}
\caption{Mass histograms of radial velocity for unbound ejecta from selected
simulations with a prompt BH (left) and long-lived HMNS (right), with flavor
transformation coefficients as labeled. The dashed line on the right
panel corresponds to the long-lived HMNS model with asymmetric
$b_{\rm osc}$ but no mixing of the neutrino temperatures
(model tinf-noT in Tables~\ref{t:models} and \ref{t:analysis}).
The bin width is $\Delta \log (v_r/c) = 0.1$.}
\label{fig:hist_vel}
\end{figure*}
To diagnose quantitatively the effects on the electron fraction, we show in
Table~\ref{t:analysis} the time-integral of the source terms that control the
evolution of $Y_e$ (c.f., \cite{Fernandez2020BHNS}),
\begin{equation}
\label{eq:dye_definition}
\Delta Y_e^i = \int_{t_{\rm min}}^{t_{\rm max}} \Gamma^i dt,
\end{equation}
where $\Gamma^i$ is the rate per baryon of charged-current weak processes:
\begin{eqnarray}
\label{eq:gcem}
\Gamma^{\rm em, \nu_e} = \lambda_{e^-}Y_p: && e^- + p \to n + \nu_e\\
\label{eq:gcep}
\Gamma^{\rm em, \bar{\nu}_e} = \lambda_{e^+}Y_n: && e^+ + n \to p + \bar{\nu}_e\\
\label{eq:ghem}
\Gamma^{\rm abs, \nu_e}=\lambda_{\nu_e}Y_n: && \nu_e + n \to e^- + p\\
\label{eq:ghep}
\Gamma^{\rm abs, \bar{\nu}_e} = \lambda_{\bar{\nu}_e}Y_p: && \bar{\nu}_e + p \to e^+ + n
\end{eqnarray}
where $\lambda_i$ are the reaction rates per target particle, and $Y_{n,p}$ the
number of neutrons or protons per baryon (in the notation of
\cite{Just2022_FFI}). Equation~(\ref{eq:dye_definition}) is computed for each
weak process in each trajectory that is unbound and reaches $r>10^9$\,cm.
Values are then averaged arithmetically over trajectories (which have identical
mass for a given run), denoted by a bar above, and the net change is computed as
\begin{equation}
\label{eq:dye_net}
\Delta \bar{Y}^{\rm net}_e = \Delta \bar{Y}^{\rm em, \bar{\nu}_e}_e - \Delta \bar{Y}^{\rm em, \nu_e}_e
+ \Delta \bar{Y}^{\rm abs, \nu_e}_e - \Delta \bar{Y}^{\rm abs, \bar{\nu}_e}_e.
\end{equation}
The time range $[t_{\rm min},t_{\rm max}]$ in
equation~(\ref{eq:dye_definition}) is different for model sets with different
HMNS lifetime. For the prompt BH case, the interval is the entire simulation
time. For sets with $t_{\rm ns} = 10$\,ms and $100$\,ms, the interval begins at
BH formation ($t_{\rm min}=t_{\rm ns}$) and extends to the end of the
simulation, because particles are created after HMNS collapse. For these model
sets, the integrated quantities therefore capture the impact of the FFI on
post-collapse neutrino processes. For the long-lived HMNS case, the period extends from the start of
the simulation ($t_{\rm min}=0$) until $10$ orbits at $r=50$\,km ($t_{\rm
max}\simeq 35$\,ms). Limiting the integration interval is necessary because
the absorption and emission terms in the long-lived HMNS case are large, and
our trajectories are sampled at coarser time intervals at later times, leading
to imprecise cancellation of large terms when numerically integrating over very
long time intervals in post-processing. Because of this, direct comparisons
should be made across models with the same $t_\mathrm{ns}$ in Table~\ref{t:analysis}. Note that the
tracer particles themselves are updated every time step and do not suffer from
this post-processing error. In all cases, the chosen time interval satisfies
$\bar{Y}_e(t_{\rm min})+\Delta \bar{Y}^{\rm net}_e = \bar{Y}_e (t_{\rm max})$.
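The per-trajectory bookkeeping described above can be sketched as follows (an illustrative Python fragment with made-up constant rates, not the actual post-processing pipeline):

```python
import numpy as np

# Illustrative sketch of Equations (dye_definition) and (dye_net):
# time-integrate each charged-current rate (Eqs. gcem-ghep) along a
# single unbound trajectory and form the net change in Y_e. The
# constant per-baryon rates used here are hypothetical.

def delta_ye_terms(t, rates):
    """t: time grid [s]; rates: dict of per-baryon rates Gamma_i(t) [1/s]."""
    dY = {k: np.trapz(g, t) for k, g in rates.items()}
    # Signs follow the lepton number carried by each reaction:
    dY["net"] = (dY["em_nueb"] - dY["em_nue"]
                 + dY["abs_nue"] - dY["abs_nueb"])
    return dY

t = np.linspace(0.0, 0.1, 101)          # 100 ms of evolution
ones = np.ones_like(t)
dY = delta_ye_terms(t, {"em_nue": 10.0 * ones, "em_nueb": 9.0 * ones,
                        "abs_nue": 4.0 * ones, "abs_nueb": 1.5 * ones})
# dY["net"] = 0.9 - 1.0 + 0.4 - 0.15 = 0.15
```

The chosen rates give integrated terms of the same order of magnitude as the prompt BH rows of Table~\ref{t:analysis}, but are not taken from any simulation.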
The change in $Y_e$ with flavor transformation in the prompt BH models can be
explained straightforwardly: as the FFI becomes stronger for
models with increasing values of $a_{\rm osc}=b_{\rm osc}$, the average
absorption of both electron neutrinos and antineutrinos decreases, with $\Delta
\bar{Y}^{\rm abs, \nu_e}_e$ decreasing more than $ \Delta \bar{Y}^{\rm abs,
\bar{\nu}_e}_e$, and a maximum drop in absorption of a factor $\sim 2$.
Emission terms, on the other hand, change by a few percent at most, since
flavor transformation only changes emission rates indirectly through a minor
effect on the disk dynamics. A decrease in neutrino absorption decreases the
rate at which weak interactions bring $Y_e$ to its equilibrium value, and also
lowers the equilibrium value itself (e.g., \cite{Just2022_Yeq,Just2022_FFI}).
We diagnose the effects of flavor transformation on the outflow dynamics by
integrating energy source terms of fluid elements. Table~\ref{t:analysis}
shows the average specific energy gain of disk outflow trajectories through the
quantities
\begin{eqnarray}
\label{eq:dq_visc}
\Delta q_{\rm visc} & = & \int_{t_{\rm min}}^{t_{\rm max}} \dot{q}_{\rm visc} dt\\
\label{eq:dq_nu}
\Delta q_\nu & = & \int_{t_{\rm min}}^{t_{\rm max}} \dot{q}_{\rm \nu} dt\\
\label{eq:dq_alpha}
\Delta q_\alpha & = & \frac{B_\alpha}{m_\alpha}\left [X_{\alpha}(t_{\rm max}) - X_\alpha(t_{\rm min}) \right],
\end{eqnarray}
where $ \dot{q}_{\rm visc}$ and $\dot{q}_{\rm \nu}$ are the rate of viscous
and net neutrino heating per unit mass, respectively, ${B_\alpha}/{m_\alpha}$ is
the nuclear binding energy per unit mass of alpha particles, and $X_{\alpha}$
is the mass fraction of alpha particles. For each model, the time range and
particle sample employed is the same as in equation~(\ref{eq:dye_definition}).
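As a quick consistency check on the magnitude of the recombination term in Equation~(\ref{eq:dq_alpha}), the binding energy per unit mass of alpha particles can be evaluated directly (standard constants; the $\Delta X_\alpha = 0.5$ used below is only a representative value):

```python
# Order-of-magnitude check of the nuclear recombination energy scale
# B_alpha/m_alpha entering Equation (dq_alpha). Constants are standard;
# the Delta X_alpha below is a hypothetical, representative change.

MeV_to_erg = 1.602e-6
amu = 1.6605e-24                      # g
B_alpha = 28.3 * MeV_to_erg           # 4He binding energy in erg
m_alpha = 4.0026 * amu                # alpha particle mass in g

B_over_m = B_alpha / m_alpha          # ~6.8e18 erg/g
dq_alpha = B_over_m * 0.5             # ~3.4e18 erg/g for Delta X_alpha = 0.5
```

The resulting $\sim 3.4\times 10^{18}$\,erg\,g$^{-1}$ is comparable to the $\Delta\bar{q}_\alpha \simeq 0.3\times 10^{19}$\,erg\,g$^{-1}$ entries for the prompt BH models in Table~\ref{t:analysis}.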
For the prompt BH models, the overall decrease in the neutrino absorption to
emission ratio due to flavor transformation also results in higher net cooling
of the torus (more negative $\Delta \bar{q}_\nu$), decreasing the vertical
extent of the disk. This is accompanied by an increase in viscous heating,
which is proportional to the local disk pressure (c.f., \cite{FM13}). In the
BH model with \emph{Complete} flavor transformation (BH-ab10), net neutrino
cooling increases by $60\%$ while viscous heating increases by $30\%$ relative
to the \emph{Baseline} model (BH-ab00). The left panel of
Figure~\ref{fig:hist_vel} shows the velocity distribution of the outflow from
both models: the high-velocity portion of the histogram remains at a similar
level, with flavor transformation inducing a slight shift of the peak to higher
velocities and a sizable decrease in the low-velocity portion. If we change
the unbinding criterion from Bernoulli to positive escape velocity\footnote{The
positive escape velocity criterion is more stringent, showing less overall mass
ejected, and selecting only higher-velocity matter. The Bernoulli criterion
accounts for the conversion of internal energy to bulk kinetic energy via
adiabatic expansion, allowing slower matter to be considered as having
sufficient energy to become gravitationally unbound.}, we find that the amount
of mass ejected in model BH-ab10 is $10\%$ \emph{higher} than in model BH-ab00,
versus $25\%$ lower if we use the default Bernoulli criterion. Thus, the
overall change in energy source terms introduced by the FFI in BH disks results
in \emph{less marginally unbound mass ejected}.
Regarding the long-lived HMNS model set (tinf), Table~\ref{t:analysis} shows
that electron neutrino number absorption slightly \emph{increases} or stays
constant in models with increasing $a_{\rm osc}=b_{\rm osc}$ in the first
$35$\,ms of evolution, while absorption of electron antineutrino number
decreases in a similar way as in the pure BH case. A decrease in electron
antineutrino absorption relative to neutrino absorption, with little change in
the emission terms, increases the equilibrium value toward which $Y_e$ is
driven (e.g., \cite{Just2022_Yeq}).
\begin{figure}
\includegraphics*[width=\columnwidth]{f6.pdf}
\caption{Luminosities as a function of time for the prompt BH model
(top) and HMNS model (bottom) without flavor transformation. Shown for electron neutrinos and antineutrinos
are the emitted luminosities $L_{\nu_i}$ (thin solid lines), total power
absorbed $L_{\nu_i}^{\rm abs}$ (dashed lines), and net luminosities used in
equation~(\ref{eq:N_nu}), $L_{\nu_i}^* = L_{\nu_i}-L_{\nu_i}^{\rm abs}$ (thick
solid lines). The black lines show the emitted luminosities of
heavy lepton neutrinos and antineutrinos, $L_{\nu_x} = L_{\bar{\nu}_x} = L_{\rm
X}/2$ (we neglect their absorption so no correction is applied, i.e., $L_{\nu_x}^* = L_{\nu_x}$).}
\label{fig:lum_abs-corr_time}
\end{figure}
The increase in electron neutrino absorption with flavor transformation
intensity for the long-lived HMNS is the consequence of two effects that modify
the simpler picture for a prompt BH. First, the drop in electron neutrino
luminosity upon flavor mixing is not as large as in the BH case. The heavy
lepton luminosity is significantly larger than in the pure BH case, as the
boundary layer region reaches higher densities and temperatures
(Figure~\ref{fig:lum_ener_time}). Also, electron neutrino absorption is more
important than in the pure BH case due to the opaque boundary layer.
Figure~\ref{fig:lum_abs-corr_time} shows that the absorption-corrected
luminosity $L_{\nu_e}^*$ is reduced relative to the emitted luminosity
$L_{\nu_e}$ by a larger factor in model tinf-ab00 than in model BH-ab00 (for
heavy leptons $L^*_{\rm X}=L_{\rm X}$ always, since we neglect their
absorption). As a result, swapping $L^*_{\nu_x}$ and $L_{\nu_e}^*$ in the HMNS
case results in a moderate (factor $\lesssim 2$) drop in electron neutrino flux
during the relevant part of the evolution, in contrast to the BH case in which
the decrease is a factor $\sim 10$.
The second effect leading to more electron neutrino absorption with flavor
transformation in long-lived HMNS disks is the mixing of the temperature of
emitted neutrinos (Equations~\ref{eq:Teff_nue}-\ref{eq:Teff_nuebar}). This
effect is present in all models with flavor transformation, and it tends to
increase net absorption by increasing the cross section, as the mean energy of
heavy lepton neutrinos is always larger than that of electron neutrinos or
antineutrinos (Figure~\ref{fig:lum_ener_time}). For the prompt BH models, this
effect is sub-dominant, since the drop in neutrino flux is much larger than the
increase in mean neutrino energies from the disk (c.f.
Figure~\ref{fig:lum_abs-corr_time}). For the long-lived HMNS case, however,
the reduction in absorption rate due to the difference between $L_{\nu_e}^*$
and $L_{\nu_x}^*$ is comparable to or smaller than the increase in absorption
rate from the increase in the absorption cross section due to the higher
average neutrino energy. Thus, the global absorption of electron neutrinos
remains nearly constant or even increases. More absorption of electron-type
neutrinos increases the equilibrium electron fraction \cite{Just2022_Yeq}.
\begin{figure}
\includegraphics*[width=\columnwidth]{f7.pdf}
\caption{Mass histograms of electron fraction for unbound ejecta at the end of
the simulation for a pair of long-lived HMNS models that differ only in that they either
include (tinf-bAS, solid blue) or exclude (tinf-noT, dashed violet) mixing of the neutrino
temperatures via Equations~(\ref{eq:Teff_nue})-(\ref{eq:Teff_nuebar}).
}
\label{fig:hist_ye_tinf}
\end{figure}
As a test of this interpretation, we ran another model (tinf-noT) with the same
parameters as tinf-bAS but neglecting the swap of neutrino temperatures. The
absorption of electron neutrinos then decreases during the first
$35$\,ms of evolution relative to model tinf-ab00 and tinf-bAS
(Table~\ref{t:analysis}) as expected, since $L_{\nu_e}^*$ is still
larger than $L_{\nu_x}^*$ by a factor $\geq 2$ over that time period.
The change in $Y_e$ over this interval is also smaller in model tinf-noT than in all
other models with $t_{\rm ns}=\infty$.
Despite the lower amount of electron neutrino absorption in its early evolution
and a smaller change in $Y_e$, model tinf-noT has a higher average
electron fraction by the end of the simulation (Table~\ref{t:models}) than its
sibling model that includes neutrino temperature oscillation (tinf-bAS).
Figure~\ref{fig:hist_ye_tinf} shows that while the peak of the electron
fraction distribution by the end of the simulation is nearly the same in both
cases, the amount of low $Y_e$ material is lower in model tinf-noT, hence the
average over the entire outflow is higher.
To further dissect the origin of these changes, we note that
\citet{lippuner_2017} showed that the outflow from HMNS disks can be separated
into an earlier, mostly neutrino-driven component, and a late component driven
primarily by viscous heating and nuclear recombination. The early component
exhibits a strong correlation between electron fraction and entropy, with a
turnover in the range $Y_e\sim 0.4-0.5$, while the late component shows a
more scattered distribution in entropy in a narrower $Y_e$ range.
Figure~\ref{fig:ye-ents-vel_scatter} shows a scatter plot of unbound particles
in $Y_e$-entropy-velocity space for models tinf-noT, tinf-bAS, and tinf-ab00,
tagged by the time at which they reach the extraction radius at $r=10^9$\,cm.
The presence of the early ($t<1$\,s, yellow) neutrino-driven wind component is
evident, making up the majority of particles that span the electron fraction
interval $[0.15,0.5]$ and forming the broad component of the $Y_e$ histogram in
Figure~\ref{fig:hist_ye_tinf}. The smaller amount of low-$Y_e$ ejecta from
model tinf-noT is thus associated with a smaller contribution of the
neutrino-driven wind, given the drop in luminosity upon flavor mixing without
compensation by a higher neutrino temperature.
The late-time component is also evident in Figure~\ref{fig:ye-ents-vel_scatter}
(blue particles), and is associated with the peak in the $Y_e$ histogram. The
fact that this peak is at a similar value of $Y_e$ in models tinf-noT and
tinf-bAS (Figure~\ref{fig:hist_ye_tinf}), but higher than the peak $Y_e$ from model
tinf-ab00 (bottom right panel of Figure~\ref{fig:ye_histogram}), indicates that
its location is much more sensitive to the swapping of fluxes than to that of
neutrino temperatures when the FFI operates. We can gain a qualitative
understanding of these trends by evaluating the equilibrium electron fraction
from pure absorption, to which a neutrino-driven wind without cooling is driven
\cite{Qian&Woosley96}
\begin{equation}
\label{eq:yeq_abs}
Y_e^{\rm eq,abs} \sim \left(1 + \frac{\langle \varepsilon_{\bar{\nu}_e}\rangle\,L^*_{\bar{\nu}_e}}
{\langle \varepsilon_{\nu_e}\rangle\, L^*_{\nu_e}} \right)^{-1}
\end{equation}
where again we assume
$\langle \varepsilon_{\nu_i}^2\rangle=\langle \varepsilon_{\nu_i}\rangle^2$.
Ignoring attenuation, considering the contribution of the disk alone, and
adopting constant $a_{\rm osc}=b_{\rm osc}=2/3$ we find $Y_e^{\rm eq,abs}\simeq
\{0.35, 0.39, 0.45\}$ at $t=1$\,s for models tinf-ab00, tinf-noT, and tinf-bAS, respectively.
Considering the HMNS contribution alone, we get $Y_e^{\rm eq,abs}\simeq \{0.44,
0.44, 0.48\}$ independent of time for the same set of models. These values are
consistent with model tinf-bAS having a more proton-rich $Y_e$ peak than the
baseline model tinf-ab00, but do not fully account for model tinf-noT being
closer to model tinf-bAS than to model tinf-ab00 in its late-time component. A
spread in $Y_e$ within a given model can be accounted for by (1) latitude:
particles ejected closer to the rotation axis have a stronger irradiation
contribution from the HMNS, and (2) attenuation: fluctuations in the ratio of
neutron to proton fraction alter the local incident luminosities in
equation~(\ref{eq:yeq_abs}) through the optical depth, and thereby affect
$Y_e^{\rm eq,abs}$. Also, neutrino emission is non-negligible,
so a more accurate equilibrium electron fraction would include
all four reactions contributing to the change in electron fraction
(Equations~\ref{eq:gcem}-\ref{eq:ghep}); such a calculation is beyond the scope of this study.
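For reference, Equation~(\ref{eq:yeq_abs}) is cheap to evaluate; a hedged sketch with purely illustrative (not simulation) inputs:

```python
# Sketch of Equation (yeq_abs): the absorption-only equilibrium electron
# fraction, under the stated assumption <eps^2> = <eps>^2. Luminosities
# (erg/s) and mean energies (MeV) below are illustrative only.

def ye_eq_abs(L_nue, eps_nue, L_nueb, eps_nueb):
    return 1.0 / (1.0 + (eps_nueb * L_nueb) / (eps_nue * L_nue))

# Equal fluxes and energies drive Y_e toward 0.5:
ye_sym = ye_eq_abs(L_nue=1.0e52, eps_nue=12.0, L_nueb=1.0e52, eps_nueb=12.0)
# A brighter, harder antineutrino field lowers the equilibrium value:
ye_asym = ye_eq_abs(1.0e52, 12.0, 1.2e52, 15.0)   # -> 0.4
```

This makes explicit how a flavor-transformation-induced change in the ratio $\langle \varepsilon_{\bar{\nu}_e}\rangle L^*_{\bar{\nu}_e} / \langle \varepsilon_{\nu_e}\rangle L^*_{\nu_e}$ shifts the equilibrium point, as in the model comparison above.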
\begin{figure}
\includegraphics*[width=\columnwidth]{f8.pdf}
\caption{Entropy and radial velocity vs. electron fraction for unbound
tracer particles from models with long-lived HMNS and different flavor
transformation configuration: no FFI (tinf-ab00, top), asymmetric (tinf-bAS,
middle) and asymmetric with no neutrino temperature oscillation (tinf-noT,
bottom). The color shows the time at which the particle last reaches
$r=10^9$\,cm.
}
\label{fig:ye-ents-vel_scatter}
\end{figure}
Regarding mass ejected and average velocity of the long-lived HMNS outflow,
Table~\ref{t:models} shows that when comparing model tinf-bAS with the
unoscillated model (tinf-ab00), the average outflow velocity increases by $\sim
10\%$ while the mass ejected barely decreases ($\lesssim 2\%$). Removing the
mixing of neutrino temperatures (model tinf-noT) results in a somewhat larger
decrease in ejected mass ($3\%$) and a $5\%$ \emph{decrease} in average
velocity relative to the model without flavor transformation (tinf-ab00).
Figure~\ref{fig:hist_vel} shows the velocity histograms for these models:
flavor transformation without oscillation of the neutrino temperatures produces
more ejecta with low velocities, which is more weakly bound than matter that
expands faster, for similar thermal energy content. We can attribute this to
the lower absolute amount of absorption given the lower electron neutrino and
antineutrino luminosities. Including temperature mixing increases neutrino
absorption substantially, to the point that the low-velocity tail of the ejecta
distribution is removed. Figure~\ref{fig:ye-ents-vel_scatter} shows that the
missing low-velocity ejecta is primarily late-time, convective outflow that is
more marginally unbound. This removal of low-velocity ejecta is also behind the
trend of increasing average velocity with decreasing ejected mass for $t_{\rm
ns}=\infty$ models with increasing $a_{\rm osc} = b_{\rm osc}$.
Regarding models with finite HMNS lifetime, the set with $t_{\rm ns}=10$\,ms
shows properties similar to the BH set. Tables~\ref{t:models} and
\ref{t:analysis} show that $65\%$ of the $Y_e$ change ($0.11/[\langle
Y_e\rangle-Y_e(t=0)]$) occurs after BH formation for the unoscillated model
(t010-ab00), following the same trend with FFI coefficients as the BH set. The
same applies to the energy source terms post-BH formation: more viscous heating
and net neutrino cooling, with nearly constant nuclear recombination heating.
The most notable difference with the prompt BH set is the bump in the electron
fraction histogram at $Y_e \sim 0.1-0.2$ (Figure~\ref{fig:ye_histogram}), which,
due to its similarity to models with longer HMNS lifetime, can be attributed to
a more significant neutrino-driven component at early times. This bump decreases in magnitude
and shifts to lower $Y_e$ with increasing FFI coefficients, in line with a
weaker overall neutrino absorption level and a faster decrease of electron
neutrino absorption than antineutrino absorption.
Regarding the model group with $t_{\rm ns}=100$\,ms, Table~\ref{t:analysis}
shows that very little change ($\sim 5\%$) in $Y_e$ occurs after BH formation,
in line with the sharp decrease without recovery of the electron neutrino and
antineutrino luminosities (Figure~\ref{fig:lum_ener_time}). The time integral
of energy source terms also shows a much reduced importance of viscous heating
and neutrino cooling, but a contribution of nuclear recombination that is only
a factor $3$ lower than models for which the earlier phases are also computed
($t_{\rm ns}=0$ and $10$\,ms). The evolution of this set of models is thus
dominated by the earlier HMNS phase, during which neutrino absorption is a
dominant process.
Comparing the $Y_e$ histograms of this set with those of the $t_{\rm
ns}=\infty$ series (Figure~\ref{fig:ye_histogram}) indicates that a
neutrino-driven wind is clearly present, and becomes stronger with a more
intense FFI. Figure~\ref{fig:table_results} shows that $t_{\rm ns}=100$\,ms is
the only model set for which the ejected mass increases with more intense
flavor transformation, which we interpret as neutrino absorption taking over as
a driving mechanism of the outflow. We surmise that models with a long-lived
HMNS saturate their mass ejection at $\gtrsim 95\%$ of the initial disk mass,
whereas the model set with $t_{\rm ns}=100$\,ms has room to grow by starting at
$42\%$ of the initial disk mass without FFI effects.
Finally, we find that our results have little sensitivity to the normalization
of the heavy lepton luminosity imposed at the HMNS surface. Models t010-L20,
t100-L20, and tinf-L20 have the same input parameters as the corresponding
asymmetric models t010-bAS, t100-bAS, tinf-bAS, respectively
(Table~\ref{t:models}), except that we set $L_{\rm X,0}^{\rm ns} =
2L_{\nu_e,{\rm 0}}^{\rm ns}$ in equation~(\ref{eq:hmns_lum}). Comparing each
pair of models with the same $t_{\rm ns}$ in Table~\ref{t:models} shows differences at the few percent
level in all average quantities, with the exception of model t010-L20 which has
an average velocity $10\%$ higher than model t010-bAS, and a mass with $Y_e<0.25$
that is a factor $\sim 2$ higher in the L20 case.
Table~\ref{t:analysis} shows that for this pair of models (t010-bAS and
t010-L20), the main difference is that electron antineutrino emission after BH
formation is higher in the model with enhanced heavy lepton HMNS luminosity,
with a correspondingly higher net neutrino cooling. Looking at the tinf
counterparts in Table~\ref{t:analysis}, which share the first $10$\,ms of
evolution with the t010 models, we find a $10\%$ higher electron neutrino and
antineutrino absorption in model tinf-L20 than in model tinf-bAS. While the net
change in $Y_e$ is identical in this $35$\,ms HMNS phase, the larger radiative
driving can account for the higher average velocity in model t010-L20 relative
to model t010-bAS. The larger amount of mass with $Y_e < 0.25$ can be
attributed to a more robust neutrino driven wind, which tends to launch more
low-$Y_e$ ejecta at early times (Figure~\ref{fig:ye-ents-vel_scatter}).
We expect the equilibrium $Y_e$ of the long-lived HMNS outflow to depend
significantly on the imposed electron neutrino and antineutrino
luminosities at the HMNS boundary (normalization and time dependence,
equation~\ref{eq:hmns_lum}), since much of this radiation emitted toward
equatorial latitudes is absorbed at the boundary layer, thus strongly
influencing the $Y_e$ evolution in this region, which acts as
a reservoir for the outflow. A more extended parameter space study,
or a self-consistent HMNS and disk evolution, would provide
a more physically grounded characterization of the baseline $Y_e$ of a
long-lived HMNS.
\subsection{Nucleosynthesis Implications}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{f9.pdf}
\caption{Final abundances at $t=30$\,yr as a function of mass number, for
unbound tracer particles from various models. Each row
shows models with different HMNS lifetime, with left and right column showing models
with symmetric and asymmetric flavor transformation coefficients, respectively.
Circles show solar $r$-process abundances from \cite{goriely1999}, scaled to the abundance at
$A=130$ from the model with no flavor transformation ($a_{\rm osc}=b_{\rm osc}=0$), for each
value of $t_{\rm ns}$. Abundances are normalized such that all mass fractions $X(A)$ add up
to unity.}
\label{fig:XvsA_array}
\end{figure*}
Outflows from accretion disks are important contributors to the total
ejecta from NS mergers; in GW170817, for example, the disk ejecta
is expected to have been dominant (e.g.,
\cite{shibata_2017b,metzger_2017}). The abundance pattern of the ejecta thus
has implications for the $r$-process enrichment contribution (e.g.,
\cite{rosswog_2017,cowan_2019,shibata_2019,siegel_2022}) as well as for the kilonova signal
through the opacities \citep{Kasen+13,tanaka2013,fontes2015} and radioactive
heating rates (e.g., \cite{Li&Paczynski98,Metzger+10b,barnes_2021,klion_2022}). The
$r$-process requires a high abundance of free neutrons when the ejecta
temperature $T\lesssim 5\times 10^9$\,K (e.g., \cite{mendoza_2015}), which
relates directly to the electron fraction of the ejecta shaped by neutrinos at
earlier times when matter is hotter.
Figure~\ref{fig:XvsA_array} shows $r$-process abundances for trajectories from
the same models shown in Figure~\ref{fig:ye_histogram}. Overall, there are no
qualitative changes in the abundance pattern for a given HMNS lifetime,
regardless of the intensity of flavor transformation. More noticeable
differences occur in models with larger $t_{\rm ns}$ due to the increasing
protonization. The general trend is consistent with the $Y_e$ histograms: more
intense flavor transformation produces more heavy $r$-process elements, and
also more light elements (relative to $A\sim 130$) in models with a long-lived HMNS.
For a quantitative assessment, the average mass fractions of lanthanides $X_{\rm
La}$ ($57\leq Z\leq 72$) and actinides $X_{\rm Ac}$ ($89\leq Z\leq 104$) are
shown in Table~\ref{t:analysis} for all models. Overall, we see that flavor
transformation induces at most a factor $\sim 2$ change in these mass fractions
except for the model set with $t_{\rm ns}=100$\,ms, which shows a larger
variation in the actinide fraction relative to the unoscillated model. A more
significant change of up to a factor of several in $M_\mathrm{ej,red}$ (mass
ejected with $Y_e < 0.25$) is seen in Table~\ref{t:models}, which can alter the
ratio of blue/red kilonovae light curves.
Our models suggest that the FFI introduces quantitative uncertainty in the disk
outflow of at most a factor of two in the mass fraction of heavy $r$-process
elements relevant to kilonova opacities, with minor changes to the overall
$r$-process abundance pattern relative to the standard, no-FFI case.
\subsection{Comparison with Previous Work}
\citet{Li_Siegel_2021} carried out GRMHD simulations of BH accretion disks
using M1 neutrino transport. Their criterion to activate the FFI stems from a
dispersion relation arising from the linearized evolution equation for neutrino
flavor, with the FFI activated in regions with imaginary frequencies. Once
activated, the FFI manifests as the equality of distribution functions of
neutrinos ($f_{\nu_e} = f_{\nu_\mu} = f_{\nu_\tau}$) and antineutrinos, which
is equivalent to the assumption made in our \emph{Flavor Equilibration} case
($a_{\rm osc} = b_{\rm osc} = 2/3$). In contrast to our models, however, mass
ejection is dominated by magnetic stresses, since it takes several hundred
milliseconds for the disk to reach the radiatively-inefficient stage where the
outflow is driven (mostly) thermally. In this early phase of evolution, the
initial composition of the disk is more important than for late-time outflows
that were fully reprocessed by neutrinos. Consequently, their un-oscillated
$Y_e$ distribution is much more neutron rich (peaking between $0.15$ and $0.2$)
than that from our un-oscillated prompt BH model
(Figure~\ref{fig:ye_histogram}).
Given that baseline difference, however, the introduction of the FFI produces
changes very similar to those in our prompt BH simulations. The $Y_e$ distribution shifts
to more neutron-rich values by $0.01-0.02$, while the unbound mass ejected
decreases by $\sim 10\%$. While the mass ejection mechanisms are different,
this similarity stems from the fact that their FFI activation criterion results
in widespread operation of the instability, similar to our $\eta_{\rm osc}$
parameter, and flavor swap should alter the emission terms (which dominate in
the pure BH case) in the same way, making the disk more degenerate. Their
$r$-process abundance pattern displays a larger enhancement in heavier elements
when the FFI operates, given the larger relative amounts of ejecta with $Y_e <
0.25$ than in our BH models.
\citet{Just2022_FFI} performed axisymmetric viscous hydrodynamic simulations of BH
accretion disks for a time of $10$\,s, as well as 3D MHD simulations for a time of
$0.5$\,s, with an M1 neutrino scheme. The FFI is activated once the
energy-averaged electron antineutrino flux factor (ratio of number flux to
number density times $c$) exceeds a given value of $0.175$ by default, which
corresponds to a layer below the neutrinosphere where angular asymmetries
relevant to the FFI begin to appear according to a more detailed (static)
analysis. The neutrinosphere is assumed to be at a flux factor of $1/3$, which
in core-collapse supernovae corresponds to a radial optical depth of $2/3$
\cite{wu_2017_trajectories}. This activation criterion is very similar to our
optical depth based parameter $\eta_{\rm osc}$ (Equation~\ref{eq:eta_osc},
Figure~\ref{fig:eta-osc_snapshots}). Once active, the FFI is implemented by
algebraically mixing the neutrino number densities and number fluxes of each
flavor, separately for each energy bin of the multi-group M1 scheme. Three
flavor mixing prescriptions are used, among which the assumptions behind their
`mix2' prescription are equivalent to our \emph{Flavor Equilibration} case
($a_{\rm osc} = b_{\rm osc} = 2/3$), while their `mix1' scheme, which conserves
net lepton number, shares some similarities with our asymmetric scheme but is
not equivalent.
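The parametric flavor swap described above, with full equilibration at $a_{\rm osc} = b_{\rm osc} = 2/3$, together with a flux-factor activation criterion of the kind used by \citet{Just2022_FFI}, can be sketched as follows. This is an illustrative, number-conserving parametrization consistent with the equilibration limit quoted in the text; the function names are ours, and the actual schemes may differ in detail.

```python
def mix_flavors(n_e, n_x, a_osc):
    """Mix the electron-type density n_e with one heavy-lepton density n_x
    (n_x counts a single heavy flavor; there are two such flavors).

    Illustrative, number-conserving parametrization (an assumption):
      a_osc = 0   -> no transformation
      a_osc = 2/3 -> full equilibration: n_e' = n_x' = (n_e + 2*n_x) / 3
    The total n_e + 2*n_x is conserved for any a_osc.
    """
    n_e_new = (1.0 - a_osc) * n_e + a_osc * n_x
    n_x_new = n_x + 0.5 * a_osc * (n_e - n_x)
    return n_e_new, n_x_new


def ffi_active(number_flux, number_density, c=2.998e10, threshold=0.175):
    """Flux-factor activation criterion in the spirit of Just et al. (2022):
    the FFI operates where the energy-averaged electron antineutrino flux
    factor F/(n*c) exceeds ~0.175 (a layer below the neutrinosphere)."""
    flux_factor = number_flux / (number_density * c)
    return flux_factor > threshold
```

At $a_{\rm osc}=2/3$ the mixed densities reduce to the equal-distribution limit $(n_{\nu_e}+2n_{\nu_x})/3$, matching the \emph{Flavor Equilibration} case.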
Their baseline hydrodynamic simulation yields an ejecta mass, average
velocity, and electron fraction distribution similar to those of our model bh-ab00.
Their model that employs the `mix2' scheme shows a $10\%$ decrease
in ejecta mass and a decrease of the average electron fraction of $0.02$
relative to their baseline model, which follows the same trend as our
model bh-ab07 compared to bh-ab00 (although we see a larger fractional
ejecta mass decrease). Similar trends are found in their models
that employ other mixing prescriptions.
Unlike our models, however, the average velocity of all of their
models that include the FFI \emph{decreases} (by $10\%$ for the `mix2' case)
while in our corresponding models the average velocity shows an increase of
$20\%$ when including the FFI. This discrepancy may be due to the way
in which the average velocity is computed: mass-flux weighted at a fixed
radius in our models (Equation~\ref{eq:vr_ave}), while density-weighted
over a spatial region in theirs. It could also be due to differences
in absorption resulting from the different neutrino scheme, or to
the implementation of $\alpha$ viscosity and how viscous heating reacts
to the increase in degeneracy from more efficient cooling due to the FFI.
Their nucleosynthesis results are entirely consistent with ours.
Our long-lived HMNS model without flavor transformation is in overall
qualitative agreement with that reported in \cite{lippuner_2017}, which used
the same hydrodynamic setup but an older leakage scheme that considered only
charged-current weak interactions, no heavy lepton neutrinos, and did not include an
absorption correction to the disk luminosities. Quantitatively, comparing our
Figure~\ref{fig:ye-ents-vel_scatter} to their Figure~3, the asymptotic $Y_e$ of
their neutrino-driven wind ($\sim 0.55$) is higher than ours ($\sim 0.48$), and
the peak $Y_e$ of their late component ($\sim 0.34$) is lower than what we find
($\sim 0.42$). Both models eject close to $100\%$ of the initial disk mass.
\section{Summary and Discussion \label{s:summary}}
We have studied the effect of the FFI on the long-term outflows from accretion
disks around HMNSs of variable lifetime using axisymmetric, time-dependent,
viscous hydrodynamic simulations. The instability is implemented parametrically
into a 3-species leakage scheme for emission and a disk-lightbulb scheme for
absorption by modifying the absorbed neutrino fluxes and temperatures. We
explore a variety of cases, including partial and complete flavor
equilibration, as well as an ``asymmetric" flavor swap that reflects the
conservation of lepton number in the neutrino self-interaction Hamiltonian.
Our main results are the following:
\newline
\noindent
1. -- The impact of the FFI on the disk outflow is moderate, changing the total
unbound mass ejected by up to $\sim 40\%$, the average electron fraction
by $\sim 10\%$, and in most cases the average velocity by up to $\sim 40\%$
(Table~\ref{t:models},
Figure~\ref{fig:table_results}). The lanthanide and actinide mass fractions of
the outflow change, in most cases, by up to a factor of $\sim 2$
(Table~\ref{t:analysis}), with no qualitative changes in the $r$-process abundance
pattern for a given HMNS lifetime (Figure~\ref{fig:XvsA_array}).
\newline
\noindent
2. -- The direction of the changes depends on the HMNS lifetime.
For a promptly-formed BH or short-lived ($t_{\rm ns}\leq 10$\,ms) HMNS, the
mass ejected and average electron fraction decrease, and the average velocity
increases. The composition changes can be traced back to a decrease in the
electron neutrino/antineutrino absorption with FFI intensity
(Table~\ref{t:analysis}), which lowers the equilibrium $Y_e$ as well as the
rate at which this equilibrium is reached (as previously found by
\cite{Just2022_FFI} for prompt BH disks). The reduced absorption results in
increased cooling, partially compensated by higher viscous heating, with the
net effect of lowering the entropy of the disk. A smaller amount of material
ejected at low velocities accounts for the decrease in mass ejected and the
higher average ejecta velocity (Figure~\ref{fig:hist_vel}).
\newline
\noindent
3. -- For a longer-lived HMNS ($t_{\rm ns}\geq 100$\,ms), neutrino absorption
plays a more significant role in driving the outflow
(Figure~\ref{fig:ye-ents-vel_scatter}). The FFI results in a more vigorous
neutrino-driven wind, broadening the electron fraction distribution, shifting
the peak $Y_e$ to higher values (Figure~\ref{fig:ye_histogram}), increasing the
average velocity of the ejecta (Figure~\ref{fig:hist_vel}), and increasing the
mass ejected up to a value of $\sim 95\%$ of the initial disk mass within
$17.7$\,s of evolution, for a very long-lived HMNS
(Figure~\ref{fig:table_results}).
\newline
\noindent
4. -- The trends with HMNS lifetime can be traced back to the effects
of flavor mixing by the FFI on the neutrino fluxes and temperatures. For
BH disks, the heavy lepton luminosity is lower by a factor $\sim 10$ than the
electron neutrino and antineutrino luminosity, while the mean energies of heavy
leptons are higher by a factor $\sim 2$ (Figure~\ref{fig:lum_ener_time}). The
net effect of flavor swap is to decrease absorption (more on electron neutrinos
than antineutrinos) due to the change in neutrino flux
(Table~\ref{t:analysis}). For a HMNS disk, on the other hand, the heavy lepton
luminosity is much higher than for a BH disk and the amount of electron
neutrino reabsorption is significant, resulting in a very moderate change in
the neutrino flux due to the FFI (Figure~\ref{fig:lum_abs-corr_time}). The
mixing of neutrino temperatures then results in a net \emph{increase} in
electron neutrino absorption (Table~\ref{t:analysis}),
with a protonization of the outflow as well as a more energetic neutrino-driven
wind that ejects less low-velocity material (Figures~\ref{fig:hist_vel}
and \ref{fig:ye-ents-vel_scatter}).
\newline
\noindent
5. -- Despite the mild changes in composition, the total mass ejected with
$Y_e < 0.25$ can change by a factor of several (Table~\ref{t:models}),
thus altering the ratio of red to blue kilonova components if they are to be
treated separately (e.g., due to spatial segregation).
\newline
Given the moderate impact of the FFI on the disk outflow, it is natural to
think of this effect as introducing an uncertainty band to theoretical
predictions for the ejecta properties. Our calculations corroborate other
work \cite{Li_Siegel_2021,Just2022_FFI}, indicating that an overall uncertainty of $\sim 10\%$ in
ejected mass, electron fraction, and velocity, as well as a factor $2$ in
lanthanide/actinide mass fraction can be used as a rule-of-thumb uncertainty in
parameter inference from and/or upper limits on
multi-messenger observations and galactic abundance studies
(e.g., \cite{kasliwal_2020,thakur_2020,hernandez_2020,wanajo_2021,ricci_2021,holmbeck_2021,geert_2021,chen_2021,gompertz_2022}). A similar uncertainty level is associated with spatial
resolution of grid-based simulations of post-merger remnants (e.g., \cite{FM13}). A more difficult
task is to estimate uncertainties in kilonova light curves and spectra due to spatial
segregation of lanthanide-rich vs.\ lanthanide-poor material, which would require
radiative transfer simulations to assess the impact on the final outcome (e.g.,
\cite{darbha_2020,Korobkin2021,bulla_2021}).
Our predictions can be made more reliable by (1) improving the quality of
neutrino transport, in particular by using a spectral moment scheme to improve
the angular distribution of radiation for the long-lived HMNS case; (2)
self-consistently including the HMNS-disk system, avoiding the use of separate
luminosities from each object; and (3) including magnetic fields in the
evolution. The latter requires the use of three spatial dimensions, and the
length of time required to fully capture the disk outflow makes such
simulations computationally expensive, precluding an extensive parameter search
with current capabilities. Flavor transformation scenarios will need to be
carefully selected for such 3D GRMHD studies, to augment the relatively
small number of dynamical models performed to date.
\vspace{0.2in}
\subsubsection{Annotation Results}
\label{annotations_results}
\begin{table*}[t]\small
\begin{center}
\begin{tabular}{|p{3.5cm}|p{5.5cm}|p{5.5cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Topic}} &
\multicolumn{1}{|c|}{\textbf{Argument}} & \multicolumn{1}{|c|}{\textbf{Associated Key Point(s)}} \\
\hline
\hline
We should end mandatory retirement. &Forcing members of a profession to retire at a certain age creates an experience drain.&A mandatory retirement age decreases institutional knowledge. \\
\hline
We should ban the use of child actors. &Child actors are fine to use as long as there is a responsible adult watching them.&Child performers should not be banned as long as there is supervision/regulation. \\
\hline
We should close Guantanamo Bay detention camp.&Guantanamo can provide security for accused terrorists who would be hurt in the general prison population.&The Guantanamo bay detention camp is better for prisoners than the alternatives. \\
\hline
Assisted suicide should be a criminal offence.&People have a basic right to bodily autonomy, deciding whether or not to die with minimal suffering and dignity is integral to that right.&
People should have the freedom to choose to end their life.\newline
Assisted suicide gives dignity to the person that wants to commit it.
\\
\hline
We should ban human cloning.&The world is already overpopulated, cloning humans will only contribute to this problem.&No key point \\
\hline
\end{tabular}
\caption{Examples for key point association to arguments.}
\label{tab:label_type_examples}
\end{center}
\end{table*}
Next, we consolidate the individual annotations as follows. We say that an argument $a$ is mapped to a key point $k$ if at least 60\% of the annotators mapped $a$ to $k$. Recall that an argument can be mapped to more than one key point. Similarly, we say that $a$ has \emph{no key point} if at least 60\% of the annotators mapped $a$ to None (which is equivalent to not selecting any key point for the argument). Otherwise, we say that $a$ is \emph{ambiguous}, i.e., the annotations were indecisive. Table~\ref{tab:label_type_examples} shows examples of arguments and their matching key points in our dataset.
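The consolidation rule can be sketched as follows; the $60\%$ threshold and the four categories follow the text, while the data layout and function names are ours.

```python
def consolidate(annotations, key_points, threshold=0.6):
    """Consolidate per-annotator key point selections for one argument.

    annotations: list of sets, one per annotator; each set holds the key
    points that annotator selected (an empty set means 'None').
    Returns (category, matched_key_points).
    """
    n = len(annotations)
    # A key point is matched if at least `threshold` of annotators chose it.
    matched = [k for k in key_points
               if sum(k in a for a in annotations) / n >= threshold]
    if matched:
        category = "single" if len(matched) == 1 else "multiple"
    elif sum(len(a) == 0 for a in annotations) / n >= threshold:
        category = "no_key_point"   # a 'None' majority
    else:
        category = "ambiguous"      # annotations indecisive
    return category, matched
```

For example, three of five annotators selecting the same key point (exactly $60\%$) yields a \emph{single key point} argument, while votes split below the threshold yield \emph{ambiguous}.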
The distribution of the arguments in the dataset over the above categories is shown in Table~\ref{tab:num_kp_per_arg}. Remarkably, our key points, composed independently of the arguments, were able to cover 72.5\% of them, with 5\% of the arguments mapped to more than one key point.
We further investigated the differences between arguments in each category, by comparing their average quality score (taken from the IBM-Rank-30k dataset), number of tokens and number of sentences. The results are shown as additional columns in Table~\ref{tab:num_kp_per_arg}. Interestingly, arguments that have no key point tend to be shorter and have a lower quality score, compared to arguments mapped to a single key point; arguments mapped to more than one key point are the longest and have the highest quality.
\begin{table*}[]\small
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
& \% Arguments &Quality & \# Tokens & \# Sentences \\ \hline \hline
No key point & 4.7\% & 0.75 & 16.35 & 1.09 \\ \hline
Ambiguous & 22.8\% & 0.80 & 18.97 & 1.15 \\ \hline
Single key point & 67.5\% & 0.84 & 18.54 & 1.15 \\ \hline
Multiple key points & 5.0\% & 0.91 & 23.66 & 1.33 \\ \hline
\end{tabular}
\caption{Argument statistics by key point matches.}
\label{tab:num_kp_per_arg}
\end{table*}
Figure~\ref{fig:avg_kp_coverage} examines the impact of the number of key points on argument coverage. For each topic and stance, we order the key points according to the number of their matched arguments, and add them incrementally. The results indicate that arguments are not trivially mapped to only one or two key points, but a combination of several key points is required to achieve high coverage. The marginal contribution decays for the sixth and seventh key points, suggesting that seven key points indeed suffice for this task.
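The coverage curve of Figure~\ref{fig:avg_kp_coverage} can be computed with the following sketch (the data layout and function names are ours): key points are ordered by their number of matched arguments and added incrementally, recording the fraction of arguments covered at each step.

```python
def coverage_curve(arg_to_kps, key_points):
    """arg_to_kps: dict mapping each argument to the set of key points it
    was matched to (possibly empty). Returns the fraction of arguments
    covered after adding 1, 2, ... key points, most popular first."""
    counts = {k: sum(k in kps for kps in arg_to_kps.values())
              for k in key_points}
    ordered = sorted(key_points, key=counts.get, reverse=True)
    n = len(arg_to_kps)
    curve, selected = [], set()
    for k in ordered:
        selected.add(k)
        # An argument is covered if any of its key points was selected.
        covered = sum(bool(kps & selected) for kps in arg_to_kps.values())
        curve.append(covered / n)
    return curve
```

The marginal contribution of each added key point is simply the difference between consecutive entries of the returned curve.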
\begin{figure}[t!]
\includegraphics[scale=0.7]{avg_kp_coverage.png}
\caption{Argument coverage per number of key points.}
\label{fig:avg_kp_coverage}
\end{figure}
22.8\% of the arguments are \emph{ambiguous}. Annotations for these arguments are split over several possible key points, none reaching the 60\% threshold. For instance, the argument \emph{``homeschooling enables parents with fringe views to push their agenda on their children without allowing exposure to alternative viewpoints.''}, had two key points with annotator votes higher than 40\%, but below 60\%:
\begin{enumerate}
\item \emph{Homeschools cannot be regulated / standardized.}
\item \emph{Parents are not qualified as teachers.}
\end{enumerate}
Such cases suggest that many arguments are somewhat covered by the key points, but if the judgment is not clear-cut, the different intuitions of the annotators may result in no label receiving the required majority.
\subsubsection{Annotation Process}
Using the Figure Eight crowd labeling platform\footnote{\url{http://figure-eight.com}}, we created gold labels for associating the arguments selected as described in Section~\ref{arguments_and_kps} with key points. For each argument, given in the context of its debatable topic, annotators were presented with the key points created for this topic in the relevant stance.
They were guided to mark all of the key points this argument can be associated with, and if none are relevant, to select the `None' option. Each argument was labeled by $8$ annotators.
\textbf{Quality Measures:} To ensure the quality of the collected data, the following measures were taken: \begin{enumerate}
\item Test questions. Annotators were asked to determine the stance of each argument towards the topic. Similarly to \citet{toledo-etal-2019-automatic}, this question functioned as a hidden test question\footnote{Unlike \citeauthor{toledo-etal-2019-automatic}, the results were analyzed after the task was completed, and the annotators were not aware of their success/failure.}. All judgments of annotators failing in more than $10\%$ of the stance questions were discarded.
\item Annotator-$\kappa$ score. This score, measuring inter-annotator agreement, as defined by \citet{toledo-etal-2019-automatic}, was calculated for each annotator, and all judgments of annotators with annotator-$\kappa < 0.3$ were ignored. This score averages, for a given annotator, all pair-wise Cohen's Kappa values \citep{landis+koch77} with annotators sharing at least $50$ common judgments, and is computed only for annotators having at least $5$ such counterparts.
\item Selected group of trusted annotators. As in \citet{gretz2019largescale}, the task was only available to a group of annotators which had performed well in previous tasks by our team.
\end{enumerate}
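The annotator-$\kappa$ computation can be sketched as follows, assuming binary (argument, key point) labels; the $50$-judgment and $5$-annotator thresholds follow the text, while the data layout and function names are ours.

```python
from itertools import combinations


def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two aligned binary (0/1) label lists."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_yes = (sum(labels_a) / n) * (sum(labels_b) / n)
    p_no = (1 - sum(labels_a) / n) * (1 - sum(labels_b) / n)
    p_e = p_yes + p_no  # chance agreement
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)


def annotator_kappa(judgments, min_shared=50, min_peers=5):
    """judgments: dict annotator -> dict item -> 0/1 label.
    Returns dict annotator -> averaged pairwise kappa, or None when the
    annotator has too few qualifying peers."""
    scores = {a: [] for a in judgments}
    for a, b in combinations(judgments, 2):
        shared = sorted(set(judgments[a]) & set(judgments[b]))
        if len(shared) < min_shared:
            continue
        k = cohen_kappa([judgments[a][i] for i in shared],
                        [judgments[b][i] for i in shared])
        scores[a].append(k)
        scores[b].append(k)
    return {a: (sum(v) / len(v) if len(v) >= min_peers else None)
            for a, v in scores.items()}
```

Judgments of annotators whose averaged score falls below $0.3$ would then be discarded, as described above.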
As described above, the annotation of each key point with respect to a given argument was performed independently, and each annotator could select multiple key points to be associated with each given argument. For the purpose of calculating inter-annotator agreement, we considered \emph{(argument, key point)} pairs, annotated with a binary label denoting whether the argument was matched to the key point.
Fleiss' Kappa for this task was $0.44$ \citep{fleiss:71}, and Cohen's Kappa was $0.5$ (averaging Annotator-$\kappa$ scores). These scores correspond to ``moderate agreement'' and are comparable to agreement levels previously reported for other annotation tasks in computational argumentation \citep{boltuzic-snajder-2014-back,eindor2019corpus}. As for the stance selection question, $98\%$ of the judgments were correct, indicating overall high annotation quality.
\textbf{Data Cleansing}: In addition to the above measures, the following annotations were removed from the data: (i) Annotations in which the answer to the stance selection question was wrong; (ii) Annotations in which the key point choice was illegal, i.e., the `None' option and one of the key points were both selected. However, the rate of these errors, for each of the annotators, was rather low ($<10\%$ and $<5\%$, respectively).
Arguments left with fewer than $7$ valid judgments after applying the above quality measures and data cleansing were removed from the dataset, leaving $6,568$ labeled arguments.
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{relatedwork}
\section{Data}
\subsection{Arguments and Key Points}
\label{arguments_and_kps}
\input{arguments_keypoints}
\subsection{Mapping Arguments to Key Points}
\input{annotation}
\input{analysis}
\subsection{Final Dataset Generation}
\input{dataset}
\section{Experiments}
\subsection{Experimental Setup}
\input{setup}
\subsection{Results}
\input{results}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgments}
We would like to thank the anonymous reviewers for their helpful comments and suggestions.
\subsection{Argument Mining}
The starting point for the current work is a collection of pro and con arguments for a given topic. As previously mentioned, these arguments may be collected from a large audience by conducting a survey, or mined automatically from text.
Some of the previous work on argument mining focused on specific domains such as legal documents \citep{Moens:2007,Wyner:2010}, student essays \citep{stab-gurevych-2017-parsing, persing-ng-2016-end}, and user comments on proposed regulations \cite{park-cardie-2014-identifying}.
Mining arguments and argument components for a given topic (also known as \emph{context}) has been a prominent line of research in argument mining. \citet{levy-etal-2014-context} introduced the task of context-dependent claim detection in a collection of Wikipedia articles, and \citet{rinott-etal-2015-show} did the same for context-dependent evidence detection. More recently, several works focused on topic-related argument mining from the Web or other massive corpora \citep{Levy:ARGMINING2017,Levy:COLING2018,Wachsmuth:ARGMINING2017,Stab:NAACL2018,Stab:EMNLP2018,eindor2019corpus}.
Stance classification of extracted arguments can be performed as a separate step \cite{Barhaim:2017} or jointly with argument detection, as a three-way classification (pro argument/con argument/none), as done by \citet{Stab:EMNLP2018}.
\subsection{Argument Clustering and Summarization}
\label{ssec:argrel}
Several works have focused on identifying pairs of similar arguments, or clustering similar arguments together. \citet{ajjour-etal-2019-modeling} addressed the task of splitting a set of arguments
into a set of non-overlapping \emph{frames} such as \emph{Economics}, \emph{Environment} and \emph{Politics}. \citet{reimers-etal-2019-classification} classified argument pairs as similar/dissimilar. \citet{misra-etal-2016-measuring} aimed to detect argument pairs that are assumed to share the same \emph{argument facet}, which is similar to our notion of \emph{key points}. However, they did not attempt to explicitly identify or generate these facets, which remained implicit, but rather focused on detecting similarity between argument pairs. In contrast to these works, we directly map arguments to key points.
\citet{egan-etal-2016-summarising} proposed to summarize argumentative discussions through the extraction of salient ``points'', where each point is a verb and its syntactic arguments. Applying their unsupervised method to online political debates showed significant improvement over a baseline extractive summarizer, according to human evaluation. While the current work also aims to summarize argumentative content via concise points, our goal is not to extract these points but to accurately map arguments to given points. Our main challenge is to identify the various ways in which the meaning of a point is conveyed in different arguments. The method employed by \citeauthor{egan-etal-2016-summarising} only matches arguments with the same \emph{signature}: the same verb, subject and object dependency nodes; hence, its ability to capture such variability is limited.
The line of work that seems most similar to ours is of \citet{hasan-ng-2014-taking}, \citet{boltuzic-snajder-2014-back} and \citet{naderi-2016}. \citeauthor{hasan-ng-2014-taking} classified posts and individual sentences from online debates into a closed set of \emph{reasons}, composed manually for each topic. \citeauthor{boltuzic-snajder-2014-back} mapped comments from one debating website (\emph{ProCon.org}) to arguments taken from another debating website (\emph{iDebate.org}). \citet{naderi-2016} addressed a similar task: she used part of the \citeauthor{boltuzic-snajder-2014-back} corpus as training data for an SVM classifier, which was then tested on sentences and paragraphs from same-sex marriage debates in the Canadian Parliament, annotated with the same set of arguments.
Our work differs from these works in several respects. First, we deal with crowd-contributed arguments, taken from the dataset of \citet{gretz2019largescale}, while these works dealt with posts or comments in debate forums, and parliamentary debates. Second, the dataset developed in this work is far more extensive, covering 28 topics and over 6,500 arguments\footnote{As detailed in the next section, a few hundred arguments out of the initial 7,000 were filtered in the process of constructing the dataset.}, as compared to 2 and 4 topics in the datasets of \citeauthor{boltuzic-snajder-2014-back} and \citeauthor{hasan-ng-2014-taking}, respectively. This allows us to perform a comprehensive analysis of the feasibility and effectiveness of argument-to-key point mapping over a variety of topics, which has not been possible with previous datasets. Lastly, while \citeauthor{hasan-ng-2014-taking} only perform within-topic classification, where the classifier is trained and tested on the same topic, we address the far more challenging task of cross-topic classification. \citeauthor{boltuzic-snajder-2014-back} experimented with both within-topic and cross-topic classification; however, they used a limited amount of data for training and testing: two topics, with fewer than 200 comments per topic.
Finally, we point out the similarity between the argument/key point relation and the text/hypothesis relation in \emph{textual entailment}, also known as \emph{natural language inference (NLI)} \citep{DBLP:series/synthesis/2013Dagan}. Indeed, \citet{boltuzic-snajder-2014-back} used textual entailment as part of their experiments, following the earlier work of \citet{cabrio:ac13}, who used textual entailment to detect support/attack relations between arguments.
\subsubsection{Match Scoring Methods}
\begin{table*}[t]\small
\begin{center}
\begin{tabular}{|c|c|cccc|}
\hline
\multicolumn{2}{|c|}{} & Acc & P & R & F1 \\
\hline
& Majority Class & 0.793 & & 0.000 & \\
& Random Predictions & 0.679 & 0.206 & 0.200 & 0.203 \\
\hline
Unsupervised Methods & Tf-Idf & 0.512 & 0.246 & 0.644 & 0.352 \\
& GloVe Embeddings & 0.346 & 0.212 & 0.787 & 0.330 \\
& BERT Embeddings & 0.660 & 0.319 & 0.550 & 0.403 \\
\hline
Supervised Methods & BERT-base (ArgKP) & 0.844 & 0.609 & 0.718 & 0.657 \\
& BERT-large (ArgKP) & \textbf{0.868} & \textbf{0.685} & \textbf{0.688} & \textbf{0.684 }
\\
\hline
NLI Transfer Learning & BERT-base (SNLI) & 0.777& 0.472 & 0.514& 0.485 \\
& BERT-base (MNLI) & 0.772& 0.470 & 0.558& 0.505 \\
& BERT-large (SNLI) & 0.765& 0.456 & 0.533& 0.487 \\
& BERT-large (MNLI) & 0.792& 0.518 & 0.542& 0.526 \\
\hline
\end{tabular}
\caption{Comparison of match scoring methods, using the \emph{Threshold} selection policy. P, R and F1 refer to the positive class. Acc is the accuracy.}
\label{tab:Results}
\end{center}
\end{table*}
\begin{table*}[t]\small
\centering
\begin{tabular}{|c||c|c|c|c||c|c|c||c|c|c||c|}
\hline
& \multicolumn{4}{c||}{All} & \multicolumn{3}{c||}{Single}& \multicolumn{3}{c||}{Multiple}& No \\
& \multicolumn{4}{c||}{Arguments} & \multicolumn{3}{c||}{Key Point}& \multicolumn{3}{c||}{Key Points}& Key Points\\
\hline
& Acc & P & R & F1 & P & R & F1 & P & R & F1 & Acc \\
\hline
Threshold & .868 & .685 & .688 & .684 & .720 & .686 & .701 & .904 & \textbf{.690} & \textbf{.782} & \textbf{.933} \\
Best Match & .876 & .696 & .711 & .703 & .836 & .747 & \textbf{.789} & .936 & .448 & .606 & .839 \\
BM+Threshold & \textbf{.890} & \textbf{.772} & .665 & .713 & \textbf{.856} & .699 & .769 & .941 & .421 & .580 & .915 \\
Dual Threshold & .887 & .721 & \textbf{.740} & \textbf{.730} & .784 & \textbf{.752} & .767 & \textbf{.945} & .656 & .773 & .908 \\
\hline
\end{tabular}
\caption{Comparing key point selection policies, using BERT-large trained on the ArgKP dataset for match scoring.}
\label{tab:Results-policy}
\end{table*}
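The policy names in Table~\ref{tab:Results-policy} suggest decision rules of the following form; this is only a plausible reading of the names (the exact definitions are given in the experimental setup), and the threshold values are illustrative.

```python
def select_key_points(scores, policy, t=0.5, t_low=0.2):
    """scores: dict key_point -> match score for one argument.
    Returns the set of key points predicted as matching.
    Policy readings and thresholds are illustrative assumptions."""
    best = max(scores, key=scores.get)
    if policy == "threshold":        # all key points scoring above t
        return {k for k, s in scores.items() if s > t}
    if policy == "best_match":       # highest-scoring key point, always
        return {best}
    if policy == "bm_threshold":     # best match, only if above t
        return {best} if scores[best] > t else set()
    if policy == "dual_threshold":   # best match above t, plus any other
        if scores[best] <= t:        # key point above a lower threshold
            return set()
        return {k for k, s in scores.items() if s > t_low}
    raise ValueError(policy)
```

Under this reading, \emph{Best Match} always predicts exactly one key point (hence its low recall on multi-key-point arguments in the table), while \emph{Threshold}-style policies can predict zero or several.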
Table~\ref{tab:Results} compares the various match scoring methods, all using the \emph{Threshold} key point selection policy. Results are obtained by micro-averaging over the argument-key point pairs in each fold, and averaging over the different folds. We consider Precision, Recall and F1 of the positive class, as well as the overall accuracy. We also list for reference the majority class baseline that always predicts ``no match'', and the random baseline, which randomly predicts the positive class according to its probability in the training data.
The unsupervised models fail to capture the relation between the argument and the key points. Tf-Idf and GloVe perform the worst, showing that simple lexical similarity is insufficient for this task. BERT embeddings do better, but still reach a relatively low F1 score of 0.4.
In contrast to the unsupervised models, the supervised models perform well. Fine-tuning BERT leads to a substantial improvement, reaching an F1 score of 0.657 with the BERT-base model and 0.684 with the BERT-large model.
BERT models trained on NLI data are considerably better than the unsupervised methods, with the best model reaching an F1 score of 0.526, yet their performance is still far below that of the supervised models trained on our ArgKP dataset. This may reflect both the similarities and the differences between NLI and the current task. We also experimented with combining these two types of data in cascade: BERT was first trained on a large NLI dataset (SNLI, MNLI or their union), and was then fine-tuned on the smaller ArgKP data. However, this did not improve over the supervised results.
\paragraph{Error Analysis.}
By analyzing the top errors of the supervised classifier (BERT-large), we found several systematic patterns. In most cases, a non-matching argument and key point received a high match score for one of the following reasons:
\begin{itemize}
\item They share some key phrases. For example: \emph{``It is unfair to only subsidize vocational education. Achieving a more advanced education is very expensive and it would also need to be subsidized.''} and \emph{``Subsidizing vocational education is expensive''.}
\item They share a large portion of the sentence, but not the main point, for example: \emph{``Women should be able to fight if they are strong enough''} and \emph{``Women should be able to serve in combat if they choose to''.}
\item They are at least partially related, but labeled as non-matching due to a better fitting key point for the same argument. For example: \emph{``We should subsidize space exploration because it increases the knowledge of the universe we are in''} and \emph{``Space exploration improves science/technology''} can be considered matched, but were labeled as unmatched due to the key point \emph{``Space exploration unravels information about the universe''}. Using the \emph{Best Match} policy helps in these cases.
\end{itemize}
For arguments and key points that were labeled as matched but received a low match score, the relation was in many cases implied, or required some further knowledge. For example: \emph{``Journalism is an essential part of democracy and freedom of expression and should not be subsidized by the state.''} and \emph{``government intervention has the risk of inserting bias/harming objectivity''.}
\subsubsection{Key Point Selection Policies}
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth]{precision_recall_tredeoff.png}
\caption{Precision/Recall trade-off for different key point selection policies. For each method, the highest F1 score, as well as the F1 score for the chosen threshold are specified. For the \emph{Best Match + Threshold} policy, these two scores coincide.}
\label{tab:prec-acc}
\end{figure*}
Table~\ref{tab:Results-policy} compares different key point selection policies, all using the best performing match scoring method: BERT-large fine-tuned on ArgKP. We report the results over the whole dataset (``all arguments''), as well as over the subsets of arguments having no, a single, or multiple matching key points according to the labeled data. In the case of no matches, we present only the accuracy, as recall and F1 are undefined. When considering all the arguments, the \emph{Dual Threshold} policy achieves the best F1 score of 0.73. The \emph{Threshold} method performs well for arguments with no matches or multiple matches. When there is exactly one match (the common case in our data), it has lower precision. The \emph{Best Match} policy performs well when there is a single match, but cannot cope with arguments that have no matches or multiple matches. The \emph{BM+Threshold} method combines the two and is useful when there are no matching key points or a single matching key point, but still has lower recall when there are multiple matching key points. The \emph{Dual Threshold} method improves the recall, and therefore the F1 score, for multiple matches, while maintaining good performance for arguments with a single matching key point or none.
Figure~\ref{tab:prec-acc} shows the precision/recall trade-off for the various policies over the range of possible thresholds, computed for one of the folds. For each policy, we specify the best F1 score, as well as the F1 score obtained for the selected threshold, which was optimized over the development set. The \emph{Threshold} policy allows controlling recall, up to one (where the threshold is zero), at the price of low precision. The \emph{BM+Threshold} policy yields the highest precision, but low recall, since at most one candidate is selected. Note that when the threshold is zero, the \emph{BM+Threshold} policy is equivalent to the \emph{BM} policy. The \emph{Dual Threshold} policy offers the best trade-off, for mid-range precision and recall.
\subsubsection{Match Scoring}
\label{sec:match_scoring}
We experimented with both unsupervised and supervised methods for computing a match score for a given \emph{(argument, key point)} pair. We also explored transfer learning from the related task of natural language inference (NLI).
\paragraph{Unsupervised Methods}
\begin{itemize}
\item \textbf{Tf-Idf.} In order to assess the role of lexical overlap in the matching task, we represent each argument and key point as tf-idf weighted word vectors and use their cosine similarity as the match score.
\item \textbf{Word Embedding.} We examined averaged word embeddings using GloVe \citep{pennington2014glove} and BERT \citep{devlin-etal-2019-bert}. GloVe is a context-independent model that computes a single embedding for each word. BERT is a contextualized embedding model that takes the entire sentence into account. We also experimented with other embedding methods, the Universal Sentence Encoder \citep{cer2018universal} and InferSent \citep{conneau2017supervised}, but they underperformed BERT and their results are therefore not reported here. Again, we use cosine similarity to compute the match score.
\end{itemize}
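As an illustration, the Tf-Idf baseline above can be sketched in a few lines of Python (a minimal re-implementation for clarity; a standard library vectorizer could equally be used):

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    # Represent each text as a tf-idf weighted bag of words. The idf
    # statistics are computed over all given texts, so arguments and
    # key points share the same feature space.
    docs = [t.lower().split() for t in texts]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(d).items()} for d in docs]

def cosine(u, v):
    # Cosine similarity of two sparse vectors, used as the match score.
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A pair sharing topic-specific content words receives a higher score than an unrelated pair, which is exactly the lexical-overlap signal this baseline measures.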
\paragraph{Supervised Methods.}
We fine-tuned the BERT-base-uncased and BERT-large-uncased models \citep{devlin-etal-2019-bert} to predict matches between argument and key point pairs. We added a linear fully connected layer of size 1, followed by a sigmoid layer, on top of the special [CLS] token representation in the BERT model, and trained it for three epochs with a learning rate of 2e-5 and a binary cross-entropy loss.
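For concreteness, the classification head and its loss amount to the following computation (a schematic sketch with hypothetical weights; in the actual model, the weights and the BERT encoder are trained jointly):

```python
import math

def match_probability(cls_vec, w, b):
    # Linear fully connected layer of size 1 followed by a sigmoid,
    # applied to the [CLS] token representation. The weights w, b are
    # hypothetical here; they are learned during fine-tuning.
    z = sum(wi * xi for wi, xi in zip(w, cls_vec)) + b
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(p, y):
    # Binary cross-entropy between predicted probability p and the
    # gold label y (1 = matched, 0 = unmatched).
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
```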
\paragraph{NLI Transfer Learning.} We also experimented with transfer learning from NLI to our task of argument-to-key point match classification. This was motivated by the similarity between these tasks (as discussed in Section~\ref{ssec:argrel}), as well as the availability of large-scale NLI labeled datasets. We considered the Stanford (SNLI) and the Multi-Genre (MNLI) datasets \citep{bowman-etal-2015-large,williams-etal-2018-broad}, each comprising hundreds of thousands of labeled premise-hypothesis pairs. Pairs labeled as \textsc{Entailment} were considered positive instances, while the rest of the pairs, labeled as \textsc{Neutral} or \textsc{Contradiction} were considered negative. We trained BERT-base and BERT-large models on each of these datasets, following the procedure described above.
\subsubsection{Match Classification} \label{Selection Policy}
\label{sec:match_classification}
In the match classification step we select the matching key points for each argument, based on their respective matching scores.
The classification can be done locally, treating each pair individually, or globally, by examining all possible key points for each argument. We compared the following policies for selecting matching key points for a given argument.
\paragraph{Threshold.} For each fold, we find the threshold on the match score that maximizes the F1 score for the positive (matching) class. Pairs whose score exceeds the learned threshold are considered matched.
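A simple way to implement this policy is to sweep over all observed scores as candidate thresholds (a sketch, ignoring ties and efficiency):

```python
def learn_threshold(scores, labels):
    # Pick the score threshold maximizing F1 of the positive
    # (matching) class; labels are 1 for matched, 0 for unmatched.
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
```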
\paragraph{Best Match (BM).} Using a threshold is not optimal for our data, where most arguments have at most one matched key point. A natural solution is to select the best matching key point. For each argument, we consider all key points for the same topic and stance as candidates and predict only the candidate with the highest match score as matched to the argument and the rest as unmatched. Note that this is the only fully unsupervised selection policy, as it does not require labeled data for learning a threshold.
\paragraph{BM+Threshold.} The \emph{BM} policy always assigns exactly one key point for each argument, while 27.5\% of the arguments in our data are not matched to any key point. To address this, we combine the two former policies. The top matching key point is considered a match only if its match score exceeds the learned threshold.
\paragraph{Dual Threshold.} In order to account for arguments with more than one matching key point, two thresholds are learned. If two key points exceed the lower threshold and at least one of them exceeds the upper threshold, both will be matched. Otherwise, it works the same as the \emph{BM+Threshold} policy using only the lower threshold. This allows for zero to two matches per argument.
Thresholds are learned from the development set for supervised match scoring methods, and from both the train and development sets for unsupervised match scoring methods.
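To make the interplay of the two thresholds concrete, the \emph{Dual Threshold} policy can be sketched as follows (our own illustrative implementation of the rule described above, assuming the upper threshold is at least the lower one):

```python
def dual_threshold_select(scores, t_low, t_high):
    # Select matched key points for one argument, given match scores
    # for all its candidate key points. Returns the indices of the
    # selected candidates (zero to two matches).
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    if len(ranked) >= 2:
        first, second = ranked[0], ranked[1]
        # Two matches: top two exceed the lower threshold and the top
        # one (the maximum) exceeds the upper threshold.
        if scores[second] >= t_low and scores[first] >= t_high:
            return [first, second]
    # Otherwise fall back to BM+Threshold with the lower threshold.
    best = ranked[0]
    return [best] if scores[best] >= t_low else []
```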
\subsubsection{Annotation Results}
\label{annotations_results}
\begin{table*}[t]\small
\begin{center}
\begin{tabular}{|p{3.5cm}|p{5.5cm}|p{5.5cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Topic}} &
\multicolumn{1}{|c|}{\textbf{Argument}} & \multicolumn{1}{|c|}{\textbf{Associated Key Point(s)}} \\
\hline
\hline
We should end mandatory retirement. &Forcing members of a profession to retire at a certain age creates an experience drain.&A mandatory retirement age decreases institutional knowledge. \\
\hline
We should ban the use of child actors. &Child actors are fine to use as long as there is a responsible adult watching them.&Child performers should not be banned as long as there is supervision/regulation. \\
\hline
We should close Guantanamo Bay detention camp.&Guantanamo can provide security for accused terrorists who would be hurt in the general prison population.&The Guantanamo bay detention camp is better for prisoners than the alternatives. \\
\hline
Assisted suicide should be a criminal offence.&People have a basic right to bodily autonomy, deciding whether or not to die with minimal suffering and dignity is integral to that right.&
People should have the freedom to choose to end their life.\newline
Assisted suicide gives dignity to the person that wants to commit it.
\\
\hline
We should ban human cloning.&The world is already overpopulated, cloning humans will only contribute to this problem.&No key point \\
\hline
\end{tabular}
\caption{Examples for key point association to arguments.}
\label{tab:label_type_examples}
\end{center}
\end{table*}
Next, we consolidate the individual annotations as follows. We say that an argument $a$ is mapped to a key point $k$ if at least 60\% of the annotators mapped $a$ to $k$. Recall that an argument can be mapped to more than one key point. Similarly, we say that $a$ has \emph{no key point} if at least 60\% of the annotators mapped $a$ to None (which is equivalent to not selecting any key point for the argument). Otherwise, we say that $a$ is \emph{ambiguous}, i.e., the annotations were indecisive. Table~\ref{tab:label_type_examples} shows examples for arguments and their matching key points in our dataset.
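The consolidation rule can be sketched as follows (our own illustration; \texttt{annotations} holds, for one argument, each annotator's set of selected key points, with an empty set standing for the None option):

```python
from collections import Counter

def consolidate(annotations, min_ratio=0.6):
    # Key points selected by at least 60% of the annotators are
    # considered matched to the argument.
    n = len(annotations)
    counts = Counter(kp for selected in annotations for kp in selected)
    matched = sorted(kp for kp, c in counts.items() if c / n >= min_ratio)
    if matched:
        return matched
    # Otherwise: "no key point" if at least 60% selected None, and
    # "ambiguous" if the annotations were indecisive.
    none_votes = sum(1 for selected in annotations if not selected)
    return "no key point" if none_votes / n >= min_ratio else "ambiguous"
```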
The distribution of the arguments in the dataset over the above categories is shown in Table~\ref{tab:num_kp_per_arg}. Remarkably, our key points, composed independently of the arguments, were able to cover 72.5\% of them, with 5\% of the arguments mapped to more than one key point.
We further investigated the differences between arguments in each category by comparing their average quality score (taken from the IBM-Rank-30k dataset), number of tokens and number of sentences. The results are shown as additional columns in Table~\ref{tab:num_kp_per_arg}. Interestingly, arguments that have no key point tend to be shorter and have a lower quality score, compared to arguments mapped to a single key point; arguments mapped to more than one key point are the longest and have the highest quality.
\begin{table*}[]\small
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
& \% Arguments &Quality & \# Tokens & \# Sentences \\ \hline \hline
No key point & 4.7\% & 0.75 & 16.35 & 1.09 \\ \hline
Ambiguous & 22.8\% & 0.80 & 18.97 & 1.15 \\ \hline
Single key point & 67.5\% & 0.84 & 18.54 & 1.15 \\ \hline
Multiple key points & 5.0\% & 0.91 & 23.66 & 1.33 \\ \hline
\end{tabular}
\caption{Argument statistics by key point matches.}
\label{tab:num_kp_per_arg}
\end{table*}
Figure~\ref{fig:avg_kp_coverage} examines the impact of the number of key points on argument coverage. For each topic and stance, we order the key points according to the number of their matched arguments, and add them incrementally. The results indicate that arguments are not trivially mapped to only one or two key points, but a combination of several key points is required to achieve high coverage. The marginal contribution decays for the sixth and seventh key points, suggesting that seven key points indeed suffice for this task.
\begin{figure}[t!]
\includegraphics[scale=0.7]{avg_kp_coverage.png}
\caption{Argument coverage per number of key points.}
\label{fig:avg_kp_coverage}
\end{figure}
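The coverage analysis above can be reproduced per topic and stance with a short computation (a sketch; \texttt{matches} maps each key point to the set of arguments matched to it, and \texttt{n\_arguments} is the total number of arguments for that topic and stance):

```python
def coverage_curve(matches, n_arguments):
    # Add key points in decreasing order of their number of matched
    # arguments, recording the cumulative fraction of arguments covered.
    ordered = sorted(matches.values(), key=len, reverse=True)
    covered, curve = set(), []
    for args in ordered:
        covered |= args
        curve.append(len(covered) / n_arguments)
    return curve
```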
22.8\% of the arguments are \emph{ambiguous}. Annotations for these arguments are split over several possible key points, none reaching the 60\% threshold. For instance, the argument \emph{``homeschooling enables parents with fringe views to push their agenda on their children without allowing exposure to alternative viewpoints.''}, had two key points with annotator votes higher than 40\%, but below 60\%:
\begin{enumerate}
\item \emph{Homeschools cannot be regulated / standardized.}
\item \emph{Parents are not qualified as teachers.}
\end{enumerate}
Such cases suggest that many arguments are somewhat covered by the key points, but if the judgment is not clear-cut, the different intuitions of the annotators may result in no label receiving the required majority.
\subsubsection{Annotation Process}
Using the Figure Eight crowd labeling platform\footnote{\url{http://figure-eight.com}}, we created gold labels for associating the arguments selected as described in Section~\ref{arguments_and_kps} with key points. For each argument, given in the context of its debatable topic, annotators were presented with the key points created for this topic in the relevant stance.
They were guided to mark all of the key points this argument can be associated with, and if none are relevant, to select the ``None'' option. Each argument was labeled by $8$ annotators.
\textbf{Quality Measures:} To ensure the quality of the collected data, the following measures were taken: \begin{enumerate}
\item Test questions. Annotators were asked to determine the stance of each argument towards the topic. Similarly to \citet{toledo-etal-2019-automatic}, this question functioned as a hidden test question\footnote{Unlike \citeauthor{toledo-etal-2019-automatic}, the results were analyzed after the task was completed, and the annotators were not aware of their success/failure.}. All judgments of annotators failing in more than $10\%$ of the stance questions were discarded.
\item Annotator-$\kappa$ score. This score, measuring inter-annotator agreement as defined by \citet{toledo-etal-2019-automatic}, was calculated for each annotator, and all judgments of annotators with annotator-$\kappa < 0.3$ were ignored. The score averages the pairwise Cohen's Kappa \citep{landis+koch77} of a given annotator with every annotator sharing at least $50$ common judgments with her, and is computed only for annotators having at least $5$ such peers.
\item Selected group of trusted annotators. As in \citet{gretz2019largescale}, the task was only available to a group of annotators which had performed well in previous tasks by our team.
\end{enumerate}
As described above, the annotation of each key point with respect to a given argument was performed independently, and each annotator could select multiple key points to be associated with each given argument. For the purpose of calculating inter-annotator agreement, we considered \emph{(argument, key point)} pairs, annotated with a binary label denoting whether the argument was matched to the key point.
Fleiss' Kappa for this task was $0.44$ \citep{fleiss:71}, and Cohen's Kappa was $0.5$ (averaging Annotator-$\kappa$ scores). These scores correspond to ``moderate agreement'' and are comparable to agreement levels previously reported for other annotation tasks in computational argumentation \citep{boltuzic-snajder-2014-back,eindor2019corpus}. As for the stance selection question, $98\%$ of the judgments were correct, indicating overall high annotation quality.
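For reference, Cohen's Kappa for a pair of annotators on their shared binary judgments can be computed as follows (a standard textbook formula; the annotator-$\kappa$ score averages it over qualifying annotator pairs):

```python
def cohens_kappa(labels_a, labels_b):
    # Observed agreement versus agreement expected by chance, for two
    # annotators' binary labels over the same items.
    n = len(labels_a)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pa = sum(labels_a) / n
    pb = sum(labels_b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)
```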
\textbf{Data Cleansing}: In addition to the above measures, the following annotations were removed from the data: (i) annotations in which the answer to the stance selection question was wrong; (ii) annotations in which the key point choice was illegal, i.e., the ``None'' option and one of the key points were both selected. However, the rate of these errors was rather low for each of the annotators ($<10\%$ and $<5\%$, respectively).
Arguments left with fewer than $7$ valid judgments after applying the above quality measures and data cleansing were removed from the dataset, leaving $6,568$ labeled arguments.
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{relatedwork}
\section{Data}
\subsection{Arguments and Key Points}
\label{arguments_and_kps}
\input{arguments_keypoints}
\subsection{Mapping Arguments to Key Points}
\input{annotation}
\input{analysis}
\subsection{Final Dataset Generation}
\input{dataset}
\section{Experiments}
\subsection{Experimental Setup}
\input{setup}
\subsection{Results}
\input{results}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgments}
We would like to thank the anonymous reviewers for their helpful comments and suggestions.
\subsection{Argument Mining}
The starting point for the current work is a collection of pro and con arguments for a given topic. As previously mentioned, these arguments may be collected from a large audience by conducting a survey, or mined automatically from text.
Some of the previous work on argument mining focused on specific domains such as legal documents \citep{Moens:2007,Wyner:2010}, student essays \citep{stab-gurevych-2017-parsing, persing-ng-2016-end}, and user comments on proposed regulations \cite{park-cardie-2014-identifying}.
Mining arguments and argument components for a given topic (also known as \emph{context}) has been a prominent line of research in argument mining. \citet{levy-etal-2014-context} introduced the task of context-dependent claim detection in a collection of Wikipedia articles, and \citet{rinott-etal-2015-show} did the same for context-dependent evidence detection. More recently, several works focused on topic-related argument mining from the Web or other massive corpora \citep{Levy:ARGMINING2017,Levy:COLING2018,Wachsmuth:ARGMINING2017,Stab:NAACL2018,Stab:EMNLP2018,eindor2019corpus}.
Stance classification of extracted arguments can be performed as a separate step \cite{Barhaim:2017} or jointly with argument detection, as a three-way classification (pro argument/con argument/none), as done by \citet{Stab:EMNLP2018}.
\subsection{Argument Clustering and Summarization}
\label{ssec:argrel}
Several works have focused on identifying pairs of similar arguments, or clustering similar arguments together. \citet{ajjour-etal-2019-modeling} addressed the task of splitting a set of arguments
into a set of non-overlapping \emph{frames} such as \emph{Economics}, \emph{Environment} and \emph{Politics}. \citet{reimers-etal-2019-classification} classified argument pairs as similar/dissimilar. \citet{misra-etal-2016-measuring} aimed to detect argument pairs that are assumed to share the same \emph{argument facet}, which is similar to our notion of \emph{key points}. However, they did not attempt to explicitly identify or generate these facets, which remained implicit, but rather focused on detecting similarity between argument pairs. In contrast to these works, we directly map arguments to key points.
\citet{egan-etal-2016-summarising} proposed to summarize argumentative discussions through the extraction of salient ``points'', where each point is a verb and its syntactic arguments. Applying their unsupervised method to online political debates showed significant improvement over a baseline extractive summarizer, according to human evaluation. While the current work also aims to summarize argumentative content via concise points, our goal is not to extract these points but to accurately map arguments to given points. Our main challenge is to identify the various ways in which the meaning of a point is conveyed in different arguments. The method employed by \citeauthor{egan-etal-2016-summarising} only matches arguments with the same \emph{signature}, i.e., the same verb, subject and object dependency nodes; hence, its ability to capture such variability is limited.
The line of work that seems most similar to ours is of \citet{hasan-ng-2014-taking}, \citet{boltuzic-snajder-2014-back} and \citet{naderi-2016}. \citeauthor{hasan-ng-2014-taking} classified posts and individual sentences from online debates into a closed set of \emph{reasons}, composed manually for each topic. \citeauthor{boltuzic-snajder-2014-back} mapped comments from one debating website (\emph{ProCon.org}) to arguments taken from another debating website (\emph{iDebate.org}). \citet{naderi-2016} addressed a similar task: she used part of the \citeauthor{boltuzic-snajder-2014-back} corpus as training data for an SVM classifier, which was then tested on sentences and paragraphs from same-sex marriage debates in the Canadian Parliament, annotated with the same set of arguments.
Our work differs from these works in several respects. First, we deal with crowd-contributed arguments, taken from the dataset of \citet{gretz2019largescale}, while these works dealt with posts or comments in debate forums, and parliamentary debates. Second, the dataset developed in this work is far more extensive, covering 28 topics and over 6,500 arguments\footnote{As detailed in the next section, a few hundred arguments out of the initial 7,000 were filtered in the process of constructing the dataset.}, as compared to the 2 and 4 topics in the datasets of \citeauthor{boltuzic-snajder-2014-back} and \citeauthor{hasan-ng-2014-taking}, respectively. This allows us to perform a comprehensive analysis on the feasibility and effectiveness of argument-to-key point mapping over a variety of topics, which has not been possible with previous datasets. Lastly, while \citeauthor{hasan-ng-2014-taking} only perform within-topic classification, where the classifier is trained and tested on the same topic, we address the far more challenging task of cross-topic classification. \citeauthor{boltuzic-snajder-2014-back} experimented with both within-topic and cross-topic classification; however, they used a limited amount of data for training and testing: two topics, with less than 200 comments per topic.
Finally, we point out the similarity between the argument/key point relation and the text/hypothesis relation in \emph{textual entailment}, also known as \emph{natural language inference (NLI)} \citep{DBLP:series/synthesis/2013Dagan}. Indeed, \citet{boltuzic-snajder-2014-back} used textual entailment as part of their experiments, following the earlier work of \citet{cabrio:ac13}, who used textual entailment to detect support/attack relations between arguments.
\subsubsection{Match Scoring Methods}
\begin{table*}[t]\small
\begin{center}
\begin{tabular}{|c|c|cccc|}
\hline
\multicolumn{2}{|c|}{} & Acc & P & R & F1 \\
\hline
& Majority Class & 0.793 & & 0.000 & \\
& Random Predictions & 0.679 & 0.206 & 0.200 & 0.203 \\
\hline
Unsupervised Methods & Tf-Idf & 0.512 & 0.246 & 0.644 & 0.352 \\
& Glove Embeddings & 0.346 & 0.212 & 0.787 & 0.330 \\
& BERT Embeddings & 0.660 & 0.319 & 0.550 & 0.403 \\
\hline
Supervised Methods & BERT-base (ArgKP) & 0.844 & 0.609 & 0.718 & 0.657 \\
& BERT-large (ArgKP) & \textbf{0.868} & \textbf{0.685} & \textbf{0.688} & \textbf{0.684 }
\\
\hline
NLI Transfer Learning & BERT-base (SNLI) & 0.777& 0.472 & 0.514& 0.485 \\
& BERT-base (MNLI) & 0.772& 0.470 & 0.558& 0.505 \\
& BERT-large (SNLI) & 0.765& 0.456 & 0.533& 0.487 \\
& BERT-large (MNLI) & 0.792& 0.518 & 0.542& 0.526 \\
\hline
\end{tabular}
\caption{Comparison of match scoring methods, using the \emph{Threshold} selection policy. P, R and F1 refer to the positive class. Acc is the accuracy.}
\label{tab:Results}
\end{center}
\end{table*}
\begin{table*}[t]\small
\centering
\begin{tabular}{|c||c|c|c|c||c|c|c||c|c|c||c|}
\hline
& \multicolumn{4}{c||}{All} & \multicolumn{3}{c||}{Single}& \multicolumn{3}{c||}{Multiple}& No \\
& \multicolumn{4}{c||}{Arguments} & \multicolumn{3}{c||}{Key Point}& \multicolumn{3}{c||}{Key Points}& Key Points\\
\hline
& Acc & P & R & F1 & P & R & F1 & P & R & F1 & Acc \\
\hline
Threshold & .868 & .685 & .688 & .684 & .720 & .686 & .701 & .904 & \textbf{.690} & \textbf{.782} & \textbf{.933} \\
Best Match & .876 & .696 & .711 & .703 & .836 & .747 & \textbf{.789} & .936 & .448 & .606 & .839 \\
BM+Threshold & \textbf{.890} & \textbf{.772} & .665 & .713 & \textbf{.856} & .699 & .769 & .941 & .421 & .580 & .915 \\
Dual Threshold & .887 & .721 & \textbf{.740} & \textbf{.730} & .784 & \textbf{.752} & .767 & \textbf{.945} & .656 & .773 & .908 \\
\hline
\end{tabular}
\caption{Comparing key point selection policies, using BERT-large trained on the ArgKP dataset for match scoring.}
\label{tab:Results-policy}
\end{table*}
Table~\ref{tab:Results} compares the various match scoring methods, all using the \emph{Threshold} key point selection policy. Results are obtained by micro-averaging over the argument-key point pairs in each fold, and averaging over the different folds. We consider Precision, Recall and F1 of the positive class, as well as the overall accuracy. We also list for reference the majority class baseline that always predicts ``no match'', and the random baseline, which randomly predicts the positive class according to its probability in the training data.
The unsupervised models fail to capture the relation between the argument and the key points. Tf-Idf and Glove perform the worst, showing that simple lexical similarity is insufficient for this task. BERT embedding does better but still reaches a relatively low F1 score of 0.4.
In contrast to the unsupervised models, supervised models are shown to perform well. BERT with fine tuning leads to a substantial improvement, reaching F1 score of 0.657 with the BERT-base model, and 0.684 with the BERT-large model.
BERT Models trained on NLI data are considerably better than the unsupervised methods, with the best model reaching F1 of 0.526, yet their performance is still far below the supervised models trained on our ArgKP dataset. This may reflect both the similarities and the differences between NLI and the current task. We have also experimented with combining these two types of data in cascade: BERT was first trained on a large NLI dataset (SNLI, MNLI or their union), and was then fine-tuned on the smaller ArgKP data. However, it did not improve the supervised results.
\paragraph{Error Analysis.}
By analyzing the top errors of the supervised classifier (BERT-large), we found several systematic patterns of errors. In most cases, non-matching arguments and key points received a high match score in one of the following cases:
\begin{itemize}
\item They share some key phrases. For example: \emph{``It is unfair to only subsidize vocational education. Achieving a more advanced education is very expensive and it would also need to be subsidized.''} and \emph{``Subsidizing vocational education is expensive''.}
\item They share a large portion of the sentence, but not the main point, for example: \emph{``Women should be able to fight if they are strong enough''} and \emph{``Women should be able to serve in combat if they choose to''.}
\item They are at least partially related, but labeled as non-matching due to a better fitting key point for the same argument. For example: \emph{``We should subsidize space exploration because it increases the knowledge of the universe we are in''} and \emph{``Space exploration improves science/technology''} can be considered matched, but were labeled as unmatched due to the key point \emph{``Space exploration unravels information about the universe''}. Using the \emph{Best Match} policy helps in these cases.
\end{itemize}
For arguments and key points that were labeled as matched but received a low match score, the relation was in many cases implied or required some further knowledge, for examples: \emph{``Journalism is an essential part of democracy and freedom of expression and should not be subsidized by the state.''} and \emph{``government intervention has the risk of inserting bias/harming objectivity''.}
\subsubsection{Key Point Selection Policies}
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth]{precision_recall_tredeoff.png}
\caption{Precision/Recall trade-off for different key point selection policies. For each method, the highest F1 score, as well as the F1 score for the chosen threshold are specified. For the \emph{Best Match + Threshold} policy, these two scores coincide.}
\label{tab:prec-acc}
\end{figure*}
Table~\ref{tab:Results-policy} compares different key point selection policies, all using the best performing match scoring method: BERT-large fine-tuned on ArgKP. We report the results over the whole dataset (``all arguments''), as well as the subsets of arguments having none, single or multiple matching key points according to the labeled data. In case of no matches we present the accuracy, as recall and F1 scores are undefined. When considering all the arguments, the \emph{Dual Threshold} policy achieves the best F1 score of 0.73. The \emph{Threshold} method performs well for arguments with no matches or multiple matches. When there is exactly one match (the common case in our data), it has lower precision. The \emph{Best Match} policy performs well when there is a single match, but is not able to cope with arguments that have no matches or have multiple matches. The \emph{BM+Threshold} method combines the two and is useful when there are no matching key points or a single matching key point, but still have lower recall when there are multiple matching key points. The \emph{Dual Threshold} method improves the recall and therefore the F1 score for multiple matches while maintaining good performance for arguments with single or no matches.
Figure ~\ref{tab:prec-acc} shows Precision-Recall trade-off for the various policies, using the different possible thresholds, computed for one of the folds. For each policy, we specify the best F1 score, as well as the F1 score obtained for the selected threshold, which was optimized over the development set. The \emph{Threshold} policy allows to control recall, up to one (where the threshold is zero), at the price of low precision. The \emph{BM+Threshold} policy generates the highest precision, but
low recall, since at most one candidate is selected. Note that when the threshold is zero, the \emph{BM+Threshold} policy is equivalent to the \emph{BM} policy. The \emph{Dual Threshold} policy offers the best trade-off, for mid-range precision and recall.
\subsubsection{Match Scoring}
\label{sec:match_scoring}
We experimented with both unsupervised and supervised methods for computing a match score for a given \emph{(argument, key point)} pair. We also explored transfer learning from the related task of natural language inference (NLI).
\paragraph{Unsupervised Methods}
\begin{itemize}
\item \textbf{Tf-Idf.} In order to assess the role of lexical overlap in the matching task, we represent each argument and key point as tf-idf weighted word vectors and use their cosine similarity as the match score.
\item \textbf{Word Embedding.} We examined averaged word embeddings using GloVe \citep{pennington2014glove} and BERT \citep{devlin-etal-2019-bert}. GloVe is a context-independent model that computes a single embedding for each word. BERT is a contextualized embedding model that takes the entire sentence into account. We also experimented with other embedding methods that underperformed BERT, whose results are therefore not reported here: Universal Sentence Encoder \citep{cer2018universal} and InferSent \citep{conneau2017supervised}. Again, we use cosine similarity to compute the match score.
\end{itemize}
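As a concrete illustration of the tf-idf baseline, consider the following minimal sketch using scikit-learn (the argument and key point texts are made up for illustration, not examples from ArgKP):

```python
# Minimal sketch of the tf-idf matching baseline: represent each argument and
# key point as tf-idf weighted word vectors, and use cosine similarity as the
# match score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

arguments = ["school uniforms limit students freedom of expression"]
key_points = [
    "school uniforms harm self expression",  # shares more terms with the argument
    "school uniforms reduce bullying",
]

# Fit the vocabulary and idf weights on all texts, then embed both sides.
vectorizer = TfidfVectorizer().fit(arguments + key_points)
arg_vecs = vectorizer.transform(arguments)
kp_vecs = vectorizer.transform(key_points)

# match_scores[i, j] = match score of (argument i, key point j)
match_scores = cosine_similarity(arg_vecs, kp_vecs)
print(match_scores.shape)  # (1, 2)
```

As expected for a purely lexical method, the key point with the larger word overlap receives the higher score.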
\paragraph{Supervised Methods.}
We fine-tuned the BERT-base-uncased and BERT-large-uncased models \citep{devlin-etal-2019-bert} to predict matches between argument and key point pairs. We added a linear fully connected layer of size 1, followed by a sigmoid layer, on top of the special [CLS] token in the BERT model, and trained it for three epochs with a learning rate of 2e-5 and a binary cross entropy loss.
\paragraph{NLI Transfer Learning.} We also experimented with transfer learning from NLI to our task of argument-to-key point match classification. This was motivated by the similarity between these tasks (as discussed in Section~\ref{ssec:argrel}), as well as the availability of large-scale NLI labeled datasets. We considered the Stanford (SNLI) and the Multi-Genre (MNLI) datasets \citep{bowman-etal-2015-large,williams-etal-2018-broad}, each comprising hundreds of thousands of labeled premise-hypothesis pairs. Pairs labeled as \textsc{Entailment} were considered positive instances, while the rest of the pairs, labeled as \textsc{Neutral} or \textsc{Contradiction}, were considered negative. We trained BERT-base and BERT-large models on each of these datasets, following the procedure described above.
\subsubsection{Match Classification} \label{Selection Policy}
\label{sec:match_classification}
In the match classification step we select the matching key points for each argument, based on their respective matching scores.
The classification can be done locally, treating each pair individually, or globally, by examining all possible key points for each argument. We compared the following policies for selecting matching key points for a given argument.
\paragraph{Threshold.} For each fold, we find the threshold on the match score that maximizes the F1 score for the positive (matching) class. Pairs whose score exceeds the learned threshold are considered matched.
\paragraph{Best Match (BM).} Using a threshold is not optimal for our data, where most arguments have at most one matched key point. A natural solution is to select the best matching key point. For each argument, we consider all key points for the same topic and stance as candidates and predict only the candidate with the highest match score as matched to the argument and the rest as unmatched. Note that this is the only fully unsupervised selection policy, as it does not require labeled data for learning a threshold.
\paragraph{BM+Threshold.} The \emph{BM} policy always assigns exactly one key point for each argument, while 27.5\% of the arguments in our data are not matched to any key point. To address this, we combine the two former policies. The top matching key point is considered a match only if its match score exceeds the learned threshold.
\paragraph{Dual Threshold.} In order to account for arguments with more than one matching key point, two thresholds are learned. If two key points exceed the lower threshold and at least one of them exceeds the upper threshold, both will be matched. Otherwise, it works the same as the \emph{BM+Threshold} policy using only the lower threshold. This allows for zero to two matches per argument.
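The \emph{Dual Threshold} logic, with its fallback to \emph{BM+Threshold} behaviour using the lower threshold, can be summarized in a short sketch (our own illustrative code, not the authors' implementation):

```python
# Illustrative sketch of the Dual Threshold selection policy. `scores` holds
# the match scores of all candidate key points (same topic and stance) for one
# argument; `low` and `high` are the learned thresholds (low <= high).
def dual_threshold_select(scores, low, high):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    best = ranked[0]
    second = ranked[1] if len(ranked) > 1 else None
    # Two matches: both exceed the lower threshold, and at least one of them
    # (necessarily the best) exceeds the upper threshold.
    if second is not None and scores[second] > low and scores[best] > high:
        return [best, second]
    # Otherwise behave like BM+Threshold with the lower threshold.
    if scores[best] > low:
        return [best]
    return []  # the argument matches no key point

print(dual_threshold_select([0.9, 0.6, 0.2], low=0.5, high=0.8))  # [0, 1]
print(dual_threshold_select([0.7, 0.3], low=0.5, high=0.8))       # [0]
print(dual_threshold_select([0.4, 0.3], low=0.5, high=0.8))       # []
```

The three calls illustrate the zero-, one-, and two-match cases, respectively.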
Thresholds are learned from the development set for supervised match scoring methods, and from both the train and development sets for unsupervised match scoring methods.
\section{Introduction and Problem Formulation}\label{sec:introduction}
Sampling from the posterior is a central task in Bayesian inference. Suppose there is a family of densities $p_\theta(x)$ on the sample space $x\in \mathcal{X}$, and a prior $\pi(\theta)$ on the parameter space $\Theta$. It is of interest to sample from the posterior $\pi(\theta|x) \propto \pi(\theta) p_\theta(x)$. If the prior and likelihood can be evaluated at every point, then Markov chain Monte Carlo (MCMC) is the natural choice. The standard Metropolis-Hastings (M-H) algorithm (Algorithm \ref{alg:M-H}) constructs a Markov chain on $\Theta$ with stationary distribution $\pi(\theta|x)$.
\begin{algorithm}
\caption{Metropolis-Hastings Algorithm}\label{alg:M-H}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Propose $\theta'\sim q(\theta'|\theta)$
\State Compute $$a = \frac{\pi(\theta'|x)q(\theta|\theta')}{\pi(\theta|x)q(\theta'|\theta)} = \frac{\pi(\theta')p_{\theta'}(x)q(\theta|\theta')}{\pi(\theta)p_{\theta}(x)q(\theta'|\theta)}$$
\State Draw $r \sim \text{Uniform}[0,1]$
\State \textbf{If} $(r< a)$ \textbf{then} set $\theta =\theta'$
\EndFor
\end{algorithmic}
\end{algorithm}
Here $a$ is often called the `acceptance ratio', and $\min\{a,1\}$ the `acceptance probability'.
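For concreteness, Algorithm \ref{alg:M-H} can be sketched in a few lines of Python for a toy model with tractable likelihood; the Bernoulli model, uniform prior, and random-walk proposal below are our own illustrative choices:

```python
import math
import random

# Toy posterior: x = 6 successes in 10 Bernoulli(theta) trials with a uniform
# prior, so pi(theta|x) is Beta(7, 5) with posterior mean 7/12.
def log_post(theta):
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return 6 * math.log(theta) + 4 * math.log(1.0 - theta)

def metropolis_hastings(T=20000, step=0.2, seed=0):
    rng = random.Random(seed)
    theta, chain = 0.5, []
    for _ in range(T):
        # Symmetric random-walk proposal, so the q-ratio in `a` cancels.
        theta_prop = theta + rng.uniform(-step, step)
        log_a = log_post(theta_prop) - log_post(theta)
        if math.log(rng.random()) < log_a:  # accept with probability min{a, 1}
            theta = theta_prop
        chain.append(theta)
    return chain

chain = metropolis_hastings()
print(sum(chain) / len(chain))  # close to the posterior mean 7/12
```

Working with log-densities, as above, avoids numerical underflow when the likelihood involves many factors.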
In practice, however, the likelihood is often intractable or computationally expensive. In many such scenarios, the likelihood function is known only up to a normalizing constant, that is:
\[
p_\theta(x) = \frac{f_\theta(x)}{Z(\theta)},
\]
where $f_\theta(x)$ can be evaluated at every $(x, \theta)\in \mathcal{X}\times \Theta$, but $Z(\theta)$ is unknown. Such intractable constants arise in many statistical problems and interesting models, such as image analysis \cite{besag1986statistical}, Gaussian graphical models \cite{roverato2002hyper}, and Ising models \cite{pettitt2003efficient}.
If one is still interested in sampling from the posterior $\pi(\theta|x)$, the standard Metropolis-Hastings algorithm (Algorithm \ref{alg:M-H}) gives the acceptance ratio:
\[
a = \frac{\pi(\theta')f_{\theta'}(x)q(\theta|\theta')}{\pi(\theta)f_{\theta}(x)q(\theta'|\theta)}\frac{Z(\theta)}{Z(\theta')},
\]
which cannot be calculated due to the unknown ratio $\frac{Z(\theta)}{Z(\theta')}$.
This problem is known as the `doubly intractable' problem, as $Z(\theta)$ and $Z(\theta')$ are both unknown. Based on the idea of estimating the normalizing constant or the likelihood function, a wide range of techniques has been proposed, such as the maximum pseudo-likelihood estimator \cite{besag1974spatial}, ratio importance sampling \cite{chen1997monte}, bridge sampling \cite{meng1996simulating}, and path sampling \cite{gelman1998simulating}. These methods use different approaches to estimate $Z(\theta)$ and $Z(\theta')$, plugging the estimator into the expression for $a$. However, this breaks detailed balance, so the Markov chain no longer converges to the correct stationary distribution, which may cause problems.
Pseudo-marginal Monte Carlo was first introduced in \cite{beaumont2003estimation}. M{\o}ller et al. \cite{moller2006efficient} proposed a `single auxiliary variable method', which is a special case of the pseudo-marginal Monte Carlo approach. This algorithm is asymptotically exact, i.e., the Markov chain converges to the correct stationary distribution. Convergence rates for pseudo-marginal Markov chains can be found in \cite{andrieu2009pseudo}, \cite{andrieu2015convergence}.
In 2006, Murray et al. \cite{murray2012mcmc} proposed the `exchange algorithm', which generalizes M{\o}ller et al. \cite{moller2006efficient}. The exchange algorithm also proposes auxiliary variables in each iteration and is asymptotically exact. However, it requires perfect sampling from $p_{\theta'}$, which is not practical in many cases: sampling from $p_{\theta'}$ usually also requires MCMC and can be very slow. Several variants of the exchange algorithm have therefore been proposed to tackle this problem. For example, Liang et al. proposed the `double Metropolis-Hastings sampler' \cite{liang2010double} and the `adaptive exchange algorithm' \cite{liang2016adaptive}, and Alquier et al. proposed `Noisy Monte Carlo' \cite{alquier2016noisy}. All these algorithms run Markov chains with approximate transition kernels, so the stationary distribution is no longer the exact posterior distribution and may not even exist.
The modified pseudo-marginal Monte Carlo algorithm, which we introduce later, and the exchange algorithm both construct Markov chains with randomized acceptance probabilities by introducing auxiliary variables at each iteration; both will be referred to as `Randomized MCMC' (RMCMC) hereafter.
This paper provides a comparison of the two algorithms and a new MCMC algorithm, which chooses between them adaptively at each iteration, obtaining better acceptance probabilities. In Section \ref{review}, we review the two RMCMC algorithms and explain a statistical point of view on them, which provides the main motivation of this paper. In Section \ref{sec: two examples}, two examples are introduced as a comparison between the two algorithms: in the first example the exchange algorithm performs better than the pseudo-marginal Monte Carlo, and in the second example vice versa. In Section \ref{sec: Pseudo-marginal Exchange}, we propose a new algorithm, Multi-armed Bandit MCMC (MABMC), which is a combination of the two RMCMC algorithms and obtains a higher acceptance probability.
\section{Review of the two RMCMC algorithms}\label{review}
\subsection{Pseudo-marginal Monte Carlo (PMC)}
To tackle the problem of the unknown ratio $\frac{Z(\theta)}{Z(\theta')}$, M{\o}ller et al. \cite{moller2006efficient} introduced an auxiliary variable $y$, which also takes values in the sample space $\mathcal{X}$; the joint distribution is designed to be:
\[
\pi(x,y,\theta) = \pi(y|x,\theta) \frac{f_\theta(x)}{Z(\theta)}\pi(\theta)
\]
so that the original joint distribution $\pi(x,\theta)$ is unaffected. Hence, if we can sample from $\pi(y,\theta|x)$, the marginal distribution of $\theta$ is our target $\pi(\theta|x)$. The pseudo-marginal Monte Carlo algorithm is designed as follows:
\begin{algorithm}
\caption{Pseudo-marginal Monte Carlo Algorithm}\label{alg:pseudo}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $(\theta, y)$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Generate $\theta'\sim q(\theta'|\theta)$
\State Generate $y' \sim p_{\theta'}(y')= f_{\theta'}(y')/Z(\theta') $
\State Compute $$a = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}$$
\State Draw $r \sim \text{Uniform}[0,1]$
\State \textbf{If} $(r< a)$ \textbf{then} set $(\theta, y)= (\theta',y')$.
\EndFor
\end{algorithmic}
\end{algorithm}
The auxiliary density $\pi(y|x,\theta)$ can be an arbitrary distribution. The only requirement on $\pi(y|x,\theta)$ is that, for every $\theta$, its support contains the support of $p_{\theta}(x)$. For example, potential choices for $\pi(y|x,\theta)$ include the uniform distribution on $[0,1]$ if $\mathcal X = [0,1]$, a normal distribution with mean $\theta$ and variance $1$ if $\mathcal X = \mathbb{R}$, or $\pi(y|x,\theta) = p_{\hat{\theta}}(y)$, where $\hat{\theta}$ is an estimator of $\theta$. The choice of $\pi(y|x,\theta)$ does not affect the correctness of the pseudo-marginal algorithm, but it has a strong impact on the efficiency of the Markov chain. Detailed discussions and suggestions for choosing a proper auxiliary density can be found in M{\o}ller et al. \cite{moller2006efficient}.
PMC can be viewed as an M-H algorithm on the space $\Theta\times \mathcal{X}$, with transition kernel $q(\theta',y'|\theta, y) = q(\theta'|\theta) p_{\theta'}(y')$. With this choice of transition kernel, the acceptance ratio of the M-H algorithm becomes
\begin{align*}
a &= \frac{\pi(\theta',y'|x) } {\pi(\theta,y|x)}\cdot \frac{q(\theta|\theta') p_{\theta}(y)}{q(\theta'|\theta) p_{\theta'}(y')} \\
& = \frac{\pi(\theta')f_{\theta'}(x)\pi(y'|x,\theta')}{\pi(\theta)f_\theta(x)\pi(y|x,\theta)}\cdot \frac{Z(\theta)}{Z(\theta')}\cdot \frac{q(\theta|\theta')}{q(\theta'|\theta)}\cdot \frac{f_\theta(y)}{f_{\theta'}(y')}\cdot \frac{Z(\theta')}{Z(\theta)}\\
& = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)} .\\
\end{align*}
Therefore the acceptance ratio does not depend on the unknown term $\frac{Z(\theta)}{Z(\theta')}$ .
One of the most important assumptions made here is the ability to perform exact sampling $y' \sim p_{\theta'}(y')= f_{\theta'}(y')/Z(\theta')$ in Algorithm \ref{alg:pseudo}. As $Z(\theta')$ is unknown, this step is difficult and often not doable. Surprisingly, perfect sampling without knowing the normalizing constant is sometimes still possible using the `coupling from the past' method; see \cite{propp1996exact} for details.
In most cases, however, one establishes another Markov chain with stationary distribution $p_{\theta'}(y')$ as an approximation in practice.
\subsection{Modified Pseudo-marginal Monte Carlo (MPMC)}
The state space of PMC is $\Theta\times\mathcal{X}$, while the state space of the exchange algorithm (described in Section \ref{subsec:exchange}) is $\Theta$. We therefore provide a modified version of PMC, which is essentially the same as PMC but with state space $\Theta$, making it possible to compare the two algorithms and to incorporate them together.
The modified Pseudo-marginal Monte Carlo is designed as follows:
\newpage
\begin{algorithm}
\caption{Modified Pseudo-marginal Monte Carlo Algorithm}\label{alg:mpseudo}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Propose $\theta'\sim q(\theta'|\theta)$
\State Propose $y \sim \pi(y|x,\theta)$
and $y' \sim p_{\theta'}(y')= f_{\theta'}(y')/Z(\theta') $
\State Compute $$a = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}$$
\State Draw $r \sim \text{Uniform}[0,1]$
\State \textbf{If} $(r< a)$ \textbf{then} set $\theta = \theta'$.
\EndFor
\end{algorithmic}
\end{algorithm}
Before proving that MPMC is a valid Monte Carlo algorithm, we first discuss the difference between PMC and MPMC. For PMC, in each step there is an attempted move from $(y,\theta)$ to $(y', \theta')$ according to the kernel
$q(\theta'|\theta) p_{\theta'}(y')$, and the acceptance ratio $a$ is calculated according to the M-H algorithm; randomness therefore occurs only in the process of generating a new proposal $(y',\theta')$.
For MPMC, however, in each step the attempted move is proposed from $\theta$ to $\theta'$ according to $q(\theta'|\theta)$. Then two auxiliary variables $(y, y') \sim \pi(y|x,\theta)\, p_{\theta'}(y')$ are generated, and the corresponding acceptance ratio depends on the values of the random variables $y, y'$. Randomness comes not only from the proposal step, but also from the procedure of calculating the acceptance ratio, which differs from PMC and other standard M-H type algorithms. M-H algorithms with randomized acceptance ratios will be referred to as \textbf{randomized Markov chain Monte Carlo (RMCMC)}. MPMC and the exchange algorithm are two typical examples of RMCMC algorithms.
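A single MPMC iteration can be sketched as follows (an illustrative implementation of Algorithm \ref{alg:mpseudo}, with all densities and samplers supplied by the caller; the Gaussian usage example, where $Z(\theta)$ happens to be constant in $\theta$, is our own toy check rather than an example from this paper):

```python
import math
import random

def mpmc_step(theta, x, f, prior, q_sample, q_dens, aux_sample, aux_dens,
              sample_model, rng):
    """One MPMC iteration. f(theta, x) is the unnormalized likelihood,
    q_dens(a, b) = q(a|b), aux_* give pi(y|x,theta), and sample_model(theta)
    is assumed to draw an exact sample from p_theta."""
    theta_p = q_sample(theta)
    y = aux_sample(x, theta)        # y  ~ pi(y | x, theta)
    y_p = sample_model(theta_p)     # y' ~ p_{theta'}
    a = (prior(theta_p) * q_dens(theta, theta_p) * f(theta_p, x)
         / (prior(theta) * q_dens(theta_p, theta) * f(theta, x))
         * f(theta, y) * aux_dens(y_p, x, theta_p)
         / (f(theta_p, y_p) * aux_dens(y, x, theta)))
    return theta_p if rng.random() < a else theta

# Toy check: f_theta(x) = exp(-(x - theta)^2 / 2), flat prior, one observation
# x = 1, so the posterior is N(1, 1). Here Z(theta) = sqrt(2*pi) is constant,
# which lets us validate the chain against a known posterior.
rng = random.Random(0)
norm = lambda y, m: math.exp(-(y - m) ** 2 / 2)
theta, chain = 0.0, []
for _ in range(20000):
    theta = mpmc_step(
        theta, x=1.0, f=lambda t, z: norm(z, t), prior=lambda t: 1.0,
        q_sample=lambda t: t + rng.uniform(-1.0, 1.0),
        q_dens=lambda a, b: 0.5,                    # symmetric uniform proposal
        aux_sample=lambda z, t: rng.gauss(t, 1.0),  # pi(y|x,theta) = N(theta, 1)
        aux_dens=lambda y, z, t: norm(y, t),
        sample_model=lambda t: rng.gauss(t, 1.0), rng=rng)
    chain.append(theta)
print(sum(chain) / len(chain))  # close to the posterior mean 1.0
```

Note that in this toy check the auxiliary density equals $p_\theta$ itself, the zero-variance choice discussed later.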
Now we prove the basic property of MPMC:
\begin{lemma} \label{lemma: detailed balance for MPMC}
The Modified Pseudo-marginal Monte Carlo Algorithm satisfies the detailed balance, i.e.,
\[
\pi(\theta |x)\, p(\theta \rightarrow \theta') = \pi(\theta'|x)\, p(\theta' \rightarrow \theta)
\]
\end{lemma}
\begin{proof}
First we calculate the transition probability $p(\theta\rightarrow \theta')$. Note that to move from $\theta$ to $\theta'$, one has to first propose $\theta'$ according to the density $q(\theta'|\theta)$ and then accept the proposal; since the acceptance probability depends on $y$ and $y'$, we need to take the expectation with respect to $(y,y')$. Therefore,
\begin{align*}
p(\theta \rightarrow \theta') & = q(\theta'|\theta) \mathbb{E}_{y,y'}\min\Bigg\{\bigg( \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}\bigg), 1 \Bigg\} \\
& = q(\theta'|\theta)\int \int \min\Bigg\{\bigg( \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}\bigg), 1 \Bigg\} p_{\theta'} (y')\pi(y|x,\theta) d y d y'\\
& = \int \int \min\Bigg\{ \frac{\pi(\theta')q(\theta|\theta')p_{\theta'}(x)}{\pi(\theta) p_{\theta}(x)}\cdot p_\theta(y)\pi(y'|x,\theta'), p_{\theta'}(y') q(\theta'|\theta)\pi(y|x,\theta) \Bigg\} d y d y'.\\
\end{align*}
So we have
\begin{align*}
\pi(\theta |x) p(\theta \rightarrow \theta') & = \frac{1}{\pi(x)} \pi(\theta)\, p_\theta(x)\, p(\theta \rightarrow \theta') \\
& = \frac 1 {\pi(x)} \int \int \min\Bigg\{ \pi(\theta')p_{\theta'}(x)q(\theta|\theta') p_\theta(y)\pi(y'|x,\theta'),\\
&\qquad \qquad\pi(\theta) p_{\theta}(x) q(\theta'|\theta)p_{\theta'}(y') \pi(y|x,\theta) \Bigg\} dy dy' \\
& = \frac 1 {\pi(x)} \int \int \min\Bigg\{ \pi(\theta')p_{\theta'}(x)q(\theta|\theta') p_\theta(y')\pi(y|x,\theta'),\\
&\qquad\qquad\pi(\theta) p_{\theta}(x) q(\theta'|\theta)p_{\theta'}(y) \pi(y'|x,\theta) \Bigg\} dy dy' \\
& = \pi(\theta'|x)p(\theta'\rightarrow \theta )
\end{align*}
The last equality comes from the fact that for any integrable function $f(y,y')$,
\[
\int \int f(y, y')\, dy\, dy' = \int \int f(y',y)\, dy\, dy'.
\]
\end{proof}
Lemma \ref{lemma: detailed balance for MPMC} implies MPMC constructs a reversible Markov chain with stationary distribution $\pi(\theta|x)$.
It is not hard to show (essentially via a one-step Jensen's inequality argument) that, compared with the original M-H chain (assuming the normalizing constant were known), the MPMC chain is less statistically efficient: for all $\theta, \theta' \in \Theta$, $a_{MPMC}(\theta, \theta') \leq a_{MH}(\theta, \theta')$, where $a$ denotes the expected acceptance probability. This places the MPMC chain below the M-H chain in Peskun's ordering \cite{peskun1973optimum}, although the M-H chain is not achievable here in our case.
The convergence properties of MPMC require more careful analysis. Nicholls et al. give useful convergence results for randomized MCMC \cite{nicholls2012coupled}. In our case, briefly speaking, if the original M-H chain is uniformly ergodic or geometrically ergodic and the ratio \[\frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}\]
is bounded above and below by positive constants, then so is the MPMC chain.
\subsection{Single Variable Exchange Algorithm (SVE)}\label{subsec:exchange}
The exchange algorithm is another RMCMC algorithm, which is similar to MPMC. However, the acceptance ratio is calculated (estimated) in a different way.
\begin{algorithm}
\caption{Exchange Algorithm}\label{alg:Exchange}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Generate $\theta'\sim q(\theta'|\theta)$
\State Generate an auxiliary variable $w\sim f_{\theta'}(w)/Z(\theta')$
\State Compute $$a = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(w)}{f_{\theta'}(w)}$$
\State Draw $r \sim \text{Uniform}[0,1]$
\State \textbf{If} $(r< a)$ \textbf{then} set $\theta =\theta'$
\EndFor
\end{algorithmic}
\end{algorithm}
For SVE, in each step there is an attempted move from $\theta$ to $\theta'$ according to the same transition kernel $q(\theta'|\theta)$, however, SVE only generates one auxiliary variable $w \sim p_{\theta'}(w)$ and the acceptance ratio depends on $w$. SVE also preserves detailed balance.
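A single SVE iteration can be sketched analogously (our own illustrative code; in the Gaussian toy check below, $f_\theta(x) = \exp(-(x-\theta)^2/2)$ has a normalizing constant that is actually constant in $\theta$, so the chain can be validated against a known posterior):

```python
import math
import random

def sve_step(theta, x, f, prior, q_sample, q_dens, sample_model, rng):
    """One exchange-algorithm iteration. f(theta, x) is the unnormalized
    likelihood, q_dens(a, b) = q(a|b), and sample_model(theta) is assumed to
    draw an exact sample w ~ p_theta."""
    theta_p = q_sample(theta)
    w = sample_model(theta_p)  # auxiliary variable w ~ p_{theta'}
    a = (prior(theta_p) * q_dens(theta, theta_p) * f(theta_p, x)
         / (prior(theta) * q_dens(theta_p, theta) * f(theta, x))
         * f(theta, w) / f(theta_p, w))
    return theta_p if rng.random() < a else theta

# Toy check: flat prior, one observation x = 1, so the posterior is N(1, 1).
rng = random.Random(0)
norm = lambda y, m: math.exp(-(y - m) ** 2 / 2)
theta, chain = 0.0, []
for _ in range(20000):
    theta = sve_step(theta, x=1.0, f=lambda t, z: norm(z, t),
                     prior=lambda t: 1.0,
                     q_sample=lambda t: t + rng.uniform(-1.0, 1.0),
                     q_dens=lambda a, b: 0.5,  # symmetric uniform proposal
                     sample_model=lambda t: rng.gauss(t, 1.0), rng=rng)
    chain.append(theta)
print(sum(chain) / len(chain))  # close to the posterior mean 1.0
```

Compared with the MPMC sketch, SVE needs only one auxiliary draw per iteration and no user-specified auxiliary density.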
\begin{lemma} \label{lemma: detailed balance for SVE}
The Single variable exchange algorithm satisfies the detailed balance, i.e.,
\[
\pi(\theta |x)\, p(\theta \rightarrow \theta') = \pi(\theta'|x)\, p(\theta' \rightarrow \theta)
\]
\end{lemma}
The proof is very similar to that of Lemma \ref{lemma: detailed balance for MPMC} and can be found in \cite{diaconis2018bayesian}.
Similar results on Peskun's ordering and convergence rates can be established for SVE, but they are omitted here as they are not the main focus of this paper.
\newpage
\subsection{MPMC versus SVE: a statistical point of view}
In the abstract of Murray's exchange algorithm paper \cite{murray2012mcmc}, the authors claimed that SVE achieves a better acceptance probability than PMC, and justified this claim by numerical simulation. This motivates us to raise the following question:
\begin{itemize}
\item Is it possible to compare PMC and SVE theoretically?
\end{itemize}
As the state spaces of PMC and SVE are different, it makes more sense to compare SVE with MPMC. In this part, we provide a statistical point of view on SVE and MPMC, which also provides intuition for later sections.
Recall that in the standard M-H algorithm, given the stationary distribution $\pi(\theta|x) \propto \pi(\theta) \frac{f_\theta(x)}{Z(\theta)}$ and transition kernel $q(\theta'|\theta)$, the acceptance ratio is
\[
a = \frac{\pi(\theta')f_{\theta'}(x)q(\theta|\theta')}{\pi(\theta)f_{\theta}(x)q(\theta'|\theta)}\frac{Z(\theta)}{Z(\theta')}.
\]
All the terms can be computed except the ratio of unknown normalizing constants $\frac{Z(\theta)}{Z(\theta')}$. The obvious idea is to find an estimator of $Z(\theta)/Z(\theta')$ and plug it into the expression for the acceptance ratio. This is widely used in practice; however, as mentioned in Section \ref{sec:introduction}, such estimators without further constraints break detailed balance, and the corresponding Markov chain is not guaranteed to converge to the desired stationary distribution. Heuristically speaking, the idea of the two RMCMC algorithms (MPMC and SVE) is to find a `good' estimator of $\frac{Z(\theta)}{Z(\theta')}$. The word `good' here means the estimator should preserve detailed balance of the resulting chain. It will soon be clear that the only difference between MPMC and SVE is that they use different estimators (denoted by $\hat{a}_{\text{MPMC}}$ and $\hat{a}_{\text{SVE}}$, respectively) to estimate the acceptance ratio $a$.
To be specific, in MPMC, the ratio
$\frac{Z(\theta)}{Z(\theta')}$ is estimated by:
\begin{align*}
\frac{f_\theta(y) \pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)} \qquad \text{where} \quad (y,y')|\theta,\theta' \sim \pi(y|x,\theta)\cdot p_{\theta'}(y').
\end{align*}
Therefore the resulting randomized acceptance ratio is given by:
\[
\hat{a}_{\text{MPMC}} = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)}.
\]
$\hat{a}_{\text{MPMC}}$ is unbiased since
\begin{align*}
\mathbb{E}_{(y,y')}\Bigg[\frac{f_\theta(y) \pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)} \Bigg] & = \int \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)} \pi(y|x,\theta)\cdot p_{\theta'}(y') dy dy' \\
& = \frac{Z(\theta)}{Z(\theta')} \int p_\theta(y) \pi(y'|x,\theta') dy dy'\\
& = \frac{Z(\theta)}{Z(\theta')} ,
\end{align*}
and, as we proved in Lemma \ref{lemma: detailed balance for MPMC}, unbiasedness preserves the detailed balance of MPMC, which guarantees the asymptotic exactness of the MPMC algorithm.
Similarly, for SVE, the ratio $\frac{Z(\theta)}{Z(\theta')}$ is estimated by
\begin{align*}
\frac{f_\theta(w)}{f_{\theta'}(w)} \qquad\text{where}\quad w|\theta,\theta'\sim p_{\theta'}(w).
\end{align*}
Therefore the resulting randomized acceptance ratio is given by:
$$\hat{a}_{\text{SVE}} = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(w)}{f_{\theta'}(w)}$$
$\hat{a}_{\text{SVE}} $ is clearly unbiased since
\begin{align*}
\mathbb{E}_w\Bigg[\frac{f_\theta(w)}{f_{\theta'}(w)}\Bigg] & = \int \frac{f_\theta(w)}{f_{\theta'}(w)} p_{\theta'}(w) dw \\
& = \frac{Z(\theta)}{Z(\theta')} \int p_{\theta}(w) dw \\
& = \frac{Z(\theta)}{Z(\theta')},
\end{align*}
and again, unbiasedness guarantees detailed balance.
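The unbiasedness of $\hat{a}_{\text{SVE}}$ is easy to check numerically; the following toy example (an unnormalized likelihood on $\{0,1\}$ of our own choosing, not from the paper) estimates $\mathbb{E}_w[f_\theta(w)/f_{\theta'}(w)]$ by simulation and compares it with $Z(\theta)/Z(\theta')$:

```python
import random

# Unnormalized likelihood on {0, 1}: f_theta(1) = theta, f_theta(0) = 2(1 - theta),
# so Z(theta) = 2 - theta (treated as unknown by the estimator itself).
def f(theta, w):
    return theta if w == 1 else 2.0 * (1.0 - theta)

def Z(theta):
    return 2.0 - theta

theta, theta_p = 0.7, 0.4
rng = random.Random(0)
n, est = 200000, 0.0
for _ in range(n):
    # Draw w ~ p_{theta'} = f_{theta'} / Z(theta').
    w = 1 if rng.random() < f(theta_p, 1) / Z(theta_p) else 0
    est += f(theta, w) / f(theta_p, w)
est /= n
print(est, Z(theta) / Z(theta_p))  # both close to 1.3 / 1.6 = 0.8125
```

The simulated mean of the estimator agrees with the true ratio of normalizing constants, as the derivation above predicts.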
\begin{remark}
This does not necessarily mean that unbiasedness is a necessary and sufficient condition for designing an estimator of $\frac{Z(\theta)}{Z(\theta')}$.
\end{remark}
To summarize, given an attempted move from $\theta$ to $\theta'$, the acceptance ratio $a(\theta,\theta')$ is estimated by two unbiased estimators, $\hat{a}_{\text{MPMC}}$ and $\hat{a}_{\text{SVE}}$. The acceptance probability $r(\theta, \theta')$ is then estimated by
\[
\hat{r}_{\text{MPMC}} = \min\{\hat{a}_{\text{MPMC}}, 1\}
\]
and
\[
\hat{r}_{\text{SVE}} = \min\{\hat{a}_{\text{SVE}}, 1\}
\]
respectively.
Therefore, comparing the two algorithms is equivalent to comparing the performance of the two estimators. As both $\hat{a}_{\text{MPMC}}$ and $\hat{a}_{\text{SVE}}$ are unbiased, the performance depends only on the variance of the two estimators. The following theorem characterizes the relative mean squared error of both estimators in terms of Pearson chi-square distances $\chi_P$.
\begin{theorem} \label{thm: chi-square characterization}
Let
\begin{align*}
\text{RE}(\hat a) \doteq \frac{\mathbb{E}\,(\hat a - a)^2}{a^2}
\end{align*}
denote the relative mean squared error of the estimator $\hat a$. Then we have
\begin{align*}
\text{RE}(\hat{a}_{\text{SVE}}) = \chi_P(p_{\theta'}, p_\theta)
\end{align*}
and
\begin{align*}
\text{RE}(\hat{a}_{\text{MPMC}}) = \chi_P\big(p_{\theta'}(y')\pi(y|x,\theta), p_\theta(y)\pi(y'|x,\theta')\big),
\end{align*}
where $$\chi_P(f, g) = \int \frac{(f(x) - g(x))^2}{f(x)}\, dx.$$
\end{theorem}
\begin{proof}
For SVE, we have
\begin{align*}
\text{RE}(\hat{a}_{\text{SVE}}) & = \frac{\mathbb{E}\,(\hat{a}_{\text{SVE}} - a)^2}{a^2} \\
& = \int \frac{\Big(f_\theta(w)/f_{\theta'}(w) - Z(\theta)/ Z(\theta')\Big)^2 } {\Big(Z(\theta)/ Z(\theta')\Big)^2} p_{\theta'}(w) dw \\
& = \int (\frac{p_\theta(w)}{p_{\theta'}(w)} - 1 )^2 p_{\theta'}(w) dw \\
& = \int \frac{\big(p_\theta(w) - p_{\theta'}(w)\big)^2}{p_{\theta'}(w)} dw\\
& = \chi_P(p_{\theta'}, p_\theta).
\end{align*}
Similarly, for MPMC
\begin{align*}
\text{RE}(\hat{a}_{\text{MPMC}}) & = \frac{\mathbb{E}\,(\hat{a}_{\text{MPMC}} - a)^2}{a^2} \\
& = \int \frac{\Big(f_\theta(y) \pi(y'|x,\theta')/f_{\theta'}(y') \pi(y|x,\theta) - Z(\theta)/ Z(\theta')\Big)^2 } {\Big(Z(\theta)/ Z(\theta')\Big)^2} f_{\theta'}(y') \pi(y|x,\theta) dy dy'\\
& = \int (\frac{p_\theta(y) \pi(y'|x,\theta')}{p_{\theta'}(y')\pi(y|x,\theta)} -1)^2 p_{\theta'}(y')\pi(y|x,\theta)dy dy'\\
& = \chi_P\big(p_{\theta'}(y')\pi(y|x,\theta), p_\theta(y)\pi(y'|x,\theta')\big),
\end{align*}
as desired.
\end{proof}
Theorem \ref{thm: chi-square characterization} reveals a significant difference between MPMC and SVE. For MPMC, the choice of $\pi(\cdot|x,\theta)$ influences the corresponding Pearson chi-square distance, and thus has a strong impact on the efficiency of the Markov chain. The optimal choice of $\pi(\cdot|x, \theta)$ is clearly $p_\theta(\cdot)$ itself: such an estimator has zero variance and makes Algorithms \ref{alg:M-H} and \ref{alg:mpseudo} agree, but it is impractical in our case. In practice, this suggests choosing $\pi(\cdot|x,\theta)$ as close to $p_\theta$ as possible.
For SVE, the variance is controlled by the Pearson chi-square distance between $p_\theta$ and $p_{\theta'}$.
Roughly speaking, when $p_\theta$ is close to $p_{\theta'}$ (which is often equivalent to $\theta$ is close to $\theta'$), the SVE estimator tends to perform well. However, when $p_\theta$ is far away from $p_{\theta'}$, the SVE estimator may perform poorly due to the large variance.
Therefore, with a properly chosen $\pi(\cdot|x,\theta)$, it is reasonable to have the following heuristics:
\begin{itemize}
\item When the proposed $\theta'$ is close to $\theta$, the SVE estimator would have better performance, resulting in a higher acceptance probability.
\item When $\theta'$ is far away from $\theta$, the MPMC estimator would outperform SVE, and one should choose MPMC if possible.
\end{itemize}
The above heuristics suggest that it is not possible to conclude that one algorithm dominates the other in all cases. In the next section, two concrete examples are provided to justify this intuition. Meanwhile, in each step of the M-H algorithm, a new $\theta'$ is proposed based on the previous $\theta$; this motivates us to find a method for choosing between MPMC and SVE adaptively, which is described in detail in Section \ref{sec: Pseudo-marginal Exchange}.
\begin{remark}
In this paper, our main focus is the choice between MPMC and SVE. However, the methodology proposed in Section \ref{sec: Pseudo-marginal Exchange} is general and applies to all RMCMC algorithms with the same transition kernel. It would be very interesting to construct other RMCMC algorithms which preserve detailed balance.
\end{remark}
\section{Two concrete examples}\label{sec: two examples}
In this section we give two concrete examples and argue that it is impossible to claim that one algorithm always works better than the other.
\subsection{The first example}\label{eg:first}
Let $\mathcal X$ be the space with two points, i.e., $\mathcal X = \{0,1\}$, so that the probability measures on $\mathcal X$ are Bernoulli distributions. Let the parameter space $\Theta$ also consist of two points, $\Theta = \{a =0.7, b = 0.6\}$. Here $\mathbb{P}_{a}$ corresponds to the probability distribution on $\mathcal X$:
\[
\mathbb{P}_a(X = 1) = 0.7 \qquad \mathbb{P}_a(X= 0) = 0.3.
\]
Similarly $\mathbb{P}_b$ corresponds to the probability distribution on $\mathcal X$:
\[
\mathbb{P}_b(X = 1) = 0.6 \qquad \mathbb{P}_b(X= 0) = 0.4.
\]
The prior distribution is chosen to be the uniform distribution over $\Theta$, and suppose the data $x$ equals $1$. A simple calculation then gives the true posterior density:
\[
\mathbb{P} (\theta = a| x) = \frac 7 {13} \qquad \mathbb{P} (\theta = b|x) = \frac 6 {13}.
\]
For both algorithms, the transition probability $q(\cdot|\cdot)$ is the uniform distribution over $\Theta$. To simplify the calculation, we choose the uniform distribution over $\mathcal X$ as the conditional distribution $\pi(y|x,\theta)$, which is independent of the data and the parameter.
Now we are ready to calculate the transition probabilities of both algorithms. We will use $\mathbb{P}_{\text{SVE}}$ to denote the transition probability of the exchange algorithm, and $\mathbb{P}_{\text{MPMC}}$ to denote the transition probability of the modified Pseudo-marginal Monte Carlo. The transition probabilities can be calculated as follows:
\begin{align*}
\mathbb{P}_{\text{SVE}}(\theta' =b| \theta =a) &= q(b|a)\cdot [ \mathbb{P}_b(w = 0) \cdot \min \{\frac{\mathbb{P}_b(x)}{\mathbb{P}_a(x)}\cdot \frac{\mathbb{P}_a(w)}{\mathbb{P}_b(w)} ,1\} + \mathbb{P}_b(w = 1) \cdot \min \{\frac{\mathbb{P}_b(x)}{\mathbb{P}_a(x)}\cdot \frac{\mathbb{P}_a(w)}{\mathbb{P}_b(w)} ,1\}] \\
& = \frac 12 \cdot [0.4 \cdot \min\{\frac 67 \times\frac 34, 1\} + 0.6 \cdot \min\{\frac 67 \times\frac 76, 1\}] \\
& = \frac 37
\end{align*}
\begin{align*}
\mathbb{P}_{\text{SVE}}(\theta' =a| \theta =b) &= q(a|b)\cdot [ \mathbb{P}_a(w = 0) \cdot \min \{\frac{\mathbb{P}_a(x)}{\mathbb{P}_b(x)}\cdot \frac{\mathbb{P}_b(w)}{\mathbb{P}_a(w)} ,1\} + \mathbb{P}_a(w = 1) \cdot \min \{\frac{\mathbb{P}_a(x)}{\mathbb{P}_b(x)}\cdot \frac{\mathbb{P}_b(w)}{\mathbb{P}_a(w)} ,1\}] \\
& = \frac 12 \cdot [0.3 \cdot \min\{\frac 76 \times\frac 43, 1\} + 0.7 \cdot \min\{\frac 76 \times\frac 67, 1\}] \\
& = \frac 12
\end{align*}
Similarly,
\begin{align*}
\mathbb{P}_{\text{MPMC}} (\theta' =b| \theta =a) & =q(b|a)\cdot \mathbb{E}_{y,y'}\big[\min\{ \frac{\mathbb{P}_b(x)}{\mathbb{P}_{a}(x)}\cdot \frac{\mathbb{P}_a(y)}{\mathbb{P}_{b}(y')} , 1 \}\big] \\
& = \frac 12 \times (\frac 3 {10} \times \frac 67 \times \frac 76 + \frac 3{10} \times \frac 67 \times \frac 36 + \frac 15\times \min\{\frac 67 \times \frac 74, 1\} + \frac 1 {5 }\times \frac 67 \times \frac 34 )\\
& = \frac {53}{140}
\end{align*}
\begin{align*}
\mathbb{P}_{\text{MPMC}} (\theta' =a| \theta =b) & =q(a|b)\cdot \mathbb{E}_{y,y'}\big[\min\{ \frac{\mathbb{P}_a(x)}{\mathbb{P}_{b}(x)}\cdot \frac{\mathbb{P}_b(y)}{\mathbb{P}_{a}(y')} , 1 \}\big] \\
& = \frac 12 \times (\frac 7 {20} \times \frac 76 \times \frac 67 + \frac 7{20} \times \frac 76 \times \frac 47 + \frac 3 {20 }\times \min\{\frac 76 \times \frac 63, 1\} + \frac 3 {20 }\times \min\{\frac 76 \times \frac 43, 1\})\\
& = \frac {53}{120}.
\end{align*}
Therefore we have $\mathbb{P}_{\text{MPMC}}(\theta'|\theta) < \mathbb{P}_{\text{SVE}}(\theta'|\theta)$ for any $\theta' \neq \theta$, so the exchange algorithm has the higher transition probability and will converge faster.
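Since the state and parameter spaces are finite, the transition probabilities above can be double-checked by brute-force enumeration in exact rational arithmetic. The following Python sketch (the helper names are ours, not part of the algorithms' specification) reproduces the SVE values and confirms the ordering between the two chains.

```python
from fractions import Fraction as F

# Example 1: X = {0,1}, data x = 1, uniform proposal q over Theta and
# uniform auxiliary density pi(y|x,theta) = 1/2, as chosen in the text.
Pa = {0: F(3, 10), 1: F(7, 10)}
Pb = {0: F(2, 5), 1: F(3, 5)}
x, q, u = 1, F(1, 2), F(1, 2)

def sve(p_cur, p_prop):
    # w ~ p_prop; ratio (p_prop(x)/p_cur(x)) * (p_cur(w)/p_prop(w))
    lik = p_prop[x] / p_cur[x]
    return q * sum(p_prop[w] * min(lik * p_cur[w] / p_prop[w], F(1))
                   for w in p_prop)

def mpmc(p_cur, p_prop):
    # y ~ pi(.|x,theta) (uniform), y' ~ p_prop; the uniform auxiliary
    # densities cancel in the acceptance ratio.
    lik = p_prop[x] / p_cur[x]
    return q * sum(u * p_prop[yp] * min(lik * p_cur[y] / p_prop[yp], F(1))
                   for y in p_cur for yp in p_prop)

assert sve(Pa, Pb) == F(3, 7) and sve(Pb, Pa) == F(1, 2)
assert mpmc(Pa, Pb) < sve(Pa, Pb) and mpmc(Pb, Pa) < sve(Pb, Pa)
```

The exact fractions avoid any floating-point ambiguity when comparing the two transition probabilities.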
\subsection{The second example}\label{eg:second}
The second example is designed to show that Pseudo-marginal Monte Carlo may perform better than the exchange algorithm.
In this example we take $\mathcal X$ to be the space with three points, i.e., $\mathcal X = \{0,1, 2\}$, while the parameter space $\Theta$ again consists of two points, $\Theta = \{a, b \}$. Here $\mathbb{P}_{a}$ corresponds to the probability distribution on $\mathcal X$:
\[
\mathbb{P}_a(X = 0) = 0.1 \qquad \mathbb{P}_a(X= 1) = 0.8 \qquad \mathbb{P}_a(X = 2) = 0.1.
\]
Similarly $\mathbb{P}_b$ corresponds to the probability distribution on $\mathcal X$:
\[
\mathbb{P}_b(X = 0) = 0.8 \qquad \mathbb{P}_b(X= 1) = 0.1 \qquad \mathbb{P}_b(X = 2) = 0.1.
\]
The prior distribution is chosen to be the uniform distribution over $\Theta$, and suppose the data $x$ equals $2$. A simple calculation then gives the true posterior density:
\[
\mathbb{P} (\theta = a| x) = \frac 12 \qquad \mathbb{P} (\theta = b|x) = \frac 12.
\]
As in the previous example, the transition probability $q(\cdot|\cdot)$ is the uniform distribution over $\Theta$. The conditional distribution $\pi(y|x,\theta)$ is again the uniform distribution over $\mathcal X$, which is independent of the data and the parameter.
The last thing to specify is how the variable $y$ is initialized in the Pseudo-marginal Monte Carlo algorithm: we simply draw $y$ with $\mathbb{P}(y = 0) =\mathbb{P} (y = 1) = \mathbb{P}(y = 2) = \frac 13$.
In this setting, we are ready to calculate the transition probabilities of both algorithms. As before, we use $\mathbb{P}_{\text{SVE}}$ to denote the transition probability of the exchange algorithm, and $\mathbb{P}_{\text{MPMC}}$ to denote the transition probability of the Pseudo-marginal Monte Carlo. The transition probabilities can be calculated as follows:
\begin{align*}
\mathbb{P}_{\text{SVE}}(\theta' =b| \theta =a) &= q(b|a)\cdot \mathbb{E} _w\Bigg[\min \{\frac{\mathbb{P}_b(x)}{\mathbb{P}_a(x)}\cdot \frac{\mathbb{P}_a(w)}{\mathbb{P}_b(w)} ,1\}\Bigg]\\
& = \frac 12 \cdot [0.8 \cdot \min\{\frac 18 , 1\} + 0.1 \cdot \min\{8, 1\} + 0.1] \\
& = \frac{3}{20}
\end{align*}
By symmetry, we also have
\[
\mathbb{P}_{\text{SVE}}(\theta' =a| \theta =b) = \frac {3}{20}.
\]
Similarly,
\begin{align*}
\mathbb{P}_{\text{MPMC}} (\theta' =a| \theta =b) & =q(a|b)\cdot \mathbb{E}_{y,y'}\big[\min\{ \frac{\mathbb{P}_a(x)}{\mathbb{P}_{b}(x)}\cdot \frac{\mathbb{P}_b(y)}{\mathbb{P}_{a}(y')} , 1 \}\big] \\
& = \frac 12 \times (0.1 + 0.1 + 0.8\times \frac 13 + 0.8 \times \frac 13 \times \frac 18 + 0.8 \times \frac 13 \times \frac 18 )\\
& = \frac {4}{15}.
\end{align*}
By symmetry, we also have
\[
\mathbb{P}_{\text{MPMC}}(\theta' =b| \theta =a) = \frac {4}{15}.
\]
Therefore we have $\mathbb{P}_{\text{MPMC}}(\theta'|\theta) > \mathbb{P}_{\text{SVE}}(\theta'|\theta)$ for any $\theta' \neq \theta$, so the Pseudo-marginal algorithm has the higher transition probability and will converge faster.
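As with the first example, the finite state space allows an exact check. The sketch below (helper names are ours) confirms both the values $\frac{3}{20}$ and $\frac{4}{15}$ and the reversed ordering.

```python
from fractions import Fraction as F

# Example 2: X = {0,1,2}, data x = 2, uniform proposal over Theta and
# uniform auxiliary density pi(y|x,theta) = 1/3, as in the text.
Pa = {0: F(1, 10), 1: F(4, 5), 2: F(1, 10)}
Pb = {0: F(4, 5), 1: F(1, 10), 2: F(1, 10)}
x, q, u = 2, F(1, 2), F(1, 3)

def sve(p_cur, p_prop):
    # w ~ p_prop; single-sample exchange acceptance ratio
    lik = p_prop[x] / p_cur[x]
    return q * sum(p_prop[w] * min(lik * p_cur[w] / p_prop[w], F(1))
                   for w in p_prop)

def mpmc(p_cur, p_prop):
    # y ~ uniform auxiliary density, y' ~ p_prop; uniform densities cancel
    lik = p_prop[x] / p_cur[x]
    return q * sum(u * p_prop[yp] * min(lik * p_cur[y] / p_prop[yp], F(1))
                   for y in p_cur for yp in p_prop)

assert sve(Pa, Pb) == F(3, 20) == sve(Pb, Pa)
assert mpmc(Pa, Pb) == F(4, 15) == mpmc(Pb, Pa)
assert mpmc(Pa, Pb) > sve(Pa, Pb)
```

Here MPMC dominates for both directions of the move, the mirror image of the first example.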
\subsection{Intuition and discussion}
In this part we briefly discuss the intuition behind the two examples above. Compared with the ideal Metropolis-Hastings algorithm, the difficulty comes from the unknown ratio of normalizing constants
\[
\frac{Z(\theta)}{Z(\theta')}.
\]
Both algorithms generate auxiliary variables and use them to construct an estimator of the ratio of normalizing constants; the accuracy of this estimator determines the performance of the algorithm. The main difference between Pseudo-marginal Monte Carlo and the exchange algorithm is that the exchange algorithm uses the estimator
\[
\frac{f_\theta(w)}{f_{\theta'}(w)}
\]
to estimate the ratio directly, whereas Pseudo-marginal Monte Carlo uses $f_\theta(y)/\pi(y|x,\theta)$ to estimate $Z(\theta)$, uses $f_{\theta'}(y')/\pi(y'|x,\theta')$ to estimate $Z(\theta')$, and then takes the quotient as an estimator of $\frac{Z(\theta)}{Z(\theta')}$. Therefore, when the probability measures $\pi_{\theta}$ and $\pi_{\theta'}$ differ a lot (the case in the second example), the exchange algorithm may perform poorly. But when the two measures are close, the exchange algorithm may perform better than Pseudo-marginal Monte Carlo.
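This difference can be made concrete: on a finite state space both building blocks are exactly unbiased for the quantities just described. The toy unnormalized family below ($f_\theta(w) = \theta^w$ on three points) is our own illustration, not a model from the text.

```python
from fractions import Fraction as F

# Hypothetical unnormalized family on X = {0,1,2}: f_theta(w) = theta^w.
X = (0, 1, 2)
def f(theta, w): return F(theta) ** w
def Z(theta): return sum(f(theta, w) for w in X)

th, thp = 2, 3   # theta and theta'

# Exchange (SVE): with w ~ p_{theta'}, the single draw f_theta(w)/f_theta'(w)
# is unbiased for the ratio Z(theta)/Z(theta') directly.
sve_mean = sum(f(thp, w) / Z(thp) * f(th, w) / f(thp, w) for w in X)
assert sve_mean == Z(th) / Z(thp)

# PMC: with y ~ pi (here uniform), f_theta(y)/pi(y) is unbiased for the
# single constant Z(theta); PMC then takes a quotient of two such estimates.
u = F(1, 3)
pmc_mean = sum(u * (f(th, y) / u) for y in X)
assert pmc_mean == Z(th)
```

The SVE estimator targets the ratio in one shot, while PMC estimates the numerator and denominator separately, which is why their variances behave differently as $p_\theta$ and $p_{\theta'}$ move apart.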
In the two examples above, one algorithm dominates the other for all pairs $(\theta, \theta')$. In real situations, however, the parameter space $\Theta$ may be much larger than in our designed examples, so that usually there exist two regions $R_1, R_2 \subset \Theta\times \Theta$: in $R_1$, PMC performs better than SVE, while in $R_2$ the opposite holds. Therefore, given a proposed move from $\theta$ to $\theta'$, it is natural to ask the following question:
\begin{itemize}
\item Is it possible to find a `reasonable' way of choosing between PMC and SVE to improve the acceptance probability?
\end{itemize}
This question will be answered in the next section.
\section{MABMC: A Multi-armed Bandit MCMC Algorithm}\label{sec: Pseudo-marginal Exchange}
\subsection{How to choose between algorithms `legally'?} \label{subsec:decision rule}
Now we are ready to combine the two algorithms. To make things more general and concrete, the problem can be formulated as follows:
Given an attempted move from $\theta$ to $\theta'$, and two valid estimates of $a(\theta, \theta')$, denoted by $\hat{a}_1(\theta,\theta')$, $\hat{a}_2(\theta,\theta')$, can one find a decision rule $D(\theta,\theta') \in \{1,2\}$ such that the new Markov chain with transition probability $ q(\theta,\theta')\min\{\hat{a}_D(\theta,\theta'), 1\}$ still preserves detailed balance and has a higher acceptance probability than either algorithm?
It is worth mentioning that the seemingly obvious choice $\text{argmax}\{\hat{a}_1(\theta,\theta'), \hat{a}_2(\theta,\theta')\}$ is not valid, as this would break the detailed balance and thus the algorithm may not converge to the desired stationary distribution. It turns out that to preserve the detailed balance, the decision rule has to be `symmetric', i.e., $D(\theta, \theta') = D(\theta', \theta)$.
This problem is very similar to the multi-armed bandit problem in probability. In each iteration of the MCMC algorithm, an agent faces the choice between $\hat a_1$ and $\hat a_2$, with the goal of increasing the acceptance probability. Our interest is in designing a reasonable policy (i.e., a decision rule) that performs better than random guessing.
\begin{definition}[Valid ratio]\label{def:valid ratio}
Let $\pi(\theta)$ be a probability density on the parameter space $\Theta$, which may be of the form $\pi(\theta) = \frac{f(\theta)}{Z(\theta)}$ where $Z(\theta)$ is intractable. Let $q(\theta,\theta')$ be a transition kernel in the M-H algorithm. A (randomized) acceptance ratio $\hat{a} (\theta, \theta')$ is called \textbf{valid} if
it preserves the detailed balance with respect to stationary distribution $\pi(\theta)$, i.e.,
\begin{align*}
\pi(\theta) q(\theta,\theta') \mathbb{E}\min\{\hat{a}(\theta, \theta'), 1\} = \pi(\theta') q(\theta',\theta) \mathbb{E}\min\{\hat{a}(\theta', \theta), 1\}.
\end{align*}
\end{definition}
\begin{example}
The acceptance ratios introduced in the exchange algorithm (Alg \ref{alg:Exchange}) and the modified Pseudo-marginal Monte Carlo (Alg \ref{alg:mpseudo}) are both valid.
\end{example}
\begin{definition}[Valid decision rule]
Given the target stationary distribution $\pi(\theta)$, the transition kernel $q(\theta, \theta')$ and two valid acceptance ratios $\hat{a}_1(\theta, \theta')$, $\hat{a}_2(\theta, \theta')$, a decision rule $D: \Theta \times \Theta \rightarrow \{1, 2\}$ is called \textbf{valid} if the corresponding new acceptance ratio $\hat{a}_D (\theta, \theta')$ is valid.
\end{definition}
Intuitively, in each iteration of the M-H algorithm, given an attempted move from $\theta$ to $\theta'$, the decision rule $D(\theta, \theta')$ helps one choose between the two acceptance ratios adaptively, aiming for a higher acceptance probability while still preserving detailed balance. The decision rule is implicitly random, since the acceptance ratios $\hat{a}_1(\theta, \theta')$, $\hat{a}_2(\theta, \theta')$ are random.
The following example gives a simple, non-randomized decision rule.
\begin{example}[A simple valid decision rule]\label{eg: simple decision rule} The decision rule $D(\theta, \theta') \equiv 1$ is a valid decision rule. It corresponds to always choosing the first acceptance ratio for each attempted move. Similarly, $D(\theta, \theta') \equiv 2$ is also valid.
\end{example}
We can also define `Bayes' and `inadmissible' decision rules, in analogy with the corresponding notions in statistical decision theory.
\begin{definition}[Bayes decision rule]\label{def: optimal decision rule}
A valid decision rule $D$ is called \textbf{Bayes} if for any other valid decision rule $\tilde D$,
\[
\mathbb{E}{ \min \{\hat a_D, 1\}} \geq \mathbb{E}{ \min \{\hat a_{\tilde D}, 1\}},
\]
where the expectation is taken over all the randomness (including the transition kernel $q$, the decision procedure, and the estimation of the acceptance ratio $a$), integrating over $\Theta \times \Theta$.
\end{definition}
This decision rule is called `Bayes' since the stationary distribution $\pi(\theta|x)$ together with $q(\theta'|\theta)$ can be regarded as a prior distribution on $\Theta \times \Theta$, and the Bayes decision rule $D$ maximizes the average acceptance probability under this prior. Sometimes one may be interested in a pointwise relationship between two decision rules, which motivates the following definition.
\begin{definition}[Inadmissible decision rule]\label{def:inadmissible decision rule}
A decision rule $D_1$ is called `inadmissible' if there exists another decision rule $D_2$, such that for all $(\theta, \theta')\in \Theta \times \Theta$,
\[
\mathbb{E} \min\{\hat a_{D_1}(\theta,\theta') , 1 \} \leq \mathbb{E} \min\{\hat a_{D_2}(\theta,\theta') , 1 \},
\]
and the inequality is strict in at least one point $(\theta_0, \theta'_0)$. A decision rule which is not inadmissible is called an \textbf{admissible} decision rule.
\end{definition}
A decision rule being inadmissible means that one can find another decision rule which dominates it. It is clear that a Bayes decision rule is admissible, as by definition it maximizes the average acceptance probability; however, it is not necessarily true that the Bayes decision rule dominates every other decision rule at every point. In general, computing the Bayes decision rule is beyond our reach, so in this paper we focus on finding a reasonable rule which is better than random guessing, instead of finding the Bayes decision rule.
\begin{example}
In Section \ref{eg:first}, the decision rule $D \equiv$ PMC is inadmissible; in Section \ref{eg:second}, $D \equiv$ SVE is inadmissible.
\end{example}
It is not hard to generalize the definition of a valid decision rule to $n$ valid acceptance ratios; the formal definition is omitted.
\begin{example}[A max-min decision rule] \label{eg:max-min decision rule} The decision rule
\[
D(\theta,\theta') = \argmax_{i \in \{1,2\}} \min\{r_i(\theta,\theta') , r_i(\theta',\theta)\}
\]
where $r_i(\theta,\theta') = \mathbb{E} \min\{\hat{a_i}(\theta,\theta'),1\} $, is valid.
\end{example}
That the max-min decision rule is valid follows as a direct corollary of Theorem \ref{thm:valid decision rule}. This rule will be used as the decision rule for the Multi-armed Bandit MCMC (MABMC). The intuition behind the design will be explained later.
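For the two toy examples of Section \ref{sec: two examples}, the max-min rule can be evaluated in closed form (here using the exact expectations $r_i$ rather than single-sample estimates): it selects SVE in the first example and MPMC in the second, in line with the inadmissibility statements above. The sketch below uses our own helper names.

```python
from fractions import Fraction as F

def r_sve(P, x, cur, prop):
    # exact acceptance probability E[min{a_SVE, 1}] for the move cur -> prop
    lik = P[prop][x] / P[cur][x]
    return sum(P[prop][w] * min(lik * P[cur][w] / P[prop][w], F(1))
               for w in P[prop])

def r_mpmc(P, x, cur, prop, u):
    # y ~ uniform auxiliary density u, y' ~ P[prop]; uniform densities cancel
    lik = P[prop][x] / P[cur][x]
    return sum(u * P[prop][yp] * min(lik * P[cur][y] / P[prop][yp], F(1))
               for y in P[cur] for yp in P[prop])

def maxmin_decision(P, x, u):
    # argmax_i min{r_i(a,b), r_i(b,a)}, symmetric by construction
    r1 = min(r_mpmc(P, x, 'a', 'b', u), r_mpmc(P, x, 'b', 'a', u))
    r2 = min(r_sve(P, x, 'a', 'b'), r_sve(P, x, 'b', 'a'))
    return 'MPMC' if r1 > r2 else 'SVE'

ex1 = {'a': {0: F(3, 10), 1: F(7, 10)}, 'b': {0: F(2, 5), 1: F(3, 5)}}
ex2 = {'a': {0: F(1, 10), 1: F(4, 5), 2: F(1, 10)},
       'b': {0: F(4, 5), 1: F(1, 10), 2: F(1, 10)}}
assert maxmin_decision(ex1, 1, F(1, 2)) == 'SVE'
assert maxmin_decision(ex2, 2, F(1, 3)) == 'MPMC'
```

Because the decision depends only on the symmetric pair $\{r_i(\theta,\theta'), r_i(\theta',\theta)\}$, the same choice is made for the proposal and its reversal, which is exactly the symmetry required for validity.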
\begin{example}[An invalid decision rule] \label{eg:invalid decision} The decision rule
\begin{align*}
D(\theta,\theta') = \argmax_{i \in 1,2} \{\hat{a}_i(\theta,\theta')\}
\end{align*}
is not valid; as we will see later, the corresponding acceptance ratio $\hat{a}_D (\theta, \theta')$ is not valid.
\end{example}
\begin{theorem}\label{thm:valid decision rule}
The decision rule $D$ is valid if and only if it is `symmetric', i.e.,
\[
D(\theta,\theta') = D(\theta', \theta) \quad \text{for all} \quad (\theta, \theta') \in K,
\]
where $K$ is a symmetric subset of $\Theta\times \Theta$, defined by
\[
K \doteq \{(\theta,\theta')\in \Theta\times \Theta: r_1(\theta,\theta')\neq r_2(\theta,\theta') \quad \text{or}\quad r_1(\theta',\theta) \neq r_2(\theta',\theta)\}
\]
\end{theorem}
\begin{proof}
For each fixed $(\theta,\theta')$, the detailed balance equation requires
\[ \pi(\theta) q(\theta,\theta') \mathbb{E}\min\{\hat{a}_D(\theta, \theta'), 1\} = \pi(\theta') q(\theta',\theta) \mathbb{E}\min\{\hat{a}_D(\theta', \theta), 1\}.\]
i.e.,
\[
r_D(\theta,\theta')/ r_D(\theta',\theta) = \pi(\theta') q(\theta',\theta) /\pi(\theta)q(\theta,\theta').
\]
Meanwhile, as $\hat a_1$, $\hat a_2$ are valid ratios, we have
\[
r_1(\theta,\theta')/ r_1(\theta',\theta) = r_2(\theta,\theta')/ r_2(\theta',\theta) =
\pi(\theta') q(\theta',\theta) /\pi(\theta)q(\theta,\theta').
\]
Therefore, for $(\theta,\theta')\in K$, it is clear that
$D(\theta,\theta') = D(\theta',\theta)$, since either $D(\theta,\theta') = 1, D(\theta',\theta) = 2$ or
$D(\theta,\theta') = 2, D(\theta',\theta) = 1$ would break the detailed balance.
For $(\theta,\theta')\in K^c$, we have $r_1(\theta,\theta') = r_2(\theta,\theta')$ and $r_1(\theta',\theta) = r_2(\theta',\theta)$, so the two ratios are indistinguishable and $D(\theta,\theta')$ can be either $1$ or $2$.
\end{proof}
\begin{remark}
It is reasonable to assume that $K = \Theta\times \Theta$ in real situations: since $\hat a_1$ and $\hat a_2$ are two different estimators, the condition $ r_1(\theta,\theta')\neq r_2(\theta,\theta')$ is usually satisfied naturally.
\end{remark}
Therefore, given a proposal from $\theta$ to $\theta'$, to preserve detailed balance the decision $D$ has to be the same as for the `reversed proposal' from $\theta'$ to $\theta$. This implies the decision rule in Example \ref{eg:invalid decision} is invalid.
\begin{corollary}
The max-min decision rule in Example \ref{eg:max-min decision rule} is valid as $D(\theta, \theta')$ is symmetric by design.
\end{corollary}
\begin{corollary}
The max decision rule in Example \ref{eg:invalid decision} is invalid as $D(\theta, \theta')$ is not symmetric.
\end{corollary}
\subsection{MABMC: Multi-armed Bandit MCMC, algorithm and intuition}\label{subsec: MABMC, algorithm and intuition}
The Multi-armed Bandit MCMC algorithm (MABMC) combines Pseudo-marginal Monte Carlo (PMC) and the exchange algorithm (SVE) adaptively, according to the max-min decision rule:
\begin{algorithm}
\caption{Multi-armed Bandit MCMC (MABMC)}\label{alg:MABMC}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Propose $\theta'\sim q(\theta'|\theta)$
\State Generate auxiliary variables: $y \sim \pi(y|x,\theta), y' \sim f_{\theta'}(y')/Z(\theta') , w\sim f_{\theta'}(w)/Z(\theta')$
\State Compute
\begin{align*}
& a_1 = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(y)\pi(y'|x,\theta')}{f_{\theta'}(y')\pi(y|x,\theta)},~~ r_1 = \min\{a_1,1\} \\
& a_2 = \frac{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}\cdot \frac{f_\theta(w)}{f_{\theta'}(w)},~~ r_2 = \min\{a_2, 1\}
\end{align*}
\State Generate auxiliary variables: $\tilde y \sim \pi(y|x,\theta'), \tilde y' \sim f_{\theta}(y')/Z(\theta) , \tilde w\sim f_{\theta}(w)/Z(\theta)$
\State Compute
\begin{align*}
& \tilde a_1 = \frac{\pi(\theta)q(\theta'|\theta)f_{\theta}(x)}{\pi(\theta')q(\theta|\theta')f_{\theta'}(x)}\cdot \frac{f_{\theta'}(\tilde y)\pi(\tilde y'|x,\theta)}{f_{\theta}(\tilde y')\pi(\tilde y|x,\theta')},~~ \tilde r_1 = \min\{\tilde a_1,1\} \\
& \tilde a_2 = \frac{\pi(\theta)q({\theta'}|\theta)f_{\theta}(x)}{\pi({\theta'})q(\theta|{\theta'})f_{\theta'}(x)}\cdot \frac{f_{\theta'}(\tilde w)}{f_{\theta}(\tilde w)}, ~~ \tilde r_2 = \min\{\tilde a_2, 1\}
\end{align*}
\State Choose $$D = \argmax_{i\in \{1,2\}} \min\{r_i, \tilde r_i\}$$
\State \textbf{If} $(D = 1)$ \textbf{then} \text{repeat Step $2-7$ in Algorithm \ref{alg:mpseudo}}
\State \textbf{If} $(D = 2)$ \textbf{then} \text{repeat Step $2-7$ in Algorithm \ref{alg:Exchange}}
\EndFor
\end{algorithmic}
\end{algorithm}
Roughly speaking, MABMC is a way of choosing between two algorithms adaptively, according to the max-min decision rule introduced in Section \ref{subsec:decision rule}. Theoretically, any valid decision rule (also defined in Section \ref{subsec:decision rule}) gives a new Markov chain satisfying detailed balance. However, implementing simple decision rules like Example \ref{eg: simple decision rule} would degenerate the algorithm to Algorithm \ref{alg:Exchange} or Algorithm \ref{alg:mpseudo} without essential improvement. It will be shown later that Algorithm \ref{alg:MABMC} does improve the average acceptance rate compared with Algorithm \ref{alg:Exchange} and Algorithm \ref{alg:mpseudo}.
To explain the intuition of MABMC, let us go back to the simple invalid Example \ref{eg:invalid decision}. Given $\hat a_1(\theta, \theta')$ and $\hat a_2(\theta,\theta')$, the intuitively natural choice
$$ D(\theta,\theta') = \argmax_{i \in 1,2} \{\hat{a}_i(\theta,\theta')\}$$
as described in Example \ref{eg:invalid decision} is itself invalid. But based on this decision rule, we can perform a `symmetrization' as described in Example \ref{eg:max-min decision rule}, making it a valid decision rule without losing too much. Heuristically, it should be true that
\[
\argmax_{i \in \{1,2\}} \min\{\hat{r}_i(\theta,\theta') , \hat{r}_i(\theta',\theta)\} \approx \argmax_{i \in \{1,2\}} \hat{a}_i(\theta,\theta').
\]
This heuristic comes from the fact that $a(\theta, \theta') = \frac{1}{a(\theta', \theta)}$ by the design of the M-H algorithm. Therefore, if $\hat a_1$, $\hat a_2$ are both reasonable estimates of $a$, then the maximum of $\hat r_i(\theta,\theta')$ and $\hat r_i(\theta',\theta)$ should be approximately $1$, as $a(\theta,\theta')$ and $a(\theta', \theta)$ cannot both be less than $1$. We have
\begin{align*}
\argmax_{i \in \{1,2\}} \min\{\hat{r}_i(\theta,\theta') , \hat{r}_i(\theta',\theta)\} & \approx \argmax_{i \in \{1,2\}} \hat{r}_i(\theta,\theta') \\
& \approx \argmax_{i \in \{1,2\}} \hat{a}_i(\theta,\theta').
\end{align*}
Therefore the max-min decision rule in Example \ref{eg:max-min decision rule} should be close to the max decision rule in Example \ref{eg:invalid decision}, but is itself valid.
\section{Numerical Example}
\subsection{Normal example}
We consider a concrete example for which all the computations can be done easily. Consider the problem of sampling from the posterior of $\theta$ for the likelihood $p_\theta(y) \sim \mathcal{N}(\theta, \sigma^2)$ with a conjugate prior $\pi(\theta) \sim \mathcal{N}(0, 1)$; a standard calculation gives the posterior distribution
\[
\pi(\theta|x) \sim \mathcal{N}(\frac{y}{1+\sigma^2}, \frac{\sigma^2}{1 + \sigma^2}).
\]
The likelihood has the form
\[
p_\theta(y) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{- \frac{(y - \theta)^2}{2\sigma^2}}
\]
which is tractable. However, we pretend that the normalizing constant $\frac{1}{\sqrt{2\pi\sigma^2}} $ is unknown to us. The average acceptance probabilities of MPMC, SVE and MABMC are reported below.
For each $\sigma^2$ ranging from $0.1$ to $1$, MPMC, SVE and MABMC are each run for $20000$ iterations. The transition kernel is $q(\theta'|\theta) \sim \mathcal{N} (\theta, 1)$, the auxiliary distribution $\pi(y|x,\theta)$ is chosen to be the normal distribution $\mathcal{N}(\theta + \frac 13, \sigma^2)$, and the data is $y = 1$. Figure \ref{fig:MABMC_gaussian} reports the average acceptance probability for MABMC, MPMC and SVE.
Figure \ref{fig:MABMC_gaussian} shows that MABMC achieves a higher average acceptance probability than both MPMC and SVE for all choices of $\sigma^2$, so the max-min decision rule does improve the performance of the corresponding Markov chain. In this artificial example, the performance of MPMC and SVE is similar (the blue and green curves nearly coincide); if one algorithm always performed better than the other, the decision rule would essentially always choose that algorithm, and the performance of MABMC would be similar to the better one. Meanwhile, although MABMC performs better than MPMC and SVE for all $\sigma^2$, the improvement clearly decreases as $\sigma^2$ increases. This is natural: when $\sigma^2$ increases, the likelihood becomes flatter, a proposed move is more likely to be accepted, all three algorithms have higher acceptance probability, and the improvement of MABMC is less significant. For more peaked distributions (corresponding to small values of $\sigma^2$), the advantage of MABMC is more pronounced.
\begin{figure}[htbp]
\includegraphics[width= \textwidth]{MABMC_gaussian.png}
\caption{Average acceptance probabilities as a function of $\sigma^2$ for a $\mathcal{N}(\theta, \sigma^2)$ likelihood.}
\label{fig:MABMC_gaussian}
\end{figure}
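The SVE arm of this experiment is easy to reproduce. The sketch below is our own minimal implementation (function names are ours; the setting $y=1$, $\sigma^2 = 0.5$ is fixed for illustration, and the MPMC arm and the decision step are omitted): it targets the posterior above through the exchange acceptance ratio while treating the Gaussian normalizing constant as unknown.

```python
import math
import random

def sve_chain(y, s2, n_iter, seed=0):
    """Exchange (SVE) sampler for theta | y with N(theta, s2) likelihood,
    N(0, 1) prior and random-walk proposal N(theta, 1)."""
    rng = random.Random(seed)
    log_f = lambda th, z: -(z - th) ** 2 / (2 * s2)  # unnormalized log-likelihood
    log_prior = lambda th: -th ** 2 / 2
    theta, n_acc, samples = 0.0, 0, []
    for _ in range(n_iter):
        theta_p = rng.gauss(theta, 1.0)
        w = rng.gauss(theta_p, math.sqrt(s2))        # exact draw from p_{theta'}
        log_a = (log_prior(theta_p) + log_f(theta_p, y)
                 - log_prior(theta) - log_f(theta, y)
                 + log_f(theta, w) - log_f(theta_p, w))
        if math.log(max(rng.random(), 1e-300)) < log_a:
            theta, n_acc = theta_p, n_acc + 1
        samples.append(theta)
    return samples, n_acc / n_iter

samples, acc = sve_chain(y=1.0, s2=0.5, n_iter=20000)
post_mean = 1.0 / (1 + 0.5)  # y / (1 + sigma^2), from the closed form above
```

Over a long run the chain mean should settle near the closed-form posterior mean $y/(1+\sigma^2)$, which gives a quick sanity check of the implementation.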
\subsection{Ising Example}
We have also considered the 2-D Ising distribution on a square lattice $\Lambda$ with $N \times N$ sites. For each site $k \in\Lambda$ there is a discrete variable $\sigma_k \in \{-1, +1\}$ representing the site's spin. A spin configuration $\sigma$ is an assignment of spin values to each lattice site. The configuration probability is given by the Boltzmann distribution with parameter $\beta$:
\[
p_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z(\beta)},
\]
where $H(\sigma) = -J\sum_{\langle i, j \rangle} \sigma_i \sigma_j $, the notation $\langle i, j \rangle $ indicates that sites $i$ and $j$ are nearest neighbors.
The normalizing constant
\[
Z(\beta) = \sum_\sigma e^{-\beta H(\sigma)}
\]
is intractable for large $N$, as the summation contains $2^{N^2}$ terms. Throughout the experiment, we set the interaction parameter $J = 0.1$ and the lattice size $N = 10$.
For each $\beta$, we generated $10$ Ising configurations using the Wolff algorithm \cite{wolff1989collective}, put a normal prior on $\beta$, and sampled from the posterior using MPMC, SVE and MABMC respectively. The auxiliary density for MPMC is chosen to be $\pi(y|\text{data},\beta) = p_{\hat \beta}(y)$, where $\hat\beta$ is the maximum pseudo-likelihood estimator (MPLE) of $\beta$ derived in \cite{besag1986statistical}.
Figure \ref{fig:MABMC_Ising} shows that the MABMC algorithm overall outperforms both the SVE and MPMC algorithms. In particular, if SVE is significantly better than MPMC (or vice versa), MABMC has acceptance probability close to the better one; if SVE and MPMC perform similarly, MABMC is able to achieve a higher acceptance probability.
\begin{figure}[htbp]
\includegraphics[width= \textwidth]{MABMC_Ising.png}
\caption{Average acceptance probabilities as a function of $\beta$ for Ising model.}
\label{fig:MABMC_Ising}
\end{figure}
\newpage
\section{Discussion}\label{sec:Discussion}
In this paper we have discussed, from a statistical point of view, the two most popular methods for tackling the doubly intractable problem -- Pseudo-marginal Monte Carlo (PMC) and the exchange algorithm (SVE). We have proposed MABMC, which chooses the `better' of PMC and SVE in each step of the Markov chain iteration. It turns out that MABMC is easy to implement and achieves a better average acceptance probability than both PMC and SVE. However, many open questions remain:
\begin{itemize}
\item MPMC and SVE are two special cases of RMCMC algorithms aimed at solving doubly intractable problems. However, the MABMC methodology works for general MCMC algorithms with randomized acceptance ratios, provided the transition kernels and target distributions are the same. It would be interesting to find applications of MABMC in problems other than the doubly intractable problem.
\item The MABMC (max-min) decision rule comes from the intuition of modifying the natural but invalid max decision rule (Example \ref{eg:invalid decision}); see Section \ref{subsec: MABMC, algorithm and intuition} for details. However, there is no guarantee that MABMC is a Bayes decision rule, or even admissible (see Definitions \ref{def: optimal decision rule}, \ref{def:inadmissible decision rule}); therefore there is still large room to improve MABMC with respect to some metrics, or to find another decision rule which dominates MABMC.
\item The convergence properties of MABMC are unknown to us. It is clear that MABMC is less statistically efficient than the intractable original M-H algorithm. However, assuming the original M-H algorithm is uniformly or geometrically ergodic, it is worth investigating the conditions that ensure MABMC inherits such properties.
\end{itemize}
\begin{comment}
\subsection{Pseudo-marginal Exchange Algorithm}
The Pseudo-marginal Monte Carlo has the same state space as the exchange algorithm; therefore, given the initial $\theta$ and the proposed transition $\theta'$, we could run the Pseudo-marginal Monte Carlo and the exchange algorithm randomly or alternately. We call the two new algorithms `Random Scan Pseudo-marginal Exchange Algorithm' (RSPEA) and `Fixed Scan Pseudo-marginal Exchange Algorithm' (FSPEA), with details as follows:
\begin{algorithm}
\caption{Random Scan Pseudo-marginal Exchange Algorithm (RSPEA)}\label{alg:rspea}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Draw $r \sim \text{Uniform}[0,1]$
\State \textbf{If} $(r< 0.5)$, \textbf{then} run one step of the Modified Pseudo-marginal Monte Carlo Algorithm (Algorithm \ref{alg:mpseudo}) with $\theta$ as output
\State \textbf{Else} run one step of the Exchange Algorithm (Algorithm \ref{alg:Exchange}) with $\theta$ as output
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Fixed Scan Pseudo-marginal Exchange Algorithm (FSPEA)}\label{alg:fspea}
\hspace*{\algorithmicindent} \textbf{Input:} initial setting $\theta$, number of iterations $T$ \\
\begin{algorithmic}[1]
\For{$t= 1,\cdots T$}
\State Run one step of the Modified Pseudo-marginal Monte Carlo Algorithm (Algorithm \ref{alg:mpseudo}) with $\theta$ as output
\State Run one step of the Exchange Algorithm with $\theta$ as output
\EndFor
\end{algorithmic}
\end{algorithm}
The RSPEA algorithm constructs a reversible Markov chain while the FSPEA algorithm constructs a non-reversible Markov chain, both chains have $\pi(\theta|x)$ as its stationary distribution.
\subsection{Numerical Example}
First we consider a concrete problem and is used in \cite{murray2012mcmc}. Let us consider sampling from the posterior of a single precision parameter $\theta$, which has likelihood corresponding to $N$ i.i.d. zero-mean Gaussian observations $y = \{y_1, \cdots, y_N\}$, with a conjugate prior
\[
p(y_n|\theta) =\mathcal N (0, \frac 1 \theta) \qquad p(\theta|\alpha, \beta) = \text{Gamma}(\alpha, \beta)
\]
The corresponding posterior is tractable
\[
p(\theta|y) = \text{Gamma}(\alpha + \frac N 2, \beta + \frac{\sum_n y_n^2} 2)
\]
but we pretend that the normalizing constant (which is $1$) is unknown to us.
\end{comment}
\newpage
\bibliographystyle{alpha}
In recent years considerable attention has been paid to the analysis of dynamical system behavior via the spectral properties of the associated Koopman operator. The Koopman operator was introduced in \cite{Koopman:1931} as a composition operator acting on the space of observable functions. The crucial property of this operator is that it is linear on the space of observables. The renewed interest in the Koopman operator starts with the works \cite{mezicandbanaszuk:2004} and \cite{mezic:2005}, where the authors studied the problem of decomposing the evolution of a field from the perspective of operator theory. They proved that under certain conditions the flow dynamics can be accurately decomposed into simpler structures based on projection onto the eigenfunctions of the Koopman operator associated with the dynamical evolution of the observables. Due to the linearity of the Koopman operator, this approach is applicable even if the dynamics is nonlinear. Thus, the dynamical evolution of the system can be described using Koopman mode analysis, which consists of determining the eigenvalues, eigenfunctions and eigenmodes of the Koopman operator, so that the considered dynamical system can be represented by the corresponding Koopman Mode Decomposition (KMD).
An overview of the spectral properties of Koopman operator and its application to the analysis of the fluid flow is given in \cite{rowleyetal:2009,Mezi2013,BudisicMezic_ApplKoop}.
A variety of methods for determining the numerical approximation of the KMD has been developed. A general method for computing the Koopman modes, based on the rigorous theoretical results for the generalized Laplace transform, is known under the name Generalized Laplace Analysis (GLA) \cite{Mezi2013,BudisicMezic_ApplKoop}. It reduces to Wiener's generalized harmonic analysis in the case when all the eigenvalues are on the unit circle \cite{Mauroy2012}. Another method that is closely related to the Koopman operator and is based on its spectral properties is the Dynamic Mode Decomposition (DMD) method \cite{rowleyetal:2009}. Like GLA, the DMD method belongs to the class of data-driven algorithms, so it can be applied to a time series of data even if the underlying mathematical model is not known. The first DMD method for evaluating the Koopman modes and Koopman eigenvalues was the Arnoldi-like method based on the companion matrix \cite{rowleyetal:2009}. A more stable algorithm using a similar approach was proposed, independently and with no relation to Koopman mode analysis, by Schmid in \cite{schmid:2010,schmid:2011}. Tu et al. provide in \cite{Tu2014jcd} several alternative algorithms for evaluating DMD modes and eigenvalues and give a comparison of them. They introduced the algorithm known under the name exact DMD. Williams et al. introduced an extension of the DMD algorithm in \cite{williams2014data}, which they refer to as the Extended Dynamic Mode Decomposition (EDMD). This is an entirely data-driven procedure for evaluating leading Koopman eigenfunctions, eigenvalues and modes from a data set of snapshot pairs and a dictionary of observables. Recently, Arbabi and Mezi\'{c} introduced in \cite{ArbabiMezic2016} a further extension of the DMD algorithm, Hankel DMD, based on the use of a Hankel matrix instead of the companion matrix, for the computation of the Koopman spectrum on a single observable.
They prove that, for ergodic systems, the eigenvalues and eigenfunctions determined by the proposed Hankel-DMD method converge to the true eigenfunctions and eigenvalues of the infinite-dimensional Koopman operator. A consequence of their results is the convergence of the exact DMD \cite{Tu2014jcd} for ergodic systems.
Mentioned numerical algorithms for evaluating the spectral elements of Koopman operator have been successfully used to analyze different dynamical systems and flow configurations \cite{BudisicMezic_ApplKoop,susukiandmezic:2009,susukiandmezic:2012,GlazMezicFonoberovaLoire,
eisenhoweretal:2010}.
However, the Koopman operator analysis has been almost exclusively applied to autonomous systems. As far as we know, the Koopman operator framework was first extended to non-autonomous dynamical systems in \cite{suranamezic}. They introduced a rigorous definition of the non-autonomous Koopman eigenvalues, eigenfunctions and modes, which are the building blocks of the non-autonomous Koopman mode decomposition used for describing the dynamic evolution of the flow governed by a non-autonomous system. This extension entails time-dependent eigenfunctions, eigenvalues and modes of the Koopman operator. They successfully applied the introduced extension to linear periodic and quasi-periodic non-autonomous systems. The study of a non-autonomous dynamical system through the spectral properties of the corresponding Koopman operator is used in \cite{susukiandmezic:2009,susukiandmezic:2012} for analyzing the power exchange deviation in the European grid disturbance. In these papers, the Arnoldi and Prony methods for evaluating the Koopman mode decomposition were used. Recently, in \cite{KutzFuBrunton}, the multi-resolution DMD (mrDMD) for decomposing data with multiple time scales has been proposed, with successful applications to non-stationary data.
The intention of this work is to apply the non-autonomous Koopman mode decomposition to linear non-autonomous systems. We use DMD algorithms, originally developed for autonomous systems, to evaluate the time-dependent eigenvalues, eigenfunctions and modes of the Koopman mode decomposition. We explore the limitations of the Arnoldi-like methods when applied to such systems. As a special case we consider hybrid linear systems and develop a stable algorithm for evaluating the spectral decomposition by using the information provided by the subspace projection error. Due to the special structure of the companion matrix used in the Arnoldi-like method, we show that, for time-dependent linear systems, an appropriate choice of observables is necessary for determining good approximations of the non-autonomous Koopman eigenvalues.
The paper is organized as follows. In Section \ref{sec:LNA-koop} the definition of the Koopman operator family for a non-autonomous system is introduced, and for linear non-autonomous dynamical systems the relation with the fundamental matrix of the system is clarified in Proposition \ref{thm:kopp-fmatrix}. In Section \ref{sec:data-algorithm} we point out the issues that arise when Arnoldi-like methods are applied, and in Theorem \ref{thm:NLA-Aerror} we specify the error that arises from a continuously changing underlying matrix. Then we propose two algorithms that resolve these issues: Algorithm 1 for hybrid dynamical systems, and Algorithm 2 for dynamical systems with a continuously changing underlying matrix. The advantages of the new algorithms are demonstrated in several numerical examples in Section \ref{sec:results}.
\section{Koopman operator family of the linear \\ non-autonomous system}\label{sec:LNA-koop}
The linear non-autonomous system is a dynamical system governed by
\begin{equation}\label{eq:LNA-system}
\dot{\mathbf{x}}=\mathbf{A}(t)\mathbf{x},
\end{equation}
where $\mathbf{x}=\mathbf{x}(t)$ is the $n$-dimensional state vector, and $\mathbf{A}=\mathbf{A}(t)$ is a given time-dependent matrix.
For any dynamical system, the Koopman operator family $\mathcal{K}^{(t,t_0)}$ is defined by its action on the observables $f=f(\mathbf{x})$
\begin{equation}\label{eq:koop}
\mathcal{K}^{(t,t_0)}f(\mathbf{x}(t_0)) = f(\mathbf{x}(t)).
\end{equation}
As usual in the Koopman operator framework, our main goal is to find non-autonomous Koopman eigenvalues $\lambda^{(t,t_0)}$ and eigenfunctions $\phi^{(t,t_0)}$ \cite{Mezi2013,suranamezic} defined by
\begin{equation}\label{eq:koop-eigen-def}
\mathcal{K}^{(t,t_0)}\phi^{(t,t_0)} = e^{\lambda^{(t,t_0)}}\phi^{(t,t_0)}.
\end{equation}
For the linear non-autonomous system (\ref{eq:LNA-system}), the fundamental matrix is defined as the matrix $\mathcal{M}(t,t_0)$ whose $i^{th}$ column is the solution of the equation (\ref{eq:LNA-system}) for the initial condition ${\bf x}(t_0)={\bf e}_i$, $i=1,...,n$. Then, the solution of (\ref{eq:LNA-system}) for the initial condition $\mathbf{x}(t_0)=\mathbf{x}_0$ can be written in the form
\begin{equation}\label{eq:LNA-fmatrix}
\mathbf{x}(t) = \mathcal{M}(t,t_0) \mathbf{x}_0.
\end{equation}
\begin{proposition}\label{thm:kopp-fmatrix}
Consider the fundamental matrix $\mathcal{M}(t,t_0)$ of the linear non-autonomous dynamical system (\ref{eq:LNA-system}). If
\begin{equation}
(\mu_{i}^{(t,t_0)},{\bf w}_{i}^{(t,t_0)},{\bf v}_{i}^{(t,t_0)}), i=1,...,n
\end{equation}
are the eigenvalues, left and right eigenvectors of the fundamental matrix $\mathcal{M}(t,t_0)$, then $\lambda_{i}^{(t,t_0)}$, $i=1,...,n$ such that
\begin{equation}
\mu_{i}^{(t,t_0)}=e^{\lambda_{i}^{(t,t_0)}}, i=1,...,n
\end{equation}
are the eigenvalues and
\begin{equation}
\phi_{i}^{(t,t_0)}(\cdot)=\langle \cdot, {\bf w}_{i}^{(t,t_0)} \rangle, i=1,...,n
\end{equation}
are the eigenfunctions of the Koopman operator ${\cal K}^{(t,t_{0})}$.
Furthermore, ${\bf v}_{i}^{(t,t_0)}$, $i=1,...,n$ are the Koopman modes of the full state observable and the following expansion is valid
\begin{equation}
\mathbf{x}(t)={\cal K}^{(t,t_{0})}(\mathbf{x}_0)=\sum_{i=1}^{n} e^{\lambda_{i}^{(t,t_0)}} \phi_{i}^{(t,t_0)}(\mathbf{x}_0) {\bf v}_{i}^{(t,t_0)}.
\end{equation}
\end{proposition}
\begin{proof}
The stated connection between the Koopman operator ${\cal K}^{(t,t_{0})}$ and the fundamental matrix $\mathcal{M}(t,t_0)$ for the linear non-autonomous dynamical system is easily derived using the linearity of (\ref{eq:LNA-system}) with respect to $\mathbf{x}$.
\end{proof}
{\bf Example 1.}
Let us consider the scalar linear non-autonomous dynamical system
\begin{equation}\label{eq:ls-1d}
\dot{z} = a(t) z.
\end{equation}
Since the solution of this system is
$${\displaystyle z(t) = z(t_0) e^{ \int_{t_0}^{t} a(\tau) d\tau}},$$
it is quite easy to verify that
$$\lambda_1^{(t,t_0)} = \int_{t_0}^{t} a(\tau) d\tau \mbox{ and } \phi_1^{(t,t_0)} (z) = z$$
are the eigenvalue and the eigenfunction of the Koopman operator. Some of the other Koopman eigenvalues and eigenfunctions are
$$\lambda_m^{(t,t_0)} = m \lambda_1^{(t,t_0)}\mbox{ and }\phi_m^{(t,t_0)} (z) = z^{m}, m=2,3,...$$
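The computation in Example 1 can be checked numerically. The following sketch (ours, assuming NumPy; the choice $a(t)=\cos t$ is purely illustrative) approximates $\lambda_1^{(t,t_0)}=\int_{t_0}^{t}a(\tau)d\tau$ by the trapezoidal rule and compares the Koopman prediction with the exact solution:

```python
import numpy as np

# Illustrative check of Example 1 with a(t) = cos(t) (our own choice):
# for z' = a(t) z the Koopman eigenvalue is lambda_1^{(t,t0)} = int_{t0}^t a(tau) dtau,
# so z(t) = z(t0) * exp(lambda_1^{(t,t0)}), with eigenfunction phi_1(z) = z.
a = np.cos
t0, t, z0 = 0.0, 2.0, 1.5

# trapezoidal approximation of the integral of a over [t0, t]
tau = np.linspace(t0, t, 10001)
w = a(tau)
lam1 = np.sum((w[1:] + w[:-1]) / 2) * (tau[1] - tau[0])   # ~ sin(2) - sin(0)

z_koopman = z0 * np.exp(lam1)                 # Koopman prediction
z_exact = z0 * np.exp(np.sin(t) - np.sin(t0))  # exact solution

# the m-th eigenpair: lambda_m = m * lambda_1, phi_m(z) = z^m (here m = 2)
z2_koopman = z0**2 * np.exp(2.0 * lam1)
```

The same check works for any integrable $a$; only the quadrature of $\lambda_1^{(t,t_0)}$ changes.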
Several higher dimensional examples are examined in detail in Section \ref{sec:results}. \\
In addition to what is stated in Proposition \ref{thm:kopp-fmatrix}, the fundamental matrix family has another property
\begin{equation}\label{eq:fmatrix-composition}
\mathcal{M}(t,t_0) = \mathcal{M}(t,t_1)\mathcal{M}(t_1,t_0)\mbox{, for all }t>t_1>t_0,
\end{equation}
analogous to the Koopman operator family property
\begin{equation}
\mathcal{K}^{(t,t_0)} = \mathcal{K}^{(t,t_1)}\mathcal{K}^{(t_1,t_0)}\mbox{, for all }t>t_1>t_0.
\end{equation}
There are some important cases of linear non-autonomous systems for which fundamental matrix can be analytically obtained. We discuss these next.
The first case is the hybrid linear non-autonomous system, i.e. dynamical system (\ref{eq:LNA-system}) with the piecewise constant matrix
\begin{equation}\label{eq:LNA-hybrid}
\mathbf{A}(t) = \sum_{l=0}^{\infty} \mathbf{A}_l\mathbb{1}_{\left[\right.T_{l},T_{l+1}\rangle}.
\end{equation}
Here $T_l$, $l=0,1,...$ is a sequence of time moments, and $\mathbf{A}_l$, $l=0,1,...$ is a sequence of constant matrices. In this case the fundamental matrix is given iteratively by
\begin{equation}\label{eq:LNA-hybrid-fmatrix}
\mathcal{M}(t,t_0) = e^{\mathbf{A}_{l(t)} (t-T_{l(t)})}\mathcal{M}(T_{l(t)},t_0)
\end{equation}
where the index $l(t)$ is determined so that $t \in \left[\right.T_{l(t)},T_{l(t)+1}\rangle$.
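The iterative formula (\ref{eq:LNA-hybrid-fmatrix}) can be sketched directly in NumPy (the truncated-series matrix exponential and the two-matrix example below are our own illustrative choices, not the code used later in the paper):

```python
import numpy as np

def expm(A, terms=40):
    # truncated Taylor series for exp(A); adequate for the small-norm matrices used here
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

def hybrid_fundamental(A_seq, T_seq, t, t0=0.0):
    """M(t, t0) for piecewise-constant A(t): A_seq[l] acts on [T_seq[l], T_seq[l+1]);
    len(T_seq) must be len(A_seq) + 1 (finite truncation of the switching sequence)."""
    M = np.eye(A_seq[0].shape[0])
    for l, Al in enumerate(A_seq):
        a, b = max(T_seq[l], t0), min(T_seq[l + 1], t)
        if b > a:                       # compose the pieces that intersect [t0, t]
            M = expm(Al * (b - a)) @ M
    return M

# usage: two-matrix hybrid oscillator switching at T_1 = 1
A0 = np.array([[0.0, 1.0], [-4.0, 0.0]])
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
M15 = hybrid_fundamental([A0, A1], [0.0, 1.0, 2.0], 1.5)   # = e^{A1*0.5} e^{A0*1.0}
```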
The second case is when the matrices $\mathbf{A}(t)$, $t>t_0$ have the same time independent eigenvectors, i.e.
\begin{equation}\label{eq:NLA-comm}
\mathbf{A}(t)=\mathbf{R}\cdot\mathbf{\Lambda}_\mathbf{A}(t)\cdot\mathbf{R}^{-1}
\end{equation}
where $\mathbf{R}=\left( {\bf v}_{1} ... {\bf v}_{n} \right)$ is the matrix of the right eigenvectors, and $\mathbf{\Lambda}_\mathbf{A}(t)$ is the diagonal matrix with the corresponding eigenvalues on the diagonal. Then the fundamental matrix is given by
\begin{equation}\label{eq:NLA-comm-fmatrix}
\mathcal{M}(t,t_0)=\mathbf{R}\cdot e^{\int_{t_0}^{t}\mathbf{\Lambda}_\mathbf{A}(\tau)d\tau} \cdot \mathbf{R}^{-1}.
\end{equation}
In the general case, the fundamental matrix can be computed by some appropriate numerical method for the underlying system of ordinary differential equations.
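For reference solutions in the general case, the columns of $\mathcal{M}(t,t_0)$ can be obtained by integrating the matrix equation $\dot{\mathcal{M}}=\mathbf{A}(t)\mathcal{M}$, $\mathcal{M}(t_0)=\mathbf{I}$. A minimal classical Runge--Kutta sketch (assuming NumPy; the diagonal test matrix is our own choice, picked so the exact answer is available from (\ref{eq:NLA-comm-fmatrix})):

```python
import numpy as np

def fundamental_matrix(A, t0, t, steps=2000):
    """Integrate dM/ds = A(s) M, M(t0) = I, with classical RK4."""
    n = A(t0).shape[0]
    M = np.eye(n)
    h = (t - t0) / steps
    s = t0
    for _ in range(steps):
        k1 = A(s) @ M
        k2 = A(s + h / 2) @ (M + h / 2 * k1)
        k3 = A(s + h / 2) @ (M + h / 2 * k2)
        k4 = A(s + h) @ (M + h * k3)
        M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return M

# illustrative check on a commuting (diagonal) family A(t) = diag(cos t, -1),
# where the exact fundamental matrix is diag(e^{sin t - sin t0}, e^{-(t - t0)})
A = lambda s: np.array([[np.cos(s), 0.0], [0.0, -1.0]])
M = fundamental_matrix(A, 0.0, 2.0)
exact = np.diag([np.exp(np.sin(2.0)), np.exp(-2.0)])
```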
\section{Data-driven algorithm for time dependent eigenvalues}\label{sec:data-algorithm}
Suppose that for some linear non-autonomous system (\ref{eq:LNA-system}) we have a sequence of snapshots of the full state observable
\begin{equation}\label{eq:LNA-data}
\mathbf{x}_k=\mathbf{x}(t_k), k=0,1,...
\end{equation}
where $t_k=k\Delta t$, $k=0,1,...$. Our task is to compute the spectrum of the Koopman operators $\mathcal{K}^{(t_k,t_0)}$, $k=0,1,...$, from these snapshots.
Due to the established connection with the spectrum of fundamental matrices, this task can be reduced to the computation of matrices $\mathcal{M}(t_k,t_0)$, $k=0,1,...$. From (\ref{eq:fmatrix-composition}) we get
\begin{equation}\label{eq:fmatrix-composition-dt}
\mathcal{M}(t_k,t_0) = \mathcal{M}(t_k,t_{k-1})\mathcal{M}(t_{k-1},t_0), k=1,2,...
\end{equation}
This allows us to further reduce the problem to the approximate evaluation of the local fundamental matrices $\mathcal{M}(t_k,t_{k-1})$, $k=1,2,...$.
In order to do this, let us look at the local stencil of snapshots
\begin{equation}\label{eq:LNA-local-stencil}
\mathbf{x}_{k-1}, \mathbf{x}_{k}, ..., \mathbf{x}_{k+s-1}
\end{equation}
with small $s\ge n$, which is the same for all stencils. To the local stencil (\ref{eq:LNA-local-stencil}) we can apply any of the Arnoldi-like methods and obtain a matrix $\mathbf{M}_{k,k-1}$ such that
\begin{equation}\label{eq:LNA-local-fmatrix}
\mathbf{x}_{k+j}\approx\mathbf{M}_{k,k-1} \mathbf{x}_{k+j-1}, j=0,1,...,s-1
\end{equation}
The approximation is obtained by projecting $\mathbf{x}_{k+s-1}$ onto the Krylov subspace spanned by $\mathbf{x}_{k-1}, \mathbf{x}_{k}, ..., \mathbf{x}_{k+s-2}$
\begin{equation}
c_0 \mathbf{x}_{k-1} + c_1 \mathbf{x}_{k} + \cdots + c_{s-1} \mathbf{x}_{k+s-2} = \mathbf{x}_{k+s-1} + \mathbf{r}
\end{equation}
under the condition that
\begin{equation}\label{eq:project_cond}
\mathbf{r}\perp\mathbf{x}_{k-1},\mathbf{x}_{k},...,\mathbf{x}_{k+s-2}.
\end{equation}
The matrix representation of the projection operator in basis $\mathbf{x}_{k-1}, \mathbf{x}_{k}, ..., \mathbf{x}_{k+s-2}$ is given by the companion matrix
\begin{equation}\label{eq:companion-matrix}
\mathbf{C}=\left(\begin{array}{ccccc}
0 & 0 & \cdots & 0 & c_0 \\
1 & 0 & \cdots & 0 & c_1 \\
0 & 1 & \cdots & 0 & c_2 \\
\vdots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & c_{s-1}
\end{array} \right).
\end{equation}
The condition (\ref{eq:project_cond}) guarantees that the projection error
\begin{equation}\label{eq:AA-prj-err}
\|\mathbf{r}\|_2=\| \mathbf{x}_{k+s-1} - \left( c_0 \mathbf{x}_{k-1} + c_1 \mathbf{x}_{k} + \cdots + c_{s-1} \mathbf{x}_{k+s-2} \right) \|_2
\end{equation}
is minimal.
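The coefficients $c_0,\ldots,c_{s-1}$ satisfying (\ref{eq:project_cond}) solve an ordinary least-squares problem, which suggests the following sketch (assuming NumPy; this is a plain illustration of the companion construction, not the SVD-enhanced variant used later; the rotation-scaling test matrix is our own choice):

```python
import numpy as np

def companion_step(snapshots):
    """Given columns x_{k-1}, ..., x_{k+s-1}, return the companion matrix C,
    the local map M (same operator in the original basis) and the projection error."""
    X = snapshots[:, :-1]                  # x_{k-1}, ..., x_{k+s-2}
    y = snapshots[:, -1]                   # x_{k+s-1}
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ c                          # residual orthogonal to the Krylov basis
    s = X.shape[1]
    C = np.zeros((s, s))
    C[1:, :-1] = np.eye(s - 1)             # shift structure of the companion matrix
    C[:, -1] = c                           # last column carries c_0, ..., c_{s-1}
    M = X @ C @ np.linalg.pinv(X)          # same operator, original basis
    return C, M, np.linalg.norm(r)

# usage on exact data from a constant rotation-scaling map (spectrum 0.9 e^{+-i theta})
theta = 0.7
A = 0.9 * np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
x0 = np.array([1.0, 0.5])
S = np.column_stack([x0, A @ x0, A @ A @ x0])
C, M, err = companion_step(S)
```

On exact linear data the residual vanishes and the eigenvalues of $\mathbf{C}$ coincide with those of the generating map.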
Observe that the companion matrix (\ref{eq:companion-matrix}) is a representation of a finite-dimensional approximation of the Koopman operator $\mathcal{K}^{(t_k,t_{k-1})}$ relative to the Krylov basis, while the matrix $\mathbf{M}_{k,k-1}$ is a representation of the same approximation, but relative to the original basis. After we find $\mathbf{M}_{k,k-1}$, $k=1,2,...$, we can construct the approximate fundamental matrix family
\begin{equation}
\mathbf{M}_{0,0} =\mathbf{I}\mbox{, } \mathbf{M}_{k,0} = \mathbf{M}_{k,k-1}\mathbf{M}_{k-1,0}, k=1,2,...
\end{equation}
In the case when the fundamental matrix can be analytically computed, we can measure the error of the proposed algorithm using
\begin{equation}\label{eq:data-alg-err}
E_{k}= \|\mathbf{M}_{k,0}-\mathcal{M}(t_k,t_0)\|_{2}/\|\mathcal{M}(t_k,t_0)\|_{2}
\end{equation}
Since at the end we apply Proposition \ref{thm:kopp-fmatrix} and use the matrix family $\mathbf{M}_{k,0}$, $k=1,2,...$, to compute approximations of Koopman eigenvalues and eigenvectors, we can regard (\ref{eq:data-alg-err}) as an integrated error of the proposed approximation of the Koopman operator family spectrum.
\subsection{Hybrid linear non-autonomous system}\label{ss:hybrid-algorithm}
For the hybrid linear non-autonomous system (\ref{eq:LNA-hybrid}) the local fundamental matrix is given by
\begin{equation}\label{eq:LNA-hybrid-fmatrix-local}
\mathcal{M}(t_k,t_{k-1})=e^{\mathbf{A}_{l(k)}\Delta t}
\end{equation}
where $l(k)$ is such that $[t_{k-1},t_{k}]\subset\left[\right.T_{l(k)},T_{l(k)+1}\rangle.$
From the point of view of the local stencil (\ref{eq:LNA-local-stencil}), one possibility is that there is some $l(k)$ such that
$$T_{l(k)+1}\in [t_{k-1},t_{k+s-1}].$$
Then, $\mathbf{M}_{k,k-1}$ is an attempt to approximate two very different matrices: $e^{\mathbf{A}_{l(k)}\Delta t}$ and $e^{\mathbf{A}_{l(k)+1}\Delta t}$. As shown in the examples in the next section, this results in a significant increase in the Krylov subspace projection error (\ref{eq:AA-prj-err}). Therefore, the projection error is a switch time indicator and we can use it to identify all $T_l$, $l=0,1,...$.
Suppose that $\Delta t$ is small enough, such that for each $l$, we can find $k(l)$ for which the following inclusion is valid
$$[t_{k(l)-1},t_{k(l)+s-1}]\subset\left[\right.T_{l},T_{l+1}\rangle.$$
This means that such local stencil (\ref{eq:LNA-local-stencil}) is completely produced by the action of matrix
\begin{equation}\label{eq:LNA-hybrid-fmatrix-app}
\mathbf{M}_{k(l),k(l)-1}=e^{\mathbf{A}_{l}\Delta t},
\end{equation}
and we can use this to identify all $\mathbf{A}_l$, $l=0,1,...$.
Therefore, in the hybrid linear non-autonomous system case, the proposed approach leads to a full identification of the system, and consequently also of its Koopman operator family. The resulting algorithm is summarized below.
\\
\\
\begin{small}\label{alg:hybrid}
{\bf Algorithm 1 (for hybrid systems)}
\begin{enumerate}
\item Choose stencil size $s=n$ and maximal projection error $\epsilon>0$.
\item Apply Arnoldi-like method to local stencil of snapshots $\{\mathbf{x}_{k-1},\ldots,\mathbf{x}_{k+s-1}\}$, to determine $\mathbf{M}_{k,k-1}$ and the projection error $\|r_k\|$ (\ref{eq:AA-prj-err}).
\item If $\|r_k\| > \epsilon$, set $\mathbf{M}_{k,k-1}=\mathbf{M}_{k-1,k-2}$.
\item Compute $\mathbf{M}_{k,0}=\mathbf{M}_{k,k-1}\mathbf{M}_{k-1,0}$
\item Compute dynamical system matrix eigenvalues from $\mathbf{M}_{k,k-1}$ and Koopman operator eigenvalues from $\mathbf{M}_{k,0}$.
\item Repeat steps 2-5, for all $k=1,2,...$.
\end{enumerate}
\end{small}
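The steps above can be sketched in a few lines of NumPy (ours, not the implementation used for the figures; the least-squares companion construction plays the role of the Arnoldi-like method, and the constant-matrix usage example is illustrative):

```python
import numpy as np

def algorithm1(snaps, s, eps):
    """Sketch of Algorithm 1. snaps is an n x K array of snapshots x_0, x_1, ...;
    returns the local maps M_{k,k-1} and the cumulative maps M_{k,0}."""
    n, K = snaps.shape
    M_prev = np.eye(n)
    M_k0 = [np.eye(n)]
    M_loc = []
    for k in range(1, K - s + 1):
        X = snaps[:, k - 1:k + s - 1]          # x_{k-1}, ..., x_{k+s-2}
        y = snaps[:, k + s - 1]                # x_{k+s-1}
        c, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = np.linalg.norm(y - X @ c)          # projection error
        if r > eps:                            # switch detected: keep previous local map
            M_local = M_prev
        else:
            C = np.zeros((s, s))
            C[1:, :-1] = np.eye(s - 1)
            C[:, -1] = c
            M_local = X @ C @ np.linalg.pinv(X)
        M_k0.append(M_local @ M_k0[-1])        # M_{k,0} = M_{k,k-1} M_{k-1,0}
        M_loc.append(M_local)
        M_prev = M_local
    return M_loc, M_k0

# usage on exact data from a constant matrix A = [[0, 1], [-4, 0]] (our test choice);
# exp(A*dt) has the closed form below
dt = 0.05
Mdt = np.array([[np.cos(2 * dt), np.sin(2 * dt) / 2],
                [-2 * np.sin(2 * dt), np.cos(2 * dt)]])
cols = [np.array([1.0, 0.5])]
for _ in range(30):
    cols.append(Mdt @ cols[-1])
locs, M_k0 = algorithm1(np.column_stack(cols), s=2, eps=1e-8)
```

On switch-free data every local map reproduces $e^{\mathbf{A}\Delta t}$ and the cumulative maps reproduce $e^{\mathbf{A}k\Delta t}$.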
\subsection{Linear non-autonomous system with nonlinear time dependency}\label{ss:nonlinear-time}
\begin{theorem}\label{thm:NLA-Aerror}
Consider the dynamical system (\ref{eq:LNA-system}) with
\begin{equation}\label{eq:LNA-oscillation}
\mathbf{A}(t) =
\left(\begin{array}{rc}
\sigma(t) & \omega(t) \\
-\omega(t) & \sigma(t)
\end{array} \right)
\end{equation}
where $\omega, \sigma\in C^2\left( \left[ t_0,\infty \right.\rangle \right)$, $\omega\ne 0$.
Let $\mu_i$, $i=1,2$ be the complex conjugate eigenvalues of the companion matrix related to the Krylov subspace spanned by
\begin{equation}\label{eq:Krylov3}
\mathbf{x}(t-\Delta t), \mathbf{x}(t), \mathbf{x}(t+\Delta t) .
\end{equation}
Then, the following relations hold
\begin{equation}\label{eq:AA-error-Re-lim}
\ln|\mu_i| = \left( \sigma(t) + \frac{\dot{\omega}(t)}{2\omega(t)} \right) \Delta t + \mathcal {O}(\Delta t^2),
\end{equation}
\begin{equation}\label{eq:AA-error-Im-lim}
\mbox{Arg}(\mu_i) = \omega(t)\Delta t\sqrt{1-\frac{\dot{\sigma}(t)}{\omega(t)^2}}+ \mathcal {O}(\Delta t^2).
\end{equation}
for every $t\in\left[ t_0,\infty \right.\rangle$.
\end{theorem}
\begin{proof}
Since
\begin{equation}
\mathbf{A}_1 \mathbf{A}_2 =
\left(\begin{array}{rc}
\sigma_1\sigma_2-\omega_1\omega_2 & \sigma_1\omega_2+\omega_1\sigma_2 \\
-(\sigma_1\omega_2+\omega_1\sigma_2) & \sigma_1\sigma_2-\omega_1\omega_2
\end{array} \right)
\end{equation}
where $\mathbf{A}_i=\mathbf{A}(t_i)$, $t_i\ge t_0$, $i=1,2$, i.e. matrices of the form (\ref{eq:LNA-oscillation}) commute, we can apply (\ref{eq:NLA-comm-fmatrix}) and find the fundamental matrix
\begin{equation}\label{eq:osci-fmatrix}
\mathcal{M}(t,t_0)=
e^{\alpha(t,t_0)}
\left(\begin{array}{rc}
\cos\beta(t,t_0) & \sin\beta(t,t_0) \\
-\sin\beta(t,t_0) & \cos\beta(t,t_0)
\end{array} \right),
\end{equation}
where
\begin{equation}\label{eq:osci-ab}
\alpha(t,t_0)=\int_{t_0}^{t}\sigma(\tau)d\tau \mbox{, and } \beta(t,t_0)=\int_{t_0}^{t}\omega(\tau)d\tau.
\end{equation}
For the initial condition written in the form
$$\mathbf{x}(t_0)=
e^{\alpha_0}
\left(\begin{array}{c}
\cos\beta_0 \\
\sin\beta_0
\end{array} \right),$$
the solution is
\begin{equation}
\mathbf{x}(t)=
\mathcal{M}(t,t_0)\mathbf{x}(t_0)=
e^{\alpha(t,t_0)+\alpha_0}
\left(\begin{array}{r}
\cos(\beta(t,t_0)-\beta_0) \\
-\sin(\beta(t,t_0)-\beta_0)
\end{array} \right).
\end{equation}
Now, for a chosen $t\in\left[ t_0,\infty \right.\rangle$ and $\Delta t>0$ we look at the Krylov subspace spanned by $\mathbf{x}_{-}=\mathbf{x}(t-\Delta t), \mathbf{x}=\mathbf{x}(t), \mathbf{x}_{+}=\mathbf{x}(t+\Delta t)$
and compute the related companion matrix, i.e. we must find $c_0,c_1$ such that
$$
c_0 \mathbf{x}_{-} + c_1 \mathbf{x} = \mathbf{x}_{+} + \mathbf{r} \mbox{, and } \mathbf{r}\perp\mathbf{x}_{-},\mathbf{x}.
$$
The solution is
\begin{equation}
c_0 = -e^{\alpha_{-}+\alpha_{+}}\frac{\sin\beta_{+}}{\sin\beta_{-}},
\end{equation}
\begin{equation}
c_1 = e^{\alpha_{+}}\frac{\sin(\beta_{-}+\beta_{+})}{\sin\beta_{-}},
\end{equation}
where $\alpha_{\pm}=\pm\alpha(t\pm\Delta t,t)$ and $\beta_{\pm}=\pm\beta(t\pm\Delta t,t)$. The eigenvalues $\mu_i$ are roots of the equation $\mu^2-c_1\mu-c_0=0$ so they satisfy
\begin{equation}\label{eq:Arnoldi-root-Re}
\ln|\mu_i| = \frac{1}{2}\ln(-c_0),
\end{equation}
\begin{equation}\label{eq:Arnoldi-root-Im}
\tan(\mbox{Arg}(\mu_i))=\sqrt{\left(\frac{2\sqrt{-c_0}}{c_1} \right)^2-1}.
\end{equation}
Since $\omega=\omega(\tau)$ and $\sigma=\sigma(\tau)$ can be approximated with Taylor polynomials of first degree in the neighborhood of $t$, with integration we get
\begin{equation}\label{eq:AA-alphapm}
\alpha_{\pm} = \sigma(t)\Delta t \pm \frac{1}{2}\dot{\sigma}(t)\Delta t^2 + \mathcal {O}(\Delta t^3),
\end{equation}
\begin{equation}\label{eq:AA-betapm}
\beta_{\pm} = \omega(t)\Delta t \pm \frac{1}{2}\dot{\omega}(t)\Delta t^2 + \mathcal {O}(\Delta t^3).
\end{equation}
By applying (\ref{eq:AA-alphapm}) and (\ref{eq:AA-betapm}) to (\ref{eq:Arnoldi-root-Re}) we get
\begin{equation}\label{eq:AA-error-Re}
\ln|\mu_i| = \sigma(t)\Delta t + \frac{1}{2}\ln\left(\frac{\sin\beta_{+}}{\sin\beta_{-}} \right)+
\mathcal{O}(\Delta t^2)
\end{equation}
Further computations give us
\begin{equation}
\lim\limits_{\Delta t\rightarrow 0} \ln\left(\frac{\sin\beta_{+}}{\sin\beta_{-}} \right)=0,
\end{equation}
but
\begin{equation}
\lim\limits_{\Delta t\rightarrow 0} \frac{1}{\Delta t}\ln\left(\frac{\sin\beta_{+}}{\sin\beta_{-}} \right)=\frac{\dot{\omega}(t)}{\omega(t)}.
\end{equation}
Therefore, (\ref{eq:AA-error-Re-lim}) is valid.
For the imaginary part, we apply (\ref{eq:AA-alphapm}) and (\ref{eq:AA-betapm}) to (\ref{eq:Arnoldi-root-Im}) and get
\begin{equation}\label{eq:AA-error-Im}
\tan(\mbox{Arg}(\mu_i))=\tan(\omega(t)\Delta t)\sqrt{1+\frac{e^{-\dot{\sigma}(t)\Delta t^2}-1} {\sin^2(\omega(t)\Delta t)}} + \mathcal {O}(\Delta t^2).
\end{equation}
Computations give us
\begin{equation}
\lim\limits_{\Delta t\rightarrow 0} \frac{e^{-\dot{\sigma}(t)\Delta t^2}-1}{\sin^2(\omega(t)\Delta t)} =
\frac{-\dot{\sigma}(t)}{\omega(t)^2}.
\end{equation}
Therefore, (\ref{eq:AA-error-Im-lim}) is also valid.
\end{proof}
From the Arnoldi-like method applied to the Krylov subspace (\ref{eq:Krylov3}) we expect to obtain the approximation of the eigenvalues that correspond to the action of the Koopman operator on that subspace. Thus, due to its close relation with the fundamental matrix, we should expect to get the eigenvalues of the fundamental matrix $\mathcal{M}(t,t-\Delta t) \approx e^{\mathbf{A}(t) \Delta t}$, which are approximately equal to $e^{\sigma(t) \Delta t \pm i \omega(t) \Delta t}$. However, the consequence of Theorem \ref{thm:NLA-Aerror} is that any Arnoldi-like method applied to the dynamical system with the underlying matrix (\ref{eq:LNA-oscillation}) mixes the values of the real and imaginary parts of the local Koopman eigenvalues and produces an error that does not vanish as $\Delta t\rightarrow 0$. This issue is visible in the examples presented in Section \ref{sec:results}. Since the matrix (\ref{eq:LNA-oscillation}) has the form of the real Jordan block belonging to any pair of complex conjugate eigenvalues, the discovered issue goes far beyond the examined two-dimensional dynamical system.
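This effect can be reproduced numerically. The following sketch (ours, assuming NumPy; the choice $\sigma\equiv 0$, $\omega(t)=2+\sin t$ is illustrative) builds the three snapshots (\ref{eq:Krylov3}) from the exact solution and shows that the companion-matrix eigenvalues report a spurious growth rate close to $\dot{\omega}(t)/(2\omega(t))$ instead of $\sigma(t)=0$:

```python
import numpy as np

# sigma(t) = 0, omega(t) = 2 + sin(t) (our own choice); with t0 = 0, beta0 = 0
# the exact solution is x(t) = (cos(B(t)), -sin(B(t))), B(t) = 2t - cos(t) + 1
B = lambda t: 2.0 * t - np.cos(t) + 1.0
x = lambda t: np.array([np.cos(B(t)), -np.sin(B(t))])

t, dt = 1.0, 1e-3
Xm, X0, Xp = x(t - dt), x(t), x(t + dt)

# companion coefficients: c0*x_- + c1*x_0 = x_+ (exact 2x2 solve, residual r = 0)
c0, c1 = np.linalg.solve(np.column_stack([Xm, X0]), Xp)
mu = np.roots([1.0, -c1, -c0])             # roots of mu^2 - c1*mu - c0 = 0

omega, domega = 2.0 + np.sin(t), np.cos(t)
growth_rate = np.log(np.abs(mu[0])) / dt   # ~ sigma + domega/(2*omega), NOT sigma = 0
freq = np.abs(np.angle(mu[0])) / dt        # ~ omega here, since sigma_dot = 0
```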
The appropriate way to handle this issue in the Koopman framework is to redefine the observables. Application of (\ref{eq:koop-eigen-def}) to the linear non-autonomous system (\ref{eq:LNA-oscillation}) gives eigenvalues and eigenfunctions of the Koopman operator family $\mathcal{K}^{(t,t_0)}$ that can be computed from the spectrum of matrix $\mathbf{A}(t)$
\begin{equation}
\lambda_{\pm}^{(t,t_0)}=\alpha(t,t_0)\pm i\beta(t,t_0), \phi_{\pm}^{(t,t_0)}=x_1\mp ix_2.
\end{equation}
However, this choice of observables does not resolve the Arnoldi-like method issue, because it does not decouple the real and imaginary parts of the eigenvalues.
We can obtain the desired decoupling by using observables defined by
\begin{equation}\label{eq:good-observables}
u_1 = \sqrt{x_1^2+x_2^2} \mbox{ and }
u_2 = (x_1 + ix_2)/u_1.
\end{equation}
These observables are also eigenfunctions of the same Koopman operator family $\mathcal{K}^{(t,t_0)}$ related to eigenvalues
\begin{equation}
\lambda_1^{(t,t_0)}=\alpha(t,t_0) \mbox{ and } \lambda_2^{(t,t_0)}=-i\beta(t,t_0)
\end{equation}
respectively.
It is easy to see that vector of these observables $\mathbf{u}=(u_1,u_2)^T$ satisfies
\begin{equation} \label{eq:LNA-osci-redef}
\dot{\mathbf{u}}=\mathbf{\tilde{A}}(t)\mathbf{u},
\end{equation}
with diagonal time dependent matrix $\mathbf{\tilde{A}}(t)$ whose diagonal elements are
\begin{equation}
\sigma(t) \mbox{ and } -i\omega(t).
\end{equation}
The fundamental matrix $\tilde{\mathcal{M}}(t,t_0)$ for (\ref{eq:LNA-osci-redef}) is also diagonal, with diagonal elements
\begin{equation}
e^{\alpha(t,t_0)} \mbox{ and } e^{-i\beta(t,t_0)}.
\end{equation}
With this choice of observables the proposed algorithm works well, and we get accurate identification of eigenvalues, as it can be seen in the examples in Section \ref{sec:results}. At the end, by using relations
\begin{equation}
x_1=u_1(u_2+\bar{u_2})/2 \mbox{ and } x_2=u_1(u_2-\bar{u_2})/2i
\end{equation}
we can reconstruct the full state observables.
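A minimal sketch of this change of observables and of the reconstruction (assuming NumPy; the helper names and the constant-coefficient trajectory used in the demonstration are our own choices):

```python
import numpy as np

def to_good_observables(x1, x2):
    u1 = np.sqrt(x1**2 + x2**2)    # u1 = |x|, evolves with the rate sigma(t)
    u2 = (x1 + 1j * x2) / u1       # u2 on the unit circle, rotates with -omega(t)
    return u1, u2

def from_good_observables(u1, u2):
    x1 = (u1 * (u2 + np.conj(u2)) / 2).real
    x2 = (u1 * (u2 - np.conj(u2)) / 2j).real
    return x1, x2

# demonstration on x(t) = e^{sigma t} (cos(omega t), -sin(omega t)),
# with constant sigma and omega (our own illustrative choice):
sigma, omega, t = -0.3, 2.0, 0.7
u1, u2 = to_good_observables(np.exp(sigma * t) * np.cos(omega * t),
                             -np.exp(sigma * t) * np.sin(omega * t))
# u1 = e^{sigma t} and u2 = e^{-i omega t}: growth and rotation are decoupled
```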
The algorithm is summarized below.
\\
\\
\begin{small}\label{alg:continuous}
{\bf Algorithm 2 (for continuous time dependency systems)}
\begin{enumerate}
\item Choose a set of observables $\mathbf{u}=(u_1,...,u_m)^T$ appropriate for the considered system.
\item Apply Arnoldi-like method to local stencil of two snapshots $\{u_i(t_{k-1}),u_i(t_{k})\}$, separately for each $i=1,...,m$, and then determine $\mathbf{\tilde{M}}_{k,k-1}$.
\item Compute $\mathbf{\tilde{M}}_{k,0}=\mathbf{\tilde{M}}_{k,k-1}\mathbf{\tilde{M}}_{k-1,0}$
\item Compute dynamical system matrix eigenvalues from $\mathbf{\tilde{M}}_{k,k-1}$ and Koopman operator eigenvalues from $\mathbf{\tilde{M}}_{k,0}$.
\item Repeat steps 2-4, for all $k=1,2,...$.
\end{enumerate}
\end{small}
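Since the dynamics is diagonal in the decoupled observables, the Arnoldi-like step on a two-snapshot stencil reduces to a ratio of consecutive snapshots. A minimal NumPy sketch of Algorithm 2 (names and the constant-coefficient usage example are ours):

```python
import numpy as np

def algorithm2(U):
    """Sketch of Algorithm 2. U is an m x K array whose row i holds the snapshots
    of the decoupled observable u_i; all maps are diagonal, so each local map is
    just the ratio of consecutive snapshots."""
    M_local = U[:, 1:] / U[:, :-1]        # diagonal entries of tilde-M_{k,k-1}
    M_cum = np.cumprod(M_local, axis=1)   # diagonal entries of tilde-M_{k,0}
    return M_local, M_cum

# usage on u1(t) = e^{-0.3 t}, u2(t) = e^{-2 i t} (constant sigma, omega; our choice)
dt = 0.01
times = np.arange(100) * dt
U = np.vstack([np.exp(-0.3 * times), np.exp(-2j * times)])
M_local, M_cum = algorithm2(U)
# Koopman eigenvalues lambda^{(t_k, t_0)} are recovered as logs of the diagonals
lam = np.log(M_cum)
```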
This approach leads us to the same good observables / coordinates that were pointed out in \cite{BudisicMezic_ApplKoop} through theoretical investigation.
Notice that if $r(\mathbf{x})$ and $\Theta(\mathbf{x})$ are the polar coordinates of the state vector $\mathbf{x}$ as defined in \cite{BudisicMezic_ApplKoop}, then $u_1 = r(\mathbf{x})$ and $u_2 = e^{ i \Theta(\mathbf{x})}$. Thus, we can conclude that the eigenfunctions $r(\mathbf{x})$ and $e^{i \Theta(\mathbf{x})}$ provide us with a good coordinate system for studying the dynamics of the considered system. Moreover, as was shown in \cite{BudisicMezic_ApplKoop}, the initial system defined with (\ref{eq:LNA-system}) and (\ref{eq:LNA-oscillation}) and the obtained system (\ref{eq:LNA-osci-redef}) are
topologically conjugate to each other through the nonlinear conjugate transformation
$m : \mathbf{R}^{2} \rightarrow \mathbf{R}^{+} \times S^{1}$ given by
\begin{equation}\label{eq:ConjMap}
m(\mathbf{x})=\left(r(\mathbf{x}),e^{i \Theta(\mathbf{x})} \right).
\end{equation}
\section{Numerical results}\label{sec:results}
In this section we first demonstrate the application of Algorithm 1 on hybrid linear non-autonomous systems (test examples \ref{ss:h2f}, \ref{ss:hdd}, \ref{ss:hco3}, and \ref{ss:mcm}). Then, on linear non-autonomous systems with a continuously changing underlying matrix (test examples \ref{ss:c2f}, \ref{ss:cdd}, and \ref{ss:cco3}), we show the performance of Algorithm 2. The effects of a change (discontinuous and continuous) in the imaginary part of the underlying matrix eigenvalues are examined in test examples \ref{ss:h2f} and \ref{ss:c2f}, while changes in the real part are examined in test examples \ref{ss:hdd} and \ref{ss:cdd}. Examples \ref{ss:hco3} and \ref{ss:cco3} are introduced to test the new algorithms on higher dimensional dynamical systems with all state variables strongly coupled. Test example \ref{ss:mcm} is a successful application of Algorithm~1 to multicompartment models with time delay \cite{Svenkeson}, which are often referenced in medicine and pharmacotherapy.
All the examples are chosen so that exact solutions and exact Koopman eigendecompositions can be computed. Also, unless stated otherwise, the sequence of snapshots for testing the data-driven algorithms is generated with time step $\Delta t=0.01$.
The SVD-enhanced DMD algorithm \cite{schmid:2010} is the Arnoldi-like method used in step 2 of both new algorithms. The new algorithms are also compared with the same method applied on a moving stencil. ``Standard DMD'' in all plot legends denotes the DMD algorithm from \cite{schmid:2010}.
Finally, in all the plots in which exact and numerical values overlap, the exact values are very slightly offset to ensure that both sets of data are visible.
\subsection{Switching frequency}\label{ss:h2f}
An oscillator with the switching frequency has governing equations of the form (\ref{eq:LNA-system}) with the underlying matrix (\ref{eq:LNA-hybrid}) where
\begin{equation}\label{eq:ex_2f}
\mathbf{A}_l = \left\{\begin{array}{ll}
\left(\begin{array}{cc}
0 & 1 \\
-\omega_1^2 & 0
\end{array} \right), & l=0,2,4,... \\
\\
\left(\begin{array}{cc}
0 & 1 \\
-\omega_2^2 & 0
\end{array} \right), & l=1,3,5,...
\end{array} \right.
\end{equation}
The eigenvalues of the underlying matrices are $\pm \omega_1 i$ and $\pm \omega_2 i$, and the matrices do not commute. For the frequency values $\omega_1=2$, $\omega_2=1$, and switching times $T_l=l$, $l=1,2,...$, the oscillator is unstable. The exact solution is given by (\ref{eq:LNA-hybrid-fmatrix}).
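Since each one-frequency block has the closed form $e^{\mathbf{A}t}$ with entries $\cos\omega t$, $\sin\omega t/\omega$, $-\omega\sin\omega t$, the exact fundamental matrix is a product of such blocks. The sketch below (ours, assuming NumPy; not the code used for the figures) evaluates $\mathcal{M}(4,0)$ and finds an eigenvalue of magnitude about $3.3$ off the unit circle, confirming the instability even though each block has determinant one:

```python
import numpy as np

def freq_block(omega, t):
    # closed-form exp(A t) for A = [[0, 1], [-omega**2, 0]]
    return np.array([[np.cos(omega * t), np.sin(omega * t) / omega],
                     [-omega * np.sin(omega * t), np.cos(omega * t)]])

# M(4, 0) for omega_1 = 2, omega_2 = 1 and switching times T_l = l:
# compose the one-frequency blocks piece by piece
M = np.eye(2)
for l in range(4):
    M = freq_block(2.0 if l % 2 == 0 else 1.0, 1.0) @ M
eigs = np.linalg.eigvals(M)   # off the unit circle although det M = 1
```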
In Fig.\ref{fig:h2f-01}(b) we show the exact fundamental matrix eigenvalues for the time interval $[0,5]$, and we observe eigenvalues that are not on the unit circle. This time interval is chosen since the eigenvalues off the unit circle grow even further with time, so the unit circle would not be visible on such a plot. In Fig.\ref{fig:h2f-02} the eigenvalues of the underlying dynamical system matrix are plotted, and these do not reveal the instability. However, when the time-dependent Koopman operator eigenvalues are plotted in Fig.\ref{fig:h2f-03}, we clearly observe the process of amplitude growth. The negative real parts of the Koopman eigenvalues belong to the particular solution that decays, which is added to the particular solution with growing real parts of the eigenvalues; the overall solution is unstable. In these figures the exact values and the values computed with Algorithm 1 overlap completely. However, if the dynamical system matrix eigenvalues are computed with the standard DMD algorithm on moving stencils (as in Fig.\ref{fig:h2f-02-err}), incorrect values appear at every switch. This causes a significant error in the Koopman operator eigenvalues as well (as in Fig.\ref{fig:h2f-03-err}). In Fig.\ref{fig:h2f-02-err}(c) we observe how the Krylov subspace projection error takes large values at every switch, which is the key point for Algorithm 1.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/h2f/fig01.png}
\caption{Exact solution starting with the initial condition (1,1) in the state space (a), and exact fundamental matrix eigenvalues over the time interval $[0,5]$ (b) for the dynamical system with switching frequency (\ref{eq:ex_2f}).}
\label{fig:h2f-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/h2f/fig02-err.png}
\caption{Dynamical system matrix eigenvalues and the projection error (\ref{eq:AA-prj-err}) for the dynamical system with switching frequency (\ref{eq:ex_2f}), computed with standard DMD.}
\label{fig:h2f-02-err}
\includegraphics[width=0.9\linewidth]{paper_images/h2f/fig03-err.png}
\caption{Koopman operator eigenvalues and the algorithm error (\ref{eq:data-alg-err}) for the dynamical system with switching frequency (\ref{eq:ex_2f}), computed with standard DMD.}
\label{fig:h2f-03-err}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/h2f/fig02.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with switching frequency (\ref{eq:ex_2f}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:h2f-02}
\includegraphics[width=0.9\linewidth]{paper_images/h2f/fig03.png}
\caption{Koopman operator eigenvalues for the dynamical system with switching frequency (\ref{eq:ex_2f}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:h2f-03}
\end{figure}
\subsection{Switching damped-driven behavior}\label{ss:hdd}
In this example we consider an oscillator with the switching damped-driven behavior, i.e. with governing equations (\ref{eq:LNA-system}), the underlying matrix (\ref{eq:LNA-hybrid}) and
\begin{equation}\label{eq:ex_dd}
\mathbf{A}_l = \left\{\begin{array}{ll}
\left(\begin{array}{cc}
\sigma_1 & 1 \\
-4 & \sigma_1
\end{array} \right), & l=0,2,4,... \\
\\
\left(\begin{array}{cc}
\sigma_2 & 1 \\
-4 & \sigma_2
\end{array} \right), & l=1,3,5,...
\end{array} \right.
\end{equation}
The eigenvalues of the underlying matrices are $\sigma_1\pm 2i$ and $\sigma_2\pm 2i$. For the real parts $\sigma_1=1$, $\sigma_2=-1$, and switching times $T_l=T_{l-1}+l/2$, $l=1,2,...$, the matrices commute. Therefore, we can solve the system analytically and obtain the fundamental matrix
\begin{equation}\label{eq:dd_fmatrix}
{\cal M}(t,0)=
\left(\begin{array}{cc}
e^{\alpha(t,0)}\cos(2t) & \frac{1}{2}e^{\alpha(t,0)}\sin(2t) \\
-2 e^{\alpha(t,0)}\sin(2t) & e^{\alpha(t,0)}\cos(2t) \end{array} \right),
\end{equation}
and its eigenvalues $\mu(t,0)=e^{\alpha(t,0)\pm 2ti}$. Here
\begin{equation}\label{eq:dd_fmatrix_eigens}
\alpha(t,0)=\int_{0}^{t}\sum_{l\ge 0}(-1)^{l}\,\mathbb{1}_{\left[T_{l},T_{l+1}\right\rangle}(\tau)\, d\tau.
\end{equation}
In this case the real part $\sigma$ switches between two values, one of which causes driven behavior and the other damped behavior. We show the exact solution in Fig.\ref{fig:hdd-01}. In Fig.\ref{fig:hdd-02} we observe the correct eigenvalues of the underlying matrix. The time-dependent Koopman eigenvalues in Fig.\ref{fig:hdd-03} clearly show the oscillation (plot (b)) and the damping-driving switching at the correct switching times (plot (a)). The results obtained with Algorithm 1 match the exact ones up to the machine round-off error.
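Because the matrices in (\ref{eq:ex_dd}) commute, the closed form (\ref{eq:dd_fmatrix})-(\ref{eq:dd_fmatrix_eigens}) can be checked numerically by propagating piecewise with matrix exponentials; a minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

def A(sigma):
    # Commuting family A_l = sigma*I + J, with J = [[0, 1], [-4, 0]];
    # eigenvalues are sigma +/- 2i.
    return np.array([[sigma, 1.0], [-4.0, sigma]])

def pieces(t):
    """Yield (sigma_l, length) for each constant-sigma piece of [0, t]."""
    T, l = 0.0, 0
    while T < t:
        T_next = T + (l + 1) / 2.0          # T_l = T_{l-1} + l/2
        yield (1.0 if l % 2 == 0 else -1.0), min(T_next, t) - T
        T, l = T_next, l + 1

t = 2.0
M = np.eye(2)
alpha = 0.0
for sigma, dt in pieces(t):
    M = expm(A(sigma) * dt) @ M             # exact propagation, piece by piece
    alpha += sigma * dt                      # alpha(t,0) = integral of sigma

# Closed form M(t,0) = e^alpha * [[cos 2t, sin 2t / 2], [-2 sin 2t, cos 2t]]
M_exact = np.exp(alpha) * np.array([[np.cos(2 * t), np.sin(2 * t) / 2],
                                    [-2 * np.sin(2 * t), np.cos(2 * t)]])
print(np.allclose(M, M_exact))
print(abs(np.linalg.eigvals(M)))            # both moduli equal e^alpha
```

The check confirms that the eigenvalue moduli equal $e^{\alpha(t,0)}$, i.e. the switching between damping and driving is entirely encoded in $\alpha$.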
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/hdd/fig01.png}
\caption{Exact solution starting with the initial condition (1,1) in the state space (a), and exact fundamental matrix eigenvalues (b), for the dynamical system with switching damped-driven behavior (\ref{eq:ex_dd}).}
\label{fig:hdd-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/hdd/fig02.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with switching damped-driven behavior (\ref{eq:ex_dd}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:hdd-02}
\includegraphics[width=0.9\linewidth]{paper_images/hdd/fig03.png}
\caption{Koopman operator eigenvalues for the dynamical system with switching damped-driven behavior (\ref{eq:ex_dd}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:hdd-03}
\end{figure}
\subsection{Coupled oscillators with switching frequency}\label{ss:hco3}
Here we consider a two-degree-of-freedom oscillator with masses $m_1$ and $m_2$ and spring elasticities $k_1$, $k_2$, and $k_3$, as shown in Fig.\ref{fig:mass_spring} \cite{Veselic}.
\begin{figure}
\centering
\resizebox{0.6\linewidth}{!}{
\begin{tikzpicture}[mass/.style={draw,outer sep=0pt,thick}]
\tikzstyle{spring}=[thick,decorate,decoration={coil,aspect=0.7,amplitude=5,pre length=0.3cm,post length=0.3cm,segment length=6}]
\tikzstyle{ground}=[fill,pattern=north east lines,draw=none,minimum width=0.75cm,minimum height=0.3cm]
\node [mass] (M1) [minimum width=0.5cm, minimum height=2.5cm] {$m_1$};
\node (wall) [ground, rotate=-90, minimum width=3cm,yshift=-3cm] {};
\draw (wall.north east) -- (wall.north west);
\draw [spring] (wall.100) -- ($(M1.north west)!(wall.100)!(M1.south west)$)
node[below = 0.5 cm, left = 0.9 cm] {$k_1$};
\node[at={($(M1.east)+(3cm,0)$)}] [mass] (M2) [minimum width=0.5cm, minimum height=2.5cm] {$m_2$};
\draw [spring] ($(M1.south east)!(M1.170)!(M1.north east)$) -- ($(M1.north east)!(M2.170)!(M2.west)$)
node[below = 0.5 cm, left = 1 cm] {$k_2$};
\node[at={($(M1.east)+(3cm,0)$)}] (wall2) [ground, rotate=90, minimum width=3cm,yshift=-3cm] {};
\draw (wall2.north east) -- (wall2.north west);
\draw [spring] ($(M2.north east)!(wall2.100)!(M2.south east)$) -- (wall2.100)
node[below = 0.5 cm, left = 0.9 cm] {$k_3$};
\end{tikzpicture}
}
\caption{Mass spring oscillator}
\label{fig:mass_spring}
\end{figure}
For this system of coupled oscillators Newton's law gives
\begin{equation}
m_1 \ddot{x}_1=-k_1 x_1-k_2(x_1-x_2)
\end{equation}
\begin{equation}
m_2 \ddot{x}_2=-k_2(x_2-x_1)-k_3 x_2
\end{equation}
where $x_1$ and $x_2$ are mass displacements from the equilibrium position.
If we add variables $x_3=\dot{x}_1$ and $x_4=\dot{x}_2$ we obtain (\ref{eq:LNA-system}) with the underlying matrix
\begin{equation}\label{eq:hco3-matrix}
\mathbf{A} =
\left(\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-\frac{k_1+k_2}{m_1} & \frac{k_2}{m_1} & 0 & 0 \\
\frac{k_2}{m_2} & -\frac{k_2+k_3}{m_2} & 0 & 0
\end{array} \right).
\end{equation}
If we solve the generalized eigenvalue problem
\begin{equation}\label{eq:co3-geneig-1}
\det\left(\mathbf{K}-\nu_j\mathbf{M}\right)=0,
\end{equation}
$j=1,2$ for matrices
\begin{equation}\label{eq:co3-geneig-2}
\mathbf{K}=
\left(
\begin{array}{cc}
k_1+k_2 & -k_2 \\
-k_2 & k_2+k_3
\end{array}
\right) \mbox{ and }
\mathbf{M}=
\left(
\begin{array}{cc}
m_1 & 0 \\
0 & m_2
\end{array}
\right)
\end{equation}
then
\begin{equation}
\pm i\omega_j = \pm i\sqrt{\nu_j}, \quad j=1,2
\end{equation}
are the four eigenvalues of (\ref{eq:hco3-matrix}).
Now we suppose there is a sequence of time moments
$T_l=l$, $l=0,1,...$ at which elasticity coefficients change value, i.e.
\begin{equation}
k_j(t)=k_j^{(l)} \mbox{ for } t\in[T_l,T_{l+1}\rangle, \quad j=1,2,3.
\end{equation}
For the computations, we take $T_l=l$ for $l=0,1,...$, $m_1=m_2=1$, $k_2=1$,
\begin{equation}
k_1^{(l)}= \left\{\begin{array}{ll}
4, & l=0,2,4,... \\
9, & l=1,3,5,...
\end{array} \right.
\mbox{ and }
k_3^{(l)}= \left\{\begin{array}{ll}
9, & l=0,2,4,... \\
16, & l=1,3,5,...
\end{array} \right.
\end{equation}
For this choice of values the exact eigenvalues of (\ref{eq:hco3-matrix}) are
\begin{equation}\label{eq:hco3-freqs}
\pm i\omega_{1,2}^{(l)}= \left\{\begin{array}{ll}
\pm i \sqrt{(15\pm\sqrt{29})/2}, & l=0,2,4,... \\
\pm i \sqrt{(27\pm\sqrt{53})/2}, & l=1,3,5,...
\end{array} \right.
\end{equation}
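The squared frequencies $\nu_j$ can be checked by solving the generalized eigenvalue problem (\ref{eq:co3-geneig-1})-(\ref{eq:co3-geneig-2}) numerically; a minimal sketch:

```python
import numpy as np
from scipy.linalg import eigh

def frequencies(k1, k2, k3, m1=1.0, m2=1.0):
    """Normal frequencies omega_j = sqrt(nu_j) from det(K - nu*M) = 0."""
    K = np.array([[k1 + k2, -k2], [-k2, k2 + k3]])
    M = np.array([[m1, 0.0], [0.0, m2]])
    nu = eigh(K, M, eigvals_only=True)   # generalized symmetric eigenproblem
    return np.sqrt(nu)

# Even intervals: k1 = 4, k3 = 9; odd intervals: k1 = 9, k3 = 16 (k2 = 1).
w_even = frequencies(4.0, 1.0, 9.0)
w_odd = frequencies(9.0, 1.0, 16.0)
print(w_even)
print(w_odd)
```

For the even intervals ($k_1=4$, $k_3=9$) this gives $\nu=(15\pm\sqrt{29})/2$, and for the odd ones ($k_1=9$, $k_3=16$) it gives $\nu=(27\pm\sqrt{53})/2$.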
For the exact solution (Fig.\ref{fig:hco3-01}) we use (\ref{eq:LNA-hybrid-fmatrix}).
In Fig.\ref{fig:hco3-01}(a) the solution pairs $(x_1,x_3)$ and $(x_2,x_4)$ are plotted.
In Fig.\ref{fig:hco3-01}(b) eigenvalues off the unit circle appear, which is consistent with the fact that, due to the switching of the elasticity coefficients, the frequencies of the coupled oscillators switch as well (\ref{eq:hco3-freqs}). Therefore, some instabilities appear; however, they are short-lived and appear at seemingly random times.
The eigenvalues of the dynamical system matrix do not reveal that instability in an obvious way (Fig.\ref{fig:hco3-02}), but the time-dependent Koopman eigenvalues clearly show both the change in the frequency of the oscillations (Fig.\ref{fig:hco3-03}(b)) and the bursts of the amplitude of the oscillations (Fig.\ref{fig:hco3-03}(a)). Even in this four-dimensional case with strongly coupled state variables, Algorithm 1 gives highly accurate results.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/hco3/fig01.png}
\caption{Exact solution starting with the initial condition (1,1,1,1) in the state space (a), and exact fundamental matrix eigenvalues (b) for the coupled oscillators with switching frequency (\ref{eq:hco3-matrix}).}
\label{fig:hco3-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/hco3/fig02.png}
\caption{Dynamical system matrix eigenvalues for the coupled oscillators with switching frequency (\ref{eq:hco3-matrix}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:hco3-02}
\includegraphics[width=0.9\linewidth]{paper_images/hco3/fig03.png}
\caption{Koopman operator eigenvalues for the coupled oscillators with switching frequency (\ref{eq:hco3-matrix}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:hco3-03}
\end{figure}
\subsection{Multicompartment model with delay}\label{ss:mcm}
Multicompartment models are often used in medicine and pharmacotherapy. These are models of the form
\begin{equation}\label{eq:mcm_ode}
\dot{x}_{i} = - \sum_{j=1, j\ne i}^{n} k_{ij} x_i + \sum_{j=1, j\ne i}^{n} k_{ji} x_j \quad \mbox{ for }i=1,...,n.
\end{equation}
Here $x_i=x_i(t)$ is the concentration of a substance in the $i^{th}$ compartment, $i=1,...,n$, and $k_{ij}$ is the rate coefficient of the transport of the substance from the $i^{th}$ to the $j^{th}$ compartment, $i,j=1,...,n$, $i\ne j$.
As usual, concentrations are expressed in relative terms, i.e. as fractions of the sum of the initial concentrations. In the closed case (\ref{eq:mcm_ode}) the sum of the concentrations is constant, i.e.
\begin{equation}\label{eq:mcm_sum}
\sum_{i=1}^{n}{x}_{i} = 1.
\end{equation}
If the transfer between two compartments starts only after some delay time, this can be modeled by using time-dependent coefficients of the form
\begin{equation}\label{eq:mcm_k}
k_{ij}(t)=\left\{
\begin{array}{ll}
0 &\mbox{ if } t<T_{ij}\\
K_{ij} &\mbox{ if } t\ge T_{ij}
\end{array}
\right.
\end{equation}
where $T_{ij}$ is the delay time, $i,j=1,...,n$, $i\ne j$. In such a case the model is a hybrid linear non-autonomous system.
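Since the coefficients (\ref{eq:mcm_k}) are piecewise constant, the hybrid system can be propagated exactly between consecutive delay times by matrix exponentials. The sketch below uses the values of Table \ref{tbl:mcm_kt} and confirms the conservation property (\ref{eq:mcm_sum}); the final time of 100 is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import expm

# Rate coefficients K_ij and delay times T_ij from Table (tbl:mcm_kt);
# k_ij(t) = K_ij for t >= T_ij and 0 before.
RATES = {(1, 2): (0.0988, 0), (2, 1): (0.1410, 5), (2, 3): (0.0590, 3),
         (3, 4): (0.1150, 18), (4, 1): (0.0149, 30), (4, 5): (0.0154, 55)}

def rate_matrix(t, n=5):
    A = np.zeros((n, n))
    for (i, j), (K, T) in RATES.items():
        if t >= T:
            A[j - 1, i - 1] += K     # inflow into compartment j from i
            A[i - 1, i - 1] -= K     # outflow from compartment i
    return A                          # columns sum to zero: closed system

# The matrix is constant between delay times, so exact propagation is
# again a product of matrix exponentials.
switches = sorted({float(T) for _, T in RATES.values()} | {100.0})
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for t0, t1 in zip(switches[:-1], switches[1:]):
    x = expm(rate_matrix(t0) * (t1 - t0)) @ x
print(x, x.sum())    # total concentration stays 1
```

Because every column of the rate matrix sums to zero, each exponential factor preserves the total concentration, which reproduces (\ref{eq:mcm_sum}) exactly.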
We consider the multicompartment model for endosomal trafficking of $eL^d$ molecules \cite{mahmutefendic2017late}. In that paper among other results, an application of a five compartment model was presented, where the non-zero rate coefficients and delay times were obtained that minimized the difference between measurements and simulation (Table \ref{tbl:mcm_kt}).
\begin{table}[!htb]
\centering
\caption{Example \ref{ss:mcm}, rate coefficients and delay times}
\label{tbl:mcm_kt}
\begin{tabular}{|c|c|c|}
\hline
(i,j) & $K_{ij}$ & $T_{ij}$ \\
\hline
(1,2) & 0.0988 & 0 \\
(2,1) & 0.1410 & 5 \\
(2,3) & 0.0590 & 3 \\
(3,4) & 0.1150 & 18 \\
(4,1) & 0.0149 & 30 \\
(4,5) & 0.0154 & 55 \\
\hline
\end{tabular}
\end{table}
In the computations we must take into account that the number of independent observables is not equal to the state dimension, for two reasons. The first reason is the conservation law (\ref{eq:mcm_sum}), so the concentration in the $5^{th}$ compartment can be eliminated from the observable set. The second reason is that zero values of the rate coefficients may leave some compartments completely inactive for some time, so those concentrations should not be included in the set of observables either. The necessary number of observables can easily be identified by checking the dimension of the Krylov subspace.
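This rank check can be illustrated on the early phase of the model ($t<3$), where only the $(1,2)$ transfer is active: the snapshot (Krylov) matrix then has rank 2 rather than 5. A minimal sketch (the snapshot spacing and count are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# Early phase of the compartment model (t < 3): only k_12 = 0.0988 is active,
# so of the five concentrations only x_1 and x_2 evolve, with x_1 + x_2 = 1.
A = np.zeros((5, 5))
A[0, 0], A[1, 0] = -0.0988, 0.0988      # transfer 1 -> 2

dt = 0.1
P = expm(A * dt)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
snapshots = []
for _ in range(30):
    snapshots.append(x)
    x = P @ x
S = np.column_stack(snapshots)

# The rank of the snapshot matrix gives the number of independent
# observables: here 2, not 5, since three compartments are still inactive.
print(np.linalg.matrix_rank(S, tol=1e-8))
```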
With this addition to Algorithm 1, we obtain excellent results, as can be seen in Fig.\ref{fig:mcm-02} and Fig.\ref{fig:mcm-03}. The eigenvalues of the underlying matrix and the time delays are correctly identified (Fig.\ref{fig:mcm-02}). In Fig.\ref{fig:mcm-03}(b) we see that the imaginary parts of the time-dependent Koopman eigenvalues are zero, so there are no oscillations in this system. In Fig.\ref{fig:mcm-03}(a) the real part of one of the Koopman eigenvalues is zero, which is consistent with the fact that the sum of all concentrations is constant in time (\ref{eq:mcm_sum}). The other time-dependent real parts of the Koopman eigenvalues are negative and decreasing, so the related particular solutions of the governing equations vanish as time increases. This is consistent with the behavior of the solution, which obviously converges to a steady state (Fig.\ref{fig:mcm-01}).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/mcm/fig01.png}
\caption{Exact solution starting with the initial condition (1,0,0,0,0) (a), and exact compartment concentrations in state space (b) for the multicompartment model with delay (\ref{eq:mcm_ode}).}
\label{fig:mcm-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/mcm/fig02.png}
\caption{Dynamical system matrix eigenvalues for the multicompartment model with delay (\ref{eq:mcm_ode}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:mcm-02}
\includegraphics[width=0.9\linewidth]{paper_images/mcm/fig03.png}
\caption{Koopman operator eigenvalues for the multicompartment model with delay (\ref{eq:mcm_ode}), computed with Algorithm 1. (Exact values are offset to improve visibility.)}
\label{fig:mcm-03}
\end{figure}
\subsection{Continuous frequency change}\label{ss:c2f}
In this example we consider an oscillator with a continuously changing frequency. The governing equations are (\ref{eq:LNA-system}) with the underlying matrix (\ref{eq:LNA-oscillation}), where we additionally set $\sigma(t)=0$ and
\begin{equation}\label{eq:c2f}
\omega(t)=\omega_0+A_d\cos(\omega_d t)+B_d\sin(\omega_d t)
\end{equation}
We can solve it analytically using the fundamental matrix (\ref{eq:osci-fmatrix}) with $\alpha(t,t_0)=0$ and
$$ \beta(t,t_0)=\omega_0(t-t_0)+
\frac{A_d}{\omega_d}\left(\sin(\omega_d t) - \sin(\omega_d t_0)\right) -
\frac{B_d}{\omega_d}\left(\cos(\omega_d t) - \cos(\omega_d t_0)\right)$$
The computations are performed for $\omega_0=2$, $\omega_d=\pi$, and $A_d=0.5$.
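The closed-form phase $\beta(t,t_0)$ can be verified against a direct quadrature of $\omega(\tau)$. In the sketch below $B_d$, which is not specified above, is set to zero as an assumption:

```python
import numpy as np
from scipy.integrate import quad

w0, wd, Ad, Bd = 2.0, np.pi, 0.5, 0.0   # B_d is an assumed value

def omega(t):
    return w0 + Ad * np.cos(wd * t) + Bd * np.sin(wd * t)

def beta_exact(t, t0):
    # Closed-form phase: beta(t, t0) = integral of omega(tau) from t0 to t
    return (w0 * (t - t0)
            + Ad / wd * (np.sin(wd * t) - np.sin(wd * t0))
            - Bd / wd * (np.cos(wd * t) - np.cos(wd * t0)))

t0, t = 0.0, 2.7
val, _ = quad(omega, t0, t)             # numerical quadrature of omega
print(beta_exact(t, t0), val)           # the two values agree
```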
The exact solution (Fig.\ref{fig:c2f-01}) shows that this continuous change in the frequency of the underlying matrix does not produce instabilities. All eigenvalues are on the unit circle and there are no damping-driving effects.
This is confirmed by the results obtained using Algorithm 2. In Fig.\ref{fig:c2f-02}(a) and Fig.\ref{fig:c2f-03}(a) we see that the real parts of both the dynamical system matrix and the Koopman operator eigenvalues stay equal to zero at all times. The imaginary parts of the dynamical system matrix eigenvalues (Fig.\ref{fig:c2f-02}(b)) and of the Koopman operator eigenvalues (Fig.\ref{fig:c2f-03}(b)) computed with Algorithm 2 show the correct time-dependency.
However, when the computations are performed with any Arnoldi-like method on moving stencils, the numerically evaluated real part of the eigenvalue of the underlying matrix displays a nonexistent time-dependency (Fig.\ref{fig:c2f-02-err}(a)), while the imaginary parts of those eigenvalues are correct (Fig.\ref{fig:c2f-02-err}(b)). As proven in Theorem \ref{thm:NLA-Aerror}, the numerical results for the imaginary parts of the dynamical system matrix eigenvalues are correct because there is no time change in $\sigma$. Also, as proven in Theorem \ref{thm:NLA-Aerror}, the numerical results for the real parts of the dynamical system matrix eigenvalues are compromised by an error which is proportional to the time derivative of $\omega(t)$.
This error then propagates into the Koopman operator eigenvalue computations (Fig.\ref{fig:c2f-03-err}(a) and (b)). From those results one might conclude that there is an amplitude change in the system, that at some time moments the frequencies stay at the value $\pm \pi$ (Fig.\ref{fig:c2f-03-err}(b)), and that the real parts of the eigenvalues then split into two different values (Fig.\ref{fig:c2f-03-err}(a)). All of this is completely erroneous, as is additionally confirmed by the algorithm error plotted in Fig.\ref{fig:c2f-03-err}(c).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/c2f/fig01.png}
\caption{Exact solution starting with the initial condition (1,1) in the state space (a), and exact fundamental matrix eigenvalues (b) for the dynamical system with continuous frequency change (\ref{eq:c2f}).}
\label{fig:c2f-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/c2f/fig02-err.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with continuous frequency change (\ref{eq:c2f}), computed with standard DMD.}
\label{fig:c2f-02-err}
\includegraphics[width=0.9\linewidth]{paper_images/c2f/fig03-err.png}
\caption{Koopman operator eigenvalues and the algorithm error (\ref{eq:data-alg-err}) for the dynamical system with continuous frequency change (\ref{eq:c2f}), computed with standard DMD.}
\label{fig:c2f-03-err}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/c2f/fig02.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with continuous frequency change (\ref{eq:c2f}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:c2f-02}
\includegraphics[width=0.9\linewidth]{paper_images/c2f/fig03.png}
\caption{Koopman operator eigenvalues for the dynamical system with continuous frequency change (\ref{eq:c2f}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:c2f-03}
\end{figure}
\subsection{Continuous change in damping rate}\label{ss:cdd}
Here we consider an oscillator with a continuously changing damping rate. It is a linear non-autonomous system (\ref{eq:LNA-system}) with the underlying matrix (\ref{eq:LNA-oscillation}), where we additionally set $\omega(t)=\omega_0$ and
\begin{equation}\label{eq:cdd}
\sigma(t)=\sigma_0+A_d\cos(\omega_d t)+B_d\sin(\omega_d t)
\end{equation}
We can solve it analytically using the fundamental matrix (\ref{eq:osci-fmatrix}) with $\beta(t,t_0)=\omega_0(t-t_0)$ and
$$ \alpha(t,t_0)=\sigma_0(t-t_0)+
\frac{A_d}{\omega_d}\left(\sin(\omega_d t) - \sin(\omega_d t_0)\right) -
\frac{B_d}{\omega_d}\left(\cos(\omega_d t) - \cos(\omega_d t_0)\right)$$
The computations are performed for $\sigma_0=0$, $\omega_0=2$, $\omega_d=\pi$, and $A_d=0.5$.
Since the amplitude is changing, the solution appears non-symmetric relative to the axes in the state space (Fig.\ref{fig:cdd-01}(a)), and this change is confirmed in Fig.\ref{fig:cdd-01}(b).
If the computations on the snapshots are performed with any Arnoldi-like method on moving stencils, we get the results shown in Fig.\ref{fig:cdd-02-err} and Fig.\ref{fig:cdd-03-err}. As proven in Theorem \ref{thm:NLA-Aerror}, since $\omega$ is constant in time, there is no error in the numerical real parts of the dynamical system matrix eigenvalues (Fig.\ref{fig:cdd-02-err}(a)). Also, the error in the numerical imaginary parts of the dynamical system matrix eigenvalues is, as explained in the same theorem, proportional to the time derivative of $\sigma(t)$ (Fig.\ref{fig:cdd-02-err}(b)). This causes errors in the numerical time-dependent Koopman operator eigenvalues (Fig.\ref{fig:cdd-03-err}(a) and (b)) and produces a large algorithm error (Fig.\ref{fig:cdd-03-err}(c)).
All these issues are eliminated by applying Algorithm 2, which gives highly accurate results, as shown in Fig.\ref{fig:cdd-02} and Fig.\ref{fig:cdd-03}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cdd/fig01.png}
\caption{Exact solution starting with the initial condition (1,1) in the state space (a), and exact fundamental matrix eigenvalues (b) for the dynamical system with continuous change in damping rate (\ref{eq:cdd}).}
\label{fig:cdd-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cdd/fig02-err.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with continuous change in damping rate (\ref{eq:cdd}), computed with standard DMD.}
\label{fig:cdd-02-err}
\includegraphics[width=0.9\linewidth]{paper_images/cdd/fig03-err.png}
\caption{Koopman operator eigenvalues and the algorithm error (\ref{eq:data-alg-err}) for the dynamical system with continuous change in damping rate (\ref{eq:cdd}), computed with standard DMD.}
\label{fig:cdd-03-err}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cdd/fig02.png}
\caption{Dynamical system matrix eigenvalues for the dynamical system with continuous change in damping rate (\ref{eq:cdd}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:cdd-02}
\includegraphics[width=0.9\linewidth]{paper_images/cdd/fig03.png}
\caption{Koopman operator eigenvalues for the dynamical system with continuous change in damping rate (\ref{eq:cdd}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:cdd-03}
\end{figure}
\subsection{Nonautonomous coupled oscillators}\label{ss:cco3}
Let us consider two coupled oscillators (similar to Section \ref{ss:hco3}), but now both with continuously changing frequencies.
In order to solve such a system analytically we write it in the equivalent form (see \cite{Veselic})
with underlying matrix
\begin{equation}\label{eq:cco3-matrix}
\mathbf{A}(t) =
\left(\begin{array}{cccc}
0 & 0 & \omega_1(t) & 0 \\
0 & 0 & 0 & \omega_2(t) \\
-\omega_1(t) & 0 & 0 & 0 \\
0 & -\omega_2(t) & 0 & 0
\end{array} \right).
\end{equation}
Here $\omega_j(t)=\sqrt{\nu_j(t)}$, $j=1,2$, where $\nu_j(t)$ are computed from the generalized eigenvalue problem (\ref{eq:co3-geneig-1})-(\ref{eq:co3-geneig-2}) for each $t>t_0$.
In this form the matrix family is commutative, so we can compute the fundamental matrix
\begin{equation}\label{eq:cco3-fmatrix}
\mathcal{M}(t,t_0)=
\left(\begin{array}{cccc}
\cos\beta_1 & 0 & \sin\beta_1 & 0 \\
0 & \cos\beta_2 & 0 & \sin\beta_2 \\
-\sin\beta_1 & 0 & \cos\beta_1 & 0 \\
0 & -\sin\beta_2 & 0 & \cos\beta_2
\end{array} \right),
\end{equation}
where
\begin{equation}\label{eq:cco3-ab}
\beta_j=\beta_j(t,t_0)=\int_{t_0}^{t}\omega_j(\tau)\,d\tau, \quad j=1,2.
\end{equation}
This is a four-dimensional test example with all variables strongly coupled.
In the computations we take $m_1=m_2=1$, $k_1=2$, $k_2=1$, $k_3=3$, and we compute the constant part of the frequencies, $\omega_{1,2}^{(0)}=\sqrt{(7\pm\sqrt{5})/2}$, from (\ref{eq:co3-geneig-1})-(\ref{eq:co3-geneig-2}). Then we add a frequency forcing term
\begin{equation}\label{eq:cco3-fast}
\omega_1(t)=\omega_1^{(0)}+\frac{1}{2}\left(\cos(2 t)+\sin(2 t)\right)
\end{equation}
to one oscillator, and another frequency forcing term
\begin{equation}\label{eq:cco3-slow}
\omega_2(t)=\omega_2^{(0)}+\frac{1}{2}\left(\cos(0.4 t)+\sin(0.4 t)\right)
\end{equation}
to the other oscillator. Observe that, from the perspective of Theorem \ref{thm:NLA-Aerror}, (\ref{eq:cco3-fast}) will produce a larger error than (\ref{eq:cco3-slow}).
In Fig.\ref{fig:cco3-01}(a) the solution pairs $(x_1,x_3)$ and $(x_2,x_4)$ are plotted.
This example is important since it is higher dimensional and the underlying matrix is not of the form examined in Theorem \ref{thm:NLA-Aerror}. We want to see whether something similar to what is proven in that theorem occurs, and to test whether Algorithm 2 with an appropriate choice of observables gives good results.
First, we compute the dynamical system matrix and Koopman operator eigenvalues with standard DMD on moving stencils and obtain the results presented in Fig.\ref{fig:cco3-02-err} and Fig.\ref{fig:cco3-03-err}. As expected, while the exact real parts of the eigenvalues are zero, the numerical real parts exhibit an error (Fig.\ref{fig:cco3-02-err}(a)).
The only part that is computed almost accurately is the part related to the slower frequency forcing term (\ref{eq:cco3-slow}) (Fig.\ref{fig:cco3-02-err}(b)). But what we observe in Fig.\ref{fig:cco3-02-err}(b) is that the errors in the imaginary parts of the eigenvalues are contributed not only by the real parts (which are zero here) but also by the imaginary part of the other oscillator.
This indicates that the error proven in Theorem \ref{thm:NLA-Aerror} escalates with the complexity of the system.
Now in order to apply Algorithm 2, in step 2 we define new observables
\begin{equation}\label{eq:cco3-good-obs-1}
u_1 = \sqrt{x_1^2+x_3^2},
u_2 = (x_1+ix_3)/u_1,
\end{equation}
\begin{equation}\label{eq:cco3-good-obs-2}
u_3 = \sqrt{x_2^2+x_4^2},
u_4 = (x_2+ix_4)/u_3.
\end{equation}
Notice that the observable pair $(u_1,u_2)$ is obtained by a nonlinear conjugate transformation of the form (\ref{eq:ConjMap}) applied to the state variables $(x_1,x_3)$, and the same holds for the observable pair $(u_3,u_4)$ and the state variables $(x_2,x_4)$.
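The observables (\ref{eq:cco3-good-obs-1})-(\ref{eq:cco3-good-obs-2}) amount to a polar decomposition of each oscillator into an amplitude part and a unit-modulus phase part, each carrying a single frequency; a minimal sketch (note the normalization of $u_4$ by $u_3$):

```python
import numpy as np

def observables(x1, x2, x3, x4):
    """Polar-form observables: amplitude u1, u3 and phase factors u2, u4."""
    u1 = np.sqrt(x1**2 + x3**2)
    u2 = (x1 + 1j * x3) / u1     # unit-modulus phase of oscillator 1
    u3 = np.sqrt(x2**2 + x4**2)
    u4 = (x2 + 1j * x4) / u3     # unit-modulus phase of oscillator 2
    return u1, u2, u3, u4

u1, u2, u3, u4 = observables(1.0, 1.0, 1.0, 1.0)
print(abs(u2), abs(u4))          # the phase observables lie on the unit circle
```

Because each of $u_2$ and $u_4$ carries a single frequency, a stencil with only two snapshots contains enough information to reconstruct it, which is exactly what Algorithm 2 exploits.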
A comparison between the exact eigenvalues and the numerical eigenvalues obtained with Algorithm 2 (Fig.\ref{fig:cco3-02} and Fig.\ref{fig:cco3-03}) shows once more that the issues disappear and that Algorithm 2 gives highly accurate results.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cco3/fig01.png}
\caption{Exact solution starting with the initial condition (1,1,1,1) in the state space (a), and exact fundamental matrix eigenvalues (b) for the coupled oscillators with continuous frequency change (\ref{eq:cco3-matrix}).}
\label{fig:cco3-01}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cco3/fig02-err.png}
\caption{Dynamical system matrix eigenvalues for the coupled oscillators with continuous frequency change (\ref{eq:cco3-matrix}), computed with standard DMD.}
\label{fig:cco3-02-err}
\includegraphics[width=0.9\linewidth]{paper_images/cco3/fig03-err.png}
\caption{Koopman operator eigenvalues for the coupled oscillators with continuous frequency change (\ref{eq:cco3-matrix}), computed with standard DMD.}
\label{fig:cco3-03-err}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\linewidth]{paper_images/cco3/fig02.png}
\caption{Dynamical system matrix eigenvalues for the coupled oscillators with continuous frequency change (\ref{eq:cco3-matrix}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:cco3-02}
\includegraphics[width=0.9\linewidth]{paper_images/cco3/fig03.png}
\caption{Koopman operator eigenvalues for the coupled oscillators with continuous frequency change (\ref{eq:cco3-matrix}), computed with Algorithm 2. (Exact values are offset to improve visibility.)}
\label{fig:cco3-03}
\end{figure}
\section{Conclusion}
In this paper we prove a close connection between the Koopman operator family for linear non-autonomous dynamical systems and the fundamental matrix family of the underlying system of governing equations. This results in a finite-dimensional Koopman expansion for the full state observable.
Moreover, if a new set of observables is introduced which, after the transformation, also satisfies governing equations of the form (\ref{eq:LNA-system}), the same Theorem \ref{thm:kopp-fmatrix} can be applied again.
We then analyze data-driven algorithms on hybrid linear non-autonomous systems. We discover that an increase in the Krylov subspace projection error signals the time moments at which the values in the underlying matrix switch. An appropriate use of this information leads to a full detection of the governing equations. This approach is formalized as Algorithm 1.
Another important result is the revealed nature of the error that occurs when Arnoldi-like methods are used for the approximation of time-dependent underlying matrices. The error in these approximations is proportional to the time derivatives of the eigenvalues. At the core of this issue lies the fact that Arnoldi-like methods are constructed to capture the eigenvalues of constant high-dimensional matrices. Particularly significant is the fact that this error does not vanish if we decrease the time step between snapshots.
To resolve this issue for continuously time-dependent non-autonomous systems, we propose Algorithm 2. The essence of this algorithm is that we must introduce observables containing only one real or one imaginary part of the eigenvalues, so that a stencil with only two snapshots contains enough information to reconstruct that value. Arnoldi-like methods, which expect the underlying matrix to be constant, will produce a significant error on any larger stencil span. One good path towards the definition of appropriate observables is through nonlinear conjugate transformations of the form (\ref{eq:ConjMap}), which might lead to the discovery of new, adaptive algorithms for observable selection.
All numerical results show that the application of Arnoldi-like methods on moving stencils to data collected from non-autonomous systems leads to wrong conclusions about the nature of the dynamical system. Furthermore, all numerical tests confirm the high accuracy of both algorithms proposed in this paper.
\section{Acknowledgment}
This research has been supported by the DARPA Contract HR0011-16-C-0116 "On A Data-Driven,
Operator-Theoretic Framework for Space-Time Analysis of Process Dynamics". S.M. is grateful to Dr. Ryan Mohr and Dr. Maria Fonoberova for helpful mathematical discussions and comments on the manuscript.
\bibliographystyle{plain}
\section{Introduction}
\label{Intro}
Focused Electron Beam Induced Deposition (FEBID) is a technology for the controllable fabrication of complex nanostructures with nanometer resolution \cite{Utke_book_2012,DeTeresa-book2020,Winkler_2019_JAP_review, Huth_2021_JAP_review}. The FEBID process consists of the deposition of organometallic precursor molecules on a substrate and irradiation of the adsorbed molecules by a focused keV-energy electron beam. Electron-induced decomposition releases organic ligands resulting in clusterization of the precursor's metallic component on a surface. The lateral size of the resulting deposit is comparable to that of the incident electron beam (typically, $\sim$1--10 nanometers) \cite{Plank2020}.
The FEBID process involves a complex interplay of different phenomena taking place on different temporal and spatial scales: (i) adsorption, diffusion and desorption of precursor molecules on/from a substrate; (ii) transport of primary, secondary and backscattered electrons; (iii) electron-induced dissociation of the adsorbed precursor molecules; and (iv) follow-up chemical transformations.
The atomistic modeling of the FEBID process has recently become possible by means of Irradiation-Driven Molecular Dynamics (IDMD) \cite{Sushko2016}, a novel and general methodology for computer simulations of irradiation-driven transformations of complex molecular systems. This method enables the atomistic-level description of nanostructures grown by FEBID \cite{Sushko2016,MBNbook_Springer_2017, DeVera2020}, accounting for chemical transformations of adsorbed molecular systems \cite{Sushko2016a} irradiated with a focused electron beam.
Within the IDMD framework, various quantum processes occurring in an irradiated system (e.g. covalent bond breakage induced by ionization or electron attachment) are treated as random, fast and local transformations incorporated into the classical MD framework in a stochastic manner with the probabilities elaborated on the basis of quantum mechanics \cite{Sushko2016}.
Major transformations of irradiated molecular systems (such as molecular topology changes, redistribution of atomic partial charges, or alteration of interatomic interactions) are simulated by means of MD with the reactive CHARMM (rCHARMM) force field \cite{Sushko2016a} using the advanced software packages MBN Explorer \cite{Solovyov2012} and MBN Studio \cite{Sushko2019}. MBN Explorer is a multi-purpose software package for multiscale simulations of the structure and dynamics of complex Meso-Bio-Nano (MBN) systems \cite{MBNbook_Springer_2017}. MBN Studio is a powerful multi-task toolkit used to set up and start MBN Explorer calculations, monitor their progress, examine calculation results, visualize inputs and outputs, and analyze specific characteristics determined by the output of simulations \cite{Sushko2019}.
A detailed overview of the computational workflow for IDMD-based simulations of the FEBID process has been presented in a recent study \cite{Prosvetov2021}, where the methodology was applied to simulate the FEBID of Pt(PF$_3$)$_4$ precursor molecules. The simulations carried out in Ref.~\cite{Prosvetov2021} described the initial stage of nanostructure growth, including nucleation of metal atoms, formation of small metal clusters on a surface, their aggregation and, eventually, the formation of a dendritic metal nanostructure.
In the follow-up study \cite{Prosvetov2021a} with Fe(CO)$_5$ precursor molecules, the variation of the deposit's structure, morphology and metal content at different irradiation and replenishment conditions of the FEBID process was investigated. It was demonstrated that either a nanogranular deposit consisting of small-size metal clusters surrounded by organic ligands or a single dendrite-like structure with the size corresponding to the primary electron beam is formed depending on the beam current. The aforementioned studies \cite{Sushko2016, DeVera2020, Prosvetov2021, Prosvetov2021a} have demonstrated the successful application of the IDMD methodology for the atomistic simulations of FEBID.
Investigation of the phenomena that govern the formation and growth of nanostructures in the FEBID process is a complex multi-parameter problem. Indeed, different precursor molecules and substrate types, as well as irradiation, replenishment and post-processing conditions, can be explored to fabricate deposits with optimal geometries and compositions.
However, due to the complexity of the problem, not all of these aspects of the FEBID process have been explored so far by means of IDMD simulations.
One of the parameters influencing the properties of FEBID-grown deposits is the operational temperature of the FEBID process \cite{Mulders2011,Rosenberg2012,DeTeresa2019a,Huth2020}.
The temperature effects arising during the FEBID process have been considered by means of the continuum diffusion-reaction model \cite{Toth2015}. However, this approach cannot provide atomistic details of the deposit's structure. Up to now, the thermal effects during FEBID have not been studied by means of IDMD.
\begin{sloppypar}
Different types of precursor molecules have been proposed for FEBID applications, see reviews~\cite{Utke2008, Barth2020_JMaterChemC, Utke2022} and references therein. For example, one can mention metal carbonyls (e.g. Fe(CO)$_5$, W(CO)$_6$ or Co$_2$(CO)$_8$), phosphines (e.g. Pt(PF$_3$)$_4$), halides (e.g. Pt(NH$_3$)$_2$Cl$_2$ or Pt(CO)$_2$Cl$_2$), cyclopentadienyl complexes (e.g. MeCpPtMe$_3$) and $\beta$-diketonates (e.g. Cu(hfac)$_2$).
Some precursors, such as the metal carbonyls considered in the previous IDMD-based studies \cite{Sushko2016, DeVera2020, Prosvetov2021a}, have relatively simple geometries in which one or two metal atoms are linked to small ligands of the same type. Other precursors, such as $\beta$-diketonates (e.g. dimethyl-gold-trifluoroacetylacetonate, Me$_2$Au(tfac), shown in Fig.~\ref{Fig:molecule}), have more complex geometries with many different atom types and covalent interactions, opening a broad spectrum of electron-irradiation-induced fragmentation channels. At the same time, the available data on absolute fragmentation cross sections for such complex precursors are very limited or entirely lacking. A detailed comparative study of dissociative electron attachment (DEA) to the isolated diketones (acetylacetone -- \textit{acac}, trifluoroacetylacetone -- \textit{tfac}, and hexafluoroacetylacetone -- \textit{hfac}) was presented in Ref.~\cite{Omarsson2014}. A comparison of the experimentally measured electron-induced fragmentation of acetone and \textit{acac} was performed in Ref.~\cite{Warneke2015a}. Decomposition of metal--\textit{acac} complexes with Cu, Mn and Zn atoms irradiated with low-energy ($0-10$~eV) electrons was studied in Refs.~\cite{Kopyra2018,Kopyra2020,Kopyra2020a}. However, only relative yields of the fragments created by electron-induced fragmentation of the parent molecules were reported in the cited studies. Electron-induced surface reactions and products, reaction kinetics and the structure of FEBID-grown deposits for Me$_2$Au(acac) precursors adsorbed onto solid substrates were discussed in Ref.~\cite{Wnuk2010}.
\end{sloppypar}
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{Fig01_Me2Autfac_molecule.eps}
\caption{Optimized geometry of a Me$_2$Au(tfac) molecule considered in this study. The optimization calculation has been performed by means of MBN Explorer using the interatomic potentials given by Eqs.~(\ref{Eq. Morse})--(\ref{Eq. Lennard-Jones}). Different atom types are indicated. The corresponding bonded and angular interactions are listed in Table~\ref{Table:CovBonds}. }
\label{Fig:molecule}
\end{figure}
\begin{sloppypar}
$\beta$-diketonates are an important class of organometallic precursors in FEBID. In particular, $\beta$-diketonate complexes with Au atoms (Me$_2$Au(acac) = C$_7$H$_{13}$O$_2$Au, Me$_2$Au(tfac) = C$_7$H$_{10}$F$_3$O$_2$Au, and Me$_2$Au(hfac) = C$_7$H$_{7}$F$_6$O$_2$Au) are among the main FEBID precursors used for the fabrication of gold nanostructures \cite{Graells2007, Kuhness2021}. A $\beta$-diketone ligand encloses the metal atom forming a rigid 6-membered ring (see Fig.~\ref{Fig:molecule}), which efficiently protects the metal atom against chemical reactions. As a result, as-grown Au deposits produced using these precursors under normal conditions usually contain only $\sim 5-20$~at.\% of metal and are contaminated with a high percentage of carbon atoms \cite{Barth2020_JMaterChemC,Utke2022}.
\end{sloppypar}
Different purification techniques have been developed to increase the Au content \cite{Botman2009a}, for example post-growth annealing \cite{DosSantos2018}, deposition at elevated temperatures \cite{Mulders2011}, and FEBID under a reactive atmosphere (e.g. water vapor or O$_2$) applied during the deposition or the subsequent annealing \cite{Shawrav2016a, Mansilla2016}. Overall, a higher metal content can be achieved by promoting the release of ligands and by adjusting the environment to enable the formation of volatile chemical products. Nevertheless, the efficiency of these purification methods and, particularly, of the thermal treatment of FEBID-grown deposits varies greatly for different precursors. Moreover, results for the same precursor differ between the studies reported by different research groups \cite{Botman2009a, Barth2020_JMaterChemC, Utke2022}.
In Ref.~\cite{Koops1996}, the growth of gold-containing nanostructures during FEBID of Me$_2$Au(tfac) was studied experimentally. An increase of the Au content in the deposited material from $1-15$~at.\% up to 24~at.\% upon heating the substrate up to 373~K was reported.
Mulders \textit{et al.} \cite{Mulders2011} observed an increase in the Au content from ca. 20 to 30~at.\% as the temperature was increased from 300 to 450~K during FEBID of Me$_2$Au(acac).
At the same time, other experimental studies indicated that the effect of post-deposition thermal processing of FEBID-grown deposits of Me$_2$Au(acac) is strongly influenced by the accompanying gas. Botman \textit{et al.} \cite{Botman2006} reported that the Au content of $\sim$8~at.\% in the deposit did not increase upon annealing at elevated temperatures up to 673~K in an N$_2$ atmosphere or upon annealing at 473~K in air. However, when the annealing was performed in an O$_2$ atmosphere, the average Au concentration increased gradually up to $\sim$13~at.\% at 523~K and rose to 60~at.\% at 673~K \cite{Botman2006}. Post-growth annealing at 573~K of the structures obtained after FEBID of Me$_2$Au(acac) and Cu(hfac)$_2$ precursors did not increase the initial metal content of $\sim$5~at.\% \cite{DosSantos2018}.
In the present study, the role of thermal effects during the FEBID process is investigated at the atomistic level by means of IDMD simulations.
The Me$_2$Au(tfac) precursor molecule is considered as an illustrative case study. The FEBID of Me$_2$Au(tfac) is simulated at different temperatures in the range of $300-450$~K, and the deposit's structure, morphology, growth rate, and elemental composition at different temperatures are analyzed. The simulations show that the deposit consists of small metal clusters containing several gold atoms embedded into an organic matrix. An increase in Au:C ratio in the deposits from $\sim$0.18 to $\sim$0.25 is observed when the temperature increases from 300 to 450~K, which is within the range of experimentally reported data.
The absolute cross section of electron-impact induced fragmentation of Me$_2$Au(tfac) is evaluated by different methods. Four different approximations for the fragmentation cross section are considered and compared. In the simplest approximation, the total cross section accounts only for the dissociative ionization (DI)-induced cleavage of covalent bonds between the gold atom and the ligands. The most complete approximation for the fragmentation cross section accounts for the contribution of DI and DEA processes in the cleavage of covalent bonds between the gold atom and the ligands, as well as for the bond cleavage within the ligands. The yields of created atomic and molecular fragments are compared for the considered approximations for the fragmentation cross section and for different values of the energy deposited into the system upon a covalent bond breakage, $E_d$. The simulation results and their analysis indicate that accounting for both DEA- and DI-induced fragmentation of all the covalent bonds in Me$_2$Au(tfac) and increasing $E_d$ lead to a higher metal content in the deposits. The simulated concentration of gold in the deposit and the dependence of the deposit's growth rate on temperature are within the range of experimental values reported for Me$_2$Au(tfac) and structurally similar precursor molecules.
\section{Computational methodology}
\label{Methods}
Computer simulations of the FEBID process of Me$_2$Au(tfac) on a fully hydroxylated silica (SiO$_2$-H) substrate have been performed by means of the MBN Explorer software package \cite{Solovyov2012}. The MBN Studio toolkit \cite{Sushko2019} has been utilized to create the systems, prepare all necessary input files and analyze simulation outputs. The simulations have followed the multi-step computational protocol described in Ref.~\cite{Prosvetov2021}.
\subsection{Interatomic interactions}
Interatomic interactions involving the precursor molecules and their molecular fragments have been described by means of the reactive CHARMM (rCHARMM) force field \cite{Sushko2016a}. rCHARMM permits simulations of systems with dynamically changing molecular topologies, which is essential for modeling the precursor fragmentation \cite{DeVera2019} and the formation of metal-containing nanostructures \cite{Sushko2016, DeVera2020, Prosvetov2021, Prosvetov2021a}. A detailed description of rCHARMM is given in Ref.~\cite{Sushko2016a}, see also a recent review \cite{Verkhovtsev2021}.
The radial part of bonded interactions is described in rCHARMM by means of the Morse potential:
\begin{equation}
U^{{\rm bond}}(r_{ij}) = D_{ij} \left[ e^{-2\beta_{ij}(r_{ij} - r_0)} - 2e^{-\beta_{ij}(r_{ij} - r_0)} \right] \ .
\label{Eq. Morse}
\end{equation}
Here $D_{ij}$ is the dissociation energy of the bond between atoms $i$ and $j$, $r_0$ is the equilibrium bond length, and $\beta_{ij} = \sqrt{k_{ij}^{r} / D_{ij}}$ (with $k_{ij}^{r}$ being the bond force constant) determines the steepness of the potential. The bonded interactions are truncated at a user-defined cutoff distance beyond which the bond is considered broken and the molecular topology of the system changes.
To describe the rupture of covalent bonds in the course of a simulation, rCHARMM employs a reactive potential for valence angles:
\begin{equation}
U^{{\rm angle}}(\theta_{ijk}) = 2 k^\theta_{ijk} \, \sigma(r_{ij}) \, \sigma(r_{jk}) \left[ 1 - \cos(\theta_{ijk}-\theta_0 ) \right] \ ,
\label{Eq. Angles}
\end{equation}
where $\theta_0$ is the equilibrium angle, $k^{\theta}$ is the angle force constant, and the function $\sigma(r_{ij})$ describes the effect of bond breakage \cite{Sushko2016a}:
\begin{equation}
\sigma(r_{ij}) = \frac{1}{2} \left[1-\tanh(\beta_{ij}(r_{ij}-r_{ij}^*)) \right] \ .
\label{Eq. Rupture_param}
\end{equation}
Here $r_{ij}^*=(R^{{\rm vdW}}_{ij}+r_0)/2$, with $r_0$ being the equilibrium distance between two atoms involved in the angular interaction and $R^{{\rm vdW}}_{ij}$ being the van der Waals radius for those atoms.
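As an illustration of how these reactive terms behave, the following sketch evaluates Eqs.~(\ref{Eq. Morse})--(\ref{Eq. Rupture_param}) for the Au--O bond parameters of Table~\ref{Table:CovBonds}. The switching distance \texttt{r\_star} is an illustrative value, not the actual $(R^{{\rm vdW}}_{ij}+r_0)/2$ combination used in the simulations.

```python
import math

# Au--O bond parameters from Table 1 (D in kcal/mol, r0 in angstrom,
# k_r in kcal/mol/A^2); r_star is a hypothetical switching distance.
D, r0, k_r = 45.0, 2.16, 133.4
beta = math.sqrt(k_r / D)   # steepness of the Morse potential (1/angstrom)
r_star = 2.8                # illustrative stand-in for (R_vdW + r0)/2

def u_bond(r):
    """Morse potential for the radial part of the bonded interaction."""
    return D * (math.exp(-2.0 * beta * (r - r0)) - 2.0 * math.exp(-beta * (r - r0)))

def sigma_switch(r):
    """Switching function describing the effect of bond breakage."""
    return 0.5 * (1.0 - math.tanh(beta * (r - r_star)))

def u_angle(theta_deg, r_ij, r_jk, k_theta, theta0_deg):
    """Reactive valence-angle potential; vanishes when either bond is broken."""
    return (2.0 * k_theta * sigma_switch(r_ij) * sigma_switch(r_jk)
            * (1.0 - math.cos(math.radians(theta_deg - theta0_deg))))

print(u_bond(r0))         # minimum of the Morse well: -D = -45.0 kcal/mol
print(sigma_switch(r0))   # close to 1 for an intact bond
print(sigma_switch(6.0))  # ~0 for a broken bond: the angular term is switched off
```

At the equilibrium bond length the Morse term equals $-D$ exactly, while far beyond the cutoff both the bond energy and the switching function vanish, so a broken bond no longer contributes to the angular energy.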
\begin{table*}[t!]
\centering
\caption{Parameters of the covalent bonded and angular interactions, Eqs.~(\ref{Eq. Morse}) and (\ref{Eq. Angles}), for a Me$_2$Au(tfac) molecule, used in the simulations. The corresponding atom types are shown in Fig.~\ref{Fig:molecule}.}
\begin{tabular}{p{3.5cm}p{2cm}p{4cm}p{3.5cm}}
\hline
Bond & $r_0$~(\AA) & $D_{ij}$~(kcal/mol) & $k_{ij}^{r}$~(kcal/mol \AA$^{-2}$) \\
\hline
F--C2 & 1.43 & 154.3 & 371.7 \\
C4--C2 & 1.57 & 144.5 & 538.5 \\
C4--O & 1.34 & 215.9 & 538.5 \\
C4--C3 & 1.39 & 222.0 & 683.0 \\
C3--H2 & 1.16 & 171.2 & 387.0 \\
C1--H1 & 1.16 & 171.2 & 387.0 \\
Au--O & 2.16 & 45.0 & 133.4 \\
Au--C1 & 2.06 & 81.2 & 206.4 \\
\hline
Angle & $\theta_0$~(deg.) & $k_{ijk}^{\rm {\theta}}$~(kcal/mol rad$^{-2}$) \\
\hline
C3--C4--C1 & 118.2 & 48.0 \\
C3--C4--C2 & 118.2 & 56.0 \\
C3--C4--O & 128.0 & 56.0 \\
C1--C4--O & 113.0 & 75.0 \\
C2--C4--O & 113.0 & 75.0 \\
C4--C3--C4 & 120.0 & 65.0 \\
C4--C2--F & 112.0 & 50.0 \\
C4--C1--H1 & 110.0 & 42.0 \\
C1--Au--C1 & 88.3 & 34.0 \\
F--C2--F & 107.0 & 50.0 \\
C4--C3--H2 & 117.0 & 42.0 \\
C4--O--Au & 125.0 & 42.0 \\
O--Au--O & 86.5 & 42.0 \\
H1--C1--H1 & 109.0 & 42.0 \\
Au--C1--H1 & 107.0 & 42.0 \\
C1--Au--O & 92.5 & 42.0 \\
\hline
\end{tabular}
\label{Table:CovBonds}
\end{table*}
\begin{table}[ht!]
\caption{Parameters of the Lennard-Jones potential, Eq.~(\ref{Eq. Lennard-Jones}), describing the van der Waals interaction between atoms of a Me$_2$Au(tfac) precursor molecule, its fragments and atoms of the substrate.
}
\centering
\begin{tabular}{c|c|c|c}
Atom type & $\varepsilon_i$ (kcal/mol) & $r_i^{{\rm min}}/2$~(\AA) & Ref. \\
\hline
Au & 5.29 & 1.48 & \cite{pohjolainen2016unified} \\
F & 0.07 & 1.47 & \cite{SwissParam_paper} \\
C1--C4 & 0.06 & 2.02 & \cite{SwissParam_paper} \\
O & 0.10 & 1.65 & \cite{SwissParam_paper} \\
H1, H2 & 0.04 & 1.34 & \cite{SwissParam_paper} \\
Si & 0.31 & 2.14 & \cite{Mayo1990} \\
O$_{\rm sub}$ & 0.10 & 1.70 & \cite{Mayo1990} \\
H$_{\rm sub}$ & 0.08 & 0.30 & \cite{Mayo1990} \\
\hline
\end{tabular}
\label{Table:van_der_Waals}
\end{table}
The initial geometry of a Me$_2$Au(tfac) molecule has been determined via density-functional theory (DFT) calculations using the Gaussian software package \cite{Gaussian09} and then optimized using MBN Explorer. The optimized geometry of Me$_2$Au(tfac) is shown in Fig.~\ref{Fig:molecule}.
The rCHARMM parameters for Me$_2$Au(tfac) have been determined from a series of DFT-based potential energy surface scans, following the procedure described in the earlier studies~\cite{DeVera2019, Prosvetov2021}.
The parameters of the bonded and angular interactions for Me$_2$Au(tfac) are listed in Table~\ref{Table:CovBonds}.
In the present simulations, we consider the physisorption of precursor molecules on a SiO$_2$-H substrate. Therefore, the molecules do not form covalent bonds with atoms of the substrate but interact with them via van der Waals forces described by means of the Lennard-Jones potential:
\begin{equation}
U_{{\rm LJ}}(r_{ij})=\varepsilon_{ij} \, \left [ \left (\frac{r^{{\rm min}}}{r_{ij}} \right )^{12}-2\left (\frac{r^{{\rm min}}}{r_{ij}}\right )^6 \right ] ,
\label{Eq. Lennard-Jones}
\end{equation}
where $\varepsilon_{ij}=\sqrt{\varepsilon_i \, \varepsilon_j}$ and $r^{{\rm min}} = (r^{{\rm min}}_i+r^{{\rm min}}_j)/2$.
Parameters of the Lennard-Jones potential for gold atoms
have been taken from Ref.~\cite{pohjolainen2016unified}. Parameters for other atoms of the precursor molecule have been generated using the SwissParam web-service \cite{SwissParam_paper}. Parameters for atoms of the substrate have been taken from Ref.~\cite{Mayo1990}. All these parameters are summarized in Table~\ref{Table:van_der_Waals}.
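For illustration, the combining rules entering Eq.~(\ref{Eq. Lennard-Jones}) can be applied directly to the per-atom parameters of Table~\ref{Table:van_der_Waals}; the short sketch below evaluates the Au--O$_{\rm sub}$ pair interaction.

```python
import math

# Per-atom parameters from Table 2: (epsilon_i in kcal/mol, r_i_min/2 in angstrom)
lj_params = {"Au": (5.29, 1.48), "O_sub": (0.10, 1.70)}

def u_lj(a, b, r):
    """Lennard-Jones pair energy with the combining rules
    eps_ij = sqrt(eps_i * eps_j) and r_min = r_i_min/2 + r_j_min/2."""
    eps_a, rh_a = lj_params[a]
    eps_b, rh_b = lj_params[b]
    eps = math.sqrt(eps_a * eps_b)
    r_min = rh_a + rh_b
    return eps * ((r_min / r) ** 12 - 2.0 * (r_min / r) ** 6)

# The well minimum lies at r = r_min = 3.18 A with depth -sqrt(5.29 * 0.10):
print(u_lj("Au", "O_sub", 3.18))   # ~ -0.727 kcal/mol
```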
The bonded interaction between Au atoms in the formed deposits has been described by means of the many-body Gupta potential \cite{Gupta_1983_PRB.23.6265} with the parameters taken from Ref.~\cite{Cleri1993}.
Following the earlier IDMD-based studies of FEBID \cite{Sushko2016,DeVera2020, Prosvetov2021}, the substrate has been considered frozen to speed up the simulations.
\subsection{The system formation}
In this study, a layer of Me$_2$Au(tfac) molecules with the size $20~\textrm{nm} \times 20~\textrm{nm}$ has been created by means of MBN Studio \cite{Sushko2019}, optimized, deposited on the SiO$_2$-H substrate and equilibrated at different temperatures (300, 350, 400 and 450~K) for 0.5~ns using the Langevin thermostat with a damping time of 0.2~ps. The number of precursor molecules added at each temperature considered has been determined through the equilibrium surface density of Me$_2$Au(tfac) evaluated via the adsorption-desorption rates from the continuum model of FEBID \cite{Toth2015}. The surface densities of Me$_2$Au(tfac) have been calculated as follows.
According to the kinetic theory of gases, the uniform molecular flux $F_{\rm p}$ impinging on a surface placed in a chamber with pressure $P_{\rm p}$ is given by \cite{LANDAU1980111}:
\begin{equation}
F_{\rm p} = \frac{P_{\rm p}}{\sqrt{2\pi \, m_{\rm p} \, k_{\rm B} T_{\rm p}}} \ ,
\label{Eq. GasFlux}
\end{equation}
where $T_{\rm p}$ is the gas temperature, $m_{\rm p}$ is the mass of the precursor molecule, and $k_{\rm B}$ is the Boltzmann constant.
The rate of newly physisorbed precursor molecules $\varphi_{\rm p}$ is defined as:
\begin{equation}
\varphi_{\rm p} = s_{\rm p} F_{\rm p} (1 - A_{\rm p} n_{\rm p}) \ ,
\label{Eq. Adsorption}
\end{equation}
where $s_{\rm p}$ is the precursor sticking coefficient, commonly set equal to 1, $d_{\rm p}$ is the diameter of a circle circumscribing the molecule, $A_{\rm p} = \pi d_{\rm p}^2/4$ is the surface area covered by one precursor molecule, and $n_{\rm p}$ is the surface density of precursors.
The thermal desorption rate $k_{\rm p}$ is calculated according to:
\begin{equation}
k_{\rm p} = \kappa_{\rm p} \exp{\left(-\frac{E_{\rm p}}{k_{\rm B} T}\right)} \ ,
\label{Eq. Desorption}
\end{equation}
where $\kappa_{\rm p}$ is the desorption attempt frequency, $E_{\rm p}$ is the desorption energy, and $T$ is the substrate temperature.
The surface density of precursor molecules is given by the equation
\begin{equation}
\frac{{\rm d}n_{\rm p}}{{\rm d}t} = \varphi_{\rm p} - n_{\rm p} k_{\rm p} \ ,
\label{Eq. SurfaceConcentration}
\end{equation}
with the solution for $n_{\rm p}(t)$:
\begin{equation}
\label{Eq. SurfaceConcentrationSolution}
n_{\rm p}(t) = \frac{(1-e^{-(s_{\rm p} F_{\rm p} A_{\rm p} + k_{\rm p}) t }) s_{\rm p} F_{\rm p} }{s_{\rm p} F_{\rm p} A_{\rm p} + k_{\rm p}}.
\end{equation}
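As a consistency check, this closed-form solution can be compared with a direct numerical integration of Eq.~(\ref{Eq. SurfaceConcentration}). The rate constants below are assumed values, of the order of those obtained further in the text for the 300~K case.

```python
import math

# Assumed rate constants, of the magnitude estimated below for T = 300 K
sF = 1.56e5    # s_p * F_p, impinging flux (nm^-2 s^-1)
A_p = 0.385    # area covered by one molecule (nm^2)
k_p = 174.0    # thermal desorption rate (s^-1)

def n_analytic(t):
    """Closed-form solution of the surface-density rate equation."""
    rate = sF * A_p + k_p
    return (1.0 - math.exp(-rate * t)) * sF / rate

def n_numeric(t, steps=200_000):
    """Forward-Euler integration of dn/dt = s_p F_p (1 - A_p n) - n k_p."""
    dt, n = t / steps, 0.0
    for _ in range(steps):
        n += dt * (sF * (1.0 - A_p * n) - n * k_p)
    return n

t = 5.0e-5   # 50 microseconds, i.e. a few characteristic times (sF*A_p + k_p)^-1
print(n_analytic(t), n_numeric(t))  # the two agree; both approach sF/(sF*A_p + k_p)
```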
The solution of Eq.~(\ref{Eq. SurfaceConcentration}) at $t \gg \left( s_{\rm p} F_{\rm p} A_{\rm p} + k_{\rm p} \right)^{-1}$ (which is equal to $\sim$0.1--10~$\mu$s for the studied temperature range $T = 300-450$~K) gives the steady-state surface density of precursors:
\begin{equation}
n_{\rm p_0} \approx \frac{s_{\rm p} F_{\rm p}}{s_{\rm p} F_{\rm p} A_{\rm p} + k_{\rm p}} \ .
\label{Eq. SteadyStateConcentration}
\end{equation}
The values of $n_{\rm p_0}$ calculated using Eq.~(\ref{Eq. SteadyStateConcentration}) for the surface temperatures $T = 300$, 350, 400 and 450~K are equal to $n_{\rm p_0} = 2.6$, 2.3, 0.8 and 0.1 molecules/nm$^2$, respectively.
The calculations have been performed with the values of gas temperature $T_{\rm p} = 300$~K and pressure $P_{\rm p} = 20$~Pa.
The mass of a Me$_2$Au(tfac) molecule is $m_{\rm p} = 380$~a.m.u. and the diameter $d_{\rm p} \approx 0.7$~nm. The values of the desorption attempt frequency $\kappa_{\rm p} = 1.0 \times 10^{14}$~s$^{-1}$ and the desorption energy $E_{\rm p} = 0.7$~eV have been approximated by the $\kappa_{\rm p}$ and $E_{\rm p}$ values for organic molecules of a similar size \cite{Fichthorn2002,Cullen2015a}.
The chosen value of $P_{\rm p}$ is several times larger than the vapor pressure of Me$_2$Au(tfac) at room temperature, equal to 7~Pa \cite{Ohta2001}, and $1-2$ orders of magnitude larger than the values of precursor gas pressure considered in Ref.~\cite{Cullen2015a} within the continuum diffusion-reaction model.
The steady-state precursor surface density $n_{\rm p_0}$ calculated using Eq.~(\ref{Eq. SteadyStateConcentration}) for $T = 300$~K decreases only by 10\% when the pressure $P_{\rm p}$ decreases from 20~Pa to 1~Pa. At higher temperatures, the thermal desorption (the second term in the denominator of Eq.~(\ref{Eq. SteadyStateConcentration})) becomes dominant, leading to a faster decrease of the equilibrium surface density for smaller $P_{\rm p}$ values. The resulting system is very sparse, with an average distance between the deposited molecules on the order of several nanometers and a surface coverage close to zero. Considering smaller values of $P_{\rm p}$ and $n_{\rm p_0}$ at elevated temperatures would require running very long MD simulations (on the $\mu$s time scale), which is a challenging computational task.
Therefore, in the present study we have considered a higher $P_{\rm p}$ value that enables us to simulate the deposit's growth at elevated temperatures on the computationally feasible timescale of hundreds of nanoseconds.
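The above estimates of the steady-state surface density can be reproduced with a short script evaluating Eqs.~(\ref{Eq. GasFlux}), (\ref{Eq. Desorption}) and (\ref{Eq. SteadyStateConcentration}) with the parameters listed above; it recovers the quoted densities to within rounding of the input parameters.

```python
import math

# Physical constants
kB  = 1.380649e-23       # Boltzmann constant (J/K)
amu = 1.66053907e-27     # atomic mass unit (kg)
eV  = 1.602176634e-19    # electron volt (J)

# Parameters from Sect. 2.2
P_p  = 20.0                   # precursor gas pressure (Pa)
T_p  = 300.0                  # gas temperature (K)
m_p  = 380.0 * amu            # mass of Me2Au(tfac) (kg)
d_p  = 0.7e-9                 # molecule diameter (m)
A_p  = math.pi * d_p**2 / 4   # area per adsorbed molecule (m^2)
s_p  = 1.0                    # sticking coefficient
kappa = 1.0e14                # desorption attempt frequency (1/s)
E_p   = 0.7 * eV              # desorption energy (J)

# Impinging flux from the kinetic theory of gases
F_p = P_p / math.sqrt(2.0 * math.pi * m_p * kB * T_p)

def n_p0(T):
    """Steady-state surface density in molecules/nm^2 at substrate temperature T."""
    k_p = kappa * math.exp(-E_p / (kB * T))   # thermal desorption rate (1/s)
    return 1e-18 * s_p * F_p / (s_p * F_p * A_p + k_p)

for T in (300.0, 350.0, 400.0, 450.0):
    print(f"T = {T:.0f} K: n_p0 = {n_p0(T):.1f} nm^-2")
```

The computed densities drop steeply above 350~K because the Arrhenius desorption rate $k_{\rm p}$ overtakes the adsorption term $s_{\rm p} F_{\rm p} A_{\rm p}$ in the denominator of Eq.~(\ref{Eq. SteadyStateConcentration}).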
IDMD simulations of the FEBID process use information on the space-resolved fragmentation probability per unit time, $P(x,y)$. The probability is calculated using the spatial distribution of the flux density of primary (PE), secondary (SE) and backscattered (BSE) electrons \cite{DeVera2020} and the absolute fragmentation cross section of the precursor molecules:
\begin{eqnarray}
P(x,y) &=& \sigma_{\rm frag}(E_0) J_{\rm PE}(x,y,E_0) \nonumber \\
&+& \sum_i \sigma_{\rm frag}(E_i) [J_{\rm SE}(x,y,E_i) + J_{\rm BSE}(x,y,E_i) ] \ .
\label{Eq. Frag_Probability_total}
\end{eqnarray}
Here $\sigma_{\rm frag}(E)$ is the energy-dependent precursor fragmentation cross section, $E_i < E_0$ is the electron energy discretized in steps of 1~eV, and $J_{\rm PE/SE/BSE}(x,y,E_i)$ are the flux densities of PE, SE and BSE with energy $E_i$ at the point ($x$,$y$), respectively.
The spatial distribution of the electron flux density employed in the calculation of $P(x,y)$ was obtained previously \cite{DeVera2020} using the track-structure Monte Carlo code SEED for a cylindrical PE beam with a radius of 5~nm and energy $E_0 = 1$~keV.
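A minimal sketch of the discretized sum in Eq.~(\ref{Eq. Frag_Probability_total}) at a single surface point is given below. The cross section and flux values are hypothetical placeholders, not the actual SEED output of Ref.~\cite{DeVera2020}.

```python
def fragmentation_probability(sigma_frag, j_pe, j_se, j_bse, E0):
    """Fragmentation probability per unit time at one point (x, y).

    sigma_frag(E): fragmentation cross section (nm^2) at electron energy E (eV)
    j_pe: PE flux density at the beam energy E0 (nm^-2 s^-1)
    j_se, j_bse: dicts {E_i: flux density} for SE and BSE, with E_i < E0
    """
    p = sigma_frag(E0) * j_pe
    for E_i, flux in j_se.items():
        p += sigma_frag(E_i) * flux
    for E_i, flux in j_bse.items():
        p += sigma_frag(E_i) * flux
    return p

# Hypothetical example: constant cross section of 0.1 nm^2 and placeholder fluxes
sigma = lambda E: 0.1
j_se = {5: 100.0, 10: 50.0}
j_bse = {500: 10.0}
print(fragmentation_probability(sigma, 1000.0, j_se, j_bse, 1000))  # 0.1*(1000+100+50+10) = 116.0
```

In the actual simulations the secondary- and backscattered-electron spectra are discretized in 1~eV steps, so the dictionaries above would contain one entry per energy bin.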
\subsection{Fragmentation cross section}
\label{sec:fragm_CS}
The main mechanisms of molecular fragmentation are dissociative electron attachment (DEA) at low electron energies below the ionization potential of the molecule (typically below $\sim$10~eV) and dissociative ionization (DI) at higher electron energies.
To the best of our knowledge, there is no data in the literature on the absolute fragmentation cross section of Me$_2$Au(tfac). Therefore, the fragmentation cross section $\sigma_{\rm frag}(E)$ has been evaluated based on the compilation of data available for the fragmentation cross sections of structurally similar molecules and smaller functional groups of Me$_2$Au(tfac).
\begin{figure*}[t!]
\includegraphics[width=0.75\textwidth]{Fig02_Molecules_Frag_CS_Me2Autfac.eps}%
\caption{Panels \textbf{A} and \textbf{B} show a schematic representation of the Me$_2$Au(tfac) molecule. Colored lines indicate the covalent bonds whose electron-induced dissociation is considered in the simulations using Set~1 (panel~A) and Sets 2, 3 and 4 (panel~B) of the fragmentation parameters (see the main text for details). Panel~\textbf{C} shows four sets of the electron impact fragmentation cross section of Me$_2$Au(tfac) used in the simulations. \textbf{C-I:} Set 1 accounts for the fragmentation of metal-ligand bonds due to DI, see Eqs.~(\ref{Eq. TotalIonizationCS}) and (\ref{Eq. tfac_IonizationCS}). \textbf{C-II:} Set 2 accounts for bond breakage within the CH$_3$ and \textit{tfac} ligands as a result of DEA and breakage of the metal-ligands bonds due to DI. \textbf{C-III:} Sets 3 and 4 take into account bond breakage due to DEA and DI for all bonds in the Me$_2$Au(tfac) molecule, see Eqs.~(\ref{Eq. TotalIonizationCS})--(\ref{Eq. PartialDIAu}). Additionally, chemical reactions involving produced atomic and molecular fragments have been accounted for in simulations performed with Set~4. Panel~\textbf{D} indicates atomic and molecular fragments produced in the simulations using the considered sets of fragmentation parameters.}
\label{Fig:Cross-section}
\end{figure*}
The electron-impact induced fragmentation of a Me$_2$Au(tfac) molecule is governed by the contributions from various dissociation channels. As such, determination of the fragmentation cross section for Me$_2$Au(tfac) is a non-trivial task. In the present study, the cross section $\sigma_{\rm frag}(E)$ has been evaluated by different methods. Four different approximations for the total fragmentation cross section (denoted hereafter as ``sets'') accounting for various dissociation channels have been considered for comparison. The first set is based on the simplest approximation accounting for DI-induced cleavage of the bonds between the gold atom and the ligands. The most detailed approximation considered in this study accounts for DEA and DI-induced cleavage of all the bonds in Me$_2$Au(tfac) with follow-up chemistry involving the produced fragments. The summary of the considered sets is presented in Fig.~\ref{Fig:Cross-section}.
Experimental studies of electron-impact ionization and fragmentation of organometallic precursor molecules showed that the partial cross section of ionization without fragmentation is significantly (by 1--2 orders of magnitude) smaller than the sum of partial ionization cross sections leading to the emission of ionic fragments \cite{Wnorowski2012, Engmann2013, Thorman2015}.
Therefore, the total ionization cross section can be used as a reasonable approximation for the DI cross section. In the present study, the DI cross section of Me$_2$Au(tfac) has been calculated according to the additivity rule principle \cite{DEUTSCH1994} as a sum of ionization cross sections of the largest functional groups of the molecule.
The following conclusions were made previously from the analysis of the ionization cross sections for two groups of organic molecules -- aldehydes and ketones \cite{Gupta2014, Bull2012a}. First, the shape of ionization cross sections as functions of the projectile electron energy is similar for different molecules of the same group. Second, the maximum value of the ionization cross section is proportional to the number of electrons in a target molecule. The additivity rule-based approach worked well for the studied organic molecules and provided an agreement with experimental data within the range of experimental uncertainties \cite{Gupta2014, Bull2012a}.
The cross section of DEA has been evaluated in this study as follows. We have utilized the experimental data \cite{Omarsson2014,Warneke2015a} on electron-impact fragmentation mass spectra of \textit{tfac} and structurally similar molecules \textit{acac} and acetone (see Fig.~\ref{Fig:StructuralFormulas}). The absolute DEA cross section for \textit{tfac} has been evaluated by rescaling the spectra from Refs.~\cite{Omarsson2014,Warneke2015a} using the ratio of peak intensities for common molecular fragments. The reported absolute DEA cross section of acetone \cite{Prabhudesai2014} has been used as a reference. The detailed procedure for obtaining the absolute DEA-induced fragmentation cross sections for different bonds of Me$_2$Au(tfac) is described in Sect.~\ref{sec:Fragm_CS_Set2}.
IDMD-based simulations of the FEBID process require the specification of (i) the fragmentation rates for different covalent bonds in the precursor molecule and (ii) the amount of energy $E_d$ deposited into the system during the fragmentation process. This energy is deposited locally into a specific covalent bond of the target and converted into kinetic energy of the two atoms forming the bond \cite{DeVera2019, Sushko2016}.
The choice of $E_d$ may influence the rate of precursor molecule fragmentation \cite{DeVera2020}. For each particular case study, the amount of energy transferred by the incident radiation to the system can be evaluated from quantum mechanical calculations of the processes of energy deposition and excitation.
This task, however, goes beyond the scope of this work. Therefore, $E_d$ is considered here as a variable parameter that can be determined from a comparison with experimentally measurable characteristics of the FEBID process, within the physically justifiable range of values $5~{\rm eV} \lesssim E_d \lesssim 25~{\rm eV}$.
In the present work, the value of $E_d$ is varied within the range $400-500$~kcal/mol (from $\sim$17.3 to 21.7~eV) to examine its influence on the Me$_2$Au(tfac) fragmentation process.
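The quoted conversion between the two energy units follows from $1~{\rm kcal/mol} = 4184~{\rm J}/N_{\rm A} \approx 0.0434$~eV per bond:

```python
# 1 kcal/mol expressed in eV per molecule (bond)
N_A  = 6.02214076e23         # Avogadro constant (1/mol)
E_EV = 1.602176634e-19       # electron volt (J)
KCAL_MOL_TO_EV = 4184.0 / (N_A * E_EV)   # ~0.0434 eV per kcal/mol

for e_d in (400.0, 500.0):
    print(f"E_d = {e_d:.0f} kcal/mol = {e_d * KCAL_MOL_TO_EV:.1f} eV")
# E_d = 400 kcal/mol = 17.3 eV
# E_d = 500 kcal/mol = 21.7 eV
```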
As discussed above, the partial cross section of DI, $\sigma_{\rm DI}(E)$, leading to the breakage of specific bonds in the molecule, has been approximated using the total ionization cross section of the molecule, $\sigma_{\rm ion}^{\rm total}(E)$. The latter can be presented as a sum:
\begin{equation}
\sigma_{\rm ion}^{\rm total} (E) = \sigma_{\rm ion}^{\rm fr}(E) + \sigma_{\rm ion}^{\rm nonfr}(E) = \left[1 + \alpha (E)\right] \sigma_{\rm ion}^{\rm fr}(E) \ ,
\label{Eq. Ionization_CS}
\end{equation}
where $\sigma_{\rm ion}^{\rm nonfr}(E)$ is the cross section of non-dissociative ionization (i.e. ionization without fragmentation), $\sigma_{\rm ion}^{\rm fr}(E)$ is the cross section of ionization with subsequent dissociation, i.e. the DI cross section, and the coefficient $\alpha(E) = \sigma_{\rm ion}^{\rm nonfr}(E)/\sigma_{\rm ion}^{\rm fr}(E)$. Hence, the DI cross section can be written as:
\begin{equation}
\sigma_{\rm ion}^{\rm fr} (E) = \frac{1}{\alpha (E) + 1} \, \sigma_{\rm ion}^{\rm total}(E) .
\label{Eq. DI_CS}
\end{equation}
In the case of a weak covalent bonding, the partial cross section of ionization without fragmentation is much smaller than the DI cross section, $\alpha \ll 1$; hence
\begin{equation}
\label{Eq. DI_CS_approx}
\sigma_{\rm DI}(E) \approx \sigma_{\rm ion}^{\rm total}(E).
\end{equation}
This approximation has been used in this study to evaluate the partial cross sections of DI, resulting in the cleavage of different covalent bonds in a Me$_2$Au(tfac) molecule, see Sections~\ref{sec:Fragm_CS_Set1} and \ref{sec:Fragm_CS_Set3} below.
\subsubsection{Set 1}
\label{sec:Fragm_CS_Set1}
Set 1 (Fig.~\ref{Fig:Cross-section}A and Fig.~\ref{Fig:Cross-section}C-I) accounts only for the breakage of the metal--ligand bonds of Me$_2$Au(tfac), i.e. for Au--C and Au--O bond breakage due to DI. In this case, the total fragmentation cross section for Me$_2$Au(tfac) has been calculated as a sum of ionization cross sections of its functional groups (see Fig.~\ref{Fig:molecule}) according to the additivity rule principle \cite{DEUTSCH1994}:
\begin{equation}
\sigma^{\rm Me_2Au(tfac)}_{\rm DI}(E) \approx \sigma^{\rm Au}_{\rm ion}(E) + 2 \sigma^{\rm CH_3}_{\rm ion}(E) + \sigma^{\rm tfac}_{\rm ion}(E) \ .
\label{Eq. TotalIonizationCS}
\end{equation}
Here $\sigma^{\rm Au}_{\rm ion}$, $\sigma^{\rm CH_3}_{\rm ion}$ and $\sigma^{\rm tfac}_{\rm ion}$ are the ionization cross sections for a gold atom \cite{Nelson1976}, CH$_3$ \cite{Hwang1996}, and trifluoroacetylacetone (\textit{tfac}, C$_5$H$_5$F$_3$O$_2$) molecules, respectively. The structure of \textit{tfac} molecule is schematically shown in Fig.~\ref{Fig:StructuralFormulas}A.
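The additivity-rule sum of Eq.~(\ref{Eq. TotalIonizationCS}) can be sketched as follows; the numerical cross-section values are hypothetical placeholders used only to illustrate the bookkeeping, not the actual data of Refs.~\cite{Nelson1976,Hwang1996}.

```python
# Hypothetical cross-section values (in units of 10^-16 cm^2) at one electron
# energy, used only to illustrate the additivity-rule sum.
sigma_Au   = lambda E: 4.0    # ionization cross section of a gold atom
sigma_CH3  = lambda E: 3.5    # ionization cross section of a CH3 group
sigma_tfac = lambda E: 12.0   # ionization cross section of the tfac ligand

def sigma_DI_Me2Au_tfac(E):
    """Additivity rule: one Au atom, two CH3 groups and one tfac ligand."""
    return sigma_Au(E) + 2.0 * sigma_CH3(E) + sigma_tfac(E)

print(sigma_DI_Me2Au_tfac(100.0))   # 4.0 + 2*3.5 + 12.0 = 23.0
```

In the actual calculation each term is an energy-dependent tabulated cross section rather than a constant, but the composition of the sum is the same.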
\begin{figure*}[t!]
\includegraphics[width=0.85\textwidth]{Fig03_Molecules.eps}%
\caption{Schematic representations of \textbf{A:} trifluoroacetylacetone (\textit{tfac}), \textbf{B:} 2-pentanone, \textbf{C:} acetylacetone (\textit{acac}), and \textbf{D:} acetone molecules.}
\label{Fig:StructuralFormulas}
\end{figure*}
The energy-dependent ionization cross section of \textit{tfac}, $\sigma^{\rm tfac}_{\rm ion}(E)$, has been evaluated using the ionization cross section for a structurally similar molecule 2-pentanone (C$_5$H$_{10}$O, see Fig.~\ref{Fig:StructuralFormulas}B)~\cite{Gupta2014}. Assuming a similar shape of the ionization cross sections for \textit{tfac} and 2-pentanone as functions of the projectile kinetic energy $E$, the magnitude of the cross section $\sigma^{\rm tfac}_{\rm ion}(E)$ has been scaled using the ratio of maximum values of cross sections for \textit{tfac} and 2-pentanone:
\begin{eqnarray}
&&\sigma^{\rm tfac}_{\rm ion}(E) \approx \sigma^{\rm C_5H_{10}O}_{\rm ion}(E) \times \left( \frac{\sigma^{\rm tfac}_{\rm ion, max}}{\sigma^{\rm C_5H_{10}O}_{\rm ion, max}} \right) \\
&=& \sigma^{\rm C_5H_{10}O}_{\rm ion}(E) \times \left[ \frac{ \left( \sigma_{\rm CH_3}+\sigma_{\rm CF_3}+\sigma_{\rm CH}+2\sigma_{\rm CO} \right)_{\rm max} }{ \left( 2\sigma_{\rm CH_3}+2\sigma_{\rm CH_2}+\sigma_{\rm CO} \right)_{\rm max} } \right] \nonumber \ .
\label{Eq. tfac_IonizationCS}
\end{eqnarray}
The maximal values of the total ionization cross sections have been evaluated using the functional group and bond additivity model \cite{Bart2001, Bull2012a}. This model is based on a multidimensional matrix least-squares fitting of the correlation between the experimentally measured ionization cross section for 65 organic and halocarbon molecules and the constituent functional groups calculated by means of the Binary-Encounter-Bethe (BEB) model \cite{Hwang1996}. The values of $\sigma^{\rm tfac}_{\rm ion, max}$ and $\sigma^{\rm C_5H_{10}O}_{\rm ion, max}$ have been calculated as a sum of cross section contributions corresponding to CH$_3$, CH$_2$, CH, and CO functional groups.
In Refs.~\cite{Bart2001, Bull2012a}, the maximum values of total electron-impact ionization cross sections for a wide range of halocarbon molecules (including CF$_4$, C$_2$F$_4$, C$_2$F$_6$, C$_3$F$_8$, and others) were evaluated by summing up the partial contributions from C--C and C--F bonds, multiplied by the number of bonds of each type. The calculated cross sections agreed within 10\% accuracy with the corresponding experimental values.
In this study, the contribution from the CF$_3$ functional group to the ionization cross section of \textit{tfac} has therefore been approximated as $\sigma_{\rm CF_3} \approx 3\sigma_{\rm C-F}$, where $\sigma_{\rm C-F}$ is the partial contribution to the ionization cross section from a C--F bond \cite{Bull2012a}.
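The scaling of Eq.~(\ref{Eq. tfac_IonizationCS}), including the $\sigma_{\rm CF_3} \approx 3\sigma_{\rm C-F}$ approximation, can be sketched as follows; the group contributions below are illustrative placeholders, not the fitted values of the additivity model \cite{Bart2001, Bull2012a}.

```python
# Scaling factor of Eq. (tfac_IonizationCS): ratio of functional-group
# additivity maxima for tfac (CH3 + CF3 + CH + 2 CO) and 2-pentanone
# (2 CH3 + 2 CH2 + CO), with CF3 approximated as 3 C-F bond contributions.
# All group contributions are illustrative placeholders.

def tfac_scaling_factor(s_ch3, s_ch2, s_ch, s_co, s_cf):
    s_cf3 = 3.0 * s_cf                                # CF3 ~ 3 * sigma(C-F)
    tfac_max = s_ch3 + s_cf3 + s_ch + 2.0 * s_co      # tfac group sum
    pentanone_max = 2.0 * s_ch3 + 2.0 * s_ch2 + s_co  # 2-pentanone group sum
    return tfac_max / pentanone_max

def sigma_tfac(sigma_pentanone, scale):
    # rescaled energy-dependent ionization cross section of tfac
    return sigma_pentanone * scale

scale = tfac_scaling_factor(s_ch3=4.0, s_ch2=3.5, s_ch=2.5, s_co=3.0, s_cf=1.2)
print(round(scale, 3))   # -> 0.894 for these placeholder inputs
```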
\subsubsection{Set 2}
\label{sec:Fragm_CS_Set2}
In Set 2 (Fig.~\ref{Fig:Cross-section}B and Fig.~\ref{Fig:Cross-section}C-II), fragmentation of CH$_3$ and \textit{tfac} ligands due to the DEA mechanism has been considered alongside the breakage of Au--C and Au--O bonds due to DI (considered in Set~1).
The DEA cross section of \textit{tfac} has been evaluated based on the compilation of published data for similar molecules, acetylacetone (\textit{acac}, Fig.~\ref{Fig:StructuralFormulas}C) and acetone (Fig.~\ref{Fig:StructuralFormulas}D).
A comparison of DEA-induced fragmentation mass spectra for \textit{acac} and acetone was reported in Ref.~\cite{Warneke2015a}, showing similar fragmentation patterns for both molecules. In particular, a CHCO$^-$ fragment was detected in the mass spectra for both \textit{acac} and acetone irradiated under the same conditions. Given the absolute cross section for the release of CHCO$^-$ from acetone \cite{Prabhudesai2014}, the ratio of CHCO$^-$ peak intensities in the mass spectra for \textit{acac} and acetone has been used to evaluate the absolute DEA cross sections of other fragments from \textit{acac}.
The absolute DEA cross sections for the formation of H$^-$ and CH$_3^-$ fragments from acetone (corresponding to dissociation of C--H and C--CH$_3$ bonds, respectively) have been taken from Ref.~\cite{Prabhudesai2014}.
The absolute DEA cross sections for the formation of F$^-$, CF$_3^-$ and [$\textrm{M} - \textrm{CF}_3$CO]$^-$ fragments (where M denotes the parent Me$_2$Au(tfac) molecule) have been evaluated using the calculated absolute DEA cross section for \textit{acac} and the ratio of relative fragmentation cross sections for \textit{acac} and \textit{tfac} molecules \cite{Omarsson2014}. The partial DEA cross section leading to the breakage of the C--O bond in \textit{tfac} has been evaluated similarly.
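The peak-ratio calibration described above can be sketched as follows; the intensities and the acetone cross section are hypothetical placeholders, not the data of Refs.~\cite{Warneke2015a, Prabhudesai2014}.

```python
# DEA calibration sketch: the known absolute cross section for CHCO-
# release from acetone anchors the relative acac peak intensities,
# giving absolute DEA cross sections for other acac fragments.
# All numerical values are illustrative.

def sigma_dea_fragment(sigma_chco_acetone, i_chco_acac, i_chco_acetone,
                       i_fragment_acac):
    # absolute cross section of the common CHCO- channel in acac
    sigma_chco_acac = sigma_chco_acetone * (i_chco_acac / i_chco_acetone)
    # scale any other acac fragment by its intensity relative to CHCO-
    return sigma_chco_acac * (i_fragment_acac / i_chco_acac)

print(sigma_dea_fragment(2.0e-4, i_chco_acac=150.0,
                         i_chco_acetone=100.0, i_fragment_acac=75.0))
```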
\subsubsection{Set 3}
\label{sec:Fragm_CS_Set3}
Set 3 (see Fig.~\ref{Fig:Cross-section}B and Fig.~\ref{Fig:Cross-section}C-III) includes partial DI cross sections of \textit{tfac} and CH$_3$ leading to the cleavage of the C--C, C--F, C--O and C--H bonds. In addition, Set~3 includes the partial DEA cross sections described above for Set~2.
According to Eq.~(\ref{Eq. DI_CS_approx}), the cross section of DI resulting in the breakage of a C--O bond has been approximated by the ionization cross section of a CO molecule \cite{Hwang1996}.
The partial DI cross section leading to the breakage of C--H bonds in CH$_3$ and \textit{tfac} ligands has been calculated using the total ionization cross section of a CH$_3$ molecule \cite{Hwang1996}, divided by three.
The same partial DI cross section has been used for the dissociation of a C--F bond \cite{Bull2012a}.
The study of DI of 2-butanone (CH$_3$COCH$_2$CH$_3$) \cite{Vacher2008} demonstrated that the formation of CH$_3$CO fragments as a result of the ``central'' C--C bond breakage is the most probable fragmentation channel, accounting for 64\% of the total ionization cross section. We have assumed that the \textit{tfac} molecule has a similar fragmentation pattern, with the release of CH$_3$CO and CF$_3$CO fragments as the main fragmentation channel. Therefore, the total DI cross section of \textit{tfac} has been split into the contributions leading to the cleavage of the ``central'' C--C bonds ($\sigma_{\rm DI}^{\rm C-CH}$) and the ``side'' C--C bonds ($\sigma_{\rm DI}^{\rm C-CH_3}$ and $\sigma_{\rm DI}^{\rm C-CF_3}$) in approximately the same 2:1 ratio as determined in Ref.~\cite{Vacher2008}. The Me$_2$Au(tfac) molecule contains two ``central'' C--C bonds (C$_3$--C$_4$ bonds in Fig.~\ref{Fig:molecule}) and two ``side'' C--C bonds (C$_1$--C$_4$ and C$_2$--C$_4$ bonds in Fig.~\ref{Fig:molecule}). Therefore, the fragmentation cross section for each bond has been evaluated according to:
\begin{eqnarray}
\sigma_{\rm DI}^{\rm C-CH_3}(E) &\sim& \sigma_{\rm DI}^{\rm C-CF_3}(E) \approx \frac{1}{2} \left( \frac{1}{3} \sigma_{\rm ion}^{\rm tfac}(E) \right) = \frac{1}{6} \sigma_{\rm ion}^{\rm tfac}(E) \ , \nonumber \\
\sigma_{\rm DI}^{\rm C-CH}(E) &\approx& \frac{1}{2} \left( \frac{2}{3} \sigma_{\rm ion}^{\rm tfac}(E) \right) = \frac{1}{3} \sigma_{\rm ion}^{\rm tfac}(E) \ .
\label{Eq. PartialDItfac}
\end{eqnarray}
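The 2:1 split of Eq.~(\ref{Eq. PartialDItfac}) over the two bond classes can be sketched numerically (the input cross section is a placeholder):

```python
# Eq. (PartialDItfac): the tfac DI cross section is split 2:1 between the
# "central" (C3-C4) and "side" (C1-C4, C2-C4) C-C bond classes, each class
# being shared by two equivalent bonds. Input value is illustrative.

def partial_di_tfac(sigma_ion_tfac):
    s_side = sigma_ion_tfac / 6.0       # each of C-CH3 and C-CF3
    s_central = sigma_ion_tfac / 3.0    # each "central" C-CH bond
    return s_side, s_central

s_side, s_central = partial_di_tfac(12.0)   # hypothetical sigma_ion^tfac
print(s_side, s_central)                    # 2.0 4.0
# two bonds per class recover the full tfac cross section
print(2.0 * s_side + 2.0 * s_central)       # 12.0
```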
The partial DI cross sections of Me$_2$Au(tfac) leading to the dissociation of Au--C and Au--O bonds, $\sigma_{\rm DI}^{\rm Au-C}(E)$ and $\sigma_{\rm DI}^{\rm Au-O}(E)$, have been evaluated according to the sum of total ionization cross sections of Au, CH$_3$ and \textit{tfac} fragments, see Eq.~(\ref{Eq. TotalIonizationCS}).
The cross sections $\sigma_{\rm DI}^{\rm Au-C}(E)$ and $\sigma_{\rm DI}^{\rm Au-O}(E)$ have been calculated as follows
\begin{eqnarray}
\sigma_{\rm DI}^{\rm Au-C}(E) &\approx& \frac{1}{3} \sigma_{\rm ion}^{\rm Au} + \sigma_{\rm ion}^{\rm CH_3} \ , \nonumber \\
\sigma_{\rm DI}^{\rm Au-O}(E) &\approx& \frac{1}{3}\sigma_{\rm ion}^{\rm Au} + \sigma_{\rm ion}^{\rm tfac}
\label{Eq. PartialDIAu}
\end{eqnarray}
to fulfil the sum rule principle for the total DI cross section of Me$_2$Au(tfac), $\sigma^{\rm Me_2Au(tfac)}_{\rm DI}(E) =
2\sigma_{\rm DI}^{\rm Au-C}(E) + \sigma_{\rm DI}^{\rm Au-O}(E)$, and avoid multiple counting of the contribution $\sigma_{\rm ion}^{\rm Au}$.
The factor 1/3 in Eq.~(\ref{Eq. PartialDIAu}) has been introduced based on the assumption that the contribution of the Au ionization cross section is divided equally between the dissociation channels involving each of two CH$_3$ ligands and the \textit{tfac} ligand.
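The sum rule stated above can be checked directly: with the 1/3 factors, $2\sigma_{\rm DI}^{\rm Au-C} + \sigma_{\rm DI}^{\rm Au-O}$ recovers the additivity-rule total of Eq.~(\ref{Eq. TotalIonizationCS}) with $\sigma_{\rm ion}^{\rm Au}$ counted exactly once (placeholder values):

```python
# Eq. (PartialDIAu) and its sum rule: the Au contribution is split equally
# among the two Au-C channels and the Au-O channel, so the bond-resolved
# cross sections sum back to the additivity-rule total.
# All input values are illustrative placeholders.

def partial_di_au_bonds(s_au, s_ch3, s_tfac):
    s_au_c = s_au / 3.0 + s_ch3     # per Au-C bond (two such bonds)
    s_au_o = s_au / 3.0 + s_tfac    # Au-O channel
    return s_au_c, s_au_o

s_au, s_ch3, s_tfac = 5.0, 3.0, 12.0
s_au_c, s_au_o = partial_di_au_bonds(s_au, s_ch3, s_tfac)
total = 2.0 * s_au_c + s_au_o
print(total)   # equals s_au + 2*s_ch3 + s_tfac, with Au counted once
```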
\subsubsection{Set 4}
\label{sec:Fragm_CS_Set4}
Set 4 uses the same DEA- and DI-induced fragmentation cross sections for all bonds in the Me$_2$Au(tfac) molecule as Set~3 (see Fig.~\ref{Fig:Cross-section}B and Fig.~\ref{Fig:Cross-section}C-III). In addition, Set~4 accounts for chemical reactions involving the produced atomic and molecular fragments.
The interactions involving the created fragments can lead to the formation of other volatile molecular species, such as H$_2$, O$_2$, CH$_4$, C$_2$H$_6$ and H$_2$O (see Fig.~\ref{Fig:Cross-section}D). This may affect the number of non-bonded atoms in the deposit and, thus, the resulting metal content.
\subsection{Simulation parameters}
The calculated fragmentation probability $P(x,y)$, Eq.~(\ref{Eq. Frag_Probability_total}), has been tabulated for a $20~{\rm nm} \times 20~{\rm nm}$ grid covering the simulation box and used as input for the IDMD simulations of the irradiation phase of the FEBID process.
Following the earlier studies \cite{Sushko2016, DeVera2020, Prosvetov2021, Prosvetov2021a},
the simulated PE flux density $J_0$ (and hence the PE beam current $I_0$) has been rescaled to match the same number of PEs per unit area and per dwell time as in experiments.
This procedure enables the correspondence of simulated results to experimental ones through the correspondence of the electron fluence per dwell time per unit area in simulations and experiments \cite{Sushko2016}.
According to the experimental study of the FEBID of Me$_2$Au(acac) \cite{Mulders2011}, an increase of the electron current $I_{\rm exp}$ from 1.6~nA to 6.3~nA causes minor changes in the elemental composition of the deposits produced by electron irradiation of Me$_2$Au(acac) in the temperature range of $298-423$~K.
Based on those results, the electron current used in the simulations has been set to a characteristic average value $I_{{\rm exp}} =$ 4~nA.
The beam spot radius $R_{{\rm sim}}$ has been set equal to 5~nm and the dwell time value $\tau_d$ has been set to 10~ns, similar to the previous studies \cite{Sushko2016, DeVera2020, Prosvetov2021, Prosvetov2021a}.
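For the quoted parameters, the number of PEs per dwell time and the corresponding fluence over the beam spot, to which the simulated beam is rescaled, can be estimated with a back-of-the-envelope sketch:

```python
# Electrons per dwell time and fluence over the beam spot for the
# parameters quoted above: I = 4 nA, tau_d = 10 ns, R_sim = 5 nm.

import math

E_CHARGE = 1.602176634e-19              # elementary charge, C

def electrons_per_dwell(current_a, dwell_s):
    return current_a * dwell_s / E_CHARGE

def fluence_per_dwell(current_a, dwell_s, radius_nm):
    spot_area = math.pi * radius_nm**2  # nm^2
    return electrons_per_dwell(current_a, dwell_s) / spot_area

n_pe = electrons_per_dwell(4e-9, 10e-9)
print(round(n_pe))                                     # 250 electrons
print(round(fluence_per_dwell(4e-9, 10e-9, 5.0), 2))   # 3.18 e-/nm^2
```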
The physical state of the system at the end of the replenishment phase of FEBID and prior to the next irradiation phase has been simulated similarly to our earlier IDMD simulations of FEBID \cite{Sushko2016, DeVera2020, Prosvetov2021, Prosvetov2021a}. At first, weakly bound fragments and precursor molecules were removed from the system by an external force field during a 1~ns-long simulation. Afterward, new precursor molecules have been deposited over the circular area with a radius of 7~nm to cover the PE beam spot area and a halo of secondary electrons. Such a model of replenishment prevents the accumulation of non-fragmented molecules along the perimeter of the simulation box where the fragmentation probability is significantly lower \cite{Prosvetov2021a}. The number of precursor molecules added at each FEBID cycle corresponds to the values of the steady-state surface density of Me$_2$Au(tfac) calculated according to Eq.~(\ref{Eq. SteadyStateConcentration}) for each temperature considered in this study.
The simulations have been performed using the Verlet integration algorithm with a time step of 0.5~fs and reflective boundary conditions. Interatomic interactions have been computed using the linked cell algorithm \cite{Solovyov2012, Solo2017} with a cell size of 10~\AA.
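The integration scheme mentioned above can be illustrated with a minimal velocity-Verlet step; this is a generic 1D sketch with a harmonic stand-in force, not the actual force field or simulation code used in this study.

```python
# Generic velocity-Verlet step, as used (in far more elaborate form) for
# the time integration above. A 1D harmonic force stands in for the
# interatomic force field; units are arbitrary.

def velocity_verlet_step(x, v, a, force, mass, dt):
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = force(x_new) / mass
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new, a_new

k, mass, dt = 1.0, 1.0, 0.02
force = lambda x: -k * x            # harmonic stand-in force
x, v = 1.0, 0.0
a = force(x) / mass
for _ in range(1000):
    x, v, a = velocity_verlet_step(x, v, a, force, mass, dt)

energy = 0.5 * mass * v**2 + 0.5 * k * x**2
print(abs(energy - 0.5) < 1e-3)     # True: energy drift stays bounded
```

The bounded energy error is a hallmark of this symplectic scheme, which is why it is a standard choice for long MD runs.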
\section{Results and discussion}
\label{Results}
\subsection{Analysis of Me$_2$Au(tfac) fragmentation}
The earlier IDMD-based studies of FEBID \cite{DeVera2020, Prosvetov2021} showed that the intensity of precursor fragmentation depends on the number of available fragmentation channels and the amount of energy deposited into the system during the fragmentation process.
A variation of these parameters affects the number and type of molecular fragments produced upon breakage of covalent bonds in the parent molecule.
\begin{figure*}[t!]
\includegraphics[width=0.8\textwidth]{Fig04_MassSpectr.eps}%
\caption{Relative yields of molecular fragments formed after a 10 ns-long irradiation simulation of adsorbed Me$_2$Au(tfac) molecules (indicated as $M$) at 300~K (panel~\textbf{A}) and 400~K (panel~\textbf{B}) using four sets of fragmentation cross sections described in Sect.~\ref{sec:fragm_CS}. Fragments denoted as $[M - X]$ are produced by the release of a fragment $X$ from the parent molecule $M$. The intensity of each fragment peak is normalized to the number of intact precursor molecules prior to irradiation.}
\label{Fig:MassSpectr}
\end{figure*}
In order to evaluate and analyze the contribution of various DEA and DI fragmentation channels involving different covalent bonds in Me$_2$Au(tfac) to the formation of molecular fragments, the four sets of fragmentation cross sections described in Sect.~\ref{sec:fragm_CS} have been considered. Figure~\ref{Fig:MassSpectr} shows the relative yields of Me$_2$Au(tfac) fragments created by the end of a 10-ns long irradiation phase of FEBID. Results of these simulations are also summarized in Fig.~\ref{Fig:Cross-section}D. At a given temperature, the fragment yields shown in Fig.~\ref{Fig:MassSpectr} have been normalized to the number of precursor molecules prior to irradiation for each of the four sets of the fragmentation parameters.
The simulations carried out using Set~1 (top row in Fig.~\ref{Fig:MassSpectr}) indicate the dissociation of Au--C bonds in the Me$_2$Au(tfac) molecule (denoted as M) and the release of CH$_3$ ligands. The addition of fragmentation channels associated with the DEA to CH$_3$ and \textit{tfac} ligands (Set~2, second row) does not lead to any significant change in the relative fragment yield. An explanation for this result is that the corresponding fragmentation cross sections are several orders of magnitude smaller than the partial fragmentation cross sections associated with DI (see Fig.~\ref{Fig:Cross-section}C-II).
In contrast, simulations carried out using Set~3 (third row in Fig.~\ref{Fig:MassSpectr}) show a larger variety of created fragments. In this case, the dissociation of C--H, C--F and C--O bonds has been observed. The formation of CH$_4$, C$_2$H$_6$ and H$_2$ molecules detected experimentally during electron irradiation of Me$_2$Au(acac) molecules deposited on a surface \cite{Wnuk2010} has been observed only in the simulations using Set~4, in which the formation of C--C, H--H and O--H bonds is enabled by means of the reactive rCHARMM force field. Thus, accounting for DEA and DI fragmentation channels for all the bonds in the Me$_2$Au(tfac) molecule is required to simulate the formation of the experimentally detected molecular fragments. It should be noted that some fragmented molecules have merged in the course of irradiation and formed small clusters containing two or more gold atoms. For the sake of clarity, the mass spectra shown in Fig.~\ref{Fig:MassSpectr} are limited to the mass of the parent Me$_2$Au(tfac) molecule, and larger molecular products are not shown.
The results of FEBID simulations carried out at $T = 300$~K (Fig.~\ref{Fig:MassSpectr}A) and 400~K (Fig.~\ref{Fig:MassSpectr}B) demonstrate similar fragmentation patterns for Me$_2$Au(tfac). The fraction of Me$_2$Au(tfac) molecules remaining intact in the entire simulation box after electron-beam irradiation at 400~K is 2 to 5 times lower than at 300~K.
This observation is explained by the difference between the spatial distributions of precursor molecules and the fragmentation rate. Indeed, Me$_2$Au(tfac) molecules are distributed uniformly over the substrate with the surface density depending on the system's temperature. In contrast, the fragmentation rate (independent of temperature) is maximal within the beam spot area with a radius of 5~nm. Towards the edge of the simulation box, the fragmentation rate decreases by several orders of magnitude due to the smaller number of SEs emitted from the substrate in that spatial region. Due to a higher surface density of precursors deposited at 300~K, $n_{\rm p0} = 2.6$~molecules/nm$^2$, most of the molecules outside the beam spot area do not dissociate during one irradiation phase due to the low fragmentation rate in that region. However, a lower surface density of the adsorbed precursors at 400~K ($n_{\rm p0} = 0.8$~molecules/nm$^2$) leads to the fragmentation of almost all the molecules within the beam spot area and in the surrounding halo region by the end of the irradiation phase.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Fig05_MassSpectr_Ed.eps}%
\caption{Relative yields of molecular fragments formed after a 10 ns-long irradiation simulation of adsorbed Me$_2$Au(tfac) molecules (denoted as $M$) at $T = 400$~K. Fragments denoted as $[M - X]$ are produced by the release of a fragment $X$ from the parent molecule $M$. The simulations have been conducted using Set~4 of the fragmentation cross sections described in Sect.~\ref{sec:fragm_CS} and the values of the deposited energy $E_d = 400$, 450 and 500~kcal/mol.}
\label{Fig:MassSpectr_Ed}
\end{figure}
Figure~\ref{Fig:MassSpectr_Ed} illustrates the variation of the yield of Me$_2$Au(tfac) fragments for different values of the energy $E_d$ transferred to the bonded atoms during the fragmentation process. As an illustration, the results are presented for the simulations conducted using Set~4 of fragmentation cross sections described in Sect.~\ref{sec:fragm_CS}. As $E_d$ increases, the fraction of precursor molecules that remain intact by the end of a 10-ns long irradiation phase of FEBID decreases. At the same time, a larger number of fragments (particularly F, C$_2$H$_6$ and C$_5$F$_3$O$_2$H$_6$) is produced.
In general, the value of $E_d$ required for the bond dissociation depends on the molecular structure and environment.
For bulky ligands made of several organic groups (as is the case for $\beta$-diketonates), the energy given to a metal--ligand bond is distributed over many degrees of freedom, thus suppressing the fragmentation process.
As shown below, results of the simulations performed with the value $E_d = 500$~kcal/mol agree with experimental results in terms of the metal content in a deposit.
\subsection{Temperature effects in the FEBID process}
The temperature at which the FEBID process operates may influence the growth rate and metal content of deposits \cite{Mulders2011, Koops1996}. A variation in the process temperature also has an impact on the adsorption and diffusion of precursor molecules on a substrate, and on their desorption from the substrate. As a result, the equilibrium precursor concentration on the surface depends strongly on temperature.
Figure~\ref{Fig:Snapshot} shows the simulation snapshots of the nanostructures grown at temperatures ranging from 300 to 450~K. As an example, the snapshots are presented for the simulations performed using the fragmentation cross sections from Set~1 (Fig.~\ref{Fig:Cross-section}A). The morphologies of nanostructures obtained by employing the other sets of fragmentation cross sections do not show any significant differences from those shown in Fig.~\ref{Fig:Snapshot}. The variation in the number of simulated FEBID cycles at different temperatures is due to the difference in the number of atoms accumulated on the surface.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Fig06_Snapshot1.eps}%
\caption{Snapshots of the IDMD simulations of the FEBID process for Me$_2$Au(tfac) with electron current $I_{\rm exp}=4$~nA at different temperatures $T$ and the corresponding steady-state concentrations of adsorbed precursors $N_{\rm a}$. The left column shows the system's side view along the diagonal cross sections indicated by dotted lines in the top view shown in the right column.
The merged largest clusters are visualized in color using the same color scheme as in Fig.~\ref{Fig:molecule}. Isolated precursor molecules and small fragments are shown in gray scale. The primary electron beam spot is depicted by dashed lines in the left column and by circles in the right column. Grid line spacing is 1~nm in all dimensions.}
\label{Fig:Snapshot}
\end{figure}
The largest topologically-connected cluster is shown in Fig.~\ref{Fig:Snapshot} in color, while smaller isolated clusters and intact precursor molecules are presented in gray scale.
Most of the precursor molecules adsorbed within the PE beam spot area (indicated in Fig.~\ref{Fig:Snapshot} by dashed lines and circles) undergo fragmentation and merge into a larger structure for all precursor concentrations $n_{\rm p0}$ considered. In comparison with FEBID simulations for Pt(PF$_3$)$_4$ and Fe(CO)$_5$ molecules \cite{Prosvetov2021,Prosvetov2021a}, where the deposited metal clusters merged together forming dendrite-like metal structures, the gold-containing deposit is characterized by small-size metal grains consisting of several gold atoms incorporated into an organic matrix, independently of temperature. This difference is explained by the different topology of Pt(PF$_3$)$_4$, Fe(CO)$_5$ and Me$_2$Au(tfac) molecules and the number of non-volatile fragments produced in the course of electron beam irradiation.
The simulation results are in agreement with the experimental analysis of the deposit's morphology for gold-containing $\beta$-diketonate precursors \cite{Riazanova2012, DosSantos2018}. The lateral size of the grown structure depends on the surface density of precursors. At room temperature, which corresponds to a high precursor surface density, the merged structure is limited to the PE beam spot, while it occupies a larger area at $T = 400$~K. The localization of the deposit mostly within the beam spot area at $T = 450$~K (Fig.~\ref{Fig:Snapshot}C) can be explained by the small number of adsorbed precursor molecules.
The deposit's growth rate, defined as the average deposit height per accumulated electron fluence,
is plotted in Fig.~\ref{Fig:Height} at different temperatures $T$ within the range from 300 to 450~K. The growth rate of the deposit decreases with an increase in the FEBID operating temperature. This result can be explained by a decrease in the steady-state surface density of adsorbed precursors with $T$, see Eq.~(\ref{Eq. SteadyStateConcentration}). Therefore, the number of adsorbed precursor molecules becomes too small to enable the formation of large metal-containing clusters. This simulation result is in agreement with the experimentally measured dependence of the deposit's growth rate on temperature for a structurally similar precursor molecule Me$_2$Au(acac) \cite{Mulders2011}.
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{Fig07_GrowthRate_Temp.eps}%
\caption{Simulated growth rate of the deposit (red squares) defined as its height in the beam spot area per accumulated electron fluence at different temperatures in the range of $300-450$~K. Blue circles depict the experimental data on the deposit's growth rate during FEBID of Me$_2$Au(acac) \cite{Mulders2011}.}
\label{Fig:Height}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{Fig08_Au-C_Temp.eps}%
\caption{Comparison of the Au:C ratio in the FEBID-grown deposits as a function of treatment temperature. Full symbols correspond to the simulation results for Me$_2$Au(tfac) performed using Set~4 of the fragmentation cross sections. Open symbols indicate the experimentally measured Au:C ratios in the deposits grown with Me$_2$Au(tfac) \cite{Koops1996} and Me$_2$Au(acac) \cite{Mulders2011} precursor molecules at different substrate temperatures.}
\label{Fig: Au:C ratio}
\end{figure}
The metal content in deposits is evaluated differently in different FEBID experiments. Some experimental studies reported the Au:C ratio \cite{Koops1996}, while other studies reported the relative fraction of Au, C and O atoms in the deposits \cite{Mulders2011a, DosSantos2018,Shawrav2016a}. In this study, the metal content in the deposit is characterized by the Au:C ratio for easier comparison with the experimental results.
Figure~\ref{Fig: Au:C ratio} compares the results of IDMD simulations (full circles) with experimental data on the Au:C ratio in the deposits of Me$_2$Au(tfac) \cite{Koops1996} and Me$_2$Au(acac) \cite{Mulders2011} (open symbols) as a function of the FEBID process temperature. The Au:C ratio obtained in the simulations is within the range of experimentally reported values and follows the experimentally observed trend that the concentration of gold in the deposit increases with temperature.
Higher metal content in the deposit grown at 400~K can be explained by a combination of several factors. First, the deposition process at elevated temperatures leads to faster thermal desorption of intact precursor molecules and created volatile fragments. As a result, the electron-induced dissociation process takes place in a less dense environment, which leads to a more efficient release of fragments from the deposit. Second, the rates of chemical reactions involving atomic and molecular fragments of Me$_2$Au(tfac) should increase with the temperature at which the FEBID process operates. Third, the operational temperature of FEBID governs the diffusion of precursor molecules and fragments during irradiation. This affects the follow-up chemistry and, thus, the atomic content, morphology and growth rate of the deposits.
\section{Conclusions}
\label{Conclusions}
Irradiation-driven molecular dynamics (IDMD) simulations have been performed to explore the role of thermal effects during the FEBID process of Me$_2$Au(tfac), a commonly used precursor molecule for the fabrication of gold nanostructures.
The absolute cross section of electron-induced fragmentation of Me$_2$Au(tfac) required as an input for IDMD has been obtained from the experimentally measured fragmentation mass spectra and fragment ion yields for structurally similar molecules and smaller functional groups of Me$_2$Au(tfac).
The cross section has been evaluated by different methods accounting for DI- and DEA-induced cleavage of different covalent bonds in the molecule. In the simplest approximation, the calculated total fragmentation cross section accounted only for the DI-induced cleavage of covalent bonds between the gold atom and the ligands. The most complete approximation for the fragmentation cross section accounted for the contribution of DI and DEA processes in the cleavage of covalent bonds between the gold atom and the ligands, as well as for the bond cleavage within the ligands.
The explicit simulation of chemical reactions involving the created atomic and molecular fragments has enabled the formation of volatile molecular products H$_2$, CH$_4$ and C$_2$H$_6$ which were observed experimentally during FEBID of Me$_2$Au(acac).
The FEBID process of Me$_2$Au(tfac) precursor molecules has been simulated at different temperatures in the range $300-450$~K.
The simulations confirm experimental observations that deposits consist of small gold clusters embedded into a carbon-rich organic matrix. The simulated growth rate of the deposit decreases from $5 \times 10^{-4}$ to $0.1 \times 10^{-4}$ $\mu$m$^3$/nC upon the temperature increase from 300 to 450~K. A larger number of Me$_2$Au(tfac) fragments created while accounting for the DEA- and DI-induced cleavage of all the bonds in the precursor molecule leads to an increase in the concentration of gold in the deposit. The simulations predict an increase in Au:C ratio in the deposits from $\sim$0.18 to $\sim$0.25 upon increasing the temperature from 300 to 450~K. The simulated deposit's characteristics, such as the deposit's structure, morphology, growth rate, and elemental composition at different temperatures, are in agreement with experimental data.
\begin{acknowledgments}
The authors acknowledge financial support from the Deutsche Forschungsgemeinschaft (Project no. 415716638), and the European Union's Horizon 2020 research and innovation programme – the RADON project (GA 872494) within the H2020-MSCA-RISE-2019 call.
This article is also based upon work from the COST Action CA20129 MultIChem, supported by COST (European Cooperation in Science and Technology).
The possibility of performing computer simulations at the Goethe-HLR cluster of the Frankfurt Center for Scientific Computing is gratefully acknowledged.
\end{acknowledgments}
Speech translation (ST), the task of translating acoustic speech signals into text in a foreign language, is a complex and multi-faceted task that builds upon work in automatic speech recognition (ASR) and machine translation (MT).
ST applications are diverse and include travel assistants \cite{Takezawa1998}, simultaneous lecture translation \cite{Fugen2008}, movie dubbing/subtitling \cite{Saboo2019,Matusov2019a}, language documentation and crisis response \cite{Bansal2017}, and developmental efforts \cite{Black2002}.
Until recently, the only feasible approach has been the cascaded approach that applies an ASR to the speech inputs, and then passes the results on to an MT system. Progress in ST has come from two fronts: general improvements in ASR and MT models, and moving from the loosely-coupled cascade in its most basic form toward a tighter coupling. However, despite considerable efforts toward tight coupling, a large share of the progress has arguably been owed simply to general ASR and MT improvements.\footnote{For instance, \newcite{Pham2019a}'s winning system in the IWSLT 2019 shared ST task \cite{Niehues2019} makes heavy use of recent ASR and MT modeling techniques, but is otherwise a relatively simple cascaded approach.}
Recently, new modeling techniques and in particular end-to-end trainable encoder-decoder models have fueled hope for addressing challenges of ST in a more principled manner. Despite these hopes, the empirical evidence indicates that the success of such efforts has so far been mixed \cite{Weiss2017,Niehues2019}.
In this paper, we will attempt to uncover potential reasons for this. We start by surveying models proposed throughout the three-decade history of ST.
By contrasting the extreme points of loosely coupled cascades vs.\ purely end-to-end trained direct models, we identify foundational challenges:
erroneous early decisions, mismatch between spoken-style ASR outputs and written-style MT inputs, and loss of speech information (e.g.\ prosody) on the one hand, and data scarcity on the other hand.
We then show that to improve data efficiency, most end-to-end models employ techniques that re-introduce issues generally attributed to cascaded ST.
Furthermore, this paper proposes a categorization of ST research into well-defined terms for the particular challenges, requirements, and techniques that are being addressed or used. This multi-dimensional categorization suggests a modeling space with many intermediate points, rather than a dichotomy of cascaded vs.\ end-to-end models, and reveals a number of trade-offs between different modeling choices.
This implies that additional work to more explicitly analyze the interactions between these trade-offs, along with further model explorations, can help to determine more favorable points in the modeling space, and ultimately the most favorable model for a specific ST application.
\section{Chronological Survey}
This chapter surveys the historical development of ST and introduces key concepts that will be expanded upon later.\footnote{For a good comparison of empirical results, which are not the focus of this paper, we refer to concurrent work \cite{Sulubacak2019}. Moreover, for conciseness we do not cover the sub-topic of simultaneous translation \cite{Fugen2008}.}
\subsection{Loosely Coupled Cascades}
\label{sec:loosely-coupled-cascade}
Early efforts to realize ST \cite{Stentiford1988,Waibel1991}
introduced what we will refer to as the \textbf{loosely coupled cascade} in which separately built ASR and MT systems are employed and the best hypothesis of the former is used as input to the latter. The possibility of \textbf{speech-to-speech} translation, which extends the cascade by appending a text-to-speech component, was also considered early on \cite{Waibel1991}.
These early systems were especially susceptible to \textbf{errors propagated} from the ASR, given the widespread use of interlingua-based MT which relied on parsers unable to handle malformed inputs \cite{Woszczyna1993,Lavie1996,Liu2003}. Subsequent systems \cite{Wang1998,Takezawa1998,Black2002,sumita2007nict}, relying on data-driven statistical MT, somewhat alleviated the issue, and also in part opened the path towards tighter integration.
\subsection{Toward Tight Integration}
Researchers soon turned to the question of how to avoid early decisions and the problem of error propagation. While the desirable solution of full integration over transcripts is intractable \cite{Ney1999}, approximations are possible.
\newcite{Vidal1997,Bangalore2001,Casacuberta2004,perez2007comparison} compute a composition of finite-state transducer (FST) based ASR and MT models, which approximates the full integration up to search heuristics, but suffers from limited reordering capabilities. A much simpler, though computationally expensive, solution is the \textbf{$n$-best} translation approach which replaces the sum over all possible transcripts by a sum over only the $n$-best ASR outputs \cite{Woszczyna1993,Lavie1996}.
Follow-up work suggested \textbf{lattices} and \textbf{confusion nets} \cite{Saleem2004,Zhang2005,Bertoldi2005} as more effective and efficient alternatives to $n$-best lists. Lattices proved flexible enough for integration into various translation paradigms, from word-based and phrase-based ST \cite{matusov2005phrase,Matusov2008} to neural lattice-to-sequence models \cite{Sperber2017,Sperber2019b,Zhang2019b,Beck2019}.
Another promising idea was to limit the detrimental effects of early decisions, rather than attempting to avoid early decisions. One way of achieving this is to train \textbf{robust translation} models by introducing synthetic ASR errors into the source side of MT corpora \cite{Peitz2012,Tsvetkov2014,Ruiz2015,Sperber2017a,Cheng2018,Cheng2019a}.
A different route is taken by \newcite{Dixon2011,He2011} who directly optimize ASR outputs towards translation quality.
Beyond early decisions, research moved towards tighter coupling by addressing issues arising from ASR and MT models being trained separately and on different types of corpora. \textbf{Domain adaptation} techniques were used by \newcite{Liu2003,Fugen2008} to adapt models to the spoken language domain. \newcite{Matusov2006,Fugen2008} propose re-segmenting the ASR output and inserting \textbf{punctuation}, so as to provide the translation model with well-formed text inputs. In addition, \textbf{disfluency removal} \cite{Fitzgerald2009} was proposed to avoid translation errors caused by disfluencies that are often found in spoken language.
\newcite{aguero2006prosody,Anumanchipalli2012,Do2017,Kano2018a} propose \textbf{prosody transfer} for speech-to-speech translation by determining source-side prosody and applying transformed prosody characteristics to the aligned target words.
\subsection{Speech Translation Corpora}
\label{sec:e2e-corpora}
It is important to realize that all efforts to this point had used separate ASR and MT corpora for training. This often led to a mismatch between ASR trained on data from the spoken domain, and MT trained on data from the written domain. \textbf{End-to-end ST data} (translated speech utterances) was only available in small quantities for test purposes.
\newcite{Paulik2010} proposes the use of audio recordings of interpreter-mediated communication scenarios, which is not only potentially easier to obtain, but also does not exhibit such domain mismatches. \newcite{Post2013} manually translate an ASR corpus to obtain an end-to-end ST corpus, and show that training both ASR and MT on the same corpus considerably improves results compared to using out-of-domain MT data. Unfortunately, high annotation costs prevent scaling of the latter approach, so follow-up work concentrates on compiling ST corpora from available web sources \cite{Godard2018,Kocabiyikoglu2018,DiGangi2019c,Boito2019,Beilharz2019,Iranzo-Sanchez2019}. Note that despite these efforts, publicly available ST corpora are currently strongly limited in terms of both size and language coverage. For practical purposes, the use of separate ASR and MT corpora is therefore currently unavoidable.
\subsection{End-to-End Models}
\label{sec:survey-e2e}
The availability of end-to-end ST corpora, along with the success of end-to-end models for MT and ASR, led researchers to explore ST models trained in an end-to-end fashion. This was fueled by a hope to solve the issues addressed by prior research in a principled and more effective way.
\newcite{Duong2016,Berard2016,Bansal2018} explore \textbf{direct ST models} that translate speech without using explicitly generated intermediate ASR output.
In contrast, \newcite{Kano2017,Anastasopoulos2018,Wang2020} explore \textbf{end-to-end trainable cascades and triangle models}, i.e. models that do rely on transcripts, but are optimized in part through end-to-end training.
\textbf{Multi-task training} and \textbf{pre-training} were proposed as a way to incorporate additional ASR and MT data and reduce dependency on scarce end-to-end data \cite{Weiss2017,Berard2018,Bansal2019,Stoian2019,Wang2020}.
As these techniques were not able to exploit ASR and MT data as effectively as the loosely coupled cascade, other approaches like \textbf{sub-task training} for end-to-end-trainable cascades \cite{Sperber2019}, \textbf{data augmentation} \cite{Jia2019b,Pino2019a}, knowledge distillation \cite{Liu2019}, and meta-learning \cite{Indurthi2020} were proposed.
\newcite{Salesky2019} propose pre-segmenting speech frames, while \newcite{Jia2019,Tjandra2019a} explore speech-to-speech translation. \newcite{Sung2019,Gangi2019,DiGangi2019b,Bahar2019b,Inaguma2019,DiGangi2019} transfer ideas from the MT and ASR fields to ST.
\section{Central Challenges}
\label{sec:challenges}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{conditioning.pdf}
\caption{Illustration of inference strategies (\S\ref{sec:inference}): Committed/marginalizing cascade (\texttt{CC}/\texttt{MC}), direct (\texttt{Di}), committed/marginalizing triangle (\texttt{CT}/\texttt{MT}), joint (\texttt{Jt}). Double lines differentiate the observed variable (speech input \textit{X}) from random variables (intermediate representations \textit{IR} and translations \textit{T}). Shaded circles marginalize over random variables.}
\label{fig:conditioning}
\end{figure}
Given the abundance of prior work, a clear picture on where we currently stand is needed. For purposes of identifying the key challenges in ST research, this section will contrast the extreme cases of the \textit{loosely coupled cascade} (\texttt{CC} in Fig.~\ref{fig:conditioning})\footnote{ASR and MT models trained separately on different corpora; intermediate representation is ASR 1-best output.} against the \textit{vanilla direct model} (\texttt{Di} in Fig.~\ref{fig:conditioning}).\footnote{Encoder-decoder model trained on speech utterances paired with translations; no intermediate representations used.} We emphasize that these models are only extreme points in a modeling space with many intermediate points, as we will see in \S\ref{sec:modeling_techniques}. We assume appropriate speech features $X$ as inputs. $T,\hat{T}\in\mathcal{T}$ denote candidate/best translations, respectively, from the MT hypothesis space. $S{\in}\mathcal{H}$ denotes a graphemic transcript from the ASR hypothesis space.
\subsection{Challenges of Loosely Coupled Cascades}
\label{sec:challenges-loosely-coupled}
The loosely coupled cascade justifies its decomposition into MT model $P_\text{MT}\left({T}{\mid}{S}\right)$ and ASR model $P_\text{ASR}\left({S}{\mid}{X}\right)$ as follows:
\begin{align}
\hat{T}&=\argmax_{T\in\mathcal{T}} P\left({T}\mid X\right) \label{eq:cascade1}
\\
&=\argmax_{T\in\mathcal{T}} \sum_{S\in\mathcal{H}}P\left({T}{\mid}{S},{X}\right)P\left({S} {\mid} {X}\right) \label{eq:cascade2}
\\
&\approx\argmax_{T\in\mathcal{T}} \sum_{S\in\mathcal{H}}P_\text{MT}\left({T}{\mid}{S}\right)P_\text{ASR}\left({S}{\mid}{X}\right) \label{eq:cascade3}
\\
&\approx\argmax_{T\in\mathcal{T}} \sum_{S\in\mathcal{H}'}P_\text{MT}\left({T}{\mid}{S}\right)P_\text{ASR}\left({S}{\mid}{X}\right) \label{eq:cascade4}
\end{align}
Note that here the set $\mathcal{H}'$ contains only a single entry, the 1-best ASR output. The approximations in these derivations directly result in the following three foundational challenges:
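The gap between full marginalization (Eq.~\ref{eq:cascade3}) and the 1-best approximation (Eq.~\ref{eq:cascade4}) can be illustrated with a toy numerical sketch, in which the two inference rules disagree on the best translation. All probabilities below are hypothetical and chosen only for illustration:

```python
# Toy illustration of Eq. (3) vs. Eq. (4); all probabilities are
# hypothetical. Committing to the 1-best transcript (Eq. 4) can select
# a different translation than marginalizing over transcripts (Eq. 3).

p_asr = {"s1": 0.6, "s2": 0.4}              # P_ASR(S | X)
p_mt = {                                    # P_MT(T | S)
    "s1": {"t1": 0.6, "t2": 0.4},
    "s2": {"t1": 0.1, "t2": 0.9},
}

def best_marginalized():
    # Eq. (3): sum over all transcripts before taking the argmax.
    scores = {t: sum(p_asr[s] * p_mt[s][t] for s in p_asr)
              for t in ("t1", "t2")}
    return max(scores, key=scores.get)

def best_committed():
    # Eq. (4): commit to the single best transcript first.
    s_hat = max(p_asr, key=p_asr.get)
    return max(p_mt[s_hat], key=p_mt[s_hat].get)

print(best_marginalized(), best_committed())  # t2 t1
```

Here the 1-best transcript favors \texttt{t1}, while marginalizing over both transcripts favors \texttt{t2}, mirroring how error propagation can arise from early commitment.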
\paragraph{Erroneous early decisions:} \textit{Committing to a potentially erroneous $S$ during inference.} This leads to the well-known problem of \textbf{error propagation} \cite{Ruiz2014} and is caused by avoiding the intractable full integration over transcripts (Eq.~\ref{eq:cascade3}) and using only the 1-best ASR output instead (Eq.~\ref{eq:cascade4}). Typical countermeasures include increasing $\mathcal{H}'$ to cover a larger space using lattices or confusion nets, or improving the robustness of MT models.
\paragraph{Mismatched source-language:} \textit{ASR and MT components model the source-language (transcript) priors }$P_\text{MT}(S)$ \textit{and} $P_\text{ASR}(S)$ \textit{differently.}\footnote{Note that our definition does not entail covariance shift and other forms of domain mismatch \cite{Kouw2018} which, though relevant, are not unique to cascaded ST and are widely covered by general ASR and MT literature \cite{Cuong2018}.}
Causes include modeling assumptions, e.g.\ ASR modeling only unpunctuated transcripts, as well as mismatched training data, leading to stylistic and topical divergence.
Typical countermeasures are domain adaptation techniques, disfluency removal, text normalization, and segmentation/punctuation insertion.
\paragraph{Information loss:} \textit{Assumed conditional independence between inputs and outputs, given the transcript: $\big(T\upmodels X\big)\mid{S}$.} This can be seen in Eq.~\ref{eq:cascade3} and results in any information not represented in $S$ being lost for the translation step. In particular, the MT model is unaware of \textbf{prosody}, which structures and disambiguates the utterances, thus playing a role similar to punctuation in written texts, and provides ways to emphasize words or parts of the message that the speaker thinks are important. Prosody also conveys information on the speaker's attitude and emotional state \cite{Jouvet2019}.
\begin{table*}[tb]
\centering
\small
\begin{tabular}{cc}
\hline \textbf{English} & \textbf{Japanese} \\ \hline
& \textit{kochira wa suekko no lucy desu} \\
\underline{this} is my \underline{niece} , \underline{lucy} & \begin{CJK}{UTF8}{min}こちら は 姪っ子 の ルーシー です 。\end{CJK} \\
& \textit{lucy, kono ko ga watashi no suekko desu} \\
\underline{this} is my niece , lucy & \begin{CJK}{UTF8}{min}ルーシー 、 この 子 が 私 の 姪っ子 です 。\end{CJK} \\
\hline
& \textit{chiizu toka jamu toka, dore ni shimasu ka} \\
will you have \tone{15}cheese or \tone{15}jam & \begin{CJK}{UTF8}{min}チーズ とか ジャム とか、 どれ に します か ?\end{CJK} \\
& \textit{chiizu ka jamu, docchi ni shimasu ka} \\
will you have \tone{15}cheese or \tone{51}jam & \begin{CJK}{UTF8}{min}チーズ か ジャム、 どっち に します か ?\end{CJK} \\
\hline
\end{tabular}
\caption{Motivating examples for prosody-aware translation from English to Japanese. In the first example, prosody disambiguates whether the speaker is talking about \textit{Lucy} as a third person or directly addressing \textit{Lucy}. In the second example, prosody disambiguates whether \textit{cheese or jam} is an open set or a closed set. In both cases, the surface form of the Japanese translation requires considerable changes depending on the prosody.}
\label{tab:prosody-examples}
\end{table*}
\subsection{Challenges of the Vanilla Direct Model}
Consider instead the other extreme case: an encoder-decoder model trained to directly produce translations from speech (Eq.~\ref{eq:cascade1}). Because this model avoids the decomposition in Eq.~\ref{eq:cascade2}-\ref{eq:cascade4}, it is not subject to the three issues outlined in \S\ref{sec:challenges-loosely-coupled}.
Unfortunately, this second extreme case is often impractical due to its dependency on scarce end-to-end ST training corpora (\S\ref{sec:e2e-corpora}), rendering this model unable to compete with cascaded models that are trained on abundant ASR and MT training data.
Most recent works therefore depart from this purely end-to-end trained direct model, and incorporate ASR and MT back into training, e.g.\ through weakly supervised training, or by exploring end-to-end trainable cascades or triangle models (\texttt{CT}/\texttt{MT} in Fig.~\ref{fig:conditioning}). This departure raises two questions: (1)~To what extent does the re-introduction of ASR and MT data cause challenges similar to those found in loosely coupled cascades? (2)~Are techniques such as weakly supervised training effective enough to allow competing with the loosely coupled cascade? To address the second question, we propose the notion of data efficiency as a fourth key challenge.
\paragraph{Data efficiency:} \textit{The increase in accuracy achievable through the addition of a certain amount of training data.} To assess data efficiency, data ablations that contrast models over at least two data conditions are required. We argue that empirical evidence along these lines will help considerably in making generalizable claims about the relative performance between two ST models. Generalizable findings across data conditions are critical given that ST models are trained on at least three types of corpora (ASR, MT, and end-to-end corpora), whose availability vastly differs across languages.
\subsection{Data Efficiency vs.\ Modeling Power -- A Trade-Off?}
Consider how the incorporation of MT and ASR data into ST models of any kind may inherently cause the problems outlined in \S\ref{sec:challenges-loosely-coupled}: Training on MT data may weaken the model's sensitivity to prosody; the effectiveness of training on ASR+MT data may be impacted by mismatched source-language issues; even some types of end-to-end-trainable models make (non-discrete) early decisions that are potentially erroneous.
This suggests a potential trade-off between data efficiency and modeling power. In order to find models that trade off advantages and disadvantages in the most favorable way, it is therefore necessary to thoroughly analyze models across the dimensions of early decisions, mismatched source-language, information loss, and data efficiency.
\paragraph{Analyzing early decisions:}
Problems due to erroneous early decisions are inference-time phenomena in which upstream ASR errors are responsible for errors in the final translation outputs. It follows that the problem disappears for hypothetical utterances for which the ASR can generate error-free intermediate representations. Thus, models that do not suffer from erroneous early decisions can be expected to exhibit an advantage over other models especially for acoustically challenging inputs, and less so for inputs with clean acoustics. This angle can provide us with strategies for isolating errors related to this particular phenomenon.
Prior work in this spirit has demonstrated that lattice-to-sequence translation is in fact beneficial especially for acoustically challenging inputs \cite{Sperber2017}, and that cascaded models with non-discrete intermediate representations are less sensitive to artificially perturbed intermediate representations than if using discrete transcripts as an intermediate representation \cite{Sperber2019}.
\paragraph{Analyzing mismatched source-language:} End-to-end ST corpora allow for controlled experiments in which one can switch between matched vs.\ mismatched (out-of-domain) MT corpora. \newcite{Post2013} demonstrated that using a matched corpus can strongly improve translation quality for loosely coupled cascades. We are not aware of such analyses in more recent work.
\paragraph{Analyzing information loss:} Prior work \cite{aguero2006prosody,Anumanchipalli2012,Do2017,Kano2018a} has addressed prosody transfer in speech-to-speech translation, but to our knowledge the question of how such information should inform textual translation decisions is still unexplored. Table~\ref{tab:prosody-examples} shows examples that may motivate future work in this direction.
\paragraph{Analyzing data efficiency:} While several prior works aim at addressing this problem, often only a single data condition is tested, limiting the generalizability of findings. We are aware of three recent works that do analyze data efficiency across several data conditions \cite{Jia2019b,Sperber2019,Wang2020}. Findings indicate that both pretraining and data synthesis outperform multi-task training in terms of data efficiency, and that end-to-end trainable cascades are on par with loosely coupled cascades, while strongly outperforming multi-task training.
\section{Modeling Techniques}
\label{sec:modeling_techniques}
Let us now break apart modeling techniques from prior literature into four overarching categories, with the aim of exposing the ST modeling space between the extreme points of vanilla direct models and loosely coupled cascades.
\subsection{Intermediate Representations}
Almost all models use intermediate representations (IRs) in some form: non-direct models to support both training and inference, and direct models to overcome data limitations. IRs are often speech transcripts, but not necessarily so. A number of factors must be considered for choosing an appropriate IR, such as availability of supervised data, inference accuracy, expected impact of erroneous early decisions, and the feasibility of backpropagation through the IR for end-to-end training. We list several possibilities below:
\paragraph{Transcripts:} Generally used in the loosely coupled cascade. Being a discrete representation, this option prevents end-to-end training via back-propagation, although future work may experiment with work-arounds such as the straight-through gradient estimator \cite{Bengio2013}. Besides graphemic transcripts, phonetic transcripts are another option \cite{Jiang2011}.
\paragraph{Hidden representations:} \newcite{Kano2017,Anastasopoulos2018,Sperber2019} propose the use of hidden representations that are the by-product of a neural decoder generating an auxiliary IR such as a transcript. Advantages of this representation are differentiability, prevention of information loss, and weakened impact of erroneous early decisions. A downside is that end-to-end ST data is required for training.
\paragraph{Lattices:} Lattices compactly represent the space over multiple sequences, and therefore weaken the impact of erroneous early decisions. Future work may explore lattices over continuous, hidden representations, and end-to-end training for ST models with lattices as intermediate representation.
\paragraph{Other:} Prior work further suggests pre-segmented speech frames \cite{Salesky2019} or unsupervised speech-unit clusters \cite{Tjandra2019a} as intermediate representation. Further possibilities may be explored in future work.
\subsection{Inference Strategies}
\label{sec:inference}
The conditioning graph (Fig.~\ref{fig:conditioning}) reveals independence assumptions and use of IRs at inference time. Some strategies avoid the problem of early decisions (\texttt{MC}, \texttt{Di}, \texttt{MT}, \texttt{Jt}), while others remove the conditional independence assumption between inputs and outputs (\texttt{Di}, \texttt{CT}, \texttt{MT}, \texttt{Jt}).
\paragraph{Committed cascade (\texttt{CC}):} \textit{Compute one IR, rely on it to generate outputs (Eq.~\ref{eq:cascade4}).} Includes both the loosely coupled cascade, and recent end-to-end trainable cascaded models such as by \newcite{Kano2017,Sperber2019}.
\paragraph{Marginalizing cascade (\texttt{MC}):} \textit{Compute outputs by relying on IRs, but marginalize over them instead of committing to one (Eq.~\ref{eq:cascade3}).} As marginalization is intractable, approximations such as $n$-best translation or lattice translation are generally used.
\paragraph{Direct (\texttt{Di}):} \textit{Compute outputs without relying on IRs (Eq.~\ref{eq:cascade1}).} To address data limitations, techniques such as multi-task training or data augmentation can be used, but may reintroduce certain biases.
\paragraph{Committed triangle (\texttt{CT}):} \textit{Commit to an IR, then produce outputs by conditioning on both inputs and intermediate representation.}
\newcite{Anastasopoulos2018}, who introduce the triangle model, use it in its marginalizing form (see below). Unexplored variations include the use of discrete transcripts as IR, which interestingly could be seen as a strict generalization of the loosely coupled cascade and should therefore never perform worse than it if trained properly.
\paragraph{Marginalizing triangle (\texttt{MT}):} \textit{Produce output by conditioning on both input and IR, while marginalizing over the latter (Eq.~\ref{eq:cascade2}).} \newcite{Anastasopoulos2018} marginalize by taking an $n$-best list, with $n$ set to only 4 for computational reasons. This raises the question of whether the more computationally efficient lattices could be employed instead. Similar considerations apply to the end-to-end trainable marginalizing cascade.
\paragraph{Joint (\texttt{Jt}):} \textit{Changes the problem formulation to }$\hat{S},\hat{T}=\argmax_{S\in\mathcal{H},T\in\mathcal{T}} P\left({S,T}\mid X\right)$. This is a useful optimization for the many applications that display both transcripts and translations to the user, yet to our knowledge it has never been explicitly addressed by researchers.
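The marginalizing strategies (\texttt{MC}, \texttt{MT}) are in practice approximated over $n$-best lists; in log space this amounts to a log-sum-exp over transcript hypotheses. A minimal sketch, with hypothetical scores and function names:

```python
import math

def logsumexp(xs):
    # Numerically stable log(sum(exp(x))) over a list of log-scores.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def nbest_translation_logprob(t, nbest, mt_logprob):
    # log P(T|X) approximated by summing over the n-best transcripts S:
    # log sum_S P_ASR(S|X) * P_MT(T|S)
    return logsumexp([asr_lp + mt_logprob(t, s) for s, asr_lp in nbest])

# toy values (hypothetical): two transcripts, two candidate translations
nbest = [("s1", math.log(0.6)), ("s2", math.log(0.4))]
mt_table = {("t1", "s1"): 0.6, ("t2", "s1"): 0.4,
            ("t1", "s2"): 0.1, ("t2", "s2"): 0.9}
mt_lp = lambda t, s: math.log(mt_table[(t, s)])
```

A lattice-based variant would replace the explicit list with a dynamic program over lattice paths, trading exactness of the $n$-best enumeration for efficiency.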
\subsection{Training Strategies}
\label{sec:inference-strategies}
This group of techniques describes the types of supervision signals applied during \textit{training}.
\paragraph{Subtask training:} \textit{Training of sub-components by pairing IRs with either the speech inputs or the output translations.} Loosely coupled cascades rely on this training technique while recently proposed cascaded and triangle models often combine subtask training and end-to-end training.
\paragraph{Auxiliary task training:} \textit{Training by pairing either model inputs or outputs with data from an arbitrary auxiliary task through multi-task training.}\footnote{This definition subsumes pretraining, which is simply using a specific multitask training schedule.} This technique has been used in two ways in the literature: (1)~To incorporate ASR and MT data into direct models by using auxiliary models that share parts of the parameters with the main model \cite{Weiss2017}. Auxiliary models are introduced for training purposes only, and discarded during inference. This approach has been found inferior at exploiting ASR and MT data when compared to subtask training \cite{Sperber2019}. (2)~To incorporate various types of less closely related training data, such as the use of multitask training to exploit ASR data from an unrelated third language \cite{Bansal2019,Stoian2019}.
\paragraph{End-to-end:} \textit{Supervision signal that directly pairs speech inputs and output translations.} This technique is appealing because it jointly optimizes all involved parameters and may lead to better optima. The main limitation is lack of appropriate data, which can be addressed by combined training with one of the alternative supervision types, or by training on augmented data, as discussed next.
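When several of these supervision signals are combined, a common pattern is a weighted sum of per-task losses; the sketch below uses stand-in loss values, and all names and weights are hypothetical:

```python
def combined_loss(batch, loss_fns, weights):
    # Weighted multi-task objective over e.g. ASR, MT, and end-to-end
    # supervision; each loss_fns[task] scores the current batch.
    return sum(weights[task] * fn(batch) for task, fn in loss_fns.items())

# stand-ins for real per-task losses (hypothetical constant values)
loss_fns = {"asr": lambda b: 1.0, "mt": lambda b: 2.0, "e2e": lambda b: 3.0}
weights = {"asr": 0.3, "mt": 0.3, "e2e": 1.0}
total = combined_loss(None, loss_fns, weights)  # 0.3*1.0 + 0.3*2.0 + 1.0*3.0
```

Pretraining corresponds to a schedule that sets some weights to zero early on; subtask training corresponds to updating only the parameters of the component matching each loss.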
\subsection{End-to-End Training Data}
\paragraph{Manual:} \textit{Speech utterances for training are translated (and possibly transcribed) by humans.} This is the most desirable case, but such data is currently scarce. While we have seen growth in data sources in the past two years (\S\ref{sec:e2e-corpora}), collecting more data is an extremely important direction for future work.
\paragraph{Augmented:} \textit{Data obtained by either augmenting an ASR corpus with automatic translations, or augmenting an MT corpus with synthesized speech.} This has been shown to be more data-efficient than multitask training in the context of adding large MT and ASR corpora \cite{Jia2019b}. \newcite{Pino2019a} find that augmented ASR corpora are more effective than augmented MT corpora. This approach allows training direct models and end-to-end models even when no end-to-end data is available. Knowledge distillation can be seen as an extension \cite{Liu2019}. An important open question is to what extent mismatched source-language and information loss degrade the augmented data.
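A minimal sketch of the first direction, augmenting an ASR corpus with automatic translations of its transcripts; the \texttt{translate} function and data below are hypothetical stand-ins for a real MT system and corpus:

```python
def augment_asr_corpus(asr_corpus, translate):
    # Pair each speech segment with a machine translation of its
    # transcript, yielding pseudo end-to-end ST training examples.
    return [(speech, translate(transcript))
            for speech, transcript in asr_corpus]

# toy MT lookup standing in for a trained translation model
toy_mt = {"hello there": "bonjour", "many thanks": "merci beaucoup"}
pseudo_st = augment_asr_corpus(
    [("utt1.wav", "hello there"), ("utt2.wav", "many thanks")],
    toy_mt.get)
```

Any stylistic mismatch or information loss in the MT step is inherited by the resulting pseudo-labels, which is exactly the degradation question raised above.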
\paragraph{Zero-Shot:} \textit{Using no end-to-end data during training.} While augmented data can be used in most situations in which no manual data is available, it suffers from certain biases that may harm the ST model. Similarly to how zero-shot translation enables translating between unseen combinations of source and target languages,
it may be worth exploring whether some recent models, such as direct models or cascades with non-discrete IRs, can be trained without resorting to any end-to-end data for the particular language pair of interest.
\section{Applications and Requirements}
\label{sec:requirements}
While we previously described the task of ST simply as the task of generating accurate text translations from speech inputs, the reality is in fact much more complicated. Future work may exploit new modeling techniques to explicitly address the aspects drawn out below.
\subsection{Mode of Delivery}
\paragraph{Batch mode:} \textit{A (potentially large) piece of recorded speech is translated as a whole.} Segmentation into utterances may or may not be given. This mode allows access to future context, and imposes no strict computational restrictions. Typical applications include movie subtitling \cite{Matusov2019a} and dubbing \cite{Saboo2019,Federico2020}.
\paragraph{Consecutive:} \textit{Real-time situation where inputs are provided as complete utterances or other translatable units, and outputs must be produced with low latency.} A typical example is a two-way translation system on a mobile device \cite{Hsiao2006}. This is the only mode of delivery that allows interaction between speaker and translator \cite{Ayan2013}.
\paragraph{Simultaneous:} \textit{Real-time situation where latency is crucial and outputs are produced incrementally based on incoming audio stream.} Simultaneous translation is faced with an inherent delay vs.\ accuracy trade-off, such as in a typical lecture translation application \cite{Fugen2008}. In addition to computational latency, which is relevant also with consecutive translation, simultaneous translation suffers from inherent modeling latency caused by factors including reordering.
\subsection{Output Medium}
\paragraph{Text:} This is a standard setting, but is nevertheless worth discussing in more detail for at least two reasons: (1)~As is well known in the subtitling industry, reading speeds can be slower than speaking and listening speeds \cite{Romero-fresco2009}, implying that a recipient may not be able to follow verbatim text translations in case of fast speakers, and that summarization may be warranted. (2)~Text display makes repair strategies possible that are quite distinct from spoken outputs: one can alter, highlight, or remove past outputs. One possible way of exploiting this is \newcite{Niehues2018}'s strategy of simultaneous translation through re-translation.
\paragraph{Speech:} Speech outputs have been used since the early days \cite{Lavie1996}, but whether to apply text-to-speech on top of translated text has often been seen as a question to leave to user interface designers. Here, we argue that ST researchers should examine in what ways speech outputs should differ from text outputs. For example, is disfluency removal \cite{Fitzgerald2009} beneficial for speech outputs, given that human listeners are naturally able to repair disfluencies \cite{lickley1994detecting}? Further examples that need more exploration are prosody transfer \cite{aguero2006prosody} and models that directly translate speech-to-speech \cite{Jia2019}.
\subsection{The Role of Transcripts}
\paragraph{Mandatory transcripts:} \textit{User interface displays both transcripts and translations to the user.} This scenario has been implemented in many applications \cite{Hsiao2006,Cho2013a}, but has received little attention in the context of end-to-end ST research. It ties together with the \textit{joint} inference model (\S\ref{sec:inference}). Note that with loosely coupled cascades, there is little need to consider this scenario explicitly because the application can simply display the by-product transcripts to the user. But this is not easily possible with direct models or with models using IRs other than transcripts.
\paragraph{Auxiliary transcripts:} \textit{Transcriptions are not needed as user-facing model outputs, but may be exploited as IRs during training and possibly inference.} This is the most typical formal framing of the ST task, assuming that transcribed training data is useful mainly for purposes of improving the final translation.
\paragraph{Transcript-free:} \textit{No transcribed training data exists, so the model cannot rely on supervised transcripts as IR.} The main scenario is endangered language preservation for languages without written script, where it is often easier to collect translated speech than transcribed speech \cite{Duong2016}.
\subsection{Translation Method}
\begin{table}
\small
\centering
\begin{tabular}{cc}
\hline
ES & también tengo um eh estoy tomando una clase .. \\
EN & i also have um eh i’m taking a marketing class .. \\
\hline
ES & porque qué va, mja ya te acuerda que .. \\
EN & because what is, mhm do you recall now that .. \\
\hline
\end{tabular}
\caption{\label{tab:ex-faithful} Examples for faithful Spanish to English translations, taken from \cite{Salesky2019a}.}
\end{table}
The method of translation is an especially relevant factor in ST, which commonly includes a transfer from the spoken into the written domain. Here, we provide two reference points for the method of translation, while referring to \newcite{Newmark1988} for a more nuanced categorization.
\paragraph{Faithful:} \textit{Keeps the contextual meaning of the original as precisely as possible within the grammatical constraints of the target language.} With text as output medium, faithful translation may result in poor readability, e.g.\ due to the translation of disfluencies (Table~\ref{tab:ex-faithful}). Arguably the most appropriate output medium for faithful ST would be speech, although user studies are needed to confirm this. Another application is high-stakes political meetings in which translations must stay as close to the original sentence as possible. As we move toward more distant language pairs, the practicability of faithful translation of spoken language with disfluencies becomes increasingly questionable.
\paragraph{Communicative:} \textit{Renders the contextual meaning of the original such that both content and style are acceptable and comprehensible by the target audience.} An important example for improving communicativeness is disfluency removal \cite{Fitzgerald2009}. Given that human translators and interpreters adapt their translation method depending on factors that include input and output medium \cite{He2016c}, more research is needed beyond disfluency removal. Communicative translations are especially relevant in casual contexts where convenience and low cognitive effort are paramount. Arguably the closest neighbor of spoken-language style in the text realm is social media; it would be interesting to attempt speech-to-text translation with social-media-style outputs.
\section{Discussion}
Recent works on end-to-end modeling techniques are motivated by the prospect of overcoming the loosely coupled cascade's inherent issues, yet of the issues outlined in \S\ref{sec:loosely-coupled-cascade}, often only the goal of avoiding early decisions is cited as motivation. While early decisions and data efficiency have been recognized as central issues, empirical insights are still limited and further analysis is needed. Mismatched source-language and information loss are often not explicitly analyzed.
We conjecture that the apparent trade-off between data efficiency and modeling power may explain the mixed success in outperforming the loosely coupled cascade. In order to make progress in this regard, the involved issues (early decisions, mismatched source-language, information loss, data efficiency) need to be precisely analyzed (\S\ref{sec:challenges}), and more model variants (\S\ref{sec:modeling_techniques}) should be explored. As a possible starting point one may aim to extend, rather than alter, traditional models, e.g.\ applying end-to-end training as a fine-tuning step, employing a direct model for rescoring, or adding a triangle connection to a loosely coupled cascade. We further suggest that more principled solutions to the different application-specific requirements (\S\ref{sec:requirements}) should be attempted. Perhaps it is possible to get rid of segmentation as a separate step in batch delivery mode, or perhaps text as output medium can be used to visualize repairs more effectively. Several of the application-specific requirements demand user studies and will not be sufficiently solved by relying on automatic metrics only.
\section{Conclusion}
We started this paper with a chronological survey of three decades of ST research, focusing on carving out the key concepts.
We then provided definitions of the central challenges, techniques, and requirements, motivated by the observation that recent work does not sufficiently analyze these challenges. We exposed a significant space of both modeling ideas and application-specific requirements left to be addressed in future research.
Our hope is to encourage meaningful and generalizable comparisons on our quest toward overcoming the long-standing issues found in ST models.
\section{Introduction}
Let $S=k[x_1,\ldots,x_m]$ be the polynomial algebra over a field $k$ in $m$ variables and let $I = (m_1,\ldots, m_r)$ be an ideal generated by monomials.
In that case, $S/I$ is called a \emph{monomial} ring. Given a monomial ring $R = S/I$, the \emph{Poincar\'e series} of $R$ is defined as
\[
P(R) = \sum_{j=0}^{\infty} \dim \tor^R_j(k,k) t^j.
\]
A result due to Serre states that there is an inequality of power series
\begin{equation*}
P(R) \leq \frac{(1+t)^m}{1-t(\sum_{j=0}^{\infty}\dim \tor^S_j(R,k) t^j -1)}.
\end{equation*}
The ring $R$ is said to be \emph{Golod} if equality is obtained. The problem of when a monomial ring is Golod goes back to at least the 70s when Golod \cite{golod1978} showed that a monomial ring $R$ is Golod if and only if all Massey products on the Tor-algebra $\tor^S(R,k)$ vanish.
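As a concrete check, take $R = k[x,y]/(xy)$: the minimal free resolution of $R$ over $S = k[x,y]$ is $0 \to S \xrightarrow{xy} S \to R \to 0$, so $\dim \tor^S_0(R,k) = \dim \tor^S_1(R,k) = 1$ and the right-hand side of Serre's inequality equals $(1+t)^2/(1-t^2) = (1+t)/(1-t)$. Since $R$ is a hypersurface, hence Golod, equality holds and $P(R) = 1 + 2t + 2t^2 + \cdots$. The following Python sketch (ours, purely illustrative) verifies this on truncated power series:

```python
N = 12  # truncation order; series are coefficient lists p[j] = coeff of t^j

def mul(p, q):
    r = [0] * N
    for i in range(N):
        for j in range(N - i):
            r[i + j] += p[i] * q[j]
    return r

def inv(p):
    # multiplicative inverse of a series with constant term 1
    r = [1] + [0] * (N - 1)
    for n in range(1, N):
        r[n] = -sum(p[k] * r[n - k] for k in range(1, n + 1))
    return r

def pad(coeffs):
    return coeffs + [0] * (N - len(coeffs))

# dim Tor^S_j(R,k) for R = k[x,y]/(xy): 1, 1, 0, 0, ...
tor = pad([1, 1])
numerator = mul(pad([1, 1]), pad([1, 1]))   # (1+t)^m with m = 2 variables
inner = tor[:]; inner[0] -= 1               # sum_j dim Tor_j t^j - 1
denominator = pad([1])
for j in range(N - 1):
    denominator[j + 1] -= inner[j]          # 1 - t * inner
bound = mul(numerator, inv(denominator))

print(bound)  # [1, 2, 2, 2, ...]: the series of (1+t)/(1-t)
```

Replacing \texttt{tor} by the Betti numbers of any other monomial ring yields the corresponding truncated Golod bound.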
In general, it is hard to directly verify the vanishing of Massey products and so in practice the Golod property is still hard to determine.
In recent decades, the Golod property has received increasing attention in topology, where the Tor-algebra arises naturally as follows. Let $\Delta$ be a simplicial complex on vertex set $[m] = \{1, \ldots, m \}$ and define the \emph{moment-angle complex} $Z_{\Delta}$ as follows. Let $D^2$ denote the $2$-disc and $S^1$ its bounding circle. For $\sigma \in \Delta$, define
$$X_{\sigma} = \prod_{i=1}^m Y_i \subseteq (D^2)^m \quad \mbox{ where } \quad Y_i = \begin{cases} D^2 &\mbox{ if } i \in \sigma \\ S^1 &\mbox{ if } i \notin \sigma \end{cases}$$
Lastly, we put
$$Z_{\Delta} = \colim_{\sigma \in \Delta} X_{\sigma} \subseteq (D^2)^m.$$
Moment-angle complexes are one of the central objects of study in toric topology. For us, the cohomology of $Z_{\Delta}$ is of particular interest.
\begin{theorem}[\cite{buchstaberpanov2015}, Theorem 4.5.4]
Let $\Delta$ be a simplicial complex. There is an isomorphism of graded algebras
$$H^*(Z_{\Delta},k) \cong \tor^S(k[\Delta],k).$$
\end{theorem}
Here, $k[\Delta]$ denotes the \emph{Stanley-Reisner ring}
$$k[\Delta] = S / (x_{i_1}\cdots x_{i_k} \mid \{i_1,\ldots,i_k \} \notin \Delta )$$
of the simplicial complex $\Delta$. Note that $k[\Delta]$ is a square-free monomial ring. In general, the homotopy type of $Z_{\Delta}$ is not well understood, but significant progress has been made for those $Z_{\Delta}$ where $\Delta$ is Golod, see for example Grbi\'c and Theriault \cite{grbictheriault2007,grbictheriault2013}, Iriye and Kishimoto \cite{iriyekishimoto2013b} and Beben and Grbi\'c \cite{bebengrbic2017}.
The preceding discussion makes clear that the Golod property is of interest in both commutative algebra and algebraic topology. Consequently, a lot of work has been done on the Golodness problem. For example, a combinatorial characterization of Golodness in terms of the homology of the lower intervals in the lattice of saturated subsets is given by Berglund in \cite{berglund2006}. Using results from J\"ollenbeck \cite{jollenbeck2006}, it has been claimed in Berglund and J\"ollenbeck \cite{berglundjollenbeck2007} that $R$ is Golod if and only if the product on $\tor^S(R,k)$ vanishes. However, recently a counterexample to this claim was found by Katth\"an in \cite{katthan2015} where the error is traced back to \cite{jollenbeck2006}. This leads naturally to the central question this work investigates.
\begin{question}
For which classes of monomial rings $R$ is the Golod property equivalent to the vanishing of the product on $\tor^S(R,k)$?
\end{question}
A partial answer to this question is given by Theorem \ref{maintheorem2} below. To answer this question, we develop a new approach to the Golodness problem using $A_{\infty}$-algebras.
An $A_{\infty}$-algebra is similar to a differential graded algebra (dga), except that associativity only holds up to coherent homotopy.
By contrast with dgas, every resolution admits the structure of an $A_{\infty}$-algebra (as first shown by Burke \cite{burke2015}); in particular, the minimal free resolution does.
The first main result of this paper characterizes vanishing of Massey products in terms of this $A_{\infty}$-structure. A monomial ring $R$ is said to satisfy condition $(B_r)$ if every Massey product of length $k \leq r$ is defined and contains only zero. Denote by $K_R$ the Koszul dga of the monomial ring $R$. We obtain the following result.
\begin{maintheorem}
Let $R = S/I$ be a monomial ring with minimal free resolution $F$. Let $r \in \N$ and let $\mu_n$ be an $A_{\infty}$-structure on $F$ such that $F \otimes_S k$ and $K_R$ are quasi-isomorphic as $A_{\infty}$-algebras. Then $R$ satisfies $(B_r)$ if and only if $\mu_k$ is minimal for all $k \leq r$.
\end{maintheorem}
Next, we turn our attention to the class of rooted rings. A monomial ring is said to be \emph{rooted} if the minimal free resolution $F$ of $R$ is rooted in the sense of Novik \cite{novik2002}. Rooted resolutions include both the Taylor and Lyubeznik resolution. Given a rooted ring with rooting map $\pi$, we give an explicit $A_{\infty}$-structure in terms of $\pi$.
This $A_{\infty}$-structure allows us to give a combinatorial characterization of the Golod property for rooted rings as follows. Following \cite{jollenbeck2006}, we say that $R$ satisfies the \emph{gcd condition} if for all generators $m_i$ and $m_j$ with $\gcd(m_i,m_j)=1$ there exists an $m_k \neq m_i,m_j$ such that $m_k$ divides $\lcm(m_i,m_j)$. The second main result is then the following.
\begin{maintheorem}
\label{maintheorem2}
Let $R$ be a rooted ring. Then the following are equivalent.
\begin{enumerate}
\item The ring $R$ is Golod.
\item The product on $\tor^S(R,k)$ vanishes.
\item The ring $R$ satisfies the gcd condition.
\end{enumerate}
\end{maintheorem}
In particular, the main result from \cite{berglundjollenbeck2007} does hold when restricted to rooted rings.
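The gcd condition is straightforward to check mechanically. A small Python sketch (ours; monomials encoded as exponent vectors, function names our own):

```python
# A pair of monomials is coprime iff the componentwise min of exponents is 0.
def m_lcm(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def coprime(a, b):
    return all(min(x, y) == 0 for x, y in zip(a, b))

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def satisfies_gcd_condition(gens):
    for i, mi in enumerate(gens):
        for j in range(i + 1, len(gens)):
            mj = gens[j]
            # for each coprime pair, some third generator must divide the lcm
            if coprime(mi, mj) and not any(
                    divides(mk, m_lcm(mi, mj))
                    for k, mk in enumerate(gens) if k not in (i, j)):
                return False
    return True

# (xy, yz, xz) in k[x,y,z]: no coprime pair, so the condition holds vacuously.
print(satisfies_gcd_condition([(1, 1, 0), (0, 1, 1), (1, 0, 1)]))  # True
# (x1x2, x3x4): a coprime pair with no third generator dividing the lcm.
print(satisfies_gcd_condition([(1, 1, 0, 0), (0, 0, 1, 1)]))       # False
```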
\section{Simplicial resolutions}
Let $S=k[x_1,\ldots, x_m]$ and let $I$ be the ideal minimally generated by monomials $m_1, \ldots,m_r$. The \emph{Taylor resolution} $T$ \cite{taylor1966} of $S/I$ is constructed as follows.
Let $E$ denote the exterior algebra on generators $u_1, \ldots, u_r$. The resolution $T$ has underlying module $S \otimes_k E$.
If $J = \lbrace j_1 < \cdots < j_k \rbrace \subseteq \lbrace1, \ldots,r \rbrace$, then we write $u_J = u_{j_1} \cdots u_{j_k}$. Furthermore, we put $m_J = \lcm(m_{j_1}, \ldots, m_{j_k})$.
We will also write $J^i = \lbrace j_1 < \cdots < \widehat{j_i} < \cdots < j_k \rbrace$.
The differential $d$ of $T$ is given by
\[
d(u_J) = \sum_{i=1}^{\vert J \vert} (-1)^{i+1} \frac{m_J}{m_{J^i}} u_{J^i}.
\]
The Taylor resolution admits a multiplication defined by
\[
u_I \cdot u_J = \begin{cases} \sgn(I,J) \frac{m_Im_J}{m_{I\cup J}} u_{I \cup J} &\mbox{ if } I \cap J = \emptyset \\ 0 &\mbox{ otherwise} \end{cases}
\]
where $\sgn(I,J)$ is the sign of the permutation making $I \cup J$ into an increasing sequence.
This multiplication induces a differential graded algebra (dga) structure on $T$. The \emph{Tor-algebra} $\tor^S(S/I,k)$ of $S/I$ is
$$ \tor^S(S/I,k) = \bigoplus_{n} \tor^S_n(S/I,k) = \bigoplus_n H_n(T \otimes_S k)$$
where the multiplication is induced by the multiplication on $T$.\\
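These formulas are easy to verify mechanically. The following Python sketch (ours; monomials encoded as exponent vectors, chains as dictionaries) implements $d$ and the product for $I = (xy, yz, xz)$ in $k[x,y,z]$ and checks that $d^2 = 0$ and that the Leibniz rule $d(u \cdot v) = d(u)\cdot v + (-1)^{\vert u \vert} u \cdot d(v)$ holds:

```python
gens = {1: (1, 1, 0), 2: (0, 1, 1), 3: (1, 0, 1)}  # xy, yz, xz in k[x,y,z]

def m(J):  # m_J = lcm of the generators indexed by J; m of the empty set is 1
    return tuple(max((gens[j][c] for j in J), default=0) for c in range(3))

def add(chain, key, c):  # chains: {(index tuple J, coefficient monomial): int}
    chain[key] = chain.get(key, 0) + c
    if chain[key] == 0:
        del chain[key]

def d(chain):  # Taylor differential: d(u_J) = sum (-1)^{i+1} m_J/m_{J^i} u_{J^i}
    out = {}
    for (J, mono), c in chain.items():
        for pos, j in enumerate(J):
            Ji = tuple(x for x in J if x != j)
            quot = tuple(a - b for a, b in zip(m(J), m(Ji)))  # m_J / m_{J^i}
            add(out, (Ji, tuple(a + b for a, b in zip(mono, quot))),
                c * (-1) ** pos)
    return out

def prod(u, v):  # u_I u_J = sgn(I,J) m_I m_J / m_{I u J} u_{I u J}
    out = {}
    for (I, mi), ci in u.items():
        for (J, mj), cj in v.items():
            if set(I) & set(J):
                continue  # zero unless I and J are disjoint
            U = tuple(sorted(I + J))
            sign = (-1) ** sum(1 for a in I for b in J if a > b)
            quot = tuple(x + y - z for x, y, z in zip(m(I), m(J), m(U)))
            add(out, (U, tuple(p + q + r for p, q, r in zip(mi, mj, quot))),
                ci * cj * sign)
    return out

one = (0, 0, 0)
u123 = {((1, 2, 3), one): 1}
print(d(d(u123)))  # {}  (d^2 = 0)

u, v = {((1,), one): 1}, {((2, 3), one): 1}
lhs = d(prod(u, v))
rhs = prod(d(u), v)
for key, c in prod(u, d(v)).items():
    add(rhs, key, -c)  # + (-1)^{|u|} u . d(v) with |u| = 1
print(lhs == rhs)  # True  (Leibniz rule)
```

The same encoding extends verbatim to any monomial ideal by changing \texttt{gens}.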
The following method of constructing free resolutions of monomial rings is due to Bayer, Peeva and Sturmfels \cite{bayerpeevasturmfels1998}. Our exposition will follow that of Mermin \cite{mermin2012}.
Let $\lbrace m_1, \ldots, m_r \rbrace$ be a set of monomials. Fix some total order $\prec$ on $\lbrace m_1, \ldots, m_r \rbrace$.
After relabelling we may assume that $m_1 \prec m_2 \prec \cdots \prec m_r$. Let $\Delta$ be a simplicial complex on the vertex set $\lbrace 1,\ldots,r \rbrace$.
By abuse of notation, we will say $\Delta$ is a simplicial complex on vertex set $\lbrace m_1, \ldots, m_r \rbrace$.\\
Assign a multidegree $m_J$ to each simplex $J \in \Delta$ by defining
\[
m_J = \lcm \lbrace m_j \mid j \in J \rbrace.
\]
Define a chain complex $F_{\Delta}$ associated to $\Delta$ as follows.
Let $F_n$ be the free $S$-module on generators $u_J$ with $\vert J \vert = n$.
For $J = \lbrace j_1 \prec \cdots \prec j_n \rbrace$, put $J^i = \lbrace j_1 \prec \cdots \prec \widehat{j_i} \prec \cdots \prec j_n \rbrace$.
The differential $d\colon F_n \to F_{n-1}$ is defined, for $J \in \Delta$, by
\[
d(u_J) = \sum_{i=1}^{\vert J \vert} (-1)^{i+1} \frac{m_J}{m_{J^i}} u_{J^i}.
\]
\begin{example}
Let $\Delta^r$ be the full simplex on vertex set $\lbrace m_1, \ldots, m_r \rbrace$. Then $F_{\Delta^r}$ is the Taylor resolution of $R = S/I$. This also justifies the use of the same notation for both.
\end{example}
In general, $F_{\Delta}$ need not be a resolution of $S / I$. However, we have the following theorem.
\begin{theorem}[\cite{bayerpeevasturmfels1998}, Lemma 2.2]
Let $\Delta$ be a simplicial complex on vertex set $\lbrace m_1, \ldots, m_r \rbrace$ and define, for a multidegree $\mu$, a subcomplex
\[
\Delta_{\mu} = \lbrace J \in \Delta \mid m_J \mbox{ divides } \mu \rbrace.
\]
Then $F_{\Delta}$ is a resolution of $R$ if and only if $\Delta_{\mu}$ is either acyclic or empty for all multidegrees $\mu$.
\end{theorem}
A resolution $F$ is called a \emph{simplicial resolution} if $F = F_{\Delta}$ for some simplicial complex $\Delta$.
\begin{remark}
\label{twonotations}
Note that if $\Delta' \subseteq \Delta$, then $F_{\Delta'}$ is a subcomplex of $F_{\Delta}$.
In particular, since each simplicial complex $\Delta$ is included in the full simplex on its vertex set, each simplicial resolution of $S/I$ is a subcomplex of the Taylor resolution of $S/I$.
\end{remark}
In the rest of the paper we will restrict our attention to the following special type of simplicial resolution which is due to Novik \cite{novik2002}.
Given a monomial ideal $I = (m_1, \ldots, m_r)$ we define the \emph{lcm-lattice} $L(I)$ to be the set of all $\lcm(m_{i_1},\ldots, m_{i_k})$ where $1 \leq i_1 < \cdots < i_k \leq r$ and $k=1,\ldots, r$, together with the element $\hat{0} = 1$.
The set $L=L(I)$ admits a partial order given by divisibility. Then $L$ forms a lattice with join $a \vee b = \lcm(a,b)$; the meet $a \wedge b$ is the join of all elements dividing both $a$ and $b$ (which need not coincide with $\gcd(a,b)$). The lattice $L$ has minimal element $\hat{0} = 1$ and maximal element $\hat{1} = \lcm(m_1, \ldots, m_r)$.
\begin{definition}
A \emph{rooting map} on $L$ is a map $\pi\colon L \setminus \lbrace \hat{0} \rbrace \to \lbrace m_1, \ldots, m_r \rbrace$ such that
\begin{enumerate}
\item for every $m \in L$, $\pi(m)$ divides $m$
\item $\pi(m) = \pi(n)$ whenever $\pi(m)$ divides $n$ and $n$ divides $m$.
\end{enumerate}
\end{definition}
Now, let $\pi$ be a rooting map and let $A \subseteq \lbrace m_1, \ldots, m_r \rbrace$ be non-empty. Define $\pi(A) = \pi(\lcm(A))$.
A set $A$ is \emph{unbroken} if $\pi(A) \in A$ and $A$ is \emph{rooted} if every non-empty $B \subseteq A$ is unbroken.
Let $RC(L,\pi)$ denote the set of all rooted sets with respect to $L$ and $\pi$. Then $RC(L,\pi)$ is easily seen to be a simplicial complex on vertex set $\lbrace m_1, \ldots, m_r \rbrace$ and we have the following result.
\begin{theorem}[\cite{novik2002}, Theorem 1]
Let $I = (m_1, \ldots, m_r)$ be a monomial ideal and let $L$ denote its lcm-lattice. Suppose that $\pi$ is a rooting map on $L$.
Then the chain complex $F_{RC(L,\pi)}$ associated to the simplicial complex $RC(L,\pi)$ is a free resolution of $I$.
\end{theorem}
An important special case of this construction is the Lyubeznik resolution:
\begin{definition}
\label{lyubeznikresolution}
Let $I = (m_1, \ldots, m_r)$ be a monomial ideal and pick some total order $\prec$ on the $m_i$. After relabelling we may assume that $m_1 \prec m_2 \prec \cdots \prec m_r$.
Define
\[
\pi(A) = \min_{\prec}\lbrace m_i \mid m_i \mbox{ divides } \lcm(A) \rbrace.
\]
Then $\pi$ is easily seen to be a rooting map. The resolution associated to $RC(L,\pi)$ is called the \emph{Lyubeznik resolution}.
\end{definition}
In this paper we will only consider resolutions $F$ that are as small as possible in the sense that each $F_n$ has the minimal number of generators.
More precisely, we have the following definition.
\begin{definition}
Let $S/I$ be a monomial ring. A free resolution $F \to S/I$ is said to be \emph{minimal} if $d(F) \subseteq (x_1,\ldots,x_m)F$.
\end{definition}
If the minimal free resolution of $S/I$ is a resolution associated to $RC(L,\pi)$ for some rooting map $\pi$, then $I$ (respectively $S/I$) is called a \emph{rooted ideal} (respectively a \emph{rooted ring}).
Similarly, if the Lyubeznik resolution of $S/I$ is minimal then $I$ (respectively $S/I$) is called a \emph{Lyubeznik ideal} (respectively a \emph{Lyubeznik ring}).
\begin{example}
Let $S = k[x,y,z]$ and let $I$ be the ideal generated by $m_1 = xy$, $m_2 = yz$ and $m_3 = xz$. Order the generators as $m_1 \prec m_2 \prec m_3$.
Let $\pi$ be the rooting map of the Lyubeznik resolution as in Definition \ref{lyubeznikresolution}. Then the rooted sets are $\{m_1\}$, $\{m_2\}$, $\{m_3\}$, $\{m_1,m_2\}$ and $\{m_1,m_3\}$. So the Lyubeznik resolution is
\begin{center}
\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=3em,
column sep=5em,
text height=1.5ex, text depth=0.25ex
]
{ S^2 & S^3 & S \\
};
\path[overlay,->, font=\small, >=latex]
(m-1-1) edge node[yshift=1.5ex] {$d_2$} (m-1-2)
(m-1-2) edge node[yshift=1.5ex] {$d_1$} (m-1-3);
\end{tikzpicture}
\end{center}
where the differential is given by
\begin{center}
\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=3em,
column sep=5em,
text height=1.5ex, text depth=0.25ex
]
{ d_1 = \begin{bmatrix}xy & yz & xz\end{bmatrix} & \text{and} & d_2 = \begin{bmatrix}
-z & -z \\ x & 0 \\ 0 & y \end{bmatrix}. \\
};
\end{tikzpicture}
\end{center}
In particular, the resolution is minimal and so $I$ is a Lyubeznik ideal.
\end{example}
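The rooted sets in this example can be enumerated mechanically. A small Python sketch (ours; monomials as exponent vectors) recovers exactly the list above:

```python
from itertools import combinations

# Generators of I = (xy, yz, xz), ordered m1 < m2 < m3, as exponent vectors.
gens = {1: (1, 1, 0), 2: (0, 1, 1), 3: (1, 0, 1)}

def lcm(J):
    return tuple(max(c) for c in zip(*[gens[j] for j in J]))

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def pi(J):  # Lyubeznik rooting map: least generator dividing lcm(J)
    return min(i for i, mi in gens.items() if divides(mi, lcm(J)))

def rooted(J):  # every non-empty subset must be unbroken
    return all(pi(B) in B
               for r in range(1, len(J) + 1)
               for B in combinations(J, r))

rooted_sets = [J for r in range(1, 4)
               for J in combinations((1, 2, 3), r) if rooted(J)]
print(rooted_sets)  # [(1,), (2,), (3,), (1, 2), (1, 3)]
```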
We point out that the class of rooted rings is fairly general. It includes, for example, monomial ideals whose lcm-lattice is a geometric lattice, as well as matroid ideals of modular matroids \cite{novik2002}. The inclusion of Lyubeznik rings in rooted rings is strict, since not every rooting map arises from a total order on the monomial generators, as Example 4.1 of \cite{bjornerziegler1991} shows. Finally, not every monomial ring is rooted. Let $I$ be the ideal with monomial generators
\begin{center}
\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=0.5em,
column sep=3em,
text height=1.5ex, text depth=0.25ex
]
{x_1x_4x_5x_6 & x_2x_4x_5x_6 & x_3x_4x_5x_6 & x_2x_4x_5x_7 & x_3x_4x_5x_7 \\
x_1x_3x_5x_7 & x_1x_2x_4x_7 & x_1x_4x_6x_7 & x_1x_5x_6x_7 & x_3x_4x_6x_7 \\
& x_2x_5x_6x_7 & x_2x_3x_6x_7 & x_1x_2x_3x_7 \\
};
\end{tikzpicture}
\end{center}
and let $F$ denote the minimal free resolution. As is shown in \cite{reinerwelker2001}, the matrices of the differential of $F$ cannot be chosen in $\{0, \pm 1 \}$ and consequently $F$ cannot be supported on \emph{any} simplicial complex and hence, in particular, not on a complex $RC(L,\pi)$ coming from a rooting map $\pi$.
\section{$A_{\infty}$-algebras}
In this section we will discuss some basic aspects of the theory of $A_{\infty}$-algebras. The notion was first introduced by Stasheff \cite{stasheff1963} in the context of algebraic topology.
Since their introduction $A_{\infty}$-algebras have found applications in various branches of mathematics such as geometry \cite{getzlerjones1990}, algebra \cite{stasheff1992} and mathematical physics \cite{kontsevich1995}, \cite{mccleary1999}.
Though the following section aims to be self-contained, a more extensive introduction can be found in \cite{keller2001}. The exposition below follows that of \cite{lupalmieriwuzhang2009}.\\
In what follows all signs are determined by the \emph{Koszul sign convention}
\begin{equation}
\label{koszulsignconvention}
(f \otimes g) (x \otimes y) = (-1)^{\vert g \vert \cdot \vert x \vert} fx \otimes gy.
\end{equation}
\begin{definition}
Let $R$ be a commutative ring and $A = \oplus A_n$ a $\Z$-graded free $R$-module. An $A_{\infty}$-algebra structure on $A$ consists of maps $\mu_n\colon A^{\otimes n} \to A$ for each $n \geq 1$ of degree $n-2$ satisfying the \emph{Stasheff identities}
\begin{equation}
\label{stasheffidentities}
\sum (-1)^{r+st} \mu_u(1^{\otimes r} \otimes \mu_s \otimes 1^{\otimes t}) = 0
\end{equation}
where the sum runs over all decompositions $n=r+s+t$ with $r,t \geq 0$, $s \geq 1$ and $u=r+t+1$.
\end{definition}
Observe that when applying (\ref{stasheffidentities}) to an element, additional signs appear because of the Koszul sign convention (\ref{koszulsignconvention}). In the special case when $\mu_n=0$ for all $n \geq 3$, the map $\mu_2$ is strictly associative and so $A$ is a differential graded algebra with differential $\mu_1$ and multiplication $\mu_2$. An $A_{\infty}$-algebra $A$ is called \emph{strictly unital} if there exists an element $1 \in A$ that is a unit for $\mu_2$ and such that for all $n \neq 2$
$$\mu_n(a_1 \otimes \cdots \otimes a_n) = 0$$
whenever $a_i=1$ for some $i$.
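The combinatorics of the identities (\ref{stasheffidentities}) can be made concrete by listing the decompositions $n=r+s+t$. The $n$-th identity has $n(n+1)/2$ summands, and for $n=3$ the two terms not involving $\mu_1$ or $\mu_3$ are $+\mu_2(\mu_2 \otimes 1)$ and $-\mu_2(1 \otimes \mu_2)$, which is why $\mu_2$ is associative whenever the higher operations vanish. A quick Python sketch of this bookkeeping (ours, purely illustrative):

```python
def stasheff_terms(n):
    """All (r, s, t, sign) with n = r + s + t, s >= 1 and r, t >= 0."""
    return [(r, s, n - r - s, (-1) ** (r + s * (n - r - s)))
            for s in range(1, n + 1) for r in range(n - s + 1)]

# The n-th identity has n(n+1)/2 summands.
for n in range(1, 7):
    assert len(stasheff_terms(n)) == n * (n + 1) // 2

# n = 3: the two mu_2-only terms carry opposite signs (associativity).
signs = {(r, s, t): sign for r, s, t, sign in stasheff_terms(3)}
print(signs[(0, 2, 1)], signs[(1, 2, 0)])  # 1 -1
```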
The notion of a morphism between $A_{\infty}$-algebras will also be needed.
\begin{definition}
Let $(A, \mu_n)$ and $(B,\overline{\mu}_n)$ be $A_{\infty}$-algebras. A \emph{morphism} of $A_{\infty}$-algebras (or $A_{\infty}$\emph{-morphism}) $f\colon A \to B$ is a family of linear maps
\[
f_n\colon A^{\otimes n} \to B
\]
of degree $n-1$ satisfying the \emph{Stasheff morphism identities}
\begin{equation}
\label{stasheffmorphismidentities}
\sum (-1)^{r+st}f_u(1^{\otimes r} \otimes \mu_s \otimes 1^{\otimes t}) = \sum (-1)^w \overline{\mu}_q(f_{i_1} \otimes f_{i_2} \otimes \cdots \otimes f_{i_q})
\end{equation}
for every $n \geq 1$. The first sum runs over all decompositions $n=r+s+t$ with $s \geq 1$ and $r,t \geq 0$ where $u=r+t+1$. The second sum runs over all $1 \leq q \leq n$ and all decompositions $n = i_1 + i_2 + \cdots + i_q$ with all $i_s \geq 1$.
The sign on the right-hand side of (\ref{stasheffmorphismidentities}) is given by
\[
w = \sum_{p=1}^{q-1}(q-p)(i_p-1).
\]
If $A$ and $B$ are strictly unital, an $A_{\infty}$-morphism is also required to satisfy $f_1(1) = 1$ and
\[
f_n(a_1 \otimes \cdots \otimes a_n) = 0
\]
if $n \geq 2$ and $a_i = 1$ for some $i$.
\end{definition}
A morphism $f$ is called a \emph{quasi-isomorphism} if $f_1$ is a quasi-isomorphism in the usual sense.
Let $A$ be an $A_{\infty}$-algebra. Then its homology $HA$ is an associative algebra. A crucial result relating the $A_{\infty}$-algebra $A$ and its homology algebra $HA$ is the \emph{homotopy transfer theorem}.
\begin{theorem}[Homotopy Transfer Theorem, \cite{kadeishvili1980}, see also \cite{merkulov1999}]
\label{homotopytransfertheorem}
Let $(A, \mu_n)$ be an $A_{\infty}$-algebra over a field and let $HA$ be its homology algebra.
There exists an $A_{\infty}$-algebra structure $\mu'_n$ on $HA$ such that
\begin{enumerate}
\item $\mu'_1 = 0$, $\mu'_2 = H(\mu_2)$ and the higher $\mu'_n$ are determined by $\mu_n$
\item there exists an $A_{\infty}$-quasi-isomorphism $HA \to A$ lifting the identity morphism of $HA$.
\end{enumerate}
Moreover, this $A_{\infty}$-structure is unique up to isomorphism of $A_{\infty}$-algebras.
\end{theorem}
An explicit way of constructing $A_{\infty}$-structures on the homology of a dga is due to Merkulov \cite{merkulov1999} and will be discussed next.
\begin{definition}
Let $A$ be a chain complex and $B \subseteq A$ a subcomplex. A \emph{transfer diagram} is a diagram of the form
\begin{equation}
\label{transferdiagram}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\matrix(m)[matrix of math nodes,
row sep=3em, column sep=2.8em,
text height=1.5ex, text depth=0.25ex]
{B &A\\};
\path[->]
(m-1-1) edge [bend left=35] node[yshift=1.5ex] {$i$} (m-1-2)
(m-1-2) edge [bend left=35] node[yshift=-1.5ex] {$p$} (m-1-1)
(m-1-2) edge [loop right, in=35,out=-35,looseness=5, min distance=10mm] node {$\phi$} (m-1-2)
;
\end{tikzpicture}
\end{equation}
where $pi = 1_B$ and $ip - 1 = d \phi + \phi d$.
\end{definition}
Some authors use the term strong deformation retract for what we call a transfer diagram. Let $(A,d)$ be a dga and let $B$ be a subcomplex of $A$ such that there exists a transfer diagram as in \eqref{transferdiagram}. Let $\cdot$ denote the product on $A$. Define linear maps $\lambda_n\colon A^{\otimes n} \to A$ as follows. First, put $\lambda_2(a_1,a_2) = a_1 \cdot a_2$ and, for $n \geq 3$, set
\begin{equation}
\label{merkulovlambda}
\lambda_n = \sum_{\substack{s+t = n \\ s,t \geq 1}} (-1)^{s+1} \lambda_2 (\phi \lambda_s, \phi \lambda_t)
\end{equation}
where, by convention, $\phi \lambda_1$ is taken to act as $-\operatorname{id}$.
Now, define a second series of maps $\mu_n\colon B^{\otimes n} \to B$ by setting $\mu_1 = d$ and, for $n \geq 2$,
\begin{equation}
\label{merkulovmun}
\mu_n = p \circ \lambda_n \circ i^{\otimes n}.
\end{equation}
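Unwinding the recursion (\ref{merkulovlambda}) (with the convention that $\phi\lambda_1$ acts as $-\operatorname{id}$), $\lambda_n$ expands into one $\lambda_2$-term for each planar binary tree with $n$ leaves, so the number of summands is the Catalan number $C_{n-1}$. A quick count (ours, purely illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def terms(n):
    """Number of lambda_2-terms in the fully expanded lambda_n."""
    if n <= 2:
        return 1  # lambda_2 is a single product; phi*lambda_1 acts as -id
    # the recursion lambda_n = sum over s+t=n of lambda_2(phi lambda_s, phi lambda_t)
    return sum(terms(s) * terms(n - s) for s in range(1, n))

print([terms(n) for n in range(2, 8)])  # [1, 2, 5, 14, 42, 132]
```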
The following theorem will be crucial in the remainder of the paper.
\begin{theorem}[\cite{merkulov1999}, Theorem 3.4]
\label{merkulovtheorem}
Let $(A,d)$ be a dga and $B$ a subcomplex of $A$ such that there exists a transfer diagram of the form \eqref{transferdiagram}. Then the maps $\mu_n$ defined in (\ref{merkulovmun}) give the structure of an $A_{\infty}$-algebra on $B$.
\end{theorem}
\section{$A_{\infty}$-resolutions and the Golod property}
Let $R$ be a monomial ring. Recall that $R$ is called \emph{Golod} if there is an equality of power series
\begin{equation}
\label{goloddefinition}
P(R) = \frac{(1+t)^m}{1-t(\sum_{j=0}^{\infty}\dim \tor^S_j(R,k) t^j -1)}
\end{equation}
The Golod property admits an equivalent description in terms of Massey products which will be defined next.
\begin{definition}
Let $(A,d)$ be a differential graded algebra. If $a \in A$, we write $\bar{a}$ for $(-1)^{\text{deg}(a) +1}a$. \\
Let $\alpha_1,\alpha_2 \in HA$. The length $2$ \textit{Massey product} $\langle \alpha_1, \alpha_2 \rangle$ is defined to be the product $\alpha_1 \alpha_2$ in the homology algebra $HA$. \\
Let $\alpha_1, \ldots, \alpha_n \in HA$ be homology classes with the property that each length $j-i+1$ Massey product $\langle \alpha_i, \ldots, \alpha_j \rangle$ is defined and contains zero for $i<j$ and $j-i < n-1$. A \emph{defining system} $\{ a_{ij} \}$ consists of
\begin{enumerate}
\item For $i=1,\ldots,n$, representing cycles $a_{i-1,i}$ of the homology class $\alpha_i$.
\item For $j > i+1$, elements $a_{ij}$ such that
$$da_{ij} = \sum_{i<k<j} \bar{a}_{ik}a_{kj}.$$
\end{enumerate}
Note that the existence is guaranteed by the condition that $\langle \alpha_i, \ldots, \alpha_j \rangle$ is defined and contains zero for $i<j$ and $j-i < n-1$.
The length $n$ \textit{Massey product} $\langle \alpha_1, \ldots, \alpha_n \rangle$ is defined as the set
$$ \langle \alpha_1, \ldots, \alpha_n \rangle = \{ [\sum_{0<i<n} \bar{a}_{0i} a_{in}]\mid \{ a_{ij} \} \mbox{ is a defining system } \} \subseteq H_{s+n-2}$$
where $s = \sum_{i=1}^n \deg \alpha_i$.
\end{definition}
A Massey product $\langle \alpha_1, \ldots, \alpha_n \rangle$ is said to be \emph{trivial} if it contains zero. The \emph{Koszul homology} of a monomial ring $R$ is $H(R) = \tor^S(R,k)$. The Golod property and Massey products are related by the following theorem.
\begin{theorem}[\cite{golod1978}, see also Section 4.2 of \cite{gulliksenlevin1969}]
\label{golodiffmasseytrivial}
Let $R$ be a monomial ring. Then $R$ is Golod if and only if all Massey products on the Koszul homology $\tor^S(R,k)$ are trivial.
\end{theorem}
Following \cite{katthan2016}, we will say that a dga $A$ satisfies \emph{condition} $(B_r)$ if every Massey product of length $k \leq r$ is defined and contains only zero. Recall the following lemma.
\begin{lemma}[\cite{may1969}, Proposition 2.3]
\label{brimpliesuniquemassey}
Let $A$ be a dg algebra satisfying $(B_{r-1})$. Then $\langle a_1, \ldots, a_r \rangle$ is defined and contains only one element for any choice $a_1,\ldots, a_r \in H(A)$.
\end{lemma}
Let $R$ be a monomial ring and let $K_S$ be the Koszul resolution of the base field $k$ over $S$. The \emph{Koszul dga} $K_R$ of $R$ is defined as $K_R = R \otimes_S K_S$. The Koszul dga and the Taylor resolution are related by a zig-zag of dga quasi-isomorphisms
\begin{center}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\matrix(m)[matrix of math nodes,
row sep=3em, column sep=3em,
text height=1.5ex, text depth=0.25ex]
{T \otimes_S k &T \otimes_S K_S & R \otimes_SK_S = K_R\\};
\path[->]
(m-1-2) edge (m-1-1)
(m-1-2) edge (m-1-3)
;
\end{tikzpicture}
\end{center}
Consequently, Massey products on $\tor^S(R,k)$ can be computed using either $K_R$ or $T \otimes_S k$. Again following \cite{katthan2016}, we say that a monomial ring $R$ satisfies $(B_r)$ if the dga $K_R$ of $R$ satisfies $(B_r)$.
\begin{lemma}
Let $R$ be a monomial ring. Then $R$ is Golod if and only if $R$ satisfies condition $(B_r)$ for all $r \in \N$.
\end{lemma}
\begin{proof}
It is clear that if $R$ satisfies condition $(B_r)$ for every $r$ then $R$ is Golod. Conversely, suppose that $R$ is Golod.
We proceed by induction on $r$. The case $r=2$ is trivial. So assume $R$ satisfies $(B_{r-1})$.
By Lemma \ref{brimpliesuniquemassey}, the Massey product $\langle a_1, \ldots, a_r \rangle$ is defined and contains only one element for any choice $a_1,\ldots, a_r \in \tor^S(R,k)$.
Since $R$ is Golod, it follows by Theorem \ref{golodiffmasseytrivial} that this element must be zero and so $R$ satisfies $(B_r)$.
\end{proof}
In general it is very hard to study Massey products directly. However, $A_{\infty}$-algebras provide a systematic way of studying Massey products in view of the following theorem.
\begin{theorem}[\cite{lupalmieriwuzhang2009}, Theorem 3.1]
\label{ainfinitymasseyproductsaremasseyproducts}
Let $A$ be a differential graded algebra. Up to a sign, the higher $A_{\infty}$-multiplications $\mu'_n$ on $HA$ from Theorem \ref{homotopytransfertheorem} give Massey products.
That is to say, if $\alpha_1, \ldots, \alpha_n \in HA$ are homology classes such that the Massey product $\langle \alpha_1, \ldots, \alpha_n \rangle$ is defined then
$$\pm \mu'_n(\alpha_1 \otimes \cdots \otimes \alpha_n) \in \langle \alpha_1, \ldots, \alpha_n \rangle.$$
\end{theorem}
A map of $S$-modules $f\colon M \to N$ is said to be minimal if $f \otimes 1 \colon M \otimes_S k \to N \otimes_S k$ is zero. It is readily verified that $f$ is minimal if and only if $f$ maps into $(x_1, \ldots, x_m)N$.
Using Theorem \ref{ainfinitymasseyproductsaremasseyproducts}, we can describe under what conditions the Massey products on $\tor^S(R,k)$ vanish.
\begin{theorem}
Let $R = S/I$ be a monomial ring with minimal free resolution $F$. Let $r \in \N$ and let $\mu_n$ be an $A_{\infty}$-structure on $F$ such that $F \otimes_S k$ and $K_R$ are quasi-isomorphic as $A_{\infty}$-algebras. Then $R$ satisfies $(B_r)$ if and only if $\mu_k$ is minimal for all $k \leq r$.
\end{theorem}
\begin{proof}
Since $\mu_n$ is an $A_{\infty}$-structure on $F$, it follows that $\mu_n \otimes 1$ is an $A_{\infty}$-structure on $F \otimes_S k$. Now, assume $\mu_k$ is minimal for all $k \leq r$.
Since $\tor^S(R,k)$ is the homology of the $A_{\infty}$-algebra $F \otimes_S k$, the homotopy transfer theorem (Theorem \ref{homotopytransfertheorem}) implies that $\tor^S(R,k)$ inherits an $A_{\infty}$-structure $\mu'_n$.
Since $F$ is minimal, $\tor^S(R,k)$ is isomorphic to $F \otimes_S k$ and we can take $\mu'_n = \mu_n \otimes 1$. Let $k \leq r$ and let $\alpha_1, \ldots, \alpha_k \in \tor^S(R,k)$ be such that the Massey product $\langle \alpha_1,\ldots, \alpha_k \rangle$ is defined. By Theorem \ref{ainfinitymasseyproductsaremasseyproducts} we have
$$\pm (\mu_k \otimes 1)(\alpha_1, \ldots, \alpha_k) \in \langle \alpha_1, \ldots, \alpha_k \rangle.$$
Since $\mu_k$ is minimal, we have $(\mu_k \otimes 1)(\alpha_1, \ldots, \alpha_k)=0$. Therefore, $\langle \alpha_1,\ldots, \alpha_k \rangle$ is trivial and so $R$ satisfies $(B_r)$. \\
Conversely, assume that $R$ satisfies $(B_r)$. We need to show that $\mu_k$ is minimal for all $k \leq r$.
For $k=2$, we have $(\mu_2 \otimes 1)(a_1, a_2) = a_1a_2$, and the product on $\tor^S(R,k)$ vanishes since $R$ satisfies $(B_r)$; hence $\mu_2$ is minimal.
Now, let $3 \leq k \leq r$. Since $R$ satisfies $(B_k)$, for all $a_1, \ldots, a_k$ the Massey product $\langle a_1, \ldots, a_k \rangle$ is defined and contains only zero.
Since $(\mu_k \otimes 1)(a_1, \ldots, a_k) \in \langle a_1, \ldots, a_k \rangle$ we have $(\mu_k \otimes 1)(a_1, \ldots, a_k) = 0$ for all $a_1, \ldots, a_k$.
Consequently, $\mu_k$ is minimal as required.
\end{proof}
\begin{corollary}
\label{munminimalimpliesgolod}
Let $R = S/I$ be a monomial ring with minimal free resolution $F$. Let $\mu_n$ be an $A_{\infty}$-structure on $F$ such that $F \otimes_S k$ and $K_R$ are quasi-isomorphic as $A_{\infty}$-algebras. Then $R$ is Golod if and only if $\mu_n$ is minimal for all $n \geq 1$.
\end{corollary}
Corollary \ref{munminimalimpliesgolod} was first proved in \cite{burke2015} using different methods.
The following immediate corollary to Corollary \ref{munminimalimpliesgolod} is well-known, see for example Proposition 5.2.4(4) of \cite{avramov2010} where it is proved using different methods.
\begin{corollary}[\cite{avramov2010}, Proposition 5.2.4(4)]
\label{dgagolodproducttrivial}
Let $R = S/I$ be a monomial ring with minimal free resolution $F$. If $F$ admits the structure of a dga, then $R$ is Golod if and only if the product on $\tor^S(R,k)$ vanishes.
\end{corollary}
\section{Homotopy transfer on the Taylor resolution}
Corollary \ref{munminimalimpliesgolod} implies that monomial rings with a minimal dg algebra resolution are Golod if and only if the product on $\tor^S(S/I,k)$ vanishes.
However, there exist monomial rings whose minimal resolution does not admit the structure of a dg algebra \cite{avramov1981}. On the other hand, every free resolution of a monomial ring $S/I$ admits an $A_{\infty}$-structure \cite{burke2015}.
In general, it is not clear how to obtain an explicit description of such an $A_{\infty}$-structure. Instead of considering general $A_{\infty}$-structures on resolutions, we will consider only those that arise as a deformation of the dg algebra structure on the Taylor resolution. To make this idea precise we will use rooting maps to construct transfer diagrams on the Taylor resolution.
In that case Theorem \ref{merkulovtheorem} tells us how to construct an $A_{\infty}$-structure to which we may apply Corollary \ref{munminimalimpliesgolod}. \\
Let $\pi$ be a rooting map and let $F$ be the free resolution of $S/I$ associated to $RC(L,\pi)$. Recall that $F_n$ is the free $S$-module on $u_J$ where $J \in RC(L, \pi)$ with $\vert J \vert =n$.
The remainder of this section is devoted to computing an explicit $A_{\infty}$-algebra structure on $F$.
Let $T$ denote the Taylor resolution of $S/I$. We will write $d$ for the differential of $F$ whereas $\partial$ will be reserved for the ``simplicial'' differential, i.e.
$$\partial u_J = \sum_{i=1}^{\vert J \vert} (-1)^{i+1} u_{J^i}$$
on a basis set $u_J$ of $F$.
If $u_J$ is a basis set of $F$ we define $[u_J] = \frac{1}{m_J}u_J$. Let $u_{J_1}, \ldots, u_{J_n}$ be rooted sets and $\alpha_1, \ldots, \alpha_n \in S$.
Then for $u = \sum \alpha_k u_{J_k}$, we set $[u] = \sum \frac{\alpha_k}{m_{J_k}} u_{J_k}$.
The following lemma will be used extensively.
\begin{lemma}
For any basis set $u_J$ of $F$ we have $d[u_J] = [\partial u_J]$.
\end{lemma}
\begin{proof}
We have
\begin{equation*}
\begin{split}
d[u_J] &= \frac{1}{m_J} du_J =\frac{1}{m_J} \sum_{i=1}^{\vert J \vert} (-1)^{i+1} \frac{m_J}{m_{J^i}} u_{J^i}\\
&= \sum_{i=1}^{\vert J \vert} (-1)^{i+1} \frac{1}{m_{J^i}} u_{J^i} = \sum_{i=1}^{\vert J \vert} (-1)^{i+1} [u_{J^i}] \\
&=[\partial u_J].
\end{split}
\end{equation*}
\end{proof}
Let $\pi$ be a rooting map. For $u_J \in T$, define $\pi(u_J) = u_i$ if $\pi(m_J) = m_i$.
Define a map $p'\colon T \to F$ as follows. Let $u \in T$ and write $u = u_{i_1} \cdots u_{i_k}$.
For $q=1,\ldots,k$ define $I_q = \lbrace i_1,\ldots,i_q \rbrace$. For a permutation $\sigma \in S_k$, put $\sigma I_q = \{ i_{\sigma(1)}, \ldots ,i_{\sigma(q)} \}$. We define
\begin{equation}
\label{definitionp'}
p'(u) = \sum_{\sigma \in S_k} \sgn(\sigma) \pi(u_{\sigma I_1})\pi(u_{\sigma I_2}) \cdots \pi(u_{\sigma I_k}).
\end{equation}
Geometrically, the map $p'$ can be thought of as similar to the barycentric subdivision of a simplex. For example, if $u_{i_1,i_2} \in T$ and we think of $\pi(u_{i_1,i_2})$ as its barycenter then $p'$ replaces $u_{i_1,i_2}$ by its barycentric subdivision
$$p'(u_{i_1,i_2}) = u_{i_2}\pi(u_{i_1,i_2}) - u_{i_1}\pi(u_{i_1,i_2}).$$
In the same way, given $u_{i_1,i_2,i_3} \in T$ the right hand terms in
$$p'(u_{i_1,i_2,i_3}) = \sum_{\sigma \in S_3} \sgn(\sigma) \pi(u_{i_{\sigma(1)}})\pi(u_{i_{\sigma(1)},i_{\sigma(2)}})\pi(u_{i_{\sigma(1)},i_{\sigma(2)}, i_{\sigma(3)}})$$
are precisely the six constituent triangles in the barycentric subdivision of a $2$-simplex. Before proceeding, we need to verify that $\im(p') \subseteq F$. Let $\sigma \in S_k$, we need to show that
$$\pi(u_{\sigma I_1})\pi(u_{\sigma I_2}) \cdots \pi(u_{\sigma I_k})$$
is rooted. Since $u_{\sigma I_1} \subseteq u_{\sigma I_2} \subseteq \cdots \subseteq u_{\sigma I_k}$, it follows that for all $j_1, \ldots, j_k$ we have
$$\pi ( \pi(u_{\sigma I_{j_1}}), \pi(u_{\sigma I_{j_2}}),\ldots,\pi(u_{\sigma I_{j_k}})) = \pi(u_{\sigma I_{j_k}}).$$
Therefore, $\pi(u_{\sigma I_1})\pi(u_{\sigma I_2}) \cdots \pi(u_{\sigma I_k})$
is rooted and so $\im(p') \subseteq F$.
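The expansion (\ref{definitionp'}) can be computed mechanically. The following sketch (not from the paper) represents squarefree monomials as sets of variable indices and takes $\pi$ to be the least-index divisor, a standard example of a rooting map; it enumerates the terms of $p'(u_I)$:

```python
from itertools import permutations

# Illustrative sketch: squarefree monomials are frozensets of variable
# indices, and the rooting map pi picks the generator of least index
# dividing the lcm of a subset of generators.
def make_pi(gens):
    def pi(subset):
        lcm = frozenset().union(*(gens[i] for i in subset))
        return min(i for i, g in enumerate(gens) if g <= lcm)
    return pi

def perm_sign(sigma):
    # Sign of a permutation via its inversion count.
    inv = sum(1 for a in range(len(sigma)) for b in range(a + 1, len(sigma))
              if sigma[a] > sigma[b])
    return -1 if inv % 2 else 1

def p_prime(indices, pi):
    """List the terms of p'(u_I) as (sign, term) pairs, where a term is the
    tuple (pi(sigma I_1), ..., pi(sigma I_k)) of generator indices."""
    k = len(indices)
    terms = []
    for sigma in permutations(range(k)):
        term = tuple(pi(frozenset(indices[sigma[q]] for q in range(p + 1)))
                     for p in range(k))
        terms.append((perm_sign(sigma), term))
    return terms
```

Terms containing a repeated factor vanish in the exterior algebra, so for a two-element set this recovers the barycentric expansion above up to sign conventions.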
\begin{lemma}
\label{pprimechainmappartial}
The map $p'$ is a chain map with respect to the differential $\partial$.
\end{lemma}
\begin{proof}
It is sufficient to prove the result for basis elements $u_I \in T$. Write $I = \lbrace i_1, \ldots, i_k \rbrace$.
We first show that
$$\partial p'(u_I) = \sum_{\sigma \in S_k} (-1)^{k+1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}).$$
We have
\begin{equation*}
\begin{split}
\partial p'(u_I) &= \sum_{\sigma \in S_k} \sgn(\sigma) \partial \big( \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_k})\big) \\
&= \sum_{\sigma \in S_k} \sum_{j=1}^k (-1)^{j+1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \widehat{\pi(u_{\sigma I_j})} \cdots \pi(u_{\sigma I_k}).
\end{split}
\end{equation*}
Now, fix some $j<k$ and let $\tau_j$ be the transposition $(\sigma(j), \sigma(j+1))$. Then the summands indexed by $\sigma$ and $\tau_j\sigma$ cancel.
Indeed, if $q < j$ then $\tau_j$ acts as the identity on $\sigma I_q$ and so $u_{\sigma I_q} = u_{\tau_j \sigma I_q}$. On the other hand, if $q \geq j+1$ then the underlying sets of $\sigma I_q$ and $\tau_j \sigma I_q$ are the same. Since $\pi(u_J)$ depends only on the set $J$ and not on the ordering we have
$$\pi(u_{\sigma I_q}) = \pi(u_{\tau_j \sigma I_q})$$
and so the summands indexed by $\sigma$ and $\tau_j\sigma$ cancel. Note that since the map $\sigma \to \tau_j \sigma$ is an involution these permutations cancel in pairs. Therefore, we obtain
$$\partial p'(u_I) = \sum_{\sigma \in S_k} (-1)^{k+1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}).$$
For $\sigma \in S_k$, write
$$G_{\sigma} = \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}})$$
and so
\begin{equation}
\label{gsigma}
\partial p'(u_I) = \sum_{\sigma \in S_k} (-1)^{k+1} \sgn(\sigma) G_{\sigma}.
\end{equation}
Next, we compute $p' \partial (u_I)$. For $j \in \{ 1,\ldots, k \}$ and $\sigma \in S_{k-1}$, set $I_q(j) = I_q \setminus \{ i_j \}$ and
$$F_{\sigma, j} = \pi( u_{\sigma I_1(j)} ) \cdots \pi( u_{\sigma I_{j-1}(j)} )\pi( u_{\sigma I_{j+1}(j)} ) \cdots \pi( u_{\sigma I_k(j)} ).$$
Then
\begin{equation}
\label{fsigmaj}p' \partial u_I = \sum_{j=1}^k (-1)^{j+1} p'(u_{I_k(j)}) = \sum_{j=1}^k \sum_{\sigma \in S_{k-1}} (-1)^{j+1} \sgn (\sigma) F_{\sigma,j}.
\end{equation}
Given $j \in \{ 1,\ldots, k \}$, we can embed $S_{k-1}$ into $S_k$ by fixing $j$. Therefore, we have
$$p' \partial u_I = \sum_{j=1}^k \sum_{\sigma \in S_{k-1}} (-1)^{j+1} \sgn (\sigma) F_{\sigma,j} = \sum_{j=1}^k \sum_{\substack{\sigma \in S_k \\ \sigma(j) = j}}(-1)^{j+1} \sgn (\sigma) F_{\sigma,j}.$$
Now, fix $j \in \{ 1,\ldots, k \}$ and fix $\sigma \in S_k$ such that $\sigma(j) = j$. Define $\rho$ to be the cycle $(j \cdots k)$ and let $\tau = \sigma \rho$. Then we have $G_{\tau} = F_{\sigma,j}$ and
$$(-1)^{k+1} \sgn (\tau) G_{\tau} = (-1)^{2k+j+1} \sgn(\sigma) G_{\sigma \rho} = (-1)^{j+1} \sgn (\sigma) F_{\sigma,j}.$$
Since both sums in \eqref{gsigma} and \eqref{fsigmaj} have $k!$ terms, it follows that they are equal.
\end{proof}
Let $i\colon F \to T$ denote the inclusion.
\begin{lemma}
\label{piuip=ipu}
For all $u \in T$, we have
$$\pi(u) ip'\partial u = ip' u.$$
\end{lemma}
\begin{proof}
It is sufficient to prove the result for basis elements $u_I \in T$. Write $I = \lbrace i_1, \ldots, i_k \rbrace$. As in the proof of Lemma \ref{pprimechainmappartial}, we have
$$\partial p'(u_I) = \sum_{\sigma \in S_k} (-1)^{k+1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}).$$
Since $p'$ is a chain map by Lemma \ref{pprimechainmappartial}, we have
\begin{equation*}
\begin{split}
\pi(u_I) ip' \partial u_I &= \pi(u_I) \partial ip'(u_I) \\
&= \pi(u_I) \sum_{\sigma \in S_k} (-1)^{k+1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}) \\
&= \sum_{\sigma \in S_k} (-1)^{k+1+k-1} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}) \pi(u_I)\\
&= \sum_{\sigma \in S_k} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_{k-1}}) \pi(u_{\sigma I_k})\\
&= ip'(u_I)
\end{split}
\end{equation*}
where we have used that $\pi(u_I) = \pi(u_{I_k}) = \pi(u_{\sigma I_k})$.
\end{proof}
\begin{lemma}
\label{ip'chainhomotopyto1}
The composition $ip'$ is chain homotopic to $1_T$ as chain maps $(T,\partial) \to (T,\partial)$.
\end{lemma}
\begin{proof}
Define $\phi'\colon T \to T$ by induction as follows. Set $\phi'_0 = \phi'_1=0$ and
$$\phi'_2(u_{i_1}u_{i_2}) = \pi(u_{i_1,i_2}) u_{i_1} u_{i_2}.$$
For $k>2$, write $u = u_{i_1} \cdots u_{i_k}$ and define
$$\phi'_k(u) = \pi(u)\big( u - \phi'_{k-1}(\partial u) \big).$$
We need to show that $1_T - ip' = \partial \phi' + \phi' \partial$. We proceed by induction on $k$.
If $k=1$, there is nothing to prove. If $k=2$, we have
\begin{equation*}
\partial \phi'_2(u_{i_1}u_{i_2}) = \partial(\pi(u_{i_1,i_2}) u_{i_1} u_{i_2}) = u_{i_1}u_{i_2} - \pi(u_{i_1,i_2})u_{i_2} + \pi(u_{i_1,i_2}) u_{i_1} = (1_T - ip')(u_{i_1}u_{i_2}).
\end{equation*}
Now, let $k>2$. Using Lemma \ref{piuip=ipu}, we get
\begin{equation*}
\begin{split}
\partial \phi'_k(u) &= u-\phi'_{k-1} \partial u - \pi(u)(\partial u - \partial \phi'_{k-1} \partial u) \\
&= u-\phi'_{k-1} \partial u - \pi(u) \big( \partial u - \partial u + ip' \partial u + \phi'_{k-2} \partial^2 u \big) \\
&= u - \phi'_{k-1} \partial u - \pi(u) ip'\partial u \\
&= u-ip'u - \phi'_{k-1} \partial u
\end{split}
\end{equation*}
which finishes the proof.
\end{proof}
Define a map $p\colon T \to F$ as follows. For $u_J \in T$, define
\begin{equation}
\label{definitionp}
p(u_J) = m_J [p'(u_J)]
\end{equation}
where $p'$ is the map from (\ref{definitionp'}). Then we have the following theorem.
\begin{theorem}
\label{rootedresolutionsaresdrtaylor}
Let $\pi$ be a rooting map for a monomial ideal $I$ and let $F$ be the resolution of $S/I$ associated to $\pi$. Then there exists a transfer diagram
\begin{center}
\begin{tikzpicture}
\matrix(m)[matrix of math nodes,
row sep=3em, column sep=2.8em,
text height=1.5ex, text depth=0.25ex]
{F &T\\};
\path[->]
(m-1-1) edge [bend left=35] node[yshift=1.5ex] {$i$} (m-1-2)
(m-1-2) edge [bend left=35] node[yshift=-1.5ex] {$p$} (m-1-1)
(m-1-2) edge [loop right, in=35,out=-35,looseness=5, min distance=10mm] node {$\phi$} (m-1-2)
;
\end{tikzpicture}
\end{center}
where $i\colon F \to T$ is the inclusion and $p\colon T \to F$ is the map from (\ref{definitionp}).
\end{theorem}
\begin{proof}
Let $u_J \in T$ and define $\phi$ by $\phi(u_J) = m_J [\phi'(u_J)]$.
Then, using Lemma \ref{ip'chainhomotopyto1}, we have
$$ d \phi(u_J) =m_J d[\phi'(u_J)] = m_J [ \partial \phi'(u_J) ] = m_J [ u_J - ip'u_J - \phi' \partial u_J] = u_J - ipu_J - \phi du_J$$
and so $1_T$ and $ip$ are homotopic. On the other hand, we clearly have $pi = 1_F$ which finishes the proof.
\end{proof}
\section{The Golod property for rooted rings}
Let $R=S/I$ be a rooted ring with rooting map $\pi$ and minimal free resolution $F$. The purpose of this section is to provide necessary and sufficient conditions for $R$ being Golod. Following \cite{jollenbeck2006}, we have the following definition.
\begin{definition}
Let $R=S/I$ be a monomial ring and write $I=(m_1,\ldots,m_r)$. We say that $R$ satisfies the \emph{gcd condition} if for all generators $m_i$ and $m_j$ with $\gcd(m_i,m_j)=1$ there exists a generator $m_k \neq m_i,m_j$ such that $m_k$ divides $\lcm(m_i,m_j)$.
\end{definition}
We have the following lemma where we write $\pi(m_i,m_j)$ for $\pi(\{ m_i,m_j\})$.
\begin{lemma}
\label{gcdpigcd}
Let $R=S/I$ be a rooted ring with rooting map $\pi$. Write $I=(m_1,\ldots,m_r)$. Then $R$ satisfies the gcd condition if and only if $\pi(m_i,m_j) \neq m_i,m_j$ whenever $\gcd(m_i,m_j)=1$.
\end{lemma}
\begin{proof}
First, assume that $\pi(m_i,m_j) \neq m_i,m_j$ whenever $\gcd(m_i,m_j)=1$. Since $\pi(m_i,m_j)$ divides $\lcm(m_i,m_j)$, we can take $m_k = \pi(m_i,m_j)$ and so $R$ satisfies the gcd condition.
Conversely, suppose that $R$ satisfies the gcd condition and take $m_i$ and $m_j$ with $\gcd(m_i,m_j)=1$. For contradiction, assume that $\pi(m_i,m_j) = m_i$. By the gcd condition, there exists some $m_k \neq m_i,m_j$ such that $m_k$ divides $\lcm(m_i,m_j)$. We claim that the set $\{ m_i,m_j, \pi(m_j,m_k) \}$ is rooted. To prove this, we need to verify that every subset is unbroken. Since $\pi(m_i,m_j) = m_i$, it follows immediately that $\{ m_i, m_j \}$ is unbroken. For $\{ m_j, \pi(m_j,m_k) \}$, note that
$$\pi(m_j,m_k) \vert \lcm(m_j, \pi(m_j,m_k)) \vert \lcm(m_j,m_k)$$
and so $\pi(m_j, \pi(m_j,m_k)) = \pi(m_j,m_k)$ as $\pi$ is a rooting map. Therefore, $\{ m_j, \pi(m_j,m_k) \}$ is unbroken. Next, consider $\{ m_i, \pi(m_j,m_k) \}$. Since $\pi(m_i,m_j) = m_i$, we have
$$\pi(m_i,m_j) \vert \lcm(m_i,\pi(m_j,m_k)) \vert \lcm(m_i,m_j)$$
and so $\pi(m_i,\pi(m_j,m_k)) = \pi(m_i,m_j)=m_i$. Consequently, $\{ m_i, \pi(m_j,m_k) \}$ is unbroken. Similarly, we have that $\{ m_i,m_j, \pi(m_j,m_k) \}$ is unbroken as
$$\pi(m_i,m_j) \vert \lcm(m_i,m_j,\pi(m_j,m_k)) \vert \lcm(m_i,m_j)$$
and thus $\pi(m_i,m_j,\pi(m_j,m_k)) = \pi(m_i,m_j)=m_i$. Therefore, $\{ m_i,m_j, \pi(m_j,m_k) \}$ is rooted as claimed.
Let $u = u_iu_j\pi(u_j,u_k)$. Since $\pi(m_j,m_k)$ divides $\lcm(m_i,m_j)$, we have
\begin{equation*}
\begin{split}
du = \frac{\lcm(m_i,m_j)}{\lcm(m_j,\pi(m_j,m_k))}u_j\pi(u_j,u_k) -\frac{\lcm(m_i,m_j)}{\lcm(m_i,\pi(m_j,m_k))}u_i\pi(u_j,u_k) + u_iu_j.
\end{split}
\end{equation*}
Hence, $du \notin (x_1,\ldots,x_m)F$ which is a contradiction as $R$ is rooted. Therefore, $\pi(m_i,m_j) \neq m_i$. Swapping the roles of $i$ and $j$, we see that $\pi(m_i,m_j) \neq m_j$ which finishes the proof.
\end{proof}
The following lemma is straightforward but included for completeness.
\begin{lemma}
\label{plambda2minimalonnondisjoint}
Let $u_I$ and $u_J$ be basis elements of $T$ with the property that $\gcd(m_I,m_J) \neq 1$. Then
$$p\lambda_2(u_I \otimes u_J) \in (x_1,\ldots,x_m)F.$$
\end{lemma}
\begin{proof}
Indeed, we have
\begin{equation*}
p\lambda_2(u_I\otimes u_J) = p\Big(\frac{m_Im_J}{m_{I \cup J}}u_{I \cup J}\Big)
= \frac{m_Im_J}{m_{I \cup J}} p(u_{I \cup J}).
\end{equation*}
By assumption $\frac{m_Im_J}{m_{I \cup J}} \neq 1$ and so the result follows.
\end{proof}
\begin{lemma}
\label{pigcdimpliesgolod}
Let $R$ be a rooted ring. If $R$ is gcd, then $R$ is Golod.
\end{lemma}
\begin{proof}
Let $F$ be the minimal free resolution of $R$. Then by Theorem \ref{rootedresolutionsaresdrtaylor} there is a transfer diagram
\begin{center}
\begin{tikzpicture}
\matrix(m)[matrix of math nodes,
row sep=3em, column sep=2.8em,
text height=1.5ex, text depth=0.25ex]
{F &T\\};
\path[->]
(m-1-1) edge [bend left=35] node[yshift=1.5ex] {$i$} (m-1-2)
(m-1-2) edge [bend left=35] node[yshift=-1.5ex] {$p$} (m-1-1)
(m-1-2) edge [loop right, in=35,out=-35,looseness=5, min distance=10mm] node {$\phi$} (m-1-2)
;
\end{tikzpicture}
\end{center}
where $i\colon F \to T$ is the inclusion and $p\colon T \to F$ is the map from (\ref{definitionp}). By Theorem \ref{merkulovtheorem}, we obtain an $A_{\infty}$-structure $\mu_n$ on $F$.
From Corollary \ref{munminimalimpliesgolod} it follows that it is sufficient to show that each $\mu_n$ is minimal. Recall that $\mu_n = p\lambda_n$ where
$$\lambda_n = \sum_{\substack{s+t=n \\ s,t \geq 1}} (-1)^{s+1} \lambda_2(\phi \lambda_s \otimes \phi \lambda_t).$$
Therefore, it is sufficient to prove that $p\lambda_2$ maps into the maximal ideal. Let $u_I$ and $u_J$ be basis elements of $T$.
We may assume that $\gcd(m_I,m_J) =1$ since otherwise $p\lambda_2(u_I \otimes u_J) \in (x_1,\ldots,x_m)F$ by Lemma \ref{plambda2minimalonnondisjoint}.
Write $I = \lbrace i_1, \ldots, i_k \rbrace$ and $J = \lbrace i_{k+1}, \ldots, i_{n} \rbrace$ where $l = \vert J \vert$ and $n=k+l$.
By definition of $p$ we have
$$p(u_{i_1} \cdots u_{i_n}) = m\Big[\sum_{\sigma \in S_n} \sgn(\sigma) \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_n}) \Big]$$
where $m = \lcm(m_I,m_J) = m_Im_J$ and $u_{\sigma I_p} = u_{i_{\sigma(1)}} \cdots u_{i_{\sigma(p)}}$.
Write
$$\alpha_{\sigma} = \frac{m}{\lcm(\pi(m_{\sigma I_1}), \ldots, \pi(m_{\sigma I_n}))}$$
then
$$p(u_{i_1} \cdots u_{i_n}) = \sum_{\sigma \in S_n} \sgn(\sigma) \alpha_{\sigma} \pi(u_{\sigma I_1}) \cdots \pi(u_{\sigma I_n}).$$
We need to show that $\alpha_{\sigma} \in (x_1,\ldots,x_m)$ for all $\sigma \in S_n$.
Suppose $\alpha_{\sigma} = 1$ for some $\sigma \in S_n$.
Without loss of generality, we may assume $i_{\sigma(1)} \in I$.
Set
$$q = \min \lbrace q' \vert i_{\sigma(q')} \in J \rbrace.$$
By assumption, $\lcm(\pi(m_{\sigma I_1}),\ldots, \pi(m_{\sigma I_n}))$ is divisible by $m_{i_{\sigma(q)}}$.
Since $\gcd(m_{i_{\sigma(q)}}, m_I) = 1$, we have $\gcd(m_{i_{\sigma(q)}}, \pi(m_{\sigma I_k})) =1$ for all $k<q$. Therefore, $\lcm(\pi(m_{\sigma I_q}), \ldots, \pi(m_{\sigma I_n}))$ is still divisible by $m_{i_{\sigma(q)}}$.
We claim that
$$m_{i_{\sigma(q)}} \notin \lbrace \pi(m_{\sigma I_q}), \ldots, \pi(m_{\sigma I_n}) \rbrace.$$
Indeed, assume that $m_{i_{\sigma(q)}} = \pi(m_{\sigma I_s})$ for some $s \geq q$. We have that $\pi(m_{\sigma I_s}) = \pi(m_{i_{\sigma(1)}}, \ldots, m_{i_{\sigma(s)}})$.
Then
$$m_{i_{\sigma(q)}} \vert \lcm(m_{i_{\sigma(1)}},m_{i_{\sigma(q)}}) \vert \lcm(m_{i_{\sigma(1)}}, \ldots, m_{i_{\sigma(s)}})$$
and so $m_{i_{\sigma(q)}} = \pi(m_{i_{\sigma(1)}},m_{i_{\sigma(q)}})$ since $\pi$ is a rooting map.
But by definition of $q$ we have $\gcd(m_{i_{\sigma(1)}},m_{i_{\sigma(q)}})=1$ so this contradicts $R$ being gcd by Lemma \ref{gcdpigcd}.
Therefore
$$m_{i_{\sigma(q)}} \notin \lbrace \pi(m_{\sigma I_q}), \ldots, \pi(m_{\sigma I_n}) \rbrace.$$
Define
$$u = u_{i_{\sigma(q)}} \pi(u_{\sigma I_q})\cdots \pi(u_{\sigma I_n})$$
then we claim that $u$ is in $F$. To see that $u$ is rooted, let $v \subseteq \{ u_{i_{\sigma(q)}}, \pi(u_{\sigma I_q}), \ldots, \pi(u_{\sigma I_n}) \}$.
If $u_{i_{\sigma(q)}} \notin v$ then there is nothing to prove as $\{\pi(u_{\sigma I_q}), \ldots, \pi(u_{\sigma I_n}) \}$ is rooted. So, assume $u_{i_{\sigma(q)}} \in v$. We can write
$$ v = u_{i_{\sigma(q)}} \pi(u_{\sigma I_{q_1}})\cdots \pi(u_{\sigma I_{q_k}})$$
for some $q_i \geq q$. We have
$$ \pi(m_{\sigma I_{q_k}}) \vert m_v \vert m_{\sigma I_{q_k}}$$
and so $\pi(v) = \pi(u_{\sigma I_{q_k}}) \in v$. Hence, $u$ is rooted as claimed. But $du \notin (x_1,\ldots,x_m)F$ since $m_{i_{\sigma(q)}}$ divides $\lcm(\pi(m_{\sigma I_q}),\ldots, \pi(m_{\sigma I_n}))$ which contradicts minimality of $F$.
\end{proof}
We now come to the main theorem of this section.
\begin{theorem}
\label{golodequicupzeroequipigcd}
Let $R$ be a rooted ring. Then the following are equivalent.
\begin{enumerate}
\item The ring $R$ is Golod.
\item The product on $\tor^S(R,k)$ vanishes.
\item The ring $R$ is gcd.
\end{enumerate}
\end{theorem}
\begin{proof}
The implication $1 \Rightarrow 2$ is immediate from the definition and $3 \Rightarrow 1$ follows by Lemma \ref{pigcdimpliesgolod}.
We prove $2 \Rightarrow 3$. Since the product on $\tor^S(R,k)$ is just $\mu_2 \otimes 1$, the product vanishes if and only if $\mu_2$ is minimal.
Let $m_i$ and $m_j$ be generators such that $\gcd(m_i,m_j)=1$. Then
$$\mu_2(u_i, u_j) = \frac{\lcm(m_i,m_j)}{\lcm(\pi(m_i,m_j),m_i)} \pi(u_i,u_j)u_i - \frac{\lcm(m_i,m_j)}{\lcm(\pi(m_i,m_j),m_j)} \pi(u_i,u_j)u_j.$$
If $\pi(m_i,m_j)=m_j$ then
$$\frac{\lcm(m_i,m_j)}{\lcm(\pi(m_i,m_j),m_j)} = 1$$
which contradicts minimality of $\mu_2$ and so $\pi(m_i,m_j) \neq m_j$.
By the same argument, $\pi(m_i,m_j) \neq m_i$ and thus $R$ is gcd by Lemma \ref{gcdpigcd}.
\end{proof}
\begin{remark}
The equivalence between the second and third statement of Theorem \ref{golodequicupzeroequipigcd} is known. See for example Lemma 2.4 of \cite{katthan2015}.
\end{remark}
\begin{example}
Let $S=k[x_1,\ldots,x_9]$ and let $I$ be the ideal
$$(x_2x_5x_8, x_2x_3x_8x_9,x_5x_6x_7x_8,x_1x_2x_4x_5,x_1x_2x_3,x_4x_5x_6,x_7x_8x_9).$$
Label the generators by $u_1, \ldots, u_7$ and order them by $u_1 \prec u_2 \prec \cdots \prec u_7$. Let $L$ be the Lyubeznik resolution with respect to the ordering $\prec$.
Then $L$ is easily seen to be minimal. Plainly, $I$ satisfies the gcd condition and so $S/I$ is Golod.
\end{example}
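The gcd condition in such examples can also be verified mechanically. The following sketch (not from the paper) encodes monomials as exponent vectors:

```python
def m_gcd_is_one(a, b):
    """gcd(a, b) = 1 iff the supports of the exponent vectors are disjoint."""
    return all(x == 0 or y == 0 for x, y in zip(a, b))

def m_lcm(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def satisfies_gcd_condition(gens):
    """For every pair m_i, m_j with gcd(m_i, m_j) = 1, some third generator
    m_k must divide lcm(m_i, m_j)."""
    n = len(gens)
    for i in range(n):
        for j in range(i + 1, n):
            if not m_gcd_is_one(gens[i], gens[j]):
                continue
            l = m_lcm(gens[i], gens[j])
            if not any(divides(gens[k], l)
                       for k in range(n) if k not in (i, j)):
                return False
    return True
```

Applied to the seven generators of the example above it returns \texttt{True}, whereas for an ideal such as $(x_1,x_2)$ it returns \texttt{False}, since no third generator divides $x_1x_2$.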
\section*{Acknowledgements}
The author would like to thank his PhD supervisor Jelena Grbi\'c for advice and guidance,
Fabio Strazzeri and Francisco Belch\'{\i} for useful discussions
and Lukas Katth\"an, Bernhard K\"ock, Taras Panov and the referee for helpful comments on an earlier version of this manuscript.
\bibliographystyle{plain}
\section{Related Work}
Many challenging tasks in computer vision such as single image depth prediction or semantic image segmentation require models for dense prediction, since they either involve regressing quantities pixelwise or classifying the pixels. Most current state-of-the-art methods for dense prediction tasks are based on end-to-end trainable deep learning architectures. Early methods segment the image into regions such as superpixels in a bottom-up fashion. Predictions for the regions are determined based on deep neural network features \cite{misc:yan15cvpr:object,misc:farabet13pami:hierfeatscenelabeling,cnn:liu15cvpr:deepneuralfields}. The use of image-based bottom-up regions supports adherence of the dense predictions to the boundaries in the image.
Aiming at end-to-end training, Long~et~al.~\cite{cnn:long15cvpr:fcn} propose the fully convolutional network (FCN) architecture for semantic image segmentation, which successively convolves and pools feature maps of an increasing number of feature channels. FCNs employ transposed convolutions to learn the upsampling of coarse feature maps. To obtain the segmentation, feature maps of intermediate resolutions are concatenated and further processed by transposed convolutions. Since the introduction of FCNs, many variants for dense prediction have been proposed. Hariharan~et~al.~\cite{cnn:hariharan15cvpr:hypercolumns} classify pixels based on feature vectors that are extracted at corresponding locations across all feature maps in a CNN. This way, the method combines features across all layers available in the network, capturing high-resolution detail as well as context in large receptive fields. However, this approach becomes inefficient in deep architectures with many wide layers. Noh~et~al.~\cite{cnn:noh15iccv:deconv} and Dosovitskiy~et~al.~\cite{cnn:dosovitskiy15iccv:flownet} propose encoder-decoder CNN architectures which successively unpool and convolve the lowest-resolution feature map of the encoder back to a high output resolution. Since the feature maps in the encoder lose spatial information through pooling, Noh~et~al.~\cite{cnn:noh15iccv:deconv} exploit memorized unpooling~\cite{cnn:zeiler11cvpr:memorizedunpool} to upscale coarse feature maps at the decoder stage, where the pooling locations are used to unpool accordingly. The FCN of Laina~et~al.~\cite{cnn:laina163dv:depth} uses the deep residual network~\cite{cnn:he16cvpr:resnet} as an encoder, where most pooling layers are replaced by stride-two convolutions. For upscaling, an up-projection block is developed as an efficient implementation of upconvolution. 
The principle of upconvolution was developed by \cite{cnn:dosovitskiy15cvpr:chairs}: a feature map is first unpooled by placing each activation into one entry of a $2\times 2$ block, and the resulting sparse feature map is then filtered with a convolution. Details in the predictions of such encoder-decoder FCNs can be improved by feeding the feature maps at each scale of the encoder to the corresponding scale of the decoder (skip connections, e.g.~\cite{cnn:dosovitskiy15iccv:flownet}). In RefineNet~\cite{cnn:lin17cvpr:refinenet}, the decoder feature maps are successively refined using multi-resolution fusion with their higher-resolution counterparts in the encoder. In this paper, we also reincorporate the high-frequency information that is discarded during pooling to successively refine feature maps in the decoder.
Some FCN architectures use dilated convolutions in order to increase the receptive field without pooling and to maintain a high resolution of the feature maps~\cite{cnn:chen15iclr:deeplab, cnn:chen18:deeplabv2, cnn:yu16iclr:dilate}. These dilated CNNs trade high-resolution output for high memory consumption, which quickly becomes a bottleneck for training with large batch sizes compared to encoder-decoder CNNs. The full-resolution residual network (FRRN) by \cite{cnn:pohlen17cvpr:frrn} is an alternative model which keeps features in a high-resolution lane and, at the same time, extracts low-resolution higher-order features in an encoder-decoder architecture. The high-resolution features are successively refined from residuals computed through the encoder-decoder lane. While the model is highly demanding in memory and training time, it achieves high-resolution predictions that adhere well to segment boundaries.
\cite{cnn:ghiasi16eccv:lrr} take inspiration from Laplacian image decompositions for their network design. They successively refine the high-frequency parts of the score maps in order to improve predictions at segment boundaries. Structured prediction approaches integrate inference in CRFs with deep neural networks in end-to-end trainable models~\cite{cnn:zheng15iccv:crfrnn, cnn:liu15iccv:semantic, cnn:chandra16eccv:gaussiancrfs, cnn:lin16cvpr:piecewise}. While these models are capable of recovering high-resolution predictions, inference and learning typically require tedious iterative procedures. In contrast to those approaches, we aim to provide detailed predictions in a swift and direct forward pass. Recently, the pyramid scene parsing network (PSPNet) of \cite{cnn:zhao17cvpr:pspnet} extracts global context features using a pyramid pooling module, which shows the benefit of aggregating global information for dense predictions. The pyramid design in PSPNet relies on multiple average pooling layers with heuristically chosen window sizes. In this work, we also propose a more efficient pyramid pooling stage based on the multi-resolution DWT.
\section{WCNN Encoder-Decoder Architectures}
Recently, CNNs have demonstrated impressive performance on many dense pixelwise prediction tasks, including semantic image segmentation, optical flow estimation, and depth regression. CNNs extract image features through successive layers of convolution and non-linear activation. In encoder architectures, as the stack of layers gets deeper, the dimension of the feature vectors increases while the spatial resolution is reduced. For dense prediction tasks, CNNs with an encoder-decoder architecture, in which the feature maps of the encoder are successively unpooled and deconvolved, are widely applied. Research on architectures for the encoder part is relatively mature; state-of-the-art CNNs such as~VGGNet~\cite{cnn:simonyan15iclr:vgg} and ResNet~\cite{cnn:he16cvpr:resnet} are commonly used in various applications. In contrast, the design of the decoder has not yet converged to a universally accepted solution. While it is easy to reduce the spatial dimensions by either pooling or strided convolution, recovering a detailed prediction from a coarse and high-dimensional feature space is less straightforward. In this paper, we make an analogy between CNN encoder-decoders and the multi-resolution wavelet transform (see \cref{arxiv18:fig:network}). We match the pooling operations of the CNN encoder with the multilevel forward transformation of a signal by a wavelet; the decoder performs the corresponding inverse wavelet transform for unpooling. The analogy is straightforward: the wavelet transform successively filters the signal into frequency subbands while reducing the spatial resolution, and the inverse wavelet transform successively composes the frequency subbands back to full resolution. While the encoder and the decoder transform between different domains (e.g. image-to-semantic segmentation vs. 
image-to-image in wavelet transforms), we find that wavelet unpooling provides an elegant mechanism to transmit high-frequency information from the image domain to the semantic segmentation. It also imposes a strong architectural regularization, as the feature dimensions between the encoder and the decoder need to match through the wavelet coefficients.
\input{./plots/wcnn-network2.tex}
\subsection{Discrete Wavelet Transform}
We briefly introduce the main concepts of the DWT (see~\cite{book:mallat09:wavelet} for a comprehensive introduction). The multi-resolution wavelet transform provides a localized time-frequency analysis of signals and images. Consider 2D input data $X\in\mathbb{R}^{2M\times 2N}$ and let $\phi\in\mathbb{R}^2$ and $\psi\in\mathbb{R}^2$ be 1D low-pass and high-pass filters, respectively. Denoting the array elements of $X$ by $x_{ij}$, the single-level DWT is defined as follows:
\ieqn{\label{eq:dwt}
\ialid{
&y^{ll}_{ij} =\sum_{k}\sum_{l} x_{2i+k,2j+l}\phi_k\phi_l, \\
&y^{lh}_{ij} =\sum_{k}\sum_{l} x_{2i+k,2j+l}\phi_k\psi_l, \\
&y^{hl}_{ij} =\sum_{k}\sum_{l} x_{2i+k,2j+l}\psi_k\phi_l, \\
&y^{hh}_{ij} =\sum_{k}\sum_{l} x_{2i+k,2j+l}\psi_k\psi_l.
}}
All the convolutions above are performed with stride 2, yielding a subsampling by a factor of 2 along each spatial dimension. Denote the low-low frequency component by $Y^{ll}:=\{y^{ll}_{ij}\}$, the low-high frequency component by $Y^{lh}:=\{y^{lh}_{ij}\}$, the high-low frequency component by $Y^{hl}:=\{y^{hl}_{ij}\}$, and the high-high frequency component by $Y^{hh}:=\{y^{hh}_{ij}\}$. The DWT thus yields four components $Y^{ll}, Y^{lh}, Y^{hl}, Y^{hh}\in\mathbb{R}^{M\times N}$.
Conversely, supplied with the wavelet coefficients, and provided that~$\{\phi,\psi\}$ and~$\{\tilde\phi,\tilde\psi\}$ are bi-orthogonal wavelet filters, the original input~$X$ can be reconstructed by the inverse DWT as
\iali{
x_{ij} =&\sum_{l}\sum_{k} \bigg(
y^{ll}_{kl}\tilde\phi_{i-2k}\tilde\phi_{j-2l}+
y^{lh}_{kl}\tilde\phi_{i-2k}\tilde\psi_{j-2l} \notag\\
&
+y^{hl}_{kl}\tilde\psi_{i-2k}\tilde\phi_{j-2l}+
y^{hh}_{kl}\tilde\psi_{i-2k}\tilde\psi_{j-2l}\bigg)\, .
\label{eq:idwt}
}
A cascaded wavelet decomposition successively performs \cref{eq:dwt} on the low-low frequency coefficients~$\{(\cdot)^{ll}\}$ from fine to coarse resolution, while the reconstruction works reversely from coarse to fine resolution. In this sense, decomposition-reconstruction in multi-resolution wavelet analysis is analogous to the pooling-unpooling steps in an encoder-decoder CNN (e.g., \cite{cnn:noh15iccv:deconv}). Moreover, it is worth noting that, while the low-frequency coefficients~$\{(\cdot)^{ll}\}$ store local averages of the input data, their high-frequency counterparts, namely $\{(\cdot)^{lh}\}$, $\{(\cdot)^{hl}\}$, and $\{(\cdot)^{hh}\}$,
encode local textures which are vital in recovering sharp boundaries. This motivates us to make use of the high-frequency wavelet coefficients to improve the quality of unpooling during the decoder stage and, hence, improve the accuracy of CNN in pixelwise prediction.
Throughout this paper, we extensively use the Haar wavelet for its simplicity and its effectiveness in boosting the performance of the underlying CNN. In this scenario, the Haar filters used for decomposition in \cref{eq:dwt} are given by
\begin{equation}\label{nips17:eq:haarfilt}
\phi =\left(\frac12\;,\; \frac12\right)\;\;,\quad
\psi =\left(\frac12\;,\; -\frac12\right)\;.
\end{equation}
The corresponding reconstruction filters in \cref{eq:idwt} are given by $\tilde\phi=2\phi$, $\tilde\psi=2\psi$, and hence the inverse transform reduces to a sum of Kronecker products (denoted with $\otimes$)
\iali{
X=&Y^{ll}\otimes \tilde\phi\,^\top \otimes\tilde\phi
+Y^{lh}\otimes \tilde\phi\,^\top \otimes\tilde\psi \notag\\
+&Y^{hl}\otimes \tilde\psi\,^\top \otimes\tilde\phi
+Y^{hh}\otimes \tilde\psi\,^\top \otimes\tilde\psi \;.
}
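Written out for the Haar filters, a single output location of \cref{eq:dwt} takes the familiar average/difference form
\begin{equation*}
\begin{split}
y^{ll}_{ij} &= \tfrac14\big(x_{2i,2j}+x_{2i,2j+1}+x_{2i+1,2j}+x_{2i+1,2j+1}\big),\\
y^{lh}_{ij} &= \tfrac14\big(x_{2i,2j}-x_{2i,2j+1}+x_{2i+1,2j}-x_{2i+1,2j+1}\big),\\
y^{hl}_{ij} &= \tfrac14\big(x_{2i,2j}+x_{2i,2j+1}-x_{2i+1,2j}-x_{2i+1,2j+1}\big),\\
y^{hh}_{ij} &= \tfrac14\big(x_{2i,2j}-x_{2i,2j+1}-x_{2i+1,2j}+x_{2i+1,2j+1}\big),
\end{split}
\end{equation*}
and substituting $\tilde\phi=2\phi$, $\tilde\psi=2\psi$ into \cref{eq:idwt} recovers, for instance, $x_{2i,2j} = y^{ll}_{ij}+y^{lh}_{ij}+y^{hl}_{ij}+y^{hh}_{ij}$.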
With CNNs, data at every layer are structured into 4D tensors, i.e., along the dimensions of the batch size, the channel number, the width and the height. To perform the wavelet transform for CNNs, we apply DWT/iDWT channelwise. Without confusion, the remaining text adopts the shorthand notations $G_h(X)$ for the Haar DWT and $G_h^{-1}(Y^{ll}, Y^{lh},Y^{hl},Y^{hh})$ for the corresponding iDWT.
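As a sanity check of \cref{eq:dwt,eq:idwt} with the Haar filters, the following sketch (illustrative, not the paper's implementation) performs a single-level 2D transform and verifies perfect reconstruction:

```python
import numpy as np

# Single-level 2D Haar DWT/iDWT, following the conventions above.
def haar_dwt(x):
    """Decompose with phi = (1/2, 1/2), psi = (1/2, -1/2), stride 2."""
    a = (x[0::2, :] + x[1::2, :]) / 2    # low-pass along the first axis
    d = (x[0::2, :] - x[1::2, :]) / 2    # high-pass along the first axis
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    """Reconstruct with the dual filters (1, 1) and (1, -1)."""
    M, N = ll.shape
    x = np.empty((2 * M, 2 * N))
    x[0::2, 0::2] = ll + lh + hl + hh
    x[0::2, 1::2] = ll - lh + hl - hh
    x[1::2, 0::2] = ll + lh - hl - hh
    x[1::2, 1::2] = ll - lh - hl + hh
    return x
```

Composing `haar_idwt` with `haar_dwt` returns the input exactly, reflecting the bi-orthogonality of the filter pairs.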
\subsection{Wavelet CNN Encoder-Decoder Architecture}
We propose a CNN encoder-decoder that resembles multi-resolution wavelet decomposition and reconstruction by its pooling and unpooling operations. In addition, we introduce two pyramid variants to capture global contextual features based on the wavelet transformation.
\Cref{arxiv18:fig:network} gives an overview of the proposed WCNN architecture. WCNN employs ResNet~\cite{cnn:he16cvpr:resnet} for the encoder. In ResNet, the input resolution is successively reduced by a factor of 32 via one max-pooling layer and four stride-two convolutional layers, \emph{i.e.,} conv1, conv3\_1, conv4\_1 and conv5\_1. In order to restore the input resolution with the decoder, WCNN inserts three DWT layers after conv2, conv3 and conv4 to decompose the corresponding feature maps into four frequency bands. The high frequencies $Y^{lh}, Y^{hl}, Y^{hh}$ are skip-connected to the decoder to perform unpooling via the iDWT layers, which we discuss in detail in \cref{sect:waveletunpool}. We add three convolutional residual blocks~\cite{cnn:he16cvpr:resnet} to filter the unpooled feature maps further before the next unpooling stage. As illustrated in \cref{arxiv18:fig:network}, the three iDWT layers upsample the output to $1/4$ of the input resolution. The full-resolution output is obtained with two upconvolutional blocks using transposed convolutions. To bridge the encoder and the decoder, a contextual pyramid based on the wavelet transform is added. \Cref{sect:pyramid} will detail the pyramid design.
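The resolution schedule described above can be checked with simple arithmetic (illustrative sketch; the $224\times224$ input size is an assumption, not fixed by the architecture):

```python
def wcnn_resolutions(h=224, w=224):
    """Resolution bookkeeping of the WCNN encoder-decoder sketched above:
    five halvings in the encoder (factor 32), three iDWT unpooling layers
    (back to 1/4), and two up-convolutional blocks (full resolution)."""
    s = [h, w]
    for _ in range(5):                 # conv1, max-pool, conv3_1, conv4_1, conv5_1
        s = [d // 2 for d in s]
    assert s == [h // 32, w // 32]     # encoder output at 1/32 resolution
    for _ in range(3):                 # three iDWT layers
        s = [d * 2 for d in s]
    assert s == [h // 4, w // 4]       # decoder at 1/4 resolution after iDWT
    for _ in range(2):                 # two up-convolutional blocks
        s = [d * 2 for d in s]
    return tuple(s)
```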
\input{./plots/pyramids2.tex}
\subsubsection{Wavelet Unpooling}\label{sect:waveletunpool}
WCNN achieves unpooling through iDWT layers. To this end, DWT layers are added to the encoder to obtain the high-frequency components. The idea is straightforward: at the encoder, the DWT layers decompose the feature map channelwise into four frequency bands, each at half the resolution of the input. The high-frequency components are skip-connected to the decoder, where the spatial resolution needs to be upscaled by a factor of two. Taking the layer idwt\_4 in \cref{arxiv18:fig:network} as an example, the inputs to this layer are four components of spatial resolution $1/32$ on which the iDWT is performed. The pyramid output serves as the low-low frequency $\tilde{Y}^{ll}$, while the dwt4 layer operating on conv4 provides the three high-frequency components $Y^{lh}$, $Y^{hl}$, and $Y^{hh}$. With iDWT, the spatial resolution is upscaled to $1/16$. The output of layer idwt4 is finalized by adding the $1/16$-resolution direct output of conv4, a standard skip connection commonly used by state-of-the-art encoder-decoder CNNs to improve upsampling performance. The iDWT layer can thus be described by
\begin{equation}\label{eq:idwtlayer}
G_h^{-1}(\tilde{Y}^{ll}, Y^{lh}, Y^{hl}, Y^{hh}) + X \;.
\end{equation}
We denote this approach of upscaling the decoder feature map with the wavelet coefficients from the encoder as wavelet unpooling.
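The wavelet unpooling step of \cref{eq:idwtlayer} can be sketched as follows. This is an illustrative NumPy sketch under one common Haar sign convention; names such as `wavelet_unpool` are ours and not part of any library.

```python
import numpy as np

def idwt_haar(ll, lh, hl, hh):
    """Inverse 2D Haar transform of four (C, h, w) bands -> (C, 2h, 2w)."""
    C, h, w = ll.shape
    x = np.empty((C, 2 * h, 2 * w), dtype=ll.dtype)
    x[:, 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[:, 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[:, 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[:, 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def wavelet_unpool(decoder_ll, enc_lh, enc_hl, enc_hh, enc_skip):
    """iDWT of the decoder output with the cached encoder high-frequency
    bands, plus a standard skip connection from the encoder."""
    return idwt_haar(decoder_ll, enc_lh, enc_hl, enc_hh) + enc_skip
```

The decoder feature map plays the role of $\tilde{Y}^{ll}$, and the three cached encoder bands supply the missing high frequencies, so the upscaling requires no learned parameters.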
Typically, CNNs extract features with many layers of convolution and nonlinear operations, which transform and embed the feature space differently layer by layer. Wavelet unpooling aims to maintain a similar frequency structure throughout the CNN. By replacing the low-frequency band of the encoder with the corresponding output of the decoder and performing iDWT with the high-frequency bands from the encoder, wavelet unpooling encourages the network to learn feature maps whose frequency structure is invariant under layers of filtering. The skip connections of the signals before DWT also support learning such consistency.
In comparison to other unpooling methods, for example upsampling by transposed convolution as proposed in \cite{cnn:long15cvpr:fcn}, wavelet unpooling does not require any parameters, since the DWT and iDWT layers are parameter-free. Compared to memorized unpooling as proposed in \cite{cnn:zeiler11cvpr:memorizedunpool}, or to mapping the low-resolution feature map to the top-left entry of a $2\times 2$ block~\cite{cnn:dosovitskiy15cvpr:chairs}, wavelet unpooling aims to restore every entry according to the frequency structure.
\subsubsection{Wavelet Pyramid}\label{sect:pyramid}
With CNNs that are designed for classification tasks, the last few layers typically reduce the spatial resolution to $1\times1$. Such feature maps have a receptive field covering the entire input image and therefore capture the global context. Recent works have demonstrated that capturing global context information is also crucial for accurate dense pixelwise prediction~\cite{cnn:chen15iclr:deeplab, cnn:zhao17cvpr:pspnet}. While it is straightforward to obtain global context with a fully-connected layer or with convolutional layers of large filter size, it is difficult to bridge an encoder with drastically reduced spatial resolution to a proper decoder. Most state-of-the-art CNN encoders reduce the spatial resolution by a factor of 32, which produces a $7\times7$ output for $224\times224$ input dimensions. If the global context is captured by a simple fully-connected layer, learning $7\times7$ upsampling kernels is challenging.
One solution is to use dilated convolutions, which increase the receptive field with the same number of parameters~\cite{cnn:chen15iclr:deeplab, cnn:chen18:deeplabv2}. Building on dilated CNNs, the pyramid spatial pooling network PSPNet~\cite{cnn:zhao17cvpr:pspnet} introduces a pyramid on the feature map with multiple average-pooling operations of different window sizes. Noticeably, dilated convolutions demand considerably larger amounts of memory to host the data, which quickly becomes the bottleneck for training with large batch sizes. In this work, we base our network design on non-dilated CNNs and instead construct the pyramids through wavelet transformations. We propose two wavelet pyramid variants, namely the low-frequency propagation (LFP) and the full-frequency composition (FFC) pyramids, as shown in \cref{arxiv18:fig:pyramids}.
\paragraph{Low-Frequency Propagation Wavelet Pyramid}
As shown in \cref{arxiv18:fig:pyramids}~(a), the LFP pyramid successively performs DWT on the low-low frequency components $Y^{ll}$. At each pyramid level, the extracted $Y^{ll}$ component is further transformed with a convolutional layer and then bilinearly upsampled to the same spatial resolution as the pyramid input, i.e., conv5. We then concatenate the upsampled feature maps to aggregate the global context captured at different scales. This concatenated feature map is combined with the skip-connected conv5 by an elementwise addition, which is then filtered with a $1\times1$ convolutional layer to match the channel dimension of the decoder.
With LFP, a multi-resolution wavelet pyramid is constructed in which only the low-low frequency band of each level is used. The LFP pyramid resembles the pyramid proposed by PSPNet~\cite{cnn:zhao17cvpr:pspnet}. In particular, with the Haar wavelet, the low-low frequency band is equivalent, up to scaling, to average pooling with a $2\times2$ window. The difference is that PSPNet designs its average pooling with multiple heuristic window sizes, whereas the LFP pyramid strictly follows the frequency decomposition. Although the Haar wavelet is used in this work, the LFP pyramid can easily be generalized to other wavelet basis functions.
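Structurally, the LFP pyramid can be sketched as follows. This is an illustrative NumPy sketch only: the per-level convolutional layers of the real network are omitted, nearest-neighbour upsampling stands in for the bilinear resize, and all function names are ours.

```python
import numpy as np

def haar_ll(x):
    """Low-low Haar band of a (C, H, W) map: a scaled 2x2 average."""
    return (x[:, 0::2, 0::2] + x[:, 0::2, 1::2]
            + x[:, 1::2, 0::2] + x[:, 1::2, 1::2]) / 2

def upsample_nn(x, factor):
    """Nearest-neighbour upsampling stand-in for the bilinear resize."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def lfp_pyramid(conv5, levels=4):
    """Structural sketch of the LFP pyramid: repeated ll decomposition,
    upsampling back to the input resolution, and concatenation.
    The per-level 1x1 convolutions and the final residual skip with
    conv5 (which needs a channel-matching 1x1 convolution) are omitted."""
    feats, ll = [], conv5
    for lvl in range(1, levels + 1):
        ll = haar_ll(ll)
        feats.append(upsample_nn(ll, 2 ** lvl))
    return np.concatenate(feats, axis=0)  # stack along the channel axis
```

Each level halves the spatial resolution, so with four levels the coarsest band has a receptive field covering the whole pyramid input.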
\paragraph{Full-Frequency Composition Wavelet Pyramid}
The LFP pyramid only uses the low-low frequency bands. In order to make full use of the frequency decomposition, we develop the FFC pyramid. As shown in \cref{arxiv18:fig:pyramids}~(b), the FFC pyramid amounts to a small encoder-decoder with wavelet unpooling. Starting from the input conv5, DWT is performed to obtain the four frequency bands. The low-low frequency band $Y^{ll}$ is filtered by an additional convolutional layer, and the high-frequency bands $Y^{lh}, Y^{hl}, Y^{hh}$ are cached for unpooling. The filtered low-low frequency band is then further decomposed by DWT into the next level, and the same operation repeats until the finest feature map is obtained. To upscale from the finest level, we again adopt the wavelet unpooling described by \cref{eq:idwtlayer}: the iDWT is first performed using the cached high-frequency bands, and the output is then further fused with the skip connection. The wavelet unpooling successively restores the spatial resolution to that of the pyramid input. Finally, we skip-connect conv5 to the wavelet output by an elementwise addition and project the global context with a $1\times1$ convolution to bridge the following decoder. The FFC pyramid thus mimics the encoder-decoder design, which naturally reduces the spatial resolution and restores it in a manner consistent with the remaining network.
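The FFC pyramid's down-then-up structure can be sketched as follows. This is an illustrative NumPy sketch under one common Haar convention; the per-level convolutional filters of the real network are omitted, and all names are ours.

```python
import numpy as np

def dwt(x):
    """Channelwise Haar DWT of a (C, H, W) map into four half-size bands."""
    a, b = x[:, 0::2, 0::2], x[:, 0::2, 1::2]
    c, d = x[:, 1::2, 0::2], x[:, 1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def idwt(ll, lh, hl, hh):
    """Inverse Haar transform: four (C, h, w) bands -> (C, 2h, 2w)."""
    C, h, w = ll.shape
    x = np.empty((C, 2 * h, 2 * w), dtype=ll.dtype)
    x[:, 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[:, 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[:, 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[:, 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def ffc_pyramid(conv5, levels=4):
    """Sketch of the FFC pyramid: DWT down with cached high-frequency
    bands, then wavelet unpooling back up with skip connections.
    The per-level 1x1 convolutions of the real network are omitted."""
    cache, ll = [], conv5
    for _ in range(levels):
        ll, lh, hl, hh = dwt(ll)
        cache.append((ll, lh, hl, hh))
    out = cache[-1][0]  # coarsest ll map, receptive field = whole input
    for lvl in range(levels - 1, -1, -1):
        _, lh, hl, hh = cache[lvl]
        out = idwt(out, lh, hl, hh)
        if lvl > 0:
            out = out + cache[lvl - 1][0]  # skip to the ll of that level
    return out + conv5  # final skip connection to the pyramid input
```

The output has the same shape as the pyramid input, so it can be handed directly to the decoder.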
\section{Introduction}
Dense pixelwise prediction tasks such as semantic segmentation, optical flow or depth estimation remain up-to-date challenges in computer vision. They find rapidly rising interest in applications such as autonomous driving, robotic vision and image scene understanding. Following their remarkable success in image recognition~\cite{cnn:krizhevsky12nips:alexnet}, deep convolutional neural networks (CNNs) have achieved state-of-the-art performance in dense prediction tasks such as semantic segmentation~\cite{cnn:zhao17cvpr:pspnet,cnn:lin17cvpr:refinenet,cnn:pohlen17cvpr:frrn} or single-image depth estimation~\cite{cnn:laina163dv:depth}.
Many dense prediction tasks consist of two concurrent goals: classification and localization. Classification is well tackled by an end-to-end trainable CNN architecture, e.g.~VGGNet~\cite{cnn:simonyan15iclr:vgg} or ResNet~\cite{cnn:he16cvpr:resnet}, which typically stacks multiple layers of successive convolution, nonlinear activation, and pooling. A typical pooling step, which performs either a subsampling or some strided averaging on an input volume, is favorable for the invariance of prediction results to small spatial translations in the input data as well as for the boost of computational efficiency via dimension reduction. Its downside, however, is the loss of resolution in output feature maps, which renders high-quality pixelwise prediction challenging.
Several remedies for such a dilemma have been proposed in the literature. As suggested in \cite{cnn:noh15iccv:deconv,cnn:badrinarayanan15:segnet}, one may mirror the encoder network by a decoder network. Each upsampling (or unpooling) layer in the decoder network is introduced in symmetry to a corresponding pooling layer in the encoder network, and then followed by trainable convolutional layers. Alternatively, one may use dilated (also known as atrous) convolutions in a CNN encoder as proposed in \cite{cnn:yu16iclr:dilate, cnn:chen15iclr:deeplab, cnn:chen18:deeplabv2}. This enables the CNN to expand the receptive fields of pixels as convolutional layers stack up without losing resolution in the feature maps, however at the cost of significant computational time and memory. Another alternative is to combine a CNN low-resolution classifier with a conditional random field (CRF) \cite{misc:krahenb11nips:crf, cnn:krahenbuhl13icml:param}, either as a stand-alone post-processing step \cite{cnn:chen15iclr:deeplab, cnn:chen18:deeplabv2} or combined with a CNN in an end-to-end trainable architecture \cite{cnn:zheng15iccv:crfrnn, cnn:lin16cvpr:piecewise}. The latter also comes with increased run-time demands for training and inference.
Motivated by the close analogy between pooling (resp.~unpooling) in an encoder-decoder CNN and decomposition (resp.~reconstruction) in multi-resolution wavelet analysis, this paper proposes a new class of CNNs with wavelet unpooling and wavelet pyramids, which we name WCNN. The first contribution of WCNN is to achieve unpooling with the inverse discrete wavelet transform (iDWT). To this end, the DWT is applied at the encoder to decompose feature maps into frequency bands. The high-frequency components are skip-connected to the decoder to perform iDWT jointly with the coarse-resolution feature maps. Wavelet unpooling does not require any additional parameters over baseline CNNs; the only overhead comes from the memory to cache the frequency coefficients from the encoder. The second contribution of WCNN is two wavelet-based pyramid variants to bridge the standard encoder and decoder. The wavelet pyramids obtain global context from a receptive field covering the entire image by exploiting multi-resolution DWT/iDWT. Experiments on the Cityscapes dataset show that the proposed WCNN yields systematic improvements in dense prediction accuracy.
\section{Evaluations}
In this section, we evaluate the proposed WCNN method for the task of semantic image segmentation. To this end, we use the Cityscapes benchmark dataset~\cite{dataset:cordts16cvpr:cityscapes}, which contains 2,975 training, 500 validation and 1,525 test images that are captured in 50 different cities from a driving car. All the images are densely annotated with 30 commonly observed object classes occurring in urban street scenes, of which 19 classes are used for evaluation. The Cityscapes benchmark provides all images at the same high resolution of~$2048\times 1024$. The ground truth for the test images is not publicly available, and evaluations on the test set are submitted online\footnote{http://www.cityscapes-dataset.com} for fair comparison.
\begin{table}
\centering
\caption{The layer configurations of the proposed WCNN (see \cref{arxiv18:fig:network}). The encoder is based on ResNet101~\cite{cnn:he16cvpr:resnet}. The resblock is the residual block from ResNet, where $(x, y)\times z$ denotes stacking $z$ blocks of $[(1\times1, x), (3\times3, x), (1\times1, y)]$ convolutional layers. For upconvolution, the transposed convolution is first used to upscale the input by a factor of two, followed by residual blocks. We denote the stride-two operations with s2, and elementwise addition with $\boxplus$. The dimension of the layer output assumes the spatial resolution of the input image is normalized to 1, and the second entry denotes the depth of the feature maps.}
\label{arxiv18:tab:wcnn}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{L{10ex}L{25ex}L{17ex}L{10ex}}
\toprule
layer & operation & input & dimension \\
\midrule
conv1 & $(7\times7, 64)$, s2 &RGB &1/2, 64\\
maxpool & $(2\times2)$, s2 &conv1 &1/4, 64\\
conv2\_x & resblock $(64, 256)\times3$ &maxpool &1/4, 256 \\
dwt2 & $G_h$ &conv2\_x &1/8, 256\\
conv3\_1 & resblock $(128, 512)$, s2 &conv2\_x &1/8, 512\\
conv3\_x & resblock $(128, 512)\times3$ &conv3\_1 &1/8, 512\\
dwt3 & $G_h$ &conv3\_x &1/16, 512\\
conv4\_1 & resblock $(256,1024)$, s2 &conv3\_x &1/16, 1024\\
conv4\_x & resblock $(256,1024)\times22$ &conv4\_1 &1/16, 1024\\
dwt4 & $G_h$ &conv4\_x &1/32, 1024\\
conv5\_1 & resblock $(512,2048)$, s2 &conv4\_x &1/32, 2048\\
conv5\_x & resblock $(512,2048)\times2$ &conv5\_1 &1/32, 2048\\
\midrule
pyramid & &conv5\_x &1/32, 1024\\
\midrule
idwt4 & $G_h^{-1}$ &$\begin{dcases}\text{pyramid} \\ Y_4^{lh}, Y_4^{hl}, Y_4^{hh}\end{dcases}$ &1/16, 1024\\
dconv4\_x & resblock $(256,512)\times3$ &idwt4 $\boxplus$ conv4\_x &1/16, 512\\
idwt3 & $G_h^{-1}$ &$\begin{dcases}\text{dconv4\_x}\\ Y_3^{lh}, Y_3^{hl}, Y_3^{hh}\end{dcases}$ &1/8, 512\\
dconv3\_x & resblock $(128,256)\times3$ &idwt3 $\boxplus$ conv3\_x &1/8, 256\\
idwt2 & $G_h^{-1}$ &$\begin{dcases}\text{dconv3\_x}\\ Y_2^{lh}, Y_2^{hl}, Y_2^{hh}\end{dcases}$ &1/4, 256\\
dconv2\_x & resblock $(64,128)\times3$ &idwt2 $\boxplus$ conv2\_x &1/4, 128\\
upconv2\_x& upconv $(64, 64)\times3$ &dconv2\_x &1/2, 64\\
upconv1\_x& upconv $(64, 64)\times2$ &upconv2\_x &1/1, 64\\
\bottomrule
\end{tabular}
\end{table}
\subsection{WCNN Configurations}
\Cref{arxiv18:tab:wcnn} presents the network configuration of the proposed WCNN. We take the state-of-the-art ResNet101~\cite{cnn:he16cvpr:resnet} for the encoder. ResNet101 uses stride-two convolutions to reduce the spatial resolution. To implement WCNN, we preserve the stride-two convolutional layers and insert three DWT layers dwt2, dwt3, dwt4 after the encoder blocks conv2\_x, conv3\_x, conv4\_x, respectively, to obtain the frequency bands. At each upscaling stage of the decoder, the corresponding frequency bands are used, followed by several residual blocks before the next upscaling stage. The last two upscaling stages are implemented as upconvolutions, where transposed convolution is first applied to scale up the resolution by a factor of two, and residual blocks are then used to further filter the intermediate output. In WCNN, we rely heavily on the residual blocks proposed in ResNet~\cite{cnn:he16cvpr:resnet}, where each block is a stack of three convolutional layers with a $3\times3$ second layer for feature extraction and $1\times1$ first and third layers for feature projection.
In this work, we develop CNNs for high-resolution predictions. An input image of $512\times1024$ yields a conv5\_x spatial resolution of $16\times32$. Therefore, we design both the LFP and FFC pyramids to have four levels of DWT, producing frequency components at resolutions $8\times16$, $4\times8$, $2\times4$ and $1\times2$, respectively. The finest pyramid level thus has a receptive field covering the entire input. The details of the LFP and FFC pyramids are given in \cref{arxiv18:tab:pyr}.
To evaluate the proposed network, the baseline CNN is designed to differ minimally from WCNN. Taking the WCNN configuration in \cref{arxiv18:tab:wcnn}, the baseline model 1) removes all DWT layers from the encoder, 2) replaces the pyramid by one $(1\times1, 1024)$ convolutional layer, and 3) replaces the iDWT layers by transposed convolutions to upscale the feature maps by a factor of 2. The remaining layers are identical to WCNN. In the following experiments, we compare the baseline model, the baseline model with the LFP and FFC pyramids, and the WCNN with the LFP and FFC pyramids.
\begin{table}
\centering
\caption{The configurations of the proposed LFP and FFC pyramids (see \cref{arxiv18:fig:pyramids}). Assuming conv5 has a resolution of $16\times32, 2048$, both LFP and FFC pyramids have four levels. For simplicity, only the outer two levels are presented in the table; the inner two levels repeat the same pattern. The operator $\star a$ denotes bilinear upsampling by a factor of $a$ and the operator $\boxplus$ denotes elementwise addition. }
\label{arxiv18:tab:pyr}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{L{10ex}L{17ex}L{20ex}L{13ex}}
\toprule
\multicolumn{4}{c}{LFP-pyramid} \\
layer & operation & input & dimension \\
\hline
dwt\_p1 & $G_h$ &conv5 &$8\times16$, 2048\\
conv\_p1 & $(1\times1, 512)$ &$Y^{ll}_{p1}$ &$8\times16$, 512\\
dwt\_p2 & $G_h$ &$Y^{ll}_{p1}$ &$4\times8$, 512\\
conv\_p2 & $(1\times1, 512)$ &$Y^{ll}_{p2}$ &$4\times8$, 512\\
$\vdots$ &$\vdots$ &$\vdots$&$\vdots$\\
concat & concatenation &$\begin{dcases}Y^{ll}_{p1}\star2, Y^{ll}_{p2}\star4 \\Y^{ll}_{p3}\star8, Y^{ll}_{p4}\star16 \end{dcases}$ &$16\times32$,2048\\
conv\_pyr &$(1\times1, 1024)$ &concat $\boxplus$ conv5 &$16\times32$,1024\\
\midrule
\multicolumn{4}{c}{FFC-pyramid} \\
layer & operation & input & dimension \\
\hline
dwt\_p1 & $G_h$ &conv5 &$8\times16$, 2048\\
conv\_p1 & $(1\times1, 2048)$ &$Y^{ll}_{p1}$ &$8\times16$, 2048\\
dwt\_p2 & $G_h$ &conv\_{p1} &$4\times8$, 2048\\
conv\_p2 & $(1\times1, 2048)$ &$Y^{ll}_{p2}$ &$4\times8$, 2048\\
$\vdots$ &$\vdots$ &$\vdots$&$\vdots$\\
idwt\_p2 & $G_h^{-1}$ &$\begin{dcases}\text{conv\_p2}\boxplus\text{idwt\_p3}\\ Y_2^{lh}, Y_2^{hl}, Y_2^{hh}\end{dcases}$ &$8\times16$, 2048\\
idwt\_p1 & $G_h^{-1}$ &$\begin{dcases}\text{conv\_p1}\boxplus\text{idwt\_p2}\\ Y_1^{lh}, Y_1^{hl}, Y_1^{hh}\end{dcases}$ &$16\times32$, 2048\\
conv\_pyr &$(1\times1, 1024)$ &idwt\_p1$\boxplus$ conv5 &$16\times32$, 1024\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Implementation Details}
We have implemented all our methods in the TensorFlow~\cite{cnn:martin16:tensorflow} machine learning framework. For network training, we initialize the parameters of the encoder layers from a ResNet model pretrained on ImageNet and initialize the convolutional kernels of the decoder with He initialization~\cite{cnn:he15iccv:msrcinit}. We run the training with a batch size of four on an Nvidia Titan X GPU. For training, we minimize the cross-entropy loss using the Stochastic Gradient Descent (SGD) solver with a momentum of 0.9. The initial learning rate is set to 0.001 and decreased by a factor of 0.9 every 10 epochs. We train the networks until convergence; for Cityscapes, all variants in our experiments converge after around 60K iterations. Following~\cite{cnn:pohlen17cvpr:frrn}, we apply bootstrapped loss minimization for the Cityscapes benchmark in order to speed up training and boost the segmentation accuracy. For all Cityscapes experiments, we fix the bootstrapping threshold to the top 8192 most difficult pixels per image.
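The bootstrapped loss amounts to averaging the cross-entropy only over the $k$ hardest pixels per image. A minimal NumPy sketch of this top-$k$ selection follows (an illustration only; our actual implementation operates on TensorFlow tensors, and the function name is ours):

```python
import numpy as np

def bootstrapped_xent(logits, labels, k=8192):
    """Cross-entropy averaged over the k hardest pixels.

    logits: (P, num_classes) array of flattened per-pixel scores,
    labels: (P,) integer class labels."""
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_pixel = -logp[np.arange(labels.size), labels]  # CE per pixel
    hardest = np.sort(per_pixel)[-k:]                  # top-k losses
    return hardest.mean()
```

With $k$ equal to the number of pixels this reduces to the plain mean cross-entropy; smaller $k$ focuses the gradient on the most difficult pixels.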
\begin{table*}
\centering
\caption{Cityscapes 19-class semantic segmentation IoU scores on the \emph{val} set. All test results are obtained by comparing against half-resolution ground-truth labeling, which is the resolution of the input images to our networks. The second part of the table reports the performance with multi-scale test-time data augmentation, indicated by the MS suffix.}
\label{tab:classwiseiou}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{L{20ex} cl*{20}{r} }
\toprule
method
&\block{city0}{road}
&\block{city1}{sidewalk}
&\block{city2}{building}
&\block{city3}{wall}
&\block{city4}{fence}
&\block{city5}{pole}
&\block{city6}{traffic light}
&\block{city7}{traffic sign}
&\block{city8}{vegetation}
&\block{city9}{terrain}
&\block{city10}{sky}
&\block{city11}{person}
&\block{city12}{rider}
&\block{city13}{car}
&\block{city14}{truck}
&\block{city15}{bus}
&\block{city16}{train}
&\block{city17}{motorcycle}
&\block{city18}{bicycle}
&avg\\
\midrule
frequency &37.7 & 5.4 &21.9 & 0.7 & 0.8 & 1.5 & 0.2 & 0.7 &17.2 & 0.8 & 3.4 & 1.3 & 0.2 & 6.6 & 0.3 & 0.4 & 0.1 & 0.1 & 0.7 \\
\midrule
baseline & 98.8 &88.8 &96.0 &51.5 &61.6 &62.0 &66.6 &76.5 &96.0 &70.1 &97.1 &85.8 &66.4 &97.0 &81.4 &85.4 &59.0 &53.8 &84.6 & 69.2\\
baseline-LFP &98.6 &90.1 &95.5 &62.6 &62.6 &61.3 &65.7 &76.0 &95.9 &69.3 &97.4 &85.4 &63.6 &97.1 &80.1 &88.4 &73.8 &61.2 &85.1 &71.2\\
baseline-FFC &98.6 &89.6 &95.3 &63.4 &62.0 &61.3 &67.8 &74.4 &96.1 &64.6 &97.3 &85.9 &63.0 &96.9 &85.5 &89.4 &73.6 &58.5 &84.5 &70.7\\
WCNN-LFP &98.6 &89.8 &95.7 &63.0 &65.8 &61.5 &67.8 &76.2 &96.3 &69.4 &97.4 &85.8 &67.4 &97.2 &82.0 &88.9 &69.9 &59.9 &84.9 &71.6\\
WCNN-FFC &98.7 &90.5 &95.6 &64.8 &64.6 &63.2 &67.8 &77.3 &96.1 &71.0 &97.3 &86.1 &65.3 &97.0 &82.7 &88.7 &77.6 &57.7 &85.1 &71.9\\
\midrule
baseline-MS &\bf 99.0 &90.6 &\bf 96.7 &48.0 &61.2 &68.2 &72.9 &80.2 &96.3 &\bf72.5 &97.7 &89.1 &70.3 &97.6 &76.6 &82.2 &48.9 &60.7 &84.9 &71.4\\
baseline-LFP-MS &98.7 &92.2 &96.5 &54.0 &65.5 &68.9 &71.2 &79.0 &96.1 &64.7 &97.6 &88.1 &64.3 &\bf 97.8 &71.2 &87.3 &71.8 &\bf 68.5 &85.7 &73.3\\
baseline-FFC-MS &98.7 &91.7 &96.4 &64.6 &65.0 &67.4 &\bf 74.3 &79.7 &\bf 96.7 &68.9 &\bf 98.0 &88.8 &68.9 &97.5 &\bf 88.3 &\bf 90.6 &\bf 79.3 &60.9 &\bf 85.8 &74.7\\
WCNN-LFP-MS &98.8 &\bf 92.4 &96.2 &61.2 &\bf 68.0 &68.5 &71.2 &79.8 &96.3 &64.8 &97.5 &88.4 &\bf 70.1 &\bf 97.8 &77.8 &89.3 &61.6 &74.1 &87.1 &73.9\\
WCNN-FFC-MS &98.8 &92.2 &96.6 &\bf 68.6 &64.8 &\bf 69.1 &73.9 &\bf 81.6 &\bf 96.7 &72.4 &97.8 &\bf 89.3 &68.9 &97.5 &87.3 &90.5 &73.3 &58.0 &85.3 &\bf 75.2\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table}
\centering
\caption{IoU scores for the Cityscapes 19-class and category semantic segmentation on the \emph{test} set (benchmark). All test results are obtained by testing at half resolution and comparing to the full-resolution ground-truth labeling through upsampling.}
\label{tab:testsetmiou}
\setlength{\tabcolsep}{1em}
\begin{tabular}{L{20ex}L{18ex}L{18ex}}
\toprule
method &class mIoU &category mIoU\\
\midrule
FRRN~\cite{cnn:pohlen17cvpr:frrn} & 71.8 & \bf 88.9\\
WCNN-FFC & 70.9 & 86.1 \\
WCNN-FFC-MS & \bf 73.7 & 88.3 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}
\centering
\begin{tikzpicture}[inner sep=1pt]
\def12em{12em}
\def6em{6em}
\def1mm{1mm}
\tikzstyle{cmt}=[rotate=90,anchor=center, font=\normalsize, text centered]
\def\onecmp#1#2#3
{
\begin{scope}
\node(rgb) at(#2,0)[anchor=west]{\includegraphics[width=12em]{#1_leftImg8bit.png}};
\node(gt) at(rgb.south)[anchor=north]{\includegraphics[width=12em]{#1_gtFine.png}};
\node(uppsp) at(gt.south)[anchor=north,,yshift=-3pt] {\includegraphics[width=12em]{#1_pred_upconv_psp.png}};
\node(uppsp2) at(uppsp.south)[anchor=north] {\includegraphics[width=12em]{#1_pred_residual_upconv_psp.png}};
\node(uphaar) at(uppsp2.south)[anchor=north,yshift=-3pt] {\includegraphics[width=12em]{#1_pred_upconv_haar.png}};
\node(uphaar2) at(uphaar.south)[anchor=north] {\includegraphics[width=12em]{#1_pred_residual_upconv_haar.png}};
\node(wcnnpsp) at(uphaar2.south)[anchor=north,yshift=-3pt] {\includegraphics[width=12em]{#1_pred_wcnn_psp.png}};
\node(wcnnpsp2) at(wcnnpsp.south)[anchor=north] {\includegraphics[width=12em]{#1_pred_residual_wcnn_psp.png}};
\node(wcnnhaar) at(wcnnpsp2.south)[anchor=north,yshift=-3pt] {\includegraphics[width=12em]{#1_pred_wcnn_haar.png}};
\node(wcnnhaar2) at(wcnnhaar.south)[anchor=north] {\includegraphics[width=12em]{#1_pred_residual_wcnn_haar.png}};
\end{scope}
}
\onecmp{frankfurt_000000_010351}{0}
\onecmp{frankfurt_000001_002512}{12em+1mm}
\onecmp{munster_000108_000019}{12em*2+1mm*2}
\onecmp{lindau_000020_000019}{12em*3+1mm*3}
\tikzstyle{cmt}=[font=\normalsize, text centered, anchor=east, rotate=90]
\begin{scope}[xshift=-2ex]
\node(t1) at (0,0)[text width=6em, yshift=2.5em, xshift=0em, cmt]{RGB};
\node(t2) at (t1.west)[text width=6em, yshift=0.5em, xshift=0em, cmt]{ground truth};
\node(t3) at (t2.west)[text width=2*6em, yshift=-0em, xshift=0em, cmt]{baseline-LFP-MS};
\node(t4) at (t3.west)[text width=2*6em, yshift=-0em, xshift=0em, cmt]{baseline-FFC-MS};
\node(t5) at (t4.west)[text width=2*6em, yshift=-1em, xshift=0em, cmt]{WCNN-LFP-MS};
\node(t6) at (t5.west)[text width=2*6em, yshift=-0.5em, xshift=0em, cmt]{WCNN-FFC-MS};
\end{scope}
\end{tikzpicture}
\caption{Qualitative exemplary semantic segmentation results on the Cityscapes dataset. From top to bottom: RGB image, ground-truth segmentation, baseline-LFP-MS, baseline-FFC-MS, WCNN-LFP-MS, WCNN-FFC-MS. The semantic color coding is given in \cref{tab:classwiseiou}.}
\label{fig:cityscapes}
\end{figure*}
To train all variants of the baseline and our model, we fix the network input to quarter resolution of the original dataset, i.e., $512\times 1024$. For evaluation on the validation dataset, we bilinearly upsample the output logits to half resolution (to match the network input resolution) and compute the intersection-over-union (IoU) score for each class and on average. We also experiment with test-time data augmentation, where we randomly scale the input images and feed them through the network before fusing the scores.
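For reference, the per-class IoU computation can be sketched in a few lines (an illustrative NumPy sketch; the function name is ours, and classes absent from both prediction and ground truth are marked NaN so they do not distort the mean):

```python
import numpy as np

def class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union from integer label maps."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious
```

The mean IoU reported in the tables is then the average of the per-class scores over the 19 evaluated classes.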
\subsection{Cityscapes}
We evaluate segmentation accuracy using the commonly used IoU metric. \Cref{tab:classwiseiou} gives the class-wise IoU and the mean IoU over the 19 classes. It can be seen that adding the LFP and FFC pyramids to the baseline network already significantly improves the segmentation performance. With WCNN we gain a further increase in mean IoU of up to 1.2 over the corresponding baseline. With multi-scale test-time augmentation, the accuracy of each model increases, but a similar ranking is observed among the different methods; under multi-scale testing, the FFC pyramid consistently outperforms the LFP pyramid. Our variants benefit strongly, and the combination of wavelet unpooling and the FFC wavelet pyramid achieves the largest improvement over the baseline (6.0 mIoU). These results demonstrate that wavelet unpooling as well as the FFC wavelet pyramid improve the dense prediction of the baseline model. Qualitative comparisons are shown in \cref{fig:cityscapes}. It can be seen that the WCNN approach recovers fine-detailed structures such as fences, poles or traffic signs with higher accuracy than the baselines.
\Cref{tab:testsetmiou} compares our method with the current state-of-the-art method FRRN~\cite{cnn:pohlen17cvpr:frrn} at the same input resolution (2x subsampling) on the Cityscapes benchmark. It can be seen that our method WCNN-FFC-MS outperforms FRRN by 1.9 mean IoU over the 19 classes, while it is 0.6 mIoU worse at the category level. Notably, WCNN is much less memory-demanding than FRRN.
\section{Conclusion}
This paper introduced WCNN, a novel encoder-decoder CNN architecture for dense pixelwise prediction. The key innovation is to exploit the discrete wavelet transform (DWT) and its inverse (iDWT) to design the unpooling operation. In the proposed network, the high-frequency coefficients extracted by the DWT at the encoder stage are cached and later combined with the coarse-resolution feature maps at the decoder to perform accurate upsampling and hence accurate pixelwise prediction. Further, two wavelet pyramid variants are introduced, i.e., the low-frequency propagation (LFP) pyramid and the full-frequency composition (FFC) pyramid. Both pyramids extract the global context from the encoder output with a multi-resolution wavelet decomposition. As shown in the experiments, WCNN outperforms the baseline CNN variants and achieves state-of-the-art semantic segmentation performance on the Cityscapes dataset.
In future work, we will evaluate WCNNs for other dense pixelwise prediction tasks, e.g., depth estimation and optical flow estimation. We will also perform an ablation study of the wavelet pyramid to evaluate different pyramid configurations. It is also interesting to extend WCNN to different wavelet basis functions or, ultimately, to learn the optimal basis functions with CNNs.
\section{\textbf{Introduction}}
The \textit{Bouchaud trap model} (B.T.M.) is a continuous time random walk $X$ on a graph $\cal G$ with random jump rates. To each vertex $x$ of $\cal G$ we assign a positive number $\tau_x$
where $(\tau_x)_{x \in \cal{G}}$ is an i.i.d. sequence such that
\begin{equation}\label{heavytails}
\lim_{u\rightarrow\infty}u^\alpha\mathbb{P}[\tau_x\geq u]=1
\end{equation}
with $\alpha \in (0,1)$. This means that the distribution of $\tau_x$ has heavy tails.
Each visit of $X$ to $x\in \cal{G}$ lasts an exponentially distributed time with mean $\tau_x$.
Let $S(k)$ be the time of the $k$-th jump of $X$. $(S(k),k\in\mathbb{N})$ is called the \textit{clock process} of $X$.
Let $Y_k:=X(S(k))$ be the position of $X$ after the $k$-th jump. $(Y_k:k\in\mathbb{N})$ is called the \textit{embedded discrete time random walk} associated to $X$.
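For intuition, the objects just defined (the traps, the clock process and the embedded walk, here with a drift as studied below) can be simulated with a short sketch. This is an illustration only; all names are ours, and the trap law is taken as Pareto so that $\mathbb{P}[\tau_x\geq u]=u^{-\alpha}$ for $u\geq1$, which satisfies the tail condition above.

```python
import numpy as np

def btm_clock(alpha, eps, n_jumps, seed=0):
    """Simulate the clock process S(k) of a one-dimensional B.T.M.
    with drift eps: jump right with probability (1+eps)/2.

    Traps tau_x are i.i.d. with P[tau_x >= u] = u^{-alpha} (u >= 1),
    drawn lazily as sites are visited; each visit to x lasts an
    exponential time with mean tau_x."""
    rng = np.random.default_rng(seed)
    tau = {}                       # trap depths, one per visited site
    pos = 0
    S = np.zeros(n_jumps + 1)
    for k in range(n_jumps):
        if pos not in tau:
            tau[pos] = rng.pareto(alpha) + 1.0  # Pareto(alpha) tail
        S[k + 1] = S[k] + tau[pos] * rng.exponential()
        pos += 1 if rng.random() < (1 + eps) / 2 else -1
    return S
```

For small $\alpha$ the increments of $S(k)$ are dominated by the rare deep traps, which is the mechanism behind the heavy-tailed, non-Markovian limits discussed below.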
This model was introduced by J.-P. Bouchaud in \cite{weakergodicitybreaking} and has been studied by physicists as a toy model for the analysis of the dynamics of some complex systems such as spin-glasses.
More precisely, each vertex $x$ of $\cal{G}$ corresponds to a metastable state of the complex system, and $X$ represents the trajectory of the system over its phase space.
One of the phenomena that this model has helped to understand is that of aging, a characteristic feature of the slow dynamics of many
metastable systems. For an account of the physical literature on the B.T.M. we refer to \cite{phys}.
The model has also been studied by mathematicians on different graphs, exhibiting a variety of behaviors. In \cite{fin}, Fontes, Isopi and Newman analyze the one-dimensional case ($\cal{G}=\mathbb{Z}$) where the walk $X$ is symmetric.
They obtain a scaling limit for $X$ which is called the \textit{Fontes-Isopi-Newman} (F.I.N.) singular diffusion. This diffusion is a speed measure change of a
Brownian motion by a random, purely atomic measure $\rho$, where $\rho$ is the Stieltjes measure associated to an $\alpha$-stable subordinator.
Different aging regimes for the one-dimensional case were found by Ben Arous and \v{C}ern\'{y} in \cite{bc}. In higher dimensions ($\cal G=\mathbb{Z}^d, d \geq 2$), the symmetric model has a behavior
completely different from the one-dimensional case, as shown by Ben Arous and \v{C}ern\'{y} in \cite{zetade}, and by Ben Arous, \v{C}ern\'{y} and Mountford in \cite{two}.
In these papers, a scaling limit and aging results were obtained for $X$.
The scaling limit is called the \textit{fractional kinetic process} (F.K.P.),
which is a time change of a $d$-dimensional Brownian motion by the inverse of an $\alpha$-stable subordinator.
In \cite{bbg} and \cite{bbg2}, Ben Arous, Bovier and Gayrard obtained aging properties of the model on the complete graph. A study of this walk for a wider class of graphs can be found
in \cite{universal}. For a general account of the mathematical study of the model, we refer to \cite{bcnotes}.
The difference between the one dimensional case and the model in higher dimensions can be understood as follows.
We can express the clock process $S(k)$ of $X$ as $S(k)=\sum_{i=0}^{k-1}\tau_{Y_i}e_i$, where the $e_i$ are standard i.i.d. exponential random variables. Thus, the increments of
$S(k)$ are the depths of the traps $(\tau_x)_{x\in\cal{G}}$ as sampled by $Y_k$.
In the model in dimensions higher than two, the embedded discrete time random walk $Y_k$ is transient (the case $d=2$ is more delicate). Thus $Y_k$ will sample each trap $\tau_x$ a finite number of times.
That implies that $S(k)$ does not have long range interactions with its past and its scaling limit will be a Markovian process, which is an $\alpha$-stable subordinator.
On the other hand, in the one-dimensional symmetric B.T.M., we have that the embedded discrete time random walk $Y_k$ is recurrent.
Thus $Y_k$ will sample each trap $\tau_x$ an infinite number of times.
In this case, $S(k)$ has long range interactions with its past and its scaling limit will be non-Markovian.
Furthermore, the clock process $S(k)$ will converge to the local time of a Brownian motion integrated against the random measure $\rho$.
Here $\rho$ plays the role of a scaling limit for the environment $(\tau_x)_{x\in\mathbb{Z}}$.
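The mechanism just described can be illustrated numerically. The following Python sketch (purely illustrative, not part of any proof; all names are ours) builds the clock process $S(k)=\sum_{i=0}^{k-1}\tau_{Y_i}e_i$ of a symmetric walk in an i.i.d. heavy-tailed environment, with exact Pareto traps so that $u^{\alpha}P[\tau_x\geq u]=1$ for $u\geq1$.

```python
import random

def clock_process(alpha, n_steps, seed=0):
    """Clock process S(k) = sum_{i<k} tau_{Y_i} e_i of a symmetric walk on Z
    in an i.i.d. Pareto(alpha) trap environment (sampled lazily)."""
    rng = random.Random(seed)
    taus = {}

    def tau(x):
        # P[tau >= u] = u^(-alpha) for u >= 1
        if x not in taus:
            taus[x] = (1.0 - rng.random()) ** (-1.0 / alpha)
        return taus[x]

    y, s, clock = 0, 0.0, [0.0]
    for _ in range(n_steps):
        s += tau(y) * rng.expovariate(1.0)    # mean-1 exponential e_i
        clock.append(s)
        y += 1 if rng.random() < 0.5 else -1  # symmetric embedded walk Y
    return clock

S = clock_process(alpha=0.5, n_steps=2000)
```

For $\alpha<1$ one observes that a handful of deep traps, revisited many times by the recurrent walk, contribute most of $S(k)$, in line with the discussion above.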
It is natural to ask whether we can find intermediate behaviors between the transient case $(d\geq2)$ and the recurrent case $(d=1)$. If we introduce a drift to the one-dimensional B.T.M., the embedded discrete random walk becomes transient. Thus, intermediate behaviors between the transient and the recurrent case might appear when one analyzes a sequence of one-dimensional B.T.M.'s with a drift that decreases to $0$ as we rescale the walks. In this paper we study this question, showing that the speed of decay of the drift determines the long-term behavior of the model and exhibiting a sharp phase transition in terms of the type of limiting processes obtained. We next describe more precisely the way in which we define the B.T.M. with drift and the results that are obtained in this paper.
For each $\epsilon>0$, denote by $X^{\epsilon}$ the B.T.M. on $\mathbb{Z}$ where the transition probabilities of the embedded discrete time random walk are $\frac{1+\epsilon}{2}$ to the right and $\frac{1-\epsilon}{2}$ to the left. We will call this process the B.T.M. with drift $\epsilon$.
For $a\geq0$, consider a rescaled sequence of B.T.M.'s with drift $n^{-a}$,
$(h_a(n)X^{n^{-a}}(tn);t\geq0)$, indexed by $n$, where $h_a(n)$ is an
appropriate space scaling depending on $a$.
We will see that when the drift decays slowly (small $a$), the sequence
of walks converges to the inverse of an $\alpha$-stable subordinator, whereas
if the drift decays fast (large $a$) the limiting process is the F.I.N.
diffusion. As these two possibilities are qualitatively different, we are led to ask whether there is a gradual interpolation between these two behaviors as the
speed of decay changes, or a sharp transition between them. We establish that there is a sharp transition between the two
scaling limits, that at the critical speed of decay a new process, not previously observed, appears, and that the transition happens at $a=\alpha/(\alpha+1)$.
As the main theorem of this paper, we prove that, depending on the value of
$a$, there are three different scaling limits:
\medskip
\begin{itemize}
\item
\textbf{Supercritical case} ($a<\alpha/(\alpha+1)$).
The sequence of walks converges to the inverse of an $\alpha$-stable subordinator.
\item
\textbf{Critical case} ($a=\alpha/(\alpha+1)$).
The sequence of walks converges to a process which is a speed measure change of a Brownian motion with drift that we will call the \textit{drifted F.I.N. diffusion}.
\item
\textbf{Subcritical case } ($a>\alpha/(\alpha+1)$).
The sequence of walks converges to the F.I.N. diffusion.
\end{itemize}
\medskip
\noindent The case $a=0$ (contained in the supercritical case), which corresponds to a constant drift, was already addressed by Zindy in \cite{zindy}.
Let us now make a few remarks concerning the proof of our main theorem.
The strategy of the proof for the supercritical case is a generalization of the method
used in \cite{zindy} and relies on the analysis of the sequence of processes of first hitting times
$(H^n_b(x);x \in [0,nS])$ ($ S $ is fixed, $b>0$) defined as
\begin{equation}\label{hitting}
H^n_b(x):=\inf\{t:X^{n^{-b}}(t) \geq x\} .
\end{equation}
We show that these processes (properly rescaled) converge to an $\alpha$-stable
subordinator. From that, it follows that the maximum of the walks converges to the inverse of an $\alpha$-stable subordinator.
This part of the proof requires some care, because, as we are working with a sequence of walks with variable drift, we cannot apply directly the methods used in \cite{zindy}. It turns out that we have to choose $b$ properly to obtain a sequence of walks with the desired drift as we invert the hitting time processes.
Then, it is easy to pass from the maximum of the walk to the walk itself.
The proof corresponding to the critical case follows the arguments used in
\cite{fin}. There, rescaled symmetric one-dimensional B.T.M.'s are expressed as speed measure changes of a Brownian motion through a random speed measure. But here we are working with asymmetric walks, so we cannot use the expression employed there. To treat the asymmetry of the walks, we use a Brownian motion with drift instead of a Brownian motion. That is, we express each walk $X^{n^{-\alpha/(\alpha+1)}}$ as a speed measure change of a Brownian motion with drift, and then prove convergence of the sequence of speed measures to $\rho$.
The latter is achieved by means of a coupling of the environments.
In the subcritical case, although we obtain the same scaling limit as in \cite{fin} (a F.I.N. diffusion), again, because of the asymmetry of the model, we cannot work with the expression used there. We deal with this obstacle using, besides a random speed measure, a scaling function.
That is, we express the rescaled walks as time-scale changes of a Brownian motion. Then we prove that the scale change can be
neglected and show convergence of the sequence of speed measures to the random measure $\rho$.
The organization of the paper is as follows. In section \ref{results} we give the definition of the model and state our main results. There we also give simple heuristic arguments to understand the transition at $a=\alpha/(\alpha+1)$.
In section \ref{super} we obtain the behavior for the supercritical
case, and in section \ref{critical} we obtain the scaling limit for the
critical case.
The behavior for the subcritical case is obtained in section \ref{sub}.
Finally, we would like to mention that while preparing the
final version of this article we have learned that Theorem \ref{main}
has been independently obtained by Gantert, M\"{o}rters and Wachtel
\cite{gmw}. There, they also obtain aging results for the
B.T.M. with vanishing drift.
\textbf{Acknowledgements}:
The author was supported by a fellowship of the National Commission on Science and Technology of Chile (Conicyt)\#29100243.
This paper contains material presented at the probability seminar at the P.U.C. of Chile in September 2009.
\section{\textbf{Notations and Main Results}}\label{results}
A Bouchaud trap model on $\mathbb{Z}$ with drift $\epsilon$, $(X^\epsilon(t);t\in[0,\infty))$, is a homogeneous Markov process with jump rates
\begin{equation}\label{generator}
c (x,y) := \left\lbrace
\begin{array}{l l}
(1+\epsilon)\tau_x^{-1}/2 & \text{if } y=x+1, \\
(1-\epsilon)\tau_x^{-1}/2 & \text{if } y=x-1,
\end{array}
\right.
\end{equation}
\noindent
where $\tau=(\tau_x)_{x\in \mathbb Z}$ are positive, i.i.d. under a measure $P$ and satisfy
\begin{equation}
\label{tail}\lim_{u\rightarrow\infty}u^{\alpha} P[\tau_x\geq u]=1.
\end{equation}
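The dynamics above can be simulated directly; the following Python sketch (purely illustrative and not part of any proof; all names and parameter values are ours) uses exact Pareto traps $\tau_x=(1-U)^{-1/\alpha}$, for which $u^{\alpha}P[\tau_x\geq u]=1$ for all $u\geq 1$, waits an exponential time of mean $\tau_x$ at each site, and then jumps right or left with probabilities $(1\pm\epsilon)/2$, reproducing the rates (\ref{generator}).

```python
import random

def btm_with_drift(alpha, eps, t_max, seed=1):
    """One trajectory of X^eps on Z: wait an Exp(mean tau_x) time at x,
    then jump to x+1 w.p. (1+eps)/2 and to x-1 w.p. (1-eps)/2."""
    rng = random.Random(seed)
    taus = {}

    def tau(x):
        # environment sampled lazily; P[tau >= u] = u^(-alpha) for u >= 1
        if x not in taus:
            taus[x] = (1.0 - rng.random()) ** (-1.0 / alpha)
        return taus[x]

    t, x, path = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        t += rng.expovariate(1.0 / tau(x))        # total jump rate at x is 1/tau_x
        x += 1 if rng.random() < (1 + eps) / 2 else -1
        path.append((t, x))
    return path

path = btm_with_drift(alpha=0.5, eps=0.1, t_max=100.0)
```

Deep traps show up in such a trajectory as long flat stretches of the path, which is the phenomenology behind the clock-process analysis below.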
For any topological space $E$, $\mathcal{B}(E)$ will stand for the $\sigma$-algebra of Borelians of $E$.
$\mathbb{P}_{\tau}^x$ and $\mathbb{E}_{\tau}^x$ will denote the probability and expectation conditioned on the environment $\tau=(\tau_x)_{x\in\mathbb{Z}}$
and with $X^{\epsilon}(0)=x$.
These probabilities are often referred to as quenched probabilities.
We define $\mathbb{P}^x$ on $\mathbb{Z}^{\mathbb{N}}\times{\mathbb{R}^+}^{\mathbb{Z}}$ by requiring that
for every $A\in\mathcal B(\mathbb{Z}^{\mathbb{N}})$ and $B\in\mathcal B({\mathbb{R}^+}^{\mathbb{Z}})$,
$\mathbb{P}^x[A\times B]:=\int_B \mathbb{P}^x_{\tau}[C_{\tau}] P(d\tau),$
where $C_{\tau}:=\{x\in\mathbb{Z}^{\mathbb{N}}:(x,\tau)\in A\times B\}.$
$\mathbb{P}^x$ is called the annealed probability. Note that $X^{\epsilon}$ is Markovian w.r.t. $\mathbb{P}^x_{\tau}$ but non-Markovian w.r.t. $\mathbb{P}^x$.
$\mathbb{E}^x$ is the expectation associated to $\mathbb{P}^x$. $\mathbb{P}^0$ and $\mathbb{E}^0$ will be simply denoted as $\mathbb{P}$ and $\mathbb{E}$. Also $\mathbb{P}_{\tau}$
and $\mathbb{E}_{\tau}$ will stand for $\mathbb{P}_\tau^0$ and $\mathbb{E}_\tau^0$ respectively.
These notations will be used with the same meaning for all the processes appearing in this paper.
We have to make some definitions in order to state our main result:
let $B(t)$ be a standard one-dimensional Brownian motion started at zero and $l(t,x)$ be a bi-continuous version of its local time.
Given any locally finite measure $\mu$ on $\mathbb{R}$, denote
$$\phi_{\mu}(s):=\int_{\mathbb{R}}l(s,y)\mu(dy),$$
and its right continuous generalized inverse by
$$\psi_{\mu}(t):=\inf\{s>0:\phi_{\mu}(s)>t \}.$$
The right continuous generalized inverse exists by definition, is increasing and, as its name indicates, it is a right continuous function.
Then we define the speed measure change of $B$ with speed measure $\mu$, $X(\mu)(t)$ as
\begin{equation}\label{timechange}
X(\mu)(t):=B(\psi_{\mu}(t)).
\end{equation}
We also need to define speed measure changes of a drifted Brownian motion.
Let $C(t):=B(t)+t$. We know that $C(t)$ has a bi-continuous local time $\tilde{l}(t,y)$.
Given any locally finite measure $\mu$ on $\mathbb R$ we
define
$$\tilde{\phi}_{\mu}(s):=\int_{\mathbb R} \tilde{l}(s,y) \mu (dy),$$
and its generalized right-continuous inverse by
$$\tilde{\psi}_{\mu}(t):= \inf\{s>0:\tilde{\phi}_{\mu}(s)>t\}.$$
Then we define $\tilde{X}(\mu)(t)$ (the speed measure change of $C$ with speed measure $\mu$) by
\begin{equation}\label{timechange2}
\tilde{X}(\mu)(t):=C(\tilde{\psi}_{\mu}(t)).
\end{equation}
By changing the starting point of our underlying Brownian motion $B$, we can change the starting point of $\tilde{X}(\mu)$ and $X(\mu)$.
Let $(x_i,v_i)$ be an inhomogeneous Poisson point process on
$\mathbb{R}\times\mathbb{R}^+$, independent of $B$ with intensity measure
$\alpha v^{-1-\alpha}dxdv$. We define the random measure $\rho$ as
\begin{equation}\label{rho}
\rho:=\sum v_i \delta_{x_i}.
\end{equation}
The diffusion $(Z(t);t\in[0,T])$ defined by $Z(t):=B(\psi_{\rho}(t))$ is called the \textit{F.I.N. diffusion}.
We also define the \textit{drifted F.I.N. diffusion} $\tilde{Z}(t)$ as
$\tilde{Z}(t):=C(\tilde{\psi}_{\rho}(t))$.
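For illustration, the atoms of $\rho$ with weight $v>\delta$ on a window $[-L,L]$ can be sampled as follows (a sketch under the truncation $v>\delta$; names are ours). Restricted to $v>\delta$, the intensity $\alpha v^{-1-\alpha}\,dx\,dv$ has total mass $\delta^{-\alpha}$ per unit length, and conditionally on $v>\delta$ the weight has tail $P[v\geq u]=(u/\delta)^{-\alpha}$.

```python
import math
import random

def sample_rho(alpha, L, delta, seed=2):
    """Atoms (x_i, v_i) of rho with v_i > delta on [-L, L]:
    a Poisson(2*L*delta^(-alpha)) number of atoms, positions uniform
    on [-L, L], weights distributed as a Pareto tail truncated at delta."""
    rng = random.Random(seed)
    lam = 2 * L * delta ** (-alpha)
    # Poisson(lam) by CDF inversion (the stdlib has no Poisson sampler)
    u, n, term, acc = rng.random(), 0, math.exp(-lam), math.exp(-lam)
    while acc < u:
        n += 1
        term *= lam / n
        acc += term
    return [(rng.uniform(-L, L), delta * (1.0 - rng.random()) ** (-1.0 / alpha))
            for _ in range(n)]

atoms = sample_rho(alpha=0.5, L=1.0, delta=0.25)
```

Letting $\delta\downarrow0$ recovers $\rho$; since $\alpha<1$, the total mass of the small atoms added as $\delta$ decreases remains locally finite.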
$D[0,T]$ will denote the space of c\`adl\`ag functions from $[0,T]$ to $\mathbb{R}$. $(D[0,T],M_1)$, $(D[0,T],J_1)$ and $(D[0,T],U)$ will stand for $D[0,T]$ equipped with the
Skorohod-$M_1$, Skorohod-$J_1$ and uniform topologies respectively. We refer to \cite{witt} for an account of these topologies.
We define $(X^{(n,a)}(t);t\in[0,T])$, a rescaling of the walk with drift $n^{-a}$, by
\begin{equation}
X^{(n,a)}(t):=\left\lbrace
\begin{array}{l l}
\frac{X^{n^{-a}}(tn)}{n^{\alpha(1-a)}} & \text{if } a<\frac{\alpha}{1+\alpha},\\[2mm]
\frac{X^{n^{-a}}(tn)}{n^{\alpha/(\alpha+1)}} & \text{if } a\geq\frac{\alpha}{1+\alpha}.
\end{array}
\right.
\end{equation}
Let $V_{\alpha}$ be an $\alpha$-stable subordinator started at zero. That is, $V_{\alpha}$ is the increasing L\'evy process with Laplace transform $\mathbb{E}[\exp(-\lambda V_{\alpha}(t))]=\exp(-t\lambda^{\alpha}).$
We are now in a position to state the main result of this paper.
\begin{teo}\label{main}
For all $T>0$:
\begin{itemize}
\item[(i)] If $a<\alpha/(\alpha+1)$ we have that
$(X^{(n,a)}(t);t\in [0,T])$
converges in distribution to
$(V_{\alpha}^{-1}(t); t\in [0,T])$ in $(D[0,T],U)$, where $V_{\alpha}^{-1}$
is the right continuous generalized inverse of
$V_{\alpha}$.
\item[(ii)] If $a=\alpha/(\alpha+1)$ we have that $(X^{(n,a)}(t); t \in [0,T])$
converges in distribution
to the drifted F.I.N. diffusion $(\tilde{Z}(t); t \in [0,T])$ on
$(D[0,T],U)$.
\item[(iii)] If $a>\alpha/(\alpha+1)$ we have that $(X^{(n,a)}(t); t \in [0,T])$ converges in distribution
to the F.I.N. diffusion $(Z(t); t \in [0,T]) $ on $(D[0,T],U)$.
\end{itemize}
\end{teo}
We present heuristic arguments to understand the transition at $a=\frac{\alpha}{1+\alpha}$. First we analyze a sequence of discrete time random walks.
Let $(S^{\epsilon}(i),i\in\mathbb{N})$ be a simple asymmetric random walk with drift $\epsilon$,
$S^{\epsilon}(i):=\sum_{k=1}^ib_k^{\epsilon},$
where $(b_k^{\epsilon})_{k\in\mathbb{N}}$ is an i.i.d. sequence of random variables with:
$\mathbb{P}[b^{\epsilon}_k=1]=\frac{1+\epsilon}{2};\textrm{ }\mathbb{P}[b^{\epsilon}_k=-1]=\frac{1-\epsilon}{2}.$
We want to find the possible scaling limits of $(S^{\epsilon(n)}(in);i \in [0,T])$, depending on the speed of decay of $\epsilon(n)$ to $0$ as $n\rightarrow\infty$.
We couple the sequence of walks $S^{\epsilon(n)}$ in the following way: Let $(U_i)_{i\in\mathbb{N}}$ be an i.i.d. sequence of uniformly distributed random variables taking values on
$[0,1]$. We require that $S^{\epsilon(n)}$ takes its $i$-th step to the right ($b_i^{\epsilon(n)}=1$) if $U_i>\frac{1-\epsilon(n)}{2}$ and to the left otherwise.
For each walk, we can decompose the steps into two groups: the first group
is given by the steps $i$ such that $\frac{1-\epsilon(n)}{2}<U_i<\frac{1+\epsilon(n)}{2}$ and the second group
consists of the remaining steps.
We can think that the first group of steps takes account of the drift effect and the second one takes account of the symmetric fluctuations of the walk.
If the walk has taken $n$ steps, then the first group contains about $n\epsilon(n)$ steps, and the second group produces fluctuations of order $\sqrt n$.
The drift effect will dominate the behavior if $\sqrt n=o(n\epsilon(n))$. In this case we will have a ballistic (deterministic) process as a scaling limit.
If $n\epsilon(n)=o(\sqrt n)$ the fluctuations will dominate and we will have a Brownian motion as scaling limit.
Finally, the two behaviors will be of the same order if $n\epsilon(n)\approx\sqrt n$, that is, if $\epsilon(n)\approx n^{-1/2}$, and a Brownian motion with drift will be the scaling limit.
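This comparison can be made quantitative: after $n$ steps the walk $S^{\epsilon}$ has mean $n\epsilon$ and standard deviation $\sqrt{n(1-\epsilon^2)}\approx\sqrt n$, so for $\epsilon(n)=n^{-b}$ the drift-to-fluctuation ratio is of order $n^{1/2-b}$. A quick check (illustrative snippet; names are ours):

```python
import math

def drift_to_fluctuation(b, n):
    """Ratio (mean displacement)/(fluctuation scale) = n*eps(n)/sqrt(n)
    for the simple walk with drift eps(n) = n^(-b); equals n^(1/2 - b)."""
    eps = n ** (-b)
    return n * eps / math.sqrt(n)

# b < 1/2: ratio blows up (ballistic limit); b = 1/2: ratio of order one
# (Brownian motion with drift); b > 1/2: ratio vanishes (Brownian limit).
ratios = {b: [drift_to_fluctuation(b, 10 ** k) for k in (4, 6, 8)]
          for b in (0.25, 0.5, 0.75)}
```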
The same reasoning can now be used to understand the change of behavior at $a=\alpha/(\alpha+1)$ for the sequence of walks $(X^{n^{-a}}(tn),t\in[0,T])_{n \in \mathbb N}$.
In order to apply the preceding arguments we first have to estimate the number of steps that $X^{n^{-a}}$ has taken up to time $Tn$. To simplify we take $T=1$. First, suppose that $X^{n^{-a}}(n)$ is of order $n^{u}$, where $u$ is to be found.
We know that after $k$ steps, a walk with drift $n^{-a}$ is approximately at site $kn^{-a}$, so it takes about $n^{u+a}$ steps
to reach site $n^{u}$. Thus, we can also deduce that up to time $n$, $X^{n^{-a}}$ has
visited each site approximately $n^{a}$ times.
As the distribution of $\tau_i$ satisfies (\ref{tail}), the sum $\sum_{i=0}^{n^u}\tau_i$ is of the same order as $\max_{0\leq i \leq n^u}\tau_i$, and both are of order $n^{u/\alpha}$.
We can estimate the time needed to arrive at $n^{u}$ as the depth of the deepest trap found ($\approx n^{u/\alpha}$) multiplied by the number of visits to that trap ($\approx n^a$). This gives that $n\approx n^{\frac{u}{\alpha}+a}$.
But we know, by definition of $u$, that $X^{n^{-a}}$ arrives at site $n^{u}$ approximately at time $n$. It follows that $1=(u/\alpha)+a$, which yields $u=(1-a)\alpha$.
This means that the number of steps that $X^{n^{-a}}$ has taken up to time $n$ is of order $n^{(1-a)\alpha+a}$.
Again, we can decompose the steps of $X^{n^{-a}}$ into two groups. The first group accounts for the drift effect, and the second one accounts for the fluctuations.
The first group will have approximately $n^{-a+[(1-a)\alpha+a]}$ steps and the second group will give a contribution to the position of order $n^{\frac{(1-a)\alpha+a}{2}}$.
Now it is easy to see that the ballistic behavior and the fluctuations will be of the same order if and only if
$[(1-a)\alpha + a]/2=(1-a)\alpha$, that is,
$a=\alpha/(1+\alpha).$
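The exponent balance above is elementary algebra and can be verified directly (illustrative check; names are ours): the gap between the fluctuation exponent $[(1-a)\alpha+a]/2$ and the ballistic exponent $(1-a)\alpha$ equals $(a-(1-a)\alpha)/2$, which vanishes exactly at $a=\alpha/(1+\alpha)$.

```python
def exponent_gap(a, alpha):
    """Fluctuation exponent minus ballistic exponent; negative means the
    drift dominates (supercritical), positive means fluctuations dominate."""
    return ((1 - a) * alpha + a) / 2 - (1 - a) * alpha

for alpha in (0.2, 0.5, 0.8):
    a_c = alpha / (1 + alpha)
    assert abs(exponent_gap(a_c, alpha)) < 1e-12   # balance at a = alpha/(1+alpha)
    assert exponent_gap(a_c - 0.05, alpha) < 0     # ballistic side (small a)
    assert exponent_gap(a_c + 0.05, alpha) > 0     # diffusive side (large a)
```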
\section {\textbf{The Supercritical Regime}}\label{super}
The proof for the constant drift case ($a=0$) in \cite{zindy} goes roughly as follows: first, he proves that the sequence of rescaled first hitting times, $(n^{-1/\alpha}\inf\{s\geq0:X^{\epsilon}(ns)\geq x\}: x\geq 0)$, converges to an $\alpha$-stable subordinator. Then, using that the right continuous generalized inverse of the process of first hitting times is the maximum of $X^{\epsilon}(t)$, he deduces that $(\max\{n^{-1}X^{\epsilon}(n^{1/\alpha}s): s\leq t\}: t\geq 0)$ converges to the inverse of an $\alpha$-stable subordinator. Finally, he shows that the walk and its maximum are close.
For the proof of part (i) of theorem \ref{main} we cannot follow the proof of \cite{zindy} in a straightforward way:
suppose we show that a properly rescaled sequence of first hitting time processes $(p_a(n)H^n_a(nx):x\in\mathbb{R}_+)$ (where $p_a(n)$ is an appropriate scaling) converges to an $\alpha$-stable subordinator. Then, by inverting the processes, we get that the sequence $(\max\{n^{-1}X^{n^{-a}}(p_a(n)^{-1}s):s \leq t\}: t\in\mathbb{R}_+)$ converges to the inverse of an $\alpha$-stable subordinator. But we are searching for a limit of $(\max\{d_a(n)X^{n^{-a}}(tn):t\in\mathbb{R}_+\})$ (where $d_a(n)$ is an appropriate space scaling). That is, we want to obtain the limit of a sequence of rescaled walks where the drift decays as $n^{-a}$ when the time is rescaled by $n$. But when we invert $(p_a(n)H^n_a(nx):x\in\mathbb{R}_+)$, we obtain the sequence $(\max\{n^{-1}X^{n^{-a}}(p_a(n)^{-1}s):s\leq t\}: t\geq 0)$, which is a sequence of maxima of rescaled walks in which the drift decays as $n^{-a}$ while the time is rescaled as $p_a(n)^{-1}$.
To solve this, we will prove that the limit of $(q_a(n)H^n_{b^{\ast}}(nx):x\in\mathbb{R}_+)$ is an $\alpha$-stable subordinator, where $q_a(n)$ is an appropriate scaling and $b^{\ast}$ sets an appropriate drift decay and depends on $a$.
Inverting, we will obtain that $(\max\{n^{-1}X^{n^{-b^{\ast}}}(q_a(n)^{-1}s):s\leq t\}:t\geq 0)$ converges to an $\alpha$-stable subordinator.
As we have said, we want the limit of a sequence of rescaled walks with
a drift that decays as $n^{-a}$ as the time parameter is rescaled by $n$.
Hence, when the time parameter is rescaled as $q_a(n)^{-1}$, the drift
should rescale as $q_a(n)^{a}$. Thus we need to choose $b^{\ast}$ so that $n^{-b^{\ast}}=q_a(n)^{a}$. But we know that $q_a(n)$ is the appropriate scaling for $(H^n_{b^{\ast}}(nx):x\in\mathbb{R}_+)$. Hence, $q_a(n)$ must be the order of magnitude of $H^n_{b^{\ast}}(n)$. That is $q_a(n)$ is of the order of the time that the walk $X^{n^{-b^{\ast}}}$ needs to reach $n$.
We now give a heuristic argument to find $q_a(n)$ and $b^{\ast}$.
When $X^{n^{-b^{\ast}}}(t)$ has taken $k$ steps, it is at a site of order $kn^{-b^{\ast}}$. So it takes about $n^{b^{\ast}+1}$ steps to reach site $n$.
We can think that the number of visits to each site $x$ is evenly
distributed. Then each site is visited about $n^{b^{\ast}}$ times before $X^{n^{-b^{\ast}}}$ hits $n$.
The time that the walk needs to reach $n$ is of the order of the time
spent in the largest trap. Thus we can estimate the total time spent by the
walk as the depth of the deepest trap (which is of order $n^{1/\alpha}$)
multiplied by the number of visits to that trap. This gives a time of order $n^{1/\alpha+b^{\ast}}$.
What the previous arguments show is that $X^{n^{-b^{\ast}}}(t)$ arrives at $n$ at time $t\approx n^{1/\alpha+b^{\ast}}$ ($q_a(n)\approx n^{1/\alpha +b^{\ast}}$).
But at that time we want to analyze a walk of drift $(n^{1/\alpha+b^{\ast}})^{-a}$.
That is, we need that
$a(1/\alpha+b^{\ast})=b^{\ast}.$
In this way we find that $b^\ast:=a/[(1-a)\alpha].$
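The two identities implicit in this choice, namely $a(1/\alpha+b^{\ast})=b^{\ast}$ and $1/\alpha+b^{\ast}=1/[(1-a)\alpha]$, are immediate to verify (illustrative check; names are ours):

```python
def b_star(a, alpha):
    """The drift-decay exponent b* = a / ((1 - a) * alpha)."""
    return a / ((1 - a) * alpha)

for alpha in (0.3, 0.5, 0.7):
    for a in (0.1, 0.3, 0.6):
        b = b_star(a, alpha)
        # the drift seen at the time scale q_a(n) ~ n^(1/alpha + b*)
        # decays again with exponent a
        assert abs(a * (1 / alpha + b) - b) < 1e-9
        # and the time-scale exponent collapses to 1/((1-a)*alpha)
        assert abs((1 / alpha + b) - 1 / ((1 - a) * alpha)) < 1e-9
```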
\subsection{The embedded discrete time walk}
For each natural number $n$, the \textit{clock process} $S^n$ is defined by $S^n(0):=0$
and by letting $S^n(k)$ be the time of the $k$-th jump of $X^{n^{-b^\ast}}$. $S^n$ is extended to all of $\mathbb{R}^+$ by setting $S^n(s):=S^n(\lfloor s\rfloor)$.
To each drifted walk $X^{n^{-b^\ast}}(t)$ we associate its corresponding \textit{embedded discrete time random walk} $(Y_i^{n^{-b^\ast}}: i \in \mathbb{N})$
defined by $Y_i^{n^{-b^\ast}}:=X^{n^{-b^\ast}}(t)$ for the $t$ satisfying $S^n(i)\leq t<S^n(i+1)$.
Obviously $Y_i^{n^{-b^\ast}}$ is a discrete time random walk with drift $n^{-b^\ast}$. We can write
$$S^n(k)=\sum_{i=0}^{k-1}\tau_{Y_i^{n^{-b^\ast}}} e_i,$$
where $(e_i)_{i\geq0}$ is an i.i.d. sequence of exponentially distributed random variables with mean $1$.
Define\\
$\epsilon=\epsilon(n):= n^{-b^\ast} $\\
$p=p(n):=(1+\epsilon (n))/2$ \\
$q=q(n):=(1-\epsilon (n))/2$ and\\
$\nu (n):= \lfloor c\log(n)n^{b^{\ast}}\rfloor$ with $c>2$.
Let $\Xi(x,k)=\Xi (x,k,n)$ be the probability that $Y_i^{\epsilon(n)}$, started at $x+1$, hits $x$ before $k$. Then we have that
$\Xi(x,k)= q+p\Xi(x+1,k)\Xi(x,k)$ and that $\Xi(k-2,k)=q.$
These observations give a difference equation and an initial
condition to compute $\Xi (x,k)$. Then we get that
\begin{equation}\label{psi}
\Xi (x,k)=r\frac{1-r^{k-x-1}}{1-r^{k-x}},
\end{equation}
where $r=r(n):=q(n)/p(n)$.
Using that formula we can see that the probability that the walk $Y_i^{\epsilon(n)}$ ever hits $x-1$ starting at $x$ is $r$.
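Formula (\ref{psi}) can be checked against the difference equation, the initial condition, and the $k\to\infty$ limit numerically (illustrative snippet; names are ours):

```python
def xi(x, k, r):
    """Closed form (ref. psi): probability that the walk started at x+1
    hits x before k, with r = q/p < 1."""
    return r * (1 - r ** (k - x - 1)) / (1 - r ** (k - x))

p = 0.55
q = 1 - p
r = q / p
k = 20
# initial condition: started at k-1, hitting k-2 before k takes one left step
assert abs(xi(k - 2, k, r) - q) < 1e-12
# difference equation Xi(x,k) = q + p * Xi(x+1,k) * Xi(x,k)
for x in range(k - 2):
    assert abs(xi(x, k, r) - (q + p * xi(x + 1, k, r) * xi(x, k, r))) < 1e-12
# as k -> infinity, Xi(x,k) -> r: probability of ever moving one step back
assert abs(xi(0, 10 ** 4, r) - r) < 1e-12
```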
We now present a backtracking estimate.
\begin{lem}\label{backtracking} Let $\cal {A}(n):=\{ \min_{0\leq i\leq j\leq \zeta_n(n)}(Y_j^{\epsilon(n)}-Y_i^{\epsilon(n)})\geq -\nu(n)\}$,
where $\zeta_n(i):=\min\{ k\geq 0 :Y_k^{\epsilon(n)}=i\}$. Then
$ \lim_{n \rightarrow \infty}\mathbb P [\cal A(n)]=1.$
\end {lem}
\begin{dem}
We can write
$$\cal A^c(n)=\bigcup_{x=0}^{n-1}\left\{\min_{\zeta_n(x)\leq i\leq \zeta_n(n)}(Y_i^{\epsilon(n)}-Y_{\zeta_n(x)}^{\epsilon(n)})< -\nu(n)\right\}.$$
Hence
$$\cal A^c(n)\subseteq\bigcup_{x=0}^{n-1}\left\{\min_{\zeta_n(x)\leq i}(Y_i^{\epsilon(n)}-Y_{\zeta_n(x)}^{\epsilon(n)})< -\nu(n)\right\}.$$
But, in order to go from $x$ to $x-\nu(n)$,
for each $j=x-1,\ldots,x-\nu(n)$, the
random walk $Y_i^{\epsilon(n)}$, starting from $j+1$, needs to
hit $j$ in finite time.
Hence, this requires
$\nu(n)$ realizations of independent events (by the strong Markov property), each of
probability $r(n)$. In other words
$\mathbb P[\cal A^c(n)]\leq nr(n)^{\nu(n)}= n(1-\frac{2}{1+n^{b^\ast}})^{\nu(n)}$,
which can be bounded by $ n(1-\frac{1}{n^{b^\ast}})^{\nu(n)}.$
Replacing $\nu(n)$ we obtain $n((1-\frac{1}{n^{b^\ast}})^{n^{b^\ast}})^{c\log(n)}.$
We can see that $(1-\frac{1}{n^{b^\ast}})^{n^{b^\ast}} \rightarrow e^{-1} \text
{ when } n\rightarrow \infty .$
Now, for $n$ big enough
$\left(1-\frac{1}{n^{b^\ast}}\right)^{n^{b^\ast}}\leq e^{-\frac{1}{2}}$. Then
$$\mathbb P[\cal A^c(n)]\leq nn^{-\frac{1}{2} c}.$$
But $c > 2$, so we get the result.
\end{dem}
Now we state the convergence result for the hitting time processes.
\begin{lem}\label{maximum}
Let
\begin{equation}\label{hittigntimenormalizad}
H^{(n)}(t):= \frac{H_{b^{\ast}}^{n}(tn)}{n^{(1/\alpha)+b^{\ast}}}.
\end{equation}
Then $(H^{(n)}(t); t\in [0,T])$ converges weakly to
$((\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha(t); t\in[0,T])$ on $(D[0,T],M_1)$, where $V_\alpha(t)$ is an $\alpha$-stable subordinator.
\end{lem}
The proof of this lemma will be given in subsection \ref{proofmax}.
We present the proof of part $(i)$ of Theorem \ref{main} using lemma \ref{maximum} and devote the rest of the section to the proof of lemma \ref{maximum}.
\subsection{Proof of (i) of Theorem \ref{main}}
Let us denote $$\bar{X}^n(t):=n^{-1}\max \{X^{n^{-b^\ast}}(sn^{(1/\alpha)+b^{\ast}}); s\in[0,t]\}.$$
First we will prove convergence in distribution of $\bar{X}^n$ to the (right continuous generalized) inverse of $(\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha$
in the uniform topology.
That is, we want to prove convergence in distribution of the inverse of $(H^{(n)}(t); t\in [0,T])$ to the inverse of
$((\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha(t); t\in[0,T])$ in the uniform topology.
Define $$\cal C (T,S) _n:=\{H^{(n)}(S) \geq T\}.$$
Then we have that, on $\cal{C} (T,S)_n$, the right continuous generalized inverse of $(H^{(n)}(s);s\in[0,S])$, restricted to $[0,T]$, is $(\bar{X}^{n}(t);t\in[0,T])$.
Let $T>0$ be fixed. By Lemma \ref{maximum}, we know that we can choose $S$ large enough
so that $\lim_{n\to\infty}\mathbb{P}[\cal C (T,S)_n]$ is as close to $1$ as we want.
Let $D^{\uparrow}[0,T]$ be the subset of $D[0,T]$ consisting of the increasing functions.
By corollary 13.6.4 of \cite{witt}, the inversion map from $(D^{\uparrow}[0,T],M_1)$ to $(D^{\uparrow}[0,T],U)$ is continuous at strictly increasing functions.
Lemma \ref{maximum} gives convergence in distribution of $(H^{(n)}(t); t\in [0,S])$ to $((\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha(t); t\in[0,S])$
in the Skorohod $M_1$ topology. We know that $V_{\alpha}$ is a.s. strictly increasing, that is,
$((\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha(t); t\in[0,S])\in D^\uparrow[0,S]$ almost surely.
So we can apply corollary 13.6.4 of \cite{witt} and deduce convergence in distribution of $\bar{X}^n$ to the inverse of
$(\frac{\pi\alpha}{\sin(\pi\alpha)})^{-1/\alpha}V_\alpha$ in the uniform topology.
As we have said previously, the inverse of $(H^{(n)}(s);s\in[0,S])$ is $(\bar{X}^{n}(t);t\in[0,T])$ in $\cal{C} (T,S)_n$. This proves
convergence of the maximum of the walk. To deduce convergence of the walk itself it suffices to show that the walk is close enough to its maximum
in the uniform topology.
That is, to prove the theorem, it is enough to show that for all $\gamma > 0$:
$$ \mathbb P\left[\sup_{0 \leq t \leq T}|n^{-1}X^{n^{-b^\ast}}(tn^{(1/\alpha)+b^{\ast}})-\bar{X}^{n}(t)| \geq \gamma\right]\rightarrow 0 .$$
Again, by Lemma \ref{maximum} we know that $\mathbb P[H^n_{b^{\ast}}(n\log(n))\geq Tn^{(1/\alpha)+b^{\ast}}]\rightarrow 1$. Hence, we just have to prove that
$$\mathbb{P}\left[\sup_{0 \leq t \leq H^n_{b^{\ast}}(n\log(n))} |n^{-1}X^{n^{-b^\ast}}(t)-n^{-1}\max \{X^{n^{-b^\ast}}(s); s\in[0,t]\}|\geq \gamma\right]\rightarrow 0.$$
That is,
$$ \mathbb{P}\left[\sup_{0 \leq k\leq \zeta_n(\lfloor n\log(n) \rfloor)}|Y^{\epsilon(n)}_k-\bar{Y}^{\epsilon(n)}_k| \geq
n \gamma \right]\rightarrow 0 ,$$
where $\bar{Y}^{\epsilon(n)}$ is the running maximum of $Y^{\epsilon(n)}$.
But, we can apply Lemma \ref{backtracking} to see that this is the case.
\subsection {The environment}
Here we give estimates concerning the environment.
For each $n\in\mathbb N$ define
$$g(n):=\frac{n^{1/\alpha}}{(\log(n))^\frac{2}{1-\alpha}}.$$
Now, for each site
$x\in \mathbb N$, we say that
$x$ is an {\it $n$-deep trap} if $\tau_x \geq g(n)$.
Otherwise we will say that $x$ is an {\it $n$-shallow trap}.
We now order the set of $n$-deep traps according to their position
from left to right. Then call $\delta_1(n)$ the leftmost $n$-deep trap
and in general call for $j\ge 1$, $\delta_j(n)$ the $j$-th $n$-deep trap. The number of $n$-deep traps in $[0,n]$ is denoted by $\theta (n)$.
Let us now define
$$\cal E_1 (n) := \left\{n\varphi(n)\left(1-\frac{1}{\log(n)}\right)\leq \theta(n) \leq n\varphi(n)\left(1+\frac{1}{\log(n)}\right)\right\},$$
$$\cal E_2 (n):=\{\delta_1(n)\wedge(\min_{2\leq j\leq \theta(n)}(\delta_j(n)-\delta_{j-1}(n)))\geq\rho(n)\} ,$$
$$\cal E_3 (n):=\{\max_{-\nu(n)\leq x\leq 0} \tau_x <g(n) \}, \textrm{ and}$$
$$\cal E(n):=\cal E_1 (n)\cap\cal E_2 (n)\cap\cal E_3 (n),$$
where $\rho(n):= n^\kappa$ with $0<\kappa<1$, and
$\varphi (n):= \mathbb P[\tau_x\geq g(n)] $.
\begin{lem} We have that
$ \lim_{n\rightarrow\infty} \mathbb P [\cal E (n)]=1.$
\end {lem}
\begin {dem} $\theta (n)$ is binomial with parameters $(n,\varphi(n))$. $\cal E_1$ is estimated using the Markov inequality. To
control $\cal E_2$ it is enough to see that in $\{0,\ldots,n\}$ there are $O(n\rho(n))$
pairs of points at a distance less than $\rho (n)$. The estimate on $\cal E_3$ is trivial.
\end {dem}
\subsection {Time control}
In this subsection we prove results about the time spent by the walk on the traps.
\subsubsection {Shallow traps} Here we will show that the time that the walks spend in the shallow traps is negligible.
\begin{lem}\label{shallow}
Let $\cal I(n):=\left\{\sum_{i=0}^{\zeta_n(n)} \tau_{Y^{\epsilon(n)}_i} e_i \mathbf{1}_{\{\tau_{Y^{\epsilon(n)}_i}\leq g(n)\} } \leq \frac{n^{1/[(1-a)\alpha]}}{\log (n)} \right\}
$. Then
\begin{equation} \label{tpp}
\mathbb P[\cal I(n)]\rightarrow 1 \textrm{ as } n\rightarrow\infty .
\end{equation}
\end{lem}
\begin{dem}
We have that
$\mathbb P[\cal I(n)^c]=\mathbb P[\cal I(n)^c\cap\cal E(n)]+ o(1) $.
Using the Markov inequality it suffices to show that
$$\mathbb E \left[\sum_{i=0}^{\zeta_n(n)}\tau_{Y^{\epsilon(n)}_i}e_i1_{\{\tau_{Y^{\epsilon(n)}_i}<g(n)\}}1_{\{Y^{\epsilon(n)}_i\geq-\nu(n)\}}\right]
=o\left(\frac{n^{1/[(1-a)\alpha]}}{\log(n)}\right). $$
The number of visits of $Y^{\epsilon(n)}_i$ to $x$ before time $\zeta_n(n)$ is $1+G(x,n)$, where $G(x,n)$ is
a geometrically distributed random variable of parameter $1-(q+p\Xi(x,n))$ (the parameter is the probability that $Y^{\epsilon(n)}_i$, starting at $x$, hits
$n$ before returning to $x$). Also
\begin{equation}\label{trampaschicas}
\mathbb E_\tau\left[\sum_{i=0}^{\zeta_n(n)}\tau_{Y^{\epsilon(n)}_i}e_i1_{\{\tau_{Y^{\epsilon(n)}_i}<g(n)\}}1_{\{Y^{\epsilon(n)}_i\geq-\nu(n)\}}\right]
\leq\sum_{x=-\nu(n)}^n\tau_x(1+\mathbb E_\tau[G(x,n)])1_{\{\tau_x<g(n)\}}.
\end{equation}
Using (\ref{psi}) we can deduce that $(1+\mathbb E_\tau[G(x,n)])\leq\frac{1}{p(1-r(n))}\leq cn^{b^\ast}$.
So, averaging with respect to the environment in (\ref{trampaschicas}) we get
$$\mathbb E \left[\sum_{i=0}^{\zeta_n(n)}\tau_{Y^{\epsilon(n)}_i}e_i1_{\{\tau_{Y^{\epsilon(n)}_i}<g(n)\}}1_{\{Y^{\epsilon(n)}_i\geq-\nu(n)\}}\right]\leq C n^{1+b^\ast}\mathbb
E[\tau_0 1_{\{\tau_0<g(n)\}}].$$
Also
$$\mathbb E[\tau_0 1_{\{\tau_0<g(n)\}}]\leq \sum_{j=0}^{\infty} (1/2)^jg(n)\mathbb P[\tau_0>(1/2)^{j+1}g(n)].$$
Now, using (\ref{tail}), there exists a constant $C$ such that the right-hand side of the above inequality is bounded above by
$$ C g(n)^{1-\alpha} \sum_{j=0}^{\infty} ((1/2)^{1-\alpha})^j.$$
Furthermore, since $1-\alpha > 0$ this expression is bounded above by $C g(n)^{1-\alpha}.$
This finishes the proof. \end{dem}
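The truncated-mean bound $\mathbb E[\tau_0 1_{\{\tau_0<g\}}]\leq Cg^{1-\alpha}$ used in the proof is sharp for exact Pareto tails, where $\mathbb E[\tau 1_{\{\tau<g\}}]=\int_1^g \alpha z^{-\alpha}dz=\frac{\alpha}{1-\alpha}(g^{1-\alpha}-1)$. A quick numerical confirmation (illustrative; names are ours):

```python
def truncated_mean(alpha, g):
    """E[tau * 1_{tau < g}] when P[tau >= u] = u^(-alpha) for u >= 1,
    computed from the density alpha * z^(-1-alpha) on [1, infinity)."""
    return alpha * (g ** (1 - alpha) - 1) / (1 - alpha)

alpha = 0.5
for g in (1e2, 1e4, 1e6):
    ratio = truncated_mean(alpha, g) / g ** (1 - alpha)
    # the ratio approaches alpha/(1-alpha): the truncated mean is of order
    # g^(1-alpha), exactly the scale used for the shallow traps
    assert abs(ratio - alpha / (1 - alpha)) < 0.11
```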
\subsubsection{Deep traps} Here we will estimate the time spent in deep traps.
We define the occupation time for $x \in \mathbb Z$ as
$$T_x=T_x(n):= \sum_{i=0}^{\zeta_{n}(n)}\tau_{Y^{\epsilon(n)}_i} e_i \mathbf{1}_{\{Y^{\epsilon(n)}_i=x\}}. $$
The walk visits $x$ a total of $G(x,n)+1$ times before $\zeta_n(n)$, and each visit lasts an exponentially distributed time.
This allows us to control the Laplace transform of $T_x$. For any pair of sequences of real numbers
$(a_n)_{n\in\mathbb{N}}$, $(b_n)_{n\in\mathbb{N}}$, $a_n\sim b_n$ will mean that $\lim_{n\rightarrow\infty} \frac{a_n}{b_n}=1$.
\begin{lem}\label{deeptraps}
Let $\lambda > 0$. Define $\lambda_n :=
\frac{\lambda}{n^{1/[(1-a)\alpha]}}$. Then we have that
$$ \mathbb E^x[1-\exp(-\lambda_nT_x)|\tau_x\geq g(n)]\sim\frac{\mathbb P[\tau_x\geq g(n)]^{-1}\alpha\pi\lambda^{\alpha}}{n\sin(\alpha \pi)}.$$
\end{lem}
\begin{dem}
We must perform an auxiliary computation about the asymptotic behavior of the parameter $1-(q+p\Xi(x,n))$ of $G(x,n)$:
$$(1-(q+p\Xi(x,n)))n^{b^{\ast}}=\frac{pn^{b^{\ast}}(1-r)}{1-r^{n-x}}$$
$$=\frac{2p(1+n^{-b^{\ast}})^{n-x}}{(1+n^{-b^{\ast}})((1+n^{-b^{\ast}})^{n-x}-(1-n^{-b^{\ast}})^{n-x})}$$
$$=\frac{2p}{(1+n^{-b^{\ast}})(1-(1-\frac{2n^{-b^{\ast}}}{1+n^{-b^{\ast}}})^{n-x})}$$
which converges to $1$. Thus we have showed that
\begin{equation}\label{parametro}
(1-(q+p\Xi(x,n)))n^{b^{\ast}}\stackrel{n\to\infty}{\to}1
\end{equation}
We have
$$ \mathbb E^x_\tau[\exp(-\lambda_nT_x)]=\mathbb E^x_\tau\left[\exp\left(-\lambda_n\sum_{i=0}^{G(x,n)}\tau_x \tilde{e}_i\right)\right]$$
where $\tilde{e}_i$ are i.i.d. and exponentially distributed with
$\mathbb{E}[\tilde{e}_i]=1$. Let $\tilde{\lambda}_n:=\frac{\lambda}{n^{1/\alpha}}$. Then
$$ \mathbb E^x_\tau[\exp(-\lambda_nT_x)]=\frac{1}{1+\tilde{\lambda}_n\frac{\tau_x}{n^{b^\ast}(1-(q+p\Xi(x,n)))}}.$$
Using (\ref{parametro}) we get that the above expression equals
$$=\frac{1}{1+\tilde{\lambda}_n\tau_x(1+o(1))}=\frac{1}{1+\tilde{\lambda}_n\tau_x}+o(n^{-1/\alpha}).$$
Averaging with respect to the environment
$$\mathbb E^x[1-\exp(-\lambda_nT_x)1_{\{\tau_x\geq g(n)\}}]=\int_{g(n)}^\infty 1-\frac{1}{1+{\tilde{\lambda}}_nz} \tau_0(dz) + o(n^{-1/\alpha})$$
where the notation $\tau_0(dz)$ denotes integration with respect to the distribution of $\tau_0$.
Integrating by parts $\int_{g(n)}^\infty 1-\frac{1}{1+{\tilde{\lambda}}_nz} \tau_0(dz)$ we get that the above display equals
$$\left[-\frac{\tilde{\lambda}_nz}{1+\tilde{\lambda}_nz}\mathbb P[\tau_0\geq z]\right]_{g(n)}^{\infty}+\int_{g(n)}^{\infty}\frac{\tilde{\lambda}_n}{(1+\tilde{\lambda}_nz)^2}
\mathbb P[\tau_0\geq z] dz + o(n^{-1/\alpha}).$$
The first term is smaller than $C\tilde{\lambda}_n g(n)^{1-\alpha}=o(n^{-1})$. To estimate the second term, note that for all $\eta > 0$ we have
$$ (1-\eta)z^{-\alpha}\leq \mathbb P[\tau_0\geq z]\leq
(1+\eta)z^{-\alpha}$$
for $z$ large enough. Then we must compute
$\int_{g(n)}^{\infty}\frac{\tilde{\lambda}_n}{(1+\tilde{\lambda}_nz)^2}z^{-\alpha}dz.$
Changing variables with $y=\frac{\tilde{\lambda}_n z}{1+\tilde{\lambda}_n z}$ we obtain
$$\tilde{\lambda}_n^{\alpha}\int_{\frac{\tilde{\lambda}_n g(n)}{1+\tilde{\lambda}_n g(n)}}^1 y^{-\alpha}(1-y)^\alpha dy.$$
But we know that this integral converges to $\Gamma(\alpha+1)\Gamma(1-\alpha)=\frac{\pi\alpha}{\sin(\pi\alpha)}$.
\end{dem}
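The Beta-integral identity invoked at the end of the proof, $\int_0^1 y^{-\alpha}(1-y)^{\alpha}\,dy=\Gamma(1+\alpha)\Gamma(1-\alpha)=\frac{\pi\alpha}{\sin(\pi\alpha)}$, can be verified numerically. The snippet below is a sanity check and not part of the argument; it uses the substitution $y=u^{1/(1-\alpha)}$ (our choice) to remove the singularity at the origin before applying the trapezoidal rule.

```python
import math

def beta_integral(alpha, n=200_000):
    """Trapezoidal evaluation of the integral of y^(-alpha)(1-y)^alpha over
    [0,1] after substituting y = u^(1/(1-alpha)); the transformed integrand
    (1 - u^(1/(1-alpha)))^alpha / (1 - alpha) is bounded on [0, 1]."""
    f = lambda u: (1.0 - u ** (1.0 / (1.0 - alpha))) ** alpha / (1.0 - alpha)
    h = 1.0 / n
    s = 0.5 * (f(0.0) + f(1.0))
    s += sum(f(k * h) for k in range(1, n))
    return s * h

alpha = 0.5
lhs = beta_integral(alpha)
rhs_gamma = math.gamma(1 + alpha) * math.gamma(1 - alpha)
rhs_refl = math.pi * alpha / math.sin(math.pi * alpha)
```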
\subsection{Proof of Lemma \ref{maximum}}\label{proofmax}
We will show the convergence of the finite dimensional Laplace
transforms of the rescaled hitting times to the corresponding expression for an $\alpha$-stable
subordinator. This will prove finite dimensional convergence.
Let $0=u_0<\cdots<u_K\leq T$ and let $\beta_i$, $i=1,\ldots,K$, be positive numbers. We know that
$$\mathbb{E}\left[\exp\sum_{i=1}^{K}-\beta_i\left(\left(\frac{\pi\alpha}{\sin(\pi\alpha)}\right)^{1/\alpha}V_\alpha(u_i)-\left(\frac{\pi\alpha}{\sin(\pi\alpha)}\right)^{1/\alpha}V_\alpha(u_{i-1})\right)\right]
=\exp\left(\sum_{i=1}^K -\frac{\alpha\pi\beta_i^{\alpha}}{\sin(\alpha\pi)}(u_i-u_{i-1})\right).$$
So, it only suffices to show that
$$\mathbb{E}\left[\exp\sum_{i=1}^{K}-\beta_i(H^{(n)}(u_i)-H^{(n)}(u_{i-1}))\right]\stackrel{n\to\infty}{\to}\exp\left(\sum_{i=1}^K -\frac{\alpha\pi\beta_i^{\alpha}}{\sin(\alpha\pi)}(u_i-u_{i-1})\right)$$
where $H^{(n)}$ is as in (\ref{hittigntimenormalizad}).
We can decompose the trajectory of $Y^{\epsilon(n)}$ up to $\zeta_n(\lfloor n u_K \rfloor)$ into three parts.
The first one is the trajectory up to the time $\zeta_n(\lfloor n u_{K-1}-\nu (Tn) \rfloor)$,
the second one is the
trajectory between times $\zeta_n(\lfloor n u_{K-1}-\nu (Tn)\rfloor)$ and
$\zeta_n(\lfloor n u_{K-1}\rfloor)$; finally, the third part
is the trajectory starting from time $\zeta_n(\lfloor nu_{K-1}\rfloor)$
up to time $\zeta_n(\lfloor nu_{K}\rfloor)$.
First we will show that the time spent in the second part of the trajectory is negligible.
We have that $\mathbb P[\max_{y\in B_{\nu(Tn)}(x)}\tau_y>g(Tn)]=o(1)$, which is to say that the probability of finding
an $n$-deep trap in a ball of radius $\nu(Tn)$ is small. Indeed, Lemma \ref{shallow} implies that there exists a constant $C>0$ such that
$$\mathbb P\left[\sum_{i=0}^{\zeta_n(\lfloor u_Kn\rfloor)}\tau_{Y^{\epsilon(n)}_i}e_i1_{\left\{Y^{\epsilon(n)}_i\in B_{\nu(Tn)}(\lfloor u_{K-1}n\rfloor)\right\}}
<Cn^{\frac{1}{(1-a)\alpha}}(\log(n))^{-1}\right]
\rightarrow1.$$
Hence, the time that the walk spends in $B_{\nu(Tn)}(\lfloor u_{K-1}n\rfloor)$
is negligible. But on $\mathcal A(Tn)$ the walk never backtracks a distance
larger than $\nu(Tn)$, so the time spent in the second part of the decomposition is negligible.
The fact that on $\mathcal A(Tn)$ the walk never backtracks a distance
larger than $\nu(Tn)$ also implies that, conditional on $\mathcal A(Tn)$, the first and the third parts of the decomposition of the trajectory
correspond to independent walks in independent environments.
So $\mathbb E[\exp(\sum_{i=1}^{K}-\beta_i(H^{(n)}(u_i)-H^{(n)}(u_{i-1})))]$ can be expressed as
$$ \mathbb E\left[\exp\sum_{i=1}^{K-1}-\beta_i(H^{(n)}(u_i)-H^{(n)}(u_{i-1}))\right]\mathbb E^{\lfloor nu_{K-1}\rfloor}\left[\exp\left(-\beta_K(H^{(n)}(u_K)-H^{(n)}(u_{K-1}))\right)\right]+o(1) $$
where the $o(1)$ accounts for the time spent in the second part of the decomposition of the trajectory and for $\mathcal A(Tn)^c$.
The strong Markov property of $Y^{\epsilon(n)}$ applied at the stopping time $\zeta_n(\lfloor nu_{K-1}\rfloor)$ and the translational invariance of the environment give that $H^{n}_{b^{\ast}}(n u_K)-H^{n}_{b^{\ast}}(n u_{K-1})$ is distributed as $H^{n}_{b^{\ast}}(ns_n(K))$
where $s_n(K)= \frac{\lfloor u_K n\rfloor-\lceil u_{K-1} n\rceil}{n}$.
Iterating this procedure $K-2$ times we reduce the problem to the computation of one-dimensional Laplace transforms.
Hence, we have to prove that, for each $k\leq K$
$$\mathbb E[ \exp(-\beta_k n^{-(1/\alpha)-a}H^n_{b^{\ast}}(ns_n(k)))]\rightarrow\exp\left(-\frac{\pi\alpha}{\sin(\pi\alpha)}\beta_k^\alpha(u_k-u_{k-1})\right).$$
We have that $\mathbb{P}[\mathcal E(Tn)\cap \mathcal A(Tn)]\to1$, so we can write
$$\mathbb E[\exp(-\beta_kn^{-(1/\alpha)-a}H^n_{b^{\ast}}(ns_n(k)))]=
\mathbb E[\exp(-\beta_kn^{-(1/\alpha)-a}H^n_{b^{\ast}}(ns_n(k)))1_{\{\mathcal E(Tn)\cap \mathcal A(Tn)\}}]+o(1).$$
We know that the time spent in the shallow traps is negligible, so we only have to take into account the deep traps.
We also know that on $\mathcal A(Tn)$ the walk does not backtrack more than $\nu(Tn)$, and
that, on $\mathcal E(Tn)$, the deep traps in $[0,Tn]$ are well separated. Then we can write
$$\mathbb E[ \exp(-\beta_k n^{-(1/\alpha)-a}H^n_{b^{\ast}}(ns_n(k)))]=\mathbb E\left[\prod_{j=1}^{\theta(ns_n(k))}\mathbb E_{\tau}^{\delta_j}[\exp{(-\beta_kn^{-(1/\alpha)-a}T_{\delta_j})}]\right]+o(1).$$
Also, on $\mathcal E(Tn)$ we have upper and lower bounds for $\theta (Tn)$. Using
the upper bound we see that the right-hand side of the above equality
is bounded above by
$$ \mathbb E \left[\prod_{j=1}^{ns_n(k) \varphi (ns_n(k))(1-\frac{1}{\log(ns_n(k))})}\mathbb E_{\tau}^{\delta_j}[\exp{(-\beta_kn^{-(1/\alpha)-a}T_{\delta_j})}]\right]+o(1).$$
Applying again the translational invariance of the environment and the strong Markov property, we get that the above display is equal to
$$\mathbb{E}[\mathbb {E}_\tau^{\delta_1}[\exp{(-\beta_kn^{-(1/\alpha)-a}T_{\delta_1})}]]^{ns_n(k) \varphi (ns_n(k))(1-\frac{1}{\log(ns_n(k))})}+o(1)$$
which in turn can be expressed as
$$\mathbb{E}[\exp{(-\beta_kn^{-(1/\alpha)-a}T_0)}|\tau_0\geq g(ns_n(k))]^{ns_n(k) \varphi (ns_n(k))(1-\frac{1}{\log(ns_n(k))})}+o(1).$$
Using Lemma \ref{deeptraps} and the fact that $s_n(k)\stackrel{n\to\infty}{\to}u_k-u_{k-1}$ we obtain
$$ \limsup\mathbb E[\exp(-\beta_kn^{-(1/\alpha)-a}H^n_{b^{\ast}}(ns_n(k)))] \leq \exp\left( -\frac{\alpha\pi\beta_k^{\alpha}}{\sin(\alpha\pi)}(u_k-u_{k-1})\right) .$$
The lower bound can be obtained in an analogous fashion.
For the tightness, the arguments are the same as in Chapter 5 of \cite{p-spin}.
\section{\textbf{The Critical Case}}\label{critical}
We want to show that for $a=\frac{\alpha}{\alpha+1}$ the sequence of walks $(X^{(n,a)}(t);t\in[0,\infty])$ converges in distribution to a drifted F.I.N. diffusion.
We will mimic the arguments of \cite{fin}, but to treat the asymmetry of the model we use a Brownian motion with drift instead of a standard Brownian motion. We use the existence of a bi-continuous version of the local time of a Brownian motion with drift.
\subsection{The construction of the walks}
Recall the definition of $\tilde{X}(\mu)$ given in
display (\ref{timechange2}).
Let $s$ be a real number and define
$$\mu:=\sum_{i\in \mathbb{Z}} v_i\delta_{si}.$$
Then $\tilde{X}(\mu)$ is a homogeneous Markov process with $s\mathbb{Z}$ as its state space.
The transition probabilities and jump rates of $\tilde{X}(\mu)$ can be computed from the positions and weights of the atoms using the generator $L$ of $C(t)$
\begin{equation} \label{generator2}
Lf:=\frac{1}{2}\frac{d^2f}{d x^2}+\frac{df}{d x}.
\end{equation}
The arguments we will give below are an adaptation of the reasoning used by Stone in \cite{stone}.
For each $i$ let $\eta_{si}$ be the time of the first jump of $\tilde{X}(\mu)$ started at $si$.
By construction we will have that $\eta_{si}=v_i\tilde{l}(\sigma_s,0)$, where $\sigma_s$ is the hitting time of $\{-s,s\}$ by $C(t)$.
Using the strong Markov property for $C(t)$ we can deduce that $\eta_{si}$ is exponentially distributed.
It is easy to see that its mean is $v_i\mathbb{E}[\tilde l(\sigma_s,0)]$.
Denote by $p_t(x)$ the density at site $x$ of the distribution of $C(t)$ absorbed at $\{-s,s\}$.
Using that $\tilde{l}(\sigma_s,0):=\lim_{\epsilon\to0}\epsilon^{-1}m(t\in[0,\sigma_s]:C(t)\in[-\epsilon,\epsilon])$ and applying Fubini's Theorem we find that
$\mathbb{E}[\tilde{l}(\sigma_s,0)]=\lim_{\epsilon\to0}\epsilon^{-1}\mathbb{E}\left[\int_0^{\sigma_s}1_{\{C(t)\in[-\epsilon,\epsilon]\}}dt\right]$.
Then we find that
$$\mathbb{E}[\tilde{l}(\sigma_s,0)]=\int_0^{\infty}p_t(0)dt.$$
We also know that $\int_0^{\infty}p_t(0)dt=f(0),$
where $f$ is the Green function of (\ref{generator2}) with Dirichlet conditions on $\{-s,s\}$.
That is, $f$ is the continuous function that satisfies
$$\frac{1}{2}\frac{d^2 f}{d x^2}+\frac{d f}{d x}=-\delta_0 \textrm { and } f(s)=f(-s)=0.$$
We know that the general solution to $\frac{1}{2}\frac{d^2 g}{d x^2}+\frac{d g}{d x}=0$ is
$g=C_1\exp(-2x) + C_2.$
This and the constraints on $f$ give that
\begin{equation}\label{rates}
\mathbb{E}[\eta_{si}]=v_i\frac{1-\exp(-2s)}{1+\exp(-2s)}.
\end{equation}
For the computation of the respective transition probabilities we can use again the generator $L$.
Let $g:[-s,s]\rightarrow\mathbb{R}$ be a continuous function such that
$\frac{1}{2}\frac{d^2 g}{d x^2}+\frac{d g}{d x}=0$
and $g(-s)=0, g(s)=1$.
Using It\^{o}'s formula, we find that $g(C(t))$ is a martingale.
By the optional stopping theorem applied with the stopping time $\sigma_s$, we find that the probability that the walk takes its first step to the right is $g(0)$.
We can use the constraints on $g$ to see that
\begin{equation} \label{transitionprobabilities}
\mathbb{P} [ \tilde{X}(\mu)(\eta_{si})=s(i+1)] = \frac{\exp(2s)}{1+\exp(2s)}.
\end{equation}
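The value $g(0)=\frac{\exp(2s)}{1+\exp(2s)}$ behind (\ref{transitionprobabilities}) can be confirmed numerically by solving the boundary value problem for $g$ with finite differences. The snippet is a sanity check only; the discretization and all names are ours.

```python
import math

def exit_probability_fd(s, n=4000):
    """Solve (1/2) g'' + g' = 0 on [-s, s] with g(-s) = 0, g(s) = 1 by
    central finite differences and return the approximation to g(0)."""
    h = 2.0 * s / n               # grid x_j = -s + j*h, j = 0..n
    a = 0.5 / h**2 - 0.5 / h      # coefficient of g_{j-1}
    b = -1.0 / h**2               # coefficient of g_j
    c = 0.5 / h**2 + 0.5 / h      # coefficient of g_{j+1}
    m = n - 1                     # interior unknowns j = 1..n-1
    diag = [b] * m
    rhs = [0.0] * m
    rhs[-1] = -c * 1.0            # boundary condition g(s) = 1
    # Thomas algorithm for the tridiagonal system
    for j in range(1, m):
        w = a / diag[j - 1]
        diag[j] -= w * c
        rhs[j] -= w * rhs[j - 1]
    g = [0.0] * m
    g[-1] = rhs[-1] / diag[-1]
    for j in range(m - 2, -1, -1):
        g[j] = (rhs[j] - c * g[j + 1]) / diag[j]
    return g[n // 2 - 1]          # the node x = 0 is j = n/2

s = 1.0
approx = exit_probability_fd(s)
exact = math.exp(2 * s) / (1 + math.exp(2 * s))
```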
The proof of part $(ii)$ of Theorem \ref{main} will rely strongly on the
following proposition.
\begin{prop}\label{stonedrift}
Let $(\nu_n)_{n\in \mathbb N }$ be a sequence of measures that converges
vaguely to $\nu$, a measure whose support is $\mathbb{R}$. Then the corresponding processes
$(\tilde{X}(\nu_n)(t),0\leq t\leq T)$ converge to $(\tilde{X}(\nu)(t),0\leq t\leq T)$ in distribution in $(D[0,T],U)$.
\end{prop}
For the case where the underlying process is a Brownian motion, the proof of this fact can be found in \cite{stone}.
We will use the continuity properties for the local time $\tilde{l}$. For each fixed $t$, $\tilde{l}$ is continuous and of compact support in $x$.
Then, the vague convergence of $\nu_n$ to $\nu$ implies the almost sure convergence of $\tilde{\phi}_{\nu_n}(t)$ to $\tilde{\phi}_{\nu}(t)$.
As $\tilde{l}$ is continuous in $t$, we obtain continuity of $\tilde{\phi}_{\nu_n}$ and of $\tilde{\phi}_{\nu}$.
That, plus the fact that the $\tilde{\phi}_{\nu_n}$ are non-decreasing, implies that $\tilde{\phi}_{\nu_n}$ converges uniformly to $\tilde{\phi}_{\nu}$.
The function $\tilde{\phi}_{\nu}$ is almost surely strictly increasing, because the support of $\nu$ is $\mathbb{R}$. Now we can apply Corollary 13.6.4 of \cite{witt} to obtain that $\tilde{\psi}_{\nu_n}$ converges uniformly to $\tilde{\psi}_{\nu}$. That, plus the continuity of the Brownian paths, yields the proposition.
\subsection{The coupled walks}
To prove part (ii) of Theorem \ref{main}, we will use Proposition \ref{stonedrift}. That is, we want to show that each walk $(X^{(n,a)}(t);t\in[0,\infty])$ can be expressed as
a speed measure change of $C(t)$, and then use convergence of the measures to get convergence of the
processes.
The problem is that we are dealing with a sequence of random measures, and the proposition deals only with deterministic measures.
To overcome this obstacle we can construct a coupled sequence of random measures $(\rho_n)_{n\in\mathbb{N}}$, such that
$(\tilde{X}(\rho_n)(t);t\in[0,\infty])$ is distributed as $(X^{(n,a)}(t);t\in[0,\infty])$ and that $(\rho_n)_{n\in\mathbb{N}}$ converges
almost surely vaguely to $\rho$, where $\rho$ is the random measure defined in (\ref{rho}) such that $\tilde{Z}=\tilde{X}(\rho)$. This section is devoted to the construction of the coupled measures.
We recall that $V_{\alpha }$ is an $\alpha$-stable subordinator.
To make the construction clearer, we will first suppose that $\tau_0$ is equidistributed with the positive $\alpha$-stable distribution $V_{\alpha}(1)$.
Let us consider the strictly increasing process $(\tilde{V}_{\rho}(t); t\in \mathbb{R})$
given by $\tilde{V}_{\rho}(t):=\rho [0,t]$ if $t \geq 0$ and
$\tilde{V}_{\rho}(t):=-\rho[t,0)$ if $t<0$. It is a known fact from the theory of L\'evy processes that $\tilde{V}_{\rho}(t)$ is a two-sided $\alpha$-stable subordinator.
We now use this process
to construct the coupled sequence of random measures $(\rho_n)_{n\in\mathbb{N}}$ as
$$\rho_n:=\sum_i n^{-1/(1+\alpha)}\tau_i^n\delta_{s_ni},$$
where
$s_n:=\frac{1}{2}\log{\frac{n^{-a}+1}{1-n^{-a}}} $
and
\begin{equation}\label{tauene}
\tau_i^n:=n^{1/(1+\alpha)}(\tilde{V}_{\rho}(n^{-\alpha/(1+\alpha)}(i+1))-\tilde{V}_{\rho}(n^{-\alpha/(1+\alpha)}i)).
\end{equation}
Observe that $(\tau_i^n)_{i \in \mathbb Z}$ is an i.i.d. sequence distributed
like $\tau_0$, so that using (\ref{transitionprobabilities}) and (\ref{rates}) we see that $\tilde{X}(\rho_n)$
is a walk with drift $n^{-a}$ taking values in $s_n\mathbb {Z}$. The
latter means that $\tilde X(\rho_n)$ is distributed like $s_n n^{\frac{\alpha}{1+\alpha}}X^{(n,a)}$.
The key observation here is that the scaling factor $s_n$ satisfies
\begin{equation}\label{sn}
s_nn^{\alpha/(1+\alpha)}\rightarrow 1 \textrm{ as } n\rightarrow \infty.
\end{equation}
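The asymptotics (\ref{sn}) can be illustrated numerically: with $a=\alpha/(1+\alpha)$, the ratio $s_n n^{\alpha/(1+\alpha)}$ approaches $1$ from above. This snippet is illustrative only; the names are ours.

```python
import math

def s_n(n, a):
    # The lattice spacing from the construction above.
    return 0.5 * math.log((1 + n ** (-a)) / (1 - n ** (-a)))

alpha = 0.7
a = alpha / (1 + alpha)           # s_n is then on the n^{-alpha/(1+alpha)} scale
ratios = [s_n(n, a) * n ** (alpha / (1 + alpha)) for n in (10**3, 10**6, 10**9)]
```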
So, we just have to show that $\tilde X(\rho_n)$ converges to
$\tilde X(\rho)$: by (\ref{sn}), if $\tilde X(\rho_n)$
converges to $\tilde X(\rho)$, then so does $s_nn^{\alpha/(1+\alpha)}\tilde{X}(\rho_n)$.
With (\ref{sn}) in mind it is easy to prove that the sequence of measures
$(\rho_n)$ converges almost surely vaguely to $\rho$. Suppose that $a<b$ are real numbers and that $\tilde{V}_{\rho}$ is continuous at $a$ and $b$; then
$$\rho_n((a,b])=\tilde{V}_{\rho}(n^{-\alpha/(1+\alpha)}(\lfloor b/s_n\rfloor+1))-\tilde{V}_{\rho}(n^{-\alpha/(1+\alpha)}(\lfloor a/s_n\rfloor+1)).$$
But using (\ref{sn}) it is clear that $n^{-\alpha/(1+\alpha)}\lfloor a/s_n\rfloor\stackrel{n\to\infty}{\to}a$ and $n^{-\alpha/(1+\alpha)}\lfloor b/s_n\rfloor\stackrel{n\to\infty}{\to}b$. Then the continuity of $\tilde{V}_{\rho}$ at $a$ and $b$ implies that $\rho_n((a,b])\stackrel{n\to\infty}{\to}\rho((a,b])$, and we have proved the vague convergence of $\rho_n$ to $\rho$.
Suppose now that $\tau_0$ is not a positive $\alpha$-stable random
variable. Then, we can follow Section 3 of \cite{fin}.
There they construct constants $c_\epsilon$ and functions $g_{\epsilon}$ such that $\tau_i^{(\epsilon)}$ is distributed like $\tau_0$, where
\begin{equation}\label{tauepsilon}
\tau_i^{(\epsilon)}:=c_{\epsilon}^{-1}g_\epsilon(\tilde{V}_{\rho}(\epsilon(i+1)) -\tilde{V}_{\rho}(\epsilon i)).
\end{equation}
Lemma 3.1 of \cite{fin} says that
\begin{equation}\label{gepsilon}
g_{\epsilon}(y)\rightarrow y \textrm{ as } \epsilon \rightarrow 0.
\end{equation}
As $\tau_0$ satisfies (\ref{tail}) and using the construction of $c_{\epsilon}$ in Section 3 of \cite{fin}, we can deduce that
\begin{equation}\label{cepsilon}
c_{\epsilon}\sim\epsilon^{1/\alpha}.
\end{equation}
Define
\begin{equation}\label{tauiene}
\tau_i^n:=\tau_i^{(n^{-\alpha/(1+\alpha)})}
\end{equation}
and again
$$\rho_n:=\sum_i n^{-1/(1+\alpha)}\tau_i^n\delta_{s_ni}.$$
Then, by definition (\ref{tauepsilon}), $\tilde{X}(\rho_n)$
is a walk with drift $n^{-a}$ taking values in $s_n\mathbb {Z}$.
Using (\ref{gepsilon}), (\ref{cepsilon}) and (\ref{sn}) we can see that $\mathbb{P}$-a.s.
$\rho_n \rightarrow \rho \textrm{ vaguely}.$
\section{\textbf{The Subcritical Regime}}\label{sub}
We will prove that if $a >\alpha/(1+\alpha)$, then $(X^{(n,a)}(t);t\in[0,\infty])$ converges to a F.I.N. diffusion. We obtain the same scaling limit that was obtained in \cite{fin} for a symmetric B.T.M. Nevertheless, here we have to deal with walks which are not symmetric, in contrast with the situation of \cite{fin}. For this purpose we express each rescaled walk as a time scale change of a Brownian motion. The scale change is necessary to treat the asymmetry of the walk. Then we show that the scale change can be neglected.
We now proceed to define a time scale change of a Brownian motion.
Let $\mu$ be a locally finite
discrete measure
$$\mu(dx):=\sum_{i\in\mathbb{Z}}w_i\delta_{y_i}(dx),$$
where $(y_i)_{i\in\mathbb Z}$ is an ordered
sequence of real numbers, so that $y_i<y_j$ if and only if $i<j$.
Let $S:\mathbb{R}\to\mathbb{R}\cup\{\infty,-\infty\}$ be a strictly increasing function;
$\mu$ will be the speed measure and $S$ the scaling
function of the time scale change of Brownian motion. Define the scaled measure $(S\circ\mu)(dx)$ as
$$(S\circ\mu)(dx):=\sum_iw_i\delta_{S(y_i)}(dx).$$
Let
$$\phi(\mu,S)(t):=\int_{\mathbb{R}}l(t,y)(S\circ\mu)(dy)$$
and let $\psi(\mu,S)(s)$ be the right continuous generalized inverse of $\phi(\mu,S)$. Then, as shown in \cite{stone},
$$X(\mu,S)(t):=S^{-1}(X(S\circ\mu)(t))$$
is a continuous time random walk with $\{y_i\}$ as its state space. The mean of the exponentially
distributed waiting time of $X(\mu,S)$ at $y_i$ is
\begin{equation}\label{meanofthetime}
2w_i\frac{(S(y_{i+1})-S(y_i))(S(y_{i})-S(y_{i-1}))}{S(y_{i+1})-S(y_{i-1})}
\end{equation}
and the transition probabilities to the right and to the left respectively
are
\begin{equation}\label{probabilidadesdetransicion}
\frac{S(y_{i+1})-S(y_i)}{S(y_{i+1})-S(y_{i-1})}\text{ and }\frac{S(y_i)-S(y_{i-1})}{S(y_{i+1})-S(y_{i-1})}.
\end{equation}
As in the previous section, we need to define a sequence of measures
$(\nu_n)_{n\in\mathbb{N}}$ converging almost surely vaguely to $\rho$,
and which can be used to express the sequence of rescaled walks $X^{(n,a)}$.
Let
$$\nu_n:=\sum_{i\in\mathbb{Z}}n^{-1/(1+\alpha)}\frac{r_n+1}{2r_n^i}\tau_i^n\delta_{in^{-\alpha/(\alpha+1)}},$$
where $\tau_i^n $ are defined in display (\ref{tauiene}), and
$r_n:=1-\frac{2n^{-a}}{1+n^{-a}}.$
We will also use a sequence of scaling functions $S^n$ (which will converge
to the identity mapping) given by
$$S^n(in^{-\alpha/(\alpha+1)}):=\sum_{j=0}^{i-1}\frac{r_n^j}{n^{\alpha/(\alpha+1)}}.$$
We extend the domain of definition of $S^{n}$ to $\mathbb{R}$ by linear interpolation.
Then, by (\ref{meanofthetime}) and (\ref{probabilidadesdetransicion}), we have that $X(\nu_n,S^n)$ is distributed like $X^{(n,a)}$. We will use the following theorem proved by Stone in \cite{stone}.
\begin{prop}\label{stone}
Let $(\nu_n)_{n\in \mathbb N }$ be a sequence of measures that converges vaguely to $\nu$. Then the corresponding processes
$(X(\nu_n)(t),0\leq t\leq T)$ converge to $(X(\nu)(t),0\leq t\leq T)$ in distribution in $(D[0,T],J_1)$.
\end{prop}
The proof of part (iii) of Theorem \ref{main} will rely on the following lemma.
Let $id$ denote the identity mapping on $\mathbb{R}$, then we have that
\begin{lem}\label{sandrho}
$S^n(n^{-\alpha/(1+\alpha)}\lfloor n^{\alpha/(\alpha+1)}\cdot\rfloor)$ converges uniformly on compacts to $id$, and $\nu_n$ converges almost surely vaguely to $\rho$.
\end{lem}
\begin{proof}
The convergence of the scaling functions is easily seen to be true under the assumption $a>\alpha/(\alpha+1)$ because
$$S^n(n^{-\alpha/(1+\alpha)}\lfloor n^{\alpha/(1+\alpha)}x\rfloor)=\sum_{j=0}^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}\frac{r_n^j}{n^{\alpha/(\alpha+1)}}$$
and
$$\frac{r_n^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}\lfloor n^{\alpha/(\alpha+1)}x\rfloor}{n^{\alpha/(1+\alpha)}}\leq\sum_{j=0}^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}\frac{r_n^j}{n^{\alpha/(\alpha+1)}}\leq\frac{\lfloor n^{\alpha/(\alpha+1)}x\rfloor}{n^{\alpha/(1+\alpha)}}.$$
Now we use the fact that
$$r_n^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}=\left(1-\frac{2n^{-a}}{1+n^{-a}}\right)^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}$$
converges to $1$, because $a>\alpha/(1+\alpha)$.
In a similar fashion it can be shown that the ``correcting factors'' $\frac{r_n+1}{2r_n^i}$ in the definition of $\nu_n$ converge uniformly to $1$
on any bounded interval. Hence, we can show the convergence of $\nu_n$ to $\rho$ as in the previous section.
\end{proof}
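The dichotomy exploited in the proof can be seen numerically: for $a>\alpha/(1+\alpha)$ the factor $r_n^{\lfloor n^{\alpha/(1+\alpha)}x\rfloor}$ tends to $1$, while at the critical exponent $a=\alpha/(1+\alpha)$ it stabilizes near $e^{-2x}$ instead. The snippet below is a sanity check only; all names are ours.

```python
import math

def r_pow(n, a, alpha, x):
    """r_n raised to the number of lattice sites in [0, x], as in the proof."""
    r_n = 1 - 2 * n ** (-a) / (1 + n ** (-a))
    return r_n ** math.floor(n ** (alpha / (1 + alpha)) * x)

alpha, x = 0.5, 1.0
# subcritical: a = 1/2 > alpha/(1+alpha) = 1/3, so the factor tends to 1
sub = [r_pow(n, 0.5, alpha, x) for n in (10**4, 10**8, 10**12)]
# critical: a = alpha/(1+alpha), the factor stabilizes near exp(-2x)
crit = [r_pow(n, alpha / (1 + alpha), alpha, x) for n in (10**4, 10**8, 10**12)]
target = math.exp(-2 * x)
```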
Lemma \ref{sandrho} implies the vague convergence of $(S^n\circ\nu_n)$ to $\rho$.
Then, by Proposition \ref{stone}, we can deduce that $X(S^n\circ\nu_n)$ converges to $X(\rho)$.
Let $T>0$. By Lemma \ref{sandrho} we have that $(S^{n})^{-1}$ also converges uniformly to the identity. Thus, using the preceding observations, we get that
$(X(\nu_n,S^n)(t);0\leq t\leq T)$ converges to $(X(\rho)(t);0\leq t\leq T)$ in $D[0,T]$ with the Skorohod $J_1$ topology.
We have proved that $(X^{(n,a)}(t); t \in [0,T])$ converges in distribution to the F.I.N. diffusion $(Z(t); t \in [0,T])$ on $(D[0,T],J_1)$.
Thus, it remains to prove that the convergence takes place also in the
uniform topology. Using the fact that the support of $\rho$ is $\mathbb{R}$, we can show that $\phi(\rho,id)$ is strictly increasing. The almost sure vague convergence of $S^n\circ\nu_n$ to $\rho$ implies that, for all $t\geq 0$, $\phi(\nu_n,S^{n})(t)$ converges to $\phi(\rho,id)(t)$. As $l$ is continuous in $t$, we obtain continuity of $\phi(\nu_n,S^{n})$ and of $\phi(\rho,id)$. That, plus the fact that the $\phi(\nu_n,S^{n})$ are non-decreasing, implies that $\phi(\nu_n,S^{n})$ converges uniformly to $\phi(\rho,id)$.
The function $\phi(\rho,id)$ is almost surely strictly increasing, because the support of $\rho$ is $\mathbb{R}$. Now we can apply Corollary 13.6.4 of \cite{witt} to obtain that $\psi(\nu_n,S^n)$ converges uniformly to $\psi(\rho,id)$. That, plus the continuity of the Brownian paths, yields that $X(S^n\circ \nu_n)$ converges uniformly to $X(\rho)$. Using that $(S^{n})^{-1}$ converges to the identity, we finally get that $X(\nu_n,S^{n})$ converges uniformly to $X(\rho)$.
\bibliographystyle{amsplain}
\section{Introduction}
\PARstart{E}{mbedded} systems are becoming ubiquitous and an integral part of our everyday life. Addressing functional safety is a major challenge with increasing complexity. Typical examples of safety-critical embedded systems include vehicle safety or driver assistance systems with accident prevention. However, functional safety is becoming more prevalent not just in the automotive sector, but also in industrial markets such as aviation, solar energy, and the medical sector (e.g.,~\cite{Freescale11}). Memory devices increasingly provide built-in error correction in order to restore corrupted data~\cite{Issi11} and also to maximize the number of writes in flash memory~\cite{Huang11}. With information and communication technology components becoming ever smaller and more complex, the probability of hardware-immanent errors rises.
Decoders taking advantage of the cyclic structure of shortened Reed--Muller codes accommodate the increasing demand for less space consumption -- at the cost of the decoding duration~\cite{Blahut03}. On the other hand, several recursive algorithms were developed allowing decoding with only $O(\min(r,m-r)\cdot 2^m)$ operations~\cite{Dumer04,SchnBos95}. Though the number of operations could be reduced, all these operations need to be executed one after another. Therefore, these algorithms require much parallel time. Parallel time is defined as the time the algorithm takes if all its modules are parallelized to the maximum possible amount. Thus, cyclic as well as recursive decoders are not designed for correcting errors in parallel. However, for all safety-critical applications, where real-time control is ranked first, decoding multiple positions in parallel saves precious time. Decoders based on majority-logic can accomplish this task. Furthermore, in embedded systems, very simple hard-decision algorithms are mostly preferable to soft-decision algorithms~\cite{Dumer04,SchnBos95}. Therefore, hard-decision decoders for Reed--Muller codes using decision by majority are an attractive option for forward error correction in real-time on hardware level.
\pubidadjcol
A majority-logic decoding algorithm was first proposed by Reed~\cite{Reed54}. Reed's algorithm consists of $r+1$ decoding steps in which majority voting is performed. Chen~\cite{Chen71,Chen72} significantly improved Reed's decoding algorithm by reducing the number of decoding steps. In particular, if Reed--Muller codes $\RM(r,m)$ with $m \geq 3,$ $m/2 \geq r\geq 1$ are employed, Chen's algorithm consists of only two decoding steps.
In this case, up to $O(2^{3m-2r})$ functions are called concurrently and Chen's algorithm can be executed in constant parallel time provided majority voting also takes constant parallel time.
The authors in~\cite{Hauck12} investigated how far the number of majority votes in Chen's algorithm can be reduced while focusing on information bits. They established upper and lower bounds for the complexity. But an explicit instruction how to construct a decoder is only provided for a few codes. Furthermore, their decoding process depends on the encoding procedure.
In the present paper, we propose a new hard-decision decoding algorithm for all Reed--Muller codes $\RM(r,m)$ with $m \geq 3,$ $m/2 \geq r\geq 1.$ Our decoder is easy to design for software and hardware applications. The algorithm decodes all bits, i.e., information and redundancy, without considering the encoding process. Compared to state-of-the-art majority-logic decoders, our algorithm is less complex. In contrast to recursive decoders~\cite{Dumer04,SchnBos95}, our decoder enables massively parallel decoding in constant parallel time.
The paper is organized as follows. Section~\ref{sect01} introduces the notation and preliminaries on Reed--Muller codes. In Section~\ref{sect02}, we revisit Chen's decoding algorithm and analyze its complexity. In Section~\ref{sect03}, we present in detail our new decoding algorithm including proof of correctness, pseudocode, estimation of complexity and an example for $\RM(2,5)$. Our algorithm is compared to Chen's algorithm in terms of complexity in Section~\ref{sect04}. The paper concludes in Section~\ref{sect05} with further advantages of our algorithm in comparison to other classes of decoders.
\section{Notation and Preliminaries}\label{sect01}
The binary Reed--Muller code $\RM(r,m)$ is a $\left[n,k,\delta\right]$ code with
\begin{align*}n:=2^m,\quad k:=\sum_{i=0}^r \binom{m}{i},\quad\delta:=2^{m-r}\end{align*}
which guarantees correcting up to $\delta/2-1$ errors. We number the vectors in $\mathbb{Z}_2^m$ in arbitrary order starting from zero. Every position $i\in\left\{0,1,\ldots,n-1\right\}$ in a binary word of length $n$ is identified by $v_i\in \mathbb{Z}_2^m.$
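For concreteness (our illustration, not part of the paper), the parameters of the codes discussed later, e.g. $\RM(2,5)$, follow directly from these formulas:

```python
from math import comb

def rm_params(r, m):
    """[n, k, delta] of the Reed--Muller code RM(r, m) from the formulas above."""
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))
    delta = 2 ** (m - r)
    return n, k, delta
```

For example, `rm_params(2, 5)` yields the $[32,16,8]$ code, which guarantees correcting up to $\delta/2-1=3$ errors.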
Then, we characterize a set of vectors $S\subseteq\mathbb{Z}_2^m$ by its incidence vector $\chi_S\in\mathbb{Z}_2^{n}.$ The $i$-th position in $\chi_S$ is set to one if and only if $v_i\in S.$
A \emph{$d$-flat} is a $d$-dimensional affine subspace in $\mathbb{Z}_2^m.$ Given $r$, $m$ and the specific ordering, the Reed--Muller code $\RM(r,m)$ is generated by all incidence vectors that characterize $d$-flats with $d= m-r$~\cite{MacWilliamsSloane77}. Therefore, we denote by $\RM(r,m)$ not one code but a family of equivalent codes depending on the chosen ordering in $\mathbb{Z}_2^m$.
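This generation property can be checked by brute force for a small case (our sketch, not from the paper): for $\RM(1,3)$, the incidence vectors of all $2$-flats of $\mathbb{Z}_2^3$ span a code of dimension $k=4$ and minimum weight $\delta=4$, i.e., the $[8,4,4]$ code $\RM(1,3)$.

```python
from itertools import combinations

m, r = 3, 1
d = m - r                      # flats of dimension d = m - r generate RM(r, m)
points = list(range(2 ** m))   # identify Z_2^m with bitmasks 0..2^m - 1

def span(dirs):
    s = {0}
    for v in dirs:
        s |= {x ^ v for x in s}
    return frozenset(s)

# all d-dimensional linear subspaces (as point sets), then all their cosets
subspaces = {span(ds) for ds in combinations(range(1, 2 ** m), d)
             if len(span(ds)) == 2 ** d}
flats = {frozenset(x ^ t for x in sub) for sub in subspaces for t in points}
# incidence vectors, packed into integers of 2^m bits
rows = [sum(1 << i for i in f) for f in flats]

def gf2_basis(vectors):
    """Gaussian elimination over GF(2), indexed by leading bit."""
    basis = {}
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return basis

basis = gf2_basis(rows)
codewords = {0}
for b in basis.values():
    codewords |= {c ^ b for c in codewords}
min_weight = min(bin(c).count("1") for c in codewords if c)
```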
For the rest of the paper, let $m/2 \geq r\geq 1$, $m \geq 3$. Furthermore, we generally assume a codeword $c:=(c_0,c_1,\ldots,c_{n-1})\in \RM(r,m)$ was sent through a noisy channel and \begin{align*}
z&:=(z_0,z_1,\ldots,z_{n-1})\\
&=(c_0,c_1,\ldots,c_{n-1})+(e_0,e_1,\ldots,e_{n-1})=c+e\in \mathbb{Z}_2^{n}
\end{align*}
was received where at most $\delta/2-1$ errors occurred.
For any vectors $v,w\in\mathbb{Z}_2^n$, let $v\cdot w\in\mathbb{Z}_2$ denote the scalar product (over $\mathbb{Z}_2$) of the two vectors $v$ and $w$.
Let $S\subseteq\mathbb{Z}_2^m$ be arbitrary. The scalar product $z\cdot\chi_S$ is called the \emph{check-sum of $S$}. Since
\begin{align*}
z\cdot\chi_S = \sum_{i\in\left\{j\mid v_j\in S\right\}} z_i\in\mathbb{Z}_2,
\end{align*}
it is not necessary to consider all $n$ entries of $z$. To reduce the complexity of computing the check-sum of $S$, we only take into account the $|S|$ entries $z_i$, $i\in \{j\mid v_j\in S\}$.
In the following, we will say that $S$ \emph{possesses $t$ errors} if and only if
\begin{align*}t =\lv\left\{0\leq i\leq n-1\mid e_{i}\neq 0, v_i\in S\right\}\rv.\end{align*}
In particular, we call $S$ \emph{odd} or \emph{even} if $S$ possesses an odd or even number of errors, respectively. Note that $S$ is odd if and only if $e\cdot\chi_S=1$.
The majority function $\mu:~\left\{0,1\right\}^s \rightarrow \left\{0,1\right\}$ is defined as follows:
\begin{align*}
\mu(x_1,x_2,\ldots,x_s)&:=\begin{cases}1&\!\text{if }\lv\left\{1\leq i\leq s\mid x_i\!=\!1\right\}\rv > \left\lfloor s/2\right\rfloor\\0&\!\text{otherwise}\end{cases}
\end{align*}
where $\lfloor x\rfloor$ represents the largest integer not greater than $x$.
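A direct transcription of the majority function (our illustration):

```python
def mu(*bits):
    """Majority function: 1 iff strictly more than floor(s/2)
    of the s inputs equal 1."""
    return 1 if sum(bits) > len(bits) // 2 else 0
```

Note that on an even number of inputs a tie yields $0$, e.g. $\mu(1,0,1,0)=0$.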
\section{Chen's Two-Step Majority-Logic Decoding of Reed--Muller Codes --- Revisited}\label{sect02}
Chen's decoding algorithm~\cite{Chen71,Chen72} corrects in two majority-logic steps all $n$ positions. It operates on flats of dimension $r+1$ or less and performs majority voting.
\subsection{The Idea}
Chen's algorithm takes advantage of the following proposition.
\begin{proposition}\label{27}
Let $S\subset\mathbb{Z}_2^m$ be arbitrary. Suppose there exist $S_1,\ldots,S_{N}\subseteq \mathbb{Z}_2^m$ with $N\geq \delta-2$ which intersect pairwise in $S$, i.e., $S_i\cap S_j=S$ for all $i,j=1,\ldots,N$, $i\neq j$. Then $S$ is odd if and only if more than $N/2$ sets $S_i$ are odd.
\end{proposition}
\begin{IEEEproof}
Suppose $S$ possesses $t$ errors. Beyond these $t$ errors, up to $\delta/2-1-t$ further errors occurred while transmitting the codeword. Therefore, at least $N-(\delta/2-1-t)$ sets $S_i$ must possess the same number of errors as $S$, namely $t$ errors. Note that $N-(\delta/2-1-t) \geq N/2+t.$
Hence, if $t$ is odd, more than $N/2$ sets $S_i$ are odd. On the other hand, if $t$ is even, at least $N/2$ sets $S_i$ are even and therefore at most $N/2$ sets $S_i$ are odd.
\end{IEEEproof}
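The counting step of the proof, $N-(\delta/2-1-t)\geq N/2+t$ whenever $N\geq\delta-2$, can be exercised exhaustively over small parameter ranges (an illustrative check, with ranges chosen by us):

```python
def counting_step_holds(delta, N, t):
    # at least N - (delta/2 - 1 - t) sets agree in parity with S
    return N - (delta // 2 - 1 - t) >= N / 2 + t

checked = all(
    counting_step_holds(delta, N, t)
    for delta in (4, 8, 16, 32)            # delta = 2^(m - r)
    for N in range(delta - 2, delta + 5)   # any N >= delta - 2 works
    for t in range(delta // 2)             # at most delta/2 - 1 errors in S
)
```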
According to Proposition~\ref{27}, it can be deduced whether a set $S$ in $\mathbb{Z}_2^m$ is odd or even, once we have this information about $\delta-2$ arbitrary supersets of $S$, intersecting pairwise in $S$. For some sets, namely $d$-flats with $d\geq r+1,$ this information can be easily gained. Let us consider a $d$-flat $V$ with $d\geq r+1.$ Then, its incidence vector, $\chi_V,$ is a codeword of $\RM(m-r-1,m)$, the dual code of $\RM(r,m)$~\cite{MacWilliamsSloane77}. Thus,
\begin{align}\label{01}z\cdot\chi_V = \underbrace{c\cdot\chi_V}_{=0} + e\cdot\chi_V=e\cdot\chi_V.
\end{align}
Hence, $V$ is odd if and only if the check-sum of $V$ equals one.
Reed~\cite{Reed54} proposed an algorithm comprising $r+1$ steps in which Proposition~\ref{27} is applied. Taking into account the check-sums of certain $(r+1)$-flats, the algorithm computes in the first step whether certain $r$-flats are even or odd using majority-logic. In each step $\rho=1,2,\ldots,r+1,$ it is iteratively decided whether the $(r+1-\rho)$-flats are odd or even. In the final step, the algorithm yields the number of errors in $0$-flats, where every $0$-flat corresponds to a single position.
Analyzing Reed's algorithm, Chen noticed that several steps can be omitted. In the case of $m\geq 3,$ $m/2\geq r\geq 1,$ Chen showed that for every position $i = 0,1,\ldots,n-1,$ there exist $\delta-2$ $r$-flats intersecting pairwise in $\left\{v_i\right\}$. In addition, each $r$-flat is the pairwise intersection of $\delta-2$ $(r+1)$-flats~\cite{Chen71,Chen72}. This observation is the basis for a two-step majority-logic algorithm to decode all $n$ positions. The first step is identical to the one in Reed's algorithm, whereas the second step deduces the number of errors in $0$-flats directly from the results for $r$-flats.
\subsection{The Algorithm}
Chen's algorithm operates on a set of flats of dimension 0, $r$ and $r+1,$ say $\mathcal{F},$ which meets the following conditions.
\begin{enumerate}
\item $\left\{v_i\right\}\in\mathcal{F}$ for all $i=0,1,\ldots,n-1.$
\item For every $0$-flat $\left\{v\right\}\in\mathcal{F},$ there exist $r$-flats $V_{0},V_{1},\ldots,V_{\delta-3}\in\mathcal{F}$ with $V_i\cap V_j=\left\{v\right\}$ for $i\neq j.$
\item For every $r$-flat $V\in\mathcal{F}$ there exist $\(r+1\)$-flats $W_{0},W_{1},\ldots,W_{\delta-3}\in\mathcal{F}$ with $W_i\cap W_j=V$ for $i\neq j.$
\end{enumerate}
We call a set of flats \emph{admissible} if it satisfies these three conditions. Furthermore, we say \emph{$W_i$ is used for decoding of $V$} and \emph{$V_i$ is used for decoding of $\left\{v\right\}$}, $i=0,1,\ldots,\delta-3.$
By proving the existence of an admissible set in~\cite{Chen71,Chen72}, Chen provides a strategy for decoding all positions in two steps using majority-logic.
\begin{proposition}[Chen's Two-Step Decoding Algorithm]\label{05}
Let $i\in\left\{0,1,\ldots, n-1\right\}$ be arbitrary and let $\mathcal{F}$ be an admissible set.
\begin{enumerate}
\item\label{31} An error occurred at position $i$, i.e., $e_i\neq 0$, if and only if more than half of the $r$-flats used for decoding of $\left\{v_i\right\}$ are odd.
\item\label{32} An $r$-flat $V\in\mathcal{F}$ is odd if and only if more than half of the $\(r+1\)$-flats used for decoding of $V$ are odd.
\end{enumerate}
We recall that an $\(r+1\)$-flat $W$ is odd if and only if $z\cdot\chi_W=1.$
\end{proposition}
The flats used for decoding are labeled as follows. For all $i = 0,1,\ldots,n-1,$ let $\left\{V_{i,0},\ldots,V_{i,\delta-3}\right\}$ be a set of $r$-flats used for decoding of $\left\{v_i\right\}$ and for all $j = 0,1,\ldots,\delta-3,$ let $\left\{W_{i,j,0},\ldots,W_{i,j,\delta-3}\right\}$ be a set of $\(r+1\)$-flats used for decoding of $V_{i,j}.$ The corresponding algorithm consists of four \emph{function levels}.
\begin{algorithmic}[1]
\Statex
\Input the received word $z\in\mathbb{Z}_2^{n}$
\Require at most $\delta/2-1$ errors occurred
\Output the actual transmitted codeword from $\RM\(r,m\)$
\State $\forall~i = 0,1,\ldots,n-1,~\forall~j,l = 0,1,\ldots,\delta-3$
\Statex $\varsigma_{i,j,l}:= z\cdot\chi_{W_{i,j,l}},$\label{33}
\State $\forall~i = 0,1,\ldots,n-1,~\forall~j = 0,1,\ldots,\delta-3$
\Statex $\mu_{i,j}:=\mu\(\varsigma_{i,j,0},\varsigma_{i,j,1},\ldots,\varsigma_{i,j,\delta-3}\),$\label{29}
\State $\forall~ i = 0,1,\ldots,n-1$
\Statex $\eta_{i}:=\mu\(\mu_{i,0},\mu_{i,1},\ldots,\mu_{i,\delta-3}\),$\label{30}
\State \Return $z+\eta:=\(z_0+\eta_0,\ldots,z_{n-1}+\eta_{n-1}\).$
\end{algorithmic}
The symbol ``$+$'' represents addition in $\mathbb{Z}_2.$ If not more than $\delta/2-1$ errors occurred, $\eta$ equals the error pattern $e$, so the actual transmitted codeword $c$ is returned. The term two-step decoding refers to the two majority tests in lines~\ref{29} and~\ref{30}.
\subsection{The Complexity}
At each of the four function levels, a specific function is called multiple times. All function calls at the same function level can be carried out simultaneously. In Table~\ref{table05}, we specify for each function level how often the corresponding function is called (in parallel) and how many inputs it takes.
In total, $O\(n\delta^2\)$ functions are called in Chen's algorithm.
\begin{table}[t]
\setlength{\extrarowheight}{2pt}
\caption{Number of parallel function calls at each function level of Chen's decoding algorithm}
\label{table05}
\centering
\begin{tabular}{|c|c|c|c|}
\firsthline
Function & \multirow{2}{*}{Function}& \multirow{2}{*}{Inputs}& Number of (Parallel)\\
Level && &Function Calls\\\hline\hline
1&Check-Sum &$\(2n/\delta\)$& $n\cdot \(\delta-2\)^2$\\\hline
2&Majority Vote& $\(\delta-2\)$ &$n\(\delta-2\)$ \\\hline
3&Majority Vote& $\(\delta-2\)$& $n$ \\\hline
4&XOR & 2& $n$\\\lasthline
\end{tabular}
\end{table}
\section{Improved Decoding Algorithm}\label{sect03}
Our new decoding algorithm consists of two majority-logic steps. In contrast to Chen, we perform fewer majority tests and compute fewer check-sums. More precisely, we replace Step~\ref{32} in Chen's decoding procedure \(Proposition~\ref{05}\) with a more efficient method, while we maintain Step~\ref{31}. There are two main reasons why our new algorithm is less complex than Chen's decoding procedure. First, instead of considering arbitrary flats for decoding, we use every $r$-flat for all of its $2^r$ positions. Second, we never consider $\(r+1\)$-flats; instead, we develop a new approach that focuses solely on $r$-flats.
\subsection{The Theoretical Approach}\label{sect06}
We start by constructing a set of $r$-flats, $\mathcal{F},$ with the characteristics specified in the following proposition.
\begin{proposition}[Proposition 2.3 in~\cite{Hauck12}]\label{03}
There exist $\delta\cdot\(\delta-2\)$ $r$-flats in $\mathbb{Z}_2^m$ such that the intersection of any two of them has at most size 1 and every $v\in\mathbb{Z}_2^m$ is contained in exactly $\delta-2$ of these
$r$-flats.
\end{proposition}
In the proof of this proposition, the authors of~\cite{Hauck12} verify the existence by demonstrating how to construct such a set of $r$-flats.
At the very beginning, $\delta-2$ $r$-dimensional subspaces in $\mathbb{Z}_2^m$, say $U_0,\ldots, U_{\delta-3}\subset \mathbb{Z}_2^m$, pairwise intersecting in $\left\{0\right\}$ need to be computed. In fact, for $m=ar+b$, $a,b\in\mathbb{N}$, $b<r$, at least $N:=2^b\cdot\(\frac{2^{ar}-1}{2^r-1}-1\)+1$ such subspaces exist~\cite[ch. 1.1, Corollary~2.4]{EisfeldStorme} and can be constructed as shown in~\cite[ch. 1.1, Lemma 2.2,~Corollary~2.3, Corollary~2.4]{EisfeldStorme}.
Note that $N>2^{\(a-1\)r+b}=\delta$. Then, for all $l=0,1,\ldots,\delta-3$, let $W_l:=\left\{w_{l,0},\ldots,w_{l,\delta-1}\right\}$ be a complementary subspace of $U_l$ such that $U_l\oplus W_l=\mathbb{Z}_2^m$. We can state two facts. First, for every vector $v\in\mathbb{Z}_2^m$ and for every subspace $U_l$, $l=0,1,\ldots,\delta-3,$ there exists an $i\in\left\{0,1,\ldots,\delta-1\right\}$ such that $v\in w_{l,i}+U_l.$
Second, every two $r$-flats have at most one vector in common because the intersection of the underlying subspaces is trivial.
Thus, the set of $r$-flats
\begin{align*}\mathcal{F}:=\left\{w_{l,i}+U_l\mid l=0,1,\ldots,\delta-3, i=0,1,\ldots,\delta-1\right\},\end{align*}
comprising $\delta\cdot\(\delta-2\)$ $r$-flats, meets the conditions stated in Proposition~\ref{03}. The algorithm we will propose operates on this set of $r$-flats. Before we present our algorithm, we will explain its mathematical background in Theorem~\ref{07} using the following notations.
\begin{definition} We define $\(r+1\)$-flats $W_{l,i,j}\subseteq\mathbb{Z}_2^m$ and integers $\varsigma_{l,i}, \mu_l\in \mathbb{Z}_2$
\begin{align*}
W_{l,i,j}&:=w_{l,i}+\langle w_{l,i}+w_{l,j}\rangle \oplus U_l,\\
\varsigma_{l,i}&:= z\cdot\chi_{w_{l,i}+U_l},\\
\mu_l&:=\mu\(\varsigma_{l,0},\varsigma_{l,1},\ldots,\varsigma_{l,\delta-1}\),
\end{align*}
for all $l,i,j,$ with $l=0,1,\ldots,\delta-3,$ $i,j=0,1,\ldots,\delta-1,$ $i\neq j.$
\end{definition}
\begin{theorem}\label{07}\mbox{}
\begin{enumerate}
\item\label{22} An error occurred at position $j\in\left\{0,1,\ldots, n-1\right\}$, i.e., $e_j\neq 0$, if and only if at least $\delta/2$ flats from $\mathcal{F}$ containing $v_j$ are odd.
\item\label{23} A flat $w_{l,i}+U_{l}\in\mathcal{F}$ is odd if and only if $\mu_l\neq\varsigma_{l,i}$.
\end{enumerate}
\end{theorem}
Before we prove Theorem~\ref{07}, we state some general properties of flats.
\begin{lemma}\label{02}Let $l\in\left\{0,1,\ldots,\delta-3\right\}$ be arbitrary. Then
\begin{enumerate}
\item\label{04} $\(w_{l,i}+U_l \)~\cap~\(w_{l,j}+U_l\) = \emptyset,$\label{14}
\item\label{06} $\(w_{l,i}+U_l\)~\dot\cup~ \(w_{l,j}+U_l\) = W_{l,i,j},$\label{15}
\item $W_{l,i,j}~\cap~ W_{l,i,j^{\prime}} = w_{l,i}+U_l,$\label{16}
\end{enumerate}
for all pairwise distinct indices $i,j,j^{\prime}\in\left\{0,\ldots, \delta-1\right\}.$
\end{lemma}
\begin{IEEEproof}
\begin{enumerate}
\item Obvious.
\item Clearly, $\(w_{l,i}+U_l\), \(w_{l,j}+U_l\) \subset W_{l,i,j}$ and with~\ref{04}
\begin{align*}\lv\(w_{l,i}+U_l\)\dot\cup \(w_{l,j}+U_l\)\rv = 2\cdot 2^{r} = \lv W_{l,i,j}\rv.\end{align*}
\item Follows from~\ref{04} and~\ref{06}.
\end{enumerate}
\end{IEEEproof}
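These coset identities are easy to verify numerically when vectors of $\mathbb{Z}_2^m$ are encoded as integers with XOR as vector addition. The following Python sketch is our own illustration; the concrete subspace and representatives are those reused in the $\RM\(2,5\)$ example below, and the check is done for one choice of indices:

```python
# 2-flats in Z_2^5, vectors encoded as the integers 0..31, addition = XOR.
U0 = {0, 1, 30, 31}                    # a 2-dimensional subspace U_l
W0 = [0, 2, 8, 10, 16, 18, 24, 26]     # coset representatives w_{l,i}

def coset(w, U):
    """The r-flat w + U."""
    return {w ^ u for u in U}

def flat_W(wi, wj, U):
    """The (r+1)-flat W_{l,i,j} = w_i + <w_i + w_j> (+) U."""
    return {wi ^ s ^ u for s in (0, wi ^ wj) for u in U}

A = coset(W0[0], U0)
B = coset(W0[1], U0)
W01 = flat_W(W0[0], W0[1], U0)   # should be the disjoint union of A and B
W02 = flat_W(W0[0], W0[2], U0)   # should intersect W01 exactly in A
```

The assertions corresponding to parts \ref{14}--\ref{16} of the lemma then hold for these sets.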
\begin{IEEEproof}[Proof of Theorem \ref{07}]
Assertion~\ref{22} directly follows from Proposition~\ref{27}. Proceeding to part~\ref{23}, let $i\in\left\{0,1,\ldots,\delta-1\right\}$ and $l\in\left\{0,1,\ldots,\delta-3\right\}$ be arbitrary. We will prove that the following statements are equivalent.
\renewcommand{\theenumi}{\roman{enumi}\)}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
\item\label{37} The flat $w_{l,i}+U_{l}\in\mathcal{F}$ is odd.
\item\label{35} $\lv\left\{0\leq j \leq\delta-1,j\neq i\mid e\cdot \chi_{W_{l,i,j}}=1\right\}\rv\geq\delta/2$.
\item\label{36} $\mu_l \neq \varsigma_{l,i}$.
\end{enumerate}
$\ref{37}\Leftrightarrow\ref{35}$ The $\delta-1$ distinct $\(r+1\)$-flats $W_{l,i,j},$ $j=0,1,\ldots,\delta-1,$ $j\neq i,$ intersect pairwise in $w_{l,i}+U_l$ by Lemma~\ref{02}. Thus, by Proposition~\ref{27}, the flat $w_{l,i}+U_l$ is odd if and only if at least $\delta/2$ of these $\(r+1\)$-flats $W_{l,i,j}$ are odd resulting in the formula stated in~\ref{35}.\\
$\ref{35}\Leftrightarrow\ref{36}$
Similarly to equation~\eqref{01}, we have
\begin{align}
e\cdot \chi_{W_{l,i,j}} &= z\cdot \chi_{W_{l,i,j}}\nonumber\\
&= z\cdot\chi_{w_{l,i}+U_l} + z\cdot\chi_{w_{l,j}+U_l}\nonumber\\
&=\varsigma_{l,i}+\varsigma_{l,j} \label{34}
\end{align}
for every $j=0,1,\ldots,\delta-1,$ $j\neq i,$ where the second equality follows from Lemma~\ref{02}.
We now show that $\lv\left\{0\leq s \leq\delta-1\mid\varsigma_{l,s}=1\right\}\rv \neq \delta/2$. Suppose, to the contrary, that $\lv\left\{0\leq s \leq\delta-1\mid\varsigma_{l,s}=1\right\}\rv = \delta/2$. It follows from~\eqref{34} that for every $s=0,1,\ldots,\delta-1$ with $\varsigma_{l,s}=1$
\begin{align*}\lv\left\{0\leq j \leq\delta-1,~j\neq s \mid e\cdot \chi_{W_{l,s,j}}=1\right\}\rv=\delta/2.\end{align*}
Applying the already proved equivalence $\ref{37}\Leftrightarrow\ref{35}$, we conclude $w_{l,s}+U_{l}$ is odd for every $s=0,1,\ldots,\delta-1$ with $\varsigma_{l,s}=1$. Thus, $\delta/2$ $r$-flats are odd. Since these $r$-flats are pairwise disjoint by Lemma~\ref{02}~\ref{14}, we have at least $\delta/2$ errors, a contradiction. Hence, $\lv\left\{0\leq s \leq\delta-1\mid\varsigma_{l,s}=1\right\}\rv \neq \delta/2.$
Let us assume $\mu_l \neq \varsigma_{l,i}.$ Then, by the definition of $\mu_l$ and what we have shown before, there exist at least $\delta/2+1$ scalars, say $\varsigma_{l,j_0},\ldots,\varsigma_{l,j_{\delta/2}},$ that differ from $\varsigma_{l,i}.$ According to equation~\eqref{34}, we have
$e\cdot \chi_{W_{l,i,j_s}} = 1$ for all $s=0,1,\ldots,\delta/2,$ so~\ref{35} holds.
On the other hand, assuming $\mu_l = \varsigma_{l,i}$, there are at most $\delta/2-1$ scalars $\varsigma_{l,j}$ differing from $\varsigma_{l,i}.$ By equation~\eqref{34}, fewer than $\delta/2$ of the $e\cdot \chi_{W_{l,i,j}}$, $j=0,1,\ldots,\delta-1$, $j\neq i$, equal 1, so~\ref{35} fails.
\renewcommand{\theenumi}{\alph{enumi}\)}
\renewcommand{\labelenumi}{\theenumi}
\end{IEEEproof}
\subsection{The Algorithm}\label{26}
Our new algorithm is based directly on Theorem~\ref{07}. Keeping track of which $r$-flats contain each position enables us to design the decoding procedure.
Therefore, we define mappings $\phi_l$, $l=0,1,\ldots,\delta-3$, from $\{0,1,\ldots,n-1\}$ to $\{0,1,\ldots,\delta-1\}$ ensuring that
$v_i\in w_{l,\phi_l(i)}+U_l$ and therefore $v_i+ U_l = w_{l,\phi_l(i)}+U_l$ for all $i=0,1,\ldots,n-1$, $l=0,1,\ldots,\delta-3$.
Once the decoder has been constructed, this mapping between positions and $r$-flats is no longer needed.
\begin{algorithmic}[1]
\Statex
\Input the received word $z\in\mathbb{Z}_2^{n}$
\Require at most $\delta/2-1$ errors occurred
\Output the actual transmitted codeword $c\in\RM\(r,m\)$
\State $\forall~ j=0,1,\ldots,\delta-1,$ $\forall~ l=0,1,\ldots,\delta-3,$\label{39}
\Statex $\varsigma_{l,j}:= z\cdot\chi_{w_{l,j}+U_l},$
\State $\forall~ l=0,1,\ldots,\delta-3$\label{19}
\Statex $\mu_l:= \mu\(\varsigma_{l,0},\varsigma_{l,1},\ldots,\varsigma_{l,\delta-1}\),$
\State $\forall~ i=0,1,\ldots,\delta-1,$ $\forall l=0,1,\ldots,\delta-3$\label{40}
\Statex $\overline{\varsigma_{l,i}}:= \varsigma_{l,i}+\mu_l,$\label{42}
\State $\forall~ j=0,1,\ldots,n-1$\label{21}
\Statex $\eta_j:=\mu\(\overline{\varsigma_{0,\phi_0(j)}}, \overline{\varsigma_{1,\phi_1(j)}},\ldots,\overline{\varsigma_{\delta-3,\phi_{\delta-3}(j)}}\),$
\State \Return $z+\eta:=\(z_0+\eta_0,\ldots,z_{n-1}+\eta_{n-1}\).$\label{43}
\Statex
\end{algorithmic}
First, the scalar $\varsigma_{l,i}$ is computed for every $r$-flat $w_{l,i}+U_l\in\mathcal{F}$. Second, after evaluating the majority function at $\(\varsigma_{l,0},\varsigma_{l,1},\ldots,\varsigma_{l,\delta-1}\)$ for each $l=0,1,\ldots,\delta-3$, the value $\mu_l$ is added to the scalars $\varsigma_{l,0},\ldots,\varsigma_{l,\delta-1}$, where the symbol ``$+$'' represents addition in $\mathbb{Z}_2$. By Theorem~\ref{07}, this guarantees that each $\overline{\varsigma_{l,i}}$ equals one if and only if $w_{l,i}+U_l$ is odd. Finally, the value one is assigned to $\eta_j$ if and only if the majority of the scalars
$\overline{\varsigma_{0,\phi_0(j)}}, \overline{\varsigma_{1,\phi_1(j)}},\ldots,\overline{\varsigma_{\delta-3,\phi_{\delta-3}(j)}}$
assumes one. Provided not more than $\delta/2-1$ errors occurred, $\eta$ equals the error pattern $e$ and $c=z+\eta.$
\subsection{The Complexity}
Our algorithm consists of five function levels. Analogous to Chen's algorithm, a specific function is called multiple times at each function level and all function calls at the same function level can be carried out simultaneously (see Table~\ref{table06}).
Because $m\geq 2r$ and therefore $\delta^2\geq n$, a total of $O\(\delta^2\)$ functions are called in our algorithm.
\begin{table}[t]
\setlength{\extrarowheight}{2pt}
\caption{Number of parallel function calls at each function level of the proposed decoding algorithm}
\label{table06}
\centering
\begin{tabular}{|c|c|c|c|}
\firsthline
Function &\multirow{2}{*}{Function} & Inputs& Number of (Parallel) \\
Level &&&Function Calls\\\hline\hline
1&Check-Sum&$\(n/\delta\)$ & $\delta\cdot \(\delta-2\)$\\\hline
2&Majority Vote&$\delta$& $\delta-2$ \\\hline
3&XOR&2 & $\delta\cdot\(\delta-2\)$ \\\hline
4&Majority Vote& $\(\delta-2\)$ & $n$ \\\hline
5&XOR &2& $n$ \\\lasthline
\end{tabular}
\end{table}
\subsection{An Example for $\RM\(2,5\)$ with Electronic Schematic}
For every $i$, $i=0,1,\ldots,31$, let the vector $v_i:=(v_{i,4},v_{i,3},v_{i,2},v_{i,1},v_{i,0})\in\mathbb{Z}_2^5$ be the binary representation of $i$ such that
\begin{align*}
i=16\cdot v_{i,4} + 8\cdot v_{i,3} + 4\cdot v_{i,2} + 2\cdot v_{i,1} + v_{i,0}.
\end{align*}
For readability, we mostly write $i$ instead of $v_i$.
The Reed--Muller code $\RM\(2,5\)$ is an $[n=32,k=16,\delta=8]$-code correcting three errors. A generator matrix $G$ is given by
\begingroup
\setlength{\arraycolsep}{1.15pt}
\renewcommand{\arraystretch}{1.0}
\begin{align*}\left(
\begin{array}{cccccccccccccccccccccccccccccccc}\\[-0.1em]
1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\\
0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1\\
0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1\\
0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1\\
0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0&1&1&0&0&1&1&0&0&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&1\\
0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&1&1&0&0&1&1&0&0&0&0&0&0&0&0&0&0&1&1&0&0&1&1\\
0&0&0&0&0&0&0&0&0&1&0&1&0&1&0&1&0&0&0&0&0&0&0&0&0&1&0&1&0&1&0&1\\
0&0&0&0&0&0&1&1&0&0&0&0&0&0&1&1&0&0&0&0&0&0&1&1&0&0&0&0&0&0&1&1\\
0&0&0&0&0&1&0&1&0&0&0&0&0&1&0&1&0&0&0&0&0&1&0&1&0&0&0&0&0&1&0&1\\
0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&1\\
\end{array}\right)
\end{align*}
\endgroup
First, the decoder itself needs to be created. As presented in Section~\ref{sect06}, we construct six two-dimensional subspaces $U_0,U_1,\ldots,U_5\subset \mathbb{Z}_2^m$ and corresponding complementary subspaces $W_0,W_1,\ldots,W_5\subset \mathbb{Z}_2^m$,
\begin{flalign*}
U_0&:=\{0, 1, 30, 31\},& W_0&:=\{0, 2, 8, 10, 16, 18, 24, 26\}\\
U_1&:=\{0, 2, 24, 26\},& W_1&:=\{0, 3, 4, 7, 16, 19, 20, 23\}\\
U_2&:=\{0, 3, 20, 23\},& W_2&:=\{0, 4, 8, 12, 18, 22, 26, 30\}\\
U_3&:=\{0, 4, 18, 22\},& W_3&:=\{0, 2, 5, 7, 25, 27, 28, 30\}\\
U_4&:=\{0, 5, 25, 28\},& W_4&:=\{0, 6, 8, 14, 19, 21, 27, 29\}\\
U_5&:=\{0, 6, 27, 29\},& W_5&:=\{0, 1, 8, 9, 22, 30, 23, 31\}
\end{flalign*}
Based on each subspace $U_j$, $j=0,1,\ldots,5$, there exist eight 2-flats $w+U_j$, $w\in W_j.$
\begin{flalign*}
v_{0}+U_0&:=\{0,1,30,31\}, &v_{0}+U_1&:=\{0, 2, 24, 26\},\\
v_{2}+U_0&:=\{2,3,28,29\}, &v_{3}+U_1&:=\{1,3, 25,27\},\\
v_{8}+U_0 &:= \{8,9,22,23\}, &v_{4}+U_1&:=\{4,6,28,30\},\\
v_{10}+U_0&:=\{10,11,20,21\}, &v_{7}+U_1&:=\{5,7,29,31\},\\
v_{16}+U_0&:=\{14, 15, 16,17\}, &v_{16}+U_1&:=\{8,10,16,18\},\\
v_{18}+U_0&:=\{12, 13, 18, 19\}, &v_{19}+U_1&:=\{9, 11, 17, 19\},\\
v_{24}+U_0&:=\{6,7,24,25\}, &v_{20}+U_1&:=\{12,14,20,22\},\\
v_{26}+U_0&:=\{4,5,26,27\}, &v_{23}+U_1&:=\{13,15, 21,23\}.
\end{flalign*}
\begin{flalign*}
v_{0}+U_2&:=\{0, 3, 20, 23\}, &v_{0}+U_3&:=\{0, 4, 18, 22\},\\
v_{4}+U_2&:=\{4, 7, 16, 19\}, &v_{2}+U_3&:=\{2, 6, 16, 20\},\\
v_{8}+U_2 &:= \{8, 11, 28, 31\}, &v_{5}+U_3&:=\{1,5, 19,23\},\\
v_{12}+U_2&:=\{12, 15, 24, 27\}, &v_{7}+U_3&:=\{3, 7, 17, 21\},\\
v_{18}+U_2&:=\{5, 6, 17, 18\}, &v_{25}+U_3&:=\{11, 15, 25, 29\},\\
v_{22}+U_2&:=\{1, 2, 21, 22\}, &v_{27}+U_3&:=\{9, 13, 27, 31\},\\
v_{26}+U_2&:=\{13, 14, 25, 26\}, &v_{28}+U_3&:=\{10, 14, 24, 28\},\\
v_{30}+U_2&:=\{9, 10, 29, 30\}, &v_{30}+U_3&:=\{8, 12, 26, 30\}.
\end{flalign*}
\begin{flalign*}
v_{0}+U_4&:=\{0, 5, 25, 28\}, &v_{0}+U_5&:=\{0, 6, 27, 29\},\\
v_{6}+U_4&:=\{3, 6, 26, 31\}, &v_{1}+U_5&:=\{1, 7, 26, 28\},\\
v_{8}+U_4 &:= \{8, 13, 17, 20\}, &v_{8}+U_5&:=\{8, 14, 19, 21\},\\
v_{14}+U_4&:=\{11, 14, 18, 23\}, &v_{9}+U_5&:=\{9, 15, 18, 20\},\\
v_{19}+U_4&:=\{10, 15, 19, 22\}, &v_{22}+U_5&:=\{11, 13, 16, 22\},\\
v_{21}+U_4&:=\{9, 16, 12, 21\}, &v_{30}+U_5&:=\{3, 5, 24,30\},\\
v_{27}+U_4&:=\{2, 7, 27, 30\}, &v_{23}+U_5&:=\{10, 12, 17, 23\},\\
v_{29}+U_4&:=\{1, 4, 24, 29\}, &v_{31}+U_5&:=\{2, 4, 25, 31\}.
\end{flalign*}
The mappings $\phi_l$, $l=0,1,\ldots,5$, are specified in Table~\ref{table09} ensuring
$v_i+ U_l = w_{l,\phi_l(i)}+U_l$ for all $i=0,1,\ldots,31$.
\begin{table}[t]
\setlength{\extrarowheight}{2pt}
\caption{Mappings $\phi_0,\phi_1,\ldots,\phi_5$}
\label{table09}
\centering
\begin{tabular}{cl}
\firsthline
$l$& $\left(\phi_l(0), \phi_l(1)\ldots, \phi_l(31)\right)$\\\hline\hline
\multirow{2}{*}{$l=0$}&$\left(0, 0, 1, 1, 7, 7, 6, 6,2, 2, 3, 3, 5, 5, 4, 4,\right.$\\
&$\phantom{{}}\left.4, 4, 5, 5, 3, 3, 2, 2,6, 6, 7, 7, 1, 1, 0, 0\right).$\\\hline
\multirow{2}{*}{$l=1$}&$\left(0, 1, 0, 1, 2, 3, 2, 3,4, 5, 4, 5, 6, 7, 6, 7,\right.$\\
&$\phantom{{}}\left.4, 5, 4, 5, 6, 7, 6, 7,0, 1, 0, 1, 2, 3, 2, 3\right).$\\\hline
\multirow{2}{*}{$l=2$}&$\left(0, 5, 5, 0, 1, 4, 4, 1,2, 7, 7, 2, 3, 6, 6, 3,\right.$\\
&$\phantom{{}}\left.1, 4, 4, 1, 0, 5, 5, 0,3, 6, 6, 3, 2, 7, 7, 2\right).$\\\hline
\multirow{2}{*}{$l=3$}&$\left(0, 2, 1, 3, 0, 2, 1, 3,7, 5, 6, 4, 7, 5, 6, 4,\right.$\\
&$\phantom{{}}\left.1, 3, 0, 2, 1, 3, 0, 2,6, 4, 7, 5, 6, 4, 7, 5\right).$\\\hline
\multirow{2}{*}{$l=4$}&$\left(0, 7, 6, 1, 7, 0, 1, 6,2, 5, 4, 3, 5, 2, 3, 4,\right.$\\
&$\phantom{{}}\left.5, 2, 3, 4, 2, 5, 4, 3,7, 0, 1, 6, 0, 7, 6, 1\right).$\\\hline
\multirow{2}{*}{$l=5$}&$\left(0, 1, 7, 5, 7, 5, 0, 1,2, 3, 6, 4, 6, 4, 2, 3,\right.$\\
&$\phantom{{}}\left.4, 6, 3, 2, 3, 2, 4, 6,5, 7, 1, 0, 1, 0, 5, 7\right).$\\\lasthline
\end{tabular}
\end{table}
After constructing the underlying geometrical structure of our decoder, we consider the following example.
Let $u=\(1,1,1,0,0,0,0,0,0,0,0,1,1,1,0,0\)$ be the message word. Then,
\begingroup
\setlength{\arraycolsep}{1.0pt}
\begin{align*}c&=u\cdot G\\
&=\left(
\begin{array}{cccccccccccccccccccccccccccccccc}
1&1&1&1&1&1&0&0&0&1&1&0&0&1&0&1&0&0&0&0&0&0&1&1&1&0&0&1&1&0&1&0
\end{array}\right)
\end{align*}
\endgroup
is the corresponding codeword from $\RM(2,5)$. Suppose $c$ was sent through a noisy channel and
\begingroup
\setlength{\arraycolsep}{1.0pt}
\begin{align*}z&=\left(
\begin{array}{cccccccccccccccccccccccccccccccc}
0&0&1&1&1&1&0&0&0&1&1&0&0&1&0&1&0&0&0&0&0&0&1&1&1&0&0&1&1&0&1&1
\end{array}\right)
\end{align*}
\endgroup
was received with errors at positions 0,1 and 31.
The decoding can be performed as stated in Section~\ref{26}.
\begin{algorithmic}[1]
\Input $z$
\State $\(\varsigma_{0,0},\varsigma_{0,1},\ldots,\varsigma_{0,7}\)=\(0,1,1,1,1,1,1,1\),$
\Statex $\(\varsigma_{1,0},\varsigma_{1,1},\ldots,\varsigma_{1,7}\)=\(0,0,1,0,1,1,1,1\),$
\Statex $\(\varsigma_{2,0},\varsigma_{2,1},\ldots,\varsigma_{2,7}\)=\(0,1,0,1,1,0,1,1\),$
\Statex $\(\varsigma_{3,0},\varsigma_{3,1},\ldots,\varsigma_{3,7}\)=\(0,1,0,1,1,0,1,1\),$
\Statex $\(\varsigma_{4,0},\varsigma_{4,1},\ldots,\varsigma_{4,7}\)=\(0,0,1,1,1,1,1,0\),$
\Statex $\(\varsigma_{5,0},\varsigma_{5,1},\ldots,\varsigma_{5,7}\)=\(1,1,0,0,0,0,0,1\),$
\Statex
\State $\mu_0:= \mu\(\varsigma_{0,0},\varsigma_{0,1},\ldots,\varsigma_{0,7}\) = \mu\(0,1,1,1,1,1,1,1\)=1$,
\Statex $\mu_1:= \mu\(\varsigma_{1,0},\varsigma_{1,1},\ldots,\varsigma_{1,7}\) = \mu\(0,0,1,0,1,1,1,1\)=1,$
\Statex $\mu_2:= \mu\(\varsigma_{2,0},\varsigma_{2,1},\ldots,\varsigma_{2,7}\) = \mu\(0,1,0,1,1,0,1,1\)=1,$
\Statex $\mu_3:= \mu\(\varsigma_{3,0},\varsigma_{3,1},\ldots,\varsigma_{3,7}\) = \mu\(0,1,0,1,1,0,1,1\)=1,$
\Statex $\mu_4:= \mu\(\varsigma_{4,0},\varsigma_{4,1},\ldots,\varsigma_{4,7}\) = \mu\(0,0,1,1,1,1,1,0\)=1,$
\Statex $\mu_5:= \mu\(\varsigma_{5,0},\varsigma_{5,1},\ldots,\varsigma_{5,7}\) = \mu\(1,1,0,0,0,0,0,1\)=0,$
\Statex
\State $\(\overline{\varsigma_{0,0}},\overline{\varsigma_{0,1}},\ldots,\overline{\varsigma_{0,7}}\)$
\Statex $=\(\varsigma_{0,0},\varsigma_{0,1},\ldots,\varsigma_{0,7}\) + \(\mu_0,\mu_0,\mu_0,\mu_0,\mu_0,\mu_0,\mu_0,\mu_0\)$
\Statex $=\(1,0,0,0,0,0,0,0\),$
\Statex $\(\overline{\varsigma_{1,0}},\overline{\varsigma_{1,1}},\ldots,\overline{\varsigma_{1,7}}\)=\(1,1,0,1,0,0,0,0\),$
\Statex $\(\overline{\varsigma_{2,0}},\overline{\varsigma_{2,1}},\ldots,\overline{\varsigma_{2,7}}\)=\(1,0,1,0,0,1,0,0\),$
\Statex $\(\overline{\varsigma_{3,0}},\overline{\varsigma_{3,1}},\ldots,\overline{\varsigma_{3,7}}\)=\(1,0,1,0,0,1,0,0\),$
\Statex $\(\overline{\varsigma_{4,0}},\overline{\varsigma_{4,1}},\ldots,\overline{\varsigma_{4,7}}\)=\(1,1,0,0,0,0,0,1\),$
\Statex $\(\overline{\varsigma_{5,0}},\overline{\varsigma_{5,1}},\ldots,\overline{\varsigma_{5,7}}\)=\(1,1,0,0,0,0,0,1\),$
\State $\eta_0:=
\mu\(\overline{\varsigma_{0,\phi_0(0)}}, \overline{\varsigma_{1,\phi_1(0)}},\ldots,\overline{\varsigma_{5,\phi_{5}(0)}}\)
=\mu\(1,1,1,1,1,1\)=1,$
\Statex $\eta_1:=
\mu\(1,1,1,1,1,1\)=1,$
\Statex $\eta_2:=
\mu\(0,1,1,0,0,1\)=0,$
\Statex $\eta_3:=
\mu\(0,1,1,0,1,0\)=0,$
\Statex ...
\Statex $\eta_{30}:=
\mu\(1,0,0,0,0,0\)=0,$
\Statex $\eta_{31}:=
\mu\(1,1,1,1,1,1\)=1,$
\State \Return $z+\eta \overset{!}{=} c.$
\Statex
\end{algorithmic}
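The worked example can be replayed in a few lines of code. The following Python sketch is our own illustration, not the hardware design: it encodes vectors of $\mathbb{Z}_2^5$ as the integers $0,\ldots,31$, rebuilds the 2-flats $w+U_l$ from the subspaces above, and performs the two majority steps. The mappings $\phi_l$ arise implicitly by distributing each $\overline{\varsigma_{l,i}}$ to the four positions of its flat:

```python
# Subspaces U_l and complementary representatives W_l from the RM(2,5) example.
U = [[0, 1, 30, 31], [0, 2, 24, 26], [0, 3, 20, 23],
     [0, 4, 18, 22], [0, 5, 25, 28], [0, 6, 27, 29]]
W = [[0, 2, 8, 10, 16, 18, 24, 26], [0, 3, 4, 7, 16, 19, 20, 23],
     [0, 4, 8, 12, 18, 22, 26, 30], [0, 2, 5, 7, 25, 27, 28, 30],
     [0, 6, 8, 14, 19, 21, 27, 29], [0, 1, 8, 9, 22, 30, 23, 31]]

def majority(bits):
    """1 iff strictly more than half of the inputs equal 1."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def decode(z):
    votes = [[] for _ in range(32)]                      # one vote list per position
    for l in range(6):
        flats = [[w ^ u for u in U[l]] for w in W[l]]    # the 2-flats w + U_l
        sigma = [sum(z[p] for p in f) % 2 for f in flats]  # check-sums
        mu_l = majority(sigma)
        for i, f in enumerate(flats):
            for p in f:                                  # phi_l(p) = i, implicitly
                votes[p].append(sigma[i] ^ mu_l)
    eta = [majority(v) for v in votes]                   # estimated error pattern
    return [zi ^ ei for zi, ei in zip(z, eta)]
```

Applied to the received word $z$ from the example, `decode` flips exactly positions 0, 1 and 31 and returns the codeword $c$.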
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true, scale=0.395, clip=true, trim=0mm 0mm 0mm 0mm]{\MyPath Fig-1}
\caption{The proposed decoder for $\RM\(2,5\)$ with input $z$ and output $c$ provided not more than three errors occurred.}
\label{25}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true, scale=0.395, clip=true, trim=0mm 0mm 00mm 0mm]{\MyPath Fig-2}
\caption{The parity-majority module $P\mu$ corresponding to any $l\in\{0,1,\ldots,5\}$ with input $\(z_{\psi_l(0)},z_{\psi_l(1)},\ldots,z_{\psi_l(31)}\)$ and output $\overline{\varsigma_{l,0}},\overline{\varsigma_{l,1}},
\ldots,\overline{\varsigma_{l,7}}\in\mathbb{Z}_2^{8}.$
In the first layer, even parity generators compute the check-sums and return $\varsigma_{l,0},\varsigma_{l,1},\ldots,\varsigma_{l,7}$ from top to bottom. The majority gate in the second layer returns $\mu_l$. Using XOR gates, $\mu_l$ and $\varsigma_{l,0}, \varsigma_{l,1},\ldots, \varsigma_{l,7}$ are combined in the third layer.}
\label{44}
\end{figure}
\begin{table}[t]
\setlength{\extrarowheight}{2pt}
\caption{Mappings $\psi_0,\psi_1,\ldots,\psi_5$}
\label{table10}
\centering
\begin{tabular}{cl}
\firsthline
$l$& $\left(\psi_l(0), \psi_l(1)\ldots, \psi_l(31)\right)$\\\hline\hline
\multirow{2}{*}{$l=0$}&$\left(0, 1, 30, 31, 2, 3, 28, 29,8, 9, 22, 23, 10, 11, 20, 21,\right.$\\
&$\phantom{{}}\left.14, 15, 16, 17, 12, 13, 18, 19,6, 7, 24, 25, 4, 5, 26, 27\right).$\\\hline
\multirow{2}{*}{$l=1$}&$\left(0, 2, 24, 26, 1, 3, 25, 27,4, 6, 28, 30, 5, 7, 29, 31,\right.$\\
&$\phantom{{}}\left.8, 10, 16, 18, 9, 11, 17, 19,12, 14, 20, 22, 13, 15, 21, 23\right).$\\\hline
\multirow{2}{*}{$l=2$}&$\left(0, 3, 20, 23, 4, 7, 16, 19,8, 11, 28, 31, 12, 15, 24, 27,\right.$\\
&$\phantom{{}}\left.5, 6, 17, 18, 1, 2, 21, 22,13, 14, 25, 26, 9, 10, 29, 30\right).$\\\hline
\multirow{2}{*}{$l=3$}&$\left(0, 4, 18, 22, 2, 6, 16, 20,1, 5, 19, 23, 3, 7, 17, 21,\right.$\\
&$\phantom{{}}\left.11, 15, 25, 29, 9, 13, 27, 31,10, 14, 24, 28, 8, 12, 26, 30\right).$\\\hline
\multirow{2}{*}{$l=4$}&$\left(0, 5, 25, 28, 3, 6, 26, 31,8, 13, 17, 20, 11, 14, 18, 23,\right.$\\
&$\phantom{{}}\left.10, 15, 19, 22, 9, 16, 12, 21, 2, 7, 27, 30, 1, 4, 24, 29\right).$\\\hline
\multirow{2}{*}{$l=5$}&$\left(0, 6, 27, 29, 1, 7, 26, 28,8, 14, 19, 21, 9, 15, 18, 20,\right.$\\
&$\phantom{{}}\left.11, 13, 16, 22, 3, 5, 24, 30,10, 12, 17, 23, 2, 4, 25, 31\right).$\\\lasthline
\end{tabular}
\end{table}
Fig.~\ref{25} and Fig.~\ref{44} show how the decoding architecture can be built in hardware for a Reed--Muller code $\RM\(2,5\).$
For reasons of clarity and comprehensibility, we structure the decoder (see Fig.~\ref{25}) such that six identical modules, one for every $l=0,1,\ldots,5$, execute lines 1, 2 and 3 of the proposed algorithm (cf. Section~\ref{26}). A schematic of such a \emph{parity-majority module}, denoted by $P\mu$, is presented in Fig.~\ref{44}.
The blocks labeled with $\omega_0,\omega_1,\ldots,\omega_5$ and $\omega_0^{-1},\omega_1^{-1},\ldots,\omega_5^{-1}$ do not contain any logic gate. They just represent fixed wirings permuting the 32 inputs. The corresponding permutations $\psi_0,\psi_1,\ldots,\psi_5$ are specified in Table~\ref{table10}.
More precisely, within the block $\omega_l$, the input signals, indexed from 0 to 31, are rearranged in the order $\psi_l(0),\psi_l(1),\ldots,\psi_l(31)$ such that the $i$-th signal is placed at position $j$, where $\psi_l(j)=i$. Thus, the 32-bit input of the $l$-th module $P\mu$ is just
$\left(z_{\psi_l(0)},z_{\psi_l(1)},\ldots,z_{\psi_l(31)}\right)$.
The module $P\mu$ processes these signals and returns the eight output signals $\left(\overline{\varsigma_{l,0}},\overline{\varsigma_{l,1}},\ldots,\overline{\varsigma_{l,7}}
\right)$.
Recalling that $\overline{\varsigma_{l,i}}$, $l=0,1,\ldots,5$, $i=0,1,\ldots,7$, states whether $w_{l,i}+U_l$ is odd or even,
every signal $\overline{\varsigma_{l,i}}$ needs to be conveyed to the four different majority gates corresponding to the four vectors contained in $w_{l,i}+U_l$. Therefore, within block $\omega_l^{-1}$, the 32 signals are reordered such that the signal at position $i$, $i=0,1,\ldots,31$, is transferred to position $\psi_l(i)$. This second permutation ensures that the $i$-th signal yields information for determining the $i$-th entry of the codeword,~$c_i$.
\section{Comparison of Complexity}\label{sect04}
In this section, we compare our algorithm with Chen's algorithm in terms of the number of function calls as well as in terms of depth and size of circuits realizing the algorithms. Clearly, the number of function calls correlates with time complexity, while the depth and size of a circuit indicate parallel time and space consumption, respectively.
\subsection{Number of Function Calls}
An overview of the executed functions with respect to the number of inputs and how often each is called in Chen's and the proposed algorithm is provided in Table~\ref{table07}.
\begin{table}[t]
\setlength{\extrarowheight}{2pt}
\caption{Number of function calls in Chen's and the proposed algorithm}
\label{table07}
\centering
\begin{tabular}{|r|r||c|c|}
\firsthline
\multirow{2}{*}{Function}&\multirow{2}{*}{Inputs}&\multicolumn{2}{c|}{Number of Function Calls in}\\\cline{3-4}
&& Chen's Algorithm & the New Algorithm\\\hline\hline
\multirow{2}{*}{Check-Sum} & $\(2n/\delta\)$ & $n\cdot \(\delta-2\)^2$ &0 \\%\cline{2-4}
&$\(n/\delta\)$ & 0 &$\delta\cdot\(\delta-2\)$ \\\hline
\multirow{2}{*}{Majority Vote} & $\delta$& 0& $\delta-2$ \\%\cline{2-4}
&$\(\delta-2\)$& $n\(\delta-1\)$ & $n$\\\hline
XOR &2&$n$& $n + \delta\cdot\(\delta-2\)$ \\\hline\hline
In Total & & $O\(n\delta^2\)$ & $O\(\delta^2\)$
\\\lasthline
\end{tabular}
\end{table}
\begin{table*}[t]
\setlength{\extrarowheight}{2pt}
\caption{Number of function calls in Chen's and the proposed algorithm for selected Reed--Muller codes}
\label{table08}
\centering
\begin{tabular}{|r||c|c|c||c|c|c||c|c|c||c|c|c|}
\firsthline
\multirow{4}{*}{Function}
&\multicolumn{3}{c||}{RM(2,4)}&\multicolumn{3}{c||}{RM(2,5)}
&\multicolumn{3}{c||}{RM(3,6)}&\multicolumn{3}{c|}{RM(3,7)}
\\\cline{2-13}
&\multirow{3}{*}{Inputs}&\multicolumn{2}{c||}{\# Function Calls in}&\multirow{3}{*}{Inputs}&\multicolumn{2}{c||}{\# Function Calls in}
&\multirow{3}{*}{Inputs}&\multicolumn{2}{c||}{\# Function Calls in}&\multirow{3}{*}{Inputs}&\multicolumn{2}{c|}{\# Function Calls in}
\\
& & Chen's & the New & & Chen's & the New & & Chen's & the New & & Chen's & the New
\\
&& \multicolumn{2}{c||}{Algorithm}
&& \multicolumn{2}{c||}{Algorithm}
&& \multicolumn{2}{c||}{Algorithm}
&& \multicolumn{2}{c|}{Algorithm}
\\\hline\hline
\multirow{2}{*}{Check-Sum}
&8& 64 &0
&8& 1,152 &0
&16& 2,304 &0
&16& 25,088 &0
\\%\cline{2-4}
&4& 0 &8
&4& 0 &48
&8& 0 &48
&8& 0 &224
\\\hline
\multirow{2}{*}{Majority Vote}
& 4& 0& $2$
& 8& 0& $6$
& 8& 0& 6
& 16& 0& 14
\\%\cline{2-4}
& 2 & 48 & 16
& 6 & 224 & 32
& 6 &448 & 64
& 14 &1,920 & 128
\\\hline
XOR
&2 &16& 24
&2 &32& 80
&2 &64& 112
&2 &128& 352
\\\hline\hline
\vtop{
\vskip11pt
\hbox{\includegraphics[keepaspectratio=true,scale=0.38, ]
{\MyPath Fig-3}}}
&\multicolumn{3}{c||}{
\vtop{
\vskip0pt
\hbox{
\includegraphics[keepaspectratio=true,scale=0.38, ]
{\MyPath Fig-4}}}}
&\multicolumn{3}{c||}{
\vtop{
\vskip0pt
\hbox{
\includegraphics[keepaspectratio=true,scale=0.38, ]
{\MyPath Fig-5}}}}
&\multicolumn{3}{c||}{
\vtop{
\vskip0pt
\hbox{
\includegraphics[keepaspectratio=true,scale=0.38, ]
{\MyPath Fig-6}}}}
&\multicolumn{3}{c|}{
\vtop{
\vskip0pt
\hbox{
\includegraphics[keepaspectratio=true,scale=0.38, ]
{\MyPath Fig-7}}}}\\\lasthline
\end{tabular}
\end{table*}
Decoding with our method instead of Chen's algorithm thus reduces the number of check-sums to be computed by a factor on the order of $n$ and the number of majority votes by a factor on the order of $\delta$. The parameterized data of Table~\ref{table07} is illustrated by way of example in Table~\ref{table08}.
\subsection{Size and Depth of Combinational Circuits}
We want to investigate the size and depth of combinational circuits realizing Chen's and the proposed decoding algorithm. To this end, we need to consider concrete implementations of the majority-vote and check-sum functions.
In the following, we assume majority voting is performed in constant time by a single \emph{majority gate}, a specific \emph{linear threshold gate}. Linear threshold gates compute for a given threshold $T\in\mathbb{R}$ and for given weights $w_1,\ldots,w_s\in\mathbb{R}$ the Boolean function $\vartheta: \left\{0,1\right\}^s\rightarrow \left\{0,1\right\}$ where
\begin{align*}\vartheta\(x_1,x_2,\ldots,x_s\):=\begin{cases}
1& \sum_{i=1}^s w_i\cdot x_i\geq T\\
0& \text{otherwise}
\end{cases}\end{align*}
\(cf., e.g.,~\cite[ch. 1, sect. 1.1]{Hassoun95}\).
Thus, a majority gate with $s$ inputs is a linear threshold gate where each weight equals one and the threshold equals $\left\lfloor s/2\right\rfloor +1$.
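To make the definitions above concrete, the following sketch (our own illustration; the function names are not from the paper) evaluates a linear threshold gate and the majority gate as its special case with unit weights and threshold $\left\lfloor s/2\right\rfloor +1$:

```python
# Illustrative sketch: a Boolean linear threshold gate, and the majority gate
# as the special case with all weights equal to one and threshold
# floor(s/2) + 1. Function names are ours, chosen for this example.

def threshold_gate(weights, T, x):
    """Return 1 iff sum_i w_i * x_i >= T, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= T else 0

def majority_gate(x):
    """Majority vote on s input bits: unit weights, threshold floor(s/2) + 1."""
    s = len(x)
    return threshold_gate([1] * s, s // 2 + 1, x)
```

With this convention a tie on an even number of inputs is resolved to $0$, since the threshold $\left\lfloor s/2\right\rfloor +1$ strictly exceeds $s/2$.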
An \emph{even parity generator} is a combinational circuit which computes the \emph{even parity bit} from the input bits. The even parity bit is set to one if and only if the number of input bits which take on the value one is odd. Every check-sum $z\cdot\chi_S$, $S:=\left\{v_{i_1},v_{i_2},\ldots,v_{i_{\lv S\rv}}\right\}\subseteq \mathbb{Z}_2^m$, can be calculated by an even parity generator taking $\left(z_{i_1},z_{i_2},\ldots,z_{i_{\lv S\rv}}\right)$ as input.
Even parity generators with $N$ inputs and of depth $\left\lceil\log_2(N)\right\rceil$ can simply be built from $N-1$ two-input XOR gates.
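The tree construction just mentioned can be sketched as follows (our own illustrative code): pairing wires level by level consumes exactly $N-1$ two-input XOR gates in $\left\lceil\log_2(N)\right\rceil$ levels.

```python
# Illustrative sketch of an even parity generator built as a balanced tree of
# two-input XOR gates; we also count the gates and levels used.

def parity_tree(bits):
    """Return (parity_bit, gate_count, depth) for a balanced XOR tree."""
    level, gates, depth = list(bits), 0, 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] ^ level[i + 1])  # one 2-input XOR gate
            gates += 1
        if len(level) % 2:  # an unpaired wire passes through to the next level
            nxt.append(level[-1])
        level, depth = nxt, depth + 1
    return level[0], gates, depth
```

For $N=8$ inputs this uses $7$ gates in $3$ levels, matching $N-1$ and $\left\lceil\log_2(N)\right\rceil$.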
It is not surprising that even parity generators with $N$ inputs and of constant depth require a superpolynomial \(in $N$\) number of unbounded fan-in AND, OR and NOT gates~\cite{FurstSaxeSipser81}. By using linear threshold gates, however, constant depth and polynomial size can be achieved.\\
Minnick showed in 1961 that a $(2N)$-bit even parity generator of depth 2 can be constructed with $N+1$ linear threshold gates~\cite{Minnick61}. Furthermore, at most $\left\lfloor 2\sqrt{2N-2} + 4\right\rfloor$ linear threshold gates are required for a $(2N)$-bit even parity generator of depth 3~\cite{SRK94}. In fact, the parity function with $N$ inputs can be realized by a threshold circuit of any given depth $d\geq 2$ and size $O\(dN^{1/(d-1)}\)$~\cite{SRK91}.
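To make the idea of computing parity in depth 2 with threshold gates concrete, here is a deliberately simple sketch of our own using $n+1$ gates for $n$ inputs (this is not Minnick's more economical construction): a first layer of gates $y_k=\left[\sum_i x_i\geq k\right]$ and one output gate forming their alternating sum, which telescopes to the parity.

```python
# Toy depth-2 threshold circuit for parity using n + 1 gates on n inputs.
# This is an unoptimized variant of our own, not Minnick's construction.

def parity_depth2(x):
    n, s = len(x), sum(x)
    # Layer 1: n threshold gates, y_k = [sum_i x_i >= k] for k = 1, ..., n.
    y = [1 if s >= k else 0 for k in range(1, n + 1)]
    # Layer 2: one threshold gate with alternating weights +1, -1, +1, ...
    # and threshold 1; if s bits are set, the weighted sum equals s mod 2.
    acc = sum(((-1) ** k) * y[k] for k in range(n))
    return 1 if acc >= 1 else 0
```

If $s$ input bits are set, the first $s$ layer-one gates fire and the alternating sum $1-1+1-\dots$ over $s$ terms equals $s \bmod 2$, so the output gate reports the parity.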
Recalling the particular function levels of our new algorithm, it can be implemented in a circuit of any given depth $d\geq 6$ and size $s_{\New}(d)=O\(\delta^2\cdot d\cdot{(n/\delta)}^{1/(d-5)}\)$. In this case, the circuit consists of two layers of XOR gates, two layers of majority gates and $d-4$ layers of linear threshold gates. On the other hand, Chen's algorithm can be realized by a circuit of depth $d\geq 5$ and size $s_{\Chen}(d)=O\( n\cdot \delta^2 \cdot d\cdot{(n/\delta)}^{1/(d-4)}\)$ with one layer of XOR gates, two layers of majority gates and $d-3$ layers of linear threshold gates.
Note that for all $d\geq 6$,
\begin{align*}
s_{\Chen}(d)/s_{\New}(d)
= O\(\delta\cdot{(n/\delta)}^{1-1/{((d-4)(d-5))}}\),
\end{align*}
where
\begin{align*}
\min_{d\geq 6}\(\delta\cdot(n/\delta)^{1-1/((d-4)(d-5))}\)=\delta\cdot{(n/\delta)}^{1/2}={(\delta n)}^{1/2}.
\end{align*}
Hence, using our new instead of Chen's algorithm, the size of the decoder with a fixed depth can be reduced by at least an order of $(\delta n)^{1/2}$.
Furthermore, compared to our depth-efficient decoder, the number of gates in a size-efficient circuit realizing Chen's algorithm is still higher by an order of at least $\delta$:
\begin{align*}
s_{\Chen}(d)/s_{\New}(6)= O\( \delta\cdot{(n/\delta)}^{1/(d-4)}\).
\end{align*}
\section{Conclusion}\label{sect05}
In the present paper, we proposed a new hard-decision majority-logic decoding algorithm for Reed--Muller codes $\RM\(r,m\)$ with $m \geq 3,$ $m/2 \geq r\geq 1.$ We showed how to design the decoder by explicitly constructing its underlying geometrical structure. As a result, our algorithm is easy to implement in both software and hardware. In embedded systems, the proposed decoder can be realized by a simple non-clocked combinational circuit without any registers or flip-flops.
\balance
Regarding the number of operations, recursive decoders~\cite{Dumer04,SchnBos95} usually outperform those based on majority-logic~\cite{Reed54, Chen71, Chen72} and the proposed one.
However, if decoding is to be performed as fast as possible, parallel processing of the functions is appropriate. Clearly, this cannot be achieved to a sufficient degree by recursive algorithms: their decoding hierarchy is too deeply nested to allow fast parallel decoding. Therefore, if algorithms are evaluated on the basis of the required parallel time, majority-logic decoding is preferable to recursive decoding.
We aimed to construct an algorithm which decodes in constant parallel time but is less complex than the best known majority-logic decoders. In fact, Chen's~\cite{Chen71, Chen72} as well as the presented algorithm offers decoding with a constant level of nesting. Nevertheless, using the new method instead of Chen's algorithm, the number of function calls and space consumption can be reduced by at least an order of $n$ and $\delta$, respectively.
Thus, the proposed decoder is a good candidate when massively parallel decoding of all bits in real-time or near real-time is desired.
\section*{Acknowledgment}
The authors thank the anonymous reviewers for their constructive comments and useful suggestions which greatly contributed to improving the manuscript.
\section{Introduction}
\par Given an elliptic curve $E$ defined over the rationals, and a prime number $p$, denote by $E[p]$ the $p$-torsion subgroup of $E(\bar{\mathbb{Q}})$. B.Mazur introduced deformation functors, parametrizing lifts of the residual representation \[\bar{\rho}:\op{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \op{GL}_2(\mathbb{F}_p)\] on $E[p]$. These functors are represented by universal deformation rings and their study gained considerable momentum in \cite{wilesflt, TW, BCDT}, where the modularity of an elliptic curve $E_{/\mathbb{Q}}$ is established. The approach involved showing that a certain deformation ring associated to $\bar{\rho}$ is in fact isomorphic to a localized Hecke algebra associated to a space of modular forms. A result of this flavor which establishes an isomorphism between a deformation ring and a localized Hecke algebra is known as an $\op{R}=\mathbb{T}$ theorem. Here, $\op{R}$ refers to the deformation ring associated to a residual representation $\bar{\rho}$ and $\mathbb{T}$ is the associated localized Hecke algebra. The Galois representations which arise from elliptic curves and modular forms satisfy additional local conditions, and hence, the deformations functors of interest are subject to a local constraint at $p$, which is defined using $p$-adic Hodge theory. There has been considerable interest in extending such $\op{R}=\mathbb{T}$ results to more general Galois representations arising from cuspidal automorphic representations of higher rank, see for instance \cite{CHT, taylor,lambetal}. For instance, a brilliant application of such automorphy results led to the resolution of the Sato-Tate conjecture, see \cite{LGHT}.
\par In this paper, we shall consider deformations with fixed determinant equal to the cyclotomic character. Given the importance of deformation rings in proving modularity results, it is of natural interest to explicitly characterize Galois deformation rings. Such investigations were carried out by N.~Boston and B.~Mazur, see \cite{boston1, boston2, bostonmazur}. Presentations for Galois deformation rings with prescribed local conditions are discussed in the work of G.~B\"ockle \cite{bockle}. The deformation ring is presented by generators and relations. When there are no relations, the ring is smooth. If this is the case, the deformation ring/functor is said to be \textit{unobstructed} and it is in fact isomorphic to a power series ring over $\mathbb{Z}_p$. On the other hand, there may be relations, in which case its structure is more complicated. Such relations arise from \textit{local and global obstructions} to lifting. The local obstructions are characterized by local cohomology classes, while the global obstructions are characterized by global cohomology classes subject to local constraints (see \cite[Theorem 5.2]{bockle} for further details).
\par The deformation rings studied in this paper are equipped with a local condition at $p$; we shall refer to them as \textit{geometric deformation rings}. There is one such ring $\mathcal{R}_{E,p}$ associated to each pair $(E,p)$, where $E$ is a rational elliptic curve and $p$ an odd prime at which $E$ has good reduction. We study the structure of these rings on average, more precisely, how often they are unobstructed. Since the determinant is fixed throughout, the geometric deformation ring is smooth if and only if it is isomorphic to $\mathbb{Z}_p$. In this case, there is a unique characteristic-zero lift. We study the following questions.
\begin{Question}
\begin{enumerate}
\item For a fixed elliptic curve $E$, how often is the deformation ring $\mathcal{R}_{E,p}$ unobstructed?
\item For a fixed odd prime $p$ and $E$ varying over all rational elliptic curves with good reduction at $p$, how often is the deformation ring $\mathcal{R}_{E,p}$ unobstructed?
\end{enumerate}
\end{Question}
\par The first question was raised by Mazur in \cite[section 11]{infinitefern}, and studied by the second named author in \cite{westonmain}. In \textit{loc. cit.} one considers a Hecke eigencuspform $f$ of weight $k\geq 2$ and squarefree level. At each prime $p$, denote by $\bar{\rho}_{f,p}$ the residual representation at $p$. Then it is shown that for $100\%$ of the primes $p$, the full deformation ring associated to $\bar{\rho}_{f,p}$ (with no constraints on the determinant or local conditions) is unobstructed (and isomorphic to $\mathbb{Z}_p\llbracket X_1, X_2, X_3\rrbracket$). Furthermore, when $k>2$, the deformation ring is unobstructed for all but finitely many primes. These results were extended by J.~Hatley to the case when the level is not squarefree, see \cite{hatley}. When $f$ is the modular form associated to an elliptic curve, the weight $k=2$ and it is still expected (but not proven except in a few cases; see \cite{ridgdill}) that the full deformation ring is unobstructed for all but finitely many primes. These results have been generalized to Hilbert modular forms by A.~Gamzon \cite{gamzon}, and subsequently to regular algebraic conjugate self-dual cuspidal (RACSDC) automorphic representations by D.~Guiraud \cite{Guiraud}.
\par We show that for a fixed elliptic curve $E_{/\mathbb{Q}}$ without complex multiplication, the geometric deformation ring $\mathcal{R}_{E,p}$ is isomorphic to $\mathbb{Z}_p$ for all but an explicit finite set of primes $p$, see Theorem \ref{section 4 main thm} and Corollary \ref{section 4 last}. Next, we consider the problem where $p$ is fixed and $E$ varies over elliptic curves ordered according to height. This is certainly a more ambitious problem, and we are only able to combine our results with heuristics. We restrict ourselves to elliptic curves with good ordinary reduction at $p$, squarefree conductor and for which the residual representation $\bar{\rho}_{E,p}$ on $E[p]$ is irreducible. Furthermore, we insist that all primes $\ell$ dividing the conductor of $E$ are not $\pm 1\mod{p}$, see Definition \ref{section 5 defn}. For this set of elliptic curves, it is shown that the deformation ring $\mathcal{R}_{E,p}$ is unobstructed provided the degree of the modular parametrization $X_0(N)\rightarrow E$ is coprime to $p$. This condition has been studied in detail by M.Watkins in \cite{watkins}. Cohen-Lenstra heuristics indicate that the probability that $p$ divides the modular degree of an elliptic curve is given by the product
\[1-\prod_{i\geq 1} \left(1-\frac{1}{p^i}\right)=\frac{1}{p}+\frac{1}{p^2}-\frac{1}{p^5}-\frac{1}{p^7}+\dots.\]
The heuristics indicate that the proportion of elliptic curves subject to the above conditions for which $\mathcal{R}_{E,p}$ is not isomorphic to $\mathbb{Z}_p$ becomes smaller as $p$ increases. These heuristics are supported by computations which provide further insight into the problem.
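As a quick numerical sanity check (our own script; the truncation depth of $60$ factors is arbitrary), one can evaluate a truncation of the product above and compare it with the leading terms $1/p+1/p^2$ of its expansion:

```python
# Our own numerical sketch: the heuristic probability that p divides the
# modular degree, 1 - prod_{i >= 1} (1 - p^{-i}), truncated at `terms` factors.

def divisibility_prob(p, terms=60):
    prod = 1.0
    for i in range(1, terms + 1):
        prod *= 1.0 - p ** -i
    return 1.0 - prod
```

For $p=3,5,7$ this gives approximately $0.440$, $0.240$ and $0.163$, so the proportion of obstructed cases predicted by the heuristic decays roughly like $1/p$.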
\par \emph{Organization:} Including the introduction, the article consists of five sections. In section \ref{section 2}, we discuss the essential objects of study in the paper, namely the deformation rings associated to elliptic curves. In section \ref{section 3}, we discuss presentations for deformation rings and establish a criterion for unobstructedness, in the setting in which there is a local condition at $p$. In section \ref{section 4}, we study the first of the two aforementioned questions. In this section, it is shown that given a non-CM elliptic curve, the deformation rings considered are unobstructed for all but a finite explicit set of primes. In section \ref{section 5}, the second question is studied, when $p\geq 5$ is a fixed prime and $E$ varies over a certain collection of elliptic curves. In this section the question of unobstructedness of the deformation ring is related to the modular degree. Cohen-Lenstra heuristics are supported by computations in this section.
\subsection*{Acknowledgements}
AR would like to thank Ravi Ramakrishna for helpful conversations.
\section{Deformation theory of Galois representations}\label{section 2}
\par Throughout this section, we fix an elliptic curve $E$ defined over the rationals and a prime number $p\geq 3$ such that the following simplifying assumptions are in place
\begin{enumerate}
\item $E$ has good reduction at $p$,
\item the residual representation $\bar{\rho}_{E,p}:\op{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \op{GL}_2(\mathbb{F}_p)$ on the $p$-torsion points $E[p]$ is absolutely irreducible.
\end{enumerate}Fix an algebraic closure $\bar{\mathbb{Q}}$ of $\mathbb{Q}$. For a finite set of primes $\Sigma$, let $\mathbb{Q}_{\Sigma}\subset \bar{\mathbb{Q}}$ be the maximal extension of $\mathbb{Q}$ which is unramified at all primes $v\notin \Sigma$ and $\op{G}_{\Sigma}$ denote the Galois group $\op{Gal}(\mathbb{Q}_{\Sigma}/\mathbb{Q})$.
\subsection{Galois deformations} \par Associated to the residual representation $\bar{\rho}=\bar{\rho}_{E,p}$ are various Galois deformation rings, which parametrize the Galois representations which lift $\bar{\rho}$. We recall how these rings are defined and the properties that characterize them. Throughout, $S$ will denote the set of primes $\ell$ at which $E$ has bad reduction. Note that by assumption, $S$ does not contain $p$. The global representation gives rise to a local one at each prime $\ell$. For every prime $\ell$, choose an algebraic closure $\bar{\mathbb{Q}}_\ell$ of $\mathbb{Q}_\ell$ and an embedding $\iota_\ell:\bar{\mathbb{Q}}\hookrightarrow \bar{\mathbb{Q}}_\ell$. Set $\op{G}_\ell$ to denote the absolute Galois group $\op{Gal}(\bar{\mathbb{Q}}_\ell/\mathbb{Q}_\ell)$ and note that $\iota_\ell$ induces an inclusion $\op{G}_\ell\hookrightarrow \op{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$. With respect to this inclusion, we let $\bar{\rho}_{|\ell}$ be the restriction of $\bar{\rho}$ to $\op{G}_\ell$. We introduce various functors of \textit{Galois deformations} associated to the residual representation $\bar{\rho}$, some of which shall involve conditions on the local representations. First, we introduce the functor of deformations with no local constraints. Let $\mathcal{C}$ be the category of commutative complete local noetherian $\mathbb{Z}_p$-algebras with residue field isomorphic to $\mathbb{F}_p$. Such rings are referred to as \textit{coefficient rings} and are inverse limits of finite length local rings. A coefficient ring $A$ is equipped with the inverse limit topology and admits a presentation of the form
\[A\simeq \frac{\mathbb{Z}_p\llbracket X_1, \dots, X_n\rrbracket}{(f_1, \dots, f_m)},\] where $f_1, \dots, f_m$ are in the maximal ideal of $A$. Here, $\mathbb{Z}_p\llbracket X_1, \dots, X_n\rrbracket$ is the formal power series ring over $\mathbb{Z}_p$ in indeterminates $X_1, \dots, X_n$. Important examples of such coefficient rings include $\mathbb{Z}_p$, $\mathbb{Z}/p^N$, the ring of dual numbers $\mathbb{F}_p[\epsilon]/(\epsilon^2)$ and the Iwasawa algebra $\mathbb{Z}_p\llbracket T\rrbracket$. A coefficient ring $A$ comes equipped with a unique residue map $A\rightarrow \mathbb{F}_p$ which is a $\mathbb{Z}_p$-algebra homomorphism. A map between coefficient rings is a $\mathbb{Z}_p$-algebra homomorphism which commutes with residue maps. Denote by $\hGL{A}$ the subgroup of $\op{GL}_2(A)$ consisting of matrices which reduce to the identity in $\op{GL}_2(\mathbb{F}_p)$. Two continuous $A$-valued representations
\[\rho_1, \rho_2: \op{G}_{ S\cup \{p\}}\rightarrow \op{GL}_2(A)\]lifting $\bar{\rho}$ are strictly equivalent if they are conjugate by some matrix in $\hGL{A}$.
\begin{Definition}A deformation $\rho:\op{G}_{ S\cup \{p\}}\rightarrow \op{GL}_2(A)$ of $\bar{\rho}$ is a strict equivalence class of continuous lifts of $\bar{\rho}$.
\end{Definition}
Let $\chi$ denote the cyclotomic character. Throughout, all deformations $\rho$ of $\bar{\rho}$ are stipulated to have determinant equal to $\chi$. When we refer to a Galois deformation $\rho:\op{G}_{S\cup \{p\}}\rightarrow \op{GL}_2(A)$, it is to be understood that $\rho$ is an actual representation whose strict equivalence class is the deformation in question.
\subsection{Deformation rings}
\par The functor of deformations
\[\op{D}_{\bar{\rho}}: \mathcal{C}\rightarrow \op{Sets}\]sends a ring $A$ to the set of deformations of $\bar{\rho}$ of the form $\rho:\op{G}_{ S\cup \{p\}}\rightarrow \op{GL}_2(A)$. Note that $S\cup \{p\}$ is the set of primes at which the deformations are allowed to be ramified, and this is suppressed in the notation. In fact, throughout the paper, we will only be considering deformations with minimal ramification in this sense. Since it is assumed that $\bar{\rho}$ is irreducible, it follows from \cite[Proposition 1]{mazur1} that there is a \textit{universal} Galois deformation
\[\rho^{\op{univ}}:\op{G}_{ S\cup\{p\}}\rightarrow \op{GL}_2(\op{R}_{\bar{\rho}})\] representing $\op{D}_{\bar{\rho}}$. This means that if $A$ is a coefficient ring and $\rho:\op{G}_{ S\cup \{p\}}\rightarrow \op{GL}_2(A)$ is a Galois deformation of $\bar{\rho}$, then there is a unique map of coefficient rings $\op{R}_{\bar{\rho}}\rightarrow A$ such that the following diagram commutes
\[ \begin{tikzpicture}[node distance = 2.5 cm, auto]
\node at (0,0) (G) {$\op{G}_{ S\cup \{p\}}$};
\node (A) at (3,0){$\op{GL}_2(A)$.};
\node (B) at (3,2){$\op{GL}_2(\op{R}_{\bar{\rho}})$};
\draw[->] (G) to node [swap]{$\rho$} (A);
\draw[->] (B) to node{} (A);
\draw[->] (G) to node {$\rho^{\op{univ}}$} (B);
\end{tikzpicture}\]
Here, $\op{R}_{\bar{\rho}}$ is referred to as the \textit{universal deformation ring} associated to $\op{D}_{\bar{\rho}}$.
\par Let $\op{I}_p$ be the inertia group at the prime $p$. A character $\psi:\op{G}_{\mathbb{Q}}\rightarrow \op{GL}_1(\mathbb{Z}_p)$ is \textit{geometric} if $\op{\psi}_{\restriction \op{I}_p}=\chi^{k-1}_{\restriction \op{I}_p}$, for $k\in \mathbb{Z}_{\geq 2}$. Fontaine and Mazur introduce the notion of a \textit{geometric} Galois representation.
\begin{Definition}
Let $\rho:\op{G}_{\mathbb{Q}}\rightarrow \op{GL}_2(\bar{\mathbb{Q}}_p)$ be a continuous Galois representation, $\rho$ is said to be geometric if it satisfies the following conditions:
\begin{enumerate}
\item $\rho$ is irreducible,
\item $\op{det}\rho$ is an odd geometric character,
\item $\rho$ is unramified away from a finite set of primes,
\item the local representation $\rho_{\restriction p}$ is \textit{deRham} (see \cite[section 6]{brinonconrad} for the definition and basic properties of deRham representations).
\end{enumerate}
\end{Definition}
When $\rho$ arises from a Hecke eigencuspform, it is necessarily geometric. Fontaine and Mazur conjectured that $2$-dimensional geometric Galois representations arise from Hecke eigencuspforms. This conjecture has been settled in all but one exceptional case (see \cite{taylorfm} and \cite{kisinfm}). One is primarily interested in Galois representations that are geometric in the above sense. Throughout, we shall impose an additional deformation condition at $p$, which will guarantee that the deformation rings parametrize geometric lifts. We will further elaborate on this in the next subsection.
\par Note that if $\rho$ is a deformation of $\bar{\rho}$ then $\rho_{\restriction \ell}$ is a deformation of $\bar{\rho}_{\restriction \ell}$. Therefore, a global deformation gives rise to a local deformation at each prime $\ell$. At each prime $\ell$, set $\operatorname{Def}_\ell(A)$ to be the set of $A$-deformations of $\bar{\rho}_{\restriction \ell}$. The association $A\mapsto \operatorname{Def}_\ell(A)$ defines a functor $\operatorname{Def}_\ell:\mathcal{C}\rightarrow \operatorname{Sets}$. Let $A$ be a coefficient ring with maximal ideal $\mathfrak{m}$. Let $I$ be an ideal in $A$; the quotient map $A\rightarrow A/I$ is said to be a \textit{small extension} if $I$ is principal and $I\cdot \mathfrak{m}=0$. \begin{Definition}\label{localdefconditions}
A functor of deformations $\mathcal{C}_\ell:\mathcal{C}\rightarrow \op{Sets}$ is a subfunctor of $\op{Def}_\ell$. Say that $\mathcal{C}_\ell$ is a \textit{deformation condition} if the conditions $\eqref{dc1}-\eqref{dc3}$ stated below are satisfied:
\begin{enumerate}
\item\label{dc1} $\mathcal{C}_\ell(\mathbb{F}_p)=\{\bar{\rho}_{\restriction \ell}\}.$
\item\label{dc2} For $i=1,2$, let $R_i\in \mathcal{C}$ and $\rho_i\in\mathcal{C}_\ell(R_i)$. Let $I_1$ be an ideal in $R_1$ and $I_2$ an ideal in $R_2$ such that there is an isomorphism $\alpha:R_1/I_1\xrightarrow{\sim} R_2/I_2$ satisfying \[\alpha(\rho_1 \;\text{mod}\;{I_1})=\rho_2 \;\text{mod}\;{I_2}.\] Let $R_3$ be the fibred product \[R_3=\lbrace(r_1,r_2)\mid \alpha(r_1\;\text{mod}\; I_1)=r_2\; \text{mod} \;I_2\rbrace\] and $\rho_1\times_{\alpha} \rho_2$ the induced $R_3$-representation, then $\rho_1\times_{\alpha} \rho_2\in \mathcal{C}_\ell(R_3)$.
\item\label{dc3} Let $R\in \mathcal{C}$ with maximal ideal $\mathfrak{m}_R$. If $\rho\in \op{Def}_\ell(R)$ and $\rho\in \mathcal{C}_\ell(R/\mathfrak{m}_R^n)$ for all $n>0$ it follows that $\rho\in \mathcal{C}_\ell(R)$. In other words, the functor $\mathcal{C}_\ell$ is continuous.
\end{enumerate}
A deformation functor $\mathcal{C}_\ell$ is said to be \textit{liftable} if for every small extension $A\rightarrow A/I$ the induced map $\mathcal{C}_\ell(A)\rightarrow \mathcal{C}_\ell(A/I)$ is surjective.
\end{Definition}
Condition $\eqref{dc2}$ is referred to as the Mayer-Vietoris property. By a well-known result of Grothendieck \cite[section 18]{Mazurintro}, conditions $\eqref{dc1}$ to $\eqref{dc3}$ guarantee that $\mathcal{C}_\ell$ is representable.
\begin{Definition}\label{adgaloisaction}Set $\op{Ad}^0\bar{\rho}$ to denote the Galois module whose underlying vector space consists of $2\times 2$ matrices with entries in $\mathbb{F}_p$, and with trace zero. The Galois action is as follows: for $g\in \op{G}_{\mathbb{Q}}$ and $v\in \op{Ad}^0\bar{\rho}$,
set $g\cdot v:=\bar{\rho}(g) v \bar{\rho}(g)^{-1}$.
\end{Definition}
Note that $\operatorname{Ad}^0\bar{\rho}^*:=\op{Hom}(\operatorname{Ad}^0\bar{\rho}, \mu_p)$ can be identified with $\operatorname{Ad}^0\bar{\rho}(1)$, i.e., $\operatorname{Ad}^0\bar{\rho}$ twisted by the action of the mod-$p$ cyclotomic character $\bar{\chi}$. Let $\ell$ be a prime number, the functor of local deformations $\op{Def}_\ell$ is equipped with a tangent space. As a set, this is defined to be $\operatorname{Def}_\ell\left(\mathbb{F}_p[\epsilon]/(\epsilon^2)\right)$. It has the structure of a vector space over $\mathbb{F}_p$ and is in bijection with $H^1(\op{G}_\ell,\operatorname{Ad}^0\bar{\rho})$. The bijection identifies a cohomology class $f$ with the deformation \[(\operatorname{Id}+\epsilon f)\bar{\rho}_{\restriction \ell}: \op{G}_{\ell}\rightarrow \op{GL}_2(\mathbb{F}_p[\epsilon]/(\epsilon^2)).\]
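To make this bijection concrete, we include the standard verification, writing $\bar{\rho}$ for $\bar{\rho}_{\restriction \ell}$: for $g,h\in \op{G}_\ell$ and computing modulo $\epsilon^2$,
\begin{align*}
(\operatorname{Id}+\epsilon f(g))\bar{\rho}(g)\,(\operatorname{Id}+\epsilon f(h))\bar{\rho}(h)=\left(\operatorname{Id}+\epsilon\left(f(g)+\bar{\rho}(g)f(h)\bar{\rho}(g)^{-1}\right)\right)\bar{\rho}(g)\bar{\rho}(h),
\end{align*}
so $(\operatorname{Id}+\epsilon f)\bar{\rho}$ is multiplicative if and only if $f(gh)=f(g)+g\cdot f(h)$ for all $g,h$, i.e., if and only if $f$ is a $1$-cocycle with values in $\operatorname{Ad}^0\bar{\rho}$.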
The tangent space $\mathcal{N}_\ell$ of a deformation condition $\mathcal{C}_\ell$ consists of the cohomology classes $f\in H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$, such that $(\op{Id}+\epsilon f) \bar{\rho}_{\restriction \ell}\in \mathcal{C}_\ell\left(\mathbb{F}_p[\epsilon]/(\epsilon^2)\right)$. For $\ell\in S$, the deformation functor $\op{Def}_\ell$ is \textit{unobstructed} if it is liftable in the sense of Definition $\ref{localdefconditions}$. The following is a criterion for local unobstructedness.
\begin{Lemma}
The functor $\op{Def}_\ell$ is unobstructed if $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=0$.
\end{Lemma}
\begin{proof}
By local Tate duality, $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)$ is dual to $H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$.
Let $A\rightarrow A/I$ be a small extension and $t$ a generator of the principal ideal $I$. Identify $\operatorname{Ad}^0\bar{\rho}$ with the kernel of the mod-$I$ reduction map
\[\op{SL}_2(A)\rightarrow \op{SL}_2(A/I)\] by identifying $\mtx{a}{b}{c}{-a}$ with
\[\op{Id}+t\mtx{a}{b}{c}{-a}=\mtx{1+ta}{tb}{tc}{1-ta}.\]
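For completeness, note that these matrices do land in $\op{SL}_2(A)$: since the kernel $I$ is principal and killed by $\mathfrak{m}_A$, its generator satisfies $t^2=0$, so
\begin{align*}
\det\mtx{1+ta}{tb}{tc}{1-ta}=(1+ta)(1-ta)-t^2bc=1-t^2(a^2+bc)=1.
\end{align*}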
We show that \[\varrho:\op{G}_\ell\rightarrow \op{GL}_2(A/I)\] lifts to \[\tilde{\varrho}:\op{G}_\ell\rightarrow \op{GL}_2(A).\] Let $\tau:\op{G}_\ell\rightarrow \op{GL}_2(A)$ be a continuous lift of $\varrho$ (not necessarily a homomorphism), for which $\det \tau=\chi$. Such a lift always exists. Note that for $g_1,g_2\in \op{G}_\ell$, \[\mathcal{O}(\varrho)(g_1,g_2):=\tau(g_1g_2)\tau(g_2)^{-1}\tau(g_1)^{-1}\] is an element of \[\operatorname{Ad}^0\bar{\rho}=\op{ker}\left\{\op{SL}_2(A)\rightarrow \op{SL}_2(A/I)\right\}.\] This defines a cohomology class
$\mathcal{O}(\varrho)\in H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$, which is trivial precisely when $\varrho$ lifts to an actual representation \[\tilde{\varrho}:\op{G}_\ell\rightarrow \op{GL}_2(A).\] Since $H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})=0$, a lift $\tilde{\varrho}$ must exist and the result follows.
\end{proof}
\subsection{The flat deformation condition}
\par The deformation rings we consider shall parametrize geometric Galois representations $\rho$ lifting $\bar{\rho}$ for which $\op{det} \rho=\chi$. This is ensured by choosing an appropriate local deformation condition $\mathcal{C}_p$ for $\bar{\rho}_{|p}$. A deformation $\varrho:\op{G}_p\rightarrow \op{GL}_2(A)$ of $\bar{\rho}_{|p}$ is said to be \textit{flat} if for each finite length quotient $A/I$ of $A$, the mod-$I$ representation $\varrho_I:=\varrho\mod{I}$ arises from the Galois action on the generic fibre of a finite flat group scheme over $\mathbb{Z}_p$. Such flat deformations are characterized by certain \textit{filtered Dieudonn\'e modules}.
\begin{Definition}
A filtered Dieudonn\'e module $M$ is a $\mathbb{Z}_p$-module furnished with a decreasing, exhaustive, separated filtration $(M^i)$ of sub-$\mathbb{Z}_p$-modules. For each integer $i$, we are given a $\mathbb{Z}_p$-linear map $\varphi^i:M^i\rightarrow M$ such that, for $x\in M^{i+1}$, $\varphi^{i+1}(x)=p\varphi^i(x)$. Denote by $\op{MF}_{\op{tor}}^f$ the category of filtered Dieudonn\'e-modules $M$ that are of finite length and for which $\sum_i \varphi^i(M)=M$. For $j>0$, set $\op{MF}_{\op{tor}}^{f,j}$ to be the full subcategory of $\op{MF}_{\op{tor}}^f$ consisting of modules $M$ for which $M^0=M$ and $M^j=0$.
\end{Definition}
Let $\op{Rep}_{\mathbb{Z}_p}^f$ be the category of finite length $\mathbb{Z}_p[\op{G}_p]$-modules. For $j<p$, Fontaine and Laffaille show in \cite{fontainelaffaille} that there is a faithful, exact contravariant functor
\[\op{U}_S: \op{MF}_{\op{tor}}^{f,j}\rightarrow \op{Rep}_{\mathbb{Z}_p}^f.\]
Since we fix the determinant of our deformations to be equal to $\chi$, we need only focus our attention on the category $\op{MF}_{\op{tor}}^{f,2}$. Fontaine and Laffaille show that $\op{U}_S$ induces an anti-equivalence between this category and the category of $\op{G}_p$-modules arising from finite flat group schemes over $\mathbb{Z}_p$. Since the elliptic curve $E$ is assumed to have good reduction at $p$, the residual representation $\bar{\rho}_{|p}$ is flat, i.e., arises from a finite flat group scheme over $\mathbb{Z}_p$. The result of Fontaine and Laffaille implies that there exists a uniquely determined module $M_0\in \op{MF}_{\op{tor}}^{f,2}$, such that $\op{U}_S(M_0)=V_{\bar{\rho}}$, where $V_{\bar{\rho}}$ is the underlying $\mathbb{F}_p$-vector space on which $\op{G}_p$ acts via $\bar{\rho}_{|p}$. Let $R\in \mathcal{C}$, and $R/I$ be a finite length quotient of $R$. The mod-$I$ reduction of $\varrho:\op{G}_p\rightarrow \op{GL}_2(R)$ is denoted $\varrho_I$ and its underlying module is denoted $V_{\varrho_I}$. We take note of this result.
\begin{Th}[Fontaine-Laffaille]
Let $R\in \mathcal{C}$. A deformation $\varrho:\op{G}_p\rightarrow \op{GL}_2(R)$ is flat if for each finite length quotient $R/I$, there is a Fontaine--Laffaille module $M_{\varrho_I}\in \op{MF}_{\op{tor}}^{f,2}$ such that $\op{U}_S(M_{\varrho_I})=V_{\varrho_I}$.
\end{Th}
\begin{proof}
Please refer to \cite[section 9]{fontainelaffaille}, where the result is proved.
\end{proof}
\par Let $\op{F}_2$ be the subfunctor of $\op{Def}_p$ consisting of flat deformations. Ramakrishna in \cite{ramakrishna compositio} showed that under certain hypotheses, $\op{F}_2$ is representable by a power series ring over $\mathbb{Z}_p$. Let us recall these results. Set $\op{End}_p(\bar{\rho})$ to denote the ring of $\op{G}_p$-module endomorphisms of $V_{\bar{\rho}}$.
\begin{Th}[Ramakrishna]
Assume that $\op{End}_p(\bar{\rho})=\mathbb{F}_p$, then, $\op{F}_2$ is pro-representable by a smooth ring $R_2\simeq \mathbb{Z}_p\llbracket X\rrbracket$.
\end{Th}
\begin{proof}
The result of Ramakrishna is proved in the setting where the determinants of the deformations $\varrho$ of $\bar{\rho}_{|p}$ are not fixed. Since $\varrho$ is flat, $\op{det}\varrho_{|\op{I}_p}=\chi$. Note, however, that we fix the entire determinant $\op{det} \varrho=\chi$. Ramakrishna shows that the flat deformation functor without fixed determinant is representable by $\mathbb{Z}_p\llbracket X_1, X_2\rrbracket$. This is proven by showing that the full-adjoint tangent space corresponding to flat deformations contained in $H^1(\op{G}_p, \op{Ad}\bar{\rho})$ is $2$-dimensional, and hence, the ring is of the form $\mathbb{Z}_p\llbracket X_1, X_2\rrbracket/\mathcal{I}$. It is also shown that this functor is representable by a smooth ring over $\mathbb{Z}_p$, hence, the ideal $\mathcal{I}=0$. The same argument in the case when the determinant is fixed shows that when $\det=\chi$ is fixed, the functor is represented by $\mathbb{Z}_p\llbracket X\rrbracket /\mathcal{J}$, and since the functor is liftable, it must be isomorphic to $\mathbb{Z}_p\llbracket X\rrbracket$.
\end{proof}
When the condition $\op{End}_p(\bar{\rho})=\mathbb{F}_p$ is satisfied, set $\mathcal{C}_p$ to denote the flat deformation functor with $\op{det}=\chi$. Let $\mathcal{N}_p\subseteq H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})$ denote its tangent space, note that by the above result, $\mathcal{N}_p$ is $1$-dimensional. Also note that in this setting when $\op{End}_p(\bar{\rho})=\mathbb{F}_p$, it follows that $H^0(\op{G}_p, \operatorname{Ad}^0\bar{\rho})=0$. Therefore, the formula
\[\dim \mathcal{N}_p=1+\dim H^0(\op{G}_p, \operatorname{Ad}^0\bar{\rho})\]is automatically satisfied.
When $E$ has supersingular reduction at $p$, the condition $\op{End}_p(\bar{\rho})=\mathbb{F}_p$ follows from \cite{serre}. On the other hand, when $E$ is ordinary at $p$, there is an unramified character $\psi$ such that $\bar{\rho}_{|p}=\mtx{\psi \bar{\chi}}{\ast}{0}{\psi^{-1}}$. In this setting, $\op{End}_p(\bar{\rho})=\mathbb{F}_p$ if and only if the representation $\bar{\rho}_{|p}$ is non-split. Therefore, consider the case when $\bar{\rho}_{|p}$ is split. In this case, Ramakrishna shows in \cite[Table 2, p. 128]{ravi2} that there is a pair $(\mathcal{C}_p, \mathcal{N}_p)$, where $\mathcal{C}_p$ is a liftable deformation functor with determinant equal to $\chi$ and $\mathcal{N}_p$ is the tangent space of $\mathcal{C}_p$. Moreover, these deformations are all ordinary, and are defined as follows.
\begin{itemize}
\item When $\bar{\rho}_{|p}$ is twist equivalent to $\mtx{\bar{\chi}}{0}{0}{1}$, it is shown that there is subfunctor $\mathcal{C}_p$ of $\op{Def}_p$ consisting of flat deformations represented by $\mathbb{Z}_p\llbracket X_1, X_2\rrbracket$.
\item When $\bar{\rho}_{|p}$ is \textit{not} twist equivalent to $\mtx{\bar{\chi}}{0}{0}{1}$, it is shown that there is a subfunctor $\mathcal{C}_p$ of $\op{Def}_p$ consisting of ordinary deformations represented by $\mathbb{Z}_p\llbracket X_1, X_2\rrbracket$. In this setting, we check that all infinitesimal lifts in $\mathcal{C}_p\left(\mathbb{F}_p[\epsilon]/(\epsilon^2)\right)$ are flat.
\end{itemize}
\begin{Proposition}\label{Np flat}
Let $E$ be an elliptic curve with good reduction at $p\geq 5$, and $\bar{\rho}$ the residual representation. Then, $\bar{\rho}_{|p}$ comes equipped with a liftable deformation condition $\mathcal{C}_p$ along with tangent space $\mathcal{N}_p$, such that
\begin{enumerate}
\item all deformations $\varrho$ satisfying $\mathcal{C}_p$ are ordinary (resp. crystalline) if $E$ has ordinary (resp. supersingular) reduction at $p$,
\item $\dim \mathcal{N}_p=1+\dim H^0(\op{G}_p, \operatorname{Ad}^0\bar{\rho})$,
\item the deformations $\varrho\in \mathcal{C}_p\left(\mathbb{F}_p[\epsilon]/(\epsilon^2)\right)$ are flat, i.e., arise via $\op{U}_S$.
\end{enumerate}
\end{Proposition}
\begin{proof}
The above discussion shows that the only case that needs to be considered is when $\bar{\rho}_{|p}=\mtx{\psi \bar{\chi}}{0}{0}{\psi^{-1}}$ for an unramified character $\psi$. Let $M_0$ be the Fontaine-Laffaille module associated to $V_{\bar{\rho}}$. Then $M_0$ is a direct sum $M_0=N_1\oplus N_2$, where $N_1$ and $N_2$ are $1$-dimensional Fontaine-Laffaille modules corresponding to the Galois modules $\mathbb{F}_p(\psi \bar{\chi})$ and $\mathbb{F}_p(\psi^{-1})$ respectively. Choose a basis $\{e,k\}$ of $M_0$ such that $k$ spans $F^1 M_0$. The Fontaine-Laffaille module $M_0$ is characterized by a dashed matrix
\[\left( {\begin{array}{c|c}
a & c \\
b & d \\
\end{array} } \right)\in \text{M}_2(\mathbb{F}_p),\]
where $\varphi^0(e)=ae+bk$ and $\varphi^1(k)=ce+dk$. The relation $\varphi^0_{\restriction F^1 M_0}=p\varphi^1=0$ implies that $\varphi^0(k)=0$. Therefore, the matrices corresponding to $\varphi^0$ and $\varphi^1$ are as follows \[\varphi^0=\mtx{a}{0}{b}{0}\text{, and }\varphi^1=\left( {\begin{array}{c}
c \\
d \\
\end{array} } \right).\]
On the other hand, according to \textit{loc. cit.}, $\mathcal{C}_p$ is the ordinary deformation condition and $\mathcal{N}_p$ is the ordinary tangent space and has dimension $2$. Explicitly, $\mathcal{N}_p\subset H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})$ has a basis of two cohomology classes $f_1$ and $f_2$, where $f_1$ is ramified and has image in $\mtx{0}{\ast}{0}{0}$, and on the other hand, $f_2$ is unramified and has image in the diagonal in $\operatorname{Ad}^0\bar{\rho}$. Both cohomology classes are seen to arise from Fontaine-Laffaille modules $\widetilde{M}_0$ which fit into a short exact sequence
\[0\rightarrow M_0\rightarrow \widetilde{M}_0\rightarrow M_0\rightarrow 0.\] The definition of $\widetilde{M}_0$ involves a choice of a dashed matrix with entries in $\mathbb{F}_p[\epsilon]/(\epsilon^2)$ lifting the one corresponding to $M_0$. We leave the details of this simple matrix calculation to the reader.
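Concretely, such a lift is specified by a dashed matrix of the shape
\[\left( {\begin{array}{c|c}
a+\epsilon\alpha & c+\epsilon\gamma \\
b+\epsilon\beta & d+\epsilon\delta \\
\end{array} } \right)\in \text{M}_2\left(\mathbb{F}_p[\epsilon]/(\epsilon^2)\right),\]
where $\alpha, \beta, \gamma, \delta\in \mathbb{F}_p$ (our notation for the parameters, which are not named above) record the choice of lift; reducing modulo $\epsilon$ recovers the dashed matrix of $M_0$.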
\end{proof}
\section{Presentations for Geometric Deformation Rings}\label{section 3}
Assume throughout that $p\geq 5$ and that \[\bar{\rho}:\op{G}_{S\cup \{p\}}\rightarrow \op{GL}_2(\mathbb{F}_p)\] is as in the previous section. In this section, we consider Galois representations which are ordinary/crystalline when localized at $p$, and thus in particular, are de Rham. Such representations are geometric, and under a mild additional hypothesis, are known to arise from Hecke eigencuspforms. Following B\"ockle \cite{bockle}, we discuss presentations for the associated Galois deformation rings. We do not impose any local conditions at the primes $\ell\in S$. However, at $p$, we impose the condition $\mathcal{C}_p$ defined in the previous section. Recall that throughout, the determinant of all deformations considered is fixed and equal to $\chi$, the cyclotomic character.
\par We introduce an additional hypothesis which will play a role in simplifying some results in arithmetic statistics, however, the results in this section shall not assume this hypothesis.
\begin{hypothesis}We say that condition $(\star)$ is satisfied for $\bar{\rho}$ if for all $\ell\in S$,
\[H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=0.\]
\end{hypothesis}
We introduce Selmer and dual Selmer groups which will play an important role in describing presentations for Galois deformation rings. First, we introduce the notion of Selmer data for the set of primes $S\cup \{p\}$. A Selmer datum $\mathcal{L}$ consists of a choice of subspace $\mathcal{L}_\ell\subseteq H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$ for each prime $\ell\in S\cup \{p\}$; we set $\mathcal{L}_\infty=0$. Let $\mathcal{L}_\ell^{\perp}\subset H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)$ be the orthogonal complement of $\mathcal{L}_\ell$ with respect to the local Tate-duality pairing. Then, the Selmer and dual Selmer groups associated to the datum $\mathcal{L}$ are defined as follows
\begin{equation}\begin{split}
& H^1_{\mathcal{L}}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})
:= \text{ker}\left\{ H^1(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})\longrightarrow \bigoplus_{\ell\in S\cup \{p\}}\frac{H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})}{\mathcal{L}_\ell}\right\}\\
& H^1_{\mathcal{L}^\perp}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)
:= \text{ker}\left\{ H^1(\operatorname{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^{*})\longrightarrow \bigoplus_{\ell\in S\cup \{p\}}\frac{H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)}{\mathcal{L}_\ell^{\perp}}\right\}\\
\end{split}\end{equation}
respectively.
In this paper, we work in a special case, where $\mathcal{L}_\ell$ is the full space $H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$ for $\ell\in S$ and $\mathcal{L}_p=\mathcal{N}_p$; in other words, no conditions are imposed at the primes $\ell\in S$. The notation we use below is not standard.
The Selmer and dual Selmer groups are as follows
\begin{equation}\begin{split}
& H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})
:= \text{ker}\left\{ H^1(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})\longrightarrow \frac{H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})}{\mathcal{N}_p}\right\}\\
& H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)
:= \text{ker}\left\{ H^1(\operatorname{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^{*})\longrightarrow \frac{H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho}^*)}{\mathcal{N}_p^{\perp}}\right\}\\
\end{split}\end{equation}
respectively.
\begin{Proposition}\label{prop:selmer vs dselmer}
With respect to notation introduced above, we have that
\[\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})-\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)=\sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*).\] In particular, $(\star)$ is satisfied if and only if
\[\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})=\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*).\]
\end{Proposition}
\begin{proof}
Suppose for each prime $\ell\in S\cup\{p, \infty\}$, there is a choice of subspace $\mathcal{L}_\ell\subseteq H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})$. The Selmer condition $\mathcal{L}$ is the data $\{\mathcal{L}_\ell\}$ and the dimensions of the Selmer and dual Selmer groups are related as follows
\[\begin{split}&\dim H^1_{\mathcal{L}}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})-\dim H^1_{\mathcal{L}^{\perp}}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)\\
=& \dim H^0(\op{G}_{\mathbb{Q}}, \operatorname{Ad}^0\bar{\rho})-\dim H^0(\op{G}_{\mathbb{Q}}, \operatorname{Ad}^0\bar{\rho}^*)\\
+&\sum_{\ell\in S\cup\{p, \infty\}} \left(\dim \mathcal{L}_\ell-\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\right),\\
\end{split}\]
according to Wiles' formula \cite[Theorem 8.7.9]{NW}.
The representation $\bar{\rho}$ associated to an elliptic curve is \textit{odd}, i.e., $\det \bar{\rho}(c)=-1$, where $c$ denotes complex conjugation. It is therefore easy to show that $\dim H^0(\op{G}_\infty, \operatorname{Ad}^0\bar{\rho})=1$. Now, specify the Selmer data as follows
\[\mathcal{L}_\ell=\begin{cases}
0\text{ if }\ell=\infty,\\
\mathcal{N}_p\text{ if } \ell=p,\\
H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\text{ if }\ell\in S.\\
\end{cases}\]
In this case, the Selmer group $H^1_{\mathcal{L}}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})$ coincides with the Selmer group $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})$. Since $\operatorname{Ad}^0\bar{\rho}$ is assumed to be irreducible, we have that \[\dim H^0(\op{G}_{\mathbb{Q}}, \operatorname{Ad}^0\bar{\rho})=\dim H^0(\op{G}_{\mathbb{Q}}, \operatorname{Ad}^0\bar{\rho}^*)=0.\]
Also, note that for $\ell \neq p$ by the local Euler characteristic formula,
\[\dim H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})-\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})=\dim H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}).\]
On the other hand, according to local duality,
\[\dim H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})= \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*) .\] Finally, note that
\[\dim \mathcal{N}_p=1+\dim H^0(\op{G}_p, \operatorname{Ad}^0\bar{\rho}).\] Putting it all together, we obtain the result.
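Explicitly, with the above choice of Selmer data, Wiles' formula reads
\[\begin{split}
&\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})-\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)\\
=\,& (0-0)+\left(0-\dim H^0(\op{G}_\infty, \operatorname{Ad}^0\bar{\rho})\right)+\left(\dim \mathcal{N}_p-\dim H^0(\op{G}_p, \operatorname{Ad}^0\bar{\rho})\right)\\
+&\sum_{\ell\in S}\left(\dim H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})-\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\right)\\
=\,& -1+1+\sum_{\ell\in S}\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=\sum_{\ell\in S}\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*).
\end{split}\]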
\end{proof}
For the next part of the discussion, a good reference is \cite{bockle}. For $R\in \mathcal{C}$, let $\op{D}_{\bar{\rho}}^{\angp}(R)$ be the subset of $\op{D}_{\bar{\rho}}(R)$ consisting of deformations $\rho$ such that the local representation $\rho_{|p}$ satisfies $\mathcal{C}_p$. The functor $\op{D}_{\bar{\rho}}^{\angp}$ is represented by a universal deformation
\[\rho^{\op{univ}, \langle p\rangle}:\op{G}_{S\cup \{p\}}\rightarrow \op{GL}_2(\op{R}_{\bar{\rho}}^{\angp}),\] and we shall refer to $\op{R}_{\bar{\rho}}^{\angp}$ as the \textit{universal geometric deformation ring}. This is an abuse of terminology: it does not capture all geometric deformations, since there may indeed be characteristic zero deformations $\varrho$ of $\bar{\rho}_{|p}$ that are de Rham yet do not satisfy $\mathcal{C}_p$. However, since our constructions require choosing a suitable local condition that is smooth and satisfies other suitable properties, we work in this setting.
\par Let $\ell\in S$, and $\rho_\ell:\op{G}_\ell\rightarrow \op{GL}_2(R_\ell)$ be the universal deformation of $\bar{\rho}_{|\ell}$ representing the functor $\op{Def}_\ell$. This local deformation ring has a presentation
\[R_\ell\simeq \frac{\mathbb{Z}_p\llbracket X_1, \dots, X_u\rrbracket}{\left(g_1, \dots, g_v\right)}\]
where \[u=\dim H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\text{ and }v=\dim H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}).\] Denote the ideal of relations by $\mathcal{I}_\ell:=\left(g_1, \dots, g_v\right)$. Let $\rho_p:\op{G}_p\rightarrow \op{GL}_2(R_p)$ be the universal deformation of $\bar{\rho}_{|p}$ representing the functor $\mathcal{C}_p$. Note that since the deformation functor $\mathcal{C}_p$ is liftable, the local ring $R_p$ is a smooth power series ring
\[R_p\simeq \mathbb{Z}_p\llbracket X_1, \dots, X_u\rrbracket\]
with $u=\dim \mathcal{N}_p$ generators.
\par By universality, the local representation $\rho_{|\ell}^{\op{univ}, \langle p\rangle}$ arises by composing \[\rho_\ell:\op{G}_\ell\rightarrow \op{GL}_2(R_\ell)\] with the map $\op{GL}_2(R_\ell)\rightarrow \op{GL}_2(\op{R}_{\bar{\rho}}^{\angp})$ induced by a uniquely determined map of local rings $R_\ell\rightarrow \op{R}_{\bar{\rho}}^{\angp}$. Let $\mathcal{J}_\ell$ denote the ideal generated by the image of $\mathcal{I}_\ell$ under this map. The following result is a consequence of \cite[Theorem 5.2]{bockle}.
\begin{Th}\label{theorem presentations}
There is a presentation
\[\op{R}_{\bar{\rho}}^{\angp}\simeq \frac{\mathbb{Z}_p\llbracket X_1, \dots, X_t\rrbracket}{\mathcal{J}},\] with
\[t:=\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})\] and where $\mathcal{J}$ is generated by the ideals $\mathcal{J}_\ell$, for $\ell\in S$, and $s:=\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)$ globally defined relations $f_1,\dots, f_s\in (p, X_1, \dots, X_t)^2$. Each ideal $\mathcal{J}_\ell$ is generated by at most $ \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)$ local relations.
\end{Th}
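A standard consequence of such a presentation, since each relation cuts the Krull dimension down by at most one, is the lower bound
\[\dim \op{R}_{\bar{\rho}}^{\angp}\geq 1+t-s-\sum_{\ell\in S}\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*).\]
Combined with Proposition \ref{prop:selmer vs dselmer}, which gives $t-s=\sum_{\ell\in S}\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)$, this shows that $\dim \op{R}_{\bar{\rho}}^{\angp}\geq 1$.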
\begin{Corollary}\label{corollary def zp}
Suppose that $(\star)$ is satisfied and
\[H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)=0.\] Then, $\op{R}_{\bar{\rho}}^{\angp}$ is isomorphic to $\mathbb{Z}_p$.
\end{Corollary}
\begin{proof}
It follows from Proposition \ref{prop:selmer vs dselmer} that both the Selmer group $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})$ and the dual Selmer group $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)$ are zero, hence, $s=t=0$. Furthermore, by local duality, condition $(\star)$ implies that $H^2(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})=0$ for $\ell\in S$, and it follows that $\mathcal{J}_\ell=0$ for all $\ell\in S$. Since $s=0$, there are no globally defined relations, and hence, $\mathcal{J}=0$. Therefore, it follows from Theorem \ref{theorem presentations} that $\op{R}_{\bar{\rho}}^{\angp}$ is isomorphic to $\mathbb{Z}_p$.
\end{proof}
\section{Statistics for a fixed Elliptic Curve $E_{/\mathbb{Q}}$ and varying prime $p$}\label{section 4}
\par Let $E$ be an elliptic curve defined over $\mathbb{Q}$ with squarefree conductor $N$ and $p\geq 5$ a prime. Then, the Galois action on the $p$-torsion points gives rise to the $2$-dimensional mod-$p$ Galois representation $\bar{\rho}=\bar{\rho}_{E,p}:\op{G}_{\mathbb{Q}, S\cup \{p\}}\rightarrow \op{GL}_2(\mathbb{F}_p)$. Associated to each pair $(E,p)$ such that:
\begin{enumerate}
\item $E$ has good reduction at $p$,
\item the Galois representation $\bar{\rho}$ is absolutely irreducible,
\end{enumerate}
let $\mathcal{R}_{E,p}=\op{R}_{\bar{\rho}}^{\angp}$ be the geometric deformation ring introduced in the previous section. Note that for a fixed elliptic curve $E$ without complex multiplication, all but finitely many primes $p$ satisfy the above conditions. In this section, we show that $\mathcal{R}_{E,p}\simeq \mathbb{Z}_p$ for all but finitely many primes $p$ (that satisfy the conditions above). This result may be contrasted with the main result of \cite{westonmain}, where it is shown that $\op{R}_{\bar{\rho}}$ is unobstructed for all but a density zero set of primes.
\par There are two steps to the argument in this section.
\begin{enumerate}
\item It is shown that condition $(\star)$ is satisfied for all but finitely many primes $p$.
\item It is shown that for all but finitely many $p$, $ H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})=0$ for the residual representation $\bar{\rho}=\bar{\rho}_{E,p}$.
\end{enumerate}
\subsection{Bloch-Kato Selmer groups}
\par In order to prove our results about the vanishing of the Selmer group $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)$, we relate it to the Bloch-Kato Selmer group, which is associated with the characteristic zero representation. Let us recall some standard definitions. In the next discussion, assume that the pair $(E, p)$ is fixed, and recall that $S$ is the set of primes $\ell\neq p$ at which $E$ has bad reduction. Let $\op{T}_p(E)$ denote the $p$-adic Tate-module associated to $E$ and $\op{V}_p(E)$ be the $p$-adic vector space $\op{T}_p(E)\otimes_{\mathbb{Z}_p} \mathbb{Q}_p$, equipped with the action of $\op{G}_{S\cup \{p\}}$, and let $\op{Ad}_p^0(E)$ be the adjoint representation associated to $\op{V}_p(E)$. For a $\op{G}_{S\cup \{p\}}$-module $M$, set $H^i(\mathbb{Q}_{S\cup\{p\}}/\mathbb{Q}, M)$ to denote the group of continuous classes $H^i_{\op{cnts}}(\op{G}_{S\cup\{p\}}, M)$. For each place $\ell$ of $\mathbb{Q}$, Bloch and Kato define a subspace \[H^1_{\bf{f}}\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right)\subseteq H^1\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right)\] as follows
\[H^1_{\bf{f}}\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right):=\begin{cases} H^1_{nr}\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right) & \text{ if }\ell\neq p, \infty,\\
\op{ker}\left\{H^1(\mathbb{Q}_\ell, \op{Ad}_p^0(E))\longrightarrow H^1(\mathbb{Q}_\ell, \op{B}_{\op{crys}}\otimes \op{Ad}_p^0(E))\right\} & \text{ if }\ell=p,\\
0 & \text{ if }\ell=\infty.
\end{cases}
\]
Let $\op{W}_p(E)$ be the quotient $\op{Ad}_p^0(E)/\op{Ad}^0\op{T}_p(E)$, where $\op{Ad}^0\op{T}_p(E)$ denotes the lattice in $\op{Ad}_p^0(E)$ induced by $\op{T}_p(E)$; the natural quotient map $\op{Ad}_p^0(E)\rightarrow \op{W}_p(E)$ induces a map on passing to cohomology
\[H^1\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right)\rightarrow H^1\left(\mathbb{Q}_\ell, \op{W}_p(E)\right).\]
Let $\ell$ be any prime, set
\[H^1_{\bf{f}}\left(\mathbb{Q}_\ell, \op{W}_p(E)\right):=\op{im}\left\{H^1_{\bf{f}}\left(\mathbb{Q}_\ell, \op{Ad}_p^0(E)\right)\longrightarrow H^1(\mathbb{Q}_\ell, \op{W}_p(E))\right\}.\] For $M$ denoting either $\op{Ad}_p^0(E)$ or $\op{W}_p(E)$, the Bloch-Kato Selmer group associated to $M$ is defined as follows
\[H^1_{\bf{f}}(\mathbb{Q}_{S\cup \{p\}}/\mathbb{Q}, M):=\ker\left\{H^1(\mathbb{Q}_{S\cup \{p\}}/\mathbb{Q}, M)\longrightarrow \bigoplus_\ell \frac{H^1(\mathbb{Q}_\ell, M)}{H^1_{\bf{f}}(\mathbb{Q}_\ell, M)}\right\}.\]
For $\ell\notin \{p, \infty\}$, the local cohomology group $H^1_{\bf{f}}\left(\mathbb{Q}_\ell, \op{W}_p(E)\right)$ coincides with the maximal divisible subgroup of $H^1_{\op{nr}}\left(\mathbb{Q}_\ell, \op{W}_p(E)\right)$. For any finite set of primes $\Sigma$ not containing $p,\infty$, define a larger Selmer group as follows
\[H^1_\Sigma(\mathbb{Q}, M):=\ker\left\{H^1(\mathbb{Q}_{S\cup \{p\}}/\mathbb{Q}, M)\longrightarrow \left(\bigoplus_{\ell\notin \Sigma\cup \{p,\infty\}}\frac{H^1(\mathbb{Q}_\ell, M)}{H^1_{\op{nr}}(\mathbb{Q}_\ell, M)}\right)\oplus \left(\frac{H^1(\mathbb{Q}_p, M)}{H^1_{\bf{f}}(\mathbb{Q}_p, M)}\right)\right\}.\]
Note that the inclusion $H^1_{\bf{f}}\left(\mathbb{Q}, \op{W}_p(E)\right)\subseteq H^1_{\emptyset}\left(\mathbb{Q}, \op{W}_p(E)\right)$ is an equality if $H^0\left(\op{I}_\ell,\op{W}_p(E)\right)$ is divisible for all $\ell\neq p$.
\subsection{Vanishing of the dual Selmer group}
We now study the relationship between the Bloch-Kato Selmer group and the smoothness of the geometric deformation ring. In order to arrive at such a relationship, we establish a criterion for the vanishing of the dual Selmer group $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)$.
\begin{Lemma}\label{lemma 4.1}
Let $\mathcal{N}_p\subseteq H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})$ be the tangent space of $\mathcal{C}_p$. Then, the image of $\mathcal{N}_p$ under the natural map
\[H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})\rightarrow H^1\left(\op{G}_p, \op{W}_p(E)\right)[p]\] is contained in $H^1_{\bf{f}}\left(\op{G}_p, \op{W}_p(E)\right)[p]$.
\end{Lemma}
\begin{proof}
It is well known that elements of $ H^1_{\bf{f}}\left(\op{G}_p, \op{W}_p(E)\right)$ are precisely those classes whose corresponding extensions lie in the image of the Fontaine-Laffaille functor, see the discussion in \cite[section 1.1.2 and p.697]{DFG}. On the other hand, it follows from part (3) of Proposition \ref{Np flat} that all cohomology classes $f\in \mathcal{N}_p$ have this property. Hence, any class $f\in \mathcal{N}_p$ must map to $H^1_{\bf{f}}\left(\op{G}_p, \op{W}_p(E)\right)$.
\end{proof}
\begin{Proposition}\label{BK to res selmer}
Let $E$ be an elliptic curve defined over $\mathbb{Q}$ and $p\geq 5$ a prime at which $E$ has good reduction. Assume that the following conditions hold:
\begin{enumerate}
\item the mod-$p$ Galois representation $\bar{\rho}_{E,p}$ is absolutely irreducible,
\item $H^1_\emptyset(\mathbb{Q}, \op{W}_p(E))=0$.
\end{enumerate}Then the following assertions hold:
\begin{enumerate}
\item $\dim H^1_{\langle p\rangle}(\op{G}_{ S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})=\sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)$,
\item $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)=0$.
\end{enumerate}
\end{Proposition}
\begin{proof}
Throughout this proof, we set $\bar{\rho}:=\bar{\rho}_{E,p}$. Identify $\op{W}_p(E)[p]$ with $\operatorname{Ad}^0\bar{\rho}$, and note that since $\bar{\rho}$ is assumed to be irreducible, it follows that $H^0(\mathbb{Q}, \operatorname{Ad}^0\bar{\rho})=0$, and as a result, the natural map
\begin{equation}\label{natural map}H^1(\mathbb{Q}, \operatorname{Ad}^0\bar{\rho})\longrightarrow H^1(\mathbb{Q}, \op{W}_p(E))\end{equation} is injective. Let $\mathcal{T}\subset H^1_{\langle p\rangle}(\op{G}_{S\cup\{p\}}, \operatorname{Ad}^0\bar{\rho})$ consist of cohomology classes $f$ that are unramified at all primes $\ell\neq p$. In other words,
\[\mathcal{T}:=\op{ker}\left\{ H^1(\op{G}_{S\cup\{p\}}, \operatorname{Ad}^0\bar{\rho})\longrightarrow \left(\bigoplus_{\ell\in S} \frac{H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})}{H^1(\op{G}_\ell/\op{I}_\ell, \operatorname{Ad}^0\bar{\rho})}\right)\oplus \left( \frac{H^1(\op{G}_p, \operatorname{Ad}^0\bar{\rho})}{\mathcal{N}_p}\right) \right\}.\]We show that the above injection \eqref{natural map} restricts to a map
\[\mathcal{T}\longrightarrow H^1_\emptyset\left(\mathbb{Q}, \op{W}_p(E)\right).\] It suffices to observe that the local conditions defining $\mathcal{T}$ as a subspace of $H^1(\mathbb{Q}, \operatorname{Ad}^0\bar{\rho})$ are compatible with those defining $H^1_\emptyset\left(\mathbb{Q}, \op{W}_p(E)\right)$. Clearly, this is the case for $\ell\neq p$, and for $\ell=p$, the assertion follows from Lemma \ref{lemma 4.1}. As a result, the assumption that $H^1_\emptyset\left(\mathbb{Q}, \op{W}_p(E)\right)=0$ implies that $\mathcal{T}=0$.
On the other hand, consider the short exact sequence
\[0\rightarrow \mathcal{T}\rightarrow H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})\rightarrow \bigoplus_{\ell\in S} \frac{H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})}{H^1_{\op{nr}}(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})},\] from which we obtain the inequality
\[\begin{split}\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}) &\leq \dim \mathcal{T}+\sum_{\ell\in S} \left(\dim H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})-\dim H^1_{nr}(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\right)\\
&\leq \dim \mathcal{T}+\sum_{\ell\in S} \left(\dim H^1(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})-\dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho})\right)\\
& = \dim \mathcal{T}+\sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*),\\
& = \sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*),
\end{split}\]
where in the last step, we invoke $\dim \mathcal{T}=0$.
\par Proposition \ref{prop:selmer vs dselmer} asserts that
\[\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})=\dim H^1_{\langle p\rangle}(\op{G}_{S\cup\{p\}}, \operatorname{Ad}^0\bar{\rho}^*)+\sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*).\]
Therefore, $H^1_{\langle p\rangle}(\op{G}_{S\cup\{p\}}, \operatorname{Ad}^0\bar{\rho}^*)=0$ and
\[\dim H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho})=\sum_{\ell\in S} \dim H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*).\]
\end{proof}
\subsection{Congruence primes and vanishing of the Bloch-Kato Selmer group}
\par Next, we discuss conditions for the vanishing of $H^1_\emptyset(\mathbb{Q}, \op{W}_p(E))$. Let $N$ denote the conductor of $E$, and $f$ the Hecke eigencuspform of weight $2$ associated with $E$. We introduce the notion of a congruence prime, which will be key in our analysis of the vanishing of the Selmer group $H^1_{\emptyset}\left(\mathbb{Q}, \op{W}_p(E)\right)$.
\begin{Definition}
We say that $p$ is a \textit{congruence prime} for $f$ if there exists a
newform $f_0$ of weight $2$ and level $d|N$ such that:
\begin{enumerate}
\item $f_0$ has character lifting the trivial one,
\item $f_0$ is not Galois conjugate to $f$,
\item $\bar{\rho}_{f_0,\lambda}\simeq \bar{\rho}_{f, \lambda}$ for some prime $\lambda|p$ of $\bar{\mathbb{Q}}$.
\end{enumerate}
Let $\op{Cong}(f)$ denote the set of congruence primes. A prime $p$ is a \textit{strict} congruence prime if there is a newform satisfying the above conditions with level $N$. Denote by $\op{Cong}_N(f)\subseteq \op{Cong}(f)$ the subset of strict congruence primes.
\end{Definition}
The following result gives a relationship between congruence primes and the vanishing of the Selmer group $H^1_{\emptyset}\left(\mathbb{Q}, \op{W}_p(E)\right)$; it is largely based on results in \cite{DFG}.
\begin{Proposition}\label{prop BK}
Let $p$ be a prime, and assume that:
\begin{enumerate}
\item $\bar{\rho}_{f,p}$ is absolutely irreducible,
\item $p>2$,
\item either $N>1$ or $p>3$,
\item $p\nmid N$,
\item $\ell \not \equiv 1\mod{p}$ for all primes $\ell|N$,
\item $\bar{\rho}_{f,p}$ is ramified at all primes $\ell|N$.
\end{enumerate}
Then $H^1_\emptyset(\mathbb{Q}, \op{W}_p(E))\neq 0$ if and only if $p\in \op{Cong}_N(f)$.
\end{Proposition}
\begin{proof}
This is a special case of \cite[Proposition 17]{westonexplicit}.
\end{proof}
\begin{Proposition}\label{prop local vanishing}
Suppose that $\ell \in S$, $p\geq 3$ and $\ell\not \equiv \pm 1\mod{p}$. Then, $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=0$ if and only if $\bar{\rho}$ is ramified at $\ell$.
\end{Proposition}
\begin{proof}
We refer the reader to \cite[Lemma 11]{westonexplicit}.
\end{proof}
\begin{Th}\label{section 4 main thm}
Let $E$ be an elliptic curve defined over $\mathbb{Q}$ with squarefree conductor $N$. Let $\Sigma(E)$ be the set of primes $p$ such that one of the following conditions is satisfied:
\begin{enumerate}
\item $p\leq 3$,
\item $\bar{\rho}_{f,p}$ is not absolutely irreducible,
\item $p\mid N $,
\item there is a prime $\ell|N$, such that $\ell\equiv \pm 1\mod{p}$,
\item $p\in \op{Cong}(f)$.
\end{enumerate}
Then, for $p\notin \Sigma(E)$, the geometric deformation ring $\mathcal{R}_{E,p}$ is isomorphic to $\mathbb{Z}_p$.
\end{Th}
\begin{proof}
\par
Let $p\notin \Sigma(E)$ and assume by way of contradiction that $\mathcal{R}_{E,p}\not\simeq \mathbb{Z}_p$. According to Corollary \ref{corollary def zp}, there are two possibilities:
\begin{enumerate}
\item $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)\neq 0$ for some prime $\ell \in S$,
\item $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)\neq 0$.
\end{enumerate}
Suppose that the first of the above possibilities holds for some prime $\ell \in S$. According to Proposition \ref{prop local vanishing}, the residual representation $\bar{\rho}$ must then be unramified at $\ell$, and hence, according to \cite[Lemma 12]{westonexplicit}, $p\in \op{Cong}(f)$. This is not possible since $p\notin \Sigma(E)$, and it follows therefore that $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=0$ for all primes $\ell \in S$. Therefore, the only possibility is that $H^1_{\langle p\rangle}(\op{G}_{S\cup \{p\}}, \operatorname{Ad}^0\bar{\rho}^*)\neq 0$. Note that since $p\notin \Sigma(E)$, the residual representation $\bar{\rho}$ is absolutely irreducible. Hence, it follows from Proposition \ref{BK to res selmer} that $H^1_\emptyset(\mathbb{Q}, \op{W}_p(E))\neq 0$. Since $H^0(\op{G}_\ell, \operatorname{Ad}^0\bar{\rho}^*)=0$ for all primes $\ell \in S$, it follows from Proposition \ref{prop local vanishing} that $\bar{\rho}$ is ramified at all primes $\ell\in S$. Proposition \ref{prop BK} then implies that $p\in \op{Cong}_N(f)$, which is not possible since $p\notin \Sigma(E)$. The contradiction implies that $\mathcal{R}_{E,p}\simeq \mathbb{Z}_p$ for all primes $p\notin \Sigma(E)$.
\end{proof}
\begin{Corollary}\label{section 4 last}
Let $E$ be an elliptic curve over $\mathbb{Q}$ with squarefree conductor $N$ and without complex multiplication. Then, for all but finitely many primes $p$, the Galois deformation ring $\mathcal{R}_{E,p}$ is isomorphic to $\mathbb{Z}_p$.
\end{Corollary}
\begin{proof}
It follows from Serre's open image theorem that the set of primes $p$ at which the residual representation $\bar{\rho}_{E,p}$ fails to be absolutely irreducible is finite. It is easy to see that the set of primes $\Sigma(E)$ in the statement of Theorem \ref{section 4 main thm} is finite, and the result follows from the theorem.
\end{proof}
\section{Statistics for a fixed prime $p$ and varying elliptic curve $E_{/\mathbb{Q}}$}\label{section 5}
In this section, we are able to provide some partial answers to the dual problem. Namely, we fix a prime $p$ throughout and let $E$ vary over all non-CM elliptic curves over $\mathbb{Q}$. We will need to assume that $p\geq 5$. We note now that our results apply to a sparse set of elliptic curves $E_{/\mathbb{Q}}$.
\subsection{The congruence number and modular degree}
\begin{Definition}\label{section 5 defn}Let $\mathscr{E}_p$ be the set of elliptic curves $E_{/\mathbb{Q}}$ for which the following conditions are satisfied:
\begin{enumerate}
\item\label{s5:1} $E$ has good reduction at $p$,
\item\label{s5:2} the conductor $N$ of $E$ is squarefree,
\item\label{s5:3} the residual representation $\bar{\rho}=\bar{\rho}_{E,p}$ is absolutely irreducible,
\item\label{s5:4} $\ell\not\equiv \pm 1\mod{p}$ for all primes $\ell\mid N$.
\end{enumerate}
\end{Definition}
\begin{Remark}
It is shown by J. Cremona and M. Sadek in \cite{cremonasadek} that the proportion of elliptic curves over $\mathbb{Q}$ ordered by height satisfying \eqref{s5:1} (resp. \eqref{s5:2}) is $\left(1-\frac{1}{p}\right)$ (resp. $\zeta(10)/\zeta(2)\approx 60.85\%$). On the other hand, it is a result of W. Duke that $\bar{\rho}_{E,p}$ is irreducible for $100\%$ of elliptic curves, see \cite{Duke}.
\end{Remark}
\begin{Remark}
We observed computationally for elliptic curves with conductor at most 4000 that the conditions \eqref{s5:1}, \eqref{s5:2} and \eqref{s5:4} appear to imply \eqref{s5:3}. We are not immediately aware of any obvious reason for this.
\end{Remark}
As $E$ varies in the set $\mathscr{E}_p$, one would like to understand how often the geometric deformation ring $\mathcal{R}_{E,p}$ is isomorphic to $\mathbb{Z}_p$. Let $f$ be the modular form corresponding to $E$. Note that according to Theorem \ref{section 4 main thm}, this is the case when $p\notin \op{Cong}(f)$. Let $\mathscr{E}_p'$ be the subset of $\mathscr{E}_p$ for which this additional condition is satisfied.
\par Let $E$ be an elliptic curve defined over $\mathbb{Q}$; it has a unique minimal Weierstrass equation $y^2=x^3+ax+b$, where $(a,b)\in \mathbb{Z}^2$ is such that $\op{gcd}(a^3 , b^2)$ is not divisible by any twelfth power. The \textit{height} of $E$ is given by $H(E) := \max\left(\abs{a}^3, b^2\right)$. Ordering elliptic curves over $\mathbb{Q}$ according to height, for any subset $\mathscr{S}\subseteq \mathscr{E}_p$ and $x>0$, let $\mathscr{S}(x)$ consist of all elliptic curves $E\in \mathscr{S}$ with height $\leq x$. The lower density of $\mathscr{S}$ is given by
\[\mathfrak{d}(\mathscr{S}):=\liminf_{x\rightarrow \infty} \frac{\# \mathscr{S}(x)}{\# \mathscr{E}_p(x)}.\]
From a statistical point of view, one would like to characterize $\mathfrak{d}(\mathscr{E}_p')$, i.e., the lower density for the proportion of curves $E\in \mathscr{E}_p$ such that $\mathcal{R}_{E,p}\simeq \mathbb{Z}_p$.
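The height ordering above is elementary to compute from the minimal coefficients. The following minimal Python sketch (the function names `naive_height` and `curves_up_to_height` are ours, and the input pairs $(a,b)$ are assumed to already satisfy the minimality condition) mirrors the definitions of $H(E)$ and $\mathscr{S}(x)$.

```python
def naive_height(a: int, b: int) -> int:
    """Height H(E) = max(|a|^3, b^2) of y^2 = x^3 + a*x + b, assuming
    (a, b) is minimal, i.e. gcd(a^3, b^2) is twelfth-power-free."""
    return max(abs(a) ** 3, b ** 2)

def curves_up_to_height(pairs, x):
    """All (a, b) in `pairs` whose curve has height <= x,
    mirroring the definition of S(x) in the text."""
    return [(a, b) for (a, b) in pairs if naive_height(a, b) <= x]
```

With such a selection function, the running ratios $\#\mathscr{S}(x)/\#\mathscr{E}_p(x)$ that enter the lower density can be tabulated for any finite list of curves.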
\par We are not able to prove unconditional results, instead, we resort to heuristics and explicit calculations. We recall the notions of \textit{modular degree} and \textit{congruence number} associated to an elliptic curve $E_{/\mathbb{Q}}$.
\begin{Definition}
Let $E_{/\mathbb{Q}}$ be an elliptic curve of conductor $N$ and $\phi_E:X_0(N)\rightarrow E$ the modular homomorphism associated to $E$. The modular degree $m_E$ is the degree of $\phi_E$.
\end{Definition}
Let $f$ be the normalized newform on $\Gamma_0(N)$ associated to $E$. The congruence number is an integer divisible precisely by the primes in $\op{Cong}(f)$, see \cite{agashe ribet} for the definition. Let $r_E$ denote the congruence number.
\par Given $r\in \mathbb{Z}_{\geq 1}$, set $r^{(p)}$ to denote $|r|_p^{-1}$, i.e., the $p$-part of $r$. Since $p\nmid N$, it follows from \cite[Theorem 2.1]{agashe ribet} that $r_E^{(p)}=m_E^{(p)}$. Note that an elliptic curve $E\in \mathscr{E}_p$ belongs to $\mathscr{E}_p'$, precisely when $r_E^{(p)}=1$. Therefore, we may formulate the problem in terms of the distribution of the $p$-primary part of the modular degree
\[\liminf_{x\rightarrow \infty} \frac{\#\mathscr{E}_p'(x)}{\#\mathscr{E}_p(x)}=\liminf_{x\rightarrow \infty} \frac{\#\{E\in \mathscr{E}_p(x)\mid m_E^{(p)}=1\}}{\#\mathscr{E}_p(x)}.\]
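For concreteness, the $p$-part $r^{(p)}$ of a positive integer may be computed as follows (an illustrative sketch; the function name is ours and does not appear in the paper's computations):

```python
def p_part(r, p):
    """Return r^(p) = |r|_p^{-1}, the largest power of the prime p dividing r."""
    out = 1
    while r % p == 0:
        out *= p
        r //= p
    return out
```

For instance, since $360 = 2^3\cdot 3^2\cdot 5$, one has $360^{(2)}=8$, $360^{(3)}=9$ and $360^{(7)}=1$.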
The probability that $p\mid m_E$ for odd primes $p$ is given by Cohen--Lenstra heuristics, see \cite[p.499]{watkins}. It follows from such heuristics that the proportion of curves with $m_E^{(p)}=1$ is
\[\prod_{i\geq 1} \left(1-\frac{1}{p^{i}}\right)=1-\frac{1}{p}-\frac{1}{p^2}+\frac{1}{p^5}+\frac{1}{p^7}-\dots.\] Watkins obtains some evidence for these heuristics in loc.\ cit. One expects the condition that $m_E^{(p)}=1$ to be largely independent of the other conditions \eqref{s5:1}-\eqref{s5:4}.
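As a numerical sanity check (ours, not Watkins's), one can evaluate this product directly; by Euler's pentagonal number theorem it agrees with the truncation $1-\frac{1}{p}-\frac{1}{p^2}+\frac{1}{p^5}+\frac{1}{p^7}$ up to $O(p^{-12})$, and for $p=3$ and $p=5$ it predicts that $p\mid m_E$ for about $44.0\%$ and $24.0\%$ of curves, matching the Cohen--Lenstra column in the tables below.

```python
from math import prod  # Python >= 3.8

def cl_proportion(p, terms=80):
    """Heuristic proportion of curves with m_E^(p) = 1: prod_{i>=1} (1 - p^{-i})."""
    return prod(1 - p ** (-i) for i in range(1, terms + 1))

def cl_truncated(p):
    """Pentagonal-number truncation of the same product."""
    return 1 - 1 / p - 1 / p ** 2 + 1 / p ** 5 + 1 / p ** 7
```

For $p=3$ this gives $\approx 0.5601$, i.e., $p\mid m_E$ for $\approx 44.0\%$ of curves.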
\par In the rest of this paper, we show through explicit calculation that one may expect $m_E^{(p)}=1$ to hold for a large proportion of curves, especially as $p\rightarrow\infty$. This leads us to expect the following:
\begin{enumerate}
\item for any odd prime $p$,
\[\liminf_{x\rightarrow \infty} \frac{\# \mathscr{E}_p'(x)}{\# \mathscr{E}_p(x)}>0.\]
\item As $p\rightarrow \infty$,
\[\lim_{p\rightarrow \infty}\left(\liminf_{x\rightarrow \infty} \frac{\# \mathscr{E}_p'(x)}{\# \mathscr{E}_p(x)}\right)= 1.\]
\end{enumerate}
\subsection{Statistics for congruence primes}
\par For newforms $f,g$ of levels $M,N$, we say that a rational prime $p$ is a {\it congruence prime} for $f$ and $g$ if there is a prime $\mathfrak p$ of $\bar{\mathbb{Q}}$ containing $p$ such that there is a congruence of Fourier coefficients $$a_\ell(f) \equiv a_{\ell}(g) \pmod{\mathfrak p}$$ for all primes $\ell$ not dividing $pMN$.
\par Let $E$ be an elliptic curve with conductor $N$ and associated newform $f \in S_2(\Gamma_0(N))$. Recall that a prime $p$ is a {\it congruence prime} for $E$ if there is a weight $2$ newform $g$ of level dividing $N$ such that $p$ is a congruence prime for $f$ and $g$. We
say that a congruence prime is {\it strict} (resp.\ {\it proper}) if the level of $g$ can be taken equal to $N$ (resp.\ strictly less than $N$). In deformation theory, strict congruence primes correspond to global obstructions while proper congruence primes correspond to local
obstructions.
\par The behavior of $2$ as a congruence prime was studied in \cite{CalegariEmerton}, where they proved that it is a congruence prime for the vast majority of eigenforms. The behavior for odd primes is less clear. We computed all congruence primes for the 13,352 isogeny
classes of elliptic curves of
conductor at most $4000$. Computations were done in Magma using the package of Montes to deal with the large number fields which arise as coefficient fields. (Attempts to extend the
computations to larger conductor were unsuccessful due to the extremely large coefficient
fields which begin to arise at that stage.)
\begin{tabular}{c|ccc}
$p$ & \% proper & \% strict & \% cong \\ \hline
2 & 82.9 & 99.1 & 99.5 \\
3 & 49.1 & 61.7 & 75.4 \\
5 & 16.3 & 23.8 & 37.3 \\
7 & 7.8 & 13.2 & 20.0 \\
11 & 2.5 & 4.8 &7.2 \\
13 & 1.6 & 3.4 & 4.9 \\
17 & 0.7 & 2.1 & 2.8 \\
19 & 0.5 & 1.3 & 1.8 \\
23 & 0.3 & 0.9 & 1.1 \\
29 & 0 & 0.6 & 0.6 \\
31 & 0.1 & 0.4 & 0.4 \\
37 & 0 & 0.3 & 0.4 \\
41 & 0 & 0.3 & 0.3 \\
43 & 0 & 0.1 & 0.1 \\
47 & 0 & 0.1 & 0.1
\end{tabular}
The proportions for all primes between 53 and 97 were never more than $0.1\%$.
It is difficult to see any immediate pattern in this data beyond the unsurprising observation that small congruence primes are relatively common and large congruence primes are rare. The data for the 179 isogeny classes of elliptic curves of prime conductor in our sample (in which case all congruences are necessarily strict) was perhaps more revealing.
\begin{tabular}{c|ccc}
$p$ & \% cong & \% Watkins $S_3$ & Watkins CL \\ \hline
3 & 40.2 & 44.8 & 44.0\\
5 & 22.9 & 24.2 & 24.0\\
7 & 11.2 & 16.3 & 16.3\\
11 & 9.5 & 9.9 & 9.9\\
13 & 7.3 & 8.1 & 8.3 \\
17 & 3.9 & 6.2 & 6.2\\
19 & 5.6 & 5.3 & 5.5\\
23 & 3.4 & 4.7 & 4.5\\
29 & 3.4 & 3.4 & 3.6\\
31 & 2.8 & 3.4 & 3.3\\
37 & 1.1 & 2.7 & 2.8
\end{tabular}
Here we list also the proportion of elliptic curves $E$ in Watkins's set $S_3$ (consisting of 52878 non-Setzer--Neumann elliptic curves of prime
discriminant of absolute value less than $10^7$, as computed by
\cite{BrumerMcGuinness}) with $p$ dividing the modular degree, as well
as the Cohen--Lenstra prediction Watkins developed for that case. Given that our data set is quite small, the fit is certainly respectably close.
Since our results are concerned with elliptic curves of squarefree conductor, we also report the proportions in that case, which are notably higher than the overall average, at least for larger $p$.
The data here is for the 4931 isogeny classes of elliptic curves of squarefree conductor $\leq 4000$.
\begin{tabular}{c|c}
$p$ & \% cong \\ \hline
2 & 98.8 \\
3 & 63.7 \\
5 & 37.9 \\
7 & 21.6 \\
11 & 9.3 \\
13 & 6.8 \\
17 & 4.1 \\
19 & 2.8 \\
23 & 1.9 \\
29 & 1.2
\end{tabular}
Finally, and perhaps most relevantly, we note that individual elliptic curves tend to have very few congruence primes.
\begin{tabular}{c|ccc}
$n$ & \% $E$ with $n$ proper cp & \% $E$ with $n$ strict cp & \% $E$ with $n$ cp \\ \hline
0 & 4.3\% & 0.3\% & 0.1\% \\
1 & 40.8\% & 16.6\% & 8.2\% \\
2 & 44.1\% & 55.7\% & 42.2\% \\
3 & 10.2\% & 25.0\% & 38.8\% \\
4 & 0.6\% & 2.3\% & 9.7\% \\
5 & 0.0\% & 0.1\% & 0.9\% \\
6 & 0 & 0 & 0.0\%
\end{tabular}
There were a total of six elliptic curves in the sample with six
congruence primes.
Putting everything together, we computed for small primes $p$ the proportion of isogeny classes of conductor $\leq 4000$ lying in our set $\mathscr{E}_p$, and then also what proportion of those further satisfy $p \nmid m_E$.
\begin{tabular}{c|cc}
$p$ & \% in $\mathscr{E}_p$ & \% in $\mathscr{E}_p$ with $p \nmid m_E$ \\ \hline
5 & 10.3\% & 6.6\% \\
7 & 17.2\% & 13.5\% \\
11 & 25.1\% & 22.3\% \\
13 & 28.6\% & 26.7\% \\
17 & 30.4\% & 29.2\% \\
19 & 30.9\% & 30.0\% \\
23 & 32.1\% & 31.5\% \\
29 & 33.3\% & 33.0\%
\end{tabular}
\section{MAE as a differentiable inpainter}
\setcounter{figure}{0}
\setcounter{table}{0}
Masked Autoencoders (MAE) consist of a Transformer Encoder, which takes as input only a subset of unmasked patches during training, and a Transformer Decoder, which takes as input the encoded patches and, in addition, a learnable MSK token replicated at all the locations where the (masked) patches were not fed to the encoder. The decoder is trained to reconstruct the masked patches.
In MOVE, we need the pre-trained MAE to work as a differentiable inpainter. To that end, we feed all the patches to the encoder. Then, we only apply soft-masking between the MSK token and the encoded patches via a convex combination, before feeding the embeddings to the decoder (see section~2 and Figure~3). This is different from how MAE was trained: during training, the encoder had no way to encode information about the missing patches. Since in MOVE we feed all the patches to the encoder, it is possible that the encoded embeddings contain information about their neighbors. In particular, there is the risk that the unmasked encoded patches would contain information about the masked patches. If that were the case, the decoder would be able to inpaint the masked object even when the entire object is masked at the decoder input. We show empirically and quantitatively that this is not the case. Using the same pre-trained MAE, we compare the reconstruction error for the original inference vs.\ our modified soft-masking inference. We run the evaluation on a subset of $5000$ images from the ImageNet validation set \cite{imagenet}, randomly masking between $80\%$ and $95\%$ of the tokens. We report the mean squared error of the intensity (for intensities in $[0,1]$) in Table~\ref{tab:inpaint-mae} and a comparison of reconstructed images in Figure~\ref{fig:inpaint-mae}, for MAE trained with either a GAN loss or an MSE loss. We find that the difference in the inpainting error is not significant.
Moreover, we observe visually that the reconstructions obtained with the modified soft-masking inference (MOVE) do not reconstruct the masked patches any better than those in the default case, where the masked patches are not provided to the MAE.
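For reference, the per-image error underlying this comparison can be sketched as follows (our reconstruction of the metric; the actual evaluation code may differ in details such as averaging over images):

```python
import numpy as np

def masked_mse(recon, target, patch_mask):
    """Mean squared intensity error over masked pixels only, intensities in [0, 1].
    recon, target: (H, W, 3) images; patch_mask: (H, W), 1 = masked."""
    m = patch_mask.astype(bool)
    return float(((recon[m] - target[m]) ** 2).mean())
```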
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/mae-inpainting.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.4cm} Masked input \hspace*{0.5cm} MAE w/ GAN - orig. \hspace*{0.2cm} MAE w/ GAN - mod. \hspace*{0.2cm} MAE w/ MSE - orig. \hspace*{0.2cm} MAE w/ MSE - mod.
}
\caption{Comparison of MAE sparse input vs differentiable mask inpainting. We show the input and masked input image in the two first columns. For MAE trained with a GAN loss or with an MSE loss we show the reconstructed image when we feed a sparse subset of tokens to the encoder (orig.) and when we feed all the tokens to the encoder and mask only before feeding the embeddings to the decoder (mod.). No significant difference can be observed between these two reconstruction modalities or when we change the MAE training. }\label{fig:inpaint-mae}
\end{figure*}
\section{Inpainter mask and downsampled mask losses}
\setcounter{figure}{0}
\begin{figure}
\centering
\includegraphics[scale=0.6]{figures/sup-maxpool.pdf}
\raggedright
{
\scriptsize \hspace*{3.25cm} Predicted mask \hspace*{3.5cm} Downsampled inpainter mask
}
\caption{Obtaining an inpainting mask from a predicted mask via max pooling downsampling. Due to small artifacts in the mask, all patches might be selected as masked and thus, the entire background might get inpainted. The grid on the right is just for reference purposes.}
\label{fig:sup-maxpool}
\end{figure}
As specified in section~2, we obtain a low-res inpainting mask $\hat m$ via a $\text{maxpool}_P$ operation with stride $P$ on the union of the predicted mask and its shifted version, where $P$ is the patch size that the MAE tokens embed. We use max pooling for downsampling because we want to make sure that we mask all the patches containing even only parts of the object; otherwise, the inpainter may partly reconstruct the object. However, using max pooling for downsampling might result in inpainting more than necessary due to artifacts in the mask. An extreme case of this is illustrated in Figure~\ref{fig:sup-maxpool}, where the entire background would get inpainted due to a single pixel within each $P\times P$ patch. To avoid such cases we apply our ${\cal L}_\text{min}$
and ${\cal L}_\text{bin}$ losses (eq.~(1),(2)) on the downsampled mask as well. Having
a binarization loss on the mask downsampled with max pooling has an extra regularizing effect on the original mask. For example, when all mask pixels in a patch have a value below 0.5, the binarization loss on the max pooling of the mask will push only the largest value towards 0. This creates an asymmetry when the pixels of the mask must be reduced, which prioritizes the largest values.
Eventually however, the application of this loss over multiple iterations will result in pushing all pixels within the patch to 0.
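The downsampling step described above amounts to the following (a minimal NumPy sketch, assuming the mask dimensions are multiples of $P$):

```python
import numpy as np

def maxpool_downsample(mask, P):
    """Downsample an (H, W) soft mask to an (H/P, W/P) patch mask by max pooling:
    any patch containing even a single (partially) masked pixel becomes masked."""
    H, W = mask.shape
    return mask.reshape(H // P, P, W // P, P).max(axis=(1, 3))
```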
\section{Bilateral solver}
\setcounter{figure}{0}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-bilateral-fail-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.5cm} Input \hspace*{0.8cm} Ground truth \hspace*{0.5cm} MOVE \hspace*{0.3cm} MOVE + bilateral \hspace*{0.8cm} Input \hspace*{0.8cm} Ground truth \hspace*{0.5cm} MOVE \hspace*{0.3cm} MOVE + bilateral
}
\caption{A refinement with the bilateral solver might cause the shrinking of valid predicted masks.}
\label{fig:sup-bilateral}
\end{figure}
While other methods achieve competitive results by using a bilateral solver to refine their predicted masks (see section~4.1), MOVE provides more accurate results without any additional post-processing. The application of a bilateral solver, which relies heavily on image texture, can even decrease performance on cluttered images. In Figure~\ref{fig:sup-bilateral} we show some examples where the bilateral solver hurts our predictions.
\section{Segmenter architecture}
Our segmenter is built on top of a ViT-based feature extractor, as specified in section~3.
We define a $\text{Block}^{in \textunderscore ch}_{out \textunderscore ch}$ as a sequence of layers:
\begin{equation}
\begin{aligned}
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU},
\end{aligned}
\end{equation}
where $K\times K$ $\textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch}$ is a padded $K\times K$ convolution with $\textrm{stride}=1$, $in \textunderscore ch$ input channels and $out \textunderscore ch$ output channels. Our baseline segmenter takes DINO ViT-S/8 384-dimensional features arranged in a grid and consists of alternating upsampling layers and blocks:
\begin{equation}
\begin{aligned}
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{384}_{192} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{192}_{128} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{128}_{128} \rightarrow \\
& \textrm{Block}^{128}_{128} \rightarrow 1 \times 1 \textrm{ } \textrm{Conv}^{128}_1
\end{aligned}
\end{equation}
MAE features used as the segmenter inputs in one of the ablations (section~4.3) are 1024-dimensional and come from a ViT/16 with $16\times16$ patches; therefore, the segmenter needs an extra upsampling block and an adapted number of channels. The adapted architecture in this case is
\begin{equation}
\begin{aligned}
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{1024}_{512} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{512}_{256} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{256}_{128} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{128}_{128} \rightarrow \\
& \textrm{Block}^{128}_{128} \rightarrow 1 \times 1 \textrm{ } \textrm{Conv}^{128}_1
\end{aligned}
\end{equation}
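As a sanity check on the two variants, note that only the $2\times2$ nearest-neighbor upsampling layers change the spatial resolution: three doublings undo the $P=8$ stride of ViT-S/8, while the $16\times16$ MAE patches require four (an illustrative helper; the name is ours):

```python
def segmenter_output_size(grid_h, grid_w, n_upsamples):
    """Only the nearest-neighbor 2x upsampling layers change the resolution;
    all convolutions are padded with stride 1."""
    return grid_h * 2 ** n_upsamples, grid_w * 2 ** n_upsamples
```

For a $224\times224$ input, ViT-S/8 yields a $28\times28$ feature grid and three upsamples recover $224\times224$; ViT/16 yields a $14\times14$ grid and needs four.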
For the ImageNet100 experiment we increase the capacity of the segmenter by making each Block deeper, i.e. $\text{Block}^{in \textunderscore ch}_{out \textunderscore ch}$ is:
\begin{equation}
\begin{aligned}
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU} \rightarrow \\
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{out \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU}.
\end{aligned}
\end{equation}
\section{Additional results}
\setcounter{figure}{0}
\setcounter{table}{0}
\subsection{Segmentation qualitative results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-ecssd-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{0.8cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on ECSSD.}
\label{fig:sup-ecssd-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-duts-te-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{1.0cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on DUTS-TE.}
\label{fig:sup-duts-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-dut-omron-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{1.0cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on DUT-OMRON.}
\label{fig:sup-omron-pred}
\end{figure}
In Figures~\ref{fig:sup-ecssd-pred}, \ref{fig:sup-duts-pred} and \ref{fig:sup-omron-pred} we show more segmentation results of MOVE on ECSSD, DUTS-TE and DUT-OMRON.
\subsection{Detection qualitative results}
In Figures~\ref{fig:sup-voc07-pred},\ref{fig:sup-voc12-pred},\ref{fig:sup-coco-pred} we show more object detection results of MOVE on VOC07, VOC12 and COCO20k.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-VOC07-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on VOC07.}
\label{fig:sup-voc07-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-VOC12-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on VOC12.}
\label{fig:sup-voc12-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-COCO-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on COCO20k.}
\label{fig:sup-coco-pred}
\end{figure}
\clearpage
\section{Introduction}
Image segmentation and object detection are today mature and essential components in vision-based systems with applications in a wide range of fields including automotive \cite{chen2015deepdriving}, agriculture \cite{Chiu_2020_CVPR}, and medicine \cite{smistad2015medical}, just to name a few.
A major challenge in building and deploying such components at scale is that they require costly and time-consuming human annotation. This has motivated efforts in self-supervised learning (SSL) \cite{caron2021emerging,chen2021mocov3,wang2021dense}. The aim of SSL is to learn general-purpose image representations from large unlabeled datasets that can be fine-tuned to different downstream tasks with small annotated datasets. While SSL methods have also been fine-tuned for image segmentation since their inception, it is only with the recent state of the art (SotA) methods, such as DINO \cite{caron2021emerging} and Dense Contrastive Learning \cite{wang2021dense}, that a clear and strong link to object segmentation has been observed. This has led to several methods for salient object detection built on top of SSL features \cite{amir2021deep,wang2022freesolo,yin2021transfgu,LOST,wang2022self,Shin2022selfmask}.
Most prior work based on SSL features defines some form of clustering by either using attention maps \cite{amir2021deep,wang2022freesolo,yin2021transfgu} or similarity graphs \cite{LOST,wang2022self,Shin2022selfmask}.
In this work, we take quite a different direction. Rather than directly clustering the features, we train a network to map them to a segmentation mask. As a supervision signal we use the \emph{movability} of objects, \ie, whether they can be locally shifted in a realistic manner; this property holds for objects in the foreground, as they occlude all other objects in the scene. We call our method \methodname. This basic idea has already been exploited in prior work with relative success \cite{Remez_2018_ECCV,Ostyakov2018,bielski2019emergence,arandjelovic2019object,yang_loquercio_2019,savarese2020information,katircioglu2021videobginpaint}. Nonetheless, here we introduce a novel formulation based on movability that yields a significant performance boost across several datasets for salient object detection.
\begin{figure*}[t]
\centering
\includegraphics[scale=.2,trim=0cm 0cm 9.5cm 0cm, clip]{figures/example_pipeline/pipeline2large.pdf}
\caption{Exploiting inpainting and movability. (a) Input image. (b) Examples of predicted segmentation masks: correct (top), larger (middle) and smaller (bottom). (c) Inpainted backgrounds in the three corresponding cases. (d) Composite images obtained by shifting the foreground object in the three cases. (e) It can be observed that when the mask is incorrect (it includes parts of the background or it does not include all of the foreground), the background inpainting combined with shifting reveals repeated patterns and mismatching background texture, when compared to the original input image or to composite images obtained without shifting.}\label{fig:shiftability}
\end{figure*}
In our approach, it is not necessary to move objects far from their initial location or to other images \cite{Ostyakov2018,arandjelovic2019object} and thus we do not have to handle the context mismatch. It is also not necessary to employ %
models to generate entire scenes \cite{bielski2019emergence,yang2017lr}, which can be challenging to train. Our working principle exploits observations also made in \cite{yang_loquercio_2019,savarese2020information,katircioglu2021videobginpaint}, which point out that the correct mask maximizes the inpainting error both for the background and the foreground. However, rather than using the reconstruction error as a supervision signal, we rely on the detection of artifacts generated through shifting, which we find to provide stronger guidance.
Suppose that, given a single image (Figure~\ref{fig:shiftability}~(a)), we predict a segmentation mask (one of the 3 cases in Figure~\ref{fig:shiftability}~(b)). With the mask we can remove the object and inpaint the background (Figure~\ref{fig:shiftability}~(c)). Then, we can also extract the foreground object, randomly shift it locally, and paste it on top of the inpainted background (Figure~\ref{fig:shiftability}~(d)).
When the mask does not accurately follow the outline of a foreground object (e.g., as in the middle and bottom rows of Figure~\ref{fig:shiftability}), we can see duplication artifacts (of the foreground or of the background). We exploit these artifacts as a supervision signal to detect the correct segmentation mask.
As inpainter we use a publicly available Masked AutoEncoder (MAE) \cite{he2021masked} trained with an adversarial loss.\footnote{\url{https://github.com/facebookresearch/mae/blob/main/demo/mae_visualize.ipynb}}
Our segmenter uses a pre-trained SSL ViT as backbone (e.g., DINO \cite{caron2021emerging} or the MAE encoder \cite{he2021masked}). We then train a neural network head based on an upsampling Convolutional Neural Network (CNN).
Following \cite{Shin2022selfmask}, we also further refine the segmenter by training a second segmentation network (SelfMask \cite{Shin2022selfmask}) with supervision from pseudo-masks generated by our trained segmenter.
Even without these further refinements \methodname shows a remarkable performance on a wide range of datasets and tasks.
In particular, in unsupervised single object discovery on VOC07, VOC12 and COCO20K it improves the SotA CorLoc by between $6.1$\% and $9.3$\%, and in unsupervised class-agnostic object detection on COCOval2017 it improves the $\text{AP}_{50}$ by $6.8$\% (a relative improvement of $56$\%), the $\text{AP}_{75}$ by $2.3$\% (relative $55$\%) and the $\text{AP}$ by $2.7$\% (relative $49$\%).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/new_composer_discriminator-v3.pdf}
\caption{Synthetic and real images used to learn how to segment foreground objects.
We obtain the predicted mask and inpainted background from our segmenter and MAE respectively. We train the segmenter in an adversarial manner so that the composite image with a shifted foreground (left, top row) looks real. A discriminator is trained to distinguish two types of real (right) from two types of fake (left) images. The fake images consist of the composite image with a shift and a copy-paste image, obtained by placing the shifted foreground on top of the input image. The set of real images consists of composite images without a shift and the real images. The real images are first autoencoded with MAE to match the artifacts of the inpainted background.
\label{fig:composer}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.21,trim=0 0cm 0cm 0cm, clip]{figures/segmenter_inpainter_cr.pdf}
\caption{(Left) The segmenter is built on top of SSL features from a \textit{frozen} encoder. To define the inpainting region for the background, the predicted mask is shifted and combined with the unshifted mask (bottom left). For better visualization purposes we highlight the edge of the shifted mask, but this does not appear in the actual union of the masks. This mask union is then downsampled to the size of the tile grid via max pooling and denoted $\hat m$.
(Right) The inpainter is based on a \textit{frozen} MAE. First, it takes all the tiles from the input image and feeds them to the MAE encoder. Second, it takes a convex combination between the encoder embeddings and the MSK learned embedding (but now frozen), where the convex combination coefficients are based on the downsampled mask $\hat m$. Finally, this combination is fed to the MAE decoder to generate the inpainted background.
}
\label{fig:segmenter}
\end{figure}
\section{Method}
\label{sec:method}
Our objective is to train a segmenter to map a real image $x\in\real^{H\times W \times 3}$, with $H$ the height and $W$ the width of the image, to a mask $m\in\real^{H\times W}$ of the foreground, such that we can synthesize a realistic image for any small shift of the foreground.
The mask allows us to cut out the foreground from $x$ and to move it arbitrarily by some shift $\delta\in\real^2$ (see Figure~\ref{fig:composer}, top-left).
However, when the shifted foreground is copied back onto the background, missing pixels remain exposed. Thus, we inpaint the background with a \textit{frozen} pre-trained MAE\footnote{The MAE \cite{he2021masked} we use is based on a ViT architecture and has been pre-trained in an adversarial fashion (as opposed to the standard training with an MSE loss) to output more realistic-looking details.} and obtain $\hat b$ (see Figure~\ref{fig:segmenter}). Moreover, there is a difference between the texture of $\hat b$, which is generated by a neural network, and the texture of the cut-out foreground from $x$, which is a real image. To ensure more similarity between these two textures, we synthesize $\hat{x}_\delta$ by extracting the foreground from the autoencoding (AE) of the input image $x$ shifted by $\delta$, which we call $\check{x}_\delta$, and by pasting it onto the background $\hat b$.
We enforce the realism of the synthesized images $\hat x_\delta$ by using adversarial training,
i.e., by training the segmenter against a discriminator that distinguishes
two sets of \textit{real} (Figure~\ref{fig:composer}, right hand side) from two sets of \textit{fake} images (Figure~\ref{fig:composer} left hand side).
The synthetic \textit{real} image $\hat x_{\delta=0}$ is obtained by composing a zero-shifted foreground with the inpainted background; the second \textit{real} image $\check x$ is instead simply the AE of $x$.
The two \textit{fake} images are obtained by composing a $\delta$-shifted foreground with either the inpainted background $\hat b$ or $\check x$, yielding
$\hat x_\delta$ and $\tilde x_\delta$, respectively.
We introduce all the above synthetic images so that the discriminator pays attention only to artifacts due to incorrect masks from the segmenter.
Ideally, the segmenter should generate masks such that the fake image $\hat x_\delta$ looks as realistic as $\check x$ for any small $\delta$. However, the discriminator might distinguish these two images because of the background inpainting artifacts and not because of the artifacts due to an incorrect segmentation (which are exposed by random shifts). To avoid this undesired behavior, we also introduce the real image $\hat x_{\delta=0}$. This image has no segmentation artifacts for any mask, but has the same background inpainting artifacts as the fake images (although there is no shift in $\hat x_{\delta=0}$, the background inpainting creates artifacts beyond the boundaries of the segmentation mask). Finally, to guide the discriminator to detect repeated patterns (as those caused by incorrect masks, see Figure~\ref{fig:shiftability}), we also add a fake image $\tilde x_\delta$, where the background has the original foreground.
The segmenter is trained only through the backpropagation from $\hat x_\delta$. The details of the segmentation network, the inpainting network and the adversarial training are explained in the following sections.
\subsection{Segmenter}
Following the recent trend of methods for unsupervised object segmentation \cite{amir2021deep,wang2022freesolo,yin2021transfgu,LOST,wang2022self,Shin2022selfmask,melas2022}, we build our method on top of SSL features, and, in particular, DINO \cite{caron2021emerging} or MAE \cite{he2021masked} features.
Thus, as a backbone, we adopt the Vision Transformer (ViT) architecture \cite{dosovitskiy2020image}. Following the notation in \cite{LOST}, we split an image $x\in\real^{H\times W \times 3}$ in tiles of size $P\times P$ pixels, for a total of $N = HW/P^2$ tiles (and we assume that $H$ and $W$ are such that $H/P$ and $W/P$ are integers). Each tile is then mapped through a trainable linear layer to an embedding of size $d$ and an additional CLS token is included in the input set (see Figure~\ref{fig:segmenter} left).
The \emph{segmenter} network is a CNN that takes SSL features as input (e.g., from a pre-trained DINO or MAE encoder), upsamples them and then outputs a mask for the original input image. The final output is generated by using a sigmoid to ensure that the mask values are always between $0$ and $1$. We also ensure a minimum
size of the support of the predicted mask by using %
\begin{align}
\label{eq:mask_min_max}
\textstyle
{\cal L}_\text{min} = \frac{1}{n}\sum_{i=1}^n \max\left\{\theta_\text{min}-\sum_{p} \frac{m^{(i)}[p]}{HW},0\right\} %
\end{align}
where $n$ is the number of images in the training dataset, $m^{(i)}$ is the predicted mask from image $x^{(i)}$, $p$ is a pixel location within the image domain, and $\theta_\text{min}$ is a threshold for the minimum mask coverage fraction (in the range $[0,1]$, where $0$ implies that the mask is empty and $1$ implies that the mask covers the whole image).
Since masks should only take binary values to clearly indicate a segment, we use a loss that encourages $m^{(i)}$ to take either $0$ or $1$ values
\begin{align}
\label{eq:mask_bin}
\textstyle
{\cal L}_\text{bin} = \frac{1}{n}\sum_{i=1}^n \frac{1}{HW}\sum_{p} \min\left\{ m^{(i)}[p],1-m^{(i)}[p]\right\}.
\end{align}
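The two losses above can be sketched in NumPy as follows (an illustrative batched version; the actual implementation operates on network tensors):

```python
import numpy as np

def mask_losses(masks, theta_min=0.1):
    """masks: (n, H, W) array of soft masks with values in [0, 1].
    Returns (L_min, L_bin): L_min penalizes per-image coverage below theta_min,
    L_bin pushes every pixel towards a binary value."""
    n = masks.shape[0]
    coverage = masks.reshape(n, -1).mean(axis=1)        # sum_p m[p] / (H * W)
    l_min = np.maximum(theta_min - coverage, 0.0).mean()
    l_bin = np.minimum(masks, 1.0 - masks).mean()
    return l_min, l_bin
```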
\subsection{Differentiable inpainting}
The main task of \methodname is to predict a segmentation mask that can be used to synthesize a realistic image, where the foreground object is shifted on top of the inpainted background (see Figure~\ref{fig:shiftability}~(e) top and Figure~\ref{fig:composer} top left).
Figure~\ref{fig:segmenter} shows how we use the predicted high resolution mask for inpainting with MAE. Since MAE performs inpainting by masking or retaining entire patches of $P'\times P'$ pixels, it is necessary to also split the segmentation mask into a grid of tiles of $P'\times P'$ pixels and to map each tile to a single scalar between $0$ and $1$. We do that by using a max pooling operation within each tile and obtain a low-res mask $\hat m$, such that the region retained by $1-\hat m$ contains no part of the predicted mask.
To regularize the predicted mask $m$, the mask losses ${\cal L}_\text{min}$ and ${\cal L}_\text{bin}$ are also computed on max-pooled ($\hat{m}$) and average-pooled downsampled masks (at a scale $1/P'$ of the original image resolution; for more details see the supplementary material).
Then, we feed the entire set of image tiles to the MAE encoder and obtain embeddings $\xi_1,\dots,\xi_N$.
Next, for $j=1,\dots,N$, we compute the convex combination between the embeddings $\xi_j$ and the learned MSK (masked) token from MAE by using the low-res mask $\hat m$ as
$\hat \xi_j = \hat m[j] \cdot \xi_\text{MSK} + (1-\hat m[j]) \cdot \xi_j.$
Finally, we feed the new embeddings $\hat \xi_j$ in the MAE decoder and reassemble the output tiles back into the inpainted background image $\hat b$ (see Figure~\ref{fig:segmenter} bottom-right).
Notice that we feed all the tiles as input to obtain a differentiable mapping that we can backpropagate through. Interestingly, we found that when no tile is masked at the input of the MAE encoder, the embeddings $\xi_j$ do not store significant information about their neighbors (see the supplementary material). This is in contrast to the typical use of MAE, where only the subset of ``visible'' tiles is fed as input to the encoder. However, such a tile selection operation would make the inpainting not differentiable.
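The convex combination above can be sketched as follows, with plain lists standing in for the actual $D$-dimensional PyTorch embeddings (`msk_token` plays the role of the learned $\xi_\text{MSK}$):

```python
def blend_with_msk_token(embeddings, msk_token, m_hat):
    """xi_hat_j = m_hat[j] * xi_MSK + (1 - m_hat[j]) * xi_j.
    embeddings: N lists of length D; msk_token: length-D list;
    m_hat: length-N list of tile weights in [0, 1]."""
    blended = []
    for xi, w in zip(embeddings, m_hat):
        blended.append([w * t + (1.0 - w) * e for e, t in zip(xi, msk_token)])
    return blended
```

A tile weight of $1$ fully replaces the embedding with the MSK token (the tile is inpainted), a weight of $0$ keeps it unchanged, and intermediate values interpolate smoothly, which is what makes the mapping differentiable in $\hat m$.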
\subsection{Adversarial training}
Figure~\ref{fig:composer} shows how we create the images used in the adversarial training.
First, we mask the input image with the predicted mask and compose with the inpainted background image, obtaining
\begin{align}
\hat x_\delta[p] = m_{\delta}[p] \check{x}[p+\delta] +
(1-m_{\delta}[p]) \hat b[p],
\end{align}
where $m_{\delta}[p]=m[p+\delta]$, $\delta\in[-\Delta W,\Delta W]\times[-\Delta H,\Delta H]$ is a 2D shift, with $\Delta$ the maximum shift range (relative to the image size).
To make the inpainting artifacts in the no-shift composite image $\hat x_{\delta=0}$ more comparable to those in the shifted composite image, we define the background inpainting region as the union between the predicted mask and its shifted version (see Figure~\ref{fig:segmenter}). Thus,
\begin{align}
\hat m = \text{maxpool}_{P}(1-(1-m)\odot (1-m_{\delta})).
\end{align}
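The shift, composition, and mask-union operations above can be sketched on single-channel images represented as nested lists (zero padding outside the image is an assumption of this sketch, not necessarily the paper's boundary handling):

```python
def shift_mask(m, dx, dy):
    """m_delta[p] = m[p + delta]; values shifted in from outside the
    image domain are assumed zero in this sketch."""
    h, w = len(m), len(m[0])
    return [[m[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0.0
             for x in range(w)] for y in range(h)]

def composite(x_shifted_fg, b_hat, m_delta):
    """x_hat_delta[p] = m_delta[p] * x_check[p + delta]
                        + (1 - m_delta[p]) * b_hat[p]."""
    h, w = len(m_delta), len(m_delta[0])
    return [[m_delta[y][x] * x_shifted_fg[y][x]
             + (1.0 - m_delta[y][x]) * b_hat[y][x]
             for x in range(w)] for y in range(h)]

def union_inpaint_mask(m, m_delta, P):
    """m_hat = maxpool_P(1 - (1 - m) * (1 - m_delta)): union of the mask
    and its shifted copy, downsampled to the P x P tile grid."""
    h, w = len(m), len(m[0])
    union = [[1.0 - (1.0 - m[y][x]) * (1.0 - m_delta[y][x]) for x in range(w)]
             for y in range(h)]
    return [[max(union[y][x]
                 for y in range(ty * P, ty * P + P)
                 for x in range(tx * P, tx * P + P))
             for tx in range(w // P)] for ty in range(h // P)]
```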
To improve the discriminator's ability to focus on repeated-pattern artifacts, we additionally create \emph{fake} images with a predicted shifted foreground pasted on top of the autoencoded image, obtaining $\Tilde{x}_\delta=\check{x}_\delta \odot m_\delta + \check{x} \odot (1-m_\delta)$.
The adversarial loss for the discriminator can be written as
\begin{align}
{\cal L}_\text{advD} = - \text{I\!E}_{x_R} \min\{0, D(x_R) - 1 \} - \text{I\!E}_{x_S} \min\{0, -D(x_S) - 1 \},
\end{align}
where samples for ``real'' images $x_R$ are the set $\{\check x^{(i)}\}_{i=1,\dots,n}\bigcup \{\hat x_{\delta=0}^{(i)}\}_{i=1,\dots,n}$ and samples for synthetic images $x_S$ are the set $\{\hat x_\delta^{(i)}\}_{i=1,\dots,n} \bigcup \{\Tilde{x}_{\delta}^{(i)}\}_{i=1,\dots,n}$, with uniform random samples $\delta\sim {\cal U}_2([-\Delta W,\Delta W]\times[-\Delta H,\Delta H])$ and $\text{I\!E}$ denotes the expectation.
To speed up the convergence, we also use the projected discriminator method \cite{Sauer2021NEURIPS}.
For the segmenter, we use instead the standard loss computed on the composite shifted images
\begin{align}
{\cal L}_\text{advS} =
- \text{I\!E}_{\hat x_\delta} D(\hat x_\delta).
\end{align}
Finally, with nonnegative hyperparameters $\lambda_\text{min}$ and
$\lambda_\text{bin}$, our optimization is the adversarial minimization \begin{align}
S^\ast =& \arg\min_S {\cal L}_\text{advS}
+ \lambda_\text{min}{\cal L}_\text{min}
+ \lambda_\text{bin}{\cal L}_\text{bin}\\
&\text{subject to } D^\ast = \arg\min_D {\cal L}_\text{advD}.
\end{align}
\section{Implementation}
\label{sec:implementation}
Except for the ablation studies, in all our experiments we use a self-supervised DINO \cite{caron2021emerging} ViT-S/8 transformer pre-trained on ImageNet \cite{imagenet} as an SSL feature extractor.
We take the output of the penultimate transformer block of DINO as the feature tokens with $P=8$ and feed them to the segmenter.
Our segmenter is a small upsampling convolutional neural network. It assembles the DINO features into a grid of size $H/P\times W/P$ and processes them with 3 upsampling blocks, so that the output matches the input image resolution. Each upsampling block first performs a $2\times 2$ nearest upsampling, followed by a $3\times3$ convolutional layer with padding, batch normalization \cite{ioffe2015batch} and a LeakyReLU activation function (see the supplementary material for details). We add an additional block without upsampling and a linear projection to 1 channel, representing the mask.
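As a simple arithmetic check of the resolution bookkeeping (not the actual network): with $P=8$ feature tokens and three $2\times$ upsampling blocks, the output grid matches the input image resolution.

```python
def output_resolution(h, w, patch=8, n_upsample=3):
    """Feature grid is H/P x W/P; each upsampling block doubles it.
    With patch=8 and 3 blocks, the mask matches the input resolution."""
    gh, gw = h // patch, w // patch
    for _ in range(n_upsample):
        gh, gw = gh * 2, gw * 2
    return gh, gw
```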
Our inpainting network is a ViT-L/16 transformer pre-trained on ImageNet as a Masked Autoencoder (MAE) \cite{he2021masked} with an adversarial loss to increase the details of the reconstructed images. For the discriminator we use the Projected Discriminator \cite{Sauer2021NEURIPS} in its standard setting, but we only use \textit{color} differentiable augmentation.
For the training we use random resized crops of size $224$ with a scale in range $(0.9, 1)$ and aspect ratio $(3/4, 4/3)$.
We set the minimum mask area $\theta_\text{min}= 0.05$, the minimum loss coefficient $\lambda_\text{min} = 100$ and we linearly ramp up the binarization loss coefficient $\lambda_\text{bin}$ from $0$ to $12.5$ over the first $2500$ segmenter iterations.
We use the shift range $\Delta=\nicefrac{1}{8}$. We train the segmenter by alternately minimizing the discriminator loss and the segmenter losses. Both are trained with a learning rate of $0.0002$ and an Adam \cite{kingmaAdam} optimizer with betas $= (0, 0.99)$ for the discriminator and $(0.9, 0.95)$ for the segmenter. We implemented our experiments in PyTorch \cite{pytorch}. We train our model for $80$ epochs with a batch size of $32$ on a single NVIDIA GeForce 3090Ti GPU with 24GB of memory.
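The linear warm-up of the binarization coefficient described above can be sketched as:

```python
def lambda_bin_schedule(step, max_value=12.5, ramp_steps=2500):
    """Linearly ramp lambda_bin from 0 to max_value over the first
    ramp_steps segmenter iterations, then hold it constant."""
    return max_value * min(step / ramp_steps, 1.0)
```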
\section{Experiments}
\subsection{Unsupervised saliency segmentation}
\label{sec:exp-saliency}
\input{tables/os-comp.tex}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/saliency-results-camera-ready.jpg}
\caption{Qualitative evaluation of \methodname on ECSSD, DUTS-TE and DUT-OMRON. First row: input image; second row: \methodname; third row: SelfMask on \methodname; last row: ground truth. Best viewed in color. For more examples and a gray scale version see the supplementary material.}\label{fig:saliency-results}
\end{figure*}
\textbf{Datasets.} We train our main model using the train split of the DUTS dataset (DUTS-TR) \cite{wang2017learningDUTS}, containing $10{,}553$ images of scenes and objects of varying sizes and appearances. We emphasize that we only use the images, without the corresponding ground truth. For comparison, we evaluate our approach on three saliency detection datasets: the test set of DUTS ($5{,}019$ images), DUT-OMRON \cite{yang2013saliency} ($5{,}168$ images) and ECSSD \cite{shi2015hierarchical} ($1{,}000$ images). We report three standard metrics: pixel mask accuracy (Acc), intersection over union (IoU), and $\max F_{\beta}$, where $F_{\beta} = \frac{(1+\beta^2)\text{Precision}\times \text{Recall}}{\beta^2\text{Precision}+\text{Recall}}$ for $\beta=0.3$; $\max F_{\beta}$ is the score for the single optimal threshold on a whole dataset. Additionally, we report the IoU on the test split \cite{chen2019unsupervisedRedo} of the CUB-200-2011 (CUB-Birds) \cite{WahCUB_200_2011} dataset.\\
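The $\max F_\beta$ metric can be sketched as follows, taking hypothetical per-threshold (precision, recall) pairs as input and following the formula above with $\beta=0.3$:

```python
def f_beta(precision, recall, beta=0.3):
    """F_beta as defined in the text; returns 0 when both inputs are 0."""
    b2 = beta ** 2
    denom = b2 * precision + recall
    return (1.0 + b2) * precision * recall / denom if denom > 0 else 0.0

def max_f_beta(pr_pairs):
    """max F_beta: the best score over per-threshold (precision, recall)
    pairs, i.e., the single optimal threshold for the dataset."""
    return max(f_beta(p, r) for p, r in pr_pairs)
```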
\noindent\textbf{Evaluation. }
We train our segmenter in an adversarial manner as specified in sections~\ref{sec:method} and \ref{sec:implementation} and evaluate it on the test datasets. We compare with other methods in Table~\ref{tab:salient-comp}. Note that without any type of post-processing of our predicted masks, we surpass all other methods by a significant margin.
We also follow \cite{wang2022self,Shin2022selfmask} and further refine our masks with a bilateral solver \cite{barron2016fast}.
\begin{wraptable}{r}{5cm}
\caption{Comparison of unsupervised segmentation methods on the CUB-200-2011 test set. MOVE$^\star$ was trained on the CUB-200-2011 train set, while MOVE was trained on DUTS-TR}
\label{tab:cub-birds}
\centering
\small
\vspace{0.3cm}
\begin{tabularx}{5cm}{@{}l@{\hspace{1.6cm}}c@{}}
\toprule
\textbf{Method} & \textbf{IoU} \\
\midrule
PerturbGAN \cite{bielski2019emergence} & 0.360 \\
ReDO \cite{chen2019unsupervisedRedo} & 0.426 \\
IEM \cite{savarese2020information} & 0.551 \\
Melas-Kyriazi \cite{melaskyriazi2021finding} & 0.664 \\
Voynov \cite{voynov2020big} & 0.683 \\
Voynov-E \cite{voynov2020big} & 0.710 \\
Deep Spectral \cite{melas2022} & 0.769 \\
\textbf{MOVE}$^\star$ & \textbf{0.814} \\
\textbf{MOVE} & \textbf{0.858} \\
\bottomrule
\end{tabularx}
\end{wraptable}
Since the bilateral solver only marginally improves or even decreases the quality of our segmentations, we conclude that our predicted masks are already very accurate. Using the bilateral solver might also inadvertently discard correct but fragmented segmentations, as we show in the supplementary material. %
Next, we extract the predicted unsupervised masks from the DUTS-TR dataset and use them as pseudo ground truth to train a class-agnostic segmenter. We use the same architecture (a MaskFormer \cite{cheng2021per}) and training scheme as SelfMask \cite{Shin2022selfmask}. We then evaluate again on the saliency prediction datasets. Without additional post-processing our method surpasses or is on par with the SotA across all metrics and datasets.
While additional processing with the bilateral solver seems to benefit SelfMask \cite{Shin2022selfmask}, it mostly hurts the performance of our method. Figure~\ref{fig:saliency-results} shows qualitative results of our method.
Finally, we evaluate our method on the test set of CUB-Birds dataset. Additionally, we train our model on the train split of CUB-Birds dataset and run the same evaluation. We present the comparison with other methods in Table~\ref{tab:cub-birds} and show that we achieve SotA performance.
\subsection{Single-object discovery}
\textbf{Datasets.} We evaluate our trained model (see section~\ref{sec:exp-saliency}) on 3 typical single-object discovery benchmarks: the train split of COCO20K \cite{lin2014microsoft,vo2020toward} and the trainval splits of VOC07 \cite{pascal-voc-2007} and VOC12 \cite{pascal-voc-2012}. Following \cite{cho2015unsupervised,deselaers2010localizing,siva2013looking,vo2019unsupervised,vo2020toward,vo2021large,LOST,wang2022self}, we report the \textit{Correct Localization} metric (CorLoc), \ie, the percentage of images in which a single predicted bounding box has $\text{IoU}>0.5$ with at least one of the ground-truth boxes.\\
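The CorLoc criterion reduces to a box IoU test; a minimal sketch with boxes given as $(x_0, y_0, x_1, y_1)$ corner coordinates:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x0, y0, x1, y1); CorLoc counts an
    image as correct when a predicted box has IoU > 0.5 with any
    ground-truth box."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```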
\noindent\textbf{Evaluation.} %
Since our method tends to produce a single segmentation mask for multiple objects in the scene, we separate the objects by detecting connected components via OpenCV \cite{opencv_library}. We then convert the separate masks to bounding boxes and choose the largest one as our prediction for the given image. In Table~\ref{tab:single-ob-discovery}, we compare \methodname with other unsupervised methods and we show that just by using processed masks from our method we achieve SotA results on all three datasets, outperforming even methods that used their bounding boxes to train a Class Agnostic Detector (CAD). We show qualitative results for object detection in Figure~\ref{fig:detection-results}.
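The connected-components post-processing can be sketched in pure Python (a BFS stand-in for the OpenCV call; boxes are returned as $(x_0, y_0, x_1, y_1)$ in pixel coordinates):

```python
from collections import deque

def largest_component_bbox(mask):
    """Split a binary mask (nested lists of 0/1) into 4-connected components
    and return the bounding box of the largest one; None for an empty mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best, best_area = None, 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # BFS over one connected component
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                area, x0, y0, x1, y1 = 0, sx, sy, sx, sy
                while q:
                    y, x = q.popleft()
                    area += 1
                    x0, y0 = min(x0, x), min(y0, y)
                    x1, y1 = max(x1, x), max(y1, y)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area > best_area:
                    best_area, best = area, (x0, y0, x1, y1)
    return best
```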
We also follow the practice of \cite{LOST,wang2022self} and use our predicted bounding boxes as pseudo-ground truth for training the CAD on each of the evaluation datasets. To train the detector, we use either the largest or all the bounding boxes (\textit{Multi}) that we obtained from the connected components analysis and after filtering those that have an area smaller than $1\%$ of the image. For the evaluation we take the bounding box with the highest prediction confidence, as done in \cite{LOST,wang2022self}. We use the exact same architecture and training scheme as our competitors for a fair comparison. Training with a single bounding box improves the performance of our method, while training with multiple ones gives it a significant additional boost.\\
\begin{table}
\caption{{Comparisons for unsupervised single object discovery}. We compare \methodname to SotA object discovery methods on VOC07~\cite{pascal-voc-2007}, VOC12 ~\cite{pascal-voc-2012} and COCO20K~\cite{lin2014microsoft,vo2020toward} datasets. Models are evaluated with the CorLoc metric. +CAD indicates training a second stage class-agnostic detector with unsupervised ``pseudo-boxes'' labels. (\textcolor{olivegreen}{$\uparrow \mathbf{z}$}) indicates an improvement of $z$ over prior SotA}
\label{tab:single-ob-discovery}
\centering
\small
\begin{tabularx}{\linewidth}{@{}Xc@{\hspace{3em}}c@{\hspace{3em}}c@{}}
\toprule Method & VOC07~\cite{pascal-voc-2007} & VOC12 ~\cite{pascal-voc-2012}& COCO20K~\cite{lin2014microsoft,vo2020toward} \\
\midrule
Selective Search~\cite{uijlings2013selective, LOST} & 18.8 & 20.9 & 16.0 \\
EdgeBoxes~\cite{zitnick2014edge, LOST} & 31.1 & 31.6 & 28.8 \\
Kim et al.~\cite{kim2009unsupervised, LOST}& 43.9 & 46.4& 35.1 \\
Zhang et al.~\cite{zhang2020object, LOST}& 46.2 & 50.5 & 34.8 \\
DDT+~\cite{wei2019unsupervised, LOST}& 50.2 & 53.1 & 38.2 \\
rOSD~\cite{vo2020toward, LOST} & 54.5 & 55.3 & 48.5 \\
LOD~\cite{vo2021large, LOST}&53.6 & 55.1 & 48.5 \\
DINO-seg~\cite{caron2021emerging,LOST}& 45.8 & 46.2 & 42.1 \\
FreeSOLO~\cite{wang2022freesolo} & 56.1 & 56.7 & 52.8 \\
LOST~\cite{LOST}& 61.9 & 64.0 & 50.7 \\
Deep Spectral \cite{melas2022} & 62.7 & 66.4 & 52.2 \\
TokenCut \cite{wang2022self}& 68.8 & 72.1 & 58.8 \\
\bf\methodname (Ours) & \hspace{3em}\bf76.0 (\textcolor{olivegreen}{$\uparrow$ \bf7.2 }) & \hspace{3em}\bf78.8 (\textcolor{olivegreen}{$\uparrow$ \bf6.7 }) & \hspace{3em}\bf66.6 (\textcolor{olivegreen}{$\uparrow$ \bf7.8 }) \\ %
\midrule
LOD + CAD\cite{LOST} & 56.3 & 61.6 & 52.7 \\
rOSD + CAD~\cite{LOST} & 58.3 & 62.3 & 53.0 \\
LOST + CAD~\cite{LOST} & 65.7 & 70.4 & 57.5 \\
TokenCut + CAD~\cite{wang2022self} & 71.4 & 75.3 & 62.6 \\
\bf\methodname (Ours) + CAD & 77.1 & 80.3 & 69.1\\
\bf\methodname (Ours) Multi + CAD & \hspace{3em}\bf 77.5 (\textcolor{olivegreen}{$\uparrow$ \bf 6.1}) & \hspace{3em}\bf 81.5 (\textcolor{olivegreen}{$\uparrow$ \bf 6.2}) & \hspace{3em}\bf 71.9 (\textcolor{olivegreen}{$\uparrow$ \bf 9.3}) \\
\bottomrule
\end{tabularx}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/detection-with-gt-camera-ready.jpg}
\caption{Qualitative evaluation of object detection of \methodname on VOC07, VOC12 and COCO20k. \textcolor{red}{Red} is the ground truth,
\colorbox{black}{\textcolor{yellow}{yellow}} is our prediction.
For more examples see the supplementary material.}\label{fig:detection-results}
\end{figure*}
\begin{wraptable}{r}{7.5cm}
\vspace{-0.5cm}
\caption{Unsupervised class-agnostic object detection on MS COCO \texttt{val2017}. Compared results are taken directly from FreeSOLO \cite{wang2022freesolo}}
\label{tab:det_val}
\centering
\small
\vspace{0.2cm}
\begin{tabularx}{7.5cm}{@{}l@{\hspace{1.5em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{}}
\toprule
Method & AP$_\text{50}$ & AP$_\text{75}$ & AP & AR$_\text{1}$ & AR$_\text{10}$ & AR$_\text{100}$ \\
\midrule
Sel. Search~\cite{uijlings2013selective} & 0.5 & 0.1 & 0.2 & 0.2 & 1.5 & 10.9\\
DETReg~\cite{bar2021detreg} & 3.1 & 0.6 & 1.0 & 0.6 & 3.6 & 12.7 \\
FreeSOLO~\cite{wang2022freesolo} & 12.2 & 4.2 & 5.5 & 4.6 & 11.4 & 15.3 \\
\bf \methodname (Ours) & \textbf{19.0} & \textbf{6.5} & \textbf{8.2} & \textbf{5.7} & \textbf{13.6} & \textbf{15.9} \\
\bottomrule
\end{tabularx}
\end{wraptable}
\textbf{Unsupervised class-agnostic object detection.} We evaluate our unsupervised object detection model trained on COCO20K with CAD post-training and compare it with the SotA on unsupervised class-agnostic object detection. In Table~\ref{tab:det_val}, we evaluate \methodname on COCO \texttt{val2017} and report Average Precision (AP) and Average Recall (AR), as in \cite{wang2022freesolo}. \methodname yields a remarkable relative improvement over the AP SotA of $50$\% on average.
\subsection{Ablation study}
\label{sec:ablation}
We perform ablation experiments on the validation split (500 images) of HKU-IS \cite{hkuisDataset} to validate the relative importance of the components of our segmentation approach. For the ablation we train each model for 80 epochs on DUTS-TR. We report the IoU in Table~\ref{tab:ablation-table}.
Our baseline model trained with 3 different seeds gives a mean IoU of $0.818$ with $\text{std}=0.008$. Thus, we only report results for a single run in all experiments. \\
\begin{wraptable}{r}{6cm}
\vspace{-.5cm}
\caption{Ablation study. Models evaluated on HKU-IS-val}
\label{tab:ablation-table}
\centering
\small
\vspace{0.2cm}
\begin{tabularx}{6cm}{@{}lc@{}}
\toprule
Setting & IoU\\
\midrule
\textbf{Baseline (shift \nicefrac{2~}{16})} & \textbf{0.819} \\
no min. mask & 0.000 \\
no binarization loss & 0.774 \\
no pooled mask losses & 0.811 \\
no shift & 0.000 \\
shift \nicefrac{1~}{16} & 0.751 \\
shift \nicefrac{3~}{16} & 0.799 \\
shift \nicefrac{4~}{16} & 0.704 \\
disc. fake inputs: composed & 0.789 \\
disc. real inputs: $x$ + comp. w/o shift & 0.740 \\
disc. real inputs: comp. w/o shift & 0.031 \\
disc. real inputs: $x_{ae}$ & 0.000 \\
non-diff inpainter & 0.314 \\
MSE MAE & 0.817 \\
MAE feature extractor & 0.783 \\
ImageNet100 dataset & 0.815\\
\bottomrule
\end{tabularx}
\end{wraptable}
\textbf{Mask losses.} We validate the importance of the mask losses: minimum mask area, binarization and losses on downsampled max-pooled and avg-pooled masks. We find that the minimum area loss is necessary for our method to work, otherwise there is no incentive to produce anything other than empty masks.
Removing the binarization loss or mask losses on the downsampled masks makes the masks noisier, which negatively affects the results. \\
\textbf{Shift range. } We evaluate different ranges of the random shift $\delta$. A small range $\Delta=\nicefrac{1}{16}$ makes it more challenging for the discriminator to detect inconsistencies at the border of objects.
Larger shifts may cause objects to go out of the image boundaries ($\Delta=\nicefrac{3}{16},\nicefrac{4}{16}$) and thus reduce the feedback at the object boundary to the segmenter.
For $\Delta=0$ (no-shift) the only possible discriminator inputs are composed images without a shift as fake and autoencoded images as real. There is no incentive to produce any meaningful masks in this case. \\
\textbf{Discriminator inputs.} In our baseline model, the discriminator's real samples are composed images without a shift and real images autoencoded with MAE; its fake samples are composed images with a shift and autoencoded images with a copy-pasted predicted masked object.
We test the case \textsc{disc. real $x$ + comp. w/o shift}, where we feed real images to the discriminator without autoencoding. In this case, the discriminator can detect the artifacts of MAE instead of focusing on inconsistencies resulting from an incorrect mask. In \textsc{disc. real $x_{ae}$} we only feed the autoencoded images as real. Here, the discriminator can focus on the mismatch from the inpainting artifacts and encourages the segmenter to output empty masks, where no inpainting is done. If we only feed the composite non-shifted images (\textsc{disc. real comp w/o shift}), the artifacts resulting from an incorrect mask cannot be fixed, because there is no reference of what real images look like. In \textsc{disc. fake inputs: composed} we only feed the composed image as fake to the discriminator and omit the real image with a copy-pasted predicted masked object, which slightly degrades the performance. \\
\textbf{Non-differentiable inpainter.} We evaluate the use of hard thresholded downsampled masks as input to the background inpainter. In this case the only feedback for the masks comes from the composition of the images. We find it to be insufficient for the segmenter to learn any meaningful masks.\\
\textbf{Inpainter model.} We substitute the MAE trained with a GAN loss with a MAE that was trained only to reconstruct missing patches with a Mean Squared Error (MSE) loss.
Since this model was trained to only reconstruct the missing patches and not the entire image, we construct the inpainted background by composing the inpainted part with the real image: $\hat{m}_{up} = \text{upsample}_{16}(\hat{m})$; $\hat{b} := x \odot (1-\hat{m}_{up}) + \hat{b} \odot \hat{m}_{up} $. Consequently, we do not use autoencoding when creating the discriminator inputs. We find this model to perform competitively.\\
\textbf{Feature extractor.} We train the model using the features provided by the MAE encoder instead of a separate DINO model. In this case we adapt the segmenter architecture and add one more upsampling block, since MAE takes patches of size $P=16$ (whereas DINO uses $P=8$). We find that with these features we are able to train a competitive segmenter. \\
\textbf{ImageNet100 dataset.} We train our model on the ImageNet100 dataset \cite{tian2020contrastive}, with $131{,}689$ images from $100$ randomly selected ImageNet \cite{imagenet} classes. Since this dataset is much bigger than DUTS-TR, we adapt our segmenter by adding an additional convolutional layer in each upsampling block (see section~\ref{sec:implementation}) and train the model for $8$ epochs. The results are comparable to the DUTS-TR dataset.
\section{Prior Work}
In the past decade, research on object segmentation and detection has seen remarkable progress when full supervision is available \cite{He2017MaskR,hu2018learning,carion2020end}.
To limit the cost of annotation several methods explored different forms of weak supervision
\cite{Khoreva_2017_CVPR,zhou2019objects}
or ways to avoid labeling altogether
\cite{Ji2018,kanezaki2018,Ostyakov2018,Remez_2018_ECCV}.
\methodname falls in the latter category. Therefore, we focus our review of prior work on unsupervised methods for object segmentation and the related task of object detection.\\
\noindent\textbf{Unsupervised Object Detection and Category Discovery.}
Unsupervised object detection and category discovery are extremely challenging tasks that have recently seen a surge of efforts \cite{bar2021detreg,rambhatla2021pursuit,zheng2022towards}
thanks to the capabilities of modern deep learning models.
Recently, features based on deep learning have shown significant progress in object detection \cite{bar2021detreg}, even with just some noisy (unsupervised) guidance \cite{uijlings2013selective}.
More generally, one limitation of unsupervised object detection is that it only provides a coarse localization of the objects. As we have shown with \methodname, it is possible to obtain much more information without supervision.\\ %
\noindent\textbf{Unsupervised Object Segmentation.}
Object segmentation can be formulated as a pixel-wise image partitioning task \cite{Ji2018,ouali2020autoregressive,Xia2017WNetAD,kanezaki2018}
or through the generation of layered models from which a segmenter is trained as a byproduct \cite{wang2022freesolo,Zhang_2018_CVPR,nguyen2019deepusps,burgess2019monet}.
The use of SSL features has spawned several methods with significant performance on real datasets, which we discuss in the following paragraphs.\\
\noindent\textbf{SSL-Based Methods.}
Due to the success of SSL methods and the emergence of segmentation capabilities, several recent methods for unsupervised object segmentation have been built on top of SSL features.
In particular, SelfMask \cite{Shin2022selfmask} proposes a clustering approach that can use multiple SSL features and evaluates all possible combinations of DINO \cite{caron2021emerging}, SwAV \cite{caron2020unsupervised} and MOCOV2 \cite{he2020momentum}. They find that combining features from all three SSL methods yields the best results for segmentation. FreeSOLO \cite{wang2022freesolo} instead finds that DenseCL features \cite{wang2021dense} work best. More generally, some methods use a weak (unsupervised) guidance and losses robust to the coarse pseudo-labels \cite{wang2022freesolo}, but the majority is based on directly clustering SSL features \cite{melas2022,amir2021deep,yin2021transfgu,ziegler2022leopart,LOST,wang2022self}. In contrast to these methods, we show that movability can provide a robust supervision signal.\\
\noindent\textbf{Generative Methods.}
A wide range of methods also exploits generative models to create layered image representations \cite{van2018case,kwak2016generating,bielski2019emergence,yang2017lr,eslami2016attend,he2021ganseg,yang_loquercio_2019,savarese2020information}.
A general scheme is to train a network to generate a background, a foreground and its mask. These components can then be combined to generate an image; in a second stage, one can train a segmenter that learns to map a synthetic image to its corresponding foreground mask. Alternatively, the segmenter can be obtained as a byproduct of training the generative model.
Some methods rely on the assumption that a dataset of only backgrounds is available \cite{benny2020onegan,Ostyakov2018}.
Shifts have also been used before to define segments \cite{Remez_2018_ECCV,bielski2019emergence,arandjelovic2019object}. However, \methodname does not require the training of a generative model, which can be a challenge on its own.
\section{Limitations and Societal Impact}
\label{sec:limitation}
\noindent\textbf{Limitations. }As mentioned in the introduction, movability alone may not suffice to identify an object unambiguously. In fact, the method can segment any combination of multiple objects. To address this, we use a post-processing algorithm to find connected components, but there is no guarantee that all objects have been segmented. %
Another issue is that shifts would not expose artifacts when the background is uniform (\eg, looking at the sky, underwater, with macrophotography).\\
\noindent\textbf{Societal Impact. }%
An important aspect relevant to unsupervised learning methods in general is the potential to become biased if the training datasets are unbalanced. This can negatively affect the segmentation of underrepresented categories, and thus this work should be integrated with mechanisms that take dataset imbalance into account.
\section{Conclusions}
We have introduced \methodname, a novel self-supervised method for object segmentation that exploits the synthesis of images where objects are randomly shifted.
\methodname improves the SotA in object saliency segmentation, unsupervised single object discovery, and unsupervised class agnostic object detection by significant margins. Our ablations show that movability is a strong supervision signal that can be robustly exploited as a pseudo-task for self-supervised object segmentation. We believe that our approach can be further scaled by exploring different architectures and larger datasets.
\section{Introduction}
Image segmentation and object detection are today mature and essential components in vision-based systems with applications in a wide range of fields including automotive \cite{chen2015deepdriving}, agriculture \cite{Chiu_2020_CVPR}, and medicine \cite{smistad2015medical}, just to name a few.
A major challenge in building and deploying such components at scale is that they require costly and time-consuming human annotation. This has motivated efforts in self-supervised learning (SSL) \cite{caron2021emerging,chen2021mocov3,wang2021dense}. The aim of SSL is to learn general-purpose image representations from large unlabeled datasets that can be fine-tuned to different downstream tasks with small annotated datasets. While SSL methods have also been fine-tuned for image segmentation since their inception, it is only with the recent state of the art (SotA) methods, such as DINO \cite{caron2021emerging} and Dense Contrastive Learning \cite{wang2021dense}, that a clear and strong link to object segmentation has been observed. This has led to several methods for salient object detection built on top of SSL features \cite{amir2021deep,wang2022freesolo,yin2021transfgu,LOST,wang2022self,Shin2022selfmask}.
Most prior work based on SSL features defines some form of clustering by either using attention maps \cite{amir2021deep,wang2022freesolo,yin2021transfgu} or similarity graphs \cite{LOST,wang2022self,Shin2022selfmask}.
In this work, we take quite a different direction. Rather than directly clustering the features, we train a network to map them to a segmentation mask. As a supervision signal, we use the \emph{movability} of objects, \ie, whether they can be locally shifted in a realistic manner; this property holds for foreground objects, as they occlude all other objects in the scene. We call our method \methodname. This basic idea has already been exploited in prior work with relative success \cite{Remez_2018_ECCV,Ostyakov2018,bielski2019emergence,arandjelovic2019object,yang_loquercio_2019,savarese2020information,katircioglu2021videobginpaint}. Nonetheless, here we introduce a novel formulation based on movability that yields a significant performance boost across several datasets for salient object detection.
\begin{figure*}[t]
\centering
\includegraphics[scale=.2,trim=0cm 0cm 9.5cm 0cm, clip]{figures/example_pipeline/pipeline2large.pdf}
\caption{Exploiting inpainting and movability. (a) Input image. (b) Examples of predicted segmentation masks: correct (top), larger (middle) and smaller (bottom). (c) Inpainted backgrounds in the three corresponding cases. (d) Composite image obtained by shifting the foreground object in the three cases. (e) It can be observed that when the mask is incorrect (it includes parts of the background or it does not include all of the background), the background inpainting combined with shifting reveals repeated patterns and mismatching background texture, when compared to the original input image or composite images obtained without shifting.}\label{fig:shiftability}
\end{figure*}
In our approach, it is not necessary to move objects far from their initial location or to other images \cite{Ostyakov2018,arandjelovic2019object} and thus we do not have to handle the context mismatch. It is also not necessary to employ %
models to generate entire scenes \cite{bielski2019emergence,yang2017lr}, which can be challenging to train. Our working principle exploits observations also made by \cite{yang_loquercio_2019,savarese2020information,katircioglu2021videobginpaint}. They point out that the correct mask maximizes the inpainting error both for the background and the foreground. However, rather than using the reconstruction error as a supervision signal, we rely on the detection of artifacts generated through shifting, which we find to provide a stronger guidance.
Suppose that, given a single image (Figure~\ref{fig:shiftability}~(a)), we predict a segmentation mask (one of the 3 cases in Figure~\ref{fig:shiftability}~(b)). With the mask we can remove the object and inpaint the background (Figure~\ref{fig:shiftability}~(c)). Then, we can also extract the foreground object, randomly shift it locally, and paste it on top of the inpainted background (Figure~\ref{fig:shiftability}~(d)).
When the mask does not accurately follow the outline of a foreground object (e.g., as in the middle and bottom rows in Figure~\ref{fig:shiftability}), we can see duplication artifacts (of the foreground or of the background). We exploit these artifacts as supervision signal to detect the correct segmentation mask.
As inpainter we use a publicly available Masked AutoEncoder (MAE) \cite{he2021masked} trained with an adversarial loss.\footnote{\url{https://github.com/facebookresearch/mae/blob/main/demo/mae_visualize.ipynb}}
Our segmenter uses a pre-trained SSL ViT as backbone (e.g., DINO \cite{caron2021emerging} or the MAE encoder \cite{he2021masked}). We then train a neural network head based on an upsampling Convolutional Neural Network (CNN).
Following \cite{Shin2022selfmask}, we also further refine the segmenter by training a second segmentation network (SelfMask \cite{Shin2022selfmask}) with supervision from pseudo-masks generated by our trained segmenter.
Even without these further refinements \methodname shows a remarkable performance on a wide range of datasets and tasks.
In particular, in unsupervised single object discovery on VOC07, VOC12 and COCO20K it improves the SotA CorLoc between $6.1$\% and $9.3$\%, and in unsupervised class agnostic object detection on COCOval2017 it improves the $\text{AP}_{50}$ by $6.8$\% (a relative improvement of $56$\%), the $\text{AP}_{75}$ by $2.3$\% (relative $55$\%) and the $\text{AP}$ by $2.7$\% (relative $49$\%).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/new_composer_discriminator-v3.pdf}
\caption{Synthetic and real images used to learn how to segment foreground objects.
We obtain the predicted mask and inpainted background from our segmenter and MAE respectively. We train the segmenter in an adversarial manner so that the composite image with a shifted foreground (left, top row) looks real. A discriminator is trained to distinguish two types of real (right) from two types of fake (left) images. The fake images consist of the composite image with a shift and a copy-paste image, obtained by placing the shifted foreground on top of the input image. The set of real images consists of the composite image without a shift and of real images, which are first autoencoded with MAE to match the artifacts of the inpainted background.
\label{fig:composer}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.21,trim=0 0cm 0cm 0cm, clip]{figures/segmenter_inpainter_cr.pdf}
\caption{(Left) The segmenter is built on top of SSL features from a \textit{frozen} encoder. To define the inpainting region for the background, the predicted mask is shifted and combined with the unshifted mask (bottom left). For better visualization purposes we highlight the edge of the shifted mask, but this does not appear in the actual union of the masks. This mask union is then downsampled to the size of the tile grid via max pooling and denoted $\hat m$.
(Right) The inpainter is based on a \textit{frozen} MAE. First, it takes all the tiles from the input image and feeds them to the MAE encoder. Second, it takes a convex combination between the encoder embeddings and the MSK learned embedding (but now frozen), where the convex combination coefficients are based on the downsampled mask $\hat m$. Finally, this combination is fed to the MAE decoder to generate the inpainted background.
}
\label{fig:segmenter}
\end{figure}
\section{Method}
\label{sec:method}
Our objective is to train a segmenter to map a real image $x\in\real^{H\times W \times 3}$, with $H$ the height and $W$ the width of the image, to a mask $m\in\real^{H\times W}$ of the foreground, such that we can synthesize a realistic image for any small shift of the foreground.
The mask allows us to cut out the foreground from $x$ and to move it arbitrarily by some shift $\delta\in\real^2$ (see Figure~\ref{fig:composer}, top-left).
However, when the shifted foreground is copied back onto the background, missing pixels remain exposed. Thus, we inpaint the background with a \textit{frozen} pre-trained MAE\footnote{The MAE \cite{he2021masked} we use is based on a ViT architecture and has been pre-trained in an adversarial fashion (as opposed to the standard training with an MSE loss) to output more realistic-looking details.} and obtain $\hat b$ (see Figure~\ref{fig:segmenter}). Moreover, there is a difference between the texture of $\hat b$, which is generated by a neural network, and the texture of the cut-out foreground from $x$, which is a real image. To make these two textures more similar, we synthesize $\hat{x}_\delta$ by extracting the foreground from the autoencoding (AE) of the input image $x$ shifted by $\delta$, which we call $\check{x}_\delta$, and by pasting it onto the background $\hat b$.
We enforce the realism of the synthesized images $\hat x_\delta$ by using adversarial training,
i.e., by training the segmenter against a discriminator that distinguishes
two sets of \textit{real} (Figure~\ref{fig:composer}, right hand side) from two sets of \textit{fake} images (Figure~\ref{fig:composer} left hand side).
The synthetic \textit{real} image $\hat x_{\delta=0}$ is obtained by composing a zero-shifted foreground with the inpainted background; the second \textit{real} image $\check x$ is instead simply the AE of $x$.
The two \textit{fake} images, $\hat x_\delta$ and $\tilde x_\delta$, are obtained by composing a $\delta$-shifted foreground with either the inpainted background $\hat b$ or $\check x$, respectively.
We introduce all the above synthetic images so that the discriminator pays attention only to artifacts due to incorrect masks from the segmenter.
Ideally, the segmenter should generate masks such that the fake image $\hat x_\delta$ looks as realistic as $\check x$ for any small $\delta$. However, the discriminator might distinguish these two images because of the background inpainting artifacts and not because of the artifacts due to an incorrect segmentation (which are exposed by random shifts). To avoid this undesired behavior, we also introduce the real image $\hat x_{\delta=0}$. This image has no segmentation artifacts for any mask, but has the same background inpainting artifacts as the fake images (although there is no shift in $\hat x_{\delta=0}$, the background inpainting creates artifacts beyond the boundaries of the segmentation mask). Finally, to guide the discriminator to detect repeated patterns (such as those caused by incorrect masks, see Figure~\ref{fig:shiftability}), we also add a fake image $\tilde x_\delta$, where the background has the original foreground.
The segmenter is trained only through the backpropagation from $\hat x_\delta$. The details of the segmentation network, the inpainting network and the adversarial training are explained in the following sections.
\subsection{Segmenter}
Following the recent trend of methods for unsupervised object segmentation \cite{amir2021deep,wang2022freesolo,yin2021transfgu,LOST,wang2022self,Shin2022selfmask,melas2022}, we build our method on top of SSL features, and, in particular, DINO \cite{caron2021emerging} or MAE \cite{he2021masked} features.
Thus, as a backbone, we adopt the Vision Transformer (ViT) architecture \cite{dosovitskiy2020image}. Following the notation in \cite{LOST}, we split an image $x\in\real^{H\times W \times 3}$ in tiles of size $P\times P$ pixels, for a total of $N = HW/P^2$ tiles (and we assume that $H$ and $W$ are such that $H/P$ and $W/P$ are integers). Each tile is then mapped through a trainable linear layer to an embedding of size $d$ and an additional CLS token is included in the input set (see Figure~\ref{fig:segmenter} left).
The \emph{segmenter} network is a CNN that takes SSL features as input (e.g., from a pre-trained DINO or MAE encoder), upsamples them and then outputs a mask for the original input image. The final output is generated by using a sigmoid to ensure that the mask values are always between $0$ and $1$. We also ensure a minimum
size of the support of the predicted mask by using %
\begin{align}
\label{eq:mask_min_max}
\textstyle
{\cal L}_\text{min} = \frac{1}{n}\sum_{i=1}^n \max\left\{\theta_\text{min}-\sum_{p} \frac{m^{(i)}[p]}{HW},0\right\} %
\end{align}
where $n$ is the number of images in the training dataset, $m^{(i)}$ is the predicted mask for image $x^{(i)}$, $p$ is a pixel location within the image domain, and $\theta_\text{min}$ is a threshold for the minimum mask coverage percentage (in the range $[0,1]$, where $0$ implies that the mask is empty and $1$ implies that the mask covers the whole image).
Since masks should only take binary values to clearly indicate a segment, we use a loss that encourages $m^{(i)}$ to take either $0$ or $1$ values
\begin{align}
\label{eq:mask_bin}
\textstyle
{\cal L}_\text{bin} = \frac{1}{n}\sum_{i=1}^n \frac{1}{HW}\sum_{p} \min\left\{ m^{(i)}[p],1-m^{(i)}[p]\right\}.
\end{align}
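The two regularizers above translate directly into a few lines of PyTorch. The sketch below is illustrative (function names and the batched $(B, H, W)$ mask layout are our assumptions), but each line mirrors a term of Eqs.~\eqref{eq:mask_min_max} and \eqref{eq:mask_bin}:

```python
import torch

def mask_min_loss(m: torch.Tensor, theta_min: float = 0.05) -> torch.Tensor:
    """L_min: penalize masks whose mean coverage falls below theta_min."""
    coverage = m.flatten(1).mean(dim=1)        # per-image mask area in [0, 1]
    return torch.clamp(theta_min - coverage, min=0).mean()

def mask_bin_loss(m: torch.Tensor) -> torch.Tensor:
    """L_bin: push every pixel value towards 0 or 1."""
    return torch.minimum(m, 1 - m).mean()

m = torch.full((2, 4, 4), 0.5)                 # maximally non-binary masks
print(mask_min_loss(m).item())                 # 0.0 (coverage 0.5 >= 0.05)
print(mask_bin_loss(m).item())                 # 0.5
```

Note that ${\cal L}_\text{min}$ is zero as soon as the coverage reaches the threshold, so it only prevents the degenerate empty-mask solution without otherwise biasing the mask size.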
\subsection{Differentiable inpainting}
The main task of \methodname is to predict a segmentation mask that can be used to synthesize a realistic image, where the foreground object is shifted on top of the inpainted background (see Figure~\ref{fig:shiftability}~(e) top and Figure~\ref{fig:composer} top left).
Figure~\ref{fig:segmenter} shows how we use the predicted high resolution mask for inpainting with MAE. Since MAE performs inpainting by masking or retaining entire patches of $P'\times P'$ pixels, it is necessary to also split the segmentation mask into a grid of tiles of $P'\times P'$ pixels and to map each tile to a single scalar between $0$ and $1$. We do that by using a max pooling operation within each tile, obtaining a low-resolution mask $\hat m$ such that the region retained by $1-\hat m$ does not overlap the predicted mask.
To regularize the predicted mask $m$, the mask losses ${\cal L}_\text{min}$,
${\cal L}_\text{bin}$ are also computed on max pool $\hat{m}$ and average pool downsampled masks (at a scale $1/P'$ of the original image resolution; for more details see the supplementary material).
Then, we feed the entire set of image tiles to the MAE encoder and obtain embeddings $\xi_1,\dots,\xi_N$.
Next, for $j=1,\dots,N$, we compute the convex combination between the embeddings $\xi_j$ and the learned MSK (masked) token from MAE by using the low res mask $\hat m$ as
$\hat \xi_j = \hat m[j] \cdot \xi_\text{MSK} + (1-\hat m[j]) \cdot \xi_j.$
Finally, we feed the new embeddings $\hat \xi_j$ in the MAE decoder and reassemble the output tiles back into the inpainted background image $\hat b$ (see Figure~\ref{fig:segmenter} bottom-right).
Notice that we feed all the tiles as input to obtain a differentiable mapping that we can backpropagate through. Interestingly, we found that when no tile is masked at the input of the MAE encoder, the embeddings $\xi_j$ do not store significant information about their neighbors (see the supplementary material). This is in contrast to the typical use of MAE, where only the subset of ``visible'' tiles is fed as input to the encoder. However, such a tile selection operation would make the inpainting non-differentiable.
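The differentiable masking step can be illustrated with a short PyTorch sketch (tensor names and sizes are illustrative, not the actual implementation): each tile embedding is blended with the MSK token instead of being dropped, so gradients flow back to the mask coefficients.

```python
import torch

N, d = 196, 768                    # illustrative tile count and token width
xi = torch.randn(N, d)             # MAE encoder embeddings of ALL tiles
xi_msk = torch.randn(d)            # frozen learned MSK token
m_hat = torch.rand(N)              # low-res mask, one scalar per tile

# hat_xi_j = m_hat[j] * xi_MSK + (1 - m_hat[j]) * xi_j, fully differentiable
hat_xi = m_hat[:, None] * xi_msk + (1 - m_hat[:, None]) * xi
assert hat_xi.shape == (N, d)      # these embeddings go to the frozen decoder
```

Where $\hat m[j]=1$ the tile embedding is fully replaced by the MSK token (standard MAE masking); where $\hat m[j]=0$ it passes through unchanged.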
\subsection{Adversarial training}
Figure~\ref{fig:composer} shows how we create the images used in the adversarial training.
First, we mask the input image with the predicted mask and compose with the inpainted background image, obtaining
\begin{align}
\hat x_\delta[p] = m_{\delta}[p] \check{x}[p+\delta] +
(1-m_{\delta}[p]) \hat b[p],
\end{align}
where $m_{\delta}[p]=m[p+\delta]$, $\delta\in[-\Delta W,\Delta W]\times[-\Delta H,\Delta H]$ is a 2D shift, with $\Delta$ the maximum shift range (relative to the image size).
To make the inpainting artifacts in the no-shift composite image $\hat x_{\delta=0}$ more comparable to those in the shifted composite image, we define the background inpainting region as the union between the predicted mask and its shifted version (see Figure~\ref{fig:segmenter}). Thus,
\begin{align}
\hat m = \text{maxpool}_{P'}(1-(1-m)\odot (1-m_{\delta})).
\end{align}
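A minimal PyTorch sketch of the composition and of the inpainting-region mask above (here \texttt{torch.roll} stands in for the shift, so content wraps around instead of being padded; layout and names are illustrative):

```python
import torch
import torch.nn.functional as F

def compose(x_ae, b_hat, m, delta, patch=16):
    """x_ae, b_hat: (3, H, W); m: (H, W) in [0, 1]; delta: (dy, dx) shift."""
    dy, dx = delta
    m_shift = torch.roll(m, shifts=(-dy, -dx), dims=(0, 1))     # m[p + delta]
    x_shift = torch.roll(x_ae, shifts=(-dy, -dx), dims=(1, 2))  # x_ae[p + delta]
    x_comp = m_shift * x_shift + (1 - m_shift) * b_hat          # composite image
    union = 1 - (1 - m) * (1 - m_shift)                         # mask OR shifted mask
    m_hat = F.max_pool2d(union[None, None], kernel_size=patch)[0, 0]
    return x_comp, m_hat

x_ae, b_hat = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
m = torch.zeros(64, 64)
m[16:32, 16:32] = 1.0                          # a square foreground mask
x_comp, m_hat = compose(x_ae, b_hat, m, delta=(4, 4))
assert x_comp.shape == (3, 64, 64) and m_hat.shape == (4, 4)
```

Pooling the union of the mask and its shifted copy makes the inpainted region identical for the shifted and non-shifted composites, which is what keeps their inpainting artifacts comparable.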
To improve the discriminator's ability to focus on repeated-pattern artifacts, we additionally create \emph{fake} images with the predicted shifted foreground pasted on top of the autoencoded image, obtaining $\Tilde{x}_\delta=\check{x}_\delta \odot m_\delta + \check{x} \odot (1-m_\delta)$.
The adversarial loss for the discriminator can be written as
\begin{align}
{\cal L}_\text{advD} = - \text{I\!E}_{x_R} \min\{0, D(x_R) - 1 \} - \text{I\!E}_{x_S} \min\{0, -D(x_S) - 1 \}
\end{align}
where samples for ``real'' images $x_R$ are the set $\{\check x^{(i)}\}_{i=1,\dots,n}\bigcup \{\hat x_{\delta=0}^{(i)}\}_{i=1,\dots,n}$ and samples for synthetic images $x_S$ are the set $\{\hat x_\delta^{(i)}\}_{i=1,\dots,n} \bigcup \{\Tilde{x}_{\delta}^{(i)}\}_{i=1,\dots,n}$, with uniform random samples $\delta\sim {\cal U}_2([-\Delta W,\Delta W]\times[-\Delta H,\Delta H])$ and $\text{I\!E}$ denotes the expectation.
To speed up the convergence, we also use the projected discriminator method \cite{Sauer2021NEURIPS}.
For the segmenter, we use instead the standard loss computed on the composite shifted images
\begin{align}
{\cal L}_\text{advS} =
- \text{I\!E}_{\hat x_\delta} D(\hat x_\delta).
\end{align}
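The two objectives above correspond to a standard hinge GAN loss for the discriminator and a non-saturating loss for the segmenter; a minimal sketch on raw discriminator logits:

```python
import torch

def d_loss(logits_real: torch.Tensor, logits_fake: torch.Tensor) -> torch.Tensor:
    # L_advD = -E[min(0, D(x_R) - 1)] - E[min(0, -D(x_S) - 1)]
    return (-torch.clamp(logits_real - 1, max=0).mean()
            - torch.clamp(-logits_fake - 1, max=0).mean())

def s_loss(logits_fake: torch.Tensor) -> torch.Tensor:
    # L_advS = -E[D(x_hat_delta)]: the segmenter pushes fakes towards "real"
    return -logits_fake.mean()

print(d_loss(torch.tensor([2.0]), torch.tensor([-2.0])).item())  # 0.0 (confident, correct D)
print(d_loss(torch.tensor([0.0]), torch.tensor([0.0])).item())   # 2.0 (uncertain D)
```

The hinge saturates once real logits exceed $1$ and fake logits fall below $-1$, so a confident discriminator stops receiving gradient from already well-classified samples.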
Finally, with $\lambda_\text{min}$,
$\lambda_\text{bin}$ nonnegative hyperparameters, our optimization is the adversarial minimization \begin{align}
S^\ast =& \arg\min_S {\cal L}_\text{advS}
+ \lambda_\text{min}{\cal L}_\text{min}
+ \lambda_\text{bin}{\cal L}_\text{bin}\\
&\text{subject to } D^\ast = \arg\min_D {\cal L}_\text{advD}.
\end{align}
\section{Implementation}
\label{sec:implementation}
Except for the ablation studies, in all our experiments we use a self-supervised DINO \cite{caron2021emerging} ViT-S/8 transformer pre-trained on ImageNet \cite{imagenet} as an SSL feature extractor.
We take the output of the penultimate transformer block of DINO as the feature tokens with $P=8$ and feed them to the segmenter.
Our segmenter is a small upsampling convolutional neural network. It assembles the DINO features into a grid of size $H/P\times W/P$ and processes them with 3 upsampling blocks, so that the output matches the input image resolution. Each upsampling block first performs a $2\times 2$ nearest upsampling, followed by a $3\times3$ convolutional layer with padding, batch normalization \cite{ioffe2015batch} and a LeakyReLU activation function (see the supplementary material for details). We add an additional block without upsampling and a linear projection to 1 channel, representing the mask.
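An illustrative PyTorch sketch of such an upsampling head (channel widths are placeholders; only the structure, i.e.\ three upsampling blocks, one block without upsampling, and a 1-channel projection with a sigmoid, follows the description above):

```python
import torch
import torch.nn as nn

def up_block(c_in, c_out, upsample=True):
    layers = [nn.Upsample(scale_factor=2, mode="nearest")] if upsample else []
    layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
               nn.BatchNorm2d(c_out),
               nn.LeakyReLU(0.2)]
    return nn.Sequential(*layers)

segmenter_head = nn.Sequential(
    up_block(384, 192),            # H/8 -> H/4 (384 = DINO ViT-S token width)
    up_block(192, 96),             # H/4 -> H/2
    up_block(96, 48),              # H/2 -> H
    up_block(48, 48, upsample=False),
    nn.Conv2d(48, 1, 1),           # linear projection to a 1-channel mask
    nn.Sigmoid(),                  # mask values in [0, 1]
)

feats = torch.randn(2, 384, 28, 28)   # feature grid for a 224x224 input, P=8
mask = segmenter_head(feats)
assert mask.shape == (2, 1, 224, 224)
```

Three $2\times$ upsamplings exactly undo the $P=8$ patchification, so the mask is predicted at the input resolution.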
Our inpainting network is a ViT-L/16 transformer pre-trained on ImageNet as a Masked Autoencoder (MAE) \cite{he2021masked} with an adversarial loss to increase the details of the reconstructed images. For the discriminator we use the Projected Discriminator \cite{Sauer2021NEURIPS} in its standard setting, but we only use \textit{color} differentiable augmentation.
For the training we use random resized crops of size $224$ with a scale in range $(0.9, 1)$ and aspect ratio $(3/4, 4/3)$.
We set the minimum mask area $\theta_\text{min}= 0.05$, the minimum loss coefficient $\lambda_\text{min} = 100$ and we linearly ramp up the binarization loss coefficient $\lambda_\text{bin}$ from $0$ to $12.5$ over the first $2500$ segmenter iterations.
We use the shift range $\Delta=\nicefrac{1}{8}$. We train the segmenter by alternately minimizing the discriminator loss and the segmenter losses. Both are trained with a learning rate of $0.0002$ and an Adam \cite{kingmaAdam} optimizer with betas $= (0, 0.99)$ for the discriminator and $(0.9, 0.95)$ for the segmenter. We implemented our experiments in PyTorch \cite{pytorch}. We train our model for $80$ epochs with a batch size of $32$ on a single NVIDIA GeForce 3090Ti GPU with 24GB of memory.
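The alternating optimization can be sketched as follows (stand-in linear networks replace the actual segmenter pipeline and discriminator; only the update order, the optimizer settings above, and the detaching of the segmenter output for the discriminator step reflect our setup):

```python
import torch
import torch.nn as nn

segmenter = nn.Linear(8, 8)            # stand-ins for the real networks
discriminator = nn.Linear(8, 1)

d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.99))
s_opt = torch.optim.Adam(segmenter.parameters(), lr=2e-4, betas=(0.9, 0.95))

for step in range(2):                  # one D step and one S step per batch
    x = torch.randn(4, 8)
    # --- discriminator step (segmenter output detached) ---
    fake = segmenter(x).detach()
    d_loss = (-torch.clamp(discriminator(x) - 1, max=0).mean()
              - torch.clamp(-discriminator(fake) - 1, max=0).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # --- segmenter step (would also add the weighted mask losses) ---
    s_loss = -discriminator(segmenter(x)).mean()
    s_opt.zero_grad(); s_loss.backward(); s_opt.step()
```

Detaching the fake sample in the discriminator step ensures that only the segmenter step propagates gradients through the composition pipeline.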
\section{Experiments}
\subsection{Unsupervised saliency segmentation}
\label{sec:exp-saliency}
\input{tables/os-comp.tex}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/saliency-results-camera-ready.jpg}
\caption{Qualitative evaluation of \methodname on ECSSD, DUTS-TE and DUT-OMRON. First row: input image; second row: \methodname; third row: SelfMask on \methodname; last row: ground truth. Best viewed in color. For more examples and a gray scale version see the supplementary material.}\label{fig:saliency-results}
\end{figure*}
\textbf{Datasets.} We train our main model using the train split of the DUTS dataset (DUTS-TR) \cite{wang2017learningDUTS}, containing $10{,}553$ images of scenes and objects of varying sizes and appearances. We emphasize that we only use the images without the corresponding ground truth. For comparison, we evaluate our approach on three saliency detection datasets: the test set of DUTS ($5{,}019$ images), DUT-OMRON \cite{yang2013saliency} ($5{,}168$ images) and ECSSD \cite{shi2015hierarchical} ($1{,}000$ images). We report three standard metrics: pixel mask accuracy (Acc), intersection over union (IoU), and $\max F_{\beta}$, where $F_{\beta} = \frac{(1+\beta^2)\text{Precision}\times \text{Recall}}{\beta^2\text{Precision}+\text{Recall}}$ for $\beta=0.3$; $\max F_{\beta}$ is the score for the single optimal threshold over the whole dataset. Additionally, we report the IoU on the test split \cite{chen2019unsupervisedRedo} of the CUB-200-2011 (CUB-Birds) dataset \cite{WahCUB_200_2011}.\\
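For concreteness, the $\max F_{\beta}$ metric with a single threshold shared across the dataset can be sketched as follows (the threshold grid and the flattened array layout are illustrative choices):

```python
import numpy as np

def max_f_beta(pred, gt, beta=0.3, n_thresh=255):
    """pred: (N, HW) soft masks in [0, 1]; gt: (N, HW) binary masks."""
    best = 0.0
    for t in np.linspace(0, 1, n_thresh, endpoint=False):
        binm = pred >= t                       # one threshold for all images
        tp = np.logical_and(binm, gt).sum()
        precision = tp / max(binm.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        if precision + recall > 0:
            f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
            best = max(best, f)
    return best

pred = np.array([[0.9, 0.8, 0.1, 0.2]])
gt = np.array([[1, 1, 0, 0]], dtype=bool)
print(max_f_beta(pred, gt))                    # 1.0 (perfect ranking of the 4 pixels)
```

The small $\beta$ weights precision more than recall, so over-segmenting the saliency map is penalized more heavily than missing pixels.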
\noindent\textbf{Evaluation. }
We train our segmenter in an adversarial manner as specified in sections~\ref{sec:method} and \ref{sec:implementation} and evaluate it on the test datasets. We compare with other methods in Table~\ref{tab:salient-comp}. Note that without any type of post-processing of our predicted masks, we surpass all other methods by a significant margin.
We also follow \cite{wang2022self,Shin2022selfmask} and further refine our masks with a bilateral solver \cite{barron2016fast}.
\begin{wraptable}{r}{5cm}
\caption{Comparison of unsupervised segmentation methods on the CUB-200-2011 test set. MOVE$^\star$ was trained on the CUB-200-2011 train set, while MOVE was trained on DUTS-TR}
\label{tab:cub-birds}
\centering
\small
\vspace{0.3cm}
\begin{tabularx}{5cm}{@{}l@{\hspace{1.6cm}}c@{}}
\toprule
\textbf{Method} & \textbf{IoU} \\
\midrule
PerturbGAN \cite{bielski2019emergence} & 0.360 \\
ReDO \cite{chen2019unsupervisedRedo} & 0.426 \\
IEM \cite{savarese2020information} & 0.551 \\
Melas-Kyriazi \cite{melaskyriazi2021finding} & 0.664 \\
Voynov \cite{voynov2020big} & 0.683 \\
Voynov-E \cite{voynov2020big} & 0.710 \\
Deep Spectral \cite{melas2022} & 0.769 \\
\textbf{MOVE}$^\star$ & \textbf{0.814} \\
\textbf{MOVE} & \textbf{0.858} \\
\bottomrule
\end{tabularx}
\end{wraptable}
Since the bilateral solver only marginally improves or even decreases the quality of our segmentation, we conclude that our predicted masks are already very accurate. Using the bilateral solver might also inadvertently discard correct, but fragmented segmentations, as we show in the supplementary material. %
Next, we extract the predicted unsupervised masks from the DUTS-TR dataset and use them as pseudo ground-truth to train a class-agnostic segmenter. We use the same architecture (a MaskFormer \cite{cheng2021per}) and training scheme as SelfMask \cite{Shin2022selfmask}. We then evaluate again on the saliency prediction datasets. Without additional post-processing, our method surpasses or is on par with the SotA across all metrics and datasets.
While additional processing with the bilateral solver seems to benefit SelfMask \cite{Shin2022selfmask}, it mostly hurts the performance of our method. Figure~\ref{fig:saliency-results} shows qualitative results of our method.
Finally, we evaluate our method on the test set of CUB-Birds dataset. Additionally, we train our model on the train split of CUB-Birds dataset and run the same evaluation. We present the comparison with other methods in Table~\ref{tab:cub-birds} and show that we achieve SotA performance.
\subsection{Single-object discovery}
\textbf{Datasets.} We evaluate our trained model (see section~\ref{sec:exp-saliency}) on 3 typical single-object discovery benchmarks: the train split of COCO20K \cite{lin2014microsoft,vo2020toward} and the trainval splits of VOC07 \cite{pascal-voc-2007} and VOC12 \cite{pascal-voc-2012}. Following \cite{cho2015unsupervised,deselaers2010localizing,siva2013looking,vo2019unsupervised,vo2020toward,vo2021large,LOST,wang2022self}, we report the \textit{Correct Localization} metric (CorLoc), \ie, the percentage of images in which the single predicted bounding box has $\text{IoU}>0.5$ with at least one of the ground truth boxes.\\
\noindent\textbf{Evaluation.} %
Since our method tends to produce a single segmentation mask for multiple objects in the scene, we separate the objects by detecting connected components via OpenCV \cite{opencv_library}. We then convert the separate masks to bounding boxes and choose the largest one as our prediction for the given image. In Table~\ref{tab:single-ob-discovery}, we compare \methodname with other unsupervised methods and show that just by using the processed masks from our method we achieve SotA results on all three datasets, outperforming even methods that used their bounding boxes to train a Class Agnostic Detector (CAD). We show qualitative results for object detection in Figure~\ref{fig:detection-results}.
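This post-processing step can be sketched in a few lines (here \texttt{scipy.ndimage.label} serves as a stand-in for the OpenCV connected-components call):

```python
import numpy as np
from scipy import ndimage

def largest_component_box(mask):
    """Binary (H, W) mask -> (x_min, y_min, x_max, y_max) of the largest blob."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixels per component
    ys, xs = np.where(labels == (np.argmax(sizes) + 1))
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

m = np.zeros((32, 32), dtype=np.uint8)
m[2:6, 2:6] = 1          # small component (16 px)
m[10:20, 12:26] = 1      # large component (140 px)
print(largest_component_box(m))        # (12, 10, 25, 19)
```

Keeping only the largest component matches the single-box CorLoc protocol; for the \textit{Multi} variant, all component boxes above the area threshold would be retained instead.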
We also follow the practice of \cite{LOST,wang2022self} and use our predicted bounding boxes as pseudo-ground truth for training the CAD on each of the evaluation datasets. To train the detector, we use either the largest or all the bounding boxes (\textit{Multi}) that we obtained from the connected components analysis and after filtering those that have an area smaller than $1\%$ of the image. For the evaluation we take the bounding box with the highest prediction confidence, as done in \cite{LOST,wang2022self}. We use the exact same architecture and training scheme as our competitors for a fair comparison. Training with a single bounding box improves the performance of our method, while training with multiple ones gives it a significant additional boost.\\
\begin{table}
\caption{{Comparisons for unsupervised single object discovery}. We compare \methodname to SotA object discovery methods on VOC07~\cite{pascal-voc-2007}, VOC12 ~\cite{pascal-voc-2012} and COCO20K~\cite{lin2014microsoft,vo2020toward} datasets. Models are evaluated with the CorLoc metric. +CAD indicates training a second stage class-agnostic detector with unsupervised ``pseudo-boxes'' labels. (\textcolor{olivegreen}{$\uparrow \mathbf{z}$}) indicates an improvement of $z$ over prior sota}
\label{tab:single-ob-discovery}
\centering
\small
\begin{tabularx}{\linewidth}{@{}Xc@{\hspace{3em}}c@{\hspace{3em}}c@{}}
\toprule Method & VOC07~\cite{pascal-voc-2007} & VOC12 ~\cite{pascal-voc-2012}& COCO20K~\cite{lin2014microsoft,vo2020toward} \\
\midrule
Selective Search~\cite{uijlings2013selective, LOST} & 18.8 & 20.9 & 16.0 \\
EdgeBoxes~\cite{zitnick2014edge, LOST} & 31.1 & 31.6 & 28.8 \\
Kim et al.~\cite{kim2009unsupervised, LOST}& 43.9 & 46.4& 35.1 \\
Zhang et al.~\cite{zhang2020object, LOST}& 46.2 & 50.5 & 34.8 \\
DDT+~\cite{wei2019unsupervised, LOST}& 50.2 & 53.1 & 38.2 \\
rOSD~\cite{vo2020toward, LOST} & 54.5 & 55.3 & 48.5 \\
LOD~\cite{vo2021large, LOST}&53.6 & 55.1 & 48.5 \\
DINO-seg~\cite{caron2021emerging,LOST}& 45.8 & 46.2 & 42.1 \\
FreeSOLO~\cite{wang2022freesolo} & 56.1 & 56.7 & 52.8 \\
LOST~\cite{LOST}& 61.9 & 64.0 & 50.7 \\
Deep Spectral \cite{melas2022} & 62.7 & 66.4 & 52.2 \\
TokenCut \cite{wang2022self}& 68.8 & 72.1 & 58.8 \\
\bf\methodname (Ours) & \hspace{3em}\bf76.0 (\textcolor{olivegreen}{$\uparrow$ \bf7.2 }) & \hspace{3em}\bf78.8 (\textcolor{olivegreen}{$\uparrow$ \bf6.7 }) & \hspace{3em}\bf66.6 (\textcolor{olivegreen}{$\uparrow$ \bf7.8 }) \\ %
\midrule
LOD + CAD\cite{LOST} & 56.3 & 61.6 & 52.7 \\
rOSD + CAD~\cite{LOST} & 58.3 & 62.3 & 53.0 \\
LOST + CAD~\cite{LOST} & 65.7 & 70.4 & 57.5 \\
TokenCut + CAD~\cite{wang2022self} & 71.4 & 75.3 & 62.6 \\
\bf\methodname (Ours) + CAD & 77.1 & 80.3 & 69.1\\
\bf\methodname (Ours) Multi + CAD & \hspace{3em}\bf 77.5 (\textcolor{olivegreen}{$\uparrow$ \bf 6.1}) & \hspace{3em}\bf 81.5 (\textcolor{olivegreen}{$\uparrow$ \bf 6.2}) & \hspace{3em}\bf 71.9 (\textcolor{olivegreen}{$\uparrow$ \bf 9.3}) \\
\bottomrule
\end{tabularx}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/detection-with-gt-camera-ready.jpg}
\caption{Qualitative evaluation of object detection of \methodname on VOC07, VOC12 and COCO20k. \textcolor{red}{Red} is the ground truth,
\colorbox{black}{\textcolor{yellow}{yellow}} is our prediction.
For more examples see the supplementary material.}\label{fig:detection-results}
\end{figure*}
\begin{wraptable}{r}{7.5cm}
\vspace{-0.5cm}
\caption{Unsupervised class-agnostic object detection on MS COCO \texttt{val2017}. Compared results are taken directly from FreeSOLO \cite{wang2022freesolo}}
\label{tab:det_val}
\centering
\small
\vspace{0.2cm}
\begin{tabularx}{7.5cm}{@{}l@{\hspace{1.5em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{\hspace{.85em}}c@{}}
\toprule
Method & AP$_\text{50}$ & AP$_\text{75}$ & AP & AR$_\text{1}$ & AR$_\text{10}$ & AR$_\text{100}$ \\
\midrule
Sel. Search~\cite{uijlings2013selective} & 0.5 & 0.1 & 0.2 & 0.2 & 1.5 & 10.9\\
DETReg~\cite{bar2021detreg} & 3.1 & 0.6 & 1.0 & 0.6 & 3.6 & 12.7 \\
FreeSOLO~\cite{wang2022freesolo} & 12.2 & 4.2 & 5.5 & 4.6 & 11.4 & 15.3 \\
\bf \methodname (Ours) & \textbf{19.0} & \textbf{6.5} & \textbf{8.2} & \textbf{5.7} & \textbf{13.6} & \textbf{15.9} \\
\bottomrule
\end{tabularx}
\end{wraptable}
\textbf{Unsupervised class-agnostic object detection.} We evaluate our unsupervised object detection model trained on COCO20K with CAD post-training and compare it with SotA on unsupervised class-agnostic object detection. In Table~\ref{tab:det_val}, we evaluate \methodname on COCOval2017 and report Average Precision (AP) and Average Recall (AR), as in \cite{wang2022freesolo}. \methodname yields a remarkable relative improvement over the AP SotA of $50$\% on average.
\subsection{Ablation study}
\label{sec:ablation}
We perform ablation experiments on the validation split (500 images) of HKU-IS \cite{hkuisDataset} to validate the relative importance of the components of our segmentation approach. For the ablation we train each model for 80 epochs on DUTS-TR. We report the IoU in Table~\ref{tab:ablation-table}.
Our baseline model trained with 3 different seeds gives a mean IoU of $0.818$ with $\text{std}=0.008$. Thus, we report results for a single run in all experiments. \\
\begin{wraptable}{r}{6cm}
\vspace{-.5cm}
\caption{Ablation study. Models evaluated on HKU-IS-val}
\label{tab:ablation-table}
\centering
\small
\vspace{0.2cm}
\begin{tabularx}{6cm}{@{}lc@{}}
\toprule
Setting & IoU\\
\midrule
\textbf{Baseline (shift \nicefrac{2~}{16})} & \textbf{0.819} \\
no min. mask & 0.000 \\
no binarization loss & 0.774 \\
no pooled mask losses & 0.811 \\
no shift & 0.000 \\
shift \nicefrac{1~}{16} & 0.751 \\
shift \nicefrac{3~}{16} & 0.799 \\
shift \nicefrac{4~}{16} & 0.704 \\
disc. fake inputs: composed & 0.789 \\
disc. real inputs: $x$ + comp. w/o shift & 0.740 \\
disc. real inputs: comp. w/o shift & 0.031 \\
disc. real inputs: $x_{ae}$ & 0.000 \\
non-diff inpainter & 0.314 \\
MSE MAE & 0.817 \\
MAE feature extractor & 0.783 \\
ImageNet100 dataset & 0.815\\
\bottomrule
\end{tabularx}
\end{wraptable}
\textbf{Mask losses.} We validate the importance of the mask losses: minimum mask area, binarization and losses on downsampled max-pooled and avg-pooled masks. We find that the minimum area loss is necessary for our method to work, otherwise there is no incentive to produce anything other than empty masks.
Removing the binarization loss or mask losses on the downsampled masks makes the masks noisier, which negatively affects the results. \\
\textbf{Shift range. } We evaluate different ranges of the random shift $\delta$. A small range $\Delta=\nicefrac{1}{16}$ makes it more challenging for the discriminator to detect inconsistencies at the border of objects.
Larger shifts may cause objects to go out of the image boundaries ($\Delta=\nicefrac{3}{16},\nicefrac{4}{16}$) and thus reduce the feedback at the object boundary to the segmenter.
For $\Delta=0$ (no-shift) the only possible discriminator inputs are composed images without a shift as fake and autoencoded images as real. There is no incentive to produce any meaningful masks in this case. \\
\textbf{Discriminator inputs.} In our baseline model, we feed both composed images with no-shift and real images autoencoded with MAE as real samples and composed images with a shift and autoencoded images with copy-pasting of a predicted masked object as fake samples for the discriminator training.
We test the case \textsc{disc. real $x$ + comp. w/o shift}, where we feed real images without autoencoding to the discriminator. In this case, the discriminator can detect the artifacts of MAE instead of focusing on inconsistencies resulting from an incorrect mask. In \textsc{disc. real $x_{ae}$} we only feed the autoencoded images as real. Here, the discriminator can focus on the mismatch from the inpainting artifacts and encourages the segmenter to output empty masks, for which no inpainting is done. If we only feed the composite non-shifted images (\textsc{disc. real comp w/o shift}), the artifacts resulting from an incorrect mask cannot be fixed, because there is no reference of what real images look like. In \textsc{disc. fake inputs: composed} we only feed the composed image as fake to the discriminator and omit the autoencoded image with a copy-pasted predicted object, which slightly degrades the performance. \\
\textbf{Non-differentiable inpainter.} We evaluate the use of hard thresholded downsampled masks as input to the background inpainter. In this case the only feedback for the masks comes from the composition of the images. We find it to be insufficient for the segmenter to learn any meaningful masks.\\
\textbf{Inpainter model.} We substitute the MAE trained with a GAN loss with a MAE that was trained only to reconstruct missing patches with a Mean Squared Error (MSE) loss.
Since this model was trained to only reconstruct the missing patches and not the entire image, we construct the inpainted background by composing the inpainted part with the real image: $\hat{m}_{up} = \text{upsample}_{16}(\hat{m})$; $\hat{b} := x \odot (1-\hat{m}_{up}) + \hat{b} \odot \hat{m}_{up} $. Consequently, we do not use autoencoding when creating the discriminator inputs. We find this model to perform competitively.\\
\textbf{Feature extractor.} We train the model using the features provided by the MAE encoder instead of a separate DINO model. In this case we adapted the segmenter architecture and added one more upsampling block, since MAE takes patches of size $P=16$ (whereas DINO has $P=8$). We find that with these features we are able to train a competitive segmenter. \\
\textbf{ImageNet100 dataset.} We train our model on the ImageNet100 dataset \cite{tian2020contrastive}, with $131{,}689$ images from $100$ randomly selected ImageNet \cite{imagenet} classes. Since this dataset is much bigger than DUTS-TR, we adapt our segmenter by adding an additional convolutional layer in each upsampling block (see section~\ref{sec:implementation}) and train the model for $8$ epochs. The results are comparable to the DUTS-TR dataset.
\section{Prior Work}
In the past decade, research on object segmentation and detection has seen remarkable progress when full supervision is available \cite{He2017MaskR,hu2018learning,carion2020end}.
To limit the cost of annotation several methods explored different forms of weak supervision
\cite{Khoreva_2017_CVPR,zhou2019objects}
or ways to avoid labeling altogether
\cite{Ji2018,kanezaki2018,Ostyakov2018,Remez_2018_ECCV}.
\methodname falls in the latter category. Therefore, we focus our review of prior work on unsupervised methods for object segmentation and the related task of object detection.\\
\noindent\textbf{Unsupervised Object Detection and Category Discovery.}
Unsupervised object detection and category discovery are extremely challenging tasks that have recently seen a surge of efforts \cite{bar2021detreg,rambhatla2021pursuit,zheng2022towards}
thanks to the capabilities of modern deep learning models.
Recently, features based on deep learning have shown significant progress in object detection \cite{bar2021detreg}, even with just some noisy (unsupervised) guidance \cite{uijlings2013selective}.
More generally, one limitation of unsupervised object detection is that it only provides a coarse localization of the objects. As we have shown with \methodname, it is possible to obtain much more information without supervision.\\ %
\noindent\textbf{Unsupervised Object Segmentation.}
Object segmentation can be formulated as a pixel-wise image partitioning task \cite{Ji2018,ouali2020autoregressive,Xia2017WNetAD,kanezaki2018}
or through the generation of layered models from which a segmenter is trained as a byproduct \cite{wang2022freesolo,Zhang_2018_CVPR,nguyen2019deepusps,burgess2019monet}.
The use of SSL features has spawned several methods with significant performance on real datasets, which we discuss in the following paragraphs.\\
\noindent\textbf{SSL-Based Methods.}
Due to the success of SSL methods and the emergence of segmentation capabilities, several recent methods for unsupervised object segmentation have been built on top of SSL features.
In particular, SelfMask \cite{Shin2022selfmask} proposes a clustering approach that can use multiple SSL features and evaluates all possible combinations of DINO \cite{caron2021emerging}, SwAV \cite{caron2020unsupervised} and MOCOV2 \cite{he2020momentum}. They find that combining features from all three SSL methods yields the best results for segmentation. FreeSOLO \cite{wang2022freesolo} instead finds that DenseCL features \cite{wang2021dense} work best. More generally, some methods use weak (unsupervised) guidance and losses robust to coarse pseudo-labels \cite{wang2022freesolo}, but the majority are based on directly clustering SSL features \cite{melas2022,amir2021deep,yin2021transfgu,ziegler2022leopart,LOST,wang2022self}. In contrast to these methods, we show that movability can provide a robust supervision signal.\\
\noindent\textbf{Generative Methods.}
A wide range of methods also exploits generative models to create layered image representations \cite{van2018case,kwak2016generating,bielski2019emergence,yang2017lr,eslami2016attend,he2021ganseg,yang_loquercio_2019,savarese2020information}.
A general scheme is to train a network to generate a background, a foreground and its mask. These components can then be combined to generate an image and then, in a second stage, one can train a segmenter that learns to map a synthetic image to its corresponding foreground mask. Alternatively, the segmenter can be obtained as a byproduct during the training of the generative model.
Some methods rely on the assumption that a dataset of only backgrounds is available \cite{benny2020onegan,Ostyakov2018}.
Shifts have also been used before to define segments \cite{Remez_2018_ECCV,bielski2019emergence,arandjelovic2019object}. However, \methodname does not require the training of a generative model, which can be a challenge on its own.
\section{Limitations and Societal Impact}
\label{sec:limitation}
\noindent\textbf{Limitations. }As mentioned in the introduction, movability alone may not suffice to identify an object unambiguously. In fact, the method can segment any combination of multiple objects. To address this, we use a post-processing algorithm to find connected components, but there is no guarantee that all objects have been segmented. %
Another issue is that shifts would not expose artifacts when the background is uniform (\eg, looking at the sky, underwater, with macrophotography).\\
\noindent\textbf{Societal Impact. }%
An important aspect that is relevant to unsupervised learning methods in general is the potential to become biased if the training datasets are unbalanced. This clearly has a potentially negative impact on the segmentation of categories that are underrepresented and thus this work should be integrated with mechanisms to take the dataset imbalance into account.
\section{Conclusions}
We have introduced \methodname, a novel self-supervised method for object segmentation that exploits the synthesis of images where objects are randomly shifted.
\methodname improves the SotA in object saliency segmentation, unsupervised single object discovery, and unsupervised class agnostic object detection by significant margins. Our ablations show that movability is a strong supervision signal that can be robustly exploited as a pseudo-task for self-supervised object segmentation. We believe that our approach can be further scaled by exploring different architectures and larger datasets.
\section{MAE as a differentiable inpainter}
\setcounter{figure}{0}
\setcounter{table}{0}
Masked Autoencoders (MAE) consist of a Transformer Encoder, which takes as input only a subset of unmasked patches during training, and a Transformer Decoder, which takes as input the encoded patches and, in addition, a learnable MSK token replicated at all the locations where the (masked) patches were not fed to the encoder. The decoder is trained to reconstruct the masked patches.
In MOVE, we need the pre-trained MAE to work as a differentiable inpainter. To that end, we feed all the patches to the encoder. Then we apply only a soft-masking between the MSK token and the encoded patches via a convex combination, before feeding the embeddings to the decoder (see section~2 and Figure~3). This is different from how MAE was trained: during training, the encoder had no way to encode the information about the missing patches. Since in MOVE we feed all the patches to the encoder, it is possible that the encoded embeddings contain information about their neighbors. In particular, there is the risk that the unmasked encoded patches would contain information about the masked patches. If that were the case, the decoder would be able to inpaint the masked object even when the entire object is masked at the decoder input. We show empirically and quantitatively that this is not the case. Using the same pre-trained MAE, we compare the reconstruction error for the original inference vs. our modified soft-masking inference. We run the evaluation on a subset of $5000$ images from the ImageNet validation set \cite{imagenet}, randomly masking between $80\%$ and $95\%$ of the tokens. We show the mean squared error of the intensity for the intensity range $[0;1]$ in Table~\ref{tab:inpaint-mae} and a comparison of reconstructed images in Figure~\ref{fig:inpaint-mae}, for MAE trained with either a GAN loss or an MSE loss. We find that the difference in the inpainting error is not significant.
Moreover, we observe visually that the reconstructions obtained with the modified soft-masking (MOVE) are no better at recovering the masked patches than those of the default case, where the masked patches are not provided to MAE.
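Our reading of the soft-masking step can be sketched as follows (the exact form of the convex combination is our assumption, and all names are illustrative):

```python
import numpy as np

def soft_mask_embeddings(encoded, msk_token, mask):
    """Soft-masking used to turn MAE into a differentiable inpainter (sketch).
    encoded: (N, D) patch embeddings from the encoder (all patches are fed in),
    msk_token: (D,) learnable MSK token, mask: (N,) soft mask values in [0, 1].
    Masked patches are (softly) replaced by the MSK token before the decoder."""
    m = mask[:, None]
    # convex combination: mask value 1 -> pure MSK token, 0 -> encoder output
    return (1.0 - m) * encoded + m * msk_token[None, :]
```

Because the combination is differentiable in `mask`, gradients can flow from the inpainting output back to the predicted segmentation mask.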
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/mae-inpainting.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.4cm} Masked input \hspace*{0.5cm} MAE w/ GAN - orig. \hspace*{0.2cm} MAE w/ GAN - mod. \hspace*{0.2cm} MAE w/ MSE - orig. \hspace*{0.2cm} MAE w/ MSE - mod.
}
\caption{Comparison of MAE sparse input vs differentiable mask inpainting. We show the input and masked input image in the two first columns. For MAE trained with a GAN loss or with an MSE loss we show the reconstructed image when we feed a sparse subset of tokens to the encoder (orig.) and when we feed all the tokens to the encoder and mask only before feeding the embeddings to the decoder (mod.). No significant difference can be observed between these two reconstruction modalities or when we change the MAE training. }\label{fig:inpaint-mae}
\end{figure*}
\section{Inpainter mask and downsampled mask losses}
\setcounter{figure}{0}
\begin{figure}
\centering
\includegraphics[scale=0.6]{figures/sup-maxpool.pdf}
\raggedright
{
\scriptsize \hspace*{3.25cm} Predicted mask \hspace*{3.5cm} Downsampled inpainter mask
}
\caption{Obtaining an inpainting mask from a predicted mask via max pooling downsampling. Due to small artifacts in the mask, all patches might be selected as masked and thus, the entire background might get inpainted. The grid on the right is just for reference purposes.}
\label{fig:sup-maxpool}
\end{figure}
As specified in section~2, we obtain a low-res inpainting mask $\hat m$ via a $\text{maxpool}_P$ with stride $P$ operation on the union of the predicted mask and its shifted version, where $P$ is the patch size that MAE tokens embed. We use max pooling for downsampling, because we want to make sure that we mask all the patches containing even only parts of the object. This is important, otherwise the inpainter may partly reconstruct the object. However, using max pooling for downsampling might result in inpainting more than necessary due to the artifacts in the mask. An extreme case of this is illustrated in Figure~\ref{fig:sup-maxpool}, where the entire background would get inpainted due to a single pixel within each $P\times P$ patch. To avoid such cases we apply our ${\cal L}_\text{min}$
and ${\cal L}_\text{bin}$ losses (eq.~(1),(2)) on the downsampled mask as well. Having
a binarization loss on the mask downsampled with max pooling has an extra regularizing effect on the original mask. For example, when all mask pixels in a patch have a value below 0.5, the binarization loss on the max pooling of the mask will push only the largest value towards 0. This creates an asymmetry when the pixels of the mask must be reduced, which prioritizes the largest values.
Eventually however, the application of this loss over multiple iterations will result in pushing all pixels within the patch to 0.
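The max-pooling downsampling and the binarization penalty applied to the pooled mask can be sketched as follows (numpy sketch; function names are ours, and the loss is shown up to normalization):

```python
import numpy as np

def maxpool_downsample(mask, P=16):
    """Downsample an (H, W) soft mask with max pooling of kernel and stride P,
    as used to build the low-res inpainting mask (H and W divisible by P)."""
    H, W = mask.shape
    return mask.reshape(H // P, P, W // P, P).max(axis=(1, 3))

def bin_loss(m):
    """Binarization penalty min(m, 1 - m): zero only for a binary mask."""
    return np.minimum(m, 1.0 - m).mean()
```

A single stray pixel per $P\times P$ cell drives the whole pooled value, which is why applying the losses to the downsampled mask as well discourages such artifacts.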
\section{Bilateral solver}
\setcounter{figure}{0}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-bilateral-fail-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.5cm} Input \hspace*{0.8cm} Ground truth \hspace*{0.5cm} MOVE \hspace*{0.3cm} MOVE + bilateral \hspace*{0.8cm} Input \hspace*{0.8cm} Ground truth \hspace*{0.5cm} MOVE \hspace*{0.3cm} MOVE + bilateral
}
\caption{A refinement with the bilateral solver might cause the shrinking of valid predicted masks.}
\label{fig:sup-bilateral}
\end{figure}
While other methods get competitive results by using a bilateral solver to refine the masks predicted from their methods (see section~4.1), MOVE provides more accurate results without any additional post-processing. The application of a bilateral solver, which relies heavily on image texture, could even decrease the performance in cluttered images. In Figure~\ref{fig:sup-bilateral} we show some examples where the bilateral solver hurts our predictions.
\section{Segmenter architecture}
Our segmenter is built on top of a ViT-based feature extractor, as specified in section~3.
We define a $\text{Block}^{in \textunderscore ch}_{out \textunderscore ch}$ as a sequence of layers:
\begin{equation}
\begin{aligned}
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU},
\end{aligned}
\end{equation}
where $K\times K$ $\textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch}$ is a padded $K\times K$ convolution with $\textrm{stride}=1$, $in \textunderscore ch$ input channels and $out \textunderscore ch$ output channels. Our baseline segmenter takes DINO ViT-S/8 384-dimensional features arranged in a grid and consists of alternating upsampling layers and blocks:
\begin{equation}
\begin{aligned}
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{384}_{192} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{192}_{128} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{128}_{128} \rightarrow \\
& \textrm{Block}^{128}_{128} \rightarrow 1 \times 1 \textrm{ } \textrm{Conv}^{128}_1
\end{aligned}
\end{equation}
MAE features used as the segmenter inputs in one of the ablations (section~4.3) are 1024-dimensional and come from ViT/16 with $16\times16$ patches, therefore the segmenter needs an extra upsampling block and an adapted number of channels. The adapted architecture in this case is
\begin{equation}
\begin{aligned}
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{1024}_{512} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{512}_{256} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{256}_{128} \rightarrow \\
& \textrm{Upsample}_{\textrm{nearest}}^{2\times2} \rightarrow \textrm{Block}^{128}_{128} \rightarrow \\
& \textrm{Block}^{128}_{128} \rightarrow 1 \times 1 \textrm{ } \textrm{Conv}^{128}_1
\end{aligned}
\end{equation}
For the ImageNet100 experiment we increase the capacity of the segmenter by making each Block deeper, i.e. $\text{Block}^{in \textunderscore ch}_{out \textunderscore ch}$ is:
\begin{equation}
\begin{aligned}
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{in \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU} \rightarrow \\
& 3\times3 \textrm{ } \textrm{Conv}_{out \textunderscore ch}^{out \textunderscore ch} \rightarrow \textrm{BatchNorm} \rightarrow \textrm{LeakyReLU}.
\end{aligned}
\end{equation}
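As a sanity check of the shape bookkeeping above (our own illustration, not part of the method): a ViT with patch size $P$ yields a feature grid of side $H/P$, and each $2\times2$ nearest-neighbour upsampling doubles it, so three upsamplings recover full resolution for $P=8$ and four for $P=16$:

```python
def output_resolution(img_size, patch_size, n_upsamples):
    """Spatial side length of the segmenter output: the ViT feature grid has
    side img_size // patch_size, and each upsampling layer doubles it."""
    return (img_size // patch_size) * 2 ** n_upsamples
```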
\section{Additional results}
\setcounter{figure}{0}
\setcounter{table}{0}
\subsection{Segmentation qualitative results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-ecssd-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{0.8cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on ECSSD.}
\label{fig:sup-ecssd-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-duts-te-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{1.0cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on DUTS-TE.}
\label{fig:sup-duts-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-dut-omron-pred-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.7cm} Input \hspace*{1.2cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth \hspace*{1.0cm} MOVE \hspace*{0.5cm} SelfMask on MOVE \hspace*{0.2cm} Ground truth
}
\caption{Sample segmentation results on DUT-OMRON.}
\label{fig:sup-omron-pred}
\end{figure}
In Figures~\ref{fig:sup-ecssd-pred}, \ref{fig:sup-duts-pred} and \ref{fig:sup-omron-pred} we show more segmentation results of MOVE on ECSSD, DUTS-TE and DUT-OMRON.
\subsection{Detection qualitative results}
In Figures~\ref{fig:sup-voc07-pred}, \ref{fig:sup-voc12-pred} and \ref{fig:sup-coco-pred} we show more object detection results of MOVE on VOC07, VOC12 and COCO20k.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-VOC07-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on VOC07.}
\label{fig:sup-voc07-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-VOC12-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on VOC12.}
\label{fig:sup-voc12-pred}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sup-COCO-preds-cr.jpg}
\raggedright
{
\scriptsize \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox \hspace*{0.6cm} Input \hspace*{0.55cm} MOVE masks \hspace*{0.25cm} MOVE bbox \hspace*{0.15cm} Ground truth bbox
}
\caption{Sample detection results on COCO20k.}
\label{fig:sup-coco-pred}
\end{figure}
\clearpage
\section{Introduction}
The purpose of this paper is to study vector and scalar-valued nearly $S^*$-invariant subspaces of the Hardy space defined on the unit disc. We first produce some results on the structure of nearly $S^*$-invariant subspaces with a finite defect; in particular we develop a powerful tool which allows us to relate the vector-valued nearly $S^*$-invariant subspaces to scalar-valued nearly $S^*$-invariant subspaces with a finite defect. These results then allow us to adopt a previously unknown universal approach to the study of the kernels of the Toeplitz operator, the truncated Toeplitz operator, the dual truncated Toeplitz operator and the truncated Toeplitz operator on the multiband space (all to be defined later).
We denote $\mathbb{T}$ to be the unit circle and $\mathbb{D}$ to be the open unit disc. The vector-valued Hardy space is denoted $H^2(\mathbb{D}, \mathbb{C}^n)$ and is the Hilbert space of column vectors of length $n$ with each coordinate taking values in $H^2$; background theory on the classical Hardy space $H^2$ can be found in \cite{nikolski2002operators, duren1970theory}. The backwards shift on the space $H^2(\mathbb{D}, \mathbb{C}^n)$ is defined by
$$
S^* \begin{pmatrix}
f_1 \\
\vdots \\
f_n
\end{pmatrix} (z) = \frac{\begin{pmatrix}
f_1 (z) \\
\vdots \\
f_n (z)
\end{pmatrix} - \begin{pmatrix}
f_1 (0) \\
\vdots \\
f_n (0)
\end{pmatrix}}{z}.
$$
If we denote $\overline{H^2_0} = \{ \overline{f} : f \in H^2, f(0) = 0 \}$, then it is readily checked that $\overline{H^2_0}$ is the orthogonal complement of $H^2$ in $L^2 (\mathbb{T})$. In the scalar case (i.e. when $n=1$), using Beurling's Theorem one can then deduce that all non-trivial $S^*$-invariant subspaces are of the form $K_{\theta} = \theta \overline{H^2_0} \cap H^2$ for some inner function $\theta$. We call $K_{\theta}$ a model space and further information on model spaces can be found in \cite{cima2000backward}. One can further check that for distinct $\lambda_i \in \mathbb{D}$, if $\theta = \prod_{i} \frac{z- \lambda_i}{1 - \overline{\lambda_i}z} $, then $K_{\theta}$ is the span of the Cauchy kernels $k_{{\lambda_i}}(z) = \sum_{n=0}^{\infty} (\overline{\lambda_i} z)^n$. The Cauchy kernel $k_{\lambda_i}$ is an eigenvector of the backwards shift with eigenvalue $\overline{\lambda_i}$.
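Since $S^*$ acts on Taylor coefficients simply by dropping the constant term and reindexing, the eigenvalue relation $S^* k_{\lambda} = \overline{\lambda} k_{\lambda}$ can be checked numerically on truncated coefficient sequences (an illustrative numpy sketch, not part of the paper):

```python
import numpy as np

def backward_shift(coeffs):
    """S* on Taylor coefficients: (f - f(0))/z drops a_0 and shifts down."""
    return coeffs[1:]

def cauchy_kernel_coeffs(lam, n):
    """First n Taylor coefficients of k_lambda(z) = sum_k (conj(lam) z)^k."""
    return np.conj(lam) ** np.arange(n)

lam = 0.3 + 0.4j
k = cauchy_kernel_coeffs(lam, 50)
# S* k_lambda = conj(lambda) * k_lambda, coefficient by coefficient
assert np.allclose(backward_shift(k), np.conj(lam) * k[:-1])
```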
\begin{defn}
A closed subspace $M \subseteq H^2 ( \mathbb{D} , \mathbb{C}^n )$ is said to be nearly $S^*$-invariant with defect $m$ if and only if there exists an $m$-dimensional subspace $D$ (which may be taken to be orthogonal to $M$) such that if $f \in M$ and $f(0) $ is the zero vector then $S^* f \in M \oplus D$.
\vskip 0.1cm
\noindent If $M$ is nearly $S^*$-invariant with defect 0 then it is said to be nearly $S^*$-invariant.
\end{defn}
Using orthogonal decomposition we can write $L^2 = \overline{H^2_0} \oplus K_{\theta} \oplus \theta H^2$. We define $P_{\theta}: L^2 \to K_{\theta}$ to be the orthogonal projection.
\begin{defn}The truncated Toeplitz operator $A_g^{\theta}: K_{\theta} \to K_{\theta}$ having symbol $g \in L^2$ is the densely defined operator
$$
A_g^{\theta} (f) = P_{\theta} (g f)
$$
having domain
$$
\{ f \in K_{\theta} : g f \in L^2 \}.
$$
\end{defn}
The concept of (scalar) nearly backward shift invariant subspaces was first introduced by Hitt in \cite{hitt1988invariant} as a generalisation of Hayashi’s results concerning Toeplitz kernels in \cite{MR853630}. These spaces were then studied further by Sarason \cite{sarason1988nearly}. The study of nearly backwards shift invariant subspaces was then generalised to the vectorial case in \cite{MR2651921}, and generalised to include a finite defect in \cite{chalendar2019beurling}. Kernels of Toeplitz operators are the prototypical example of nearly $S^*$-invariant subspaces.
Truncated Toeplitz operators were introduced in \cite{sarason2007algebraic}, and over the past decade there have been many further publications studying their properties. The applications of truncated Toeplitz operators are vast, ranging from purely mathematical to more applied problems. From a purely mathematical perspective one can use the Sz.-Nagy-Foiaş model theory for Hilbert space contractions (see \cite{nikolski2002operators}) to show that every Hilbert space contraction $T$ having defect indices $(1,1)$ and such that $(T^*)^n \to 0$ in the strong operator topology is unitarily equivalent to $A_z^{\theta}$, for some inner function $\theta$. This can be generalised to produce similar results for arbitrary defect indices. Another notable application of truncated Toeplitz operators within pure mathematics comes from the Carathéodory and Pick problems \cite{pickproblem}, where truncated Toeplitz operators with an analytic symbol appear naturally. From a more applied perspective truncated Toeplitz operators have links to control theory and electrical engineering. More specifically, when one considers an extremal problem posed over $H^{\infty}$, such a problem can be solved by computing the norm of a Hankel operator, and the norm of the Hankel operator can in turn be shown to equal the norm of an analytic truncated Toeplitz operator. This is shown explicitly as equation 2.9 in \cite{hankel}.
Although truncated Toeplitz operators share many properties with the classical Toeplitz operator it is easily checked that the kernel of a truncated Toeplitz operator is not nearly $S^*$-invariant. This motivates our study for section 2 where we show under certain conditions the kernel of a truncated Toeplitz operator is in fact nearly $S^*$-invariant with defect 1. In many cases the study of Toeplitz operators becomes greatly simplified when the operator has an invertible symbol; in section 2 we also show that the symbol of a truncated Toeplitz operator may be chosen to be invertible in $L^{\infty}$.
In section 3 we prove a powerful result that shows for any $i \in \{ 1 \hdots n \}$ the projection on to the first $i$ coordinates of a vector-valued nearly $S^*$-invariant subspace is a nearly $S^*$-invariant subspace with a finite defect. We then generalise Theorem 3.2 in \cite{MR2651921} and Corollary 4.5 in \cite{chalendar2019beurling} to find a Hitt-style decomposition for the vector-valued nearly $S^*$-invariant subspaces with a finite defect.
In section 4 we show that in all cases the kernel of a truncated Toeplitz operator is a nearly $S^*$-invariant subspace with defect 1; this then allows us to decompose the kernel into an isometric image of a model space. The approach of decomposing a kernel into an isometric image of a model space much resembles the works of Hayashi \cite{MR853630} and Hitt \cite{hitt1988invariant} for the classical Toeplitz operator. We also make the observation that we can decompose the kernel of a truncated Toeplitz operator into a nearly $S^*$-invariant subspace multiplied by a power of $z$ (where $z \in \mathbb{D}$ is the independent variable). Then using the results of \cite{hitt1988invariant}, this observation also gives us a second method to decompose the kernel into an isometric image of a model space. Furthermore we show that in general our two choices of decomposition of the kernel of a truncated Toeplitz operator yield different results.
In section 5 we study the kernel of the dual truncated Toeplitz operator. Dual truncated Toeplitz operators have been studied in both \cite{dual1,dual2} as well as many other sources. The kernel of a dual truncated Toeplitz operator has been studied in \cite{camara2019invertibility}. Although the domain of the dual truncated Toeplitz operator is not a subspace of $H^2$, we can still use recursive techniques similar to those used in previous sections to decompose the kernel into a fixed function multiplied by an $S^*$-invariant subspace.
In section 6 we study the truncated Toeplitz operator on the multiband space. We show every truncated Toeplitz operator on a multiband space is unitarily equivalent to an operator whose kernel is nearly $S^*$-invariant with defect 2. This allows us to apply our previously developed theory to give a decomposition for the kernel of the truncated Toeplitz operator on a multiband space in terms of $S^*$-invariant subspaces.
\subsection{Notations and convention}
\begin{itemize}
\item From section 3 onward we assume the symbol of any Toeplitz operator (denoted $g$) is bounded and hence the Toeplitz operator is bounded.
\item Throughout we let $\theta$ be an arbitrary inner function.
\item We use the notation $f^i / f^o$ to denote the inner/outer factor of $f$.
\item $GCD$ stands for greatest common divisor, and the greatest common divisor of two inner functions is always taken to be an inner function.
\item All limits are taken in the $H^2(\mathbb{D}, \mathbb{C}^n)$ sense unless otherwise stated.
\item All subspaces of $H^2(\mathbb{D}, \mathbb{C}^n )$ are assumed closed unless otherwise stated.
\end{itemize}
\section{Preliminary results}
\begin{thm}\label{suggestinvariance}
For any $g \in L^2$ we write $g = g^- + g^+$ where $g^- \in \overline{H^2_0}$ and $g^+ \in H^2$. If $\overline {g^-}$ is not cyclic for the backwards shift then there exists a $\tilde{g} \in L^2$ such that $A^{\theta}_g = A^{\theta}_{\tilde{g}}$ and $\tilde{g}^{-1} \in H^{\infty}$.
\end{thm}
\begin{proof}
Theorem 3.1 of \cite{sarason2007algebraic} shows that $A_{g_1}^{\theta} = A_{g_2}^{\theta}$ if and only if $g_1 - g_2 \in \overline{\theta H^2} + \theta H^2$, so we may initially assume without loss of generality that $g \in \overline{ K_{\theta}} \oplus K_{\theta}$. Using Lemma 2.1 in \cite{o2020toeplitz} we can construct an outer function $u $ such that $|u| = 2 |g| + 1$, furthermore $u \in L^2$ so $u \in H^2$. Then it follows that for any inner function $\alpha$
\begin{equation}\label{alphatothoeta}
g - \overline{ \alpha u}
\end{equation}
has the property that
$$
| g - \overline{ \alpha u} | \geqslant |u| - |g| = |g| + 1 > 0
$$
almost everywhere on $\mathbb{T}$, and so $(g - \overline{ \alpha u })^{-1} \in L^{\infty}$. Our construction of $u$ shows $|\frac{1}{u}| \leqslant 1 $ and, as the reciprocal of an outer function is outer, we have that $\frac{1}{u}$ is outer and in $L^{\infty}$, so $\frac{1}{u} \in H^{\infty}$. Furthermore by Corollary 4.9 in \cite{o2020toeplitz} we can say $\frac{1}{u} \in H^2$ is non-cyclic for $S^*$ and hence must lie in a model space $K_{\Phi}$. Define $\tilde{g} := ( g - \overline{ \Phi \theta u} )$; then as previously stated $\tilde{g}^{-1} \in L^{\infty}$. We now show $\tilde{g}^{-1} = \sum_{k=0}^{\infty} (-1) g^k ( \Phi \theta \frac{1}{\overline{u}})^{k+1}$, where the limit is taken in the sense of uniform convergence. We write $\tilde{g}^{-1}_{N}$ for $\sum_{k=0}^{N} (-1) g^k ( \Phi \theta \frac{1}{\overline{u}})^{k+1}$; then $||\tilde{g}^{-1}_{N} - \tilde{g}^{-1}||_{\infty}$ is equal to $$
|| \tilde{g}^{-1} \tilde{g} ( \tilde{g}^{-1}_{N} - \tilde{g}^{-1} )||_{\infty} \leqslant ||\tilde{g}^{-1}||_{\infty} || \tilde{g} \tilde{g}^{-1}_{N} - 1 ||_{\infty} \leqslant ||\tilde{g}^{-1}||_{\infty} ||g^{N+1} ( \Phi \theta \frac{1}{\overline{u}})^{N+1} ||_{\infty}.
$$
By our construction of $u$ this is less than $||\tilde{g}^{-1}||_{\infty} (\frac{1}{2})^{N}$, which clearly converges to 0. Now our choice of $\Phi $ ensures that $ \Phi \frac{1}{\overline{u}} \in H^{\infty}$; we also have $ \theta g \in H^2$. This means $(-1) g^k ( \Phi \theta \frac{1}{\overline{u}})^{k+1} \in H^2 $ and is bounded by 1, so must actually lie in $H^{\infty}$, so $\tilde{g}^{-1}$ (being the uniform limit of a sequence in $H^{\infty}$) must also be in $H^{\infty}$.
\end{proof}
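For the reader's convenience, the telescoping identity behind the convergence argument in the proof above can be made explicit (this expansion is ours; it uses only that $\overline{\Phi \theta} = (\Phi \theta)^{-1}$ almost everywhere on $\mathbb{T}$). Writing $q := g \Phi \theta \frac{1}{\overline{u}}$, each term of the partial sum satisfies
$$
\tilde{g} \cdot (-1) g^k \left( \Phi \theta \frac{1}{\overline{u}} \right)^{k+1} = q^{k} - q^{k+1},
$$
so summing over $k = 0 \hdots N$ telescopes to $\tilde{g} \tilde{g}^{-1}_{N} = 1 - q^{N+1}$, where $|q| = \frac{|g|}{2|g|+1} < \frac{1}{2}$ almost everywhere on $\mathbb{T}$ by the construction of $u$.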
\noindent Examining the first part of the above proof we can also deduce the following proposition.
\begin{prop}
For any $g \in L^2$ there exists a $\tilde{g} \in L^2$ such that $A_{g}^{\theta} = A_{\tilde{g}}^{\theta}$ and $\tilde{g}^{-1} \in L^{\infty}$.
\end{prop}
\begin{proof}
In \eqref{alphatothoeta} if we set $\alpha$ to equal $\theta$, keep our construction of $u$ the same and define $\tilde{g} = g - \overline{\alpha u } $ then $A_{g}^{\theta} = A_{\tilde{g}}^{\theta}$. Furthermore the computation immediately after \eqref{alphatothoeta} shows $\tilde{g}^{-1} \in L^{\infty}$.
\end{proof}
This has an interesting relation to Sarason's question posed in \cite{sarason2007algebraic}; which is whether every bounded truncated Toeplitz operator has a bounded symbol. This was first shown to have a negative answer as Theorem 5.3 in \cite{baranov2010bounded}, and further results in \cite{baranov2010symbols} characterise the inner functions $\theta$ which have the property that every bounded truncated Toeplitz operator on $K_{\theta}$ has a bounded symbol.
These results suggest that under certain circumstances $\ker A_g^{\theta}$ may be a nearly invariant subspace with a finite defect. This is because $f \in \ker A_g^{\theta}$ if and only if $f \in K_{\theta}$ and
$$
gf \in \overline{H^2_0} \oplus \theta H^2,
$$
so if $f(0) = 0$ and $f \in \ker A_g^{\theta}$ then we must have
$$
\frac{gf}{z} \in \overline{H^2_0} + { \rm span} \{ S^* ( \theta ) \} + \theta H^2.
$$
This may lead us to believe that $\ker A_g^{\theta}$ is a nearly $S^*$-invariant subspace with a defect given by $g^{-1} { \rm span} \{ S^* ( \theta ) \}$, but the issue here is $g^{-1} S^* (\theta)$ need not necessarily lie in $K_{\theta}$ or even $H^2$. Theorem \ref{suggestinvariance} shows us that under some weak restrictions we can choose our non-unique symbol $g$ so that $g^{-1} S^* ( \theta ) \in H^2$, but to fully understand $\ker A_g^{\theta}$ as a nearly invariant subspace with a defect we must study vector-valued nearly invariant subspaces with a defect.
\section{Vector-valued nearly invariant subspaces with a defect}
Let $M \subseteq H^2(\mathbb{D},\mathbb{C}^n)$ be a nearly invariant subspace for the backwards shift with a finite defect space $D$ and let $\dim D = m$. If not all functions in $M$ vanish at $0$ then we define $ W := M \ominus (M \cap zH^2( \mathbb{D},\mathbb{C}^n)) $ and Corollary 4.3 in \cite{MR2651921} shows that $r := \dim W \leqslant n$; in this case we let $W_1 \hdots W_r$ be an orthonormal basis of $W$. For $i = 1 \hdots n $ we let $P_{i}: H^2( \mathbb{D}, \mathbb{C}^n) \to H^2( \mathbb{D}, \mathbb{C}^i )$ be the projection on to the first $i$ coordinates.
\begin{thm}\label{main}
For any $i \in \{ 1 \hdots n \}$, $M_i := P_{i}( M)$ is a (not necessarily closed) nearly invariant subspace with a defect space $ \left( \frac{ { \rm span} \{ { P_{i}(W_1), \hdots P_{i}(W_r)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^i) \right) + P_i (D) $.
\end{thm}
\begin{proof}
We first consider the case when not all functions in $M$ vanish at 0. Let $f_i \in M_i$, then $f_i$ is the first $i$ entries of some $F \in M$. We write $F$ as
$$
F = a_1 W_1 + \hdots + a_r W_r + F_1,
$$
where $a_1 \hdots a_r \in \mathbb{C}$ and $F_1 \in M \cap z H^2(\mathbb{D},\mathbb{C}^n)$. So if $f_i(0) $ is the zero vector, then since $F_1 (0)$ is zero, $P_i ( a_1 W_1 + \hdots + a_r W_r) $ must vanish at $0$. So
$$
\frac{f_i}{z} - \frac{P_i ( a_1 W_1 + \hdots + a_r W_r)}{z} = P_i \left( \frac{ F_1}{z} \right) \in M_i + P_{i}(D),
$$
which means
$$
\frac{f_i}{z} \in M_i + \left( \frac{ { \rm span} \{ { P_{i}(W_1), \hdots P_{i}(W_r)} \}}{z} \cap H^2 \right) + P_i (D).
$$
In the case when all functions in $M$ vanish at $0$ then $W = \{ 0 \}$ and we would just have $\frac{F}{z} \in M + D$, so $\frac{f_i}{z} \in M_i + P_i (D)$.
\end{proof}
\begin{rem}
If $W = \{ 0 \}$ we can interpret $ \left( \frac{ { \rm span} \{ { P_{i}(W_1), \hdots P_{i}(W_r)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^i) \right)$ to be the zero subspace.
\end{rem}
\begin{cor}\label{main2}
With the same assumptions as in Theorem \ref{main}, if $m = 0$ i.e. if $M$ is a nearly $S^*$-invariant subspace, then $M_i$ is a (not necessarily closed) nearly $S^*$-invariant subspace with a defect space $ \left( \frac{ { \rm span} \{ { P_{i}(W_1), \hdots P_{i}(W_r)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^i) \right) $.
\end{cor}
To further build on this result we will now give a Hitt style decomposition for a vector-valued nearly invariant subspace with a finite defect. This style of decomposition was first introduced by Hitt in \cite{hitt1988invariant} when he decomposed the nearly $S^*$-invariant subspaces. This was then generalised to the vectorial case as Corollary 4.5 in \cite{MR2651921}. This style of proof was then adapted to produce a similar result for the (scalar) nearly invariant subspace with a defect, which is Theorem 3.2 in \cite{chalendar2019beurling}.
\vskip 0.5cm
For a Hilbert space $\mathcal{H}$ and $x, y \in \mathcal{H}$ we define $x\otimes y (f) = \langle f, y \rangle x$. We say an operator $T$ on $\mathcal{H}$ belongs to the class $C_{.0}$ if for all $x \in \mathcal{H}, \lim_{n \to \infty} ||(T^*)^n x || = 0.$
Consider a subspace $M$ which is nearly $S^*$-invariant with defect 1, so that $D= { \rm span} \{ e_1 \}$, say, where $||e_1 || =1.$
Suppose first that not all functions in $M$ vanish at $0$, so that $1 \leqslant r = \dim W \leqslant n$. Let $F_0$ be the matrix with columns $W_1, \hdots, W_r$, and let $P_{W}$ be the orthogonal projection onto $W$. For each $F \in M$ we may write
$$
F = P_{W}(F) + F_1 = F_0 \begin{pmatrix}
a_0^1 \\
\vdots \\
a_0^r \\
\end{pmatrix} + F_1 .
$$
Now as $F_1 (0) = 0$ we have $S^* (F_1) = G_1 + \beta_1 e_1$, where $G_1 \in M$ and $\beta_1 \in \mathbb{C}$. Thus
$$
F(z) = F_0(z) A_0 + z G_1(z) + z \beta_1 e_1(z) ,
$$
where $A_0 = \begin{pmatrix}
a_0^1 \\
\vdots \\
a_0^r \\
\end{pmatrix}$. Moreover since the family $\{ W_i \}_{i=1 \hdots r}$ forms an orthonormal basis of $W$, we obtain the following identity of norms:
$$
||F||^2 = ||F_0 A_0 ||^2 + || F_1 ||^2 = ||A_0||^2 + ||G_1||^2 + |\beta_1|^2.
$$
We may now repeat this process on $G_1$ to obtain $G_1 = P_{W}(G_1) + F_2$, and $S^* (F_2) = G_2 + \beta_2 e_1$, so $G_1 = F_0 A_1 + zG_2 + z \beta_2 e_1$. We iterate this process to obtain
\begin{equation}\label{star}
F(z) = F_0(z) ( A_0 + A_1 z + \hdots A_{n-1} z^{n-1} ) + z G_n (z) + \left(\beta_1 z + \hdots + \beta_n z^n \right)e_1(z) ,
\end{equation}
where
$$
||F||^2 = \sum_{k=0}^{n-1}||A_k||^2 + ||G_n||^2 + \sum_{k=1}^{n} |\beta_k|^2 .
$$
We now argue $||G_n|| \to 0$ as $n \to \infty$. We can write $G_n = P_1 S^* P_2 (G_{n-1})$, where $P_1$ is the projection with kernel $\langle e_1 \rangle$ and $P_2$ is the projection with kernel $ { \rm span} \{ W_1 \hdots W_r \}$. For all $n \geqslant 1$ we may write $G_{n+1} = P_1 R^{n-1} ( S^* P_2 ( G_1 ) )$, where $R= S^* P_2 P_1$ and so
\begin{equation}\label{limit}
||G_{n+1} || \leqslant ||P_1|| ||R^{n-1} (S^* P_2 (G_1))||.
\end{equation}
As $e_1$ is orthogonal to $W$ we have
$$P_2 P_1 = P_1 P_2 = Id - e_1 \otimes e_1 - \sum_{j=1}^{r} W_j \otimes W_j,$$
and so the adjoint of $R$ is
$$
P_1 P_2 S = S - e_1 \otimes S^*(e_1) - \sum_{j=1}^{r} W_j \otimes S^*(W_j).
$$
We now apply the second assertion of Proposition 2.1 from \cite{MR2651921} to show the adjoint of $R$ is of class $C_{.0}$, and so $R^{n-1}$ applied to $S^* P_2 (G_1)$ converges to $0$; now from \eqref{limit} we see $||G_{n+1}|| \to 0$. As a consequence taking limits in \eqref{star} we may write
$$
F(z) = \lim_{n \to \infty} \left( F_0(z) ( A_0 + A_1 z + \hdots A_{n-1} z^{n-1} ) + (\beta_1 z + \hdots + \beta_n z^n )e_1(z) \right).
$$
We denote $a_n(z) = F_0(z) \left( A_0 + A_1 z + \hdots + A_{n-1} z^{n-1} \right)$ and $a_{\infty}(z) = F_0 \left( \sum_{k=0}^{\infty} A_k z^k \right)$, where $ \sum_{k=0}^{\infty} A_k z^k $ is taken in the $H^2(\mathbb{D}, \mathbb{C}^r)$ sense (it converges there by the equality of norms given immediately after \eqref{star}). Then in the $H^1 ( \mathbb{D}, \mathbb{C}^n )$ norm we must have
$$
|| a_n - a_{\infty} || = || F_0 \sum_{k=n}^{\infty} A_k z^k || \leqslant ||W_1 \sum_{k=n}^{\infty} a_k^1 z^k || + \hdots + ||W_r \sum_{k=n}^{\infty} a_k^r z^k ||.
$$
For each $i \in \{ 1, \hdots, r \}$ we define $C_i$ to equal the maximum of the $H^2$ norms of the coordinates of $W_i$, multiplied by $n$; applying Hölder's inequality in each coordinate then gives
$$
||W_i \sum_{k=n}^{\infty} a_k^i z^k ||_{H^1(\mathbb{D}, \mathbb{C}^n)} \leqslant C_i || \sum_{k=n}^{\infty} a_k^i z^k ||_{H^2(\mathbb{D})} \to 0.
$$
Thus $ a_n \to a_{\infty}$ in the $H^1 ( \mathbb{D}, \mathbb{C}^n )$ norm, and a similar computation shows $\left(\beta_1 z + \hdots + \beta_n z^n \right)e_1(z)$ converges to $(\sum_{k=1}^{\infty}\beta_k z^k ) e_1$ in the $H^1 ( \mathbb{D}, \mathbb{C}^n )$ norm. Since $||G_n|| \to 0$, letting $n \to \infty$ in \eqref{star} shows that $F$ must be equal to
$$
F(z) = F_0 \left( \sum_{k=0}^{\infty} A_k z^k \right) + \left( \sum_{k=1}^{\infty} \beta_k z^k \right) e_1,
$$
and furthermore by taking limits in the equality of norms immediately after \eqref{star} we know
\begin{equation}\label{isom}
||F||^2 = \sum_{k=0}^{\infty}||A_k||^2 + \sum_{k=1}^{\infty}|\beta_k|^2 .
\end{equation}
We may alternatively express this as saying $F \in M$ if and only if
\begin{equation}\label{start}
F(z) = F_0 k_0 + z k_1 e_1 ,
\end{equation}
where $(k_0, k_1)$ lies in a subspace $K$ of $H^2(\mathbb{D}, \mathbb{C}^r) \times H^2$, which we identify with $H^2(\mathbb{D}, \mathbb{C}^{r+1})$.
By virtue of \eqref{isom} we can see that $K$ is the image of an isometric mapping, and hence closed. We now argue that $K$ is invariant under the backwards shift (on $H^2(\mathbb{D}, \mathbb{C}^{r+1})$). Since in the algorithm we have $k_0 (0) = A_0$ and $k_1(0) = \beta_1$ we can write $F$ as
$$
F = F_0 A_0 + z F_0 S^*(k_0) + \beta_1 z e_1 + z^2 S^* (k_1) e_1 ,
$$
consequently
\begin{equation}\label{end}
F_0 S^*(k_0) + z S^* (k_1) e_1 = \frac{F - F_0 A_0 - \beta_1 z e_1}{z} = G_1 \in M.
\end{equation}
Conversely if
$$
M= \{ F_0 k_0 + z k_1 e_1: (k_0 , k_1) \in K \},
$$
is a closed subspace of $H^2(\mathbb{D}, \mathbb{C}^n)$, where $K$ is an $S^*$-invariant subspace of $H^2(\mathbb{D}, \mathbb{C}^{r+1})$, then $M$ is nearly $S^*$-invariant with defect 1. To show this we first need a lemma.
\begin{lem}\label{lemmaforlater}
$W_1(0), \hdots, W_r(0)$ are linearly independent in $\mathbb{C}^n$.
\end{lem}
\begin{proof}
If $W_k(0) = \sum_{i \neq k} \lambda_i W_i(0)$ for some scalars $\lambda_i$, then $W_k - \sum_{i \neq k} \lambda_i W_i$ would vanish at $0$ and therefore lie in $M \cap zH^2(\mathbb{D}, \mathbb{C}^n )$. But this element also lies in $W$, which is orthogonal to $M \cap zH^2(\mathbb{D}, \mathbb{C}^n)$, so it must be $0$, contradicting the linear independence of $W_1, \hdots, W_r$.
\end{proof}
\noindent If $F \in M$ and $F(0)=0$ then we must have that $F_0 (0) k_0 (0)$ is equal to the zero vector. We now append $n-r$ vectors $X_1, \hdots, X_{n-r}$, chosen so that $W_1(0), \hdots, W_r(0), X_1, \hdots, X_{n-r}$ are linearly independent, as extra columns to the matrix $F_0(0)$ to obtain the invertible matrix
$$
F_0^{'}(0) = [W_1(0) , \hdots, W_r(0), X_1 , \hdots, X_{n-r}].
$$
We now add $n-r$ extra $0$'s to the end of the column vector $k_0 (0)$ and label this $k_0^{'}(0)$. As $F_0 (0) k_0 (0)$ is equal to the zero vector, $F_0^{'}(0) k_0^{'}(0)$ must also be equal to the zero vector. Applying the inverse of $F_0^{'}(0)$ shows that $k_0^{'}(0)$ is the zero vector, and hence $k_0(0)$ must be zero. This allows us to write
$$
S^* (F) = F_0 \frac{k_0}{z} + k_1 e_1 = F_0 \frac{k_0}{z} + z S^* k_1 e_1 + k_1(0) e_1,
$$
and as $K$ is $S^*$-invariant this is clearly an element of $M \oplus { \rm span} \{ e_1 \} $.
\vskip 0.5cm
If all functions in $M$ vanish at $0$ then there is no non-trivial reproducing kernel at $0$, but we may now write
$$
F(z) = z \left( G_1 (z) + \beta_1 e_1 (z) \right),
$$
with $G_1 \in M$ and $\beta_1 \in \mathbb{C}$, and furthermore
$$
||F||^2 = ||G_1||^2 + |\beta_1|^2.
$$
We can then iterate on $G_1$ as we have previously done to obtain
$$
F(z) = \beta_1 z e_1 + \beta_2 z^2 e_1 + \hdots.
$$
For a general finite defect $m$ the analogous calculations produce the following result.
\begin{thm}\label{vector near invariant with defect}
Let $M$ be a closed subspace that is nearly $S^*$-invariant with a finite defect $m$. Then:
\begin{enumerate}
\item In the case where there are functions in $M$ that do not vanish at $0$,
$$
M = \{ F : F(z) = F_0 (z) k_0(z) + z \sum_{j=1}^{m} k_j(z) e_j (z) : ( k_0, \hdots ,k_m) \in K \} ,
$$
where $F_0$ is the matrix with each column being an orthonormal element of $W$, $\{ e_1 , \hdots e_m \}$ is any orthonormal basis for $D$, $k_0 \in H^2(\mathbb{D}, \mathbb{C}^r)$ (where $r = \dim W$), $k_1, \hdots k_m \in H^2$, and $K \subseteq H^2 ( \mathbb{D} , \mathbb{C}^{(r+m)})$ is a closed $S^*$-invariant subspace. Furthermore $ ||F||^2 = \sum_{j=0}^m ||k_j||^2$.
\item In the case where all functions in $M$ vanish at $0$,
$$
M = \{ F : F(z) = z \sum_{j=1}^{m} k_j(z) e_j(z) : (k_1, \hdots, k_m) \in K \},
$$
with the same notation as in 1, except that $K$ is now a closed $S^*$-invariant subspace of $H^2( \mathbb{D}, \mathbb{C}^m )$, and $||F||^2 = \sum_{j=1}^{m} ||k_j||^2$.
\end{enumerate}
Conversely if a closed subspace $M \subseteq H^2(\mathbb{D}, \mathbb{C}^n)$ has a representation as in 1 or 2, then it is a nearly $S^*$-invariant subspace with defect $m$.
\end{thm}
\begin{rem}
The above Theorem was also independently proved in \cite{chattopadhyay2020invariant}.
\end{rem}
\section{Application to truncated Toeplitz operators}
Throughout this section our symbol $g$ is bounded and so the truncated Toeplitz operator $A_g^{\theta}: K_{\theta} \to K_{\theta}$ is defined by
$$
A_g^{\theta} (f) = P_{\theta}(gf),
$$
where $P_{\theta}$ is the orthogonal projection $L^2 \to K_{\theta}$.
It was originally observed in \cite{MR3398735} that the kernel of a truncated Toeplitz operator is the first coordinate of the kernel of the matricial Toeplitz operator with symbol
$$
G =
\begin{pmatrix}
\overline{\theta} & 0 \\
g & \theta \\
\end{pmatrix}.
$$
Scalar-type Toeplitz kernels (first introduced in \cite{scalartype}) are vector-valued Toeplitz kernels which can be expressed as the product of a space of scalar functions by a fixed vector function. A maximal function for $\ker T_G$ is an element $f \in \ker T_G$ such that if $f \in \ker T_H$ for any other bounded matricial symbol $H$, then $\ker T_G \subseteq \ker T_H$. By Corollary 3.9 in \cite{scalartype}, $\ker T_G$ is of scalar type; it is also easily checked that $\ker T_G$ is not shift invariant, and so by Theorem 3.7 in \cite{scalartype} $\ker T_G$ must have a maximal function. Now by Theorem 3.10 of \cite{o2020toeplitz} we can deduce that $ W = \ker T_G \ominus (\ker T_G \cap z H^2( \mathbb{D},\mathbb{C}^2))$ has dimension 1. If we denote by $\begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix}
$
the normalised element of $W$, then using Corollary 4.5 from \cite{MR2651921} we can write
$$
\ker T_G = \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} K_{z \Phi},
$$
where $ \Phi $ is an inner function. We now can write
\begin{equation}\label{matrixtotruncated}
\ker A_g^{\theta} = w_1 K_{ z \Phi}.
\end{equation} We can describe $\Phi$ with the following proposition.
\begin{prop}\label{associated}
When $
\ker T_G = \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} K_{z \Phi},
$ $\Phi $ is the unique (up to multiplication by a unimodular constant) inner function for which there exists $p_1, p_2 \in H^2$ such that
$$
G \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi = \begin{pmatrix}
\overline{z p_1} \\
\overline{z p_2} \\
\end{pmatrix},
$$
and $GCD( p_1^i , p_2^i ) = 1$, where $p^i$ denotes the inner factor of $p$.
\end{prop}
\begin{proof}
We first show that, up to multiplication by a unimodular constant, there can only be one inner function $\Phi$ satisfying
$$
G \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi = \begin{pmatrix}
\overline{z p_1} \\
\overline{z p_2} \\
\end{pmatrix},
$$
where $GCD( p_1^i , p_2^i ) = 1$.
Suppose there are two inner functions $ \Phi_1 , \Phi_2$ such that
$$
G \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi_1 = \begin{pmatrix}
\overline{z p_1} \\
\overline{z p_2} \\
\end{pmatrix},
$$
and
$$
G \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi_2 = \begin{pmatrix}
\overline{z q_1} \\
\overline{z q_2} \\
\end{pmatrix},
$$
where both $GCD( p_1^i , p_2^i ) = 1$ and $GCD( q_1^i , q_2^i ) = 1$. This would then imply that
$$
\overline{\Phi_1} \begin{pmatrix}
\overline{z p_1} \\
\overline{z p_2} \\
\end{pmatrix} = \overline{\Phi_2} \begin{pmatrix}
\overline{z q_1} \\
\overline{z q_2} \\
\end{pmatrix},
$$
and so $(\Phi_1 p_1)^i =(\Phi_2 q_1)^i$ and $(\Phi_1 p_2)^i =(\Phi_2 q_2)^i$. By assumption we have $GCD( p_1^i , p_2^i )$ $= 1$ so $GCD( (\Phi_1 p_2)^i , (\Phi_1 p_1)^i ) = \Phi_1$, but substituting $(\Phi_1 p_1)^i$ for $(\Phi_2 q_1)^i$ we obtain $$GCD ( (\Phi_1 p_2)^i , (\Phi_2 q_1)^i )= \Phi_1, $$ and so $\Phi_1 $ divides $\Phi_2$. A similar computation shows $\Phi_2$ divides $\Phi_1$, and so $\Phi_1$ must be a unimodular constant multiple of $\Phi_2$. We now show that $\Phi$ is such that
$$
G \begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi = \begin{pmatrix}
\overline{z p_1} \\
\overline{z p_2} \\
\end{pmatrix},
$$
with $GCD( p_1^i , p_2^i ) = 1$. If it is the case that $ \alpha = GCD( p_1^i , p_2^i ) \neq 1$ then it would follow that $\begin{pmatrix}
w_1 \\
w_2 \\
\end{pmatrix} \Phi \alpha \in \ker T_G$, which would be a contradiction as $\Phi \alpha \notin K_{z \Phi}$.
\end{proof}
It is easily checked that $\ker T_G$ is nearly $S^*$-invariant, and because $\ker A_g^{\theta} = P_1 (\ker T_G )$ we can use Corollary \ref{main2} to deduce that the kernel of a truncated Toeplitz operator is nearly $S^*$-invariant with a defect space given by ${ \rm span} \{\frac{w_1}{z}\} \cap H^2$. With this information we can use the following result, given as Theorem 3.2 in \cite{chalendar2019beurling} (or equivalently Theorem \ref{vector near invariant with defect} with $n=1$), to study $\ker A_g^{\theta}$.
\begin{thm}\label{4.2}
Let $M \subseteq H^2$ be a closed subspace that is nearly $S^*$-invariant with a finite defect $m$. Then:
\begin{enumerate}
\item In the case where there are functions in $M$ that do not vanish at $0$,
$$
M = \{ f : f(z) = f_0 (z) k_0(z) + z \sum_{j=1}^{m} k_j(z) e_j (z) : ( k_0, \hdots ,k_m) \in K \} ,
$$
where $f_0$ is the normalised reproducing kernel for $M$ at $0$, $\{ e_1 , \hdots e_m \}$ is any orthonormal basis for $D$, and $K$ is a closed $S^*$-invariant subspace of $H^2 ( \mathbb{D} , \mathbb{C}^{(m+1)})$. Furthermore $ ||f||^2 = \sum_{j=0}^m ||k_j||^2$.
\item In the case where all functions in $M$ vanish at $0$,
$$
M = \{ f : f(z) = z \sum_{j=1}^{m} k_j(z) e_j(z) : (k_1, \hdots, k_m) \in K \},
$$
with the same notation as in 1, except that $K$ is now a closed $S^*$-invariant subspace of $H^2( \mathbb{D}, \mathbb{C}^m )$, and $||f||^2 = \sum_{j=1}^{m} ||k_j||^2$.
\end{enumerate}
Conversely if a closed subspace $M \subseteq H^2$ has a representation as in 1 or 2, then it is a nearly $S^*$-invariant subspace with defect $m$.
\end{thm}
To use Theorem \ref{4.2} we have to assume that our defect space is orthogonal to $\ker A_g^{\theta}$; we consider two separate cases. We first assume that all functions in $\ker A_g^{\theta}$ vanish at 0. We set $O := \ker A_g^{\theta} + { \rm span} \{\frac{w_1}{z}\} $ and $E := O \ominus \ker A_g^{\theta}$, and we let $e := P_{E}(\frac{w_1}{z})$, so that $e$ is orthogonal to $\ker A_g^{\theta}$. In this construction $e \neq 0$, as $e = 0$ would imply $\frac{w_1}{z} \in \ker A_g^{\theta} = w_1 K_{ z \Phi}$, which is clearly a contradiction. Theorem \ref{4.2} now yields
$$
\ker A_g^{\theta} = e z K_{ \Psi},
$$
where multiplication by $e z$ is an isometry from $K_{\Psi}$ to $\ker A_g^{\theta}$. This expression for $\ker A_g^{\theta}$ is more familiar than $w_1 K_{z \Phi} $, as in this case the multiplication is an isometry rather than just a contraction. We can also relate this expression to nearly $S^*$-invariant subspaces. If we let $n$ be the greatest natural number such that $\frac{e}{z^n} \in H^2$, then $\frac{\ker A_g^{\theta}}{z^{n+1}} = \frac{e}{z^n} K_{\Psi}$; as $\frac{e}{z^n}(0) \neq 0 $, this shows $\frac{\ker A_g^{\theta}}{z^{n+1}}$ is a nearly $S^*$-invariant subspace. We can conclude the following theorem in this case.
\begin{thm}
If $n$ is the greatest natural number such that $\ker A_g^{\theta} \subseteq z^n H^2$, then $\frac{ \ker A_g^{\theta}}{z^n}$ is a nearly $S^*$-invariant subspace.
\end{thm}
We now turn our attention to the case when not all functions in $\ker A_g^{\theta}$ vanish at 0. In this case it must also follow that $w_1(0) \neq 0$, as otherwise every function in $w_1 K_{z \Phi}$ would vanish at $0$; so using Corollary \ref{main2} the defect space for $\ker A_g^{\theta}$ is $\{ 0 \}$, and we can conclude the following theorem.
\begin{thm}
If $\ker A_g^{\theta}$ contains functions which do not vanish at 0 then it is nearly $S^*$-invariant.
\end{thm}
When $\ker A_g^{\theta}$ is nearly $S^*$-invariant we may proceed by using Proposition 3 of the paper of Hitt \cite{hitt1988invariant} to show $\ker A_g^{\theta} = u K_{z \psi}$ where $u \in \ker A_g^{\theta} \ominus ( \ker A_g^{\theta} \cap z H^2 )$ is an isometric multiplier. As was noted in \cite{hartmann2003extremal} we can call $\psi$ the associated inner function to $u$, and it is easily checked (similar to the approach in Proposition \ref{associated}) this is an inner function such that $ g u \psi = \overline{ z p_1 } + \theta p_2$ where $p_1$ is outer.
In fact using \eqref{matrixtotruncated} we can view these two theorems as specialisations of the following theorem.
\begin{thm}\label{twoways}
If $f \in H^2$ and $I$ is an inner function such that $f K_I$ is a closed subspace of $H^2$, then: if $f(0) \neq 0$, then $f K_I$ is a nearly invariant subspace; if $f(0) = 0$, then $f K_I$ is both a power of $z$ multiplied by a nearly invariant subspace and a nearly invariant subspace with the 1-dimensional defect space $\frac{f}{z} ( K_I \ominus (K_I \cap z H^2))$.
\end{thm}
\begin{proof}
The only non-trivial statement to prove is if $f(0) = 0$ then $f K_I$ is a nearly invariant subspace with a defect space $\frac{f}{z} ( K_I \ominus (K_I \cap z H^2))$, but this follows from
$$
\frac{f K_I}{z} \subseteq \frac{f}{z}( K_I \ominus (K_I \cap z H^2)) + f \left( \frac{K_I \cap z H^2}{z} \right) \subseteq \frac{f}{z}( K_I \ominus (K_I \cap z H^2)) + f K_I.
$$
\end{proof}
So under the assumptions that $f \in H^2$ and $I$ is an inner function such that $f K_I$ is a closed subspace of $H^2$, if $f(0) = 0$ then Theorem \ref{twoways} gives us two possible approaches to decomposing $f K_{I}$. \begin{enumerate}
\item Divide $f K_{I}$ by $z^n$, where $n \in \mathbb{N}$ is chosen such that $\frac{f}{z^n} (0) \neq 0$, then use the Hitt decomposition given in \cite{hitt1988invariant}. Then we could write $f K_I$ as $z^n u$ multiplied by some model space, where $u \in \frac{f K_I}{z^n} \ominus ( \frac{f K_I}{z^n} \cap z H^2 )$.
\item Use Theorem 3.2 in \cite{chalendar2019beurling} with $\frac{f}{z}( K_I \ominus (K_I \cap z H^2)) $ as the defect space. Then we could write $f K_I$ as $ z e $ multiplied by some model space, where $e$ is chosen to be an element of $\frac{f}{z}( K_I \ominus (K_I \cap z H^2)) + f K_I$ orthogonal to $ f K_I$.
\end{enumerate} In both of these cases we obtain a model space multiplied by an isometric multiplier.
Due to the similarities in the way these two decompositions are developed, one might expect that the two possible ways of decomposing $f K_{I}$ would actually yield the same result. We show by example that this is not the case, and that in general the two expressions differ.
\begin{exmp}
Let $g = \frac{1}{1 - \frac{z}{3}} (\overline{z}^3 + z^3)$ and let $\theta = z^4$; we first find $ \ker A_g^{\theta}$ using linear algebra techniques. With respect to the basis $ 1, z, z^2, z^3 $, $ A_g^{\theta}$ has the matrix representation
$$
\begin{pmatrix}
\frac{1}{3^3} & \frac{1}{3^2} & \frac{1}{3} & 1 \\
\frac{1}{3^4} & \frac{1}{3^3} & \frac{1}{3^2} & \frac{1}{3} \\
\frac{1}{3^5} & \frac{1}{3^4} & \frac{1}{3^3} & \frac{1}{3^2} \\
1 + \frac{1}{3^6} & \frac{1}{3^5} & \frac{1}{3^4} & \frac{1}{3^3} \\
\end{pmatrix},
$$
which has reduced row echelon form given by
$$
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 3 & 9 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}.
$$
The kernel of this matrix has a basis given by
$$
\begin{pmatrix}
0 \\
1 \\
- \frac{1}{3} \\
0 \\
\end{pmatrix}, \begin{pmatrix}
0 \\
0 \\
1 \\
- \frac{1}{3} \\
\end{pmatrix},
$$
and thus we can write $\ker A_g^{\theta} = z ( 1 - \frac{z}{3}) K_{z^2}$. We will now give two different decompositions of this kernel using Theorem \ref{twoways}.
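As a quick sanity check (this numerical verification is an illustration only, and not part of the paper), the matrix representation above and the claimed kernel can be confirmed with exact rational arithmetic in Python:

```python
from fractions import Fraction as F

# Matrix of A_g^theta in the basis 1, z, z^2, z^3, with exact rational entries.
M = [
    [F(1, 27),         F(1, 9),   F(1, 3),  F(1)    ],
    [F(1, 81),         F(1, 27),  F(1, 9),  F(1, 3) ],
    [F(1, 243),        F(1, 81),  F(1, 27), F(1, 9) ],
    [F(1) + F(1, 729), F(1, 243), F(1, 81), F(1, 27)],
]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Coefficient vectors of z(1 - z/3) and z^2(1 - z/3):
v1 = [F(0), F(1), F(-1, 3), F(0)]
v2 = [F(0), F(0), F(1),     F(-1, 3)]
assert all(c == 0 for c in matvec(M, v1))
assert all(c == 0 for c in matvec(M, v2))

# Row-reduce to confirm the rank is 2, so v1, v2 span the whole kernel.
A = [row[:] for row in M]
rank = 0
for col in range(4):
    piv = next((r for r in range(rank, 4) if A[r][col] != 0), None)
    if piv is None:
        continue
    A[rank], A[piv] = A[piv], A[rank]
    A[rank] = [x / A[rank][col] for x in A[rank]]
    for r in range(4):
        if r != rank:
            A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[rank])]
    rank += 1
assert rank == 2
```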
Let $f = z ( 1 - \frac{z}{3})$ and $K_{I} = K_{z^2}$, then $f K_{I} = z { \rm span} \{( 1 - \frac{z}{3}) , z ( 1 - \frac{z}{3}) \}$. We first use approach 1. It can be checked that
$$
\left( 1 - \frac{z}{3} \right) K_{z^2} \ominus \left( \left( 1 - \frac{z}{3} \right) K_{z^2} \cap z H^2 \right)
$$
has a normalised basis element given by $$ u = \frac{3 \sqrt{910}}{91}(1 - \frac{ 1}{30} z - \frac{1}{10} z^2) , $$ and so $f K_{I}$ can be written as $z u $ multiplied by some model space, which we will denote $K_{I_1}$. In order to find $I_1$ we must solve
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z u K_{I_1},
$$
but $ \frac{( 1 - \frac{z}{3})}{u}$ is a scalar multiple of $\frac{1}{1 + \frac{3z}{10}}$, so $K_{I_1}$ must be given by ${ \rm span} \{ \frac{1}{1 + \frac{3z}{10}} , \frac{z}{1 + \frac{3z}{10}} \}$, and therefore $I_1 = z\frac{z+ \frac{3}{10}}{1 + \frac{3z}{10}}$. So we conclude
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z \frac{3 \sqrt{910}}{91}(1 - \frac{1}{30} z - \frac{1}{10} z^2 ) K_{ z(\frac{z+ \frac{3}{10}}{1 + \frac{3z}{10}})},
$$
where multiplication by $z \frac{3 \sqrt{910}}{91}(1 - \frac{1}{30} z - \frac{1}{10} z^2 )$ is an isometry on the model space. This can be simplified to
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z ( 30 - z - 3 z^2) K_{z(\frac{z+ \frac{3}{10}}{1 + \frac{3z}{10}})},
$$
however in this case we no longer have the multiplication on the model space acting as an isometry.
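The computations in approach 1 can be verified directly; the following sketch (an illustration, not part of the argument) checks the factorisation $1 - \frac{z}{30} - \frac{z^2}{10} = (1 - \frac{z}{3})(1 + \frac{3z}{10})$ underlying the identification of $K_{I_1}$, together with the orthogonality and normalisation of $u$:

```python
from fractions import Fraction as F

def polymul(p, q):
    # multiply two polynomials given as coefficient lists (lowest degree first)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# 1 - z/30 - z^2/10 = (1 - z/3)(1 + 3z/10):
w = [F(1), F(-1, 30), F(-1, 10)]
assert polymul([F(1), F(-1, 3)], [F(1), F(3, 10)]) == w

# u is orthogonal in H^2 to z(1 - z/3), i.e. to z - z^2/3:
v = [F(0), F(1), F(-1, 3)]
assert sum(a * b for a, b in zip(w, v)) == 0

# with c = 3*sqrt(910)/91 we get ||u||^2 = c^2 * ||w||^2 = 1:
c2 = F(9 * 910, 91 ** 2)   # c^2, kept exact
assert c2 * sum(a * a for a in w) == 1
```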
Now we use approach 2. We must find a normalised element $e \in z ( 1 - \frac{z}{3}) K_{z^2} + { \rm span} \{ ( 1 - \frac{z}{3}) \}$, which is orthogonal to $z ( 1 - \frac{z}{3}) K_{z^2}$. This can be checked to be
$$
\sqrt{\frac{729}{74620}} (\frac{91}{9} - \frac{1}{27} z - \frac{1}{9}z^2 - \frac{1}{3}z^3 )
$$
which means $f K_I$ can also be written as $ze$ multiplied by some model space, which we will denote $K_{I_2}$. Now to find $I_2$ we must solve
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z e K_{I_2},
$$
Now $e$ is a scalar multiple of
$$
273 - z - 3z^2 - 9 z^3 = 3 \left( 1 - \frac{z}{3} \right) ( 9 z^2 + 30z + 91 ),
$$
and so $K_{I_2}$ must be ${ \rm span} \{ \frac{1}{9z^2 +30z + 91} , \frac{z}{9z^2 +30z + 91} \}$. We now aim to find the inner function $I_2$. We denote $A = \frac{1}{9z^2 +30z + 91}$ and $B = \frac{z}{9z^2 +30z + 91}$. As $A(0) = \frac{1}{91}$, we have $$S^* (A)(z) = \frac{A(z) - \frac{1}{91}}{z} = \frac{-9z - 30}{91 (9 z^2 + 30 z + 91)} = -\frac{30}{91} A - \frac{9}{91} B. $$ It is clear that $S^* (B) = A$. We now aim to find two eigenvectors of the backwards shift operator (these are necessarily Cauchy kernels) which lie in ${ \rm span} \{ A , B \} $. If we use $ A , B $ as a basis for ${ \rm span} \{ A , B \}$ then the matrix representation of the backwards shift operator is given by
$$
\begin{pmatrix}
-\frac{30}{91} & 1 \\
- \frac{9}{91} & 0 \\
\end{pmatrix}.
$$
This has eigenvalues $\frac{-15 \pm 3i \sqrt{66}}{91}$; we denote $\lambda_1 = \frac{-15 + 3i \sqrt{66}}{91}$ and $\lambda_2 = \frac{-15 - 3i \sqrt{66}}{91}$. The corresponding eigenvectors are given by $k_{\overline{\lambda_1}} = \frac{1}{1-\lambda_1 z}$ and $k_{\overline{\lambda_2}} = \frac{1}{1-\lambda_2 z}$, so as mentioned in the introduction $I_2 = (\frac{z- \overline{\lambda_1}}{1 - {\lambda_1}z})(\frac{z- \overline{\lambda_2}}{1 - {\lambda_2 }z})$. We can conclude
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z \sqrt{\frac{729}{74620}} (\frac{91}{9} - \frac{1}{27} z - \frac{1}{9}z^2 - \frac{1}{3}z^3) K_{(\frac{z- \overline{\lambda_1}}{1 - {\lambda_1}z})(\frac{z- \overline{\lambda_2}}{1 - {\lambda_2 }z})},
$$
where multiplication by $z \sqrt{\frac{729}{74620}} (\frac{91}{9} - \frac{1}{27} z - \frac{1}{9}z^2 - \frac{1}{3}z^3)$ is an isometry on the model space. Again we can simplify this to
$$
z ( 1 - \frac{z}{3}) K_{z^2} = z (273 - z - 3z^2 - 9z^3) K_{(\frac{z- \overline{\lambda_1}}{1 - {\lambda_1}z})(\frac{z- \overline{\lambda_2}}{1 - {\lambda_2 }z})},
$$
but in this expression we no longer have the multiplication on the model space acting as an isometry.
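The computations in approach 2 can likewise be verified; the sketch below (again an illustration only) checks the factorisation of $273 - z - 3z^2 - 9z^3$, the orthogonality and norm of the coefficient vector of $e$, the stated eigenvalues, and that $1/\lambda_1$ is a zero of $9z^2 + 30z + 91$, so that the Cauchy kernel $k_{\overline{\lambda_1}}$ does lie in ${\rm span}\{A, B\}$:

```python
import cmath
from fractions import Fraction as F

def polymul(p, q):
    # multiply two polynomials given as coefficient lists (lowest degree first)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# 273 - z - 3z^2 - 9z^3 = (3 - z)(9z^2 + 30z + 91):
e = [F(273), F(-1), F(-3), F(-9)]
assert polymul([F(3), F(-1)], [F(91), F(30), F(9)]) == e

# e is orthogonal in H^2 to z - z^2/3 and z^2 - z^3/3, and ||e||^2 = 74620:
for v in ([F(0), F(1), F(-1, 3), F(0)], [F(0), F(0), F(1), F(-1, 3)]):
    assert sum(a * b for a, b in zip(e, v)) == 0
assert sum(a * a for a in e) == 74620

# eigenvalues of [[-30/91, 1], [-9/91, 0]] are (-15 +/- 3i*sqrt(66))/91:
disc = cmath.sqrt((30 / 91) ** 2 - 4 * (9 / 91))
lam1 = (-30 / 91 + disc) / 2
lam2 = (-30 / 91 - disc) / 2
s66 = 66 ** 0.5
assert abs(lam1 - (-15 + 3j * s66) / 91) < 1e-12
assert abs(lam2 - (-15 - 3j * s66) / 91) < 1e-12

# 1/lam1 is a zero of 9z^2 + 30z + 91:
z0 = 1 / lam1
assert abs(9 * z0 ** 2 + 30 * z0 + 91) < 1e-9
```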
Thus approach 1 and approach 2 give different decompositions.
\end{exmp}
\section{Application to truncated Toeplitz operators on multiband spaces}
Truncated Toeplitz operators on multiband spaces are soon to be introduced in a publication which is currently in preparation by M.C. Câmara, R. O’Loughlin, and J.R. Partington. They are defined (on the unit circle) as follows. Let $g \in L^{\infty}$, let $\phi, \psi$ be unimodular functions in $L^{\infty}$ such that $\phi K_{\theta} \perp \psi K_{\theta} $, we define the multiband space $M := \phi K_{\theta} \oplus \psi K_{\theta}$. The truncated Toeplitz operator on $M$ denoted $A_g^{M} : M \to M$ is defined by
$$
A_g^M(f) = P_M ( gf),
$$
where $P_M$ is the orthogonal projection onto $M$. These operators have applications in speech processing, and as a special case, if we let $\phi = \overline{\theta}$ and $ \psi = \theta$, we recover a disc variation of the Paley--Wiener space.
We write $K_{\theta}(\mathbb{D}, \mathbb{C}^n) \subseteq H^2(\mathbb{D},\mathbb{C}^n)$ to mean the vectors of length $n$ with each coordinate lying in $K_{\theta}$. To study truncated Toeplitz operators on multiband spaces we first consider the truncated Toeplitz operator $A_G^{\theta}$ acting on $K_{\theta}(\mathbb{D},\mathbb{C}^2)$, where
$$
G =\begin{pmatrix}
g_{11} & g_{12} \\
g_{21} & g_{22} \\
\end{pmatrix},
$$
has each entry in $L^{\infty}$. Using the unitary map $U: M \to K_{\theta}(\mathbb{D}, \mathbb{C}^2)$ where $$U(\phi f_1 + \psi f_2 ) = \begin{pmatrix}
f_1 \\
f_2
\end{pmatrix},$$ one can show that any truncated Toeplitz operator on a multiband space is unitarily equivalent to $A_G^{\theta}$ for a certain choice of $G$. Thus we turn our attention to studying $ \ker A_{G}^{\theta}$.
If we define
$$
\mathcal{G} = \begin{pmatrix}
\overline{\theta} & 0 & 0 & 0 \\
0 & \overline{\theta} & 0 & 0 \\
g_{11} & g_{12} & \theta & 0 \\
g_{21} & g_{22} & 0 & \theta \\
\end{pmatrix},
$$
then it is easily checked that
$$
\begin{pmatrix}
p \\
q \\
r \\
s \\
\end{pmatrix} \in \ker T_{\mathcal{G}}
$$
if and only if $p, q \in K_{\theta} $ and
$$
\begin{pmatrix}
g_{11} & g_{12} \\
g_{21} & g_{22} \\
\end{pmatrix} \begin{pmatrix}
p \\
q \\
\end{pmatrix} + \theta \begin{pmatrix}
r \\
s \\
\end{pmatrix} \in \overline{H^2_0} \oplus \overline{H^2_0}.
$$
So $\begin{pmatrix}
p \\
q \\
\end{pmatrix} \in \ker A_{G}^{\theta}$, and likewise given $\begin{pmatrix}
p \\
q \\
\end{pmatrix} \in \ker A_{G}^{\theta}$ there exists $\begin{pmatrix}
r \\
s \\
\end{pmatrix} \in H^2$ with $\begin{pmatrix}
p \\
q \\
r \\
s \\
\end{pmatrix} \in \ker T_{\mathcal{G}}$. Keeping the same notation as in Theorem \ref{main}, we let $W = \ker T_{\mathcal{G}} \ominus ( \ker T_{\mathcal{G}} \cap z H^2(\mathbb{D},\mathbb{C}^4))$, and let $W_1, \hdots, W_r$ be an orthonormal basis for $W$; as previously mentioned, $r \leqslant 4$. Toeplitz kernels are nearly $S^*$-invariant, so by Corollary \ref{main2} we know $P_{2}(\ker T_{\mathcal{G}}) = \ker A_{G}^{\theta}$ is nearly $S^*$-invariant with defect space $\left( \frac{ { \rm span} \{ { P_{2}(W_1), \hdots P_{2}(W_r)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^2)\right)$. We now try to find the dimension of this defect space. For a set of functions $F$ we denote
$$
F(0) = \{ f(0) : f \in F \}.
$$
\begin{lem}\label{evaluating at 0}
$\dim \ker T_{\mathcal{G}}(0) = \dim W = \dim W(0) $.
\end{lem}
\begin{proof}
Lemma 3.9 in \cite{o2020toeplitz} shows that $\dim \ker T_{\mathcal{G}}(0) = \dim W$, Lemma \ref{lemmaforlater} shows that $W_1(0), \hdots, W_r(0)$ are linearly independent, and clearly $W_1(0), \hdots, W_r(0)$ span $W(0)$.
\end{proof}
We first consider the case when $\dim W = 4$, in this case by Lemma \ref{evaluating at 0} we have $W(0) = \mathbb{C}^4$. We have a correspondence between the matrix $[ W_1, W_2, W_3, W_4]$ and a $4$-by-$4$ matrix taking values in $\mathbb{C}$ given by
$$
[ W_1, W_2, W_3, W_4] \mapsto [ W_1(0), W_2(0), W_3(0), W_4(0)].
$$
We also know by Lemma \ref{evaluating at 0} that $ W_1(0), W_2(0), W_3(0), W_4(0)$ form a basis for $\mathbb{C}^4$, so there exists a sequence of column operations we can perform on $[ W_1(0), W_2(0), W_3(0), W_4(0)]$ which yields the identity matrix. If we perform the same column operations on $[ W_1, W_2, W_3, W_4]$ we will obtain a matrix
$$
[ \tilde{W_1}, \tilde{W_2}, \tilde{W_3}, \tilde{W_4}],
$$
which has the property that $[ \tilde{W_1}(0), \tilde{W_2}(0), \tilde{W_3}(0), \tilde{W_4}(0)]$ is equal to the identity matrix. The linear independence of $\tilde{W_1}(0), \tilde{W_2}(0), \tilde{W_3}(0), \tilde{W_4}(0)$ implies linear independence of $ \tilde{W_1}, \tilde{W_2}, \tilde{W_3}, \tilde{W_4}$, and so $ \tilde{W_1}, \tilde{W_2}, \tilde{W_3}, \tilde{W_4}$ span $W$. It is now clear that $\left( \frac{ { \rm span} \{ { P_{2}(W_1), \hdots P_{2}(W_4)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^2) \right)$ is given by $\frac{ { \rm span} \{ { P_{2}(\tilde{W_3}), P_{2}(\tilde{W_4})} \} }{z}$, and so when $\dim W = 4$, we have that $\ker A_{G}^{\theta}$ is nearly invariant with the 2-dimensional defect space $\frac{ { \rm span} \{ { P_{2}(\tilde{W_3}), P_{2}(\tilde{W_4})} \} }{z}$.
We now consider the case when $\dim W =3$, in this case $W(0)$ is a 3-dimensional subspace of $\mathbb{C}^4$. We again have a correspondence
$$
[ W_1, W_2, W_3] \mapsto [ W_1(0), W_2(0), W_3(0)].
$$
In this case we can perform column operations on $[ W_1(0), W_2(0), W_3(0)]$ to obtain a matrix which takes one of the following four forms (here $x_1, x_2, x_3$ denote unspecified values in $\mathbb{C}$):
$$
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
x_1 & x_2 & x_3 \\
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
x_1 & x_2 & x_3 \\
0 & 0 & 1 \\
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 \\
x_1 & x_2 & x_3 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{pmatrix},
\begin{pmatrix}
x_1 & x_2 & x_3 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{pmatrix}.
$$
As in the previous case, if we perform these same column operations (which yield one of the above forms) on the matrix $[ W_1, W_2, W_3 ]$, we will obtain a matrix
$$
[ \tilde{W_1}, \tilde{W_2}, \tilde{W_3}].
$$
By the same arguments made previously we can deduce that $\tilde{W_1}, \tilde{W_2}, \tilde{W_3}$ span $W$. This means $\left( \frac{ { \rm span} \{ { P_{2}(W_1), \hdots P_{2}(W_3)} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^2) \right)$ is contained in $\frac{ { \rm span} \{ { P_{2}(\tilde{W_2}), P_{2}(\tilde{W_3})} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^2)$, and so when $\dim W = 3$, we have that $\ker A_{G}^{\theta}$ is nearly invariant with an (at most) 2-dimensional defect space $$\frac{ { \rm span} \{ { P_{2}(\tilde{W_2}), P_{2}(\tilde{W_3})} \} }{z} \cap H^2(\mathbb{D}, \mathbb{C}^2).$$
In the case when $\dim W \leqslant 2$ it is clear from Corollary \ref{main2} that the defect space of $\ker A_{G}^{\theta}$ has dimension at most 2. So we can conclude the following theorem.
\begin{thm}\label{multiband defect}
$\ker A_{G}^{\theta}$ is a nearly $S^*$-invariant subspace with defect 2.
\end{thm}
We now give an example to show that in general $2$ is the smallest possible dimension of the defect space, i.e. it is not true that for all inner functions $\theta$ and matrix symbols $G$ the kernel $\ker A_{G}^{\theta}$ has a defect space of dimension at most $1$.
\begin{exmp} Following the unitary equivalence we mentioned earlier we consider an operator of the form
$$
A_{G}^{\theta} = \begin{pmatrix}
A_{g}^{\theta} & A_{g \overline{\phi} \psi}^{\theta} \\
A_{g \overline{\psi} \phi}^{\theta} & A_g^{\theta} \\
\end{pmatrix},
$$
where $g \in L^{\infty}$, $\theta$ is an inner function and $\phi , \psi \in L^{\infty}$ are unimodular functions such that $\phi K_{\theta} \perp \psi K_{\theta}$. These conditions ensure that $A_G^{\theta}$ is indeed unitarily equivalent to a truncated Toeplitz operator on a multiband space. Let $\theta = z^2$, $\phi = z$, $\psi = z^4$, $g = 2 \overline{z}^2 + z + 2 z^4$. We identify the basis of $K_{\theta}(\mathbb{D}, \mathbb{C}^2)$ with a basis of $\mathbb{C}^4$ in the following way: $\begin{pmatrix}
1 \\
0 \\
\end{pmatrix} \mapsto \begin{pmatrix}
1 \\
0 \\
0 \\
0 \\
\end{pmatrix} $,$\begin{pmatrix}
z \\
0 \\
\end{pmatrix} \mapsto \begin{pmatrix}
0 \\
1 \\
0 \\
0 \\
\end{pmatrix} $, $\begin{pmatrix}
0 \\
1 \\
\end{pmatrix} \mapsto \begin{pmatrix}
0 \\
0 \\
1 \\
0 \\
\end{pmatrix} $, $\begin{pmatrix}
0 \\
z \\
\end{pmatrix} \mapsto \begin{pmatrix}
0 \\
0 \\
0 \\
1 \\
\end{pmatrix} $, then $A_G^{\theta}$ has the following matrix representation
$$
\begin{pmatrix}
0 & 0 & 0 & 0 \\
1 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 \\
2 & 0 & 1 & 0 \\
\end{pmatrix}.
$$
Thus $\ker A_{G}^{\theta} $ is given by the span of $\begin{pmatrix}
z \\
0 \\
\end{pmatrix}$ and $\begin{pmatrix}
0 \\
z \\
\end{pmatrix}$, which is clearly nearly $S^*$-invariant with defect 2.
\end{exmp}
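The matrix representation in the example above is easy to check numerically. The following sketch (plain Python, written for this document rather than taken from any accompanying code) verifies that the kernel of the $4 \times 4$ matrix is exactly the span of the coordinate vectors corresponding to $\begin{pmatrix} z \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ z \end{pmatrix}$.

```python
# Matrix of A_G^theta in the ordered basis (1,0), (z,0), (0,1), (0,z).
M = [
    [0, 0, 0, 0],
    [1, 0, 2, 0],
    [0, 0, 0, 0],
    [2, 0, 1, 0],
]

def apply(M, v):
    """Matrix-vector product over the 4-dimensional coordinate space."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# The 2nd and 4th basis vectors, i.e. (z,0) and (0,z), are annihilated.
assert apply(M, [0, 1, 0, 0]) == [0, 0, 0, 0]
assert apply(M, [0, 0, 0, 1]) == [0, 0, 0, 0]

# The two nonzero columns, (0,1,0,2) and (0,2,0,1), are linearly
# independent, so the rank is 2 and the kernel is exactly 2-dimensional.
assert apply(M, [1, 0, 0, 0]) == [0, 1, 0, 2]
assert apply(M, [0, 0, 1, 0]) == [0, 2, 0, 1]
```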
For a multiband space $M := \phi K_{\theta} \oplus \psi K_{\theta}$, using our unitary equivalence $U$, we can write
$$
\ker A_g^{M} = U^* \ker \begin{pmatrix}
A_{g}^{\theta} & A_{g \overline{\phi} \psi}^{\theta} \\
A_{g \overline{\psi} \phi}^{\theta} & A_g^{\theta} \\
\end{pmatrix}.
$$
Combining this with Theorem \ref{multiband defect} and Theorem \ref{vector near invariant with defect} gives a decomposition for $\ker A_g^M$ in terms of $S^*$-invariant subspaces.
\section{Application to dual truncated Toeplitz operators}
It is easily checked that in $L^2$ we have $K_{\theta}^{\perp} =\overline{H^2_0} \oplus \theta H^2$. We let $Q$ denote the orthogonal projection $Q: L^2 \to (K_{\theta})^{\perp}$. Throughout this section we assume $g \in L^{\infty}$. The dual truncated Toeplitz operator $D_{g}^{\theta} : (K_{\theta})^{\perp} \to (K_{\theta})^{\perp}$ is defined by
$$
f \mapsto Q(gf).
$$
Theorem 6.6 in \cite{camara2019invertibility} shows that for a symbol $g$ that is invertible in $L^{\infty}$ we have $\ker D_{g}^{\theta} = g^{-1} \ker A_{g^{-1}}^{\theta}$; so, given our observation \eqref{matrixtotruncated}, under the condition that $g$ is invertible in $L^{\infty}$ we can write $\ker D_{g}^{\theta}$ as an $L^2$ function multiplied by a model space. We now aim to use recursive methods similar to those used to prove Theorem \ref{vector near invariant with defect} to obtain a decomposition theorem for $\ker D_{g}^{\theta}$.
\vskip 0.4cm
Throughout this section we assume that $\ker D_{g}^{\theta}$ is finite dimensional.
\vskip 0.4cm
We define $A := \{ f \in \ker D_{g}^{\theta} : gf \in K_{\theta} \cap z H^2 \}$ and $C := \ker D_g^{\theta} \cap ( \overline{H^2_0} \oplus \theta z H^2 ) \cap A $; then, using orthogonal decomposition, we can write
$$
\ker D_g^{\theta} = C \oplus ( \ker D_{g}^{\theta} \ominus C ).
$$
\begin{lem}\label{lem1}
If $\ker D_g^{\theta} \subseteq C$ then $\ker D_g^{\theta} = \{ 0 \}$.
\end{lem}
\begin{proof}
Suppose we have a non-zero $ f \in \ker D_g^{\theta} \subseteq C $; then by construction of $C$ we must have $\frac{f}{z} \in \ker D_g^{\theta} \subseteq C$. Iterating, we obtain $\frac{f}{z^n} \in \ker D_g^{\theta}$ for all $ n \in \mathbb{N}$, which cannot hold, since for $n$ sufficiently large $\frac{gf}{z^n} \notin H^2$.
\end{proof}
\begin{cor}\label{cor2}
For any $\ker D_g^{\theta} \neq \{ 0 \} $ we have $1 \leqslant \dim (\ker D_{g}^{\theta} \ominus C ) \leqslant 2 $.
\end{cor}
\begin{proof}
If $\ker D_g^{\theta} \neq \{ 0 \} $ then Lemma \ref{lem1} shows that $1 \leqslant \dim (\ker D_{g}^{\theta} \ominus C )$. Let $F_1$ be the orthogonal projection of $\overline{g} k_0$ onto $\ker D_g^{\theta}$ and $F_2$ the orthogonal projection of $\theta k_0$ onto $\ker D_g^{\theta}$, where $k_0$ is the reproducing kernel at $0$; then $\ker D_g^{\theta} \ominus C$ is generated by $F_1, F_2$. Indeed, if $f \in \ker D_g^{\theta}$ is orthogonal to $F_1, F_2$ then
$$
\langle f , F_1 \rangle = \langle gf, k_0 \rangle = 0,
$$
so $f \in A$, and
$$
\langle f , F_2 \rangle = \langle \overline{\theta}f, k_0 \rangle = 0,
$$
so we also have $P( \overline{\theta}f) \in z H^2$, so $f \in C$.
\end{proof}
Consider $g\ker D_g^{\theta} = gC \oplus ( g\ker D_g^{\theta} \ominus gC ) $. By Corollary \ref{cor2}, $g \ker D_g^{\theta} \ominus gC$ is at most 2-dimensional. If $g \ker D_g^{\theta} \ominus gC$ is 2-dimensional then we denote its orthonormal basis elements by $gf_0, gh_0 $. Then for all $f \in \ker D_g^{\theta}$, using orthogonal projections and the observation that $\frac{C}{z} \subseteq \ker D_g^{\theta}$, we can write
$$
gf = \lambda_0 g f_0 + \mu_0 g h_0 + z gf_1,
$$
where $gf_1 \in g\ker D_g^{\theta}$, and furthermore
$$
|| gf ||^2 = | \lambda_0 |^2 + | \mu_0 |^2 + ||gf_1 ||^2.
$$
In a similar process to Theorem \ref{vector near invariant with defect} we can iterate this process starting with $gf_1$ to obtain
$$
gf = \sum_{i=0}^N gf_0 \lambda_i z^i + \sum_{j=0}^{N} gh_0 \mu_j z^j + z^{N+1} gf_{N+1},
$$
with
\begin{equation}\label{here}
||gf||^2 = \sum_{i=0}^N |\lambda_i|^2 + \sum_{j=0}^N |\mu_j|^2 + ||gf_{N+1}||^2.
\end{equation}
Following the argument laid out in section 3 to deduce \eqref{limit}, we can deduce that $|| g f_{N+1} || \to 0$ in the $H^2$ norm as $N \to \infty$; then $|| g f_{N+1} ||$ must also converge to $0$ in the $L^1$ norm, and so in the $L^1$ norm we must have
$$
gf = \lim_{N \to \infty} \left( \sum_{i=0}^N gf_0 \lambda_i z^i + \sum_{j=0}^{N} gh_0 \mu_j z^j \right).
$$
Now two applications of Hölder's inequality show that the $L^1$ limit of $\sum_{i=0}^N gf_0 \lambda_i z^i + \sum_{j=0}^{N} gh_0 \mu_j z^j$ is equal to $g f_0 \sum_{i=0}^{\infty} \lambda_i z^i + gh_0 \sum_{j=0}^{\infty} \mu_j z^j$, where $\sum_{i=0}^{\infty} \lambda_i z^i$ and $\sum_{j=0}^{\infty} \mu_j z^j$ are limits in the $H^2$ sense. So we may write
$$
gf = g f_0 \sum_{i=0}^{\infty} \lambda_i z^i + gh_0 \sum_{j=0}^{\infty} \mu_j z^j,
$$
and furthermore by taking limits in \eqref{here} we can deduce
$$
||gf||_{H^2}^2 = \sum_{i=0}^{\infty} |\lambda_i|^2 + \sum_{i=0}^{\infty} |\mu_i|^2.
$$
Mimicking the argument from section 3 between \eqref{start} and \eqref{end} we can say $f \in \ker D_g^{\theta}$ if and only if
$$
gf =
\begin{pmatrix}
gf_0 & gh_0
\end{pmatrix}
\begin{pmatrix}
k_0 \\
k_1 \\
\end{pmatrix},
$$
where $\begin{pmatrix}
k_0 \\
k_1 \\
\end{pmatrix}$ lies in a closed $S^*$-invariant subspace of $H^2 ( \mathbb{D} , \mathbb{C}^2)$. With obvious modifications for when $\dim (\ker D_g^{\theta} \ominus C) = 1$ we can deduce the following theorem.
\begin{thm}
\begin{enumerate}
\item If $\dim (g \ker D_g^{\theta} \ominus gC) = 2$ then
$$
g \ker D_g^{\theta} = \begin{pmatrix} gf_0 & gh_0 \\ \end{pmatrix}
K,$$
where $K$ is a closed $S^*$-invariant subspace of $H^2( \mathbb{D}, \mathbb{C}^2)$, $gf_0, gh_0$ are orthonormal basis elements of $(g \ker D_g^{\theta} \ominus gC)$ and, writing $gf = gf_0 k_0 + gh_0 k_1$ with $\begin{pmatrix} k_0 \\ k_1 \\ \end{pmatrix} \in K$, every $f \in \ker D_g^{\theta}$ satisfies $||gf||_{H^2}^2 = ||k_0||_{H^2}^2 + || k_1 ||_{H^2}^2$.
\item If $\dim (g \ker D_g^{\theta} \ominus gC) = 1$ then
$$
g \ker D_g^{\theta} = gf_0 K_{\chi z},
$$
where $\chi$ is some inner function, $gf_0$ is a normalised element of $(g \ker D_g^{\theta} \ominus gC)$ and, writing $gf = gf_0 k$ with $k \in K_{\chi z}$, every $f \in \ker D_g^{\theta}$ satisfies $||gf||_{H^2}^2 = ||k ||_{H^2}^2$.
\end{enumerate}
\end{thm}
Cancelling the factor $g$ and using the same notation as in the previous theorem, we obtain the following.
\begin{cor}
\begin{enumerate}
\item If $\dim ( \ker D_g^{\theta} \ominus C) = 2$ then
$$
\ker D_g^{\theta} = \begin{pmatrix} f_0 & h_0 \\ \end{pmatrix} K.$$
\item If $\dim (\ker D_g^{\theta} \ominus C) = 1$ then
$$
\ker D_g^{\theta} = f_0 K_{\chi z}.
$$
\end{enumerate}
\end{cor}
\section*{Declarations}
The author is grateful to the EPSRC for financial support. \newline The author is grateful to Professor Partington for his valuable comments.
\vskip 0.25cm
\noindent Conflicts of interest/Competing interests- not applicable.
\newline Availability of data and material- not applicable.
\newline Code availability- not applicable.
\newline Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
\bibliographystyle{plain}
\section*{Intro}
This document presents some definitions and vocabulary for working
with datasets that contain complex relationships, applicable to a
large variety of application domains. The concepts borrow from graph
theory, and several other areas of mathematics. The goal is to define
a way of thinking about complex graphs, and how they can be simplified
and condensed into simpler graphs that ``concentrate'' embedded
knowledge into a more manageable size. The output of the process is
a grammar that summarizes or captures the significant or important
relationships.
The ideas described here are not terribly complex; they represent
a kind of ``folk knowledge'' generally known to a number of practitioners.
However, I am not currently aware of any kind of presentation of this
information, either in review/summary form, or as a fully articulated
book or text. The background knowledge appears to be scattered across
wide domains, and occurs primarily in highly abstract settings, outside
of the mainstream computer-science and data-analysis domain. Thus,
this document tries to provide an introduction to these concepts in
a plain-spoken language. The hope is to be precise enough that there
will be few complaints from the mathematically rigorous-minded, yet
simple enough that ``anyone'' can follow through and understand.
Some examples are provided, primarily drawn from linguistics. However,
the concepts are generally applicable, and should prove useful for
analyzing any kind of dataset expressed with pair-wise relationships,
but containing hidden (non-obvious) complex cause-and-effect relationships.
Such datasets include genomic and proteomic data, social-graph data,
and even social-policy information.
Consider the example of determining the effectiveness of educational
curricula. When teaching students, one never teaches advanced topics
until foundations are laid. Yet many students struggle. Given raw
data on a large sample of students, and the curricula they were subjected
to, can one discern sequences and dependencies of cause-and-effect
in this data? Can one find the most effective curriculum to teach,
that advances the greatest number of students? Can one discover different
classes of students, some who respond better to one style than another?
My belief is that these questions can not only be answered, but that
the framework described here can be used to uncover this structure.
Another example might be the analysis of motives and actions in humans.
This includes analysis from real life, as well as the narratives of
books and movies. In a book setting, the author cannot easily put
characters into action until some basic sketch of personality and
motives is developed. Motives can't be understood until a setting
is established. If one can break down a large number of books/movies
into pairs of related facts/scenes/remarks/actions, one can then extract
a grammar of relationships, to see exactly what is involved in the
movement of a narrative from here to there.
Much of this document is devoted to stating definitions for a few
key structures used to talk about the general problem of discerning
relationships and structure. The definitions are inspired by and draw
upon concepts from algebraic topology, but mostly avoid both the rigor
and the difficulty of that topic.
The definitions provide a framework, rather than an algorithm. It
is up to the user to provide some mechanism for judging similarity
- and this can be anything: some neural net, Bayesian net, Markov
chain, or some vector space or SVM-style technique; the overall framework
is agnostic as to these details. The goal is to provide a way of talking
about, thinking about and presenting data so that the important knowledge
contained in it is captured and described, boiled down to a manageable,
workable state from a large raw dump of pair-relationship data.
Currently, the ideas described here are employed in a machine-learning
project that attempts to extract the structure of natural language
in an unsupervised way. Thus, the primary, detailed examples will
come from the natural language domain. The theory should be far more
general than that.
This document resides in, and accompanies, the source code that implements
the ideas here. Specifically, it is in \href{https://github.com/opencog/atomspace/tree/master/opencog/sheaf}{https://github.com/opencog/atomspace/tree/master/opencog/sheaf}
and it spills over into other files, such as \href{https://github.com/opencog/opencog/blob/master/opencog/nlp/learn/scm/gram-class.scm}{https://github.com/opencog/opencog/blob/master/opencog/nlp/learn/scm/gram-class.scm}.
This code is in active development, and is likely to have changed
by a lot since this was written. This document is \emph{not} intended
to describe the code; rather, it is meant to describe the general
underlying concepts.
For the mathematically inclined, please be aware that the concepts
described here touch on the tiniest tips of some very deep mathematical
icebergs, specifically in parsing, type theory and category theory.
I have no hope of providing the needed background, as these fields
are sophisticated and immense. The reader is encouraged to study these
on their own, especially as they are applied in computer science and
linguistics. There are many good texts on these topics.
This document is organized as follows. The first part provides
a definition of a ``section'' of a graph. A section is a lot like
a subgraph, except that it explicitly indicates which edges were cut
to form the subgraph. The next part defines and articulates the concept
of projection, and shows how it can be used to form quotients. The
quotients or projections are termed ``stalks'', and, because each
stalk comes festooned with connectors, they can be thought to resemble
corn-stalks. The next part shows how stalks can be tied together to
form sheaves, and reviews the axioms of sheaf theory to show that
this name is appropriate.
After this comes a lightning review of how data mining, pattern mining
and clustering can be viewed in the context of sheaves. After this
come two asides: a quick sketch of type theory, illustrating the interplay
between data-mined patterns and the concept of types. Another aside
reviews the nature of parsing, illustrating that parsing algorithms
implement the gluing axiom of sheaves, viz., that gluing and parsing
are the same thing. The final part examines polymorphic behavior.
Polymorphism is that point where syntax begins to touch semantics,
where deep structure becomes distinguished from surface structure.
\section*{Sections}
Begin with the standard definition of a graph.
\begin{defn*}
A \noun{graph} $G=\left(V,E\right)$ is an ordered pair $\left(V,E\right)$
of two sets, the first being the set $V$ of vertices, and the second
being the set $E$ of edges. An edge $e\in E$ is a pair $\left(v_{1},v_{2}\right)$
of vertices, where every $v_{k}$ \emph{must} be a member of $V$.
That is, edges in $E$ can only connect vertexes in $V$, and not
to something else. $\diamond$
\end{defn*}
For directed graphs, the vertex ordering in the edge matters. For
undirected graphs, it does not. The subsequent will mostly leave this
distinction unspecified, and allow either (or both) directed and undirected
edges, as the occasion and the need fits. Distinguishing between directed
and undirected graphs is not important, at this point. In most of
what follows, it will usually be assumed that there are no edges with
$v_{1}=v_{2}$ (loops that connect back to themselves) and that there
is at most one edge connecting any given pair of vertexes. These assumptions
are being made to simplify the discussion; they are not meant to be
a fundamental limitation. It just makes things easier to talk about
and less cluttered at the start. The primary application does not
require either construct, and it is straight-forward to add extensions
to provide these features. Similar remarks apply to graphs with labeled
vertexes or edges (such as ``colored'' edges, vertexes or edges
with numerical weights on them, \emph{etc}). Just keep in mind that
such additional markup may appear out of thin air, later on.
Besides the above definition, there are other ways of defining and
specifying graphs. The one that will be of primary interest here will
be one that defines graphs as a collection of sections. These, in
turn, are composed of seeds.
\begin{defn*}
A \noun{seed} is a vertex and the set of edges that connect to it.
That is, it is the pair $\left(v,E_{v}\right)$ where $v$ is a single
vertex, and $E_{v}$ is a set of edges containing that vertex, i.e.
that set of edges having $v$ as one or the other endpoint. The vertex
$v$ may be called the \noun{germ} of the seed. For each edge in the
edge set, the other vertex is called the \noun{connector}.$\diamond$
\end{defn*}
It should be clear that, given a graph $G$, one can equivalently
describe it as a set of seeds (one simply lists all of the vertexes,
and all of the edges attached to each vertex). The converse is not
``naturally'' true. Consider a single seed, consisting of one vertex
$v_{1}$, and a single edge $e=\left(v_{1},v_{2}\right)$. Then the
pair $\left(V,E\right)$ with $V=\left\{ v_{1}\right\} $ and $E=\left\{ \left(v_{1},v_{2}\right)\right\} $
is \emph{not} a graph, because $v_{2}$ is missing from the set $V$.
Of course, we could implicitly include $v_{2}$ in the collection
of vertexes, but this is not ``natural'', if one is taking the germs
of the seeds to define the vertexes of the graph.
Thus, given a seed, each edge in that seed has one ``connected''
endpoint, and one ``unconnected'' endpoint. The ``connected''
endpoint is that endpoint that is $v$. The other endpoint will commonly
be called the \noun{connector}; equivalently, the edge can be taken
to be the connector. Perhaps it should be called a half-edge, as one
end-point is specified, but missing.
The seed can be visualized as a ball, with a bunch of sticks sticking
out of it. A burr one might collect on one's clothing. One can envision
a seed as an analog of an open set in topology: the center (the germ)
is part of the set, and then there's some more, but the boundary is
not part of the set. The vertexes on the unconnected ends of the edges
are not a part of the seed.
\begin{figure}[h]
\caption{A seed}
\includegraphics[width=0.25\columnwidth]{seed.pdf}
\end{figure}
Just as one can cover a topological space with a collection of open
sets, so one can also cover a graph with seeds. This analogy is firm:
if one has open sets $U_{i}$ and $U_{j}$ and $U_{i}\cap U_{j}\ne\emptyset$
then one can take $U_{i}$ and $U_{j}$ to be vertices, and $U_{i}\cap U_{j}$
to be an edge running between them.
More definitions are needed to advance the ideas of connecting and
covering.
\begin{defn*}
A \noun{section} is a set of seeds. $\diamond$
\end{defn*}
It should be clear that a graph $G$ can be expressed as a section;
that section has the nice property that all of the germs appear once
(and only once) in the set $V$ of $G$, and that all of the edges
in $E$ appear twice, once each in two distinct seeds. This connectivity
property motivates the following definition:
\begin{defn*}
Given a section $S$, a \noun{link} is any edge $\left(v_{1},v_{2}\right)$
where both $v_{1}$ and $v_{2}$ appear as germs of seeds in $S$.
Two seeds are \noun{connected} when there is a link between them.
$\diamond$
\end{defn*}
This definition of a link is imprecise. A more proper, technical definition
is that a link can be formed only when the germ $v_{1}$ has $v_{2}$
as a connector, and also, at the same time, the germ $v_{2}$ has
$v_{1}$ as a connector; only then can the two be joined together.
The joining is meant to be optional, not mandatory: just because a
section contains connectors that can be joined, it does not imply
that they must be. The joining is also meant to consume the connectors
as a resource: once two connectors have been connected, neither one
is free to make connections elsewhere.
\begin{figure}[h]
\caption{Two linked (connected) seeds}
\includegraphics[width=0.35\columnwidth]{seeds-two}
\end{figure}
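The joining rule described above can be made concrete in code. The sketch below uses an illustrative representation (a seed as a germ plus a list of connectors; the names are invented for this document and do not come from the accompanying source): a join succeeds only when each germ appears among the other seed's connectors, and a successful join consumes one connector on each side.

```python
def try_link(seed1, seed2):
    """Attempt to join two seeds.  Each seed is a dict with a 'germ'
    and a list of 'connectors' (the far endpoints of its half-edges).
    Joining is optional, not mandatory, and consumes one connector
    from each seed; returns the new link as a pair of germs, or None
    if no join is possible."""
    if seed2["germ"] in seed1["connectors"] and seed1["germ"] in seed2["connectors"]:
        seed1["connectors"].remove(seed2["germ"])
        seed2["connectors"].remove(seed1["germ"])
        return (seed1["germ"], seed2["germ"])
    return None

s1 = {"germ": "a", "connectors": ["b"]}
s2 = {"germ": "b", "connectors": ["a", "c"]}
assert try_link(s1, s2) == ("a", "b")
assert s1["connectors"] == [] and s2["connectors"] == ["c"]
assert try_link(s1, s2) is None  # the connectors are used up
```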
The use of links allows the concepts of paths and connectivity, taken
from graph theory, to be imported into the current context. Thus,
one can obviously define:
\begin{defn*}
A \noun{connected section}, or a \noun{contiguous section} is a section
where every germ is connected to every other germ via a path through
the edges. $\diamond$
\end{defn*}
In graph theory, this would normally be called a ``connected graph'',
but we cannot fairly call it that because the seeds and sections were
defined in such a way that they are not graphs; they only become graphs
when they are fully connected. Nevertheless, it is fairly safe and
straight-forward to apply common concepts from graph-theory. Sections
are almost like graphs, but not quite.
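Because links are ordinary edges, contiguity of a section can be checked exactly as for a graph, by walking the links. A minimal sketch, treating a section as a set of germs together with a set of links (the representation is illustrative):

```python
from collections import deque

def is_contiguous(germs, links):
    """Return True if every germ can reach every other germ by
    following links.  `germs` is a set of vertices; `links` is a
    set of (v1, v2) pairs, treated as undirected."""
    if not germs:
        return True
    adjacency = {g: set() for g in germs}
    for v1, v2 in links:
        adjacency[v1].add(v2)
        adjacency[v2].add(v1)
    # Breadth-first search from an arbitrary germ.
    seen = set()
    queue = deque([next(iter(germs))])
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(adjacency[v] - seen)
    return seen == germs

assert is_contiguous({"a", "b", "c"}, {("a", "b"), ("b", "c")})
assert not is_contiguous({"a", "b", "c"}, {("a", "b")})  # "c" is isolated
```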
Note that there are two types of edges in a section: those edges that
connect to nothing, and those edges that connect to other seeds in
that section. Henceforth, the unconnected edges will be called connectors
(as defined above), while the fully-connected edges will be called
links (also defined above). Connectors can be thought of as a kind-of
half-edge: incomplete, missing the far end, while links are fully
connected, whole.
Seeds and sections can (and should!) be visualized as hedgehogs -
a body with spines sticking out of it - the connectors can be thought
of as the spiny bits sticking out, waiting to make a connection, while
the hedgehog body is that collection of vertices and the fully-connected
links between them.
\begin{figure}[h]
\caption{A connected section}
\includegraphics[width=0.25\columnwidth]{hedgehog}
\end{figure}
Implicit in the above definitions was that, during link formation,
an edge is allowed to connect to another seed if and only if
the connector matches the germ. That is, if $\left(v_{1},v_{2}\right)$
is an edge rooted in the seed for $v_{1}$ and if $\left(v_{3},v_{4}\right)$
is an edge rooted in the seed for $v_{3}$, then these two can form
a link if and only if $v_{2}=v_{3}$ and $v_{4}=v_{1}$. That is,
the connectors are typed: they can only connect to seeds that are
of the same type as the unconnected end of the edge.
This motivates a different way of looking at seeds: they can be visualized
as jigsaw puzzle pieces, where any given tab on one jigsaw piece can
fit into one and only one slot on another jigsaw piece. This union
of a tab+slot is the link. Connectors must be of the same type in
order to be connectible. The types of the connectors will later be
seen to be the same thing as the types of type theory; that is, they
are bona-fide types, in the proper sense of the word.
\begin{figure}[h]
\caption{Joining two connectors to form a link}
\includegraphics[width=0.4\columnwidth]{puzzle}
\end{figure}
The jigsaw puzzle-piece illustration is not uncommon in the literature;
such illustrations are explicitly depicted in a variety of settings.\cite{Sleator1991,Coecke2010,Kart2014,Baez2009}
The point being illustrated here is that the connectors need not be
specific vertexes, they can be vertex types, where any connector of
the appropriate type is allowed to connect. This can be formalized
in an expanded definition of a seed. A provisional definition of a
type is needed, first.
\begin{defn*}
A \noun{type} is a set of vertexes. Notationally, $t=\left\{ v_{a},v_{b},\cdots\right\} $.
$\diamond$
\end{defn*}
This allows the jigsaw concept to be expressed more formally.
\begin{defn*}
A \noun{seed} is a vertex and the set of connector types that connect
to it. That is, it is the pair $\left(v,C_{v}\right)$ where $v$
is a vertex, and $C_{v}$ is a set of connector types containing that
vertex, i.e. that set of edges having $v$ as one endpoint and a type
as the other endpoint. That is, $C_{v}=\left\{ \left(v,t_{a}\right),\left(v,t_{b}\right),\cdots\right\} $.
A single pair $\left(v,t\right)$ can be called a \noun{connector
type}. $\diamond$
\end{defn*}
The capital letter $C$ is used to remind one that members of the
set are connectors. The intent of specifying connector types is exactly
what the jigsaw-puzzle paradigm suggests: links can be created, as
long as the types match up. This is formalized by expanding the definition
of a link.
\begin{defn*}
Given a section $S$, a \noun{link} between seeds $s_{1}=\left(v_{1},C_{1}\right)$
and $s_{2}=\left(v_{2},C_{2}\right)$ is any edge $\left(v_{1},v_{2}\right)$
where $v_{1}$ is in one of the types in $C_{2}$ and $v_{2}$ is
in one of the types in $C_{1}$. That is, there exists a pair $\left(v_{1},t_{a}\right)\in C_{1}$
such that $v_{2}\in t_{a}$ and, symmetrically, there exists a pair
$\left(v_{2},t_{b}\right)\in C_{2}$ such that $v_{1}\in t_{b}$ .
Two seeds are \noun{connected} when there is a link between them.
$\diamond$
\end{defn*}
As before, the creation of links is meant to be optional, not forced.
As before, the connectors are meant to be consumable: once connected,
they cannot be used again. The figure below illustrates the idea.
\begin{figure}[h]
\caption{Seed connectors might be types, not vertexes}
\includegraphics[width=0.35\columnwidth]{seed-puzzle}
\end{figure}
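The typed-link definition can be written out directly: a connector type is a set of admissible vertices, and two seeds may link when each germ belongs to some connector type of the other. A small sketch (the word-sets are invented for illustration):

```python
def can_link(seed1, seed2):
    """Seeds are (germ, connector_types) pairs, where connector_types
    is a list of sets of admissible vertices.  A link is possible when
    each germ lies in some connector type of the other seed."""
    g1, types1 = seed1
    g2, types2 = seed2
    return any(g2 in t for t in types1) and any(g1 in t for t in types2)

nouns = {"cat", "dog"}
verbs = {"runs", "sleeps"}
# A noun-germ with a verb-typed connector can link to a verb-germ
# with a noun-typed connector, but not to another noun-germ.
assert can_link(("cat", [verbs]), ("runs", [nouns]))
assert not can_link(("cat", [verbs]), ("dog", [nouns]))
```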
It's important to realize that the standard approach to graph theory
has been left behind. Although it is possible to hook up seeds to
form a graph, it is also possible to have a collection of seeds that
is not a graph: the category of sections contain the category of graphs
as a subset. Extending the notion of a connector to the notion
of a connector-type in particular does considerable violence to the
notion of graph theory. As long as the narrower definition of seed
was used, one could imagine that a collection of seeds could be assembled
into a graph, and that assembly is unique. Once connector types are
introduced, the possibility that there are multiple, non-unique assemblages
of seeds becomes possible. A graph can be disassembled into seeds,
and, if one is careful to label vertexes and edges in a unique way,
that collection can be viewed as isomorphic to the original graph.
If one is not careful, sloppily assigning labels or avoiding them
entirely, the collection can have multiple non-isomorphic re-assemblies.
The ability to be sloppy in this way is one of the appeals, one of
the benefits of working with seeds and sections. They provide ``elbow
room'' not available in (naive) graph theory.
\subsection*{Why sections?}
What's the point of introducing this seemingly non-standard approach
to something that looks a lot like graph theory? There are several
reasons.
\begin{itemize}
\item From a computational viewpoint, sections have nice properties that
a list of vertexes and edges do not. Given a single seed, one ``instantly''
knows \emph{all} of the edges attached to its germ: they are listed
right there. By contrast, given only a graph description, one has
to search the entire list $E$ for any edges that might contain the
given vertex. Computationally, searching large lists is inefficient,
especially so for very large graphs.
\item The subset of a section is always a section. This is not the case
for a graph: given $G=\left(V,E\right)$, some arbitrary subset of
$V$ and some arbitrary subset of $E$ do not generally form a graph;
one has to apply consistency conditions to get a subgraph.
\item A connected section behaves very much like a seed: just as two seeds
can be linked together to form a connected section, so also two connected
sections can be linked together to form a larger connected section.
Both have a body, with spines sticking out. The building blocks (seeds),
and the things built from them (sections) have the same properties,
lie in the same class. Thus, one has a system that is naturally ``scalable'',
and allows notions of similarity and scale invariance to be explored.
There is no need to introduce additional concepts and constructions.
\item Given two seeds, one can always either join them (because they connect)
or it is impossible to connect them. Either way, one knows immediately.
Graphs, in general, cannot be joined, unless one specifies a subgraph
in each that matches up. Locating subgraphs in a graph is computationally
expensive; verifying subgraph isomorphism is computationally expensive.
\item The analogy between graphs and topology, specifically between open
sets and seeds and the intersection of open sets and edges, allows
concepts and tools to be borrowed from algebraic topology.
\end{itemize}
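The first of these points is easy to demonstrate: in the flat $\left(V,E\right)$ representation, finding the edges at a vertex requires a scan of the whole edge list, whereas a seed carries its edges with it. A sketch of the two representations (Python, purely illustrative):

```python
# Flat graph representation: every neighbourhood query scans E.
V = {"a", "b", "c"}
E = [("a", "b"), ("b", "c")]

def edges_at_flat(v):
    """Find the edges at vertex v by searching the whole edge list."""
    return [e for e in E if v in e]

# Seed representation: each germ carries its edges directly, so a
# neighbourhood query is a single dictionary lookup.
seeds = {}
for v1, v2 in E:
    seeds.setdefault(v1, []).append((v1, v2))
    seeds.setdefault(v2, []).append((v1, v2))

# Both representations agree on the neighbourhood of "b".
assert edges_at_flat("b") == seeds["b"] == [("a", "b"), ("b", "c")]
```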
If we stop here, not much is accomplished, other than to define a
somewhat idiosyncratic view of graph theory. But that is not the case;
the concept of seeds and sections are needed to pursue more complex
constructions. They provide a tool to study natural language and other
systems.
\subsection*{Example: Biochemical reaction type}
An example of a seed applied to the biochemical domain would be the
phosphorylation of ADP to ATP, shown in the figure below.
\begin{center}
\includegraphics[width=0.35\columnwidth]{phosphorylation}
\par\end{center}
The germ of the seed is the point where the semi-circle kisses the
line: not labeled here, the germ would be succinate-CoA ligase. The
connectors are labeled with their types, and the arrows provide directionality.
The connector types clearly indicate what can be linked to what: this
particular seed, when linked, \emph{must} link to a source of ADP,
or a source of phosphate, or a sink of ATP, or a sink of hydroxyls,
if it is to be validly linked into any part of a connected section.
In this example, ADP and ATP can both be treated as simple connectors,
while R-OH does name a type: R can be any moiety. Implicit here, but
not explicit in the seed, is that the R group on both connectors must
be the same.
An example of a connected section would be the Krebs cycle, taken
as a whole:
\begin{center}
\includegraphics[width=0.55\columnwidth]{krebs}
\par\end{center}
Each distinct reaction constitutes a seed; the heavy lines forming
the cycle are the links internal to the section, and each tangent
arrow is a pair of connectors, with one end of the arrow being an
unconnected reaction input, and the other end of the arrow an unconnected
reaction product. Thus, for example, connector types include NAD,
NADH, water and ATP, among others. These connectors are free to be
attached to other seeds or sections.
This example may seem dubious, at this point of the presentation.
That it is a valid example should become clear with further development
of the general principles in what follows.
\subsection*{Similar concept: Link Grammar}
Readers familiar with Link Grammar\cite{Sleator1991,Sleator1993}
should have recognized seeds as being more or less the same thing
as ``disjuncts'' in Link Grammar. The formal definition for Link
Grammar disjuncts are a bit more complicated than seeds, and is expanded
on in later sections. To lay that groundwork, however, consider an
unlabeled dependency parse for the sentence ``this is an example'',
shown in the figure below.
\begin{figure}[h]
\caption{A dependency parse decomposed into four seeds}
\includegraphics[width=0.85\columnwidth]{example}
\end{figure}
The dependency parse is shown as a graph, with four vertexes. Below,
the parse is decomposed into the component seeds; as always, the open
dots are connectors, the closed dots are the germs. Using the notation
$\left(v,C_{v}\right)$ for a seed, where $C_{v}=\left\{ \left(v,v_{a}\right),\left(v,v_{b}\right),\cdots\right\} $,
these seeds can be textually written as\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{this: \{(this, is+)\}}
\textsf{is: \{(is, this-), (is, example+)\}}
\textsf{an: \{(an, example+)\}}
\textsf{example: \{(example, is-), (example, an-)\}}%
\end{minipage}\\
\\
\\
The above vertex: edge-list notation is a bit awkward and hard to
read. A simpler notation conveying the same idea is\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{this: is+;}
\textsf{is: this- \& example+;}
\textsf{an: example+;}
\textsf{example: an- \& is-;}%
\end{minipage}\\
\\
\\
In both textual representations, the pluses and minuses are used to
indicate word-order: minuses to the left, pluses to the right. This
is an additional decoration added to the connectors, needed to indicate
and preserve word-order, but not a part of the core definition of
a seed. The ampersand is not symmetric, but enforces order; this is
not apparent here, but is required for the proper definition.
In Link Grammar, the objects to the right of the colon are called
``disjuncts''. The name comes from the idea that they disjoin collocational
extractions. After observing a large corpus, one might find that\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{is: (this- \& example+) or (banana- \& fruit+) or (apple-
\& green+);}%
\end{minipage}\\
\\
which indicates that sentences such as ``a banana is a kind of fruit''
or ``this apple is green'' were observed and parsed into (unlabeled)
dependencies.
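The seed/disjunct bookkeeping above is mechanical enough to sketch in code. The following is a toy illustration only; the dictionary layout and the \textsf{links} helper are invented for this sketch, and are not the actual Link Grammar data structures. Each germ carries a list of connectors, and a link forms when a \textsf{B+} connector on germ \textsf{A} meets the matching \textsf{A-} connector on germ \textsf{B}.

```python
# Toy seeds for "this is an example": each germ maps to its connector list.
# "+" points rightward, "-" leftward; a link joins a matched +/- pair.
# (Hypothetical sketch -- not a real Link Grammar API.)
seeds = {
    "this":    ["is+"],
    "is":      ["this-", "example+"],
    "an":      ["example+"],
    "example": ["is-", "an-"],
}

def links(seeds):
    """Form links: connector 'B+' on germ A mates with 'A-' on germ B."""
    out = []
    for a, conns in seeds.items():
        for c in conns:
            if c.endswith("+"):              # scan rightward only, once per pair
                b = c[:-1]
                if a + "-" in seeds.get(b, []):
                    out.append((a, b))
    return out

print(links(seeds))   # [('this', 'is'), ('is', 'example'), ('an', 'example')]
```

Note that each connector is used once and only once: the three links recovered here consume all six connectors, reassembling the dependency parse of the figure.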
\subsection*{Similar concept: lambda notation}
Linguistics literature sometimes describes similar concepts using
a lambda-calculus notation. For example, one can sort-of envision
the expression $\lambda M.xyz$ as a seed with the germ $M$ and with
connectors $x$, $y$ and $z$. This notation has been used to express
the concept of a seed, as described above. For example, Poon and Domingos\cite{Poon2009}
write $\lambda y\lambda x.\mbox{borders}(x,y)$ to represent the attachments
of the word ``borders'' as a synonym for ``is next to''. This
is illustrated with the verb-phrase $\lambda y\lambda x.\mbox{borders}(x,y)(\mbox{Idaho})$
which beta-reduces to the verb-phrase $\lambda x.\mbox{borders}(x,\mbox{Idaho})$
to indicate that $x$ is next to Idaho. The utility of this device
becomes apparent because one can use this same notation to write $\lambda y\lambda x.\mbox{is\_next\_to}(x,y)$
and $\lambda y\lambda x.\mbox{shares\_a\_border\_with}(x,y)$ as synonymous
phrases. The lambda notation allows $x$ and $y$ to be exposed as
connectors, while at the same time hiding the links that were required
to assemble seeds for ``next'', ``is'', and ``to'' into a phrase.
That is, $\lambda y\lambda x.\mbox{is\_next\_to}(x,y)$ is an example
of a connected section, having $x$ and $y$ as the externally exposed
connectors and the internal links between ``next'', ``is'', and
``to'' hidden.
The problem with this notation is that, properly speaking, lambda
calculus is a system for generating and working with strings, not
with graphs, and lambdas are designed to perform substitution (beta-reduction),
and not for connecting things.
That is, lambda terms are always strings of symbols, and the variables
bound by the lambda are used to perform substitutions. To illustrate
the issue, suppose that $M$ above is $axbyczd$ and suppose that
$\lambda N.w=ewf$. Can these be ``connected'' together, linked
together like seeds? No: if one tried to ``connect'' $N$ to $z,$
one has the beta-reduction $(\lambda M.xyz)\lambda N.w\rightarrow\lambda axbycewfd.xyw$.
There is no way to express some symmetric version of this, because
$(\lambda N.w)\lambda M.xyz\rightarrow\lambda eaxbyczdf.xyz$ which
is hardly the same. Now, of course, lambda calculus has great expressive
power, and one could invent a way of encoding graph theory, and/or seeds,
in lambda calculus; however, doing so would result in a verbose and
complex system. It's easier to work with graphs directly, and just
sleep peacefully with the knowledge that one could encode them with
lambdas, if your life depended on it.
Note also that there have been extensions of the ideas of lambda calculus
to graphs; however, those extensions cling to the fundamental concept
of beta reduction. Thus, one works with graphs that have variables
in them. Given a variable, one plugs in a graph in the place of that
variable. The OpenCog \href{http://wiki.opencog.org/w/PutLink}{PutLink}
works in exactly this way. The beta-reduction is fundamentally not
symmetrical: putting A into B is not the same as putting B into A.
The concept of ``connecting'' in a symmetric way doesn't arise.
\subsection*{Similar concept: tensor algebra}
The \href{https://en.wikipedia.org/w/Tensor_algebra}{tensor algebra}
is an important mathematical construct underlying large parts of mathematical
analysis, including the theory of vector spaces, the theory of Hilbert
spaces, and, in physics, the theory of quantum mechanics.
\begin{figure}[h]
\caption{A tensor with three input wires and two output wires}
\includegraphics[width=0.3\columnwidth]{tensor}
\end{figure}
It has been widely noted that tensor algebras have the structure of
monoidal categories; perhaps the most insightful and carefully explained
such development is given by Baez and Stay\cite{Baez2009}. The diagram
of a tensor shown above is taken from that paper; it is a diagrammatic
representation of a morphism $f:X_{1}\otimes X_{2}\otimes X_{3}\to Y_{1}\otimes Y_{2}$.
There are several interesting operations one can do with tensors.
One of them is the contraction of indexes between two tensors. For
example, to multiply a matrix $M_{ik}$ by a vector $v_{k}$, one
sums over the index $k$ to obtain another vector: $w_{i}=\sum_{k}M_{ik}v_{k}$.
The matrix $M_{ik}$ should be understood as a 2-tensor, having two
connectors, while vectors are 1-tensors. The intent here is that $M_{ik}$
is to be literally taken as a seed, with $M$ the germ, and $i$ and
$k$ the connectors on the germ. The vector $v_{k}$ is another seed,
with germ $v$ and connector $k$. The inner product $\sum_{k}M_{ik}v_{k}$
is a connected section. The multiplication of vectors and matrices
is the act of connecting together connectors to form links: multiplication
is linking.
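This ``multiplication is linking'' claim can be checked directly in a few lines of NumPy; the particular matrix and vector below are arbitrary, chosen only for the sketch.

```python
import numpy as np

# M is a seed with germ M and connectors i, k; v is a seed with germ v
# and connector k.  Contracting the shared index k joins the two k
# connectors into a link, leaving a section with one free connector, i.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # 2-tensor: two connectors, i and k
v = np.array([5.0, 6.0])          # 1-tensor: one connector, k

w = np.einsum("ik,k->i", M, v)    # w_i = sum_k M_ik v_k : linking is summing
print(w)                          # [17. 39.] -- same as M @ v
```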
Tensors have additional properties and operations on them, the most
important of which, for analysis, is their linearity. For the purposes
here, the linearity is not important, whereas the ability to contract
indexes is. The contraction of indexes, that is, the joining together
of connectors to form links, gives tensor algebras the structure of
a monoidal category. This is a statement that seems simple, and yet
carries a lot of depth. As noted above, the beta-reduction of lambda
calculus also looks like the joining together of connectors. This
is not accidental; rather, it is the side effect of the fact that
the internal language of closed monoidal categories is simply typed
lambda calculus. The words ``simply typed'' are meant to convey
that there is only one type. For the above example morphism, that
would mean that $X_{1}$ and $X_{2}$ and so on all have the same
type: $X_{1}=X_{2}=X_{3}=Y_{1}=Y_{2}$. The end-points on the seed
are NOT labeled; equivalently, they all carry the same label. This
is in sharp contrast to the earlier example\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{is: this- \& example+;}%
\end{minipage}\\
\\
where the two connectors are labeled, and have different types, which
sharply limit what they connect to. The \textsf{this-} connector has
the type ``\textsf{this-is}'', and can only attach to another connector
having the same type, namely, the \textsf{is+} connector on ``this''\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{this: is+;}%
\end{minipage}\\
\\
It may seem strange to conflate the concept of tensors and monoidal
categories with linguistic analysis, yet this has a rich and old
history, briefly touched on in the next section. The core principle
driving this is that the Lambek calculus, underpinning the categorial
grammars used in linguistic analysis, can be embedded into a fragment
of non-commutative linear logic. The remaining step is to recall that
linear logic is the logic of tensor categories; the non-commutative
aspect is a statement that the left and right products must be handled
distinctly.
\subsection*{Similar concept: Lambek Calculus}
The foundations of categorial grammars date back to Lambek in 1961\cite{Lambek61,Marcus1967}
and the interpretation in terms of tensorial categories proliferates
explosively in modern times. One direct example can be found in works
by Kartsaklis\cite{Kart2013,Kart2014}, where one can find not only
a detailed development of the tensorial approach, together with its
type theory, but also explicit examples, such as the tensor
\[
\overrightarrow{men}\otimes\overline{built}\otimes\overrightarrow{houses}
\]
together with explicit instructions on how to contract this with a
different tensor
\[
\mathcal{F}\left(\alpha_{\mbox{subj verb obj}}\right)=\epsilon_{W}\otimes1_{W}\otimes\epsilon_{W}
\]
to obtain the ``quantization'' of the sentence ``men built houses''.
This notation will not be explained here; the reader should consult
\cite{Kart2013} directly for details. The point to be made is that
this kind of tensorial analysis can be, and is done, and often invokes
words like ``quantum'' and ``entanglement'' to emphasize the connection
to linear logic and to linear type theory.
Unfortunately, it is usually not clearly stated that it is only a
fragment of linear logic and linear type theory that applies. In linguistics,
it is not the linearity that is important, but rather the conception
of frames (in the sense of Kripke frames in proof theory). Frames
have the important property of presenting choices or alternatives:
one can have either this, or one can have that. The property of having
alternatives is described by intuitionistic logic, where the axiom
of double-negation is discarded. This either-or choice appears as
the concept of a ``multiverse'' in quantum mechanics, and far more
mundanely as alternative parses in linguistics.
Another worthwhile example of tensor algebra can be found in equation
13 of \cite{Kart2014}, reproduced below:
\[
\overline{verb}=\sum_{i}\left(\overrightarrow{subject}_{i}\otimes\overrightarrow{object}_{i}\right)
\]
where $\overrightarrow{subject}_{i}$ and $\overrightarrow{object}_{i}$
are meant to be the $i$th occurrence of a subject/object pair in
an observed corpus. If the corpus consisted of two sentences, ``a
banana is a kind of fruit'' and ``this apple is green'', then one
would write
\[
\overline{verb}=\left(\overrightarrow{banana}\otimes\overrightarrow{fruit}\right)+\left(\overrightarrow{apple}\otimes\overrightarrow{green}\right)
\]
where the verb, in this case, is ``is''. The word
order, that is, the left-right placement of the dependencies, is controlled
by means of the pregroup grammar. The pregroup grammar and its compositionality
properties follow directly from the properties of the left-division,
right-division and multiplication in the Lambek calculus. A quick
modern mathematical review of the axioms of the Lambek calculus can
be found in Pentus\cite{Pentus98}, which also provides a proof of
equivalence to context-free grammars.
\subsection*{Similar concept: history and Bayesian inference}
Some first-principles applications of Bayesian models to natural language
explicitly make use of a sequential order, called the ``history''
of a document.\cite{Rosen1996} That is, the probability of observing
the $n$-th word of a sequence is taken to be $P(w_{n}|h)$ where
$h=w_{n-1},w_{n-2},\cdots,w_{1}$ is termed the history. This conception
of probability is sharply influenced by the theory of Markov processes
and finite-state machines, dating back to the dawn of information
theory.\cite{Ash1965} In a finite-state process model, the future
state is predicated only on the current state, and thus the Markov
assumption holds. In deciphering such a process, one might not know
how the current state is correlated to the output symbol, thus leading
to the concept of a Hidden Markov Model (HMM). The concept of ``history''
is well-suited for such analysis. Several issues, however, make this
approach impractical for many common problems, including natural language.
\begin{figure}[h]
\caption{The history of a text as a sequence of words}
\includegraphics[width=0.4\columnwidth]{history}
\end{figure}
One issue, already noted, is the sequential nature of the process.
One can try to hand-wave away this issue: given a graph of vertices,
it is sufficient to write the vertexes in some order; any order will
do. This obscures the fact that $n$ vertexes admit $n!$ ($n$-factorial)
possible orderings: a combinatorial explosion, when the actual
data graph may have a far smaller number of interactions between
vertexes (aka ``edges''). By encoding the known interactions as
edges, a graphical approach avoids such a combinatorial explosion
from the outset.
To put it more bluntly: a sequential history model of genomic and
proteomic data is inappropriate. Although base pairs and amino acids
come in sequences, the interactions between different genes and proteins
are not in any way, shape, or form sequential. The interactions are
happening in parallel, in distinct, different physical locations in
a cell. These interactions can be depicted as a graph. Curiously,
that graph can resemble the one depicted below, although the depiction
is meant to show something different: it is meant to show a history.
\begin{figure}[h]
\caption{A Viterbi parse lattice of a Markov chain\label{fig:A-Viterbi-parse}}
\includegraphics[width=0.2\paperwidth]{viterbi}
\end{figure}
Figure \ref{fig:A-Viterbi-parse} depicts the lattice of a Viterbi
parse of a Markov chain. The dashed green line depicts a maximum-likelihood
path through the lattice, that is, the most likely history. Viterbi
decoding, using an ``error correcting code'', is a process by which
the validity of the dashed green path is checked, and failing paths
discarded. For natural language, the dashed green path must be a grammatically
correct sequence of words. For a radio receiver, the dashed green path
must be a sequence of bits that obey some error-correction polynomial;
if it doesn't, the next-most-likely path is selected.
Each black line represents a probability $p_{ij}$ of moving from
state $i$ to state $j$ at the next time-step. That is, $p_{ij}=P(w_{n}=j|w_{n-1}=i)$
is the likelihood of word $j$ given word $i$ in the immediate past.
The probabilities are arranged such that $\sum_{j}p_{ij}=1$. This
is called a Markov model, because only the most recent state transitions
are depicted: there are no edges connecting the nodes more than one
time-step apart; there are no edges connecting $w_{n}$ to $w_{n-2},$
etc. Put differently, $P(w_{n}|h)=P(w_{n}|w_{n-1})$. That is, this
depicts the use of 2-grams to predict the current state.
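Such a 2-gram (Markov) model can be sketched in a few lines; the four states and the transition probabilities below are invented for illustration. Each row of the transition matrix sums to one, and the likelihood of a history is just the product of its transitions.

```python
import numpy as np

# A toy 4-state Markov chain: p[i, j] = P(w_n = j | w_{n-1} = i).
# From any state i, *some* next state must occur, so each row sums to one.
# (States and probabilities are invented for this sketch.)
states = ["this", "is", "an", "example"]
p = np.array([
    [0.0, 1.0, 0.0, 0.0],   # "this"    -> "is"
    [0.0, 0.0, 0.5, 0.5],   # "is"      -> "an" or "example"
    [0.0, 0.0, 0.0, 1.0],   # "an"      -> "example"
    [0.5, 0.5, 0.0, 0.0],   # "example" -> "this" or "is"
])
assert np.allclose(p.sum(axis=1), 1.0)

def history_prob(words):
    """Likelihood of a word sequence: product of 2-gram transitions."""
    idx = [states.index(w) for w in words]
    return float(np.prod([p[i, j] for i, j in zip(idx, idx[1:])]))

print(history_prob(["this", "is", "an", "example"]))    # 0.5
print(history_prob(["example", "this", "an", "this"]))  # 0.0
```

The second history gets probability zero: most of the edges in the full history tensor are simply never observed, which is the sparsity discussed below.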
Non-Markov models would have edges connecting nodes further in the
past. An $n$-gram approach to language digs $n$ steps into the past.
If there are $k$ states, and $n$ steps into the past, then $k^{n}$
edges are required: that is, a rank-$n$ tensor. Here, $k=4$ and
$n=2$ is depicted; in natural language $k$ is the number of words
(say, $k=10^{4}$ for a common subset of the English language), while
$n$ is the length of a longer sentence, say $n=12$. In this case,
the history tensor $P(w_{n}|h)$ has $k^{n}=10^{48}=2^{160}$ edges.
But of course, this is computationally absurd. It is also theoretically
absurd: almost all of those edges have zero probability. Almost none
of the edges are needed; the actual tensor is very very sparse.
The red path in the figure below indicates a very unlikely word-sequence:
``example this an this''. There are $4\times16=64$ paths through
it. Of these, only 3 are plausible: the green edges, and the sequences
``this example is an'' and ``an example is this''. The others
can't be observed.
\begin{figure}[h]
\caption{Likely and unlikely word sequences}
\includegraphics[width=0.35\columnwidth]{chain}
\end{figure}
The sparsity is easily exposed with dependency parsing. So, for example,
if $w_{n-3}=this$ and $w_{n-2}=is$ and $w_{n-1}=an$, a dependency
parse will tell you that $w_{n}$ must be a singular noun starting
with a vowel, or an adjective starting with a vowel. It also tells
you that, for this particular history, this noun can depend only on
$w_{n-2}$ and on $w_{n-1}$ but not on $w_{n-3}$. A collection of
dependency parses obtained from a corpus identifies which edges matter,
and which edges do not.
Dependency parses do even more: they unveil possible paths, and not
just pair-wise edges. They provide a more holistic view of what might
be going on in natural language. That is, the notation
\[
\overline{is}=\left(\overrightarrow{banana}\otimes\overrightarrow{fruit}\right)+\left(\overrightarrow{apple}\otimes\overrightarrow{green}\right)
\]
and\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{is: (banana- \& fruit+) or (apple- \& green+);}%
\end{minipage}\\
\\
and
\[
P\left(w_{n}=fruit|w_{n-1}=is,w_{n-2}=banana\right)+P\left(w_{n}=green|w_{n-1}=is,w_{n-2}=apple\right)
\]
all represent the same knowledge; the dependency notation appears
to be less awkward than thinking about history as some Bayesian probability.
The dependency notation focuses attention on a different part of the
problem.
Another popular way to at least partly deal with the sparsity of the
history tensor $P(w_{n}|h)$ is to use skip-grams. The idea recognizes
that many of the edges of an $n$-gram will be zero, and so these
edges can be skipped. This is not a bad approach, except that it is
``simply typed'': it does not leverage the possibility that different
words might have different types (verb, noun, ...) and that this typing
information delivers further constraints on the structure of the skip-gram.
That is, the notion of subj-verb-obj not only tells you that your
skip-gram is effectively a 3-gram, but also that the first and third
words belong to a class called ``noun'', and the middle is a transitive
verb. This sharply prunes the number of possibilities \emph{before}
the learning algorithm is launched, instead of during or after. The
fact that such pruning is even possible is obscured by the notation
and language of $n$-grams and the history $P(w_{n}|h)$.
A different stumbling block of the ``history'' approach is that
it ignores ``the future'': the fact that the words that might be
said next have already influenced the choice of the words already
spoken. This can be hand-waved away by stating that the history is
creating a model of (hidden) mental states, and that this model already
incorporates those, and thus is anticipating future speech actions.
Although this might be philosophically acceptable to some degree,
it again forces complexity onto the problem, when the complexity is
not needed. If you've already got the document, look at all of it;
go all the way to the end of the sentence. Don't arbitrarily divide
it into past and future, and discard the future.
To summarize: dependency structures appear naturally; flattening them
into sequences places one at a notional, computational and conceptual
disadvantage, even if the flattening is conceptually isomorphic to
the original problem. The tensor $P(w_{n}|h)$ may indeed encode all
possible knowledge about the text in a rigorously Bayesian fashion;
but it's unwieldy.
\section*{Quotienting}
The intended interpretation for the graphs discussed in this document
is that they represent or are the result of capturing a large amount
of collected raw data. From this data, one wants to extract commonalities
and recurring patterns.
The core assumption being made in this section is that, when two local
neighborhoods of a graph are similar or identical, then this reflects
some important similarity in the raw data. That is, similarity of
subgraphs is the be-all and end-all of extracting knowledge from the
larger graph, and that the primary goal is to search for, mine, such
similar subgraphs.
Exactly what it means to be ``similar'' is not defined here; this
is up to the user. Similarity could mean subgraph isomorphism, or
subgraph homomorphism, or something else: some sort of ``close-enough''
similarity property involving the shape of the graph, the connections
made, the colors, directions, labels and weights on the vertexes or
edges. The precise details do not matter. However, it is assumed that
the user can provide some algorithm for finding such similarities,
and that the similarities can be understood as a kind-of ``equivalence
relation''.
\subsection*{Example of similarity}
To motivate this, consider the following scenario. One has a large
graph, some dense mesh, and one decides, via some external decision
process, that two vertexes are similar. One particularly good reason
to think that they are similar is that they share a lot of nearest
neighbors. In a social graph, one might say they have a lot of friends
in common. In genomic or proteomic data, they may interact with the
same kinds of genes/proteins. In natural language, they might be words
that are synonyms, and thus get used the same way across many different
sentences; specifically, the syntactic dependency parse links these
words to the same set of heads and dependents. At any rate, one has
a large graph, and some sort of equivalence operation that can decide
if two vertexes are the ``same'', or are ``similar enough''. Whenever
one has an equivalence relation, one can apply it to obtain a quotient,
grouping together into a single identity all things that are the same.
To make this even more concrete, consider this example from linguistics.
Suppose, given a corpus, one has observed three sentences: ``Mary
walked home'', ``Mary ran home'' and ``Mary drove home''. A dependency
parse provides three seeds: \\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{walked: Mary- \& home+; }
\textsf{ran: Mary- \& home+;}
\textsf{drove: Mary- \& home+;}%
\end{minipage}\\
\\
which seem to be begging for an equivalence relation that will reduce
these to \\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{walked ran drove: Mary- \& home+; }%
\end{minipage}\\
\\
Using a tensorial notation, one starts with
\[
\overrightarrow{Mary}\otimes\overline{walked}\otimes\overrightarrow{home}+\overrightarrow{Mary}\otimes\overline{ran}\otimes\overrightarrow{home}+\overrightarrow{Mary}\otimes\overline{drove}\otimes\overrightarrow{home}
\]
and applies the equivalence relation to obtain
\[
\overrightarrow{Mary}\otimes\left(\overline{walked}+\overline{ran}+\overline{drove}\right)\otimes\overrightarrow{home}
\]
The structure here strongly resembles the application of the distributive
law of multiplication over addition. This distributivity property
is one of the appeals of the tensor notation. One can obtain a similar
sense of distributivity by using the operator ``\textsf{or}'' to
separate the Link Grammar style stanzas, and note that the change
also appears to be an application of the distributive law of conjunction
over disjunction.
This is illustrated pictorially, in figure \ref{fig:Creating-a-quotient}.
\begin{figure}[h]
\caption{Creating a quotient graph\label{fig:Creating-a-quotient}}
\includegraphics[width=0.55\columnwidth]{similar}
\end{figure}
It need not be the case that an equivalence relation is staring us
in the face, yet here, it is. The vertexes ``walked'', ``ran''
and ``drove'' can be considered similar, precisely because they
have the same neighbors. The upper graph can be simplified by computing
a quotient, shown in the lower part: the quotient merges these three
similar vertexes into one. The result is not only a simpler graph,
but also some vague sense that ``walked'', ``ran'' and ``drove''
are synonymous in some way.
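The merging step in the figure can be sketched directly: group germs whose connector sets coincide, and let each group become a single vertex of the quotient graph. As before, the representation is an invented toy, using the crudest possible similarity relation (exact equality of connector sets).

```python
from collections import defaultdict

# Quotient a set of seeds by the relation "has exactly the same connectors".
# Germs sharing a connector set are merged into one quotient vertex.
seeds = {
    "walked": frozenset({"Mary-", "home+"}),
    "ran":    frozenset({"Mary-", "home+"}),
    "drove":  frozenset({"Mary-", "home+"}),
}

def quotient(seeds):
    classes = defaultdict(list)
    for germ, conns in seeds.items():
        classes[conns].append(germ)          # group germs by connector set
    return {tuple(sorted(g)): c for c, g in classes.items()}

print(sorted(quotient(seeds)))   # [('drove', 'ran', 'walked')]
```

The three verbs collapse into one merged vertex carrying the shared connectors, exactly as in the lower half of the figure.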
\subsection*{Quotienting}
If one has an equivalence relation that can be applied to a graph,
then the obvious urge is to attempt to perform quotienting on the
graph. That is, to create a new graph, where the ``equal'' parts
are merged into one.
The first issue to be cleared out of the way is the use of the word
``\href{https://en.wikipedia.org/wiki/Quotient}{quotienting}'',
which seems awkward, since the example above seemed to involve some
sort of factoring, or the application of a distributive law of some
sort. The terminology comes from modulo arithmetic, and is in wide
use in all branches of mathematics. A simple example is the idea of
dividing by three: given the set of integers $\mathbb{Z}$, one partitions
it into three sets: the set $\left\{ 0,3,6,9,\cdots\right\} $, the
set $\left\{ 1,4,7,\cdots\right\} $ and the set $\left\{ 2,5,8,\cdots\right\} $.
These three sets are termed the cosets of 0, 1 and 2, and all elements
in each set are considered to be equal, in the sense that, for any
$m$ and $n$ in any one of these sets, it is always true that $m=n\mod3$:
they are equal, modulo 3. In this way, one obtains the quotient set
$\mathbb{Z}_{3}=\mathbb{Z}/3\mathbb{Z}=\mathbb{Z}/\mod3=\left\{ 0,1,2\right\} $.
Modulo arithmetic resembles division, ergo the term ``quotient''.
Given a set $S$ and an equivalence relation $\sim$, it is common
to write the quotient set as $Q=S/\sim$. In the above, $S$ was $\mathbb{Z}$
and $\sim$ was $\mod3$. In general, one looks for, and works with
equivalence relations that preserve desirable algebraic properties
of the set, while removing undesirable or pointless distinctions.
In the modulo arithmetic example, addition is preserved: it is well
defined, and works as expected. In the linguistic example, the subj-verb-obj
structure of the sentence is preserved; the quotienting removes the
``pointless'' distinction between different verbs.
Quotienting is often described in terms of homomorphisms, functions
$\pi:S\to Q$ that preserve the algebraic operations on $S$. For
example, if $m:S\times S\times S\to S$ is a three-argument operation
on $S$, one expects that $\pi$ preserves it: that $\pi\left(m\left(a,b,c\right)\right)=m\left(\pi\left(a\right),\pi\left(b\right),\pi\left(c\right)\right)$.
For the previous example, if $m$ was used to provide or identify
a subj-verb-obj relationship, then, after quotienting, one expects
that $m$ can still identify the verb-slot correctly.
\subsection*{Graph quotients}
In graph theory, the notion of quotienting is often referred to as
working ``relative to a subgraph''. Given a graph $G$ and a subgraph
$A\subset G$, one ``draws a dotted line'' or places a balloon around
the vertexes and edges in $A$, but preserves all of the edges coming
out of $A$ and going into $G$. The internal structure of $A$ is
then ignored. The equivalence relation makes all elements of $A$
equivalent, so that $A$ behaves as if it were a single vertex, with
assorted edges attached to it, running from $A$ to the rest of $G$.
\subsection*{Stalks}
Given the above notion of a graph quotient, it can be brought over
to the language of seeds and sections, established earlier. Let $G$
be a graph, and let $v_{a}$ and $v_{b}$ be two vertexes in the graph,
with corresponding seeds $s_{a}$ and $s_{b}$ extracted from the
graph. That is, $s=\left(v,C_{v}\right)$ with $C_{v}$ being the
set of edges connecting $v$ to all of its nearest neighbors. Let
$\pi$ be a projection function, such that $\pi\left(v_{a}\right)=\pi\left(v_{b}\right)$.
That is, $\pi:V\to B$ is a map from the vertices $V$ of $G$ to
some other set $B$.
It is not hard to see that $\pi$ is a morphism of graphs; it not
only maps vertexes, but it can be extended to map edges as well. The
target of $\pi$ is a graph quotient.
\begin{defn*}
Given a map $\pi:V\to B$, the \noun{stalk} above $b\in B$ is the
set $S$ of seeds such that for each $s=\left(v,C_{v}\right)\in S$,
one has that $\pi(v)=b$. $\diamond$
\end{defn*}
\begin{figure}[h]
\caption{A stalk and its projection}
\includegraphics[width=0.6\columnwidth]{stalk}
\end{figure}
In general, this definition does not require that the map $\pi:V\to B$
be a total map; that is, it does not need to be defined on all of
$V$. Also, $V$ does not need to be the vertexes of some specific
graph; it is enough that $V$ is a set of germs of seeds. That is,
the seeds in the stalk can be generalized seeds, having typed connectors,
rather than connectors derived from edges. The vertexes in the stalk
can be visualized as being stacked one on top another, forming a tower
or a fiber, with the edges sticking out as spines. When the seeds
carry typed connectors, the stalk can be visualized as a tower of
jigsaw-puzzle pieces.
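Computationally, a stalk is just the preimage of a base germ under $\pi$. The sketch below makes this concrete, with an invented projection that collapses three synonymous verbs onto one base germ.

```python
# The stalk above a base germ b is the set of seeds whose germ projects to b.
# Here pi_g collapses three synonymous verbs onto the base germ "went".
# (The projection and seed data are invented for this sketch.)
pi_g = {"walked": "went", "ran": "went", "drove": "went", "Mary": "Mary"}

seeds = [
    ("walked", ("Mary-", "home+")),
    ("ran",    ("Mary-", "home+")),
    ("drove",  ("Mary-", "home+")),
    ("Mary",   ("walked+",)),
]

def stalk(b, seeds, pi_g):
    """All seeds stacked above the base germ b: the preimage of b."""
    return [s for s in seeds if pi_g[s[0]] == b]

print(len(stalk("went", seeds, pi_g)))   # 3 seeds in the tower above "went"
```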
\begin{figure}[h]
\caption{A corn stalk, a stack of puzzle pieces}
\includegraphics[width=0.25\textwidth]{cornstalk.jpg}\includegraphics[width=0.25\columnwidth]{stack-jigsaw-puzzle-pieces.jpg}
\end{figure}
Note that the projection of a stalk is a seed. Its germ is $b$,
and if any connector appears in the stalk, then it also appears as
a connector on $b$ in the base. At least, this is the unassailable
conclusion if one starts with a graph, and assumes that $\pi$ is
a graph morphism. It will prove to be very useful to loosen this restriction,
that is, to allow $\pi$ to add or remove connectors. Thus, it is
useful to immediately broaden the definition of the stalk.
\begin{defn*}
Given a map $\pi:E\to B$, where both $E$ and $B$ are collections
of seeds, the \noun{stalk} above $b\in B$ is the set $S$ of seeds in
$E$ such that for each $s=\left(v,C_{v}\right)\in S$, one has that
$\pi(s)=b$. $\diamond$
\end{defn*}
In this revised definition, there is no hint of what $\pi$ did with
the connectors. In particular, there is no way to ask about some specific
connector on some seed $s$, and what happened to it after $\pi$
mapped $s$ to $b$. This definition is perhaps too general; in the
most common case, it is useful to project the connectors as well as
the germs. It is also very useful to be able to say that a particular
connector on $s$ can be mapped to a particular connector on $b$.
Yet it is also useful to sometimes discard some connectors because
they are infrequently used, to perform pruning, as it were. These
use-cases will be returned to later. There is no particular reason
to allow pruning during projection; it can always be done before,
or after.
Thus, perhaps the most agreeable definition for a stalk is this.
\begin{defn*}
Given a map $\pi:E\to B$, where both $E$ and $B$ are collections
of seeds, the \noun{stalk} above $b\in B$ is the set $S$ of seeds in
$E$ such that for each $s=\left(v,C_{v}\right)\in S$, one has that
$\pi(s)=b$. The map $\pi$ can be decomposed into a pair $\pi=\left(\pi_{g},\pi_{c}\right)$
such that, for every $\gamma\in C_{v}$ one has that $\pi\left(v,\gamma\right)=\left(\pi_{g}\left(v\right),\pi_{c}\left(\gamma\right)\right)$
with $\pi_{c}\left(\gamma\right)\in C_{b}$. That is, $\pi_{g}$
maps the germs of $E$ to the germs of $B$ and $\pi_{c}$ maps the
connectors in $E$ to specific connectors in $B$. $\diamond$
\end{defn*}
The next figure illustrates both the projection of germs, and of connectors.
It tries to capture the notion that the projection is entire and consistently
defined.
\begin{figure}[h]
\caption{Germs and connectors project consistently}
\includegraphics[width=0.6\columnwidth]{project}
\end{figure}
The definition of a link needs to be generalized, and made consistent
with this final definition of a stalk.
\begin{defn*}
Two stalks $S_{1}$ and $S_{2}$ are \noun{connected} if there exists
a link between some seed $s_{1}\in S_{1}$ and some seed $s_{2}\in S_{2}$.
The stalks are \noun{consistently linked} if the projections of the
stalks are also linked in a fashion consistent with the projection.
That is, if $\left(v_{1},t_{a}\right)$ is the connector on $s_{1}$
that is connected to the connector $\left(v_{2},t_{b}\right)$ on
$s_{2}$, \emph{viz.} $v_{2}\in t_{a}$ and $v_{1}\in t_{b}$, then
$\left(\pi_{g}\left(v_{1}\right),\pi_{c}\left(t_{a}\right)\right)$
is connected to $\left(\pi_{g}\left(v_{2}\right),\pi_{c}\left(t_{b}\right)\right)$
. That is, $\pi_{g}\left(v_{2}\right)\in\pi_{c}\left(t_{a}\right)$
and $\pi_{g}\left(v_{1}\right)\in\pi_{c}\left(t_{b}\right)$ .$\diamond$
\end{defn*}
Recall that the original definition of a connector was such that it
could be used once and only once. This can become an issue, if it
is strictly enforced on the base space. It will become convenient
to remove this restriction on the base space, and replace it by a
use-count. That is, if two different links between stalks project
down to the same link in the base space, then the link in the base-space
should be counted ``with multiplicity''. This induces the notion
that maybe the base space can be used for statistics-gathering, and
that is exactly the intent.
\section*{Sheaves}
The stalk is meant to provide a framework with which to solve the
computational intractability problems associated with Bayesian networks,
by explicitly exposing the grammatical structure within them in such
a fashion that they can be explicitly manipulated. The intent is to
accomplish the hope expressed in the diagram below. To actually arrive
at a workable solution requires additional clarifications, examples,
and definitions. This hopeful figure \emph{must not be taken literally}:
one certainly does \emph{not} want the base space to be some Markov
network! That would be a disaster. Rather, the hope is to accumulate
a large number of graph fragments in such a way that the fragments
are apparent, but that the statistics of their collective behavior
is also accessible. The hope is that this can be done without overflowing
available CPU and RAM, while carefully maintaining fidelity to the
graph fragments. This is an example from linguistics, but one might
hope to do the same with activation pathways in cell biochemistry.
The citric acid cycle should be amenable to such a treatment, as well.
\begin{figure}[h]
\caption{The problem, and its intended solution}
\includegraphics[width=0.45\columnwidth]{sheaf}
\end{figure}
From the previous development, it should be clear that stalks capture
the local structure of graphs, and that the projection, carefully
done, can preserve the essence of that local structure. Enough mechanism
has been developed to allow the definition of a section to be understood
in a way that is in keeping with the usual notion of a section as
commonly defined in covering spaces and fiber bundles. A preliminary,
provisional definition of a sheaf can now be given.
\begin{defn*}
A sheaf is a collection of connected sections, together with a projection
function $\pi$ that can be taken to define an equivalence relation. That
is, $\pi$ maps sections to a base space $B$, such that, for each
pair of vertexes $v,w$ occurring in different sections, one has $\pi(v)=\pi(w)$
if and only if $v,w$ are in germs in the same stalk. $\diamond$
\end{defn*}
This provisional definition can be tightened. The formal definition
of a sheaf also requires that it obey a set of axioms, called the
gluing axioms. Before giving these, it is useful to look at an example.
\subsection*{Example: collocations}
A canonical first step in corpus linguistics is to align text around
a shared word or phrase:\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\begin{tabular}{rl}
& \texttt{fly like a butterfly}\tabularnewline
\texttt{airplanes that} & \texttt{fly}\tabularnewline
& \texttt{fly fishing}\tabularnewline
& \texttt{fly away home}\tabularnewline
& \texttt{fly ash in concrete}\tabularnewline
\texttt{when sparks} & \texttt{fly}\tabularnewline
\texttt{let's} & \texttt{fly a kite}\tabularnewline
\texttt{learn to} & \texttt{fly helicopters}\tabularnewline
\end{tabular}%
\end{minipage}
\medskip{}
Each word is meant to be a vertex; edges are assumed to connect the
vertexes together in some way. In standard corpus linguistics, the
edges are always taken to join together neighboring words, in sequential
fashion. Note that each phrase in the collocation obeys the formal
definition of a section, given above. It does so trivially: it's just
a linear sequence of vertexes connected with edges. If the collocated
phrases are chopped up so that they form a word-sequence that is exactly
$n$ words long, then one calls that sequence an $n$-gram.
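This chopping can be carried out mechanically. The sketch below (Python, purely illustrative) slides a fixed-length window over each phrase and keeps the windows containing a chosen pivot word, reproducing the alignment in the table above:

```python
# Slide a fixed-length window over each phrase; keep the windows that
# contain a chosen pivot word. This reproduces the collocation alignment.

def ngrams(words, n):
    """All contiguous n-word windows of a word list."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

phrases = [
    "fly like a butterfly",
    "airplanes that fly",
    "let's fly a kite",
]

pivot = "fly"
aligned = [g for p in phrases for g in ngrams(p.split(), 3) if pivot in g]
for g in aligned:
    print(" ".join(g))
```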
The projection function $\pi$ is now also equally plain: it simply
maps all of the distinct occurrences of the word ``fly'' down to
a single, generic word ``fly''. The stalk is just the vertical arrangement
of the word ``fly'', one above another. Each phrase or section can
be visualized as a botanical branch or botanical leaf branching off
the central stalk. The projection of all of the stalks obtained from
collocation is shown below, in figure \ref{fig:N-gram-base}. Identical
words are projected down to a common base point. Links between words
are projected down to links in the base space. For ordinary $n$-grams,
the links are merely the direct sequential linking of neighboring
words. The figure depicts the base-space of the sheaf obtained from
$n$-grams.
\begin{figure}[h]
\caption{N-gram corpus text alignment\label{fig:N-gram-base}}
\includegraphics[width=0.5\columnwidth]{corpus-ngram}
\end{figure}
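The projection just described is also easy to sketch in code: every occurrence of a word collapses to a single base vertex, and every sequential ($n$-gram) link collapses to a base edge, counted with multiplicity. The phrase list is just the running example:

```python
from collections import Counter

phrases = [
    "fly like a butterfly",
    "airplanes that fly",
    "fly fishing",
    "fly away home",
]

# pi_g: collapse every occurrence of a word to one base vertex.
# pi_c: send every sequential link to a base edge, counted with multiplicity.
vertex_count = Counter()
edge_count = Counter()
for p in phrases:
    words = p.split()
    vertex_count.update(words)
    edge_count.update(zip(words, words[1:]))

print(vertex_count["fly"])             # number of seeds in the "fly" stalk
print(edge_count[("fly", "fishing")])  # multiplicity of a base edge
```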
The sections do not have to be linear sequences; the phrases can be
parsed with a dependency parser of one style or another, in which
case the words are joined with edges that denote dependencies. The
edges might be directed, and they might be labeled. Parsing with a
head-phrase parser introduces additional vertexes, typically called
NP, VP, S and so on. The next figure (figure \ref{fig:Dependency-base})
shows the projection that results from alignment on an (unlabeled,
undirected) dependency parse of the text. As before, each stalk is
projected down to a single word, and the links are projected down
as well. The most noticeable difference between this base space and
the N-gram base space is that the determiner ``a'' does not link
to ``fly'' even though it stands next to it; instead, the determiner
links to the noun it determines. This figure also shows ``ash''
as modifying ``fly'', which, as a dependency, is not exactly correct
but does serve to illustrate how the N-gram and the dependency alignments
differ. If the dependency parse produced directed edges with labels,
it would be prudent to project those labels as well.
\begin{figure}[h]
\caption{Dependency parse corpus text alignment\label{fig:Dependency-base}}
\includegraphics[width=0.75\columnwidth]{corpus-dep}
\end{figure}
Both of the figures \ref{fig:N-gram-base} and \ref{fig:Dependency-base}
depict a quotient graph that results from a corpus alignment, where
all uses of a word have been collapsed (projected down) to a single
node, and all links connecting the words are likewise projected. The
resulting graph can be understood to depict all possible connections
in a natural language. In some sense, it captures important structural
information in natural language.
Be careful, though: these base spaces are just the projections of
the sheaf; they are not the sheaf itself. It's as if a flashlight were
held above the stalks: the base space is the shadow that is cast.
The sheaf is the full structure, the base space is just the shadow.
\subsection*{Are projections useful?}
Yes. A collapsed graph like those above might appear strange; why
would one want to do that, if one has individual sentence data?
By collapsing in this way, one obtains a natural place to store \href{https://en.wikipedia.org/wiki/Marginal_distribution}{marginal distributions}.
For example, when accumulating statistics for large collections of
sentences, the projected vertex becomes an ideal place to store the
frequency count of that word; the projected edge becomes an excellent
place to store the joint probability or the mutual information for
a pair of words. The projected graph (the quotient graph) is manageable
in size. For example, in a corpus consisting of ten million sentences,
one might see 130K distinct, unique words (130K vertexes) and perhaps
5 million distinct word-pairs (5M edges). Such a graph is manageable,
and can fit into the RAM of a contemporary computer.
By contrast, storing the individual parses for 10 million sentences
is more challenging. Assuming 15 words per sentence, this requires
storing 150M vertexes, and approximately 20 links per sentence for
200M edges. This graph is two orders of magnitude larger than the
quotient graph. One could, of course, apply various programming and
coding tricks to squeeze and compress the data, but this misses the
point: It makes sense to project sections down to the base space as
soon as possible. The original sections can be envisioned to still
be there, virtually, in principle, but the actual storage can be avoided.
Every graph can be represented as an adjacency matrix. In this example,
it would be a sparse matrix, with 5 million non-zero entries out of
130K$\times$130K total. The sparsity is considerable: $\log_{2}\left(130\mbox{K}\times130\mbox{K}/5\mbox{M}\right)=\log_{2}\left(3380\right)\approx11.7$.
Less than one in a thousand of all possible edges are actually observed.
The marginals stored with the graph can be accessed as marginals on
the adjacency matrix. That is, they are marginals in the ordinary
sense of values written in the margin of the matrix. Standard linear-algebra
and data-analysis tools, such as the R programming language, can access
the matrix and the marginals.
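To make the role of the marginals concrete, here is a small sketch that accumulates edge counts in the base space and then computes the pointwise mutual information of an edge directly from the stored marginals. The corpus, and the choice of pointwise mutual information as the example statistic, are illustrative only:

```python
from collections import Counter
from math import log2

phrases = ["the dog chased the cat", "the cat chased the mouse",
           "the dog chased the squirrel"]

# Accumulate edge counts in the base space.
pair = Counter()
for p in phrases:
    w = p.split()
    pair.update(zip(w, w[1:]))

N = sum(pair.values())           # total number of observed links
left, right = Counter(), Counter()
for (a, b), c in pair.items():   # the marginals, stored "in the margins"
    left[a] += c
    right[b] += c

def pmi(a, b):
    """Pointwise mutual information of an edge, from the stored marginals."""
    return log2(pair[(a, b)] * N / (left[a] * right[b]))

print(pmi("the", "dog"))
```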
\subsection*{Visualizing Sheaves}
One way of visualizing the sheaf is as a stack of sheets of paper,
with one sentence written on each sheet. The papers are stacked in
such a way that words that are the same are always arranged vertically
one above another. This stacking is where the term ``sheaf'' comes
from. Each single sheet of paper is a section. Each collocation is
a stalk.
\begin{figure}
\caption{A Sheaf of Stalks; a Sheaf of Paper}
\includegraphics[width=0.35\columnwidth]{sheaf-of-stalks.jpg}\includegraphics[width=0.51\columnwidth]{sheaf-of-papers.jpg}
\end{figure}
A different example can be taken from biochemistry. There, one might
want to write down specific pathways or interaction networks on the
individual sheets of paper, treating them as sections. If one specific
gene is up-regulated, one can then try to view everything else that
changed as belonging to the same section, as if it were an activation
mode within the global network graph of all possible interactions.
Thus, for example, the Krebs cycle can be taken to be a single section
through the network: it shows exactly which coenzymes are active in
aerobic metabolism. The same substrates, products and enzymes may
also participate in other pathways; those other pathways should be
considered as other sections through the sheaf. Each substrate, enzyme
or product is itself a stalk. Each reaction type is a seed.
The sheaf, its decomposition into sections, and its projection down
to a single unified base network, provides a holistic view of a network
of interactions. For linguistic data, activations or modes of the
network correspond to grammatically valid sentences. For biological
data, an activated biological pathway is a section. The base space
provides a general map of biochemical interactions; it does not capture
individual activations. The individual sections in the sheaf do capture
that activation.
\subsection*{Feature Vectors}
It is important to understand that, in many ways, stalks can be treated
as vectors, and, specifically as the ``feature vectors'' of data-mining.
This is best illustrated with an example.
Consider the corpus ``the dog chased the cat'', ``the cat chased
the mouse'', ``the dog chased the squirrel'', ``the dog killed
the chicken'', ``the cat killed the mouse'', ``the cat chased
the cockroach''. There are multiple stalks here, but the ones of
interest are the stalk for the dog:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\begin{tabular}{rl}
\texttt{the} & \texttt{dog chased the cat}\tabularnewline
\texttt{the} & \texttt{dog chased the squirrel}\tabularnewline
\texttt{the} & \texttt{dog killed the chicken}\tabularnewline
\end{tabular}%
\end{minipage}
\medskip{}
and the stalk for the cat:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\begin{tabular}{rl}
\texttt{the dog chased the} & \texttt{cat}\tabularnewline
\texttt{the} & \texttt{cat chased the mouse}\tabularnewline
\texttt{the} & \texttt{cat killed the mouse}\tabularnewline
\texttt{the} & \texttt{cat chased the cockroach}\tabularnewline
\end{tabular}%
\end{minipage}
\medskip{}
One old approach to data mining is to trim these down to 3-grams,
and then compare them as feature vectors. The 3-gram feature vector
for the dog is:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\begin{tabular}{rll}
\texttt{the} & \texttt{dog chased} & ; 2 observations\tabularnewline
\texttt{the} & \texttt{dog killed } & ; 1 observation\tabularnewline
\end{tabular}%
\end{minipage}
\medskip{}
and the 3-gram stalk for the cat is:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\begin{tabular}{rll}
\texttt{chased the} & \texttt{cat} & ; 1 observation\tabularnewline
\texttt{the} & \texttt{cat chased } & ; 2 observations\tabularnewline
\texttt{the} & \texttt{cat killed } & ; 1 observation\tabularnewline
\end{tabular}%
\end{minipage}
\medskip{}
These are now explicitly vectors, as the addition of the observation
count makes them so. The vertical alignment reminds us that they are
also still stalks, and that the vector comes from collocations.
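The trimming to 3-grams can be done mechanically: for each occurrence of the word of interest, take the 3-word window centered on it (shifted at a sentence boundary) and replace the word by a wild-card. The sketch below reproduces the two tables above; the wild-card is nothing more than a placeholder string:

```python
from collections import Counter

corpus = ["the dog chased the cat", "the cat chased the mouse",
          "the dog chased the squirrel", "the dog killed the chicken",
          "the cat killed the mouse", "the cat chased the cockroach"]

def feature_vector(word, sentences, n=3):
    """One n-gram per occurrence of `word`, centered where possible,
    with the word itself replaced by the wild-card '*'."""
    vec = Counter()
    for s in sentences:
        w = s.split()
        for i, x in enumerate(w):
            if x == word:
                start = min(max(i - 1, 0), len(w) - n)
                gram = w[start:start + n]
                vec[tuple('*' if y == word else y for y in gram)] += 1
    return vec

dog = feature_vector("dog", corpus)
cat = feature_vector("cat", corpus)
print(dog)   # matches the 3-gram table for "dog" above
print(cat)   # matches the 3-gram table for "cat" above
```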
Recall how a vector is defined. One writes a vector $\vec{v}$ as
a sum over basis elements $\hat{e}_{i}$ with (usually real-number)
coefficients $a_{i}$:
\[
\vec{v}=\sum_{i}a_{i}\hat{e}_{i}
\]
The basis elements $\hat{e}_{i}$ are unit-length vectors. Another
common notation is the bra-ket notation, which says the same thing,
but in a different way:
\[
\vec{v}=\sum_{i}a_{i}\left|i\right\rangle
\]
The bra-ket notation is slightly easier to use for this example. The
above 3-gram collocations can be written as vectors. The one for dog
would be
\[
\overrightarrow{dog}=2\left|the\;*\;chased\right\rangle +\left|the\;*\;killed\right\rangle
\]
while the one for cat would be
\[
\overrightarrow{cat}=\left|chased\;the\;*\right\rangle +2\left|the\;*\;chased\right\rangle +\left|the\;*\;killed\right\rangle
\]
The $*$ here is the wild-card; it indicates where ``dog'' and ``cat''
should go, but it also indicates how the basis vectors should be treated:
the wild-card helps establish that dogs and cats are similar. It allows
the basis vectors to be explicitly compared to one-another. The ability
to compare these allows the dot product to be taken.
Recall the definition of a dot-product (the inner product). For $\vec{v}$
as above, and $\vec{w}=\sum_{i}b_{i}\hat{e}_{i}$, one has that
\[
\vec{v}\cdot\vec{w}=\sum_{i}\sum_{j}a_{i}b_{j}\hat{e}_{i}\cdot\hat{e_{j}}=\sum_{i}\sum_{j}a_{i}b_{j}\delta_{ij}=\sum_{i}a_{i}b_{i}
\]
where the Kronecker delta was used in the middle term:
\[
\hat{e}_{i}\cdot\hat{e_{j}}=\delta_{ij}=\begin{cases}
1 & \mbox{if }i=j\\
0 & \mbox{if }i\ne j
\end{cases}
\]
Thus, the inner product of $\overrightarrow{cat}$ and $\overrightarrow{dog}$
can be computed:
\[
\overrightarrow{cat}\cdot\overrightarrow{dog}=0\cdot1+2\cdot2+1\cdot1=5
\]
One common way to express the similarity of $\overrightarrow{cat}$
and $\overrightarrow{dog}$ is to compute the cosine similarity. The
angle $\theta$ between two vectors is given by
\[
\cos\theta=\vec{v}\cdot\vec{w}/\left|\vec{v}\right|\left|\vec{w}\right|
\]
where $\left|\vec{v}\right|=\sqrt{\sum_{i}a_{i}^{2}}$ is the length
of $\vec{v}$. Since $\left|\overrightarrow{cat}\right|=\sqrt{6}$
and $\left|\overrightarrow{dog}\right|=\sqrt{5}$ one finds that
\[
\cos\theta=\frac{5}{\sqrt{30}}\approx0.913
\]
That is, dogs and cats really are similar.
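This arithmetic is easy to verify mechanically. Representing each feature vector as a map from basis element to coefficient, the inner product, the lengths, and the cosine come out exactly as above:

```python
from math import sqrt

# The 3-gram feature vectors from the text, as {basis element: coefficient}.
dog = {"the * chased": 2, "the * killed": 1}
cat = {"chased the *": 1, "the * chased": 2, "the * killed": 1}

def dot(v, w):
    """Inner product: matching basis elements multiply; all other pairs
    are orthogonal and contribute zero (the Kronecker delta)."""
    return sum(a * w.get(k, 0) for k, a in v.items())

def cosine(v, w):
    return dot(v, w) / (sqrt(dot(v, v)) * sqrt(dot(w, w)))

print(dot(cat, dog))               # 5
print(round(cosine(cat, dog), 3))  # 0.913
```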
If one was working with a dependency parse, as opposed to 3-grams,
and if one used the Frobenius algebra notation such as that used by
Kartsaklis in \cite{Kart2014}, then one would write the basis elements
as a peculiar kind of tensor, and one might arrive at an expression
roughly of the form
\[
\overline{dog}=2\left(\overleftarrow{the}\otimes\overrightarrow{chased}\right)+1\left(\overleftarrow{the}\otimes\overrightarrow{killed}\right)
\]
and
\[
\overline{cat}=\left(\overleftarrow{chased}\otimes\overleftarrow{the}\right)+2\left(\overleftarrow{the}\otimes\overrightarrow{chased}\right)+1\left(\overleftarrow{the}\otimes\overrightarrow{killed}\right)
\]
Ignoring the differences in notation (ignoring that the quantities
in parentheses are tensors), one can clearly see that these are still
feature vectors. Focusing on the vector aspect only, these represent
the same information as the 3-gram feature vectors. They're the same
thing. The dot products are the same, the vectors are the same. The
difference between them is that the bra-ket notation was used for
the 3-grams, while the tensor notation was used for the dependency
parse. The feature vectors can also be written using the link-grammar-inspired
notation: \\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{dog: {[}the- \& chased+{]}2 or {[}the- \& killed+{]}1;}
\textsf{cat: {[}chased- \& the-{]}1 or {[}the- \& chased+{]}2 or {[}the-
\& killed+{]}1;}%
\end{minipage}\\
\medskip{}
The notation is different, but the meaning is the same. The above
gives two feature vectors, one for dog, and one for cat. They happen
to look identical to the 3-gram feature vectors because this example
was carefully arranged to allow this. In general, dependency parses
and 3-grams are going to be quite different; for these short phrases,
they happen to superficially look the same. In any of these cases,
and in any of these notations, the concept of feature vectors remain
the same.
\subsection*{Stalk fields and vector fields}
The figures \ref{fig:N-gram-base} and \ref{fig:Dependency-base}
illustrate the base space. Above each point in the base space, one
can, if one wishes, plant a stalk.
\begin{figure}[h]
\caption{Corn field; stalk field}
\includegraphics[width=0.4\columnwidth]{corn-field.jpg} \qquad{}\includegraphics[width=0.25\columnwidth]{stalk-field}
\end{figure}
Such a plantation is not a sheaf; or rather it could be, but it is
not one with large sections. The stalk field only has individual
seeds up and down each stalk; the stalks are not linked to one-another.
In the general case, illustrated in figure \ref{fig:General-Sheaf},
the stalks are linked to one-another; the sections really do start
to resemble sheets of paper stacked one on top another.
\begin{figure}[h]
\caption{Sheaves have big sections, in general\label{fig:General-Sheaf}}
\includegraphics[width=0.45\columnwidth]{section-field}
\end{figure}
The general sheaf, as depicted here, holds much more data than just
the base space. It holds the data showing where the base space came
from: how the base space was a projection of sections. Holding such
a large amount of data might be impractical: in the previous example,
holding the parse data for 10 million individual, distinct sentences
might be a challenge. The stalk field is meant to be a half-way point:
it can hold more information than the base alone, but still be computationally
manageable. For example, the same dataset discussed previously, containing
10 million sentences composed of 130K words, has been found to contain
6 million seeds; these are observed on average 2.5 times each,
although the distribution is roughly Zipfian: a few are observed hundreds
of thousands of times, and more than a third are observed only once.
A particular appeal of the stalk field is that each stalk can be re-interpreted
as a vector. For each point of the base space, one just attaches a
single vector. There is no additional structure, and all this talk
of stalks can be brushed away as just a layer of theoretical complexity:
in the end, it's just per-base-point feature vectors.
The power of the stalk representation is to keep in mind that the
basis elements are not just vacuous items, but are in fact jigsaw-puzzle
pieces that can be connected to one-another. Again, each stalk can
be viewed as a stack of jigsaw-puzzle pieces.
If there is a vector at each point, can the sheaf, as described here,
be thought of as a fiber bundle? Maybe, but that is not the intent.
In a fiber bundle, each fiber is isomorphic to every other. Thus,
locally, a fiber bundle always looks like the product space $U\times F$
with $U\subseteq B$ and $F$ the fiber. Fiber bundles are interesting
when they are glued together in non-trivial ways, globally. Here,
there's a different set of concerns: it's the local structure that
is interesting, and not so much the global structure. Also, there
has been no attempt to make each stalk (or stalk-space) isomorphic
to every other. If each stalk is a vector in a vector space, one could,
in principle, force that vector space to be the same, everywhere.
This does not buy much: in the practical case, the support for any
given vector is extremely sparse.
In some cases, it is natural to have different stalks be incomparable.
In biology, some stalks may correspond to enzymes, others to RNA,
others to DNA. In some vague philosophical sense, it could be argued
that these are ``all the same'': examples of molecules. In practice,
forcing such unification seems to be a losing proposition. The goal
of the technology here is to detect, observe and model fine details
of structure, and not to mash everything into one bag.
\subsection*{Presheaves}
The formal definition of a sheaf entails a presentation of the so-called
``gluing axioms''. These are technical requirements that ensure
that the stalks can be linked, and sections projected in a ``common
sense'' kind of fashion. For example, if a section contains a sentence,
one expects that the sentence is grammatical. One also expects to
be able to extract phrases out of it. Gluing sentences together, one
expects to arrive at coherent paragraphs. In a biochemical setting,
one expects that all of the individual reactions in a pathway fit
together. One expects to be able to talk about subsets of the full
pathway without obtaining nonsense. This is just common sense.
Unfortunately, ``common sense'' being a commodity in short supply,
the gluing axioms must be written in detail. Before this can be done,
the axioms for a presheaf must be reviewed. There are several. Rather
than presenting these as axioms, they are presented below as ``claims''.
It is up to the reader to verify that the structures defined earlier
satisfy these claims. This is done for several reasons. First, such
proofs are a bit tedious, and would be out of place in this otherwise
rather informal treatment of the topic. Second, the overall informality
of this document gives little support for weighty proofs. Third, most
of these claims should be fairly self-evident, upon a bit of exploration.
Finally, many choices were left to the reader: should edges be directed?
Are they labeled? Do vertexes carry additional markings or values?
Each choice of labeling and marking potentially affects the verification
of these claims. Thus, the below are presented as ``claims'', living
in limbo between axioms and theorems.
First, a definition.
\begin{defn*}
An \noun{open subgraph} $U$ of a graph $G$ is defined to be a section
of $G$. $\diamond$
\end{defn*}
This definition helps avoid what would otherwise be confusing terminology.
The open subgraphs below will always be subgraphs of the base space
$B$. The open subgraphs are created by taking scissors and cutting
edges in the graph, but leaving the cut half-edges attached, as they
were originally. That is, the cut edges are converted into connectors.
By leaving these connectors in place, much of the information needed
to glue them back together remains intact. It is up to the reader
to convince themselves that these open subgraphs behave essentially
the same way as open sets in a topological space do: one can take
intersections and unions, and doing so still results in an open subgraph.
One can even build a Borel algebra out of them, but this will not be
needed.
The presheaf is defined in terms of a functor and its properties.
\begin{claim*}
There exists a functor $F$ such that, for each open subgraph $U$
of the base graph $B$, there exists some collection $F(U)$ of sections
above $U$. $\diamond$
\end{claim*}
Next, the restriction morphism, which cuts down or restricts this
collection.
\begin{claim*}
For each open subgraph $V\subseteq U$ there is a morphism $\mbox{res}_{V,U}:F\left(U\right)\to F\left(V\right)$.
$\diamond$
\end{claim*}
Since $V$ is smaller, we expect $F\left(V\right)$ to be smaller,
also. The restriction morphism trims away the unwanted parts. The
trimming needs to stay faithful, to preserve the structure. Thus
\begin{claim*}
For every open subgraph $U$ of the base graph $B$, the restriction
morphism $\mbox{res}_{U,U}:F\left(U\right)\to F\left(U\right)$ is
the identity on $F\left(U\right)$. $\diamond$
\end{claim*}
The restrictions must compose in a natural way, as well, so that if
one trims a bit, then trims a bit more, it's the same as doing it all
at once.
\begin{claim*}
For a sequence of open subgraphs $W\subseteq V\subseteq U$, the restrictions
compose so that $\mbox{res}_{W,V}\circ\mbox{res}_{V,U}=\mbox{res}_{W,U}$.
$\diamond$
\end{claim*}
If a system obeys the above, it is technically called a \noun{presheaf}.
A presheaf is much like the (informal) definition given for a sheaf,
above. However, it is possible to create structures that satisfy the
above claims (axioms), but don't quite match the intended definition
of a sheaf. In particular, the above are not enough to guarantee that
the sections in the presheaf can be organized properly into stalks.
To get well-behaved stalks, more is needed. These are the gluing axioms.
\subsection*{Gluing axioms}
The open subgraphs behave much like open sets. Thus, the concept of
an open covering can be imported in a straight-forward way. A collection
$\left\{ U_{i}\right\} $ of open subgraphs is an open cover for an
open subgraph $U$ if the union of all the $U_{i}$ contains $U$.
That is, they are an open cover if $U\subseteq\bigcup_{i}U_{i}$.
The union of open subgraphs is meant to be ``obvious'': join together
the connectors, where possible.
A presheaf is a sheaf if it obeys the following two claims/axioms.
\begin{claim*}
(Locality) If $\left\{ U_{i}\right\} $ is an open cover for $U$,
and if $s,t\in F\left(U\right)$ are sections such that $s\vert_{U_{i}}=t\vert_{U_{i}}$
for each $U_{i}$, then $s=t$. $\diamond$
\end{claim*}
In the above, the notation $s\vert_{V}$ denotes the restriction of
the section $s$ to the open subgraph $V$ of the base space $B$.
Pictorially, $s\vert_{V}$ is that part of the section that sits on
the stalks above $V$. It is a trimming-down of $s$ so that it projects
cleanly down to $V$ and to nothing larger. If each $U_{i}$ is a
seed in the base space, then $s\vert_{U_{i}}$ is a seed in the stalk
above $U_{i}$. Note that $s\vert_{U_{i}}$ might be the empty set.
The locality axiom is basically saying ``stalks exist''. Alternately,
the locality axiom says that if you cut up a layer-cake, you can still
tell, after the cutting, which layer was which.
The gluing axiom is needed to reassemble the pieces.
\begin{claim*}
(Gluing) If $\left\{ U_{i}\right\} $ is an open cover for $U$, and
if $s_{i}\in F\left(U_{i}\right)$ are sections restricted to each
$U_{i}$, and if, for all pairs $i,j$ the $s_{i}$ and $s_{j}$ agree
on overlaps, then there exists a section $s\in F\left(U\right)$ such
that $s_{i}=s\vert_{U_{i}}$. $\diamond$
\end{claim*}
In the above, the phrase ``$s_{i}$ and $s_{j}$ agree on overlaps''
means that $s_{i}\vert_{U_{i}\cap U_{j}}=s_{j}\vert_{U_{i}\cap U_{j}}$.
Note that $U_{i}\cap U_{j}$ might be the empty set, in which case
no agreement is needed. The gluing axiom states, more or less, that
if the layer cake is cut into pieces, and the pieces can be reassembled
with the edges lining up correctly, then the original layers can be
re-discovered.
Gluing is perhaps not as trivial as it sounds. It will be seen later
on that gluing is essentially the same thing as parsing. Obtaining
a successful parse is the same thing as assembling a valid section
out of the parts. In the case of natural language, a parse succeeds
if and only if a sentence is grammatically valid. But of course! The
sections of a natural language sheaf are exactly the grammatical sentences.
Until this more detailed presentation of parsing is described, one
can imagine the following scenario. If seeds correspond to jigsaw-puzzle
pieces, then the sections $s_{i}$ correspond to partially-assembled
parts of the jigsaw. Two such parts $s_{i}$ and $s_{j}$ agree on
overlaps if $U_{i}\cap U_{j}$ is non-empty, and these two parts can
be joined together. If the connectors are typed, then there may be
multiple distinct connectors that can be joined to one-another. They
just might fit. That is, there might be more than one way to make
$s_{i}$ and $s_{j}$ connect, possibly by shifting or turning the
pieces. If one then tried to connect $s_{k}$, there might be
multiple ways of doing this, leading to a combinatorial explosion.
At some point in this process, one might discover that there is simply
no way at all to connect the next piece: it just won't fit. One then
has to back-track, and try a different arrangement. Obtaining an efficient
algorithm to perform this back-tracking is non-trivial: such algorithms
are called parsers, and gluing is parsing.
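A toy version of this gluing-as-parsing can be written down directly. The connector types below (D, S, O, vaguely in the spirit of the link-grammar notation used earlier) form a hypothetical miniature lexicon, not anyone's actual grammar, and the no-crossing constraint of real link grammars is ignored for brevity. A `+' connector must mate with a same-typed `-' connector on a word to its right; the backtracking search either glues all the pieces into a section, or reports that they won't fit:

```python
# Toy "gluing as parsing": seeds are words with typed connectors. A '+'
# connector pairs with a same-typed '-' connector on a word to its right.
# Backtracking searches for a complete pairing of all connectors.

lexicon = {  # hypothetical miniature lexicon, for illustration only
    "the":    [("D", "+")],
    "dog":    [("D", "-"), ("S", "+")],
    "cat":    [("D", "-"), ("O", "-")],
    "chased": [("S", "-"), ("O", "+")],
}

def parses(words):
    """True if every connector can be glued to a partner."""
    pending = [(i, t, d) for i, w in enumerate(words)
                         for (t, d) in lexicon[w]]

    def solve(pend):
        if not pend:
            return True          # all connectors paired: a valid section
        i, t, d = pend[0]
        rest = pend[1:]
        for k, (j, t2, d2) in enumerate(rest):
            if t == t2 and {d, d2} == {"+", "-"}:
                plus, minus = (i, j) if d == "+" else (j, i)
                if plus < minus and solve(rest[:k] + rest[k + 1:]):
                    return True
        return False             # no piece fits: back-track

    return solve(pending)

print(parses("the dog chased the cat".split()))  # True: glues into a section
print(parses("dog the the chased cat".split()))  # False: pieces won't fit
```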
\subsection*{Does this really work?}
The sheaf axioms presented above are standardized and are presented
in many books. See, for example, Eisenbud \& Harris\cite{Eisenbud2000}
or Mac Lane \& Moerdijk\cite{MacLane1992}. The point of the above
is to convince the reader that the structures being described really
are sheaves, in the formal sense of the word. There's a big difference
though: everything above was developed from the point of view of graphs,
and that really does change the nature of the game. That said, the
reason that all of this machinery ``works'' is because the open
subgraphs really do behave very much like open sets. Because of this,
many concepts from topology extend naturally to the current structures.
This is not exactly a new realization. The ``open subgraphs'' defined
here essentially form a \href{https://en.wikipedia.org/wiki/Grothendieck_topology}{Grothendieck topology,}
and the thing that is being called a ``sheaf'' should probably be
more accurately called a ``site''. Developing and articulating this
further is left for a rainy day.
It is worth noting at this point that the normal notion of a ``germ''
in sheaf theory corresponds to what is called a ``seed'', here.
I suppose that the vocabulary used here could be changed, but I do
like thinking of seeds as sticky burrs. The biological germ of a seed
is that thing left, when the outer casing is removed.
The use of the jigsaw-puzzle piece analogy to define connectors is
strongly analogous to the construction of the \href{https://en.wikipedia.org/wiki/Nerve_of_a_covering}{Čech nerve}.
This can be thought of as a way of inducing overlaps from fiber products.
This point is returned to, later on.
\subsection*{Cohomology}
In orthodox mathematics, the only reason that sheaves are introduced
is to promptly usher the reader to Čech cohomology in the next chapter
of any book on algebraic topology. That won't be done here, so what's
the point of all this?
Well, this won't be done here mostly because I'm running out of space,
and, in the context of biology and linguistics, this is uncharted
territory. But some comments are in order. First, if the point of
this was merely to get at graph theory, there would not be much to
say. For example, the homotopy theory of graphs is more-or-less boring:
every graph is homotopy equivalent to a bouquet of circles. Homotopy and homology
on graphs only becomes interesting if one can add 2-cells and $n$-cells
for $n>1$; then one gets cellular homology. Can that ever happen
here?
If one considers biochemistry, and use the Krebs cycle (the citric
acid cycle) as an example, then the answer is yes. This is a loop;
it's essentially exothermic, or a kind of pump, in that the loop always
goes around in one direction. The edges are directional. It's a cycle
not only in a biological sense, but also in the mathematical sense:
it can be considered to be the boundary of a 2-cell. The Krebs cycle
is not the only cycle in biochemistry, and many of these cycles share
common edges. In essence, there's a whole bunch of 2-cells in biochemistry,
and they're all tangent to one-another. That is, there are chain complexes
in biochemistry. Is there interesting homology? Perhaps not, as this
would require some 2-cells to run ``backwards'', and that seems
unlikely. That would imply that there are no 3-cells in biochemistry.
But who knows; we have not had the tools to ``solve biochemistry''
before.
What about linguistics? Examples here seem to be more forced. Yes,
dependencies can be directional. Dependency parses, however, are trees.
One can relax them to allow reentrancy, but the resulting structures
remain acyclic (\emph{viz.} a ``DAG'' - a directed acyclic graph).
There are no obviously cyclic
phenomena in natural language.
\subsection*{Why sheaves?}
By pointing out that natural language and biology can be described
with sheaves, it is hoped that this will provide better insight into
their structure, and a clear framework for thinking about the
structure of such data.
For example, consider the normally vague idea of the ``language graph''.
What is this? One has dueling notions: the graph of all sentences;
the generative power of grammars. Sheaves provide a clearer picture:
the graph itself is the base space, while surface and deep structure
can be explored through sections.
It can be argued that orthodox corpus linguistics studies the sheaf
of surface structure, with especially strong focus on the stalks.
Differences in the stalks reveal differences between regional dialects.
Much more interesting is that the corpus linguists have analyzed stalks
to discover not just differences in socio-economic status, but even
to discover politically-motivated speech, truth and lack-thereof in
journalism and news media.\cite{Louw2007}
The orthodox corpus linguists are not interested in refining their
collocations into a generative grammar. One does not obtain a generative
model of how different speakers in different socio-economic classes
speak; corpus linguistics examples are just that: examples that are
not further refined. By applying a pattern mining approach, the underlying
grammar can be discovered computationally. By viewing structure holistically,
as a sheaf, one can see ways in which this might be done.
Besides the sheaf of surface realizations studied by corpus linguists,
there are several different kinds of sheaves of grammatical structure.
Each section is a grammatically valid sentence, expressed as a tree
or as a DAG (directed acyclic graph) of some sort, annotated with
additional information, based on the formalities of that particular
grammatical approach (dependency grammar, head-phrase-structure grammar,
etc). The orthodox approach is to view the grammar as being the primary
object of study. The sheaf approach helps emphasize how that grammar
was arrived at: distinct words were grouped into grammatical classes.
Put differently, distinct stalks are recognized as being very similar,
if not identical, and are merged together to form a grammatical category;
it is no longer individual words that link with one-another, but the
grammatical classes.
Viewing language as a sheaf helps identify how one can automatically
extract grammatical classes: If one can judge two stalks as being
sufficiently similar in some way, then one can merge them into one,
proceeding in this way to create a reduced, concentrated model of
language that captures its syntactic structure.
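The merging step just described can be sketched in a few lines of Python. This is a minimal illustration, not a proposed algorithm: the encoding of a stalk as a \textsf{Counter} of neighboring words, the greedy merge order, and the 0.9 similarity threshold are all assumptions made purely for the sketch.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors (stalks)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def merge_similar_stalks(stalks: dict, threshold: float = 0.9) -> dict:
    """Greedily merge word-stalks whose cosine similarity exceeds threshold."""
    merged = {}
    for word, stalk in stalks.items():
        for rep, (members, acc) in merged.items():
            if cosine(stalk, acc) >= threshold:
                members.append(word)
                acc.update(stalk)   # pool the neighbor counts
                break
        else:
            merged[word] = ([word], Counter(stalk))
    return {tuple(m): acc for m, acc in merged.values()}

# Toy corpus: "plate" and "dish" occur in near-identical contexts.
stalks = {
    "plate": Counter({"the": 5, "on": 3, "washed": 2}),
    "dish":  Counter({"the": 4, "on": 3, "washed": 2}),
    "ran":   Counter({"dog": 3, "home": 2}),
}
print(merge_similar_stalks(stalks))
```

The merged entry pools the neighbor counts of its members, so it can itself be compared against further candidates; this is the ``reduced, concentrated model'' in miniature.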
One can do even more: one can play off the differences in regional
dialects, or differences due to social-economic classes, discovered
by statistical means from a corpus, and attach these to specific grammatical
structures, identified from syntactic analysis. That is, by seeing
both activities - surface realization and deeper structure - as two
slightly different forms of ``the same thing'', one can see-saw,
lever one's way about, moving from one to the other and back. Tools
can be developed that do both, instead of just one or just the other.
One can actually unify what seem to be very different theories and
approaches into one, and develop the techniques to move between
them. This seems to be a very big win.
\section*{Clustering morphisms}
The primary topic of this part is that the extraction of structure
from data is more-or-less a kind of morphism between sheaves. A ``pseudo-morphism''
might be a more appropriate term, as the definition here will not
be axiomatically precise.
There are several types of morphisms that are of interest. One kind
keeps the base space intact, but attempts to map one kind of section
into another: for example, mapping sections of $n$-grams into sections
of dependency parses. This resembles the orthodox concept of a morphism
between sheaves. The other kind of morphism is one that attempts to
re-arrange the base space, by grouping together multiple stalks into
one. This second kind of morphism is the one discussed in this part.
It is roughly termed a ``clustering morphism''.
There are several kinds of clustering morphisms that are interesting.
One was previously illustrated. Starting with
\[
\overrightarrow{Mary}\otimes\overline{walked}\otimes\overrightarrow{home}+\overrightarrow{Mary}\otimes\overline{ran}\otimes\overrightarrow{home}+\overrightarrow{Mary}\otimes\overline{drove}\otimes\overrightarrow{home}
\]
one wishes to deduce
\[
\overrightarrow{Mary}\otimes\left(\overline{walked}+\overline{ran}+\overline{drove}\right)\otimes\overrightarrow{home}
\]
This seems to be relatively straight-forward to accomplish, as it
looks like a simple application of the distributive law of multiplication
over addition. It is perhaps deceptive, as it presumes that the three
words ``walked'', ``ran'', ``drove'' do not appear anywhere
else in the sheaf.
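Under the (strong) assumption just stated - that the middle words appear nowhere else - the distributive rewrite can be sketched as a simple grouping by shared prefix and suffix. The function name \textsf{factor\_middle} and the encoding of sentences as word tuples are inventions for this sketch:

```python
def factor_middle(sentences: set) -> dict:
    """Group sentences by their shared (first word, last word) pair,
    collecting the middle words: the distributive-law rewrite
    Mary (x) walked (x) home + Mary (x) ran (x) home
      ->  Mary (x) {walked, ran} (x) home.
    Assumes each sentence has at least two words."""
    factored = {}
    for s in sentences:
        key = (s[0], s[-1])
        factored.setdefault(key, set()).add(s[1:-1])
    return factored

corpus = {
    ("Mary", "walked", "home"),
    ("Mary", "ran", "home"),
    ("Mary", "drove", "home"),
}
print(factor_middle(corpus))
```

The deceptiveness noted above shows up immediately: once the corpus also contains ``Adam ran home'' or ``Mary ran to work'', the grouping key is no longer unique, and the factorization problem turns into the diagonalization problem discussed next.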
A different example is that of forcing diagonalization where there
is none. Given a structure such as
\[
\left|Mary\right\rangle \otimes\left|walked\right\rangle +\left|Adam\right\rangle \otimes\left|ran\right\rangle
\]
one wishes to induce
\[
\left(\left|Mary\right\rangle +\left|Adam\right\rangle \right)\otimes\left(\left|walked\right\rangle +\left|ran\right\rangle \right)
\]
This resembles a grammar-school error: an inappropriate application
of the distributive law. But is it really? Part of the problem here
is that the notation itself is biased: the symbols $\otimes$ and
$+$ look like the symbols for multiplication and addition, and it
is deeply ingrained in us, from childhood - from grammar school - that
multiplication distributes over addition, but not the other way around. By using
these symbols, one introduces a prejudice into one's thinking; the
prejudice suggests that one operation is manifestly legal, while the
other is dubious and requires lots of justification.
This prejudice can run very deeply: in data-mining software, if not
in the theories themselves, the first operation might be hard-coded
into the software, into the theory, and assumed to be \emph{de facto}
correct. By contrast, the second relation seems to require data-mining,
and maybe lots of it: crunching immense, untold numbers of examples
to arrive at the conclusion that such a diagonalization is valid.
Perhaps reality is somewhere between these two extremes: the first
factorization should not be assumed, and, as a result, the second
diagonalization might not be so hard to discover. Perhaps induction
can be applied uniformly to both cases.
\subsection*{Induction}
The goal of machine learning in data science is the induction of the
factorization and diagonalization from a given dataset. Both examples
given above are misleading, because they ignore the fact that they
are embedded in a much larger corpus of language. How might these
two cases be induced from first principles, \emph{ab initio}, from
nothing at all, except for a bunch of examples?
One possibility is to start by looking for pair-wise correlations.
This works: that is how $\left|Mary\right\rangle \otimes\left|walked\right\rangle $
is discovered in the first place: these two words were collocated.
Likewise, for $\left|Adam\right\rangle \otimes\left|ran\right\rangle $.
But what about inducing diagonalization? Here, one observes that Mary
does lots of things, and so does Adam. Writing down the collocation
stalk for Mary, and the one for Adam should indicate that these two
stalks are quite similar. How can similarity be judged? The cosine
distance, previously reviewed, is a plausible way to start. One can
legitimately conclude that Adam and Mary belong in the same grammatical
category. What about ``walked'' and ``ran''? One can create a
stalk for these two as well, and it should not be hard to conclude,
using either cosine distance, or something else, that the two are
quite similar.
Great. Now what? Just because Adam and Mary are similar, and ``ran''
and ``walked'' are similar, this is still not quite enough to justify
the diagonalization. After all, ``Mary ran'' and ``Adam walked''
have not been observed; how can one justify that these will likely
be observed, which is the central claim that diagonalization is making?
The answer would need to be that certain cross-correlations are only
weakly seen. Define the set of named-things, and action-things, already
discovered: $names=\{Adam,Mary\}$ while $actions=\{ran,walked\}$.
Let the $\lnot$ symbol denote ``not'', so that $\lnot names$ is
the set of all things that are not $names$, and $\lnot actions$ denotes
all things that are not $actions$. Consider then the correlation matrix
\medskip{}
\begin{center}
\begin{tabular}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{$actions$} & \multicolumn{1}{c}{$\lnot actions$ }\tabularnewline
\cline{2-3}
$names$ & High & Low\tabularnewline
\cline{2-3}
$\lnot names$ & Low & n/a\tabularnewline
\cline{2-3}
\end{tabular}
\par\end{center}
\medskip{}
The entry ``High'' means that a large amount of correlation is observed,
while ``Low'' means that little is observed. Correlation can be
measured in many ways; mutual information and Kullback-Leibler divergence
are popular.
Why might this work? The point is that if $\lnot actions$ contains
words like $book$ or $tree$, then sentences like ``Mary book''
or ``Adam tree'' are not likely to be observed; if $\lnot names$
includes words like $green$ or $the$, then sentences like ``green
walked'' or ``the ran'' should be rare.
The correlation matrix embodies the very meaning of ``diagonalization'':
a matrix is diagonal, when the entries along the diagonal are large,
and the entries not on the diagonal are zero. Observing this structure
then justifies writing $names\otimes actions$, which is exactly what
one wanted to induce. Can one also validly claim that $\left(\lnot names\right)\otimes\left(\lnot actions\right)$?
Well, probably not. The correlation there might be low - pairs would
be inconsistent as to how compatible they are. It might be hard to
compute, and, in the current context, it seems not to be wanted.
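The diagonalization test can be sketched concretely, using raw pair counts in place of any particular correlation measure (mutual information would be a drop-in replacement). The quadrant labels and the dominance ratio below are assumptions of the sketch, not part of the theory:

```python
def quadrant_counts(bigrams, names, actions):
    """Count observed bigrams in each cell of the names x actions matrix.
    The (not-names, not-actions) cell is n/a, as in the table above."""
    counts = {"n,a": 0, "n,!a": 0, "!n,a": 0}
    for w1, w2 in bigrams:
        row = "n" if w1 in names else "!n"
        col = "a" if w2 in actions else "!a"
        if (row, col) != ("!n", "!a"):
            counts[f"{row},{col}"] += 1
    return counts

def diagonalizes(bigrams, names, actions, ratio=2.0):
    """Accept names (x) actions when the diagonal cell dominates the
    off-diagonal cells by the given ratio."""
    c = quadrant_counts(bigrams, names, actions)
    off = c["n,!a"] + c["!n,a"]
    return c["n,a"] >= ratio * max(off, 1)

# "Mary walked" and "Adam ran" are high; "Mary tree" is off-diagonal noise.
corpus = [("Mary", "walked"), ("Adam", "ran"), ("Mary", "tree")]
print(diagonalizes(corpus, {"Mary", "Adam"}, {"walked", "ran"}))
```

Observing that the matrix is diagonal-dominant is what licenses the inference that the unseen pairs ``Mary ran'' and ``Adam walked'' are likely to be observed.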
Can one induce factorization in the same way? Factorization, as given
above, seemed ``obvious'', but that was only due to the use of symbols
that prejudiced one's thinking. Factorization is, in fact, every bit
as non-obvious as diagonalization. The reason it seems so obvious
in the example was that the corpus ``Mary walked home'', etc. did
not include any sentences about Adam, nor anything about ``to the
store'', ``to work'', etc. Once these are included, factorization
starts to look a lot like diagonalization, if not exactly the same
thing. Inducing a subject-verb-object relationship can be done by
means of correlation, but is harder to depict, because the correlation
is no longer a pair-wise matrix, but is 3D, forming a cube, because
three categories need to be compared: $names$, $actions$, and $places$,
where $places=\{home,to\,the\,store,to\,work\}$. This is shown below.
\medskip{}
\begin{flushleft}
\qquad{}$places\;\begin{cases}
\\
\\
\\
\end{cases}$ %
\begin{tabular}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{$actions$} & \multicolumn{1}{c}{$\lnot actions$ }\tabularnewline
\cline{2-3}
$names$ & High & Low\tabularnewline
\cline{2-3}
$\lnot names$ & Low & n/a\tabularnewline
\cline{2-3}
\end{tabular}
\par\end{flushleft}
\medskip{}
\medskip{}
\begin{flushleft}
\qquad{}\qquad{}\qquad{}\qquad{}\qquad{}$\lnot places\;\begin{cases}
\\
\\
\\
\end{cases}$ %
\begin{tabular}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{$actions$} & \multicolumn{1}{c}{$\lnot actions$ }\tabularnewline
\cline{2-3}
$names$ & Low & n/a\tabularnewline
\cline{2-3}
$\lnot names$ & n/a & n/a\tabularnewline
\cline{2-3}
\end{tabular}
\par\end{flushleft}
\medskip{}
That is, one can induce a three-way relationship $(x,y,z)$ whenever
that relationship is frequently seen, and all three of the relations
$(\lnot x,y,z)$, $(x,\lnot y,z)$ and $(x,y,\lnot z)$ are not seen.
This extends to 4-way relations, and so on.
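The three-way test extends mechanically from the two-way case: accept the relation when fully in-class triples are frequent and every one-slot-out-of-class variant is unseen. The function \textsf{nway\_supported} and its thresholds are invented for illustration:

```python
def nway_supported(triples, classes, min_count=2):
    """Accept an n-way relation (x, y, z, ...) when in-class tuples are
    frequent and every tuple with exactly one slot out of class is unseen."""
    in_all = 0
    one_out = 0
    for t in triples:
        hits = [w in c for w, c in zip(t, classes)]
        if all(hits):
            in_all += 1
        elif sum(hits) == len(t) - 1:   # exactly one slot out of class
            one_out += 1
    return in_all >= min_count and one_out == 0

names = {"Mary", "Adam"}
actions = {"walked", "ran", "drove"}
places = {"home", "to the store", "to work"}
triples = [("Mary", "walked", "home"), ("Adam", "drove", "to work")]
print(nway_supported(triples, (names, actions, places)))
```

The same code handles 4-way relations and beyond, since nothing in it depends on the tuple length being three.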
There is one notable phenomenon that is not covered by the above:
words that have different meanings, but the same spelling, for example,
``saw'' or ``fly'' which are both nouns and verbs. This complicates
the approach above; this issue is returned to in a later section,
titled \nameref{sec:Polymorphism}.
\subsection*{Related concept: Discrimination}
Several comments are in order. The above presents grammatical induction
as a form of discrimination - \href{https://en.wikipedia.org/wiki/Binary_classification}{binary discrimination},
even, which is considered to be a particularly simple form of learning.
There are many available techniques for this, and one can promptly
fall into the examination of \href{https://en.wikipedia.org/wiki/Receiver_operating_characteristic}{ROC curves},
and the like. It is important to note that what is being sketched
here is the idea of discrimination in the context of sheaves, and
not the idea of binary discrimination as some panacea for linguistics.
The above was also vague as to the form of correlation: how should
it be done? Should it literally be correlation, in the sense of probability
theory? Should it be mutual information? Something else? This is left
intentionally vague: different measures of correlation are possible.
Some may produce better results than others. A general theoretical
framework is being sketched here; the quality of different algorithms
is not assessed or presented. It is up to the reader to experiment
with different forms of correlation and discrimination.
\subsection*{Related concept: Clustering}
The induction, described above, resembles the machine-learning concept
of clustering in several ways. There are also some strong differences,
and so this is worth reviewing. Two old and time-honored approaches
to clustering are support vector machines (SVM) and $k$-means clustering.
The first relies explicitly on some sort of vectorial representation
for the data, while the second expects some sort of metric for judging
whether two points are similar or not. For the former, interpreting
the stalks as the feature vectors is sufficient, while for the latter,
the cosine distance can fill the role of a metric.
These two approaches are sufficient to extract classes of things,
such as $names$, $places$ and $actions$ in the above example. The
accuracy of the extracted categories is rarely excellent, but is certainly
adequate enough to proceed to other stages. Except ... that's it.
These clustering techniques stop there; they say nothing at all about
inducing grammatical relations. To induce grammatical relations, one
\emph{also} has to perform discrimination in some way. One has to
combine the results obtained from clustering, and then discriminate
to induce grammar.
Note that the discrimination step provides information about how good
the clustering was. Say, for example, that cosine distance was used,
together with $k$-means clustering, to obtain classes of words. Was
this clustering ``adequate''? That question can be answered by examining
the ROC curves obtained from a binary discrimination step. Different
kinds of clustering will present different ROC curves. This can be
used as feedback for the clustering step, so that one gets a recursive
learning step, alternating between discrimination and clustering.
This observation of recursion, of course, raises the question: can
clustering and discrimination be combined into one effective algorithm?
Yes, they can.
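As a point of comparison, here is a bare-bones $k$-means over stalk vectors, with cosine distance filling the role of the metric, as described above. This is a hedged sketch: the deterministic initialization, the fixed iteration count, and the toy context vectors are assumptions made for the example, not recommendations.

```python
from math import sqrt

def cosine_dist(a, b):
    """One minus the cosine similarity of two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

def kmeans(vectors, k, iters=10):
    """Plain k-means over stalk vectors, using cosine distance as the metric."""
    cents = [list(v) for v in list(vectors.values())[:k]]  # deterministic init
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for word, v in vectors.items():
            best = min(range(k), key=lambda i: cosine_dist(v, cents[i]))
            clusters[best].append(word)
        for i, members in enumerate(clusters):
            if members:   # new centroid = mean of the member vectors
                dim = len(cents[i])
                cents[i] = [sum(vectors[w][d] for w in members) / len(members)
                            for d in range(dim)]
    return clusters

# Toy stalk vectors: counts of (determiner, verb, noun) contexts.
vecs = {"plate": [5, 0, 2], "dish": [4, 0, 2],
        "ran":   [0, 3, 1], "walked": [0, 4, 1]}
print(kmeans(vecs, 2))
```

This stops exactly where the text says clustering stops: it yields word classes, and says nothing about the grammatical relations between them; the discrimination step must follow.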
\subsection*{Related concepts: neural nets, AdaGram}
Besides binary discrimination, there are other, more sophisticated
approaches, including decision trees and decision forests. These
treat the vectors as tables of input data, and then pick and choose
among the vector components deemed predictive.
The original neural-net language model is that of \cite{Bengio2003}.
See also \href{https://becominghuman.ai/how-does-word2vecs-skip-gram-work-f92e0525def4}{this exposition} of how word2vec's skip-gram model works.
\subsection*{Why clustering?}
By contrast, the goal here is not just to talk about a graph $G$
relative to a single $A$, but relative to a huge number of different
$A$'s. What's more, the internal structure of these $A$'s will continue
to be interesting, and so is carried onwards. Finally, the act of
merging together multiple vertexes into one $A$ may result in some
of the existing edges being cut, or new edges being created. The clustering
operation applied to the graph alters the graph structure. These considerations
are what makes it convenient to abandon traditional graph theory,
and to replace it by the notion of sheaves and sections.
The above establishes a vocabulary, a means for talking about the
clustering of similar things on graphs. It does not suggest how to
cluster. Without this vocabulary, it can be very confusing to visualize
and talk about what is meant by clustering on a graph. It's worth reviewing
some examples.
\begin{itemize}
\item In a social graph, a cluster might be a clique of friends. By placing
these friends into one group, the stalk allows you to examine how
different groups interact with one-another.
\item In proteomic or genomic data, if one can group together similar proteins
or genes into clusters, one can accomplish a form of dimensional reduction,
simplifying the network model of the dataset. It provides a way to
formalize network construction, without the bad smell of ad-hoc simplifications.
\item In linguistic data, the natural clustering is that of words that behave
in a similar syntactic fashion; such clusters are commonly called
``grammatical classes'' or ``parts of speech''. In particular,
it allows one to visualize language as a graph. So: consider, for
example, the set of all dependency parses of all sentences in some
corpus, say Wikipedia. Each dependency parse is a tree; the vertexes
are words, and the edges are the dependencies. Taken as a graph, this
is a huge graph, with words connecting to other words, all over the
place. It's not terribly interesting in this raw state, because it's
overwhelmingly large. However, we might notice that all sentences
containing the word ``dish'' resemble all sentences containing the
word ``plate''; that these two words always get used in a similar
or the same way. Grouping these two words together into one reduces
the size of the graph by one vertex. Aggressively merging similar
words together can sharply shrink the size of the graph to a manageable
size. One gets something more: the resulting graph can be understood
as encapsulating the structure of the English language.
\end{itemize}
This last example is worth expanding on. Two things happen when the
compressed graph is created. First, that graph encodes the syntactic
structure of the language: the links between grammatical classes indicate
how words can be arranged into grammatically correct sentences. Second,
the amount of compression applied can reveal different kinds of structures.
With extremely heavy compression, one might discover only the crudest
parts of speech: determiners, adjectives, nouns, transitive and intransitive
verbs. Each of these classes is distinct, because they link differently.
However, if instead, a lot less compression is applied, then one can
discover synonymous words: so, ``plate'' and ``dish'' might be
grouped together, possibly with ``saucer'', but not with ``cup''.
Here, one is extracting a semantic grouping, rather than a syntactic
grouping.
So, the answer to ``why clustering?'' is that it allows information
to be extracted from a graph, and encoded in a useful, usable fashion.
No attempt is made here to suggest how to cluster; merely, that if
an equivalence relation is available, and if it is employed wisely,
then one can construct quotient graphs that encode important relationships
of the original, raw graph.
\section*{Types}
It is notationally awkward to have to write stalks in terms of the
sets of vertexes that they are composed of; it is convenient to instead
replace each set by a symbol. The symbol will be called a \noun{type}.
As it happens, these types can be seen to be the same things occurring
in the study of type theory; the name is justified.
The core idea can be illustrated with Link Grammar as an example.
The Link Grammar disjuncts \emph{are} one and the same thing as stalks.
It is worth making this very explicit. A subset of the Link Grammar
English dictionary looks like this:\medskip{}
\begin{minipage}[t]{0.5\columnwidth}%
\textsf{cat dog: D- \& S+;}
\textsf{the a: D+;}
\textsf{ran: S-;}%
\end{minipage}\medskip{}
\\
This states that ``cat'' and ``dog'' are both vertexes, and they
are in the same stalk. That stalk has two connectors: \textsf{D-}
and \textsf{S+}, which encode the other stalks that can be connected
to. So, the \textsf{D+} can be connected to the \textsf{D-} to form
a link. The link has the form \textsf{(\{the, a\}, \{cat, dog\})}
and the connector symbols \textsf{D+} and \textsf{D-} act as abbreviations
for the vertex sets that the unconnected end can connect to. The +
and - symbols indicate a directionality: to the right or to the left.
The capture the notion that, in English, the word-order matters. To
properly explain the + and -, we should have to go back to the definition
of a graph on the very first page, and introduce the notion of left-right
order among the vertices. Doing so from the very beginning would do
nothing but clutter up the presentation, so that is not done. The
reader is now invited to treat the initial definition of the graph
as a monad: there are additional details ``under the covers'', but
they are wrapped up and ignored, and only the relevant bits are exposed.
Perhaps the vertices had a color. Perhaps they had a name, or a numerical
weight; this is ignored. Here, we unwrap the idea that the vertices
must be organized in a left-right order. It's sufficient, for now,
to leave it at that.
\begin{figure}[h]
\caption{Three stalks and two typed links}
\includegraphics[width=0.4\columnwidth]{grammar}
\end{figure}
The three stalks here encode a set of grammatically valid English
language sentences. Hooking together the S- and S+ connectors to form
an S link, one obtains the sequence \textsf{{[}\{the, a\} \{cat, dog\}
\{ran\}{]}}. This can be used to generate grammatically valid sentences:
pick one word from each set, and one gets a valid sentence. Alternatively,
this structure can be taken to encode the sum-total knowledge about
this toy language: it is a kind-of graphical representation of the
entire language, viewed as a whole.
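The generative reading of this toy dictionary can be sketched directly. This is a hedged illustration, not the Link Grammar algorithm: the linking rule here is the naive one of joining each X+ connector to a matching X- on another stalk, and then ordering the stalks so that the + side precedes the - side.

```python
from itertools import product

# The toy dictionary from the text: each stalk is a germ (a set of words)
# plus its set of connectors.
stalks = {
    "det":  ({"the", "a"},   {"D+"}),
    "noun": ({"cat", "dog"}, {"D-", "S+"}),
    "verb": ({"ran"},        {"S-"}),
}

def link(stalks):
    """Join each X+ connector to the matching X- on another stalk;
    the + side precedes the - side in word order."""
    edges = []
    for a, (_, ca) in stalks.items():
        for b, (_, cb) in stalks.items():
            if a == b:
                continue
            for c in ca:
                if c.endswith("+") and (c[:-1] + "-") in cb:
                    edges.append((a, b))
    return edges

def sentences(stalks):
    """Topologically order the stalks along their links, then pick one
    word from each germ to generate every sentence in the toy language."""
    edges = link(stalks)
    remaining = set(stalks)
    order = []
    while remaining:
        head = next(n for n in stalks if n in remaining
                    and not any(b == n and a in remaining for a, b in edges))
        order.append(head)
        remaining.remove(head)
    return [" ".join(ws) for ws in product(*(sorted(stalks[n][0]) for n in order))]

print(sentences(stalks))
```

Enumerating the product of the germs along the linked order yields exactly the ``pick one word from each set'' reading of the structure \textsf{{[}\{the, a\} \{cat, dog\} \{ran\}{]}}.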
\begin{defn*}
Given a stalk $S=\left(V,L\right)$, the \noun{connector type} of
$L$ is a symbol that can be used as a synonym for the set $L$. It
serves as a short-hand notation for $L$ itself. $\diamond$
\end{defn*}
Just as in type theory, a type can be viewed as a set. Yet, just as in
type theory, this is the wrong viewpoint: a type is better understood
as expressing a property: it is an intensional, rather than an extensional
description. Formally, in the case of finite sets, this may feel like
splitting hairs. For an intuitive understanding, however, it's useful
to think of a type as a property carried by an object, not just the
class that the object can be assigned to.
\subsection*{Why types? }
Types are introduced here primarily as a convenience for working with
stalks. They are labels, but they can be useful. Re-examining the
examples:
\begin{itemize}
\item In a social graph, one group of friends might be called ``students''
and another group of friends might be called ``teachers''. The class
labels are useful for noting the function and relationship of the
different social groups.
\item In a genetic regulatory network, sub-networks can be classified as
\textquotedbl{}positive regulatory pathways\textquotedbl{} or \textquotedbl{}negative
regulatory pathways\textquotedbl{} with respect to the activation
of a particular gene.
\end{itemize}
These examples suggest that the use of types is little more than a
convenient labeling system. In fact, more hay can be made here, as
types interact strongly with category theory: types are used to describe
the internal language of monoidal categories. But this is a rather
abstract viewpoint, of no immediate short-term use. Suffice it to
say that appearance of types in grammatical analysis of a language
is not accidental.
\subsection*{What kind of information do types carry?}
The above example oversimplifies the notion of types, presenting them
as a purely syntactic device. In practice, types also carry semantic
information. The amount of semantic information varies inversely to
the broadness of the type. In language, coarse-grained types (noun,
verb) carry almost no semantic information. Fine-grained types carry
much more: a ``transitive verb taking a particle and an indirect
object'' is quite specific: it must be some action that can be performed
on some object using some tool in some fashion. An example would be
``John sang a song to Mary on his guitar'': there is a what, a who
and a how yoked together in the verb ``sang''. The more fine-grained
the classification, the more semantic content is contained in it.
This suggests that the proper approach is hierarchical: a fine-grained
clustering, that captures semantic content, followed by a coarser
clustering that erases much of this, leaving behind only ``syntactic''
content.
\section*{Parsing}
The introduction remarked that not every collection of seeds can be
assembled in such a way as to create a valid graph. This idea can
be firmed up, and defined more carefully. Generically, a valid assembly
of seeds is called a parse, and the act of assembling them is called
parsing, which is done by parse algorithms. To illustrate the process,
consider the following two seeds:\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
$v_{2}:\left\{ \left(v_{2},v_{1}\right),\left(v_{2},v_{3}\right)\right\} $
$v_{3}:\left\{ \left(v_{3},v_{2}\right)\right\} $%
\end{minipage}\\
\\
Represented graphically, these seeds are
\begin{figure}[H]
\caption{Two unconnected seeds}
\includegraphics[width=0.55\columnwidth]{bad-graph}
\end{figure}
The connector (half-edge) $\left(v_{2},v_{3}\right)$ appears with
both polarities, and can be linked together to form a link. The connector
$\left(v_{2},v_{1}\right)$ has nothing to connect to. Even after
maximally linking these two seeds, one does not obtain a valid graph:
the vertex $v_{1}$ is missing from the vertex-set of the graph, even
though there is an edge ready to attach to it. This provides an example
of a failed parse. It is enough to add the seed $v_{1}:\left\{ \left(v_{1},v_{2}\right)\right\} $
to convert this into a successful parse. Adding this seed, and then
attempting to maximally link it results in a valid graph; the parse
is successful.
\begin{figure}[h]
\caption{Parsing is the creation of links}
\includegraphics[width=0.45\columnwidth]{parsing}
\end{figure}
Note the minor change in notation: the colon is used as a separator,
with the germ appearing on the left, and set of connectors on the
right. The relevance of this notational change becomes more apparent,
if we label the vertexes in a funny way: let $v_{1}$ carry the label
``the'', and $v_{2}$ carry the label ``dog'' and $v_{3}$ carry
the label ``ran''. The failed parse is meant to illustrate that
``dog ran'' is not a grammatically valid sentence, whereas ``the
dog ran'' is.
Converting these seeds to also enforce left-right word-order requires
the notation\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{the: \{(the, dog+)\}}
\textsf{dog: \{(dog, the-), (dog,ran+)\}}
\textsf{ran: \{(ran, dog-)\}}%
\end{minipage}\\
\\
This notation is verbose, and slightly confusing. Repeating the germ
as the first vertex in every connector is entirely unnecessary. Write
instead:\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{the: \{ dog+ \}}
\textsf{dog: \{ the-, ran+\}}
\textsf{ran: \{ dog- \}}%
\end{minipage}\\
\\
The set-builder notation is unneeded, and perhaps slightly confusing.
In particular, the word ``dog'' has two connectors on it; both must
be connected to obtain a valid parse. The ampersand can be used to
indicate the requirement that both connectors are required. This notation
will also be useful in the next section.\\
\qquad{}%
\begin{minipage}[t]{0.8\columnwidth}%
\textsf{the: dog+ ;}
\textsf{dog: the- \& ran+ ;}
\textsf{ran: dog- ;}%
\end{minipage}\\
\\
This brings us almost back to the previous section, but not quite.
Here, we are working with seeds; previously we worked with stalks.
Here, the connector type labels were not employed. In real-world use-cases,
using stalks and type labels is much more convenient.
This now brings us to a first draft of a parse algorithm. Given an
input set of vertices, it attempts to find a graph that is able to
connect all of them.
\begin{enumerate}
\item Provide a dictionary $D$ consisting of a set of unconnected stalks.
\item Input a set of vertices $V=\left\{ v_{1},v_{2},\cdots,v_{k}\right\} $.
\item For each vertex in $V$, locate a stalk which contains that vertex
in its germ.
\item Attempt to connect all connectors in the selected stalks.
\item If all connectors can be connected, the parse is successful; else
the parse fails.
\item Print the resulting graph. This graph can be described as a pair $\left(V,E\right)$
with $V$ the input set of vertexes, and $E$ the set of links obtained
from fully connecting the selected stalks.
\end{enumerate}
The above algorithm is ``generic'', and does not suggest any optimal
strategy for the crucial steps 3 or 4. It also omits discussion of
any further constraints that might need to be applied: perhaps the
edges need to be directed; perhaps the resulting graph must be a planar
graph (no intersecting edges); perhaps the graph must be a minimum
spanning tree; perhaps the input vertexes must be arranged in linear
order. These are additional constraints that will typically be required
in some specific application.
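The generic algorithm can be sketched as follows. This is only the skeleton: it checks that every connector can be paired with one of opposite polarity and the same type, and deliberately ignores word-order, planarity, and all the other application-specific constraints just mentioned.

```python
from collections import Counter

# A toy dictionary: each entry maps a word (a vertex) to its connectors.
dictionary = {
    "the": ["D+"],
    "dog": ["D-", "S+"],
    "cat": ["D-", "S+"],
    "ran": ["S-"],
}

def parse(words, dictionary):
    """Generic parse: look up each word's seed, then check that every
    connector can be joined to one of opposite polarity and same type."""
    plus, minus = Counter(), Counter()
    for w in words:
        if w not in dictionary:
            return False            # no stalk contains this vertex
        for c in dictionary[w]:
            (plus if c.endswith("+") else minus)[c[:-1]] += 1
    return plus == minus            # all connectors paired: a valid graph

print(parse(["the", "dog", "ran"], dictionary))   # successful parse
print(parse(["dog", "ran"], dictionary))          # D- is left dangling
```

The failed second parse is exactly the situation depicted earlier: a connector with nothing to connect to, leaving an invalid graph.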
\subsection*{Why parsing?}
The benefit of parsing for the analysis of the structure of natural
language is well established. Thus, an example of parsing in a non-linguistic
domain is useful. Consider having used the above graph compression/vertex-edge
clustering techniques to obtain a collection of stalks that describe
genomic interactions. This collection provides the initial dictionary
$D$. Now imagine a process where a certain specific set of genes
are associated with some particular trait or reaction. Is this a complete
set? Can it be said that their interactions are fully understood?
One way to answer these last two questions would be to apply the parse
algorithm, using the known dictionary, to see if a complete interaction
network can be obtained. If so, then this new specific gene-set fits
the general pattern. If not, if a complete parse cannot be found,
then one strongly suspects that there remain one or more genes, yet
undetermined, that also play a role in the trait. To find these, one
might examine the stalks that might have been required to complete
the parse: these will give hints as to the specific type of gene,
or style of interaction to search for.
Thus, parsing new gene expressions and pathways offers a way of discovering
whether they resemble existing, known pathways, or whether they are
truly novel. If they seem novel, parsing also gives strong hints as
to where to look for any missing pieces or interactions.
\subsection*{Is this really parsing?}
The above description of parsing is sufficiently different from standard
textbook expositions of natural language parsing that some form of
an apology needs to be written.
The first step is to observe that the presented algorithm is essentially
a simplified, generalized variation of the Link Grammar parsing algorithm.\cite{Sleator1993}
The generalization consists in the removal of word-order and link-crossing
constraints.
The second step is to observe that the theory of Link Grammar is more-or-less
isomorphic to the theory of pregroup grammars\cite{Kart2014} (see
\href{https://en.wikipedia.org/wiki/Pregroup_grammar}{Wikipedia});
the primary differences being notational. The left-right directional
Link Grammar connectors correspond to the left and right adjoints
in a pregroup. A Link Grammar disjunct (that is, a seed) corresponds
to a sequence of types in a pregroup grammar. The correspondence is
more-or-less direct, except that link grammar is notationally simpler
to work with.
The third step is to observe that the Link Grammar is a form of dependency
grammar. Although the original Link Grammar formulation uses undirected
links, it is straightforward and unambiguous to mark up the links
with head-dependent directional arrows.
The fourth step is to realize that dependency grammars (DG) and head-phrase-structure
grammars (HPSG) are essentially isomorphic. Given one, one can obtain
the other in a purely mechanistic way.
The final step is to realize that most introductory textbooks describe
parsers for a context-free grammar, and that, for general instructional
purposes, such parsers are sufficient to work with HPSG. The primary
issue with HPSG and context-free language parsers is that they obscure
the notion of linking together pieces; this is one reason why dependency
grammars are often favored: they make clear that it is the linkage
between various words that has a primary psychological role in the
human understanding of language. It should be noted that many researchers
in the psychology of linguistics are particularly drawn to the categorial
grammars; these are quite similar to the pregroup grammars, and are
more closely related to Link Grammar than to the phrase-structure
grammars.
\section*{Polymorphism\label{sec:Polymorphism}}
Any given vertex may participate in two or more seeds, independently
of one another. It is this statement that further sharpens the departure
from naive graph theory. This is best illustrated by a practical example.
Consider a large graph, constructed from a large corpus of English
language sentences. As subgraphs, it might contain the two sentences
``A big fly landed on his nose'' and ``It will fly home''. The
vertex ``fly'' occurs as a noun (the subject, with determiner and
adjective) in one sentence, and a verb (with subject and object) in
the other. Suppose that the equivalence relation, described in the
clustering section, also has the power to discern that this one word
should really be split into two, namely $fly_{\mbox{noun}}$ and $fly_{\mbox{verb}}$,
and placed into two different stalks, namely, in the ``noun'' stalk
in the first case, and the ``verb'' stalk in the second. Recall
that these two stalks must be different, because the kinds of connectors
that are allowed on a noun must necessarily be quite different from
those on a verb. One is then led to the image shown in figure \ref{fig:Polymorphism}.
\begin{figure}
\caption{Polymorphism\label{fig:Polymorphism}}
\includegraphics[width=0.75\columnwidth]{polymorph}
This figure illustrates a polymorphic assignment for the word ``fly''.
It is split into two parts, the first, a noun, classed with other
nouns, showing labeled connectors to determiners, adjectives, and
a connector showing that nouns can act as the subject of a verb. The
second class shows labeled connectors to subjects and objects, as
is appropriate for transitive verbs. Underneath are the flattened
raw seeds, showing the words ``fly'' and ``cat'' and the myriad
of connectors on them. The flattened seeds cannot lead to grammatical
linkages, as they mash together into one the connectors for different
parts of speech.
\rule[0.5ex]{1\columnwidth}{1pt}
\end{figure}
The point of the figure is to illustrate that, although the ``base
graph'' may not distinguish one variant of a vertex from another,
it is important to discover, extract and represent this difference.
The concept of ``polymorphism'' applies, because the base vertex
behaves as one of several distinct types in practice. There are several
ways the above diagram can be represented textually. As before, the
Link Grammar-style notation is used, as it is fairly clear and direct.
One representation would be to expose the polymorphism only in the
connectors, and not in the base vertex label:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\texttt{fly: (DET- \& ADJ- \& SUBJ+) or (SUBJ- \& OBJ+);}%
\end{minipage}
\medskip{}
A different possibility is to promptly split the vertex label into
two, and ignore the subscript during the parsing stage:
\medskip{}
\qquad{}%
\noindent\begin{minipage}[t]{1\columnwidth}%
\texttt{fly.noun cat: (DET- \& ADJ- \& SUBJ+);}
\texttt{fly.verb walk: (SUBJ- \& OBJ+);}%
\end{minipage}
\medskip{}
Either way, the non-subscripted version of $fly$ behaves in a polymorphic
fashion.
Note that the use of the notation ``\texttt{or}'' to disjoin the
possibilities denotes a choice function, and not a boolean-or. That
is, one can choose either one form, or the other; one cannot choose
both. During the parse, both possibilities need to be considered,
but only one selected in the end. This implies that at least some
fragment of linear logic is at play, and not boolean logic. (This
should be expanded upon in future drafts.)
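The choice-function reading of ``\texttt{or}'' can be made concrete with a small sketch (Python, purely illustrative; the dictionary entry and connector names follow the figure above and are not taken from any real Link Grammar dictionary):

```python
# Toy polymorphic dictionary: each word maps to a list of alternative
# disjuncts (tuples of connectors). A parse must select exactly one
# alternative per word; it may never mix connectors across alternatives.
DICT = {
    "fly": [
        ("DET-", "ADJ-", "SUBJ+"),  # noun reading
        ("SUBJ-", "OBJ+"),          # transitive-verb reading
    ],
}

def choose_disjunct(word, wanted):
    """Return the unique disjunct whose connectors match `wanted`, else None."""
    matches = [d for d in DICT[word] if set(d) == set(wanted)]
    return matches[0] if len(matches) == 1 else None
```

Asking for \texttt{("SUBJ-", "OBJ+")} selects the verb reading, while a mixture such as \texttt{("DET-", "OBJ+")} matches nothing: the alternatives are exclusive, which is precisely the choice-function behaviour noted above.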
\subsection*{Similar concept: part of speech}
It is tempting to identify the connectors DET, ADJ, SUBJ, OBJ in the
diagrams above with ``parts of speech''. This would be a mistake.
In conventional grammatical analysis, there are half-a-dozen or a
dozen parts of speech that are recognized: noun, verb, adjective,
and so on. By contrast, these connector types indicate a grammatical
role. That is, the disjunct \texttt{SUBJ- \& OBJ+} indicates a word
that takes both a subject and an object: a transitive verb. That is,
the disjunct is in essence a fine-grained part of speech, indicating
not only verb-ness, but the specific type of verb-ness (transitive).
The Link Grammar English dictionary documents more than 100 connector
types; these are subtyped, so that approximately 500 connectors might
be seen. These connectors, when arranged into disjuncts, result in
tens of thousands of disjuncts. That is, Link Grammar defines tens
of thousands of distinct ``parts of speech''. These can be thought
of as parts of speech, but they are quite fine-grained, far more fine-grained
than any text on grammar might ever care to list.
If one uses a technique, such as MST parsing\cite{Yuret1998}, and
then extracts disjuncts, one might observe more than 6 million disjuncts
and 9 million seeds on a vocabulary of 140K words. These are, again,
in the above technical sense, just ``parts of speech'', but they
are hyperfine-grained. The count is overwhelming. So, although it
is technically correct to call them ``parts of speech'', it is
a conceptual error to think of a class that has six million representatives
as if it were a class with a dozen members.
\subsection*{Similar concept: skip-grams}
The N-gram\cite{Rosen1996} and the more efficient skip-gram\cite{Guthrie2006}
models of semantic analysis provide somewhat similar tools for understanding
connectivity, and differentiating different forms of connectivity.
In a skip-gram model, one might extract two skip-grams from the above
example sentences: ``a fly landed'' and ``it fly home''. A clustering
process, such as adagram or word2vec might be used to classify these
two strings into distinct clusters, categorizing one with other noun-like
words, and the other with verb-like words.
The N-gram or skip-gram technique works only for linear, sequenced
data, which is sufficient for natural language, but cannot be employed
in a generic non-ordered graphical setting. To make this clear: a
seed representation for the above would be: ``fly: a- landed+''
indicating that the word ``a'' (written as the connector ``a-'')
comes sequentially before ``fly'', while the word ``landed'' (written
as the connector ``landed+'') comes after. The other phrase has
the representation ``fly: it- home+''. These two can now be employed
in a clustering algorithm, to determine whether they fall into the
same, or into different categories. If one treats the skip-grams,
and the seeds as merely two different representations of the same
data, then applying the same algorithm to either should give essentially
the same results.
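For linear text, the seed extraction just described can be sketched in a few lines (an illustrative toy that records only the immediate left and right neighbours as connectors; real extraction may also skip words, as in the skip-gram example above):

```python
def seeds_from_sentence(sentence):
    """Extract one seed per word from a linear token sequence: the left
    neighbour becomes a '-' connector, the right neighbour a '+' connector."""
    words = sentence.lower().split()
    seeds = {}
    for i, w in enumerate(words):
        connectors = []
        if i > 0:
            connectors.append(words[i - 1] + "-")
        if i + 1 < len(words):
            connectors.append(words[i + 1] + "+")
        seeds[w] = tuple(connectors)
    return seeds

noun_use = seeds_from_sentence("a big fly landed on his nose")
verb_use = seeds_from_sentence("it will fly home")
# The two uses of "fly" receive different connector sets, ready for
# clustering: noun_use["fly"] == ("big-", "landed+")
#            verb_use["fly"] == ("will-", "home+")
```

Feeding the resulting (word, connectors) pairs to the same clustering algorithm as the skip-grams should, as noted above, give essentially the same classes, but the seed form also applies to non-sequential data.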
The seed representation, however, is superior in two different ways.
First, it can be used for non-sequential data. Second, by making clear
the relationship between the vertex and its connectors, the connectors
can be treated as ``additional data'', tagging the vertex, carrying
additional bits of information. That additional information is manifested
from the overall graph structure, and is explicit. By contrast, untagged
N-grams or untagged skip-grams leave all such structure implicit and
hidden.
\subsection*{Polymorphism and semantics}
The concept of polymorphism introduced above lays a foundation for
semantics, for extracting meaning from graphs. This is already hinted
at by the fact that any English-language dictionary will provide at
least two different definitions for ``fly'': one tagged as a noun,
the other as a verb. The observation of hyperfine-grained parts of
speech can push this aggressively farther.
In a modern corpus of English, one might expect to observe the seeds
``apple: green-'' and ``apple: iphone+''. The disjuncts ``green-''
and ``iphone+'' can be interpreted as a kind-of tag on the word
``apple''. Since there are exactly two tags in this example, they
can be viewed as supplying exactly one bit of additional information
to the word ``apple''. Effectively, a single apple has been split
into two distinct apples. Are they really distinct, however? This
can only be judged on the basis of some clustering algorithm that
can assign tagged words to classes. Even very naive, unsophisticated
algorithms might be expected to classify these two different kinds
of apple into different classes; the extra bit of information carried
by the disjunct is a bit of actual, usable information.
To summarize: the arrangement of vertexes into polymorphic seeds and
sections enables the vertexes to be tagged with extra information.
The tags are the connectors themselves: their presence or absence
carries information. That extra information can be treated as ``semantic
information'', identifying different types or kinds, rather than
as purely syntactic information about arrangements and relationships.
\section*{Conclusion}
This document presents a way of thinking about graphs that allows
them to be decomposed into constituent parts fairly easily, and then
brought together and reassembled in a coherent, syntactically correct
fashion. It does so without having to play favorites among competing
algorithmic approaches and scoring functions. It makes only one base
assumption: that knowledge can be extracted at a symbolic level from
pair-wise relationships between events or objects.
It touches briefly, all too briefly, on several closely-related topics,
such as the application of category theory and type theory to the
analysis of graph structure. These topics could be greatly expanded
upon, possibly clarifying much of this content. It is now known to
category theorists that categories and the internal languages they
encode are closely related, each a reflection of the other through
a theory of types. A reasonable
but incomplete reference for some of this material is the HoTT book.
It exposes types in greater detail, but does not cover the relationship
between internal languages, parsing, and the modal logic descriptions
of parsing. It is possible that there are texts in proof theory that
cover these topics, but I am not aware of any.
This is a bit unfortunate, since I feel that much or most of what
is written here is ``well known'' to computational proof theorists;
unfortunately, that literature is not aimed at the data-mining and
machine-learning crowd that this document tries to address. Additions,
corrections and revisions are welcomed.
\bibliographystyle{tufte}
\input{sheaves.bbl}
\end{document}
\section{Introduction}
\label{introduction}
Smartphones have become an integral part of our everyday lives. In particular, smartphones have led people to share their daily check-in experiences through social network services, such as Facebook, Foursquare and Instagram. Through these check-in data, it is possible to study users' online activities, physical movements and preferences for points-of-interest (POIs). Accordingly, various location-based service (LBS) providers utilize the check-in data to provide the best experiences on their services. Among various tasks in LBSs, POI recommendation has attracted considerable attention in recent years \cite{b1,b2,b3,b4}.
Most of the recommendation methods are based on collaborative filtering algorithms. Among them, matrix factorization is widely used to analyze the relationship between users and items. Briefly, matrix factorization is worked as follows; First, a recommendation system collects the ratings of items from each user. The ratings are usually represented by a numerical value (e.g. 5 means very good, and 1 means very bad on a scale of one to five). After that, the system can learn the relationship between users and items by factorizing the user-item matrix and provide the personalized, ranked list of items by predicting the preferences of items that are not rated yet.
In order to recommend new items for users, the recommendation system should collect (item, rating) pairs from each user. However, since each (item, rating) pair embeds personal preferences, a privacy concern can arise. For example, an early study has shown that a malicious recommender can infer personal information from the collected rating data \cite{b5}. In other words, if the recommendation system is untrusted, people will be reluctant to send their ratings, and thus, the system can no longer maintain the quality of its recommendations.
The notion of local differential privacy (LDP) \cite{b6} has attracted considerable attention in recent years from many industries due to its rigorous and provable privacy guarantees. In the LDP setting, each user perturbs his/her original data on his/her device and sends the perturbed data to the server. In other words, the original data never leave the users' devices, and the server cannot infer sensitive information, regardless of its background knowledge. Accordingly, many global IT companies, including Google\cite{b6}, Apple\cite{b7}, Microsoft\cite{b8}, and Samsung\cite{b9}, have adopted LDP to collect data from their clients.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{fig1.pdf}
\caption{Overview of SPIREL.}
\label{fig1}
\end{figure*}
There have been several earlier works that adopt differential privacy in recommendation systems to preserve privacy against an untrusted recommender \cite{b10, b11, b12}. Hua et al.\cite{b10} proposed a recommendation system based on the centralized differential privacy model. In the central model, however, the server is assumed to be trusted, and raw data are collected at the recommendation server. Zhang et al.\cite{b11} proposed a recommendation system based on personalized differential privacy. Likewise, they assume that the recommender is reliable, and each user's rating is randomly sampled and sent to the server. Finally, Shin et al.\cite{b12} proposed a recommendation system under LDP. In their work, an LDP mechanism is utilized in the stochastic gradient descent step, where each user sends perturbed gradients to the recommendation server.
However, the existing private recommendation methods have two limitations. First, recommenders are not always reliable, so (item, rating) pairs should not be submitted in their original form. Yet existing works \cite{b10, b11} assumed that a trusted recommender exists, which cannot protect privacy if the recommender is an adversary. Second, the above three methods \cite{b10,b11,b12} do not consider the temporal characteristics of POIs. In other words, if we directly build a POI recommendation system based on these methods, the system is likely to recommend irrelevant POI candidates, because it ignores the user's current location. The reason is that existing private recommendation systems directly predict users' preferences on POIs, without reflecting the preference transitions among POIs.
In this paper, we propose a novel private POI recommendation system called \textbf{SPIREL} (\textbf{S}uccessive \textbf{P}oint-of-\textbf{I}nterest \textbf{RE}commendation with \textbf{L}ocal differential privacy). SPIREL suggests next-POI candidates by considering the user's current location as well as the preference transitions, while preserving location privacy. The overview of our SPIREL framework is illustrated in Figure \ref{fig1}. Briefly, SPIREL requires two types of data from each user's check-in history. First, to represent the POI-POI relationship, SPIREL uses transition patterns modeled with a first-order Markov chain. Specifically, each user records the movements between two successive POIs. Second, users further extract the visiting counts for each POI from their check-in history, which reflect the user-POI relationship. After that, SPIREL jointly learns the relationship between users and POIs using the visiting counts and the noisy POI-POI matrix. Finally, the server sends the learned POI latent matrix to the users, who can then rank the preferences of the next POI based on their current location.
Compared to the existing private recommendation systems \cite{b10,b11,b12}, SPIREL never uses a user's original check-in history to learn the user-POI and POI-POI relationships. Moreover, to receive the next POI candidates, the users do not have to send their current location to the recommendation server. Formally, the contributions of our work are as follows.
\begin{itemize}
\item To figure out the approximate POI transition trend, we estimate only a coarse-grained frequency for each POI-POI relationship. Because the check-in history has a huge domain size, directly adopting an LDP mechanism to estimate the check-in history frequency incurs exponential computation complexity. For example, if the domain size of POIs is 10 and the maximum length of the check-in history is 10, then the possible number of check-in histories is $10^{10}$, which requires exponential-time computation.
\item In the learning process, SPIREL jointly factorizes both the user-POI and POI-POI matrices. To factorize the two matrices simultaneously, we develop a new objective function and an optimization method under LDP. In contrast, the existing methods can only factorize a user-POI matrix, which results in poor POI recommendation quality.
\item We conduct extensive experiments on public datasets and show that SPIREL achieves better performance in the successive POI recommendation task.
\end{itemize}
The remainder of this paper is organized as follows. In Section \ref{preliminaries}, we explain the background knowledge of matrix factorization and local differential privacy. In Section \ref{problem definition}, we define the problem setting and describe the limitation of a naive approach. In Section \ref{proposed method}, we present SPIREL for successive POI recommendation. Section \ref{evaluation} demonstrates the performance of SPIREL on public datasets. Section \ref{related work} reviews related work. Finally, in Section \ref{conclusion}, we conclude this paper.
\section{Preliminaries}
\label{preliminaries}
\begin{table}[ht]
\renewcommand{\arraystretch}{1.3}
\caption{Notations}
\label{table1}
\centering
\begin{tabular}{|c||c|}
\hline
Notation & Meaning\\
\hline
$m$ & number of users\\
\hline
$n$ & number of POIs\\
\hline
$d$ & size of latent factors\\
\hline
$l_{t}^{u}$ & user $u$'s location at time $t$\\
\hline
$\mathbf{u_{i}} \in \mathbb{R}^{d}$ & profile vector of user $i$\\
\hline
$\mathbf{v_{j}} \in \mathbb{R}^{d}$ & profile vector of POI $j$\\
\hline
$U \in \mathbb{R}^{m \times d}$ & user profile matrix \\
\hline
$V \in \mathbb{R}^{n \times d}$ & POI profile matrix\\
\hline
$P \in \mathbb{R}^{m \times n}$ & user-POI matrix\\
\hline
$Q \in \mathbb{R}^{n \times n}$ & POI-POI matrix\\
\hline
$P_{i,*}$ & $i$th row vector of matrix $P$\\
\hline
$P_{*,j}$ & $j$th column vector of matrix $P$\\
\hline
\end{tabular}
\end{table}
\subsection{Matrix Factorization}
\label{matrix factorization}
Matrix factorization is used as a collaborative filtering algorithm in recommendation systems. In recent years, thanks to its accurate predictions, many industries have adopted it for personalized ad targeting. Matrix factorization decomposes the user-item matrix into two smaller matrices to discover the unobserved relations between users and items. Each decomposed matrix embeds user/item latent factors, which condense the complicated user/item characteristics. Then, by multiplying the two latent matrices, we can predict the unobserved user/item relationships. In this paper, we assume that the item latent factors represent the characteristics of POIs.
A key component of matrix factorization is the optimization process for accurate prediction. Assume that there are explicit ratings for each item in the user-item matrix. Then, the objective of matrix factorization is to find the user/item profile vectors whose product becomes similar to the original ratings. In other words, matrix factorization attempts to reduce the error between the observed ratings and the predictions obtained by taking the inner product of two profile vectors. While minimizing these errors, the latent factors are fitted to uncover the ratings that are not yet given.
Formally, matrix factorization is defined as follows. We present in Table \ref{table1} the set of notations used throughout this paper. Unless otherwise stated, we assume that all vectors are column vectors. Suppose there are $m$ users and $n$ items. We denote by $r_{ij}$ the explicit rating of user $i$ for item $j$. Then, $r_{ij}$ can be approximated by the inner product $r_{ij} \approx \mathbf{u_{i}}^{\intercal}\mathbf{v_{j}}$, and the objective of matrix factorization is to reduce the error between $r_{ij}$ and $\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}}$.
Generally, since users rate only a small set of items, we have a very limited number of observed ratings. Accordingly, the user-item matrix $P$ is very sparse, which means most of the elements in $P$ are unknown. Many recent studies, therefore, optimize over only the observed ratings, avoiding overfitting by introducing a regularization term. Specifically, matrix factorization tries to minimize the following objective function.
\begin{equation}
\label{eq1}
\mathcal{L} = \sum_{{(i,j)} \in P} (r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})^{2} + \lambda(\sum_{i=1}^{m}||\mathbf{u_{i}}||^{2} + \sum_{j=1}^{n}||\mathbf{v_{j}}||^{2})
\end{equation}
Here, the user profile vector $\mathbf{u_{i}}$ and item profile vector $\mathbf{v_{j}}$ are represented by a $d$-dimensional vector. Furthermore, $\lambda$ is a regularizer which is used for avoiding the overfitting problem. In summary, by minimizing the mean square error over the known ratings, we can predict the unobserved ratings.
There are two main optimization algorithms to solve the objective function: (1) stochastic gradient descent (SGD) and (2) alternating least squares (ALS). SGD first computes the gradient of the error, which indicates the direction of the greatest rate of increase of the function. The gradient is multiplied by a learning rate $\gamma$, which determines how much we update the profile vector with respect to the gradient. After that, SGD takes steps in the opposite direction of the gradient to minimize the objective function. We describe the update rule of the user profile vector below (the item profile vector can be updated similarly).
\begin{equation}
\label{eq2}
\nabla_{\mathbf{u_{i}}}\mathcal{L} = \sum_{{(i,j)} \in P} -2\mathbf{v_{j}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}}) + 2\lambda \mathbf{u_{i}}
\end{equation}
\begin{equation}
\label{eq3}
\mathbf{u_{i}} = \mathbf{u_{i}} - \gamma \cdot \nabla_{\mathbf{u_{i}}}\mathcal{L}
\end{equation}
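The update rules in Equations \ref{eq2} and \ref{eq3} translate directly into code. The following is a minimal NumPy sketch (the dimensions, learning rate and regularizer are illustrative assumptions, not values used in this paper):

```python
import numpy as np

def sgd_epoch(U, V, observed, gamma=0.01, lam=0.1):
    """One SGD pass over observed (i, j, r_ij) triples.
    U: m x d user profiles, V: n x d POI profiles."""
    for i, j, r in observed:
        err = r - U[i] @ V[j]                      # r_ij - u_i^T v_j
        grad_u = -2 * err * V[j] + 2 * lam * U[i]  # Equation (2)
        grad_v = -2 * err * U[i] + 2 * lam * V[j]  # symmetric rule for v_j
        U[i] = U[i] - gamma * grad_u               # Equation (3)
        V[j] = V[j] - gamma * grad_v
    return U, V

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(4, 2))   # 4 users, d = 2
V = rng.normal(scale=0.1, size=(3, 2))   # 3 POIs
observed = [(0, 1, 5.0), (1, 2, 3.0), (2, 0, 1.0)]
for _ in range(300):
    U, V = sgd_epoch(U, V, observed)
# U[0] @ V[1] now approximates the observed rating 5.0
# (regularization keeps it slightly below 5.0)
```

Because of the regularizer $\lambda$, the converged predictions sit slightly below the observed ratings, which is the expected bias-variance trade-off.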
Subsequently, we briefly introduce ALS. Since minimizing Equation \ref{eq1} is a non-convex problem, it is difficult to optimize $\mathbf{u_{i}}$ and $\mathbf{v_{j}}$ jointly. For this reason, one way to solve this problem is to optimize Equation \ref{eq1} in an alternating manner. Specifically, we first hold the item profile vectors fixed and take the derivative of Equation \ref{eq1} with respect to the user profile vector. Then, we set the derivative equal to zero and solve for the user profile vector. We list the ALS update rule of the user profile vector below.
\begin{equation}
\label{eq4}
\begin{split}
\frac{\partial \mathcal{L}}{\partial \mathbf{u_{i}}} &= -2\sum_{j} (r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})\mathbf{v_{j}}^{\intercal} + 2\lambda \mathbf{u_{i}}^{\intercal} \\
0 &= -(P_{i,*} - \mathbf{u_{i}}^{\intercal}V^{\intercal})V + \lambda \mathbf{u_{i}}^{\intercal} \\
\mathbf{u_{i}}^{\intercal} &= P_{i,*}V(V^{\intercal}V + \lambda I)^{-1}
\end{split}
\end{equation}
After updating the user profile vector following Equation \ref{eq4}, we take the derivative of Equation \ref{eq1} with respect to item profile vector and alternate this process until convergence.
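A minimal NumPy sketch of this alternating scheme is given below. Note that, like Equation \ref{eq4} as written, it sums over all columns, i.e. it treats every entry of $P$ as observed; a practical implementation would restrict each solve to the observed entries (the matrix sizes and $\lambda$ here are illustrative):

```python
import numpy as np

def als_update_users(P, V, lam=0.1):
    """Solve Equation (4) for all user profiles with V held fixed."""
    d = V.shape[1]
    A = V.T @ V + lam * np.eye(d)            # V^T V + lambda I
    return np.linalg.solve(A, V.T @ P.T).T   # u_i^T = P_{i,*} V A^{-1}

def als_update_items(P, U, lam=0.1):
    """Symmetric update: solve for all item profiles with U held fixed."""
    d = U.shape[1]
    A = U.T @ U + lam * np.eye(d)
    return np.linalg.solve(A, U.T @ P).T

rng = np.random.default_rng(1)
P = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))  # rank-2 ratings matrix
U, V = rng.normal(size=(6, 2)), rng.normal(size=(5, 2))
for _ in range(50):
    U = als_update_users(P, V, lam=1e-3)
    V = als_update_items(P, U, lam=1e-3)
# U @ V.T now closely reconstructs the rank-2 matrix P
```

Each half-step is a closed-form least-squares solve, which is why ALS is easy to parallelize across users (or items).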
\subsection{Local Differential Privacy}
\label{local differential privacy}
LDP is a rigorous mathematical definition of privacy \cite{b13}, which is used to preserve the location privacy of users in our work. Previously, differential privacy \cite{b14} has been employed in a centralized setting, which assumes that there exists a trusted data curator. In the central setting, each user submits their original record to the trusted data curator, and the curator perturbs the aggregated results to guarantee the privacy of the users involved. However, in the real world, we cannot guarantee that the data curator is always trusted. Hence, we adopt the local version of differential privacy, which can guarantee the privacy of users under an untrusted data curator.
Specifically, LDP requires the following setting. Suppose there exist an untrusted recommendation system and $m$ users. Each user $u_{i}$ holds a data item $x_{i}$ on their mobile device, and the recommendation system wants to know the aggregated result of $x_{1}, x_{2}, \cdots, x_{m}$. To guarantee the privacy of users, each user perturbs $x_{i}$ to obtain a noisy version of $x_{i}$, say $x_{i}'$. Then, the recommendation server receives $x_{i}'$ instead of $x_{i}$, and calculates the aggregate result based on $x_{1}', x_{2}', \cdots, x_{m}'$. With regard to perturbing $x_{i}$, LDP requires that the recommendation server cannot infer the original value $x_{i}$ from the perturbed value $x_{i}'$ with high probability. The probability is decided by a privacy parameter $\varepsilon$, which controls the level of the privacy guarantee. Formally, LDP is defined as follows.
\begin{definition}{\textbf{$\varepsilon$-Local Differential Privacy}}
\label{def1}
A randomized mechanism $\mathcal{A}$ satisfies $\varepsilon$-LDP if, for any two input values $x_{1}, x_{2} \in Domain(\mathcal{A})$ and any possible output value $x'$ of $\mathcal{A}$, we have that
\begin{equation*}
Pr[\mathcal{A}(x_{1})=x'] \leq e^{\varepsilon}Pr[\mathcal{A}(x_{2})=x']
\end{equation*}
\end{definition}
One of the methods that can realize LDP is the randomized response \cite{warner1965randomized}. The randomized response provides plausible deniability by allowing each user to answer truthfully or at random. Specifically, suppose $m$ users hold a binary value and a data curator wants to know the number of users with value 1. Each user tosses a biased coin in private, and sends the original value $x$ with probability $p$ and $1-x$ with probability $q$. Although the curator cannot infer an individual user's true value, the curator can still estimate aggregate results. For example, the number of users who hold 1 can be estimated by $\frac{c-mq}{p-q}$. Here, $c$ denotes the number of users who send 1 to the curator. It is proved that by setting $p=\frac{e^{\varepsilon}}{e^{\varepsilon} + 1}$ ($q=1-p$), the randomized response satisfies $\varepsilon$-LDP. In our work, we use an optimized version of the randomized response. Specifically, Wang et al.\cite{b15} proved that by setting $p=\frac{1}{2}$ and $q=\frac{1}{e^{\varepsilon} + 1}$, the variance of the estimated count $\frac{c-mq}{p-q}$ is minimized.
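The optimized randomized response and the estimator $\frac{c-mq}{p-q}$ can be sketched as follows (an illustrative simulation; the population size and the fraction of users holding 1 are made-up values):

```python
import numpy as np

EPS = 1.0
P = 0.5                        # Pr[report 1 | true value 1]
Q = 1.0 / (np.exp(EPS) + 1.0)  # Pr[report 1 | true value 0]

def perturb(x, rng):
    """Optimized randomized response for one private bit x in {0, 1}."""
    prob_one = P if x == 1 else Q
    return int(rng.random() < prob_one)

def estimate_count(reports):
    """Unbiased estimate of the number of users whose true value is 1."""
    m = len(reports)
    c = sum(reports)
    return (c - m * Q) / (P - Q)

rng = np.random.default_rng(7)
true_values = [1] * 6000 + [0] * 14000  # 6,000 of 20,000 users hold 1
reports = [perturb(x, rng) for x in true_values]
# estimate_count(reports) is close to 6,000, even though the server
# never learns any individual user's true bit.
```

Since $\mathbb{E}[c] = 6000\,p + 14000\,q$, substituting into $\frac{c-mq}{p-q}$ recovers 6,000 in expectation, which is why the estimator is unbiased.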
Moreover, since LDP is also a variant of DP, LDP satisfies the composition theorem \cite{b16}. The composition theorem describes that if an algorithm consists of multiple differentially private mechanisms, the algorithm also satisfies differential privacy. Here, we introduce the sequential composition theorem that we utilize in this paper as follows.
\begin{theorem}{\textbf{Sequential Composition}}
\label{theorem1}
Suppose an algorithm $\mathcal{F}$ consists of $n$ LDP mechanisms ($\mathcal{A}_{1},\cdots,\mathcal{A}_{n}$), where each satisfies ($\varepsilon_{1},\cdots,\varepsilon_{n}$)-LDP. Then, $\mathcal{F}$ provides $\sum_{i=1}^{n}{\varepsilon_{i}}$-LDP.
\end{theorem}
Because of the composition theorem, we usually refer $\varepsilon$ as a privacy budget. In other words, to guarantee the $\varepsilon$-LDP, each LDP mechanism should use a part of $\varepsilon$, and the sum should be no more than $\varepsilon$.
\section{Problem Definition}
\label{problem definition}
\subsection{Successive POI Recommendation}
\label{successive POI recommendation}
We first define the basic notions for successive POI recommendation.
\begin{definition}
\label{def2}{\textbf{Check-in history}}
Let $\mathcal{L}$ be the set of POIs, where the size of $\mathcal{L}$ is equal to $n$. Then, a check-in history of user $u$ can be defined by the series of check-ins $c_{u}^{1},c_{u}^{2},\cdots,c_{u}^{t}$. Here, each check-in $c_{u}^{t}$ can be represented as ($l$, $t$), which means that user $u$ visits POI $l\in\mathcal{L}$ at time $t$.
\end{definition}
Based on Definition \ref{def2}, we can formally define the successive POI recommendation problem as follows. Given a user $u\in\mathcal{U}$ and the series of his/her check-ins $c_{u}^{1},c_{u}^{2},\cdots,c_{u}^{t}$, the objective of the POI recommendation system is to recommend a suitable POI for user $u$ at time $t+1$.
\begin{table}[t]
\caption{A result of successive POI recommendation using a naive method.}
\label{table2}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& top-3 & top-5 & top-7 & top-10\\
\hline
\#user & 1,990 & 2,888 & 3,581 & 4,400\\
\hline
\end{tabular}
\end{table}
\subsection{A Naive Solution for Successive POI recommendation}
\label{a navie solution for successive POI recommendation}
The objective of this paper is to privately recommend the next POI based on the user's current location. Before introducing our method, we sketch a simple solution here. As explained in Section \ref{matrix factorization}, we need an explicit rating from each user to build a POI recommendation system. Accordingly, we should ask each user about their preferences for POIs. However, we assume that only the users' check-in histories are available. Thus, we use the visiting counts as implicit feedback about which POIs the users probably like.
Let $P=\{ r_{ij} \}_{m\times n}$ be a user-POI matrix, where each element indicates the visiting count of user $u_{i}$ at POI $l_{j}$. Then, the task is to decompose $P$ into two matrices that represent the latent factors of users and POIs, respectively. By multiplying the learned latent matrices, we can predict the preferences of users for POIs they have not visited before.
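To make this naive pipeline concrete, the following minimal Python sketch (using \texttt{numpy} on a hypothetical toy check-in list) builds the user-POI count matrix and predicts preferences from already-learned latent factors; the factorization step itself is omitted:

```python
import numpy as np

def build_user_poi_matrix(checkins, m, n):
    """Build the user-POI matrix P from (user, poi) check-in pairs,
    where each entry is a visiting count used as implicit feedback."""
    P = np.zeros((m, n))
    for user, poi in checkins:
        P[user, poi] += 1
    return P

def predict_preferences(U, V):
    """Predict unobserved preferences by multiplying the learned
    user latent matrix U (m x d) and POI latent matrix V (n x d)."""
    return U @ V.T

# Hypothetical toy data: 3 users, 4 POIs.
checkins = [(0, 1), (0, 1), (1, 2), (2, 3), (2, 0)]
P = build_user_poi_matrix(checkins, m=3, n=4)
print(P[0, 1])  # user 0 visited POI 1 twice
```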
To evaluate the performance of this system, we use Chicago taxi trip data\footnote{https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew/data}. This dataset contains 112,860,054 records describing the pick-up and drop-off locations of taxis in Chicago. We sampled and reconstructed the dataset so that it contains 10,000 users and 373 POIs. Each user has 20 check-ins, and we use the first 19 check-ins as a training set. We evaluate the performance of the recommendation system using the latest check-in: if the POI of the latest check-in appears among the recommended POI candidates, we consider the recommendation successful.
Table \ref{table2} shows the number of users whose latest POI is included in the recommended POIs. The accuracy of the recommendation system increases as the system recommends more POI candidates to users. However, even when the system recommends the top-10 POI candidates, only 44\% of users receive a suitable candidate. We conclude that failing to capture POI transition preferences leads to inaccurate POI recommendation.
\section{Proposed Method}
\label{proposed method}
\subsection{SPIREL Framework}
\label{SPIREL framework}
We now introduce our SPIREL framework, which extends existing private recommendation models to consider the relationship between POIs. An overview of the SPIREL framework is illustrated in Figure \ref{fig1}. As shown in Section \ref{a navie solution for successive POI recommendation}, using the visiting counts alone is not enough to predict the next POI. To overcome this limitation, we adopt the transfer learning approach \cite{b17,b18,b19}.
In matrix factorization, transfer learning is used to address the sparsity problem in the user-item matrix. Since no user can have experience with all items, there are many missing ratings. Accordingly, overfitting occurs frequently, which degrades the quality of the recommendation. To mitigate this, the transfer learning approach takes advantage of auxiliary data, which reduces the impact of sparsity in the rating matrix. For example, the approach in \cite{b17} utilizes binary ratings (like/dislike) to compensate for the lack of numerical ratings for each item.
In this paper, we derive one more piece of auxiliary information from users' check-in histories to capture the POI-POI relationship. To model this relationship, we assume that the next POI depends on the user's current POI. Accordingly, we can model the relationship between two POIs by a POI-POI matrix, where each element represents the number of occurrences of a specific transition pattern. For example, the user in Figure \ref{fig1} has four transition patterns ($POI_{1} \rightarrow POI_{2}$, $POI_{2} \rightarrow POI_{3}$, $POI_{3} \rightarrow POI_{4}$, $POI_{4} \rightarrow POI_{2}$).
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{fig2.pdf}
\caption{Illustration of transfer learning in our SPIREL framework.}
\label{fig2}
\end{figure}
Ultimately, our main idea is that the next POI shares the same latent factors between the user-POI and POI-POI relationships, based on the transfer learning approach. Figure \ref{fig2} illustrates the graphical model of transfer learning in SPIREL. Here, the POI latent matrix $V$ is shared and connects both the user-POI matrix $P$ and the POI-POI matrix $Q$ in matrix factorization. In this way, the transition patterns and visiting counts are integrated in the learning process. Accordingly, our objective function must be modified from Equation \ref{eq1} so that $P$ and $Q$ are factorized simultaneously. We describe the new objective function in Equation \ref{eq5}.
\begin{equation}
\label{eq5}
\mathcal{L}_{SPIREL} = ||P - UV^{\intercal}||^{2} + ||Q - VV^{\intercal}||^{2} + \lambda (||U||^{2} + ||V||^{2})
\end{equation}
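For concreteness, the objective in Equation \ref{eq5} can be evaluated with a few lines of \texttt{numpy}; this is only an illustrative sketch of the loss itself, not the private optimization procedure:

```python
import numpy as np

def spirel_loss(P, Q, U, V, lam):
    """SPIREL objective: squared reconstruction error of the user-POI
    matrix P (as U V^T) and the POI-POI matrix Q (as V V^T), plus
    L2 regularization on both latent matrices."""
    rec_p = np.sum((P - U @ V.T) ** 2)
    rec_q = np.sum((Q - V @ V.T) ** 2)
    reg = lam * (np.sum(U ** 2) + np.sum(V ** 2))
    return rec_p + rec_q + reg
```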
The challenge is that directly using the visiting counts and transition patterns to optimize Equation \ref{eq5} can result in privacy breaches. Earlier work \cite{b20} showed that only four successive location points are enough to uniquely identify an individual. Thus, we propose two locally private methods to optimize Equation \ref{eq5}: (1) transition pattern perturbation and (2) gradient perturbation.
\subsection{Transition Pattern Perturbation}
\label{transition pattern perturbation}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig3.pdf}
\caption{An example of POI transition pattern collection under LDP.}
\label{fig3}
\end{figure*}
We now propose our locally private transition pattern collection method. Before presenting it, we first explain a naive method to collect transition patterns under LDP. Suppose there are $n$ POIs in a region and the maximum length of a check-in history is $t$. Then, the domain size of possible check-in histories is $n^{t}$. A naive solution would be to directly collect the frequency of each possible check-in history using the randomized response method. For example, if we set $n=10$ and $t=10$, there are $10^{10}$ possible check-in histories. Directly computing frequencies over such a cardinality is impractical, even for these small values of $n$ and $t$.
We do not need accurate frequencies of each full check-in history to build the POI-POI matrix; we only need to capture the coarse-grained preference of transitions between two POIs. Accordingly, our idea is to sample one transition pattern from each user's check-in history and estimate the frequency of the sampled transition patterns under LDP. Figure \ref{fig3} shows an example of the transition pattern collection process, where each bit represents a specific transition pattern.
We assume that all users share the same POI domain. Each user $u$ then selects one transition pattern $l_{t}^{u} \rightarrow l_{t+1}^{u}$ from his/her check-in history. There are two reasons for this sampling process. First, we can prevent users who have a long sequence of check-ins from contributing too much information to the recommendation server. For example, suppose a user only moves between his home and workplace. If we allow him to contribute his entire check-in history, the recommendation system cannot identify the globally frequent transition patterns. Second, we avoid having each user divide the privacy budget over the large domain of check-in histories, which would negatively affect the accuracy of the estimated frequencies.
\begin{algorithm}[t]
\caption{Transition Pattern Collection using Optimized Randomized Response}
\label{alg1}
\begin{algorithmic}[1]
\Require a privacy budget $\varepsilon$, a number of users $m$, a number of POIs $n$
\Ensure a POI-POI matrix $Q$
\LineComment{\emph{User part} (This protocol is run by the $i$-th user $u_{i}$)}
\State User $u_{i}$ samples a transition pattern $l_{t}^{u_{i}} \rightarrow l_{t+1}^{u_{i}}$ from check-in history
\State User $u_{i}$ initializes a bit string $t$ of length $n^2$ as follows
\For{$j=1$ to $n^2$}
\If {$l_{t}^{u_{i}} \rightarrow l_{t+1}^{u_{i}}$ corresponds to the $j$th position of $t$} $t[j]=1$
\Else {} $t[j]=0$
\EndIf
\State Apply Optimized RR to $t[j]$ to create perturbed bit value $\hat{t}[j]$
\State Set $p = \frac{1}{2}$, $q = \frac{1}{e^{\varepsilon} + 1}$
\State
\begin{equation*}
Pr[\hat{t}[j]=1]=
\begin{cases}
p & \mathrm{if\ } t[j]=1 \\
q & \mathrm{if\ } t[j]=0 \\
\end{cases}
\end{equation*}
\State
\begin{equation*}
Pr[\hat{t}[j]=0]=
\begin{cases}
p & \mathrm{if\ } t[j]=1 \\
1 - q & \mathrm{if\ } t[j]=0 \\
\end{cases}
\end{equation*}
\EndFor
\State Send $\hat{t}$ to the server
\LineComment{\emph{Server part}}
\State Initialize a $n \times n$ matrix $Q$ whose elements are all zero
\For{$i=1$ to $n$}
\For{$j=1$ to $n$}
\State Estimate the frequency of transition pattern $c(POI_{i} \rightarrow POI_{j})=\frac{\sum_{x=1}^{m}\hat{t}_{x}[(i-1)n + j] - mq}{p-q}$
\State Set $Q[i][j] = c(POI_{i} \rightarrow POI_{j})$
\EndFor
\EndFor
\State return $Q$
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg1} shows the detailed process of building the POI-POI matrix. The recommendation server receives a binary bit for each element of the POI-POI matrix, where each bit represents whether a user's sampled transition pattern corresponds to the specific POI-POI relationship. Since directly sending the original bit values can lead to a location privacy breach \cite{b20}, each user perturbs the values using optimized randomized response \cite{b15}. After that, the server aggregates the perturbed bits and estimates the frequency of each transition pattern. Finally, we set the elements of the POI-POI matrix to the estimated frequencies, which reflect the global preference transitions.
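A Python sketch of the per-bit perturbation and the server-side unbiased estimate used in Algorithm \ref{alg1}, with the optimized randomized response parameters $p = 1/2$ and $q = 1/(e^{\varepsilon}+1)$, might look as follows:

```python
import numpy as np

def perturb_bit(bit, eps, rng):
    """Optimized randomized response for a single bit: report 1 with
    probability p = 1/2 if the true bit is 1, and with probability
    q = 1/(e^eps + 1) if the true bit is 0."""
    p, q = 0.5, 1.0 / (np.exp(eps) + 1.0)
    prob_one = p if bit == 1 else q
    return 1 if rng.random() < prob_one else 0

def estimate_count(reports, eps):
    """Server-side unbiased estimate of the number of users whose
    true bit was 1, given the m perturbed reports."""
    m = len(reports)
    p, q = 0.5, 1.0 / (np.exp(eps) + 1.0)
    return (sum(reports) - m * q) / (p - q)
```

In Algorithm \ref{alg1}, this perturbation is applied independently to each of the $n^2$ positions of a user's bit string.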
We now analyze the privacy guarantee of Algorithm \ref{alg1}.
\begin{lemma}
Algorithm \ref{alg1} satisfies $\varepsilon$-LDP.
\begin{proof}
There are $n^2$ possible POI transition patterns. Accordingly, each user initializes a binary bit string of size $n^2$, in which only one bit is set to 1 and the others are set to 0. Specifically, since each user samples only one transition pattern from his/her check-in history, only the associated bit has value 1. To estimate the frequency of each transition pattern, each user perturbs each bit by the optimized randomized response \cite{b15} and submits the perturbed bit string. Suppose the allocated privacy budget is $\varepsilon$ for each user. Because each bit is relevant to only one transition pattern, the entire budget can be used to perturb each bit value. In conclusion, following the analysis in \cite{b15}, Algorithm \ref{alg1} satisfies $\varepsilon$-LDP.
\end{proof}
\end{lemma}
Since each user must submit $n^{2}$ bits, the communication cost could be a concern. However, the transition pattern collection happens only once per user, so the $n^{2}$-bit communication cost is still affordable. Further, the successive POI recommendation task aims to recommend POIs that are likely to be visited by a user, so recommendation systems typically focus on a small region. If the number of POIs is still large, we can use an alternative method called Optimized Local Hashing (OLH) \cite{b15}, which achieves a reduced communication cost.
\subsection{Gradient Perturbation}
\label{gradient perturbation}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig4.pdf}
\caption{Illustration of gradient perturbation.}
\label{fig4}
\end{figure*}
After obtaining a POI-POI matrix by Algorithm \ref{alg1}, the next step is to factorize the two matrices to identify the latent factors of users and POIs. Recall that our objective function (Equation \ref{eq5}) aims to factorize the user-POI matrix and POI-POI matrix simultaneously. Hence, the update rule of Equation \ref{eq5} must be rewritten to learn the two matrices together. In Section \ref{matrix factorization}, we introduced two methods to minimize the quadratic objective function.
We first adopt the ALS method to optimize Equation \ref{eq5}, for two reasons. First, ALS is easy to parallelize, which is suitable in the local setting. For example, the study in \cite{b22} describes a parallel ALS algorithm designed for the Netflix Prize. Second, note that our user-POI and POI-POI matrices consist of implicit feedback derived from users' check-in histories. When optimizing the objective function by SGD with explicit ratings, we can treat missing values as unobserved data and stochastically update the objective function with only the observed ratings. However, with implicit feedback, we cannot tell whether a missing value indicates that the user dislikes the item or simply does not know about it.
In this section, we propose a locally private solution to optimize our objective function. Initially, we need to derive the update rules of Equation \ref{eq5}. We first hold the POI profile vector $\mathbf{v_{j}}$ constant and take the derivative of Equation \ref{eq5} with respect to the user profile vector $\mathbf{u_{i}}$. Then, we can obtain the ALS update rule of the user profile vector as follows.
\begin{equation}
\label{eq6}
\begin{split}
\frac{\partial \mathcal{L}_{SPIREL}}{\partial \mathbf{u_{i}}} &= -2\sum_{j} (r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})\mathbf{v_{j}}^{\intercal} + 2\lambda \mathbf{u_{i}}^{\intercal} \\
\mathbf{u_{i}}^{\intercal} &= P_{i,*}V(V^{\intercal}V + \lambda I)^{-1}
\end{split}
\end{equation}
As shown in Equation \ref{eq6}, the user profile update rule is the same as in Equation \ref{eq4}. Since we hold the POI profile vectors constant, only the user-POI matrix term remains. Additionally, as the POI latent matrix $V$ is publicly known, the server can send $V$ to each user, and the users can update their profile vectors locally without forwarding their visiting counts $P_{i,*}$ to the server.
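As a sketch (hypothetical dimensions, \texttt{numpy} for the linear algebra), the local update of Equation \ref{eq6} is a single closed-form solve on the user's device:

```python
import numpy as np

def update_user_profile(P_i, V, lam):
    """ALS update of Equation (eq6): u_i^T = P_{i,*} V (V^T V + lam I)^{-1}.
    P_i is the user's own row of visiting counts and never leaves
    the device; V is the publicly known POI latent matrix."""
    d = V.shape[1]
    return P_i @ V @ np.linalg.inv(V.T @ V + lam * np.eye(d))
```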
Next, we list the ALS update rule of POI profile vector as follows.
\begin{equation}
\label{eq7}
\begin{split}
\frac{\partial \mathcal{L}_{SPIREL}}{\partial \mathbf{v_{j}}} &= -2\sum_{i} (r_{ij}-\mathbf{v_{j}}^{\intercal}\mathbf{u_{i}})\mathbf{u_{i}}^{\intercal} \\
& -2 \sum_{k}(s_{kj}-\mathbf{v_{j}}^{\intercal}\mathbf{v_{k}})\mathbf{v_{k}}^{\intercal}
+ 2\lambda \mathbf{v_{j}}^{\intercal} \\
0 &= -(P_{*,j}^{\intercal} - \mathbf{v_{j}}^{\intercal}U^{\intercal})U \\
& -(Q_{*,j}^{\intercal} - \mathbf{v_{j}}^{\intercal}V^{\intercal})V + \lambda \mathbf{v_{j}}^{\intercal} \\
P_{*,j}^{\intercal}U + Q_{*,j}^{\intercal}V &= \mathbf{v_{j}}^{\intercal}(U^{\intercal}U + V^{\intercal}V + \lambda I) \\
\mathbf{v_{j}}^{\intercal} &= (P_{*,j}^{\intercal}U + Q_{*,j}^{\intercal}V)(U^{\intercal}U + V^{\intercal}V + \lambda I)^{-1}
\end{split}
\end{equation}
Here, $P_{*,j}$ denotes the $j$-th column vector of matrix $P$, and $s_{kj}$ denotes the $(k,j)$-th element of matrix $Q$. To update the POI profile vector, in contrast to the user profile vector, we need the user latent matrix $U$, which can reveal the users' preferences when multiplied by the POI latent matrix $V$. To prevent privacy breaches, one option is to let users perturb their profile vectors directly and submit the perturbed vectors to the recommendation server. However, directly perturbing the profile vectors would severely distort the users' preferences, leading to low recommendation quality.
To circumvent the above issue, we instead apply SGD to update the POI profile vectors. That is, instead of adding noise to the user profile vectors directly, we let each user submit perturbed gradients of Equation \ref{eq5}. The recommendation server then aggregates the perturbed gradients and updates the POI profile vectors based on SGD. We can rewrite the gradient of a POI profile vector as follows.
\begin{equation}
\label{eq8}
\begin{split}
\nabla_{\mathbf{v_{j}}}\mathcal{L}_{SPIREL} & =
-\sum_{i=1}^{m} 2\mathbf{u_{i}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})\\
& - \sum_{k=1}^{n} 2\mathbf{v_{k}}(s_{kj}-\mathbf{v_{k}}^{\intercal}\mathbf{v_{j}})
+ 2\lambda\mathbf{v_{j}}
\end{split}
\end{equation}
Equation \ref{eq8} consists largely of two terms: $\mathbf{u_{i}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})$ and $\mathbf{v_{k}}(s_{kj}-\mathbf{v_{k}}^{\intercal}\mathbf{v_{j}})$. The recommendation server can calculate the term $\mathbf{v_{k}}(s_{kj}-\mathbf{v_{k}}^{\intercal}\mathbf{v_{j}})$ by itself because the value $s_{kj}$ is already obtained by Algorithm \ref{alg1}. The other term $\mathbf{u_{i}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})$ contains the prediction error for the visiting count $r_{ij}$. Hence, the users should submit perturbed gradients to prevent the recommendation server from learning whether user $i$ visits POI $j$. To perturb the term $\mathbf{u_{i}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})$, we apply the randomized response method of Wang et al. \cite{b23}, called the Piecewise Mechanism (PM).
\begin{algorithm}[t]
\caption{Gradient Perturbation using Piecewise Mechanism}
\label{alg2}
\begin{algorithmic}[1]
\Require a privacy budget $\varepsilon$, a user profile vector $\mathbf{u_{i}}$, a prediction error $e_{ij} = (r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})$, a profile vector size $d$
\Ensure a perturbed gradient value $d\hat{e_{ij}}\mathbf{u_{i}}[t]$
\State User $i$ selects a value $t$ from $\{1,2,\cdots,d\}$
\State Project $e_{ij}$ into the range [-1, 1]
\State Set $C = \frac{e^{\varepsilon/2} + 1}{e^{\varepsilon/2} - 1}$
\State Set $l(e_{ij}) = \frac{C+1}{2} \cdot e_{ij} - \frac{C - 1}{2}$
\State Set $r(e_{ij}) = l(e_{ij}) + C - 1$
\State User $i$ selects a value $x$ from $[0, 1]$
\If{$x < \frac{e^{\varepsilon/2}}{e^{\varepsilon/2} + 1}$}
\State User $i$ selects a value $\hat{e_{ij}}$ from $[l(e_{ij}), r(e_{ij})]$
\Else{}
\State User $i$ selects a value $\hat{e_{ij}}$ from
\State $[-C, l(e_{ij})) \cup (r(e_{ij}), C]$
\EndIf
\State return $d\hat{e_{ij}}\mathbf{u_{i}}[t]$
\end{algorithmic}
\end{algorithm}
Figure \ref{fig4} and Algorithm \ref{alg2} demonstrate the gradient perturbation process that utilizes PM. PM addresses the problem of estimating the mean of perturbed numeric values. Assume that the input value of PM is in the range $[-1, 1]$. To reduce the estimation error, PM first builds a probability distribution with three pieces that bound the output value within the range $[-C, C]$: (1) a left piece $[-C, l(e_{ij}))$, (2) a center piece $[l(e_{ij}), r(e_{ij})]$ and (3) a right piece $(r(e_{ij}), C]$. The center piece moves along with the input value while its length ($C-1$) stays unchanged. PM then outputs a perturbed numeric value from the center piece with relatively high probability.
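A possible Python sketch of the PM sampler in Algorithm \ref{alg2} (one-dimensional case; the two tail pieces are sampled uniformly over their union, which makes the output an unbiased estimate of the input):

```python
import numpy as np

def piecewise_mechanism(e, eps, rng):
    """Perturb a value e in [-1, 1] with the Piecewise Mechanism.
    The output lies in [-C, C] and falls in the center piece
    [l(e), r(e)] with probability e^{eps/2} / (e^{eps/2} + 1)."""
    e = float(np.clip(e, -1.0, 1.0))
    s = np.exp(eps / 2)
    C = (s + 1) / (s - 1)
    l = (C + 1) / 2 * e - (C - 1) / 2
    r = l + C - 1
    if rng.random() < s / (s + 1):
        return rng.uniform(l, r)          # center piece
    # tail pieces: uniform over [-C, l) U (r, C], total length C + 1
    left_len, right_len = l + C, C - r
    if rng.random() < left_len / (left_len + right_len):
        return rng.uniform(-C, l)
    return rng.uniform(r, C)
```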
\begin{lemma}
Algorithm \ref{alg2} satisfies $\varepsilon$-LDP.
\begin{proof}
We can compute the probability distribution function of Algorithm \ref{alg2} as follows.
\begin{equation*}
\hat{e_{ij}}=
\begin{cases}
[l(e_{ij}), r(e_{ij})] & with \; prob. \; \frac{e^{\varepsilon} - e^{\varepsilon/2}}{2(e^{\varepsilon/2} + 1)} \\
[-C, l(e_{ij})) & with \; prob. \; \frac{e^{\varepsilon/2} -1}{2e^{\varepsilon/2}(e^{\varepsilon/2}+1)} \\
(r(e_{ij}), C] & with \; prob. \; \frac{e^{\varepsilon/2} -1}{2e^{\varepsilon/2}(e^{\varepsilon/2}+1)} \\
\end{cases}
\end{equation*}
Let $e_{ij}$ and $e_{ij}'$ be two distinct prediction error values. Then, in the worst case, Algorithm \ref{alg2} outputs the same perturbed value $\hat{e_{ij}}$ with the following probability ratio.
\begin{equation*}
\frac{Pr[\mathcal{A}(e_{ij})=\hat{e_{ij}}]}{Pr[\mathcal{A}(e_{ij}')=\hat{e_{ij}}]} \leq \frac{\frac{e^{\varepsilon} - e^{\varepsilon/2}}{2(e^{\varepsilon/2} + 1)}}{\frac{e^{\varepsilon/2} -1}{2e^{\varepsilon/2}(e^{\varepsilon/2}+1)}} = e^{\varepsilon}
\end{equation*}
Thus, Algorithm \ref{alg2} satisfies $\varepsilon$-LDP.
\end{proof}
\end{lemma}
Since each user submits a noisy gradient for one randomly sampled dimension of the profile vector through Algorithm \ref{alg2}, the server cannot learn which POIs the user visits. Nevertheless, the server can still estimate the term $-\sum_{i=1}^{m} 2\mathbf{u_{i}}(r_{ij}-\mathbf{u_{i}}^{\intercal}\mathbf{v_{j}})$ by taking the average of the perturbed gradients.
Finally, to satisfy $\varepsilon$-LDP, we must divide the privacy budget between Algorithms \ref{alg1} and \ref{alg2} according to Theorem \ref{theorem1}. Namely, users spend $\varepsilon_{1}$ to perturb their transition pattern and $\varepsilon_{2}$ to perturb the gradient of the selected dimension, such that $\varepsilon_{1} + \varepsilon_{2} = \varepsilon$.
\subsection{Calculating preferences}
\label{calculating preferences}
With the learned profile vectors of users and POIs, SPIREL can provide next-POI candidates that take the user's current location into account. Specifically, suppose user $i$ is currently at POI $j$. Then, the preference of user $i$ for the next POI $k$ can be calculated as follows:
\begin{equation}
\label{eq9}
pref_{i}^{k} = \mathbf{u_{i}}^{\intercal}\mathbf{v_{k}} + \mathbf{v_{j}}^{\intercal}\mathbf{v_{k}}
\end{equation}
Equation \ref{eq9} consists of two preferences. The first term $\mathbf{u_{i}}^{\intercal}\mathbf{v_{k}}$ indicates the personal preference for the next POI $k$, and the second term $\mathbf{v_{j}}^{\intercal}\mathbf{v_{k}}$ represents the transition preference from POI $j$ to POI $k$. By taking the sum of the two preferences, we can recommend the top-$k$ next POIs to users. Note that because the POI profile vectors are publicly known, users do not have to submit their current location to the server: once the recommendation server sends all POI profile vectors to a user, the user can rank the preferences for all next POIs locally.
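A sketch of this local ranking step (Equation \ref{eq9}), assuming the server has already shipped the public POI latent matrix $V$ to the device:

```python
import numpy as np

def next_poi_preferences(u_i, V, current_poi):
    """Equation (eq9): personal preference u_i^T v_k plus transition
    preference v_j^T v_k, computed for every candidate POI k at once.
    Runs on the user's device, so the current POI j stays local."""
    return V @ u_i + V @ V[current_poi]

def top_k_pois(prefs, k):
    """Indices of the k highest-preference POIs, best first."""
    return np.argsort(prefs)[::-1][:k]
```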
\subsection{Optimizations}
\label{optimizations}
\subsubsection{Learning With User Group}
\label{learning with user group}
We further consider an optimization technique to estimate the perturbed gradients more accurately. In a non-private setting, the matrix factorization process can stop when the prediction error is small enough. Under a private setting, however, the recommendation server only observes perturbed gradients and therefore cannot judge convergence; it runs the learning process for a fixed number of $k$ iterations. Accordingly, the existing method \cite{b12} lets users split their privacy budget over the $k$ iterations, where each gradient submission guarantees $\varepsilon/k$-LDP.
When answering multiple questions, the authors of \cite{b15} proved that partitioning users into groups is better than splitting the privacy budget in terms of accuracy of the aggregated result under LDP. Our optimization is based on this idea. Specifically, we partition the users randomly into $k$ groups and ask each user group to submit their perturbed gradients using the entire privacy budget. After that, the recommendation server stochastically updates the POI latent matrix with the mean of the perturbed gradients of each user group.
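A minimal sketch of this grouping (random assignment with \texttt{numpy}; at each iteration one group reports with its full budget):

```python
import numpy as np

def partition_users(num_users, k, rng):
    """Randomly partition user indices into k disjoint groups.
    At iteration i, only group i submits perturbed gradients,
    each member spending its entire privacy budget."""
    perm = rng.permutation(num_users)
    return np.array_split(perm, k)
```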
\subsubsection{Normalizing Implicit Feedback}
\label{normalizing implicit feedback}
Note that we model the movement of people based on transitions between two POIs. Furthermore, as explained in Section \ref{transition pattern perturbation}, we let each user sample one transition pattern from their check-in history and build a noisy POI-POI matrix. This means that the server does not have any explicit feedback from users regarding their transition preferences. Even worse, the estimated frequencies depend on the number of users participating in the recommendation system. To sum up, we need to normalize the estimated frequencies to infer transition preferences indirectly.
Early work on matrix factorization for implicit feedback \cite{b24} defines the notion of confidence, which increases linearly with the amount of implicit feedback. For example, the authors of \cite{b24} suggest $c_{ui} = 1 + \alpha r_{ui}$ as a plausible choice for the confidence value. Here, $r_{ui}$ is referred to as an \textit{observation}, indicating the amount of implicit feedback of user $u$ on item $i$, and the constant $\alpha$ controls the rate at which confidence increases.
We cannot directly use the above notion in our framework, for two reasons. First, a frequency estimated under LDP can be negative: if the true frequency is close to zero, the unbiased noisy estimate can take a negative value. In this situation, we cannot assume that a negative frequency indicates that users dislike the transition pattern, since the true count may simply be zero. Second, the estimated frequencies depend on the total number of users participating in the recommendation system. Note that the estimated count is calculated by $\frac{\sum_{x=1}^{m}\hat{t}_{x} - mq}{p-q}$, so the overall frequencies keep growing as more users participate in the recommendation system.
To overcome this issue, we use the sigmoid function and set the ($i,j$)-th element of matrix $Q$ to $1 + sigmoid(Q[i][j])$. The sigmoid function can be defined as $sigmoid(x)=\frac{1}{1+e^{-x}}$. This sigmoid function has several properties. First, the sigmoid function has an input domain of all real numbers. Second, its output is monotonically increasing and is in the range $[0,1]$. A wide variety of sigmoid functions are used as the activation function of neural networks. In our work, we utilize this function to bound the estimated frequencies of transition patterns.
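This normalization is a one-liner; the sketch below maps every estimated (possibly negative) frequency into the bounded range $[1, 2]$ while preserving the ordering of the estimates:

```python
import numpy as np

def normalize_counts(Q):
    """Set each POI-POI entry to 1 + sigmoid(x). Negative noisy
    frequencies map below 1.5, large positive ones approach 2,
    and the relative ordering of the estimates is preserved."""
    return 1.0 + 1.0 / (1.0 + np.exp(-Q))
```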
\subsubsection{Accelerating Learning Process}
\label{accelerating learning process}
As mentioned in Section \ref{gradient perturbation}, we update the user profile vectors with ALS and the POI profile vectors with SGD. Generally, SGD requires more update iterations to reach the optimum than ALS. Even worse, we experimentally found that SGD can easily get stuck in a local optimum of Equation \ref{eq5}. Consequently, we use the Adam optimizer \cite{b25}, a variant of gradient descent that is known to converge quickly. Briefly, the Adam optimizer maintains exponentially decaying averages of past gradients $m$ and squared gradients $v$ as follows.
\begin{equation}
\label{eq10}
\begin{split}
m &= \beta_{1}m + (1-\beta_{1})\nabla_{\mathbf{v_{j}}}\mathcal{L}_{SPIREL} \\
v &= \beta_{2}v + (1-\beta_{2})(\nabla_{\mathbf{v_{j}}}\mathcal{L}_{SPIREL})^{2}
\end{split}
\end{equation}
Since $m$ and $v$ are initialized to 0, the Adam optimizer performs bias correction at iteration $t$ as follows.
\begin{equation}
\label{eq11}
\hat{m} = \frac{m}{1-\beta_{1}^{t}}, \quad \hat{v} = \frac{v}{1-\beta_{2}^{t}}
\end{equation}
Finally, the update rule is given by as follows.
\begin{equation}
\label{eq12}
\mathbf{v_{j}} = \mathbf{v_{j}} - \frac{\gamma}{\sqrt{\hat{v}}+\epsilon}\hat{m}
\end{equation}
Here, $\beta_{1}$ and $\beta_{2}$ control the decay rates of the moving averages, $\gamma$ is the learning rate, and $\epsilon$ is used to avoid division by zero.
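Putting Equations \ref{eq10}--\ref{eq12} together, one Adam update of a POI profile vector can be sketched as follows (standard bias correction with the iteration counter $t$; the hyperparameter defaults are common choices, not values from this paper):

```python
import numpy as np

def adam_step(v_j, grad, m, v, t, gamma=0.01,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponentially decaying averages of gradients
    and squared gradients, bias correction, then the parameter step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    v_j = v_j - gamma * m_hat / (np.sqrt(v_hat) + eps)
    return v_j, m, v
```

For instance, repeatedly applying this step to the gradient of a toy quadratic drives the iterate toward its minimum.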
\section{Evaluation}
\label{evaluation}
In this section, we evaluate our SPIREL method in various settings to demonstrate its performance in the successive POI recommendation task. All experiments were performed on a server with an Intel i9-9900X CPU, 128GB of memory and a GeForce RTX 2080 GPU. In all experiments, we average results over 10 runs.
\subsection{Experimental Setup}
\label{Experimental Setup}
\textbf{Datasets.} We use two public datasets: \textit{Gowalla}\footnote{https://snap.stanford.edu/data/loc-gowalla.html} and \textit{TaxiTrip}\footnote{https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew/data}. \textit{Gowalla} contains 6,442,890 check-ins. In this experiment, we use the part of the \textit{Gowalla} records located in LA, with a total of 9,617 users and 585 POIs. Next, \textit{TaxiTrip} includes Chicago taxi trip records that represent the pick-up and drop-off locations of 6,558 taxis. Since each record does not contain any intermediate POIs, we slightly modified this dataset. We first build a road network of the Chicago area and count the number of passengers for each pick-up and drop-off location. Based on this distribution, we generate the check-in histories of 267,739 users with 526 POIs. We also sample 10,000 users and 373 POIs from the \textit{TaxiTrip} dataset, which we refer to as \textit{TaxiTrip-Small}. In the \textit{Gowalla} dataset each user has 10 check-ins, while in the \textit{TaxiTrip} and \textit{TaxiTrip-Small} datasets each user has 20 check-ins.
\\
\textbf{Methods.} Whereas Hua et al. \cite{b10}, Zhang et al. \cite{b11} and Shin et al. \cite{b12} only consider the relationship between users and items, SPIREL is the first work to consider the relationship between POIs while guaranteeing LDP. That is, there is no existing private recommendation method that can be directly compared with SPIREL. Accordingly, we evaluate SPIREL against two methods. First, we compare SPIREL with a non-private baseline method, called \textit{NPB}, which predicts a user's preference on POIs by factorizing only the user-POI matrix; \textit{NPB} is also trained with SGD. Next, we compare with the private version of \textit{NPB}, referred to as \textit{PB}. In the learning process of \textit{PB}, SGD follows the LDP protocol of \cite{b12}. For all methods, we exclude the latest check-in from each check-in history and train the recommendation system with the remaining check-ins.
\\
\textbf{Metrics.} We employ two metrics to evaluate recommendation quality: Recall@$k$ and Mean Reciprocal Rank (MRR). In the successive POI recommendation task, Recall@$k$ indicates the ratio of users whose latest POI is in the top-$k$ POI candidates. Because there is only one correct answer in this task, we do not use Precision@$k$. MRR is widely used in evaluating recommendation quality: for a single recommendation result, the reciprocal rank is $\frac{1}{rank}$, where $rank$ is the position of the correct answer in the recommendation list. We evaluate Recall@$k$ and MRR for the top-$k\in\{3,5,7,10\}$ results.
\\
\textbf{Parameters.} Next, we describe the parameter values used in experiments. We experiment with five values of privacy budget $\varepsilon \in \{0.2,0.4,0.6,0.8,1.0\}$ and use $1.0$ as default value. The regularization parameter $\lambda$ is $10^{-8}$. For \textit{NPB} and \textit{PB}, we set the size of profile vector $d$ to 5 for \textit{Gowalla} and 10 for \textit{TaxiTrip} and \textit{TaxiTrip-Small}. Further, we set $d$ to 10 for \textit{Gowalla} and 15 for \textit{TaxiTrip} and \textit{TaxiTrip-Small} in our SPIREL. The learning rate $\gamma$ is set to 1 for \textit{PB} and SPIREL. In \textit{NPB}, $\gamma$ is set to 0.01 for \textit{Gowalla}, 0.005 for \textit{TaxiTrip-Small} and 0.001 for \textit{TaxiTrip}, respectively.
\subsection{Experimental Results}
\label{Experimental Results}
\textbf{Effects of normalizing implicit feedback.} We first evaluate the effects of the optimization techniques applied in SPIREL, starting by running SPIREL on the \textit{Gowalla} dataset without applying the sigmoid function to the estimated transition pattern frequencies. Figure \ref{fig5} illustrates the Recall@$5$ of SPIREL. From the result, we observe that without normalization, SPIREL cannot accurately predict the next POI. In conclusion, normalizing the transition pattern frequencies significantly enhances the accuracy of SPIREL.
\\
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Sigmoid.pdf}
\caption{Recall@$5$ of SPIREL on \textit{Gowalla} dataset with and without Sigmoid function.}
\label{fig5}
\end{figure}
\\
\begin{figure}[b]
\centering
\includegraphics[width=0.35\textwidth]{Adam.pdf}
\caption{RMSE of user-POI matrix and POI-POI matrix over 15 iterations with and without Adam optimizer.}
\label{fig6}
\end{figure}
\textbf{Effects of Adam optimizer.} Next, we measure the effects of the Adam optimizer, which we apply when training the POI-POI matrix. Figure \ref{fig6} plots the root mean square error (RMSE) of the user-POI and POI-POI matrices over 15 iterations on the \textit{Gowalla} dataset. As shown in Figure \ref{fig6}, the RMSE of the user-POI matrix converges after only a few iterations regardless of the Adam optimizer. However, without the Adam optimizer, the RMSE of the POI-POI matrix decreases slowly and gets stuck in a local optimum, as explained in Section \ref{accelerating learning process}. In contrast, with the Adam optimizer the RMSE of the POI-POI matrix keeps decreasing and falls below that of SGD after 3 iterations.
\\
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Iteration.pdf}
\caption{Recall@$5$ and MRR of SPIREL when varying the number of iterations on \textit{Gowalla} dataset.}
\label{fig7}
\end{figure}
\textbf{Varying number of iterations.} Figure \ref{fig7} shows how the Recall@$5$ and MRR of SPIREL change with the maximum number of iterations on the \textit{Gowalla} dataset. From the figure, we observe that SPIREL attains the best performance at around 2 iterations. SPIREL maintains similar Recall@$5$ and MRR up to around 30 iterations, and its performance drops off after 40 iterations. The reason is that, as the number of iterations grows, the number of user groups also increases. Consequently, the number of users in each group decreases, which results in noisier gradients at each iteration. Thus, we choose 10 as the default number of iterations, which achieves both suitable RMSE and good prediction performance.
\\
\begin{figure}[b]
\centering
\includegraphics[width=0.35\textwidth]{BudgetRatio.pdf}
\caption{Recall@$5$ and MRR of SPIREL when varying the privacy budget ratio on \textit{Gowalla} dataset.}
\label{fig8}
\end{figure}
\textbf{Varying privacy budget allocation ratio.} In Figure \ref{fig8}, we additionally evaluate the prediction performance of SPIREL with respect to the privacy budget allocation ratio. Note that SPIREL consists of two perturbation methods. Thus, each user of SPIREL should divide their privacy budget between them according to Theorem \ref{theorem1}. In this experiment, we set the entire privacy budget $\varepsilon$ to $1.0$ and change the allocation ratio from 1:9 to 9:1 (transition perturbation : gradient perturbation). The result shows that SPIREL achieves the best performance when the privacy budget is equally assigned. Thus, we divide the privacy budget equally in all other experiments.
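Under sequential composition, the two perturbation mechanisms share the total budget, so the allocation ratio simply partitions $\varepsilon$. A minimal sketch (function and variable names are ours):

```python
def split_budget(total_eps, ratio_transition, ratio_gradient):
    """Partition a user's total privacy budget between the transition
    perturbation and the gradient perturbation; by sequential
    composition the two parts must sum to the total budget."""
    s = ratio_transition + ratio_gradient
    return (total_eps * ratio_transition / s,
            total_eps * ratio_gradient / s)

# Default equal split (5:5) and the most skewed setting (1:9).
eps_t, eps_g = split_budget(1.0, 5, 5)
skew_t, skew_g = split_budget(1.0, 1, 9)
```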
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{Gowalla-Recall.pdf}
\caption{\textit{Gowalla}}
\label{fig9-1}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiSmall-Recall.pdf}
\caption{\textit{TaxiTrip-Small}}
\label{fig9-2}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiTrip-Recall.pdf}
\caption{\textit{TaxiTrip}}
\label{fig9-3}
\end{subfigure}
\caption{Recall@$k$ on the three datasets.}
\label{fig9}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{Gowalla-MRR.pdf}
\caption{\textit{Gowalla}}
\label{fig10-1}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiSmall-MRR.pdf}
\caption{\textit{TaxiTrip-Small}}
\label{fig10-2}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiTrip-MRR.pdf}
\caption{\textit{TaxiTrip}}
\label{fig10-3}
\end{subfigure}
\caption{MRR on the three datasets.}
\label{fig10}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{Gowalla-eps.pdf}
\caption{\textit{Gowalla}}
\label{fig11-1}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiSmall-eps.pdf}
\caption{\textit{TaxiTrip-Small}}
\label{fig11-2}
\end{subfigure}\hfill
\begin{subfigure}{0.30\textwidth}
\centering
\includegraphics[width = \textwidth]{TaxiTrip-eps.pdf}
\caption{\textit{TaxiTrip}}
\label{fig11-3}
\end{subfigure}
\caption{Recall@$k$ and MRR of SPIREL on the three datasets.}
\label{fig11}
\end{figure*}
\\
\textbf{Recall@$k$ and MRR comparison.} Figures \ref{fig9} and \ref{fig10} summarize the experimental results of SPIREL compared with the \textit{NPB} and \textit{PB} methods. Recall that \textit{NPB} and \textit{PB} only consider the relationship between users and POIs. Compared to SPIREL, neither method can recommend suitable next POIs based on the user's current location. In particular, the results of \textit{NPB} are in accordance with Section \ref{a navie solution for successive POI recommendation}.
When factorizing the same user-POI matrix of \textit{NPB} using \textit{PB}, we observe that \textit{PB} fails even further to recommend suitable next POIs for most users. Unlike \textit{NPB}, this can be attributed to the noise that \textit{PB} adds over the gradient descent steps. The added noise obscures the unobserved relationships between POIs, and thus fails to provide insight into transition preferences among POIs. This also reveals that the successive POI recommendation problem is not a trivial task for existing private recommendation methods. Based on this intuition, we jointly optimize the user-POI matrix with the POI-POI matrix through the linear combination model.
As shown in Figure \ref{fig9}, SPIREL outperforms all the other methods in terms of Recall@$k$. On the \textit{Gowalla} and \textit{TaxiTrip-Small} datasets, about 60\% of users are recommended suitable POI candidates. The accuracy increases to up to 80\% when the top-10 POIs are given to users. The accuracy even reaches 96\% on the \textit{TaxiTrip} dataset when users are given the top-10 POI candidates. This is because LDP mechanisms are effective mostly when there are many users. These results show that SPIREL can effectively provide next POI candidates by using the POI transition patterns.
In terms of the recommendation quality demonstrated in Figure \ref{fig10}, we observe that SPIREL can provide personalized POI candidates. On average, the correct POI is positioned at rank 2 among the POI candidates on the \textit{Gowalla} and \textit{TaxiTrip-Small} datasets. Furthermore, on \textit{TaxiTrip}, the correct POI is ranked first in the recommendation list for most users. This is because SPIREL also considers the relationship between users and POIs. Thus, SPIREL can accurately predict personalized POI candidates while preserving the location privacy of users.
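For reference, the two evaluation metrics can be computed as follows; this is a generic sketch of Recall@$k$ and reciprocal rank on toy data, not our evaluation code.

```python
def recall_at_k(ranked, true_poi, k):
    """1 if the ground-truth next POI appears in the top-k list, else 0."""
    return int(true_poi in ranked[:k])

def reciprocal_rank(ranked, true_poi):
    """1/rank of the ground-truth POI in the list, 0 if absent."""
    return 1.0 / (ranked.index(true_poi) + 1) if true_poi in ranked else 0.0

# Toy evaluation over two users.
cases = [(["p3", "p1", "p7"], "p1"),   # hit at rank 2
         (["p2", "p9", "p4"], "p5")]   # miss
recall5 = sum(recall_at_k(r, t, 5) for r, t in cases) / len(cases)
mrr = sum(reciprocal_rank(r, t) for r, t in cases) / len(cases)
```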
Finally, we evaluate the performance of SPIREL while varying the privacy budget. Figure \ref{fig11} demonstrates that both recommendation accuracy and quality increase as $\varepsilon$ grows from 0.2 to 1.0. These results are expected because the error of LDP mechanisms is reduced as $\varepsilon$ increases. Furthermore, the performance gap between $\varepsilon=0.2$ and $\varepsilon=1.0$ is very small across the three datasets. This shows that SPIREL maintains high recommendation quality even when $\varepsilon$ is small.
\section{Related Work}
\label{related work} The problem of successive POI recommendation has received much attention recently \cite{b1,b2,b3,b4}. To predict where a user will visit next, we need to consider the relationship between POIs. However, existing private recommendation methods \cite{b10,b11,b12} only focus on learning the relationship between users and items. Our research direction is to incorporate the relationship between POIs by adapting the transfer learning approach \cite{b17,b18,b19}. Most transfer learning methods in collaborative filtering utilize auxiliary domain data by sharing the latent matrix between two different domains. In our work, we use two kinds of data from users' check-in histories: visiting counts and POI transition patterns. We assume that the POI latent factors can bridge the user-POI and POI-POI relationships. To capture the POI-POI relationship, we build a POI-POI matrix, which represents global preference transitions between pairs of POIs. Then, in the learning process, users update their profile vectors based on the visiting counts, which describe the user-POI relationship.
Differential privacy \cite{b14} is a rigorous privacy standard that requires that the output of a DP mechanism not reveal information specific to any individual. DP requires a trusted data curator who collects original data from users. Recently, a local version of DP has been proposed. In the local setting, each user perturbs his/her data and sends the perturbed data to the data curator. Since the original data never leave users' devices, LDP mechanisms have the benefit of not requiring a trusted data curator. Accordingly, many companies have adopted LDP to collect data from their clients privately \cite{b6,b7,b8,b9}.
There are several works applying DP/LDP to recommendation systems \cite{b10,b11,b12}. Hua et al.~\cite{b10} proposed an objective function perturbation method, in which a trusted data curator adds Laplace noise to the objective function so that the factorized item matrix satisfies DP. They also proposed a gradient perturbation method which can preserve the privacy of users' ratings against an untrusted data curator. Zhang et al.~\cite{b11} proposed a probabilistic matrix factorization method with personalized differential privacy. They used a random sampling method to satisfy different users' privacy requirements and then applied the objective function perturbation method to obtain the perturbed item matrix. Finally, Shin et al.~\cite{b12} proposed a new recommendation system under LDP. Specifically, users update their profile vectors locally and submit perturbed gradients in the iterative factorization process. Furthermore, to reduce the error incurred by perturbation, they adopted random projection for dimensionality reduction.
\section{Conclusion}
\label{conclusion} In this paper, we proposed SPIREL, a novel successive POI recommendation system under LDP. We first showed that considering the relationship between POIs is crucial in the successive POI recommendation task. Accordingly, we utilize the POI transition patterns from users' check-in histories. SPIREL further incorporates visiting counts to learn the relationship between users and POIs. Moreover, we introduced two LDP mechanisms to train SPIREL, together with several optimization techniques. Our experimental results on two public datasets show that SPIREL achieves better POI recommendation performance than existing private recommendation methods.
\section{Introduction}
Quantum communication is one of the most practically relevant applications of the quantum technologies, offering the perspective of secure communication
based on physical laws
\cite{Pirandola:20,RevModPhys.92.025002,RevModPhys.74.145,TeleportationReview,Renner2021,Principles,Werner2001,RevSpace,MohsenBook}. While security can be proven to hold
under enormously generous and general
conditions, it can only be guaranteed for
sufficiently low levels of loss. For short distances, this does not constitute a technological challenge.
However, for large distances, secure quantum communication becomes very challenging, since all loss has to be attributed to an eavesdropper, and this prevents achieving arbitrarily high rates of secure bits. Similar limitations arise for the maximum attainable rates for quantum information (qubit) transmission and entanglement (ebit) distribution over lossy bosonic channels, which conveniently describe optical fibres or free-space links.
More specifically, it has been established that, for any point-to-point transmission protocol over a lossy bosonic channel with transmissivity equal to
$\eta\in (0,1)$, allowing the two parties to exploit unlimited two-way classical communication, the maximum achievable rates for key generation, entanglement distribution, and quantum bit transmission are all equal to the \emph{repeaterless PLOB bound} $-\log_2(1-\eta)$~\cite{PLOB}.
This severe limitation of direct point-to-point transmission is not a road block, however. Intermediate stations, referred to as \emph{quantum repeaters} \cite{RevModPhys.83.33},
can overcome this limitation and, in principle, allow communication over arbitrary
distances. Since the appearance of the first quantum repeater proposal~\cite{Briegel}, the goal of extending the distance at which two parties can faithfully share a secret key or entanglement has stimulated a plethora of repeater-assisted quantum communication schemes. The difficulty of assessing precise rates of (quantum) information
transmission and specifically
of key rates gives rise
to the necessity to identify
bounds that are
agnostic
to the specific implementation
chosen. Only in such a realm
can ultimate bounds for quantum
communication be established.
Along this line of thought, a fundamental result about the rates at which two end-nodes in a linear repeater chain can transmit quantum information, distribute entanglement, or generate a secret key has been established in Ref.~\cite{PirNetwork}. In particular, when two users, say Alice and Bob, are connected by a line of $N-1$ middle repeater nodes, linked together through $N$ optical lossy fibres, the
quantum/private capacity
of the linear repeater chain, i.e., the ultimate rate for repeater-assisted quantum or private communication
between the two end-users is given by~\cite[Eq.~(9)]{PirNetwork}
\begin{equation}\label{idealRep}
C(\eta,N)=-\log_2(1-\sqrt[N]{\eta})~,
\end{equation}
where $\eta>0$ is the total Alice-to-Bob fibre transmissivity. This expression is derived by exploiting a combination of tools that we briefly recall in the appendices.
From the conceptual point of view, a traditional quantum repeater is a scheme in which entanglement is first distributed to intermediate nodes. Then, the quality is improved by means of entanglement distillation, transforming a collection of weakly entangled states into a smaller number of more highly entangled states. In the final step, one performs sequential entanglement swapping, bringing quantum systems together that have no joint past, to entangle the anticipated nodes. More recently, repeater schemes based on quantum error correction have appeared, where the repeater nodes instead contain quantum gates to implement the necessary operations. Nevertheless, in the case of ideal devices, both varieties have a performance that is upper bounded by Eq.~(\ref{idealRep}).
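The gain offered by ideal repeaters over direct transmission is easy to evaluate numerically; the following sketch compares the repeaterless PLOB bound with Eq.~(\ref{idealRep}) for a fibre link (the distance and loss coefficient are illustrative).

```python
import math

def plob(eta):
    """Repeaterless PLOB bound -log2(1 - eta) for a pure-loss channel."""
    return -math.log2(1 - eta)

def chain_capacity(eta, n):
    """Capacity -log2(1 - eta**(1/N)) of a chain of N ideal, equally
    spaced links with total end-to-end transmissivity eta."""
    return -math.log2(1 - eta ** (1.0 / n))

# 100 km of fibre at 0.2 dB/km (illustrative values).
eta = 10 ** (-0.2 * 100 / 10)
no_repeater = plob(eta)
one_repeater = chain_capacity(eta, 2)       # a single middle node
many_repeaters = chain_capacity(eta, 100)   # grows like log2(N)
```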
It is important to stress that, for a fixed total transmissivity $\eta$ but large number of repeaters, the end-to-end capacity $C(\eta,N)$
diverges as $\log_2N$.
This feature is immediately connected to the fact that the repeaters are assumed to be ideal, i.e., lossless. Under a more realistic point of view each repeater in the linear
repeater chain
must be characterised by imperfect components which introduce noise and decoherence to the stored and transmitted quantum states. For instance, these internal losses can arise from non-unit detection efficiencies, channel-memory coupling losses, and memory loading and readout inefficiencies. Furthermore, detrimental effects can be introduced by the quantum memories at the nodes while the quantum systems are stored before on-demand retrieval.
Here, we explicitly account for this crucial aspect and consequently derive the end-to-end repeater capacity of a lossy bosonic quantum network where the repeater stations are also affected by internal loss. Although loss is not the only source of decoherence, it is often the dominant factor and is an excellent first approximation for an optical fibre channel. Furthermore, using lossy channels as a model for imperfections within the repeater nodes facilitates the derivation of exact formulae for the various capacities of interest.
Our derivation is carried out for the various types of
\emph{routing} (single- and multi-path). Our bound hence captures rather general classes of repeater schemes and can be seen as an analog
of the repeater-less PLOB bound in the presence of lossy repeater stations. Given a fixed amount of loss for each repeater node we can immediately evaluate our bounds as a function of transmission losses. However, in real implementations the loss in each node is itself typically a function of the transmission losses and the type of repeater strategy employed. The paradigmatic example is the loss induced by a quantum memory, where the necessary storage time usually increases with transmission loss. In this scenario, the two major classes of strategy are those that require either one-way or two-way classical communication, where the latter corresponds to a longer typical waiting time and hence a lossier effective channel.
To exemplify these findings, we show that, considering a realistic time-dependent model of decoherence for a single repeater station, the achievable rates not only beat the benchmark of the repeater-less PLOB bound, but are also not far from the upper limit provided by our revised lossy-repeater capacity. As an example, we consider a polarisation-based BB84 key distribution protocol over a single repeater node, using a simple entanglement swapping protocol with Rubidium memories, and show that it scales as one quarter of the optimal possible rate for such schemes.
Finally, recent years have enjoyed considerable
interest in identifying practical schemes
for routing in multi-partite
\emph{quantum networks}
\cite{PirNetwork,QuantumInternet,PhysRevA.93.032302,EppingA,HahnPappaEisert,NetSquid,PhysRevA.103.032610}. Our results
are general enough to accommodate such situations, as we show.
\section{Node splitting}
Let us consider a linear sequence $\{s_0,\ldots,s_{N}\}$ of $N-1$ repeater nodes, where Alice and Bob, the two end stations, are identified with $s_0$ and $s_N$, respectively.
We assume that each station $s_i$ in the chain is connected to $s_{i+1}$ through an optical fibre described by a Gaussian lossy channel
\cite{GaussianChannel} $\mathcal L_i$ with transmissivity $\eta_i$, for $i=0,\dots, N$. Thus, the total transmissivity of the link (e.g., an optical fibre
providing the communication channel) connecting Alice and Bob is $\eta=\prod_i\eta_i$.
Each node $s_i$ has internal losses that can be quantified by a global transmissivity $\tau_i\in[0,1]$,
obtained as the product of the individual efficiencies.
In this way we can describe the effect of the node on the incoming quantum systems as another Gaussian lossy channel, mathematically described as a beam splitter mixing the input system with an environment in the vacuum
\begin{equation}\label{BS}
\hat{x}_{\text{out}}=\sqrt{\tau_i}\,\hat{x}_{\text{in}}+\sqrt{1-\tau_i}\,\hat{x}_{\text{vac}}~.
\end{equation}
We can further distinguish two different contributions in $\tau_i$: a {\itshape transmitting} efficiency $\tau_i^t$ and a {\itshape receiving} efficiency $\tau_i^r$. The former is associated, for instance, with the overall effects of a source efficiency (e.g., photon creation efficiency), a memory read-out efficiency and a memory-channel interface efficiency. The latter accounts for a detector efficiency, a memory write-in efficiency and a channel-memory coupling efficiency.
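As a sanity check of the beam-splitter model in Eq.~(\ref{BS}), the following sketch verifies numerically that composing two such lossy channels rescales the mean quadrature by $\sqrt{\tau^r\tau^t}$, i.e., that the composition behaves as a single lossy channel with the product transmissivity; all names and parameter values are illustrative.

```python
import math
import random

def lossy(x_in, tau, rng):
    """Beam-splitter relation of Eq. (2) acting on a quadrature sample:
    the environment is a vacuum mode with zero mean and unit variance."""
    return math.sqrt(tau) * x_in + math.sqrt(1 - tau) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
tau_r, tau_t = 0.9, 0.8
x0 = 5.0  # mean of a displaced input quadrature
shots = 200_000
mean_out = sum(lossy(lossy(x0, tau_r, rng), tau_t, rng)
               for _ in range(shots)) / shots
expected = math.sqrt(tau_r * tau_t) * x0  # one channel with tau_r * tau_t
```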
To account for the internal lossy features in the various stations, we perform the \emph{node splitting} depicted in Fig.~\ref{Fig:lossyrep}.
We split the generic node $s_i$ into three ``children'' nodes $s_i^k$ ($k=1,2,3$), which are then linked together through a composition of two lossy channels $\mathcal R_{s_i^2\rightarrow s_i^3}^t$ and $\mathcal R_{s_i^1\rightarrow s_i^2}^r$, with associated transmissivities $\tau_i^{t,r}$.
Combining these internal channels with $\mathcal L_i$ associated to the $i$th link, we can model the linear network with noisy quantum repeaters as a sequence of composite quantum channels. More precisely, we can identify a building-block channel, so that the linear network can be described as the collection $\{\mathfrak C_i\}_i$ of the following composite quantum channels (see Fig.~\ref{Fig:lossyrep})
\begin{equation}\label{DefChannel}
\mathfrak C_i=\mathcal R_{s_{i+1}^1\rightarrow s_{i+1}^2}^r\circ\mathcal L_{i+1}\circ\mathcal R_{s_{i}^2\rightarrow s_{i}^3}^t,
\end{equation}
for $i=1,\ldots,N-1$, while for the two end-nodes we set $\mathcal R_{s_0^1\rightarrow s_0^2}^r=\mathcal R_{s_N^2\rightarrow s_N^3}^t=\mathcal I$, where $\mathcal I$ is the identity channel. To simplify notation, we rename $\mathcal R_{s_i^k\rightarrow s_i^{k+1}}^{r,t}=\mathcal R_{i}^{r,t}$.
By means of the decomposition in Eq.~(\ref{DefChannel}), we are able to apply the machinery developed in Ref.~\cite{PirNetwork} to our scenario so we can derive a single-letter upper bound on the secret-key capacity (and therefore on the two-way quantum capacity) of the lossy-repeater linear chain.
\begin{figure}[htbp]
\vspace{0.1cm}
\par
\begin{center}
\includegraphics[width=0.49 \textwidth]{lossychain} \vspace{-0.6cm}
\end{center}
\caption{Node splitting in a repeater chain. a) $N-1$ repeater stations $s_i$ are linked together to form a linear network (chain) between $s_0$ (Alice) and $s_N$ (Bob). The end-to-end transmissivity is $\eta=\eta_1\eta_2\cdots\eta_N$, where $\eta_i>0$ is the transmissivity of the single link described by the quantum lossy channel $\mathcal L_i$. b) Node splitting of the linear network. Each node $s_i$ is split into three children nodes $\{s_i^1, s_i^2, s_i^3\}$ and two links $s_i^1-s_i^{2}$, $s_i^2-s_i^{3}$ are added. The overall effect of the internal losses in the $i$-th node is then described by the composition $\mathcal R_i^t\circ\mathcal R_i^r$ of two additional quantum lossy channels.}
\label{Fig:lossyrep}
\vspace{0.3cm}
\end{figure}
By performing an entanglement cut labelled by $i$, we disconnect the chain along the channel $\mathcal L_i$. In doing so, we create a bipartition $(A,B)$ of the chain, with $A=\{s_0^1, s_0^2, s_0^3\ldots,s_i^1, s_i^2, s_i^3\}$ and $B=\{s_{i+1}^1, s_{i+1}^2, s_{i+1}^3\ldots,s_N^1, s_N^2, s_N^3\}$. This leads us
to formulate the following.
\begin{definition}[Lossy repeaters]
The state shared by Alice and Bob at the output of the most general adaptive protocol over $n$ uses of the repeater chain
is given by
\begin{equation}
\rho_{a, b}^n=\Lambda_i\left(\sigma_{\mathfrak C_i}^{\otimes n}\right),
\label{stretch}
\end{equation}
where $\Lambda_i$ is a trace-preserving
\emph{local operation with classical communication}
(LOCC) while $\sigma_{\mathfrak C_i}$ is the Choi
matrix of the channel $\mathfrak C_i$, which is defined as $\sigma_{\mathfrak C_i}:=(\mathcal I\otimes\mathfrak C_i)
(\Phi)$.\end{definition}
Here, $\mathcal I$ is the identity channel and $\Phi$ is a maximally-entangled state. More precisely, the above equation has to be understood in an asymptotic sense: for CV systems the maximally entangled state is itself asymptotic and, as a consequence, the Choi matrix $\sigma_{\mathfrak C_i}$ is obtained as a limit.
In the appendix,
we give details on this argument.
We notice that for $i=0$ the quantum channel $\mathfrak C_i$ is a pure-loss channel, while for $i\geq1$ it is a composition of pure-loss channels. Thus we can conclude that for any $i\geq0$, $\mathfrak C_i$ is a distillable channel, for which the two-way quantum and private capacities are identical and exactly determined, i.e.~\cite{PLOB}
\begin{equation}
\mathcal C(\mathfrak C_i)=E_R(\sigma_{\mathfrak C_i})=D_1(\sigma_{\mathfrak C_i})~,
\end{equation}
where $E_R(\sigma_{\mathfrak C_i})$ is the
\emph{relative entropy of entanglement} of the Choi matrix $\sigma_{\mathfrak C_i}$ and $D_1(\sigma_{\mathfrak C_i})$ is the entanglement that can be \emph{distilled} from the Choi matrix with one-way, forward or backward, classical communication (see the appendix for a recap about these types of capacities).
By exploiting Theorem 7 in
Ref.~\cite{PirNetwork}, we conclude that the two-way quantum/private capacity of the linear chain with lossy repeaters satisfies
\begin{equation}
\mathcal C(\{\mathfrak C_i\})=\min_{0\leq i\leq N}E_R(\sigma_{\mathfrak C_i})=\min_{0\leq i\leq N}\mathcal C(\mathfrak C_i).
\end{equation}
Using the PLOB bound and the fact that the transmissivity of a composition of lossy channels is given by the product of the individual transmissivities, we can state the following theorem, which generalizes the ideal-repeater formula of Ref.~\cite{PirNetwork}.
\begin{theorem}[Lossy-repeater capacity]
The ultimate achievable rate for repeater-assisted quantum/private communication between the two-end users of a linear network with $N-1$ lossy quantum repeaters connected by $N$ pure-loss channels
is given by
\begin{equation}\label{chainCap1}
\mathcal C(\{\mathfrak C_i\})=\min_i[-\log_2(1-\tau_i^t\tau_{i+1}^r\eta_{i+1})]~,
\end{equation}
i.e., it equals the minimum capacity of the channel $\mathfrak{C_i}$ describing the loss of node $i$, the pure loss channel $i+1$ and the loss of node $i+1$~\cite{note2}.
\end{theorem}
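Evaluating the bound of Eq.~(\ref{chainCap1}) amounts to a minimization over entanglement cuts. A minimal sketch for a short chain (the efficiencies and transmissivities are illustrative, and the end nodes are taken to be ideal):

```python
import math

def lossy_chain_capacity(tau_t, tau_r, etas):
    """Minimum, over entanglement cuts i, of the PLOB bound of the
    composite channel with transmissivity tau_t[i] * etas[i] * tau_r[i+1].
    tau_t[i] / tau_r[i] are the transmitting / receiving efficiencies of
    node i; the end nodes are ideal (efficiency 1)."""
    caps = [-math.log2(1 - tau_t[i] * tau_r[i + 1] * etas[i])
            for i in range(len(etas))]
    return min(caps)

# Three links, two lossy middle nodes (illustrative values).
tau_t = [1.0, 0.9, 0.9]          # nodes 0 (Alice), 1, 2
tau_r = [1.0, 0.85, 0.85, 1.0]   # nodes 0, 1, 2, 3 (Bob)
etas = [0.5, 0.4, 0.5]           # link transmissivities
cap = lossy_chain_capacity(tau_t, tau_r, etas)
```

Here the middle cut dominates: it combines the weakest link with loss on both sides.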
Let us assume that the end-users, Alice and Bob, are at a distance $L$ apart and connected by an optical fibre whose transmissivity $\eta$ decays exponentially as $\eta=e^{-\alpha L}$ (typically, $\alpha=0.2$dB/km). If $N-1$ lossy repeaters are inserted along the line, the optimal configuration is represented by equally spaced nodes at a distance $L_0=L/N$, so we have $\eta_{i}=\sqrt[N]\eta$ for each $i$. We can thus recast Eq.~(\ref{chainCap1}) as follows
\begin{equation}\label{chainCap2}
\mathcal C(\{\mathfrak C_i\})=-\log_2(1-\widetilde\tau\sqrt[N]{\eta})~,
\end{equation}
where we have defined $\widetilde\tau:=\min_{i\geq0}\tau_i^t\tau_{i+1}^r$.
For simplicity, assume that all the nodes are built and equipped with the same components, i.e., $\tau_i^r=\tau^r$ and $\tau_i^t=\tau^t$ for all $i\in[0,N]$. We then get
\begin{equation}
\mathcal C(\{\mathfrak C_i\})\rightarrow C_\tau(\eta,N)=-\log_2(1-\tau\sqrt[N]{\eta})~,
\end{equation}
where $\tau:=\tau^r\tau^t$.
If we now consider a large number of nodes we obtain the following expansion
\begin{equation}
C_\tau(\eta,N\gg1)\simeq-\log_2(1-\tau)+\frac{\tau\log_2\eta }{(1-\tau)N}~.
\end{equation}
We can thus see that, by increasing the number of repeaters between Alice and Bob, i.e., by taking the limit $N\rightarrow\infty$, the lossy-repeater capacity is bounded by the quantity $-\log_2(1-\tau)$, which depends solely on the loss within a node. In other words, even if we are allowed to arbitrarily increase the number of repeaters on the line, the optimal rate will still be bounded by the inevitable internal loss, which acts as the ultimate bottleneck in the process.
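The saturation at $-\log_2(1-\tau)$ is easy to observe numerically; in the sketch below (with illustrative values of $\eta$ and $\tau$), the capacity grows with $N$ but never exceeds the internal-loss ceiling.

```python
import math

def lossy_cap(eta, tau, n):
    """C_tau(eta, N) = -log2(1 - tau * eta**(1/N)) for identical nodes."""
    return -math.log2(1 - tau * eta ** (1.0 / n))

eta = 1e-4   # roughly 200 km at 0.2 dB/km (illustrative)
tau = 0.9    # per-node internal transmissivity tau = tau^r * tau^t
ceiling = -math.log2(1 - tau)
caps = [lossy_cap(eta, tau, n) for n in (1, 10, 100, 10_000)]
```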
\section{Time-dependence and realistic repeater protocols on a quantum linear network}
While the above results illuminate the performance of repeater networks with imperfect devices, there is a certain tension between our desire to quantify the ultimate limits to communication and our aim to provide formulae that are as relevant as possible to near-term demonstrations. The reason for this is that the bounds derived above, whilst totally general in the sense of applying to optimal two-way LOCC encoding and decoding strategies, only hold for a given channel. However, the effective channel induced by the decoherence of realistic repeater nodes is itself, to some extent, determined by the choice of repeater protocol. For example, the effective loss experienced by a system stored in a quantum memory is a function of the ratio between memory coherence time and the required storage time, but this latter quantity can change depending upon the chosen protocol. In this section, we address this issue for memory-based repeater protocols by taking into account the role played by time. Incorporating these effects is crucial to obtain tighter bounds that provide more accurate benchmarks for realistic repeater protocols with imperfect devices. This is also a powerful example of how our relatively simple model can be used to meaningfully compare different protocols, as the major differences between them often boil down to variations in timing.
Ultimately, some of the operations involved in the design of repeater-assisted quantum communication and entanglement distribution protocols are intrinsically probabilistic. In memory-based quantum repeater protocols, such fundamental operations are heralded entanglement generation (and possibly distillation) between neighbouring nodes, and swapping that transfers such entanglement to nodes at increasing distance. Thus, besides the time required for the transmission of the quantum information carriers and classical heralding signals, which is limited by the speed of light, a finite time is also needed while waiting for the success of the various operations at the different repeater stations.
As a good first order approximation we can model the memories as time-dependent lossy channels with transmission given by (see, e.g., Ref.~\cite{Mem}),
\begin{equation}
\tau_{\mathrm{mem}}(t)=\tau_0 e^{-t/t_c}~, \label{memloss}
\end{equation}
where $\tau_0$ is the maximum memory efficiency and $t_c$ is the coherence time.
The key task remaining in order to evaluate these bounds is to correctly model the expected storage time. Fortunately, this problem has been well studied in the literature \cite{Bernardes:2011ij, Shchukin:2019gs}. The situation can be analysed abstractly by defining the success probability of operations on one half of the repeater, $p$. The expected waiting time will be of the form $MT_0$, where $T_0>0$ is the time taken for one attempt and $M$ is the expected number of attempts. As a first illustration, consider the simplest, canonical setup of a linear chain with one repeater station and two segments spanning a total distance $d$.
The quantities $T_0$ and $p$ are both influenced by the choice of repeater protocol. The minimal time unit, $T_0$, depends upon whether the central station is operating in a \emph{node-receives-photons} (NRP) or a \emph{node-sends-photons} (NSP) configuration \cite{vanLoock:2020gt}. In the former case, $T_0$ is simply set by $R$, the clock speed of either the source or the local processing (e.g. memory write-in time), whichever is slower. Thus, $T_0^{\mathrm{NRP}} = 1/R$. In the NSP case, for sufficiently large distances, $T_0$ will instead be limited by the time taken to transmit quantum states from the central node to the end stations and subsequently receive a classical signal heralding their successful arrival and initiating the swap. This corresponds to the time to transmit twice over one segment, such that $T_0^{\mathrm{NSP}} = \max \{1/R, d/c \}$. A final subtlety is that in the NSP configuration, even if the first attempt is successful, a state must still be stored at the central node for the time taken for at least one quantum transmission and one classical signal heralding success, i.e., a total of $M+2$ time steps.
The probabilistic elements that go into determining $p$ depend upon whether we consider a \emph{continuous variable} (CV) or a \emph{discrete variable} (DV) scheme utilising single photon detection. In a DV scheme entanglement distillation can be avoided if desired, and all that is strictly necessary is to store a single photon until another has arrived that can be used to swap entanglement. Indeed, it is this strategy that is currently pursued in state-of-the-art experiments \cite{Bhaskar:2020gh,Pu:2021tr}. In this scenario, the probabilistic element is then simply the detection probability of a photon across a single link in the repeater chain and
\eqn{p = \sqrt{\eta}\tau^{t,\mathrm{eff}} \label{psimp}}
for a transmission node, and an analogous expression holds for a receiving node. Here $\tau^{t,\mathrm{eff}}$ represents the efficiency of all of the elements in the transmitting node except the memory. These quantities, such as write-in, read-out or detection efficiencies, are all time independent and can be captured by a single constant. Note that certain nodes in a chain may not have memories.
In the CV case, the arrival of quantum information is deterministic, and the probabilistic element is the entanglement distillation operation. Once distillation has been successfully carried out on either input, that mode is stored until the mode on the other side has also been distilled, and then entanglement is swapped. Again, whilst some distillation is essential, the exact amount is a free parameter, leading to a trade-off between the success probability and the amount of entanglement in the final state. There are only a relatively small number of CV repeater protocols \cite{Campbell:2012bq,Dias:2017jk,Furrer:2018im,Dias:2019tz}, with arguably the most mature being those based upon a so-called
\emph{noiseless linear amplifier} (NLA) \cite{proc-disc-2009,Xiang:2010ua}. The NLA acts with a gain $g$, and the success probability can be upper bounded by $p\leq 1/g^2$, although this bound can be very loose in some circumstances \cite{Pandey:2013wb}. This is in principle a free parameter, but a reasonable strategy would be to adjust the gain to reverse the effects of the expected losses prior to distillation. To undo a lossy channel of transmission $\tau$ requires a gain of $g^2 = 1/\tau$. For these choices, a CV distillation would have success probability upper bounded by (\ref{psimp}), exactly as for a DV scheme.
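As a minimal sketch of these two probabilistic elements (our own illustrative function names, not from any library), the DV detection probability of Eq.~(\ref{psimp}) and the NLA gain strategy can be written as:

```python
import math

def p_dv(eta, tau_eff):
    """DV success probability per link, Eq. (psimp): sqrt(eta) is the
    transmissivity of half the total link, tau_eff the time-independent
    node efficiency."""
    return math.sqrt(eta) * tau_eff

def nla_gain_for_loss(tau):
    """Gain g^2 = 1/tau chosen to undo a lossy channel of transmissivity tau."""
    return 1.0 / tau

def p_cv_upper(tau):
    """Upper bound p <= 1/g^2 on the NLA success probability; with
    g^2 = 1/tau this reduces to p <= tau, mirroring the DV case."""
    return 1.0 / nla_gain_for_loss(tau)
```

With these gain choices the CV bound indeed collapses onto the same form as the DV expression, as stated in the text.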
Putting all of this together,
we compute the expected value of the memory transmission for the NRP and NSP configurations as \cite{Bernardes:2011ij, Shchukin:2019gs},
\eqn{\bar{\tau}_{\mathrm{mem}}^{\mathrm{NRP}} &=& \mathbb{E}\bk{\tau_0 e^{-MT_0/t_c}}, \nonumber \\
&=& \frac{p}{2-p}\bk{\frac{2}{1 - e^{-T_0^{\mathrm{NRP}}/t_c}(1-p) }- 1} \label{tmem} ,\\
\bar{\tau}_{\mathrm{mem}}^{\mathrm{NSP}} &=& \mathbb{E}\bk{\tau_0 e^{-(M+2)T_0/t_c}} \nonumber \\
&=& \frac{p}{2-p}\bk{\frac{e^{-\frac{2 T_0^\mathrm{NSP}}{t_c}} \left((1-p)+e^{T_0^\mathrm{NSP}/t_c}\right)}{e^{T_0^\mathrm{NSP}/t_c}-(1-p)} } .
\nonumber
}
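The NRP closed form can be cross-checked by direct simulation. Here we assume, following our reading of the cited references, that each of the two links succeeds independently with probability $p$ in every time step, so the stored duration is $M=|N_1-N_2|$ with $N_1,N_2$ geometric, and we set $\tau_0=1$; this is an illustrative sanity check, not the paper's code.

```python
import math, random

def tau_mem_nrp(p, T0, tc):
    """Closed form of Eq. (tmem) for E[exp(-M T0/tc)] with tau0 = 1."""
    x = math.exp(-T0 / tc)
    return p / (2 - p) * (2 / (1 - x * (1 - p)) - 1)

def tau_mem_nrp_mc(p, T0, tc, trials=200_000, seed=1):
    """Monte Carlo estimate: the earlier arrival is stored for
    M = |N1 - N2| time steps of exponential memory decay."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n1 = 1
        while rng.random() >= p:  # geometric waiting time, link 1
            n1 += 1
        n2 = 1
        while rng.random() >= p:  # geometric waiting time, link 2
            n2 += 1
        total += math.exp(-abs(n1 - n2) * T0 / tc)
    return total / trials
```

For representative values (e.g. $p=0.3$, $T_0=1$ ms, $t_c=100$ ms) the simulation agrees with the closed form to within statistical error.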
In either the NRP or NSP protocol, the total loss over one link will include whatever constant detection or coupling efficiencies are present along with the additional lossy channel induced by the memory, which will be at either the receiving or transmitting nodes. This means in either case we could write the total repeater losses as $\tau_i^t\tau_{i+1}^r = \tau_i^{t,\mathrm{eff}}\tau_{i+1}^{r,\mathrm{eff}}\tau_{\mathrm{mem}}$. Thus we can substitute (\ref{tmem}) into (\ref{chainCap1}) and, using parameters from Ref.~\cite{vanLoock:2020gt}, evaluate the bounds for some representative repeater platforms. We present the results for a platform based on Rubidium memories in Fig.~\ref{Fig:rubidium}. Note that because we are explicitly considering time in our analysis we are able to calculate rates in terms of bits per second, which is the quantity that is ultimately important for applications, as opposed to the more common bits per channel use.
\begin{figure}[htbp]
\vspace{0.5cm}
\par
\begin{center}
\includegraphics[width=0.5 \textwidth]{Rubidiumv5.pdf} \vspace{-0.6cm}
\end{center}
\caption{Upper bounds to the secret key rate for both NSP and NRP protocols using Rubidium memories, with parameters taken from Ref.~\cite{vanLoock:2020gt}: total efficiencies (what Ref.~\cite{vanLoock:2020gt} refers to as $P_{\rm link}$) of $(\tau^{\mathrm{eff}})^2 = 0.7$, a coherence time of $100$ milliseconds and a clock speed of $R = 5\times10^6$~Hz. The lower and the upper dashed black lines are respectively the repeaterless PLOB bound~\cite{PLOB} and the one-station repeater-assisted capacity~\cite{PirNetwork}.}
\label{Fig:rubidium}
\end{figure}
Crucially, we see that our upper bound now has the same qualitative shape as a real repeater implementation. For short distances, where the storage times are small relative to the memory coherence time, the key rate scales as an ideal repeater with an offset due to extra losses at the station. However, for larger distances, the necessary storage time becomes comparable to the memory coherence time and thus the effective loss falls off exponentially faster. In this situation, the protocol fails to follow the ideal repeater scaling, regressing to scale similarly to the repeaterless bound. For certain system parameters our upper bounds can even drop below the repeaterless scaling as the waiting times for the NSP protocol cause additional losses that destroy any benefit for a repeater station.
Finally, we can also use our bounds to benchmark specific protocols carried out with the same system parameters. In Fig.~\ref{Fig:BB84comp}, we plot the ratio of a BB84 key rate using an entanglement-swapping repeater protocol (see Appendix) to our lossy-repeater capacity given in Eq.~(\ref{chainCap1}). From this we can conclude that, over lossy repeater networks, standard BB84 combined with an entanglement-swapping repeater is quite close to optimal, scaling identically for large distances and achieving slightly worse than one quarter of the optimal key rate.
\begin{figure}[htbp]
\vspace{0.5cm}
\par
\begin{center}
\includegraphics[width=0.5 \textwidth]{BB84compv2.pdf} \vspace{-0.6cm}
\end{center}
\caption{Ratio of the secret key rate for a BB84 protocol using Rubidium memories taken from Ref.~\cite{vanLoock:2020gt} to the lossy-repeater capacity of Eq.~(\ref{chainCap1}). Parameters as per Fig.~\ref{Fig:rubidium}.}
\label{Fig:BB84comp}
\end{figure}
\section{Extension to general quantum networks}
Here, we extend the previous analysis from a linear to a more complex quantum network featuring an arbitrary topology, where the two end users aim at sharing entanglement or secret keys through single or multi-path routing strategies.
\subsection{Preliminaries}
A \emph{quantum communication network} $\mathbf N$ involving $N$ nodes, each interpreted as an entity pursuing quantum communication, can be described as an undirected graph $G=(V,E)$, where $V$ is the set of vertices or nodes ($|V|=N$) and $E$ is the set of edges linking the elements of $V$. The set $E$ is determined by the underlying network infrastructure, i.e., an edge $(\nu_i,\nu_j)$ is an
element of $E$ if there is a communication channel connecting the two vertices $\nu_i$ and $\nu_j$. In
a quantum communication scenario the nodes are linked together through a
quantum channel $\mathcal E_{\nu_i-\nu_j}$. The transmission of quantum information through the quantum channel can be either forward $\nu_i\rightarrow\nu_j$ or backward $\nu_j\rightarrow\nu_i$. In what follows, we assign an orientation to the network so the quantum systems are always transmitted from sender $\nu_0$ to receiver $\nu_N$.
This is a basic formalization of what is commonly called a quantum network.
Quantum information and entanglement can be transmitted and distributed along the network through a generic route $R$ between the two end-users, which is determined by the sequence of vertices $R=\nu_0-\cdots-\nu_i-\cdots-\nu_N$. In a single network $\mathbf N$, the different routes form a set $\mathbf R_{\mathbf N}=\{R_1,R_2,\ldots\}$. For each route there is an associated sequence of quantum channels, those involved in the routing process. As an example, in panel $a)$ of Fig.~\ref{Fig:network}, we show a fully-connected graph of four vertices that represents a diamond network. The set of all the possible routes from $\nu_0$ to $\nu_3$ is given by
$
\mathbf R_\diamond=\{R_1=\nu_0-\nu_1-\nu_3, R_2=\nu_0-\nu_2-\nu_3, R_3=\nu_0-\nu_1-\nu_2-\nu_3, R_4=\nu_0-\nu_2-\nu_1-\nu_3\}.
$
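The route set of a small network such as the diamond can be enumerated mechanically. The following illustrative sketch (our own helper, with nodes labelled $0,\ldots,3$ for $\nu_0,\ldots,\nu_3$) lists all simple paths between the two end-users:

```python
def all_routes(edges, source, sink):
    """Enumerate all simple paths (routes) from source to sink in an
    undirected graph given as a list of edges (u, v)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    routes, stack = [], [[source]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == sink:
            routes.append(tuple(path))
            continue
        for nxt in sorted(adj[node]):
            if nxt not in path:  # keep the path simple (no repeated nodes)
                stack.append(path + [nxt])
    return sorted(routes)

# Diamond network of Fig. (network): edges of the fully-connected
# four-vertex graph without the direct 0-3 link.
diamond = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

Running `all_routes(diamond, 0, 3)` recovers exactly the four routes $R_1,\ldots,R_4$ of $\mathbf R_\diamond$.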
\subsection{Node-splitting in the network}
As we have done for the linear network, in order to account for a loss model for the stations, we proceed by splitting the nodes $\nu_i$ of the network and by inserting two quantum channels $\mathcal E_{\nu_i^1-\nu_i^2}$ and $\mathcal E_{\nu_i^2-\nu_i^3}$, connecting the three children nodes $\{\nu_i^1,\nu_i^2,\nu_i^3\}$. By doing so, the original network $\mathbf N$, described by the graph $G$, is mapped into $\mathbf N^\prime$ whose associated new graph is given by $G^\prime=(V^\prime,E^\prime)$, where $|V^\prime|=3N$.
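The node-splitting map $\mathbf N\rightarrow\mathbf N^\prime$ can be sketched programmatically. In this illustrative snippet (our own data layout, not from the paper), child $k$ of node $v$ is the pair `(v, k)`, each original undirected edge becomes two oriented links carrying the same transmissivity, and the two internal edges carry the receive and transmit losses $r_i$ and $t_i$:

```python
def split_nodes(edges, r_loss, t_loss):
    """Split each node v into children (v,1),(v,2),(v,3), joined by internal
    lossy edges with transmissivities r_loss[v] and t_loss[v]; each original
    undirected edge {u, v} (a dict key (u, v) with transmissivity eta) becomes
    the two oriented links ((u,3),(v,1)) and ((v,3),(u,1))."""
    internal, links = {}, {}
    nodes = {n for e in edges for n in e}
    for v in nodes:
        internal[((v, 1), (v, 2))] = r_loss.get(v, 1.0)  # receiving channel
        internal[((v, 2), (v, 3))] = t_loss.get(v, 1.0)  # transmitting channel
    for (u, v), eta in edges.items():
        links[((u, 3), (v, 1))] = eta
        links[((v, 3), (u, 1))] = eta
    return internal, links
```

Applied to the diamond network this yields $3N=12$ children, eight internal edges, and ten oriented network links, matching the construction of Fig.~\ref{Fig:network}.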
\begin{figure}[h]
\vspace{0.2cm}
\par
\begin{center}
\includegraphics[width=0.49 \textwidth]{network} \vspace{-0.6cm}
\end{center}
\caption{A diamond network $\mathbf N$ of ideal nodes ($a$) is mapped into a network $\mathbf N^\prime$ of lossy nodes ($b$) by means of splitting. Node $\nu_i$ is split in three children $\{\nu_i^1,\nu_i^2,\nu_i^3\}$ which are linked by additional edges $(\nu_i^1,\nu_i^2)$ and $(\nu_i^2,\nu_i^3)$ with associated lossy channels $\mathcal E_{\nu_i^1-\nu_i^2}$ and $\mathcal E_{\nu_i^2-\nu_i^3}$. The undirected link $(\nu_1,\nu_2)\in E$ in $\mathbf N$ is replaced, in $\mathbf N^\prime$, by two oriented links $\{(\nu_1^3,\nu_2^1),(\nu_2^3,\nu_1^1)\}\in E^\prime$.
Accordingly, via the node-splitting, the route set $\mathbf R_\diamond$ is mapped into the route set $\mathbf R_\diamond^\prime$.}
\label{Fig:network}
\vspace{0.2cm}
\end{figure}
The generic route $R$ of the ideal repeater network is updated to the route $R^\prime=\stackrel{\curvearrowright}{\nu_0}-\cdots-\stackrel{\curvearrowright}{\nu_i}-\cdots-\stackrel{\curvearrowright}{\nu_N}$, where we have defined the node internal route $\stackrel{\curvearrowright}{\nu_i}:=\nu_i^1-\nu_i^2-\nu_i^3$.
In panel $b)$ of Fig.~\ref{Fig:network} we show the node-splitting for the diamond network.
It is important to note that any edge belonging to two different routes with two opposite orientations must be replaced by two distinct edges through node-splitting. More specifically, in the diamond network scenario of Fig.~\ref{Fig:network}, by observing the route set $\mathbf R_{\diamond}$, the link connecting nodes $\nu_1$ and $\nu_2$ has two opposite orientations in route $R_3$ and route $R_4$. This means that, after node-splitting $\mathbf N\rightarrow\mathbf N^\prime$, the edge $(\nu_1,\nu_2)$ is replaced by two edges $(\nu_1^3,\nu_2^1)$ and $(\nu_2^3,\nu_1^1)$ with opposite orientations and the same associated quantum channel, i.e. $\mathcal E_{\nu_1^{3}-\nu_2^{1}}=\mathcal E_{\nu_2^{3}-\nu_1^{1}}$. These two links belong to the two distinct routes $R_3^\prime$ and $R_4^\prime$ of the new route set $\mathbf R_{\diamond}^\prime:=\{R^\prime_1=\stackrel{\curvearrowright}{\nu_0}-\stackrel{\curvearrowright}{\nu_1}-\stackrel{\curvearrowright}{\nu_3}, R^\prime_2=\stackrel{\curvearrowright}{\nu_0}-\stackrel{\curvearrowright}{\nu_2}-\stackrel{\curvearrowright}{\nu_3}, R^\prime_3=\stackrel{\curvearrowright}{\nu_0}-\stackrel{\curvearrowright}{\nu_1}-\stackrel{\curvearrowright}{\nu_2}-\stackrel{\curvearrowright}{\nu_3}, R^\prime_4=\stackrel{\curvearrowright}{\nu_0}-\stackrel{\curvearrowright}{\nu_2}-\stackrel{\curvearrowright}{\nu_1}-\stackrel{\curvearrowright}{\nu_3}\}$.
\begin{figure}[htbp]
\vspace{0.1cm}
\par
\begin{center}
\includegraphics[width=0.45 \textwidth]{netCut1} \vspace{-0.5cm}
\end{center}
\caption{Two examples of an entanglement cut in a quantum diamond network of lossy nodes. The set of vertices $V^\prime$ of the network is divided into the two bipartitions $(V_A,V_B)$ and $(V_A^\prime,V_B^\prime)$ by the cuts $C$ and $C^\prime$, respectively. In the top network, $V_A=\{\stackrel{\frown}{\nu_0},\stackrel{\frown}{\nu_1}\}$ (purple), while $V_B=\{\stackrel{\frown}{\nu_2},\stackrel{\frown}{\nu_3}\}$ (orange). In the bottom network, $V_A^\prime=\{\stackrel{\frown}{\nu_0},\stackrel{\frown}{\nu_2}\}$ (purple), while $V_B^\prime=\{\stackrel{\frown}{\nu_1},\stackrel{\frown}{\nu_3}\}$ (orange). The induced cut sets (thick colored arrows) are respectively given by $K=\{(\stackrel{\frown}{\nu_0},\stackrel{\frown}{\nu_2}), (\stackrel{\frown}{\nu_2},\stackrel{\frown}{\nu_1}), (\stackrel{\frown}{\nu_1},\stackrel{\frown}{\nu_3})\}$ and $K^\prime=\{(\stackrel{\frown}{\nu_0},\stackrel{\frown}{\nu_1}), (\stackrel{\frown}{\nu_1},\stackrel{\frown}{\nu_2}), (\stackrel{\frown}{\nu_2},\stackrel{\frown}{\nu_3})\}$.}
\label{Fig:netcut}
\end{figure}
\subsection{Cuts of the lossy-repeater network}
An essential ingredient for our derivation is represented by the entanglement cut of the quantum network~\cite{PirNetwork}.
Given the two end-nodes of a network of lossy repeaters, $\stackrel{\frown}{\nu_0}$ and $\stackrel{\frown}{\nu_N}$ (where $\stackrel{\frown}{\nu_i}:=\{\nu_i^1,\nu_i^2,\nu_i^3\}$), such an entanglement cut $C$ is defined as a bipartition $(V_A,V_B)$ of the set of nodes of the network such that $\stackrel{\frown}{\nu_0}$ belongs to $V_A$ and $\stackrel{\frown}{\nu_N}$ belongs to $V_B$, with the elements of $V_A$ disconnected from the elements of $V_B$.
The entanglement cut induces the definition of the associated cut set $K$, which is the set of the links disconnected by the cut $C$. In Fig.~\ref{Fig:netcut}, we show two possible entanglement cuts of the diamond network in the presence of lossy repeater nodes. While the cut $C$ is always performed over network links of the kind $(\nu_i^{3},\nu_j^1)$ between two distinct nodes $i$ and $j$, in the cut set $K$ we also include the internal repeater links which have vertices in common with the link disconnected by $C$. In other words, if $(\nu_i^{3},\nu_j^1)$ is a network link cut by $C$, the overall link $(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j}):=(\nu_i^2,\nu_j^2)=(\nu_i^2,\nu_i^3)\cup(\nu_i^3,\nu_j^1)\cup(\nu_j^1,\nu_j^2)$ is an element of the cut set $K$, i.e. $K=\{(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j})\,|\,\stackrel{\frown}{\nu_i}\in V_A,\stackrel{\frown}{\nu_j}\in V_B\}$. Accordingly, the quantum channel associated to the generic element of the cut set is given by
\begin{equation}
\mathfrak E_{i,j}:=\mathcal E_{\nu_j^1-\nu_j^{2}}\circ\mathcal E_{\nu_i^3-\nu_j^1}\circ\mathcal E_{\nu_i^{2}-\nu_i^{3}}~,
\label{netCh}
\end{equation}
and we set $\mathcal E_{\nu_0^1-\nu_0^2}=\mathcal E_{\nu_N^2-\nu_N^3}=\mathcal I$ for the two end nodes (see the dashed links in Fig.~\ref{Fig:netcut}).
\subsection{Single-path capacity of the lossy-repeater network}
Now that we have obtained a formalisation of the entanglement cuts for a lossy-repeater network (accounting for the node splitting), we are able to derive a corresponding formula for the single-path routing capacity. As for linear networks, our derivation is based on a straightforward generalisation of the ideal scenario with fully error-corrected repeaters. We know from Ref.~\cite[Th. 6 and 7]{PirNetwork} that the single-path capacity of a quantum network $\mathbf N$ of ideal repeaters is bounded as follows
\begin{equation}\label{upper}
\mathcal C(\mathbf N)\leq\min_{C}E_R(C),
\end{equation}
where the right hand side term represents the minimization over all the possible cuts of the network of the single-path REE $E_R(C)$ associated to cut $C$. The latter quantity is defined by maximizing the REE over the edges of the cut set $K$, i.e.,
\begin{equation}
E_R(C):=\max_{(\nu_i,\nu_j)\in K}E_R(\rho_{\mathcal E_{i,j}}),
\end{equation}
where $\rho_{\mathcal E_{i,j}}$ is the Choi matrix of the lossy channel associated to the link $(\nu_i,\nu_j)$ (more technically this state and the associated REE are implicitly defined via asymptotic limits~\cite{PLOB}).
In contrast, a lower bound can be derived by finding the widest path in the quantum network~\cite{PirNetwork}, so we can write
\begin{equation}\label{lower}
\mathcal C(\mathbf N)\geq\mathcal C(R^\star)=\min_C\mathcal C(C)
\end{equation}
where $R^\star$ is the optimal route such that the capacity of a single route $\mathcal C(R):=\min_\alpha \mathcal C(\mathcal E_\alpha^R)$ is maximum. Here $\alpha$ is the index over the route and we are implicitly defining $\mathcal E_\alpha^R:=\mathcal E_{\nu_i-\nu_j}$, with edge $(\nu_i,\nu_j)\in R$. Similarly, $\mathcal C(C):=\max_{(\nu_i,\nu_j)\in K}\mathcal C(\mathcal E_{\nu_i-\nu_j})$ is the single-path capacity associated to the cut. Furthermore, for a network of distillable channels, Eq.~(\ref{upper}) exactly coincides with Eq.~(\ref{lower}), and we can write~\cite{PirNetwork}
\begin{equation}\label{chainCap}
\mathcal C(\mathbf N)=\mathcal C(R^\star)=\min_C\mathcal C(C)=\min_{C}E_R(C). \end{equation}
Thanks to the extension of the definition of the entanglement cut to the lossy repeater scenario, we can still rely on the chain of equalities in Eq.~(\ref{chainCap}).
Using the quantum channel defined in Eq.~(\ref{netCh}), we can therefore define the capacity of the single route $R^\prime\in\mathbf R^\prime_{\mathbf N^\prime}$ in the lossy-repeater network $\mathbf N^\prime$ as
\begin{equation}\label{routecap}
\mathcal C(R^\prime):=\min_{(\nu_i^{2},\nu_j^{2})\in R^\prime}\mathcal C(\mathfrak E_{i,j}^{R^\prime}).
\end{equation}
We notice that the links $(\nu_0^1,\nu_0^2)$ and $(\nu_N^2,\nu_N^3)$ belong to any possible existing single-path route of the lossy-repeater network, but since in our model they are both associated to a noiseless quantum channel, they can be disregarded in the definition of the route capacity.
The main aim of our investigation is the analysis of the fundamental example of optical networks, where the link $(\nu_i^{3},\nu_j^1)$ connecting different nodes is described by a lossy channel with transmissivity $\eta_{i,j}$. We again assume that the two distinct quantum channels associated to the two internal repeater links $(\nu_i^1,\nu_i^2)$ and $(\nu_i^2,\nu_i^3)$ are represented by two lossy channels $\mathcal E_{\nu_i^1-\nu_i^2}$ and $\mathcal E_{\nu_i^2-\nu_i^3}$ with respective transmissivities $r_i$ and $t_i$. As a consequence, the quantum channel $\mathfrak E_{i,j}$, describing the effect of the transmission over the generic node-fibre-node link $(\nu_i^{2},\nu_j^{2})$, is a lossy channel with a transmissivity given by the product of the transmissivities of the involved lossy channels, i.e. $\mathcal T_{i,j}:=\eta_{i,j}r_it_j$ and capacity $\mathcal C(\mathfrak E_{i,j})=-\log_2(1-\mathcal T_{i,j})$.
It then follows that the generic route $R^\prime\in\mathbf R^\prime_{\mathbf N^\prime}$ is identified by a collection of lossy channels with transmissivities $\{\mathcal T_{i,j}^{R^\prime}\}$. By defining the transmissivity of route $R^\prime$ as
\begin{equation}\widetilde{\mathcal T}^{R^\prime}:=\min_{(\nu_i^{2},\nu_j^{2})\in R^\prime}\mathcal T_{i,j}^{R^\prime},
\end{equation}
its capacity reads
\begin{equation}\label{routeCap}
\mathcal C(R^\prime)= -\log_2(1-\widetilde{\mathcal T}^{R^\prime}).
\end{equation}
If we now maximize the expression in Eq.~(\ref{routeCap}) over the route set $\mathbf R^\prime_{\mathbf N^\prime}$, we obtain the single-path capacity of the lossy-repeater quantum network
\begin{align}\label{netCap1}
\mathcal C_{\text{loss}}(\mathbf N^\prime)&:=\max_{R^\prime\in\mathbf R^\prime_{\mathbf N^\prime}}\mathcal C(R^\prime)= -\log_2(1-\mathcal T),\\
\mathcal T&:=\max_{R^\prime\in\mathbf{R^\prime_{\mathbf N^\prime}}}\widetilde{\mathcal T}^{R^\prime}.
\end{align}
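The optimisation in Eq.~(\ref{netCap1}) is a widest-path (bottleneck) problem, which for small networks can be solved by brute force. The following sketch is illustrative (our own function, with each entry of `link_T` holding the end-to-end transmissivity $\mathcal T_{i,j}=\eta_{i,j}r_it_j$ of one node-fibre-node link):

```python
import math

def single_path_capacity(link_T, source, sink):
    """Lossy-repeater single-path capacity, Eqs. (routeCap)-(netCap1):
    C = -log2(1 - T), with T the best bottleneck transmissivity over all
    simple routes from source to sink."""
    adj = {}
    for (u, v), T in link_T.items():
        adj.setdefault(u, []).append((v, T))
    best = [0.0]

    def dfs(node, seen, bottleneck):
        if node == sink:
            best[0] = max(best[0], bottleneck)
            return
        for nxt, T in adj.get(node, []):
            if nxt not in seen:
                dfs(nxt, seen | {nxt}, min(bottleneck, T))

    dfs(source, {source}, 1.0)
    return -math.log2(1.0 - best[0])
```

For a diamond network with identical repeaters ($v=rt$) and identical fibres, every link has $\mathcal T = v\eta$ and the function reproduces Eq.~(\ref{sLossCap}).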
Equivalently, following the last terms of Eq.~(\ref{chainCap}), we can compute the capacity by minimizing, over all the possible cuts $C$, either the capacity of an entanglement cut $\mathcal C(C)$ or the REE of an entanglement cut $E_R(C)$. Thus, we may consider
\begin{align}
E_R(C)&:=\max_{(\nu_i^{2},\nu_j^{2})\in K}E_R(\rho_{\mathfrak E_{i,j}})\\
&=\max_{(\nu_i^{2},\nu_j^{2})\in K}[-\log_2(1-\mathcal T_{i,j})]
\nonumber
\\&=-\log_2(1-\widetilde{\mathcal T}_C),\nonumber
\end{align}
with $\widetilde{\mathcal T}_C=\max_{(\nu_i^{2},\nu_j^{2})\in K}\mathcal T_{i,j}$. We then obtain the single-path capacity of the lossy-repeater network via the minimization
\begin{equation}\label{netCap2}
\mathcal C_{\text{loss}}(\mathbf N^\prime)=\min_C[-\log_2(1-\widetilde{\mathcal T}_C)].
\end{equation}
By specifying Eqs.~(\ref{netCap1}) and~(\ref{netCap2}) to identical repeaters, i.e. $r_i=r_j=r$ and $t_i=t_j=t$, $\forall i,j=0,\ldots,|V|$, we get
\begin{equation}
\mathcal C_{\text{loss}}(\mathbf N^\prime)=-\log_2(1-v\cdot\eta_{\mathbf N^\prime}),
\label{sLossCap}
\end{equation}
where we have defined $v:=rt$ and
\begin{equation}
\eta_{\mathbf{N^\prime}}:=\max_{R^\prime\in\mathbf{R^\prime_{\mathbf N^\prime}}}\min_{(\nu_i^{2},\nu_j^{2})}\eta_{i,j}^{R^\prime}=\min_C\max_{(\nu_i^{2},\nu_j^{2})\in K}\eta_{i,j}.
\end{equation}
The expressions above generalize the single-path capacity formulas of Ref.~\cite{PirNetwork} from ideal to lossy repeaters.
\subsection{Multi-path capacity of the lossy-repeater network}
A powerful routing strategy in a network is represented by flooding, where systems are transmitted in parallel so that each edge is exploited in each network use.
Let us consider a quantum network $\mathbf N^\prime$ obtained, as described in the previous section, after node splitting $\mathbf N\rightarrow\mathbf N^\prime$, with a corresponding graph $G^\prime=(V^\prime,E^\prime)$ where $V^\prime=\{(\nu_i^1,\nu_i^2,\nu_i^3)\}_{i=0,\cdots,N}$. Once an orientation of the network $\mathbf N^\prime$ has been assigned, a multi-path flooding protocol can be defined as a collection of multicasts, each one realizing a point-to-multipoint communication. An orientation of the undirected network $\mathbf N$ is assigned by setting Alice ($\stackrel{\frown}{\nu_0}$) and Bob ($\stackrel{\frown}{\nu_N}$) respectively as the source and the sink of the network, and then by assigning a source-sink orientation to each edge of the network. Namely, for a generic link between the $i$-th and the $j$-th node, we always identify $\nu_i^3$ as the source and $\nu_j^1$ as the sink. In this way, a point-to-multipoint communication from node $\stackrel{\frown}{\nu_i}$ is defined as a quantum communication between $\stackrel{\frown}{\nu_i}$ and its out-neighborhood $D^{\text{out}}_{\stackrel{\frown}{\nu_i}}:=\{\nu_j^1\in V^\prime|(\nu_i^3,\nu_j^1)\in E_D^\prime\}$, with $E_D^\prime$ the edge set $E^\prime$ where each element is now oriented. After the internal route $\stackrel{\curvearrowright}{\nu_0}$ at the sender's repeater station, the multi-path protocol starts with node $\nu_0^3$ sending quantum systems to each repeater station belonging to its neighbourhood.
The converse upper bound for the multi-path capacity $\tilde{\mathcal C}(\mathbf N)$ of a quantum network $\mathbf N$ is given by~\cite{PirNetwork}
\begin{equation}\label{multiUP}
\tilde{\mathcal C}(\mathbf N)\leq\min_C\tilde{E}_R(C),
\end{equation}
where the minimization is over all the possible cuts of the network and $\tilde{E}_R(C)$ is the multi-path REE flowing through an entanglement cut $C$. This is defined as the total REE of the cut set $K$ associated to $C$, namely
\begin{equation}
\tilde{E}_R(C):=\sum_{(\nu_i,\nu_j)\in K}E_R(\rho_{\mathcal E_{i,j}})~.
\end{equation}
An achievable rate (lower bound) for the multi-path capacity of the network is computed by applying the max-flow/min-cut theorem to the network, leading to~\cite{PirNetwork}
\begin{equation}\label{multiLOW}
\tilde{\mathcal C}(\mathbf N)\geq\min_C\tilde{\mathcal C}(C)~,
\end{equation}
where $\tilde{\mathcal C}(C)$ is the multi-path capacity of an entanglement cut, defined by
\begin{equation}
\tilde{\mathcal C}(C):=\sum_{(\nu_i,\nu_j)\in K}\mathcal C(\mathcal E_{\nu_i-\nu_j})~.
\label{multicapcut}
\end{equation}
When the quantum network is connected by distillable quantum channels~\cite{PLOB}, the previous upper (\ref{multiUP}) and lower bound (\ref{multiLOW}) coincide and the multi-path capacity satisfies the following chain of equalities
\begin{equation}
\tilde{\mathcal C}(\mathbf N)=\min_C\tilde{\mathcal C}(C)=\min_C\tilde{E}_R(C).
\end{equation}
Again we are able to generalize the analytical formulas, by extending the multi-path capacity for quantum and private communication over a quantum network from ideal to imperfect lossy nodes. For the fundamental case of an optical network connected by lossy channels (e.g., fibres), the crucial decomposition is the one in Eq.~(\ref{netCh}), where all the channels involved are lossy channels and therefore distillable.
Combining our decomposition with Eq.~(\ref{multicapcut}), we compute the multi-path capacity of an entanglement cut by summing up over the capacities of the quantum channels $\mathfrak E_{i,j}$ associated with the cut set $K$. We then have
\begin{align}
\tilde{\mathcal C}_{\text{loss}}(C)&=\sum_{(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j})\in K}\mathcal C(\mathfrak E_{i,j})\\
&=\sum_{(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j})\in K}E_R(\rho_{\mathfrak E_{i,j}})
\nonumber
\\
&=\sum_{(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j})\in K}-\log_2(1-\mathcal T_{i,j})
\nonumber\\
&=-\log_2(L_C)
\nonumber
\end{align}
where we have defined the total losses over a cut set as the product of the losses over the channels (repeater and link losses) in the cut set, i.e.
\begin{equation}
L_C:=\prod_{(\stackrel{\frown}{\nu_i},\stackrel{\frown}{\nu_j})\in K}(1-\mathcal T_{i,j}).
\end{equation}
Then the multi-path capacity of the quantum network with lossy repeaters
is given by the minimization over all the possible entanglement cuts of the above expression, i.e.,
\begin{align}
\tilde{\mathcal C}_{\text{loss}}(\mathbf N^\prime)&=\min_C\tilde{\mathcal C}_{\text{loss}}(C)\\
&=-\log_2(\max_CL_C)~.
\label{mLossCap}
\end{align}
It is easy to see that the multi-path strategy is advantageous with respect to the single-path one, even in the presence of lossy repeaters. For this purpose, we can consider a split network $\mathbf N^\prime$ with identical repeaters (i.e. same loss) at each node and where all the network links $(\nu_i^3,\nu_j^1)$ are identical lossy channels with transmissivity $\eta$. Then from Eqs.~(\ref{sLossCap}) and (\ref{mLossCap}),
we get
\begin{equation}
\tilde{\mathcal C}_{\text{loss}}(\mathbf N^\prime)=-\log_2\left[(1-v\eta)^m\right]=m\,\mathcal C_{\text{loss}}(\mathbf N^\prime),
\end{equation}
where $m$ is the number of network links of the smallest allowed cut set. For instance, in the diamond network $\mathbf N_\diamond^\prime$ of Fig.~\ref{Fig:network} panel $b)$, the value of $m$ is equal to $2$.
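The minimisation in Eq.~(\ref{mLossCap}) can be checked by brute-force enumeration of the cuts for small networks. The sketch below is illustrative (our own helper; `link_T` lists both orientations of each undirected node-fibre-node link with its transmissivity $\mathcal T_{i,j}$, so each undirected edge is counted once per cut):

```python
import math
from itertools import combinations

def multipath_capacity(link_T, source, sink):
    """Multi-path (flooding) capacity, Eq. (mLossCap): minimise over all
    entanglement cuts the total REE  sum_K [-log2(1 - T_ij)]  of the cut
    set, by brute force over bipartitions with source in V_A, sink in V_B."""
    nodes = {n for e in link_T for n in e}
    others = sorted(nodes - {source, sink})
    best = math.inf
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            VA = {source, *subset}
            cut_val = sum(-math.log2(1.0 - T)
                          for (u, v), T in link_T.items()
                          if u in VA and v not in VA)
            best = min(best, cut_val)
    return best
```

For the identical-link diamond network this reproduces the factor $m=2$ between the multi-path and single-path capacities stated above.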
\section{Conclusion and outlook}
Our work establishes analytical formulas for the maximum achievable rate of quantum and private communication between two end-users of a quantum network where the nodes are affected by internal loss.
In the linear repeater chain scenario, we exploit a classical network technique, known as node splitting, to model the inevitable internal repeater loss.
In this way, we are able to describe the repeater chain as a suitable collection of distillable quantum channels, i.e. channels for which the lower and the upper bounds on the two-way assisted quantum (and private) capacity coincide.
Given this setting, by employing
the powerful methodology of channel simulation and teleportation stretching, we have established an exact expression for the lossy-repeater capacity for quantum communication over a network with an arbitrary number of lossy repeaters connected by pure-loss channels. Interestingly, when the number of repeaters increases, the derived capacity turns out to be a function of the internal loss of a single node, which then acts as the ultimate upper limit to the maximum achievable rate for quantum and private communication.
Finally, we have considered the important role played by \emph{time} that must be taken into account in any actual implementation of a quantum repeater chain. In such a realistic setting, we have shown how the performance can indeed
overcome the repeaterless PLOB bound and approach the optimal single-repeater bound, even in the presence of internal time-dependent loss, e.g., induced by limited coherence times.
The present study can be seen as a relevant step in an
important direction and invites further studies in many ways. This work has put an emphasis on losses, which in most practical implementations are indeed the main source of errors.
A more detailed study should accommodate dark counts and further offset noise as well. On a broader level, this work seeks to push forward
a line of thought aimed at identifying the ultimate bounds for practically achievable rates in quantum repeater schemes, capturing high-level distinctions between protocols without going too much into the specifics of a particular implementation. Such considerations, it is reasonable to expect, will substantially help in assessing the potential of multi-partite long-distance quantum communication.
\section{Acknowledgments}
J.~E.~has been supported by the BMBF (QR.X on quantum repeaters) and the Einstein Foundation. R.~L.~acknowledges support from the Alexander von Humboldt Foundation. N.~W.~has been funded by a Marie Sklodowska-Curie Individual Fellowship. S.~P.~has been supported by the European Union via ``Continuous Variable Quantum Communications'' (CiViQ, Grant Agreement No.~820466). The authors would like to thank Frederik Hahn, Julius Walln{\"o}fer and Peter van Loock for interesting discussions. We are also grateful to Cillian Harney for his feedback that helped us improve Fig.~\ref{Fig:rubidium}.
\section*{Appendix}
\subsection*{Appendix A: Two-way quantum capacities and general bounds}
The most important point-to-point quantum communication scenario concerns two remote parties, Alice and Bob, who are connected by a (memoryless) quantum channel $\mathcal E$ without pre-shared entanglement. By means of this channel, the two parties may implement various quantum tasks, such as the reliable transmission of qubits, the distillation of entanglement bits (ebits) and the generation of secret bits. The most general protocols are based on transmissions through the quantum channel interleaved by local operations (LOs) assisted by unlimited two-way classical communication (CC), briefly called adaptive LOCCs.
At the beginning of such a protocol, Alice and Bob hold two local registers $a$ and $b$ of quantum systems which are adaptively updated before and after each transmission through $\mathcal E$. After $n$ uses of the channel, Alice and Bob share the quantum state $\rho_{a,b}^n$, which depends on the sequence of LOCCs $\mathcal L=\{L_1,L_2,\cdots,L_n\}$.
The rate $R_n$ of this protocol is defined through a target state $\phi_n$ whose information content is equal to $nR_n$ bits. If the output state $\rho_{a,b}^n$ is close in trace norm to $\phi_n$, i.e. $\|\rho_{a,b}^n-\phi_n\|\leq\epsilon$ with $\epsilon\rightarrow 0$ for large $n$,
then the rate of the protocol is equal to $R_n$. The generic two-way capacity $\mathcal C(\mathcal E)$ is defined by taking the limit of a large number of channel uses $n$ and by optimizing over all possible adaptive protocols $\mathcal L$, i.e.
\begin{equation}
\mathcal C(\mathcal E):=\sup_{\mathcal L}\lim_{n\rightarrow\infty}R_n~.\label{ChCAP}
\end{equation}
In order for the quantity $\mathcal C(\mathcal E)$ to acquire an operational meaning, we need to specify the goal of the adaptive protocol implemented by Alice and Bob. Thus, if the target state is a maximally entangled state, meaning that the protocol is an entanglement distribution protocol, we have that $\mathcal C(\mathcal E)=D_2(\mathcal E)$, where $D_2(\mathcal E)$ is the two-way entanglement distribution capacity of the channel. Since an ebit can be used to teleport a qubit and, vice versa, a qubit can be used to distribute an ebit, $D_2(\mathcal E)$ is equal to the two-way quantum capacity $Q_2(\mathcal E)$, i.e. the maximum achievable rate for transmitting quantum information. If the protocol is a QKD protocol, $\phi_n$ is a private state and the generic two-way quantum capacity is the secret key capacity $K(\mathcal E)$, which is equal to the private capacity $P_2(\mathcal E)$ (assuming unlimited two-way CCs and one-time pad). Since a maximally entangled state is a specific type of
private state, we can write the following relations between all the different capacities
\begin{equation}\label{capIneq}
Q_2(\mathcal E)=D_2(\mathcal E)\leq P_2(\mathcal E)=K(\mathcal E)~.
\end{equation}
As one can see from Eq.~(\ref{ChCAP}), the quantity $\mathcal C(\mathcal E)$ cannot be evaluated directly from its definition and the best strategy to assess it is to resort to suitable lower and upper bounds that are usually built upon information and entanglement measures.
A general lower bound can be given in terms of the \emph{coherent}~\cite{Coh1,Coh2} or \emph{reverse coherent information}~\cite{RevCoh1,RevCoh2} which are, respectively, defined as
\begin{align}
I_C(\mathcal E,\rho_A)&=I(A\rangle B)_{\rho_{RB}}:=S(\rho_B)-S(\rho_{RB}),\\
I_{RC}(\mathcal E,\rho_A)&=I(A\langle B)_{\rho_{RB}}:=S(\rho_R)-S(\rho_{RB}),
\end{align}
where the quantum channel $\mathcal E$ takes as an input the quantum state $\rho_A$ of system $A$ (see also the related notions of
\emph{negative cb-entropy} of a channel~\cite{devetakcb} and
\emph{pseudo-coherent information}~\cite{hayashipseudo}). If $R$ is an auxiliary system and $|\psi\rangle_{RA}$ the purification of $\rho_A$, then the output of the channel is $\rho_{RB}=(\mathcal I\otimes\mathcal E)(|\psi\rangle\langle\psi|_{RA})$.
In the above expressions, we also have $\rho_{R(B)}=\Tr_{B(R)}\rho_{RB}$ and $S(\rho):=-\Tr(\rho\log_2\rho)$ is the
\emph{von Neumann entropy}.
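As a concrete numerical illustration of these definitions (not taken from the text), the following sketch computes $I_C$ and $I_{RC}$ for a qubit amplitude damping channel with maximally mixed input, whose purification is a Bell state; the damping parameter $\gamma=0.3$ is an arbitrary illustrative choice.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def partial_trace(rho, keep):
    # two-qubit partial trace; keep=0 retains R, keep=1 retains B
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def choi(kraus):
    # sigma_E = (I ⊗ E)(|Phi+><Phi+|), with |Phi+> = (|00> + |11>)/sqrt(2)
    phi = np.zeros(4); phi[0] = phi[3] = 1/np.sqrt(2)
    Phi = np.outer(phi, phi)
    return sum(np.kron(np.eye(2), K) @ Phi @ np.kron(np.eye(2), K).conj().T
               for K in kraus)

gamma = 0.3  # illustrative damping probability
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
         np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]
sigma = choi(kraus)
S_RB = von_neumann_entropy(sigma)
S_R = von_neumann_entropy(partial_trace(sigma, 0))   # = 1 (maximally mixed input)
S_B = von_neumann_entropy(partial_trace(sigma, 1))
I_C, I_RC = S_B - S_RB, S_R - S_RB
print(I_C, I_RC)
```

Since the channel is trace preserving, $\rho_R$ is maximally mixed, so $S(\rho_R)=1\geq S(\rho_B)$ and hence $I_{RC}\geq I_C$ for this channel.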
When the input state $\rho_A$ is a maximally-mixed state, its purification is a maximally-entangled state $\Phi_{RA}$, so that $\rho_{RB}$ becomes the Choi matrix of the channel $\sigma_{\mathcal E}=(\mathcal I\otimes\mathcal E) (\Phi_{RA})$. Then we can define the coherent and reverse coherent information of the quantum channel $\mathcal E$ respectively as follows~\cite[Supp. Note 2]{PLOB}
\begin{align}
I_C(\mathcal E)&:=I(A\langle B)_{\sigma_{\mathcal E}}\label{coherent},\\
I_{RC}(\mathcal E)&:=I(A\rangle B)_{\sigma_{\mathcal E}}.\label{Rcoherent}
\end{align}
The quantity $I_C(\mathcal E)$ constitutes an achievable rate for {\itshape forward} one-way entanglement distillation, whereas $I_{RC}(\mathcal E)$ is an achievable rate for {\itshape backward} one-way entanglement distillation. In fact, due to the
\emph{hashing inequality}~\cite{DevWint},
we can write
\begin{equation}\label{lowerB}
\max\{I_C(\mathcal E),I_{RC}(\mathcal E)\}\leq D_1(\sigma_{\mathcal E})~,
\end{equation}
where $D_1(\sigma_\mathcal E)$ is the entanglement that can be distilled from the channel's Choi matrix with the assistance of forward or backward classical communication.
The general weak converse upper bound to the generic two-way capacity $\mathcal C(\mathcal E)$ derived in Ref.~\cite{PLOB} is built upon the notion of the \emph{relative entropy of entanglement} (REE)~\cite{RE2}, suitably extended from quantum states to quantum channels.
Let us recall that the REE of a
quantum state $\rho$ is defined as the
minimum relative entropy between $\rho$ and a
separable state $\sigma_s$~\cite{RE1,RE2}, i.e.,
\begin{equation}
E_R(\rho):=\inf_{\sigma_s\in\text{SEP}}S(\rho\|\sigma_s)~.
\end{equation}
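The infimum over separable states is hard in general, but the definition can be checked numerically in simple cases. The sketch below (illustrative, not from the text) evaluates $S(\rho\|\sigma_s)$ for a two-qubit Bell state against two separable candidates; the dephased mixture $(|00\rangle\langle00|+|11\rangle\langle11|)/2$ is known to be the closest separable state, yielding $E_R=1$.

```python
import numpy as np

def relative_entropy(rho, sig, tol=1e-12):
    # S(rho||sig) = Tr(rho log2 rho) - Tr(rho log2 sig), base 2;
    # returns inf when supp(rho) is not contained in supp(sig)
    p = np.linalg.eigvalsh(rho)
    t1 = sum(x*np.log2(x) for x in p if x > tol)
    q, V = np.linalg.eigh(sig)
    t2 = 0.0
    for qj, v in zip(q, V.T):
        w = float(np.real(v.conj() @ rho @ v))   # weight of rho on this eigenvector
        if qj > tol:
            t2 += w*np.log2(qj)
        elif w > tol:
            return np.inf
    return t1 - t2

# Bell state and two separable candidates
bell = np.zeros((4, 4)); bell[np.ix_([0, 3], [0, 3])] = 0.5
dephased = np.diag([0.5, 0.0, 0.0, 0.5])   # closest separable state (Vedral-Plenio)
mixed = np.eye(4)/4
print(relative_entropy(bell, dephased), relative_entropy(bell, mixed))
```

The dephased candidate gives $S=1$ while the maximally mixed state gives $S=2$, illustrating how the infimum selects the tighter separable approximation.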
We can also introduce the REE of a discrete variable quantum channel $\mathcal E$ with associated Choi matrix $\sigma_{\mathcal E}$ in the following way
\begin{equation}
E_R(\mathcal E):=\sup_{\rho}E_R[
(\mathcal I\otimes\mathcal E) (\rho)]\leq E_R(\sigma_{\mathcal E})~.
\end{equation}
Then Ref.~\cite[Th.~1]{PLOB} states that the generic two-way capacity of Eq.~(\ref{ChCAP}) is upper bounded by the REE bound
\begin{equation}\label{upperB}
\mathcal C(\mathcal E)\leq E_R^\star(\mathcal E):=\sup_{\mathcal L}\lim_{n\rightarrow\infty}\frac{E_R(\rho_{a,b}^n)}{n}~,
\end{equation}
where $\rho_{a,b}^n$ is the output state of a $n$-use adaptive protocol $\mathcal L$.
Note that both the lower bound (\ref{lowerB}) and the upper bound (\ref{upperB}) hold for an arbitrary quantum channel in arbitrary dimension.
In the subsequent section, we discuss how to extend them to asymptotic states, providing in this way a formulation for CV systems, following the asymptotic methodology of Ref.~\cite{PLOB}.
\subsection*{Appendix B: Asymptotic formulation for bosonic systems}
It is important to note that, when dealing with continuous variable systems, the maximally entangled state is an asymptotic (energy-unbounded) state obtained as the limit $\Phi:=\lim_\mu\Phi^\mu$, where $\Phi^\mu$ is a sequence of two-mode squeezed vacuum (TMSV) states parametrized by $\mu$, which quantifies both the two-mode squeezing and the mean number $\bar n$ of thermal photons (local energy) in each mode, i.e., $\mu=\bar n+1/2$~\cite{TeleBK,PeterRMP}. Accordingly, the Choi state of a bosonic channel $\mathcal E$ (e.g., the pure-loss channel under consideration) is given by the asymptotic limit
\begin{equation}
\sigma_{\mathcal E}:=\lim_\mu\sigma_{\mathcal E}^\mu,\quad\sigma_{\mathcal E}^\mu:=
(\mathcal I\otimes\mathcal E)(\Phi^\mu)~.\label{AsymChoi}
\end{equation}
Correspondingly, the computation of the (reverse) coherent information of a quantum channel introduced in Eqs.~(\ref{coherent}) and (\ref{Rcoherent}) has to be performed as the following limits
\begin{align}
&I(A\langle B)_{\sigma_{\mathcal E}}:=\lim_{\mu\rightarrow\infty}I(A\langle B)_{\sigma_{\mathcal E}^\mu},\\
&I(A\rangle B)_{\sigma_{\mathcal E}}:=\lim_{\mu\rightarrow\infty}I(A\rangle B)_{\sigma_{\mathcal E}^\mu}~.
\end{align}
For \emph{bosonic Gaussian channels}
\cite{GaussianChannel}, it can be shown that the functionals $I(A\langle B)_{\sigma_{\mathcal E}^\mu}$ and $I(A\rangle B)_{\sigma_{\mathcal E}^\mu}$ are continuous, monotonic and bounded
in $\mu$. Therefore, the above limits are finite and we can continuously extend Eq.~(\ref{lowerB}) to the asymptotic Choi matrix of a CV channel, for which we may set $D_1(\mathcal E):=\lim_{\mu\rightarrow\infty}D_1(\sigma_{\mathcal E}^\mu)$.\\
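As a hedged numerical sketch (assuming the standard Gaussian-state formulas with vacuum variance $1/2$, consistent with $\mu=\bar n+1/2$, and the usual input-output relations of the pure-loss channel), one can evaluate $I(A\rangle B)_{\sigma_{\mathcal E}^\mu}$ for a pure-loss channel of transmissivity $\eta$ and check that the limit is finite and approaches $-\log_2(1-\eta)$.

```python
import numpy as np

def h(nu):
    # entropy of a thermal mode with symplectic eigenvalue nu (vacuum: nu = 1/2)
    if nu <= 0.5 + 1e-12:
        return 0.0
    return (nu + 0.5)*np.log2(nu + 0.5) - (nu - 0.5)*np.log2(nu - 0.5)

def I_RC_pure_loss(eta, mu):
    # I(A>B) on the quasi-Choi state sigma^mu of the pure-loss channel:
    # CM blocks A = mu*I, B = (eta*mu + (1-eta)/2)*I, C = c*Z, c^2 = eta*(mu^2 - 1/4)
    a, b = mu, eta*mu + (1 - eta)/2
    c2 = eta*(mu**2 - 0.25)
    Delta, detV = a**2 + b**2 - 2*c2, (a*b - c2)**2   # symplectic invariants
    disc = max(Delta**2 - 4*detV, 0.0)
    nu_p = np.sqrt((Delta + np.sqrt(disc))/2)
    nu_m = np.sqrt(max((Delta - np.sqrt(disc))/2, 0.25))
    return h(a) - (h(nu_p) + h(nu_m))                 # S(rho_R) - S(rho_RB)

eta = 0.5
for mu in (1.0, 10.0, 100.0, 1000.0):
    print(mu, I_RC_pure_loss(eta, mu))
print("-log2(1-eta) =", -np.log2(1 - eta))
```

The quantity increases monotonically in $\mu$ and saturates at $-\log_2(1-\eta)$, which is consistent with the role of the reverse coherent information as an achievable rate for the pure-loss channel.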
Let us now consider two sequences of states $\rho_1^\mu$ and $\rho_2^\mu$ converging, respectively, in the trace norm to $\rho_1$ and $\rho_2$, i.e., $\|\rho_i^\mu-\rho_i\|\rightarrow0$, for $i=1,2$. By exploiting the lower semi-continuity of the relative entropy,
we can write
\begin{equation}
S(\rho_1\|\rho_2)\leq\liminf_{\mu\rightarrow\infty}S(\rho_1^\mu\|\rho_2^\mu)~.
\end{equation}
As a consequence the relative entropy of entanglement of an asymptotic state $\rho=\lim_\mu\rho^\mu$ is defined as follows
\begin{equation}\label{asREE}
E_R(\rho):=\inf_{\rho_s^\mu}\liminf_{\mu\rightarrow\infty}S(\rho^\mu\|\rho^\mu_s)~,
\end{equation}
where $\rho_s^\mu$ is an arbitrary sequence of separable states satisfying $\|\rho_s^\mu-\rho_s\|\stackrel{\mu\rightarrow\infty}{\longrightarrow}0$ for some separable state $\rho_s$. A direct implication of Eq.~(\ref{asREE}) is that the REE computed over the {\itshape quasi}-Choi matrix $\sigma_{\mathcal E}^\mu$ of a bosonic channel is defined as
\begin{equation}
E_R(\sigma_{\mathcal E}):=\inf_{\rho_s^\mu}\liminf_{\mu\rightarrow+\infty}S(\sigma_{\mathcal E}^\mu\|\rho_s^\mu)~.
\end{equation}
\subsection*{Appendix C: Channel simulation and teleportation stretching}
We already mentioned that in order to write Eq.~(\ref{stretch}), which is fundamental in simplifying the REE bound of Eq.~(\ref{upperB}), we need to rely on two ingredients which are, respectively, known as {\itshape channel simulation} and {\itshape teleportation stretching}.
In this section we briefly review these two technical steps with the main definitions while referring the reader to \cite{PLOB} for more technical details and a discussion of historical developments.
The notion of quantum channel simulation comes from a straightforward generalization of the quantum teleportation protocol, whose structure involves local operations (LOs), namely Bell detection on Alice's side and Bob's unitary correction, plus classical communication (CC) from Alice to Bob~\cite{BennettTELE}. For a maximally entangled resource state $\Phi$, the teleported output perfectly corresponds to the input. If we instead perform teleportation over an arbitrary mixed resource state of systems A and B, the teleported state on Bob's side will be the output of a certain quantum channel $\mathcal E$ from Alice to Bob (see Ref.~\cite[Sec.~V]{BDSW96} for the initial insights behind this technique, later expanded by various groups over the years).
More generally, any implementation through an arbitrary LOCC $\mathbb L$ and a resource state $\sigma$ simulates the output of a quantum channel $\mathcal E$.
Thus, for any $\mathcal E$ and for any input $\rho$, we can express the output as~\cite{PLOB}
\begin{equation}
\mathcal E(\rho)=\mathbb L(\rho\otimes\sigma)~.\label{simula}
\end{equation}
When dealing with CV systems as in our scenario, the LOCC simulation involves the limit $ \sigma:=\lim_{\mu\rightarrow\infty}\sigma^\mu$ of resource states $\sigma^\mu$. Then, for any finite $\mu$, the simulation provides the approximated channel
\begin{equation}
\mathcal E^\mu(\rho)=\mathbb L(\rho\otimes\sigma^\mu)~,
\end{equation}
which defines the quantum channel $\mathcal E$ as the following point-wise limit
\begin{equation}
\mathcal E(\rho)=\lim_{\mu\rightarrow\infty}\mathcal E^\mu(\rho)~.\label{LimSimu}
\end{equation}
For any given quantum channel, we can always find a suitable LOCC $\mathbb L$ and a resource state $\sigma$ that achieve the simulation in Eq.~(\ref{simula}). A genuine LOCC simulation is established when the quantum channel satisfies the property of {\itshape teleportation covariance}. If $\mathcal U$ is the group of teleportation unitaries, a quantum channel $\mathcal E$ is teleportation covariant if the following identity holds for any $U\in\mathcal U$
\begin{equation}
\mathcal E(U\rho U^\dagger)=V\mathcal E(\rho)V^\dagger~,
\end{equation}
with $V$ a unitary transformation not necessarily belonging to $\mathcal U$. Note that the unitary group $\mathcal U$ is the Weyl-Heisenberg group (generalized Pauli operators) for DV systems, while for CV systems it coincides with the group of displacement operators. An interesting property of a tele-covariant quantum channel $\mathcal E$ is that it can be simulated by teleporting the input state $\rho$ using its Choi matrix $\sigma_{\mathcal E}$ as the resource for teleportation, i.e., for a DV channel we write
\begin{equation}
\mathcal E(\rho)=\mathbb T(\rho\otimes\sigma_{\mathcal E})~,
\end{equation}
where $\mathbb T$ is teleportation~\cite{BennettTELE}. For a CV channel, by recalling Eq.~(\ref{LimSimu}), the above relation is rewritten as
\begin{equation}\label{Sim1}
\mathcal E^\mu(\rho)=\mathbb T(\rho\otimes\sigma^\mu_{\mathcal E})~,
\end{equation}
where now $\mathbb T$ is the Braunstein-Kimble teleportation protocol~\cite{TeleBK,TeleportationReview} and the sequence of states $\sigma_{\mathcal E}^\mu$ defines the asymptotic Choi state in the large-$\mu$ limit, as in Eq.~(\ref{AsymChoi}). Note that several quantum channels satisfy the property of teleportation covariance, including Pauli and erasure channels in DVs, and bosonic Gaussian channels in CVs.
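The DV simulation $\mathcal E(\rho)=\mathbb T(\rho\otimes\sigma_{\mathcal E})$ can be made concrete for a qubit Pauli channel, which is teleportation covariant. The following sketch (illustrative; a dephasing channel is chosen for simplicity) teleports an input state over the channel's Choi matrix, applies the Pauli corrections, averages over Bell outcomes, and recovers the channel output.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]
phi = np.array([1, 0, 0, 1], dtype=complex)/np.sqrt(2)   # |Phi+>

def dephasing(rho, p):
    return (1 - p)*rho + p*(Z @ rho @ Z)

def choi(channel):
    # (I ⊗ E)(|Phi+><Phi+|), applying E block-wise on the B index
    Phi = np.outer(phi, phi.conj())
    out = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            out[2*i:2*i+2, 2*j:2*j+2] = channel(Phi[2*i:2*i+2, 2*j:2*j+2])
    return out

def teleport_over(sigma, rho):
    # Bell measurement on (input, A-half of sigma), Pauli correction on B,
    # averaged over the four outcomes: returns E(rho) for Pauli channels
    state = np.kron(rho, sigma)                  # qubit order: input, A, B
    out = np.zeros((2, 2), dtype=complex)
    for U in paulis:
        bell = np.kron(I2, U) @ phi              # Bell vector (I ⊗ U)|Phi+>
        M = np.kron(np.outer(bell, bell.conj()), I2)
        post = M @ state @ M                     # unnormalized post-measurement state
        rho_B = np.trace(post.reshape(4, 2, 4, 2), axis1=0, axis2=2)
        out += U @ rho_B @ U.conj().T            # unitary correction on B
    return out

p = 0.25
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
out = teleport_over(choi(lambda r: dephasing(r, p)), rho)
print(np.max(np.abs(out - dephasing(rho, p))))
```

With the identity-channel Choi state (i.e., the Bell state) the routine reduces to standard teleportation and returns the input state unchanged.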
By making use of channel simulation, we are able to perform teleportation stretching and simplify the adaptive structure of a protocol for quantum and private communication. This means that the protocol output $\rho_{a,b}^n$ is reduced to an $n$-fold tensor product of resource states $\sigma^{\otimes n}$ up to a TP LOCC $\bar{\Lambda}$. The reduction procedure starts by replacing each transmission over the channel $\mathcal E$ with its simulation $(\mathbb T,\sigma)$. At this stage, we can stretch the resource state $\sigma$ outside the adaptive operations, while $\mathbb T$ is incorporated into the protocol LOCCs. After that, all the LOCCs, together with the initial register preparation, are merged into a single final LOCC $\bar\Lambda$, which turns out to be TP after averaging over all the possible local measurement outcomes. At the end, we can write~\cite[Lemma~3]{PLOB}
\begin{equation}\label{stretch1}
\rho_{a,b}^n=\bar\Lambda(\sigma^{\otimes n})~.
\end{equation}
For CV quantum channels, the above equation must be interpreted in an asymptotic fashion, as follows. We replace each transmission through $\mathcal E$ with the channel $\mathcal E^\mu$ defined in Eq.~(\ref{Sim1}), with a finite-energy resource state $\sigma^\mu$. If we assume that the local registers of Alice and Bob have energy $\leq N$, i.e., that the total input state of each transmission belongs to a bounded alphabet $D_N$, then the channel $\mathcal E^\mu$ simulates $\mathcal E$ up to an error $\epsilon(\mu,N):=\|\mathcal E-\mathcal E^\mu\|_{\diamond N}$, where
\begin{equation}
\|\mathcal E-\mathcal E^\prime\|_{\diamond N}:=\sup_{\rho_{RS}\in D_N}\|\mathcal I_R\otimes \mathcal E_S(\rho_{RS})-\mathcal I_R\otimes \mathcal E^\prime_S(\rho_{RS})\|
\end{equation}
is the energy-constrained diamond norm. By exploiting the monotonicity (non-increase) of the trace distance under CPTP maps and the triangle inequality, it can be proven~\cite{PLOB} that the trace distance between the output $\rho_{a,b}^n$ and the simulated output $\rho_{a,b}^{n,\mu}$ (the output of an adaptive protocol performed over $\mathcal E^\mu$) satisfies
\begin{equation}
\|\rho_{a,b}^n-\rho_{a,b}^{n,\mu}\|\leq n\epsilon(\mu,N)~. \end{equation}
We can now substitute $\rho_{a,b}^{n,\mu}$ with its decomposition given by the teleportation stretching, so that we obtain
\begin{equation}
\|\rho_{a,b}^n-\bar\Lambda(\sigma^{\mu\otimes n})\|\leq n\epsilon(\mu,N)~,
\end{equation}
for any energy constraint $N$. Then, by taking the limit $\mu\rightarrow\infty$, we get the asymptotic version of Eq.~(\ref{stretch}) (asymptotic stretching)
\begin{equation}\label{stretch2}
\lim_{\mu\rightarrow\infty}\|\rho_{a,b}^n-\bar\Lambda(\sigma^{\mu\otimes n})\|=0~.
\end{equation}
By using the decompositions of Eqs.~(\ref{stretch1}) and (\ref{stretch2}), we can consequently simplify the upper bound in (\ref{upper}). In fact, we can write
\begin{equation}\label{REEineq}
E_R(\rho_{a,b}^n)\leq E_R(\sigma^{\otimes n})\leq nE_R(\sigma)~,
\end{equation}
where the two inequalities exploit, respectively, the monotonicity of the REE under TP LOCCs and the sub-additivity of the REE over tensor products. By putting Eq.~(\ref{REEineq}) into Eq.~(\ref{upper}), we can get rid of both the optimization over all the adaptive protocols and the asymptotic limit, so that a {\itshape single-letter} upper bound to the capacities introduced in (\ref{capIneq}) is obtained
\begin{equation}
\mathcal C(\mathcal E)\leq E_R(\sigma)~.
\end{equation}
If the channel is teleportation covariant we can then write the above equation in terms of the Choi matrix $\sigma_{\mathcal E}$ of the channel, i.e.,
\begin{equation}
\mathcal C(\mathcal E)\leq E_R(\sigma_{\mathcal E})~.
\end{equation}
See also Ref.~\cite[Th.~5]{PLOB} and related proofs for more details.
\subsection*{Appendix D: BB84 key rate}
Over a pure-loss channel there is no dephasing, so there is one bit of distillable key for every successful connection between the two remote stations. All that is required, then, is to calculate the probability of this happening in a single channel use for a repeater scheme based upon storage and entanglement swapping, but without any distillation. Following Ref.~\cite{vanLoock:2020gt}, for a scheme with half-link success probability $p$ given by (\ref{psimp}) and symmetric transmission and receiver losses $\tau^{t,\mathrm{eff}} = \tau^{r,\mathrm{eff}} = \tau^{\mathrm{eff}}$, we find that the BB84 rate is
\eqn{r_{\mathrm{BB84}} = \frac{1}{2} \frac{ \sqrt{\eta}\tau^{\mathrm{eff}}(2-\sqrt{\eta}\tau^{\mathrm{eff}})}{3-2\sqrt{\eta}\tau^{\mathrm{eff}}} \tau_{\mathrm{mem}} .}
The rate per time is
then given by $R r_{\mathrm{BB84}}$. For a standard polarisation-based implementation, there are actually two
optical modes (corresponding to horizontal and vertical polarisation) that must be transmitted for each round, so this rate must be halved to obtain the rate per transmitted mode, which gives the factor of $1/2$ in the above expression.
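The displayed rate is straightforward to evaluate numerically. The sketch below (parameter values illustrative; $\tau_{\mathrm{mem}}$ set to 1) shows the characteristic $\sqrt{\eta}$ repeater scaling at high loss, since at small $s=\sqrt{\eta}\,\tau^{\mathrm{eff}}$ the rate behaves as $r_{\mathrm{BB84}}\approx s/3$.

```python
import numpy as np

def r_bb84(eta, tau_eff, tau_mem=1.0):
    # rate per transmitted mode from the expression above;
    # s plays the role of the half-link transmission factor
    s = np.sqrt(eta)*tau_eff
    return 0.5 * s*(2 - s)/(3 - 2*s) * tau_mem

for loss_dB in (10, 20, 30):              # illustrative end-to-end losses
    eta = 10**(-loss_dB/10)
    print(loss_dB, "dB:", r_bb84(eta, tau_eff=0.9))
```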
\section{Introduction}
The past decade has seen tremendous improvement in the ability of
Lattice QCD (LQCD) calculations to provide results that can confront
experiment. However, lattice computations of some key quantities
remain at odds with experimental determinations, including the
momentum fraction carried by partons in the nucleon, and, notably, the
axial-vector charge, $g_A^{u-d}$, of the nucleon. These discrepancies
are often attributed to finite-volume effects, and to the
contribution of excited states to ground-state matrix elements. The
calculation of the axial-vector charge has been a particular focus
within the lattice community, and a dedicated effort to resolve these
discrepancies has demonstrated success\cite{Chang:2018uxx}, using a method to
control excited states inspired by the Feynman-Hellman theorem. In
this paper we explore a different means of controlling excited states,
through the use of a novel smearing method,
``distillation''\cite{Peardon:2009gh}, and the variational method for
the case of the nucleon charges $g_A^{u-d}$, $g_S^{u-d}$ and $g_T^{u-d}$.
The matrix element of the flavor non-singlet axial-vector current
$A_\mu^a=\overline{\psi}\gamma_\mu\gamma_5\frac{\tau^a}{2}\psi$, where
$\psi$ is the isospin doublet of $u,d$ quark fields and $\tau$ the
Pauli-spin matrices acting in isospin space, between nucleons of
momenta $p$ and $p'$ can be expressed in terms of the axial-vector and the induced pseudoscalar and tensor form factors
\begin{align}
\begin{split}
&\bra{N\left(p',s'\right)}A_\mu^3\ket{N\left(p,s\right)}=\overline{u}_N\left(p',s'\right)\left[\gamma_\mu\gamma_5G_A\left(q^2\right)+\right. \\
&\quad\left.+\frac{q_\mu}{2M_N}\gamma_5G_P\left(q^2\right)+i\frac{\sigma^{\mu\nu}q_\nu}{2M_N}\gamma_5G_T\left(q^2\right)\right]\frac{\tau^3}{2}u_N\left(p,s\right)
\end{split}
\label{eq:axial}
\end{align}
where $q_\mu=p_\mu'-p_\mu$ is the momentum transfer. At zero momentum transfer, the contributions of the pseudoscalar and tensor form factors to the matrix element of equation \ref{eq:axial} vanish, and we are left with the definition of the isovector axial charge of the nucleon, $g_A^{u-d}=G_A\left(0\right)$.
The axial-vector charge $g_A$ quantifies the $n\rightarrow
pe^{-}\overline{\nu}_e$ coupling, the degree of low-energy chiral
symmetry breaking in QCD, and even the difference between the $u$ and
$d$ quark contributions to the proton spin $g_A=\Delta u-\Delta d$.
The breadth of different phenomena dependent on a knowledge of $g_A$
highlights the need for precise agreement between theoretical and
experimental determinations. Moreover, the straightforward definition
of $g_A$ serves as a useful proving ground for any new lattice
algorithm purporting to calculate meaningful quantities in QCD.
Precise neutron $\beta$ decay experiments have measured the nucleon
isovector axial charge $g_A^{u-d}=1.2724\pm0.0023$ \cite{Tanabashi:2018oca}. Yet,
historically, the bulk of lattice calculations have systematically
underestimated $g_A^{u-d}$ by roughly 10-15\%. This has led to intense discussion in the community on the role that finite-volume effects, the lack of chiral symmetry in most lattice formulations of QCD, the influence of heavy quark flavors, the use of local currents with large discretization effects, and excited-state contamination might have on the calculation of $g_A^{u-d}$ and nucleon matrix elements in general. Calculations have been performed using the Highly Improved
Staggered Quark (HISQ) \cite{Lin:2018obj} and Domain Wall (DW) \cite{Ohta:2015aos}
fermion actions which, despite the restoration of chiral symmetry,
present challenges, namely, identifying quark flavors and the
numerical cost compared to Wilson-type actions, respectively. Calculations have
even been performed with $\mathcal{O}\left(a^2\right)$ improved
currents with novel noise reduction techniques
\cite{Liang:2016fgy,Yang:2015zja}, and others still with 2+1+1 flavor
QCD accounting for several excited-states in requisite fits
\cite{Gupta:2018qil,Gupta:2018wrq}. In all such cases the calculated value of
$g_A^{u-d}$ differs from experiment by $\sim5-10$\%. Clearly standard
techniques of calculating and subsequently fitting two- and three-point correlation
functions to extract $g_A^{u-d}$ routinely come up short.
Recent work~\cite{Chang:2018uxx} employs a methodology inspired by the Feynman-Hellman theorem to control excited-state effects by summing over all current insertion times, engendering extrapolation in a single Euclidean temporal variable rather than two; agreement was found to within 1\% of experiment. We remark that a few 2-flavor
lattice QCD calculations employing the Wilson plaquette/clover fermion
actions have been performed for $g_A^{u-d}$ whose results are within
$\sim1$\% of the experimental determination
\cite{Bali:2014nma,Horsley:2013ayv}. These studies calculated $g_A^{u-d}$ for source-sink separations greater than 1 fm. Furthermore, it was found that $g_A^{u-d}$ has a strong dependence on the spatial lattice volume $V_3$. Taken together, these observations suggest that the largest contributors to the uncertainty in $g_A^{u-d}$ are small lattice volumes and excited-state contamination.
In this paper, we investigate a means of overcoming one of the
dominant systematic uncertainties in the calculation of nucleon
charges, namely that arising from our inability to isolate the
ground-state nucleon from its excitations. The variational method, an
algorithm to improve overlap onto a desired state, was applied in
\cite{Yoon:2016dij,Yoon:2016jzj,Owen:2012ts} to a small basis of
Jacobi-smeared nucleon interpolators of the form
$\chi\left(x\right)=\epsilon^{abc}\left[u^{a\top}\left(x\right)C\gamma_5d^b\left(x\right)\right]u^c\left(x\right)$,
where it was found variationally improved interpolators can reduce
excited-state effects. We will show that an amalgam of a suitable
basis of interpolating operators, the use of the variational method
with that basis, and an efficient means of computing the needed
correlation functions through the application of ``distillation'',
affords a powerful and computationally efficient method of taming
excited-state contributions to those charges. Furthermore, we will
show that ``distillation'' alone, enabling a momentum projection to be
performed at each time slice in the two- and three-point functions,
provides a significant increase in the statistical precision whilst
being an effective smearing operator with only a single interpolating
operator. This allows the reliable extraction of matrix elements at
earlier source-sink separations when the nucleon signal-to-noise ratio
is exponentially better. As lattice calculations proceed to ever more
complicated quantities, exemplified by quasi-PDFs and pseudo-PDFs, the
need for tamed excited-state effects is paramount.
In this work, we abstain from calculating and presenting renormalized
isovector charges of the ground-state nucleon, and from performing continuum,
finite-volume, and chiral extrapolations. Throughout this work we consider
forward-scattering matrix elements of nucleons at rest. Although much
of our motivating comments have centered around $g_A^{u-d}$, we extend
our technique to include the scalar $g_S^{u-d}$ and tensor $g_T^{u-d}$
charges of the nucleon, whose precise determination will constrain BSM
searches at the TeV scale and dark matter direct-detection searches. A
future study will explore forward-scattering matrix elements of moving
states, and generalizations to scalar, axial, and tensor nucleon form
factors.
This paper is organized as follows. In section~\ref{sec:method} we discuss
excited-state contamination and present the computational methodology
employed throughout this work. In particular, we review distillation
as a low-mode approximation of the more standard Jacobi and Wuppertal
smearing algorithms, and discuss the variational method as a means of
improving overlap onto a desired state. Section~\ref{sec:details}
describes the lattice ensemble, the explicit construction of the
interpolating fields, and then finishes with a discussion of the
strategy used to extract the nucleon charges from our data. In section~\ref{sec:results}
we first feature a comparison of the nucleon effective masses
obtained using a Jacobi-smeared interpolator, a single ``local''
distilled interpolator, and different variationally improved
interpolators from distinct bases of distilled interpolators. We then
proceed to present determinations of effective masses of the nucleon
via fits to our data and ultimately our extracted charges. We then conclude
with a discussion of our results, a cost-benefit analysis
of standard smearing techniques and distillation, implications for
assorted studies in nucleon structure, and directions for future
research.
\section{Computational Methodology}\label{sec:method}
We begin this section with some definitions we will use throughout
this work. Isovector charges $g_\Gamma$ of the nucleon are measured
experimentally through the neutron to proton transitions
$\bra{p\left(p,s\right)}\mathcal{O}_\Gamma^{ud}\ket{n\left(p,s\right)}$,
where $\mathcal{O}_\Gamma^{ud}=\overline{u}\Gamma d$, or via proton
and neutron charge differences. Imposing isospin symmetry, one can
show
\begin{equation}
\bra{p\left(p,s\right)}\mathcal{O}_\Gamma^{ud}\ket{n\left(p,s\right)}=\bra{p\left(p,s\right)}\mathcal{O}_\Gamma^{u-d}\ket{p\left(p,s\right)}
\label{eq:isospin}
\end{equation}
where $\mathcal{O}_\Gamma^{u-d}=\overline{u}\Gamma
u-\overline{d}\Gamma d$ is an external current. The isovector charges
of the nucleon are thus defined as
\begin{equation*}
\bra{N\left(p,s\right)}\mathcal{O}_\Gamma^{u-d}\ket{N\left(p,s\right)}=g_\Gamma^{u-d}\overline{u}_s\left(p\right)\Gamma u_s\left(p\right)
\end{equation*}
where we normalize the nucleon spinors in Euclidean space according to
\begin{equation*}
\sum_su_s\left(p\right)\overline{u}_s\left(p\right)=-i\slashed{p}_E+m_N.
\end{equation*}
Indeed the calculation of isovector quantities is computationally less
demanding than the calculation of isoscalar and flavor-diagonal
quantities, in which the cancellation of disconnected quark lines in
the isospin limit does not occur.
\subsection{Excited-State Contamination}
The calculation of hadronic matrix elements within LQCD requires the
construction of operators that attempt to interpolate a given hadronic
state from the vacuum, with minimal overlap with neighboring
states. As the precise wavefunction of any hadronic state is not
known, any operator construction of a desired continuum $J^{PC}$ is
merely a ``best guess'' and necessarily overlaps with other hadronic
states of the same quantum numbers - most notably, excited-states and
multi-particle states. This problem is compounded with the explicit
breaking of Lorentz symmetry, in which continuum operators residing in
different irreducible representations (irreps) of the Lorentz group
can mix under subduction to the same lattice irrep thereby increasing the
number of states contributing within a given lattice irrep.
Explicitly, consider a two-point correlation function projected to zero momentum
\begin{equation}
C_{2\text{pt}}(t)=\sum_{\vec{x}}\langle\mathcal{O}(\vec{x},t)\overline{\mathcal{O}}(\vec{0},0)\rangle,
\label{eq:simple_2pt}
\end{equation}
where $\mathcal{O}$ is an interpolating operator for the state of interest. Performing a spectral decomposition, this can be expressed as
\begin{equation}
C_{2\text{pt}}(t) = \sum_n\frac{1}{2E_n}\left|\bra{0}\mathcal{O}\ket{n}\right|^2e^{-E_nt},\label{eq:decomp}
\end{equation}
where the sum is over all states, of energy $E_n$, that can be created
with the operator $\mathcal{O}$. In order to extract reliably the
ground-state mass $M_0 \equiv E_0$, one can study the large-time behavior of the two-point correlator, wherein contributions of excited states are negligible, or make a judicious choice of interpolating operator such that the overlap factors $Z_n=\bra{0}\mathcal{O}\ket{n}$, for states $n>0$, are greatly suppressed relative to $\bra{0}\mathcal{O}\ket{n=0}$, and thereby determine $E_0$ at small
temporal separations. Given that lattice calculations of baryon properties are constrained by an exponentially increasing noise-to-signal ratio with
increasing Euclidean time, the latter approach is far preferable. This issue is further compounded when considering
matrix elements of an external current $\mathcal{J}$, where the
suppression of excited-states is needed both between the source
interpolator and the current, and between the current and the sink interpolator. There is thus strong motivation
to develop operators that couple predominantly to the ground state, and only weakly to the excited states.
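The effect of the overlap factors on the approach of the effective mass to the ground-state energy can be illustrated with a toy two-state model (all values illustrative, not from this calculation): suppressing $Z_1/Z_0$ moves the plateau to earlier times, where the signal-to-noise ratio is better.

```python
import numpy as np

# Toy two-state model of a zero-momentum two-point function,
# C(t) = Z0 e^{-E0 t} + Z1 e^{-E1 t}   (illustrative values, lattice units)
E0, E1 = 0.5, 0.9
t = np.arange(1, 25)

def effective_mass(C):
    return np.log(C[:-1]/C[1:])

C_point = np.exp(-E0*t) + 1.00*np.exp(-E1*t)   # poor overlap: Z1/Z0 = 1
C_smear = np.exp(-E0*t) + 0.05*np.exp(-E1*t)   # smeared source: Z1/Z0 suppressed
m_point, m_smear = effective_mass(C_point), effective_mass(C_smear)
print("m_eff at t=1:", m_point[0], m_smear[0], "  E0 =", E0)
```

Both effective masses approach $E_0$ from above, but the "smeared" correlator is already within a fraction of a percent of $E_0$ at the earliest time slices.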
\subsection{Smearing}
Interpolating fields constructed of point-like quark and gluonic
fields couple to hadronic states at all energy scales.
``Smearing'' is a well-established technique to increase the
overlap of interpolators with the low-lying states of the spectrum (i.e. confinement scale physics), and to reduce
the contribution of the high-energy modes to correlation functions, through the use of
spatially extended operators of hadronic size. Specifically, we can
replace the quark fields $\psi(\vec{x},t)$ occurring in the path integral with spatially extended quark fields
\[
\tilde{\psi}(\vec{x},t) = \sum_{\vec{y}} S[U](\vec{x},\vec{y})
\psi(\vec{y},t),
\]
where $S[U](\vec{x},\vec{y})$ is a gauge-covariant scalar ``smearing''
kernel that is functionally dependent on the underlying gauge fields $U$ on some time slice $t$.
A frequently utilized smearing operator is $J_{\sigma,n_\sigma}\left(t\right)=\left(1+\frac{\sigma\nabla^2\left(t\right)}{n_\sigma}\right)^{n_\sigma}$ defining the Jacobi-smearing method \cite{Allton:1993wc},
where $\nabla^2$ is a gauge-covariant discretization of the Laplacian,
$\sigma$ is the smearing ``width'', and $n_\sigma$ represents the
number of applications of the Laplacian. In the large $n_\sigma$ limit,
\begin{equation*}
\tilde\psi\left(\vec{x},t\right)=\lim_{n_\sigma\rightarrow\infty}J_{\sigma,n_\sigma}\left(t\right)\psi\left(\vec{x},t\right)\rightarrow e^{\sigma\nabla^2\left(t\right)}\psi\left(\vec{x},t\right)
\end{equation*}
thus approaching a Gaussian of width $\sigma$, characteristic of the
size of a hadron.
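The Gaussian limit can be checked numerically in the free-field case ($U=1$) on a one-dimensional periodic lattice; the sketch below (illustrative parameters) compares $J_{\sigma,n_\sigma}$ applied to a point source with the $n_\sigma\rightarrow\infty$ kernel $e^{\sigma\nabla^2}$.

```python
import numpy as np

L = 64
lap = -2*np.eye(L) + np.eye(L, k=1) + np.eye(L, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0        # periodic free-field Laplacian (U = 1)

sigma, n = 4.0, 200                  # illustrative smearing parameters
J = np.linalg.matrix_power(np.eye(L) + (sigma/n)*lap, n)

src = np.zeros(L); src[L//2] = 1.0   # point source
smeared = J @ src

# n -> infinity limit: exp(sigma*lap) applied via the eigendecomposition
w, V = np.linalg.eigh(lap)
gauss = V @ (np.exp(sigma*w) * (V.T @ src))
print("max |J^n - exp| on the profile:", np.max(np.abs(smeared - gauss)))
```

Since the rows of the smearing kernel sum to one, the profile is normalized, and it peaks at the source location with an approximately Gaussian width set by $\sigma$.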
\subsection{Distillation \& Operator Construction}
Distillation\cite{Peardon:2009gh} is a low-rank approximation to a gauge-covariant smearing kernel. Specializing to the
case of the Laplacian, we begin by seeking solutions to
$-\nabla^2\left(t\right)\xi^k\left(t\right)=\lambda^k\left(t\right)\xi^k\left(t\right)$. Ordering
solutions by the magnitude of the eigenvalues
$\lambda^k\left(t\right)$, the distillation operator is constructed as
the outer-product of two eigenvectors on a given time slice
\begin{equation}
\square\left(\vec{x},\vec{y};t\right)_{ab}=\sum_{k=1}^N\xi_a^{\left(k\right)}\left(\vec{x},t\right)\xi_b^{\left(k\right)\dagger}\left(\vec{y},t\right),
\end{equation}
where the color indices $\lbrace a,b\rbrace$ are made explicit. The distillation operator
$\square$ is then applied to each quark or
antiquark field both at the source and at the sink.
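In the free-field ($U=1$) one-dimensional case the eigenvectors are plane waves, and $\square$ is simply a rank-$N$ projector onto the smoothest modes. The toy sketch below (illustrative; $N=9$ keeps complete degenerate momentum multiplets, so the projector is translation invariant) verifies the projector property and the resulting smearing profile.

```python
import numpy as np

L = 64
lap = -2*np.eye(L) + np.eye(L, k=1) + np.eye(L, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0        # periodic free-field Laplacian (U = 1)

w, V = np.linalg.eigh(-lap)          # eigenvalues ascending: smoothest modes first
N = 9                                # 1 + 2*4 modes: complete degenerate multiplets
box = V[:, :N] @ V[:, :N].T          # distillation operator = rank-N projector

src = np.zeros(L); src[L//2] = 1.0
profile = box @ src                  # smearing profile of a point source
print("rank =", round(np.trace(box)), " peak at", np.argmax(profile))
```

Unlike the Gaussian kernel, the projector is idempotent ($\square^2=\square$), and increasing $N$ sharpens the profile, recovering the point source as $N$ approaches the full rank.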
Distillation affords several advantages over Jacobi-like smearing
methods. Distillation, firstly, factorizes the construction of the
interpolating or current operators from quark propagation. Secondly,
distillation allows operators not only at the sink location but also
at the source to incorporate more elaborate spatial structure, and in
particular derivatives, without additional inversions of the Dirac
operator. Finally, distillation enables momentum projection to be
performed both at the source and sink time slices, thus providing a
more thorough sampling of a given lattice configuration. In this case,
equation \ref{eq:simple_2pt} becomes
\begin{equation}
C_{2\text{pt}}\left(t\right)=\sum_{\vec{x},\vec{y}}\langle\mathcal{O}\left(\vec{x},t\right)\overline{\mathcal{O}}\left(\vec{y},0\right)\rangle.\label{eq:dist2pt}
\end{equation}
These features have been key to
the precise determination of the many energy levels in the QCD spectrum
needed for the study of resonances in lattice calculations. However,
these features are also of advantage in studies of hadron structure, by
facilitating more varied interpolator constructions, and by enabling momentum
projection at each time slice in a correlation function.
Specializing to the case of baryons, we will construct our
interpolating operators ${\cal O}_i$ following the procedure of
Refs.~\cite{Edwards:2011jj,Dudek:2012ag}
\begin{equation}
\mathcal{O}_i\left(t\right)\propto\epsilon^{abc}\mathcal{S}^{\alpha\beta\gamma}_i
\left(\mathcal{D}_1\Box u\right)^\alpha_a\left(\mathcal{D}_2\Box d\right)^\beta_b\left(\mathcal{D}_3\Box u\right)^\gamma_c\left(t\right),
\end{equation}
where $\mathcal{D}_{1,2,3}$ are spatial operators
constructed from covariant derivatives, introduced to probe the
radial/angular structure of the nucleon wavefunction, and
$S^{\alpha\beta\gamma}$ are subduction matrices that project a state
of definite continuum spin into irreps of the hypercubic lattice (explicit construction given in Section \ref{subsec:distop_deets}). The building blocks of the two-point and three-point correlation functions
employing these interpolating operators are
\begin{itemize}
\item\textit{solution vectors}
\begin{equation*}
S^{\left(k\right)}_{\alpha\beta}(\vec{x},t';t)=M_{\alpha\beta}^{-1}\left(t',t\right)\xi^{(k)}\left(t\right)
\end{equation*}
\item\textit{perambulators}
\begin{equation*}
\tau^{kl}_{\alpha\beta}\left(t',t\right)=\xi^{(k)\dagger}\left(t'\right)M^{-1}_{\alpha\beta}\left(t',t\right)\xi^{(l)}\left(t\right)
\end{equation*}
\item\textit{elementals}
\begin{align*}
\Phi^{\left(i,j,k\right)}_{\alpha_1,\alpha_2,\alpha_3}\left(t\right)&=\epsilon^{abc}\left(\mathcal{D}_1\xi^{\left(i\right)}\right)^a\left(\mathcal{D}_2
\xi^{\left(j\right)}\right)^b \\
&\quad\left(\mathcal{D}_3 \xi^{\left(k\right)}\right)^c\left(t\right)S_{\alpha_1,\alpha_2,\alpha_3}
\end{align*}
\end{itemize}
where it should be noted that the inversion of the lattice Dirac operator against a smeared point source, as used in standard techniques, is replaced in distillation by inversion of the lattice Dirac operator against the $k^{\text{th}}$ eigenvector of time slice $t$.
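As a concrete illustration of these building blocks, the sketch below (with invented array shapes, random stand-in data, and spin indices suppressed for brevity) shows how a perambulator follows directly from the distillation eigenvectors and the solution vectors:

```python
import numpy as np

# Illustrative sketch only: shapes and data are assumptions, not our lattice setup.
# nvec distillation eigenvectors per time slice, each of length ncol = 3 * V3
# (color x space); Dirac/spin indices are suppressed.
rng = np.random.default_rng(0)
nvec, ncol = 8, 24

xi_tp = rng.normal(size=(nvec, ncol)) + 1j * rng.normal(size=(nvec, ncol))  # xi^(k)(t')
sol   = rng.normal(size=(nvec, ncol)) + 1j * rng.normal(size=(nvec, ncol))  # S^(l)(t',t) = M^{-1} xi^(l)(t)

# Perambulator tau^{kl}(t',t) = xi^(k)dagger(t') M^{-1}(t',t) xi^(l)(t)
#                             = xi^(k)dagger(t') S^(l)(t',t)
tau = xi_tp.conj() @ sol.T          # shape (nvec, nvec)
```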
The principal disadvantage of the distillation method is that the number of
distillation eigenvectors $N$ needed to construct
a correlation function of the same resolution is expected to scale as
the lattice spatial volume $V_3$. Since the evaluation of the Wick contractions
for mesons and baryons scales as $N^3$ and $N^4$,
respectively, the volume-scaling cost is potentially severe.
\subsection{Variational Analysis}
\label{subsec:vari}
The variational method is a widely employed technique in lattice
spectroscopy calculations that seeks to disentangle the contributions
of individual eigenstates to the two-point correlation functions of a
basis of interpolating
operators. The variational method begins by constructing a matrix of
correlation functions
\begin{equation}
C_{ij}\left(t\right)=\langle\mathcal{O}_i\left(t\right)\overline{\mathcal{O}}_j\left(0\right)\rangle
\label{eq:corrmatrix}
\end{equation}
where $\mathcal{O}_i$ and $\mathcal{O}_j$ belong to some basis $\mathcal{B}$ of appropriately constructed interpolators with identical quantum numbers. In practice, these interpolators transform with definite symmetries in quark flavor, derivative structure, and Dirac structure, where the Dirac structure of the operator is encoded in the subduction of a continuum operator into lattice irreps. The variational method proper then considers the system of generalized eigenvalue equations
\begin{equation}
C\left(t\right)v_{\mathrm{\textbf{n}}}\left(t,t_0\right)=\lambda_{\mathrm{\textbf{n}}}\left(t,t_0\right)C\left(t_0\right)v_{\mathrm{\textbf{n}}}\left(t,t_0\right)
\label{eq:gevp}
\end{equation}
with $\mathrm{\textbf{n}}\in\lbrace1,\cdots,\text{dim}\left(\mathcal{B}\right)\rbrace$ and $t>t_0$. It can be shown that at large times $t$ this system of equations is described by the ``principal correlators'' $\lambda_{\mathrm{\text{n}}}\left(t,t_0\right)=e^{-M_n\left(t-t_0\right)}$, where the trivial solution $v_{\mathrm{\text{n}}}=0$ is avoided by imposing the normalization condition $v_{\mathrm{\text{n'}}}^{\dagger}C\left(t_0\right)v_{\mathrm{\text{n}}}=\delta_{\mathrm{\text{n'n}}}$. Equation \ref{eq:gevp} is solved independently for each $t>t_0$, with $\lambda_{\mathrm{\text{n}}}\left(t_0,t_0\right)=1$ by construction. It is possible that in the process of solving Equation \ref{eq:gevp}, what appears to be the $m^{\text{th}}$ eigenvector on time slice $t_m$ is not the $m^{\text{th}}$ eigenvector on time slice $t_{m+1}$. To remedy this, we follow the procedure of \cite{Edwards:2011jj} by associating states on different time slices using the relative similarity between their associated eigenvectors. This is accomplished by selecting some reference time slice $t_{\text{ref}}$ and maximizing $v_{\mathrm{\text{n'}}}^{\text{ref}\dagger}C\left(t_0\right)v_{\mathrm{\text{n}}}$, where $v_{\mathrm{\text{n'}}}^{\text{ref}}\equiv v_{\mathrm{\text{n'}}}\left(t_{\text{ref}}\right)$.
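The GEVP of Equation \ref{eq:gevp} can be sketched numerically. The toy example below (synthetic masses and overlap factors, not our lattice data) uses SciPy's generalized symmetric eigensolver, which already enforces the normalization $v^{\dagger}C\left(t_0\right)v=1$, and recovers the ground-state mass exactly from the principal correlator:

```python
import numpy as np
from scipy.linalg import eigh

# Hedged sketch of the GEVP; masses and overlaps below are invented.
masses = np.array([0.53, 1.20])
Z = np.array([[1.0, 0.3],
              [0.4, 1.0]])            # Z[i, n]: overlap of operator i onto state n

def corr(t):
    # C(t) = sum_n Z_n Z_n^T exp(-M_n t), a 2x2 correlator matrix
    return Z @ np.diag(np.exp(-masses * t)) @ Z.T

t0, t = 3, 8
lam, v = eigh(corr(t), corr(t0))      # solves C(t) v = lam C(t0) v
order = np.argsort(lam)[::-1]         # largest principal correlator = lightest state
lam, v = lam[order], v[:, order]

m_eff = -np.log(lam[0]) / (t - t0)    # principal correlator -> ground-state mass
```

In a realistic analysis one would additionally match eigenvectors across time slices against a reference time slice, as described above, since `eigh` alone fixes no ordering between different values of $t$.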
A fitting function
\begin{equation}
\lambda_{\mathrm{\text{n}}}\left(t,t_0\right)=\left(1-A_{\mathrm{\text{n}}}\right)e^{-M_{\mathrm{\text{n}}}\left(t-t_0\right)}+A_{\mathrm{\text{n}}}e^{-M'_{\mathrm{\text{n}}}\left(t-t_0\right)}
\label{eq:princorfunc}
\end{equation}
is applied to the principal correlator to obtain the masses $M_{\mathrm{\text{n}}}$, $M_{\mathrm{\text{n}}}'$, and amplitude $A_{\mathrm{\text{n}}}$. The choice for $t_0$ is made by attempting to reconstruct the original correlation matrix $C_{ij}\left(t\right)$ using the masses $M_{\mathrm{\text{n}}}$, $M_{\mathrm{\text{n}}}'$ and the extracted overlap factors $Z_i^{\mathrm{\text{n}}}$ \cite{Dudek:2007wv}. The degree of agreement for $t>t_0$ dictates the choice for $t_0$. Solving the variational method introduces a slight time-dependence in the overlap factors. The overlap factors are thus evaluated on a time slice $t_Z>t_0$ that minimizes differences between original/reconstructed correlators \cite{Dudek:2007wv}. The resulting eigenvectors $v_{\mathrm{\textbf{n}}}$ for each $t>t_0$ yield the optimal linear combination of $\mathcal{O}_i\in\mathcal{B}$ to project the $\mathrm{\textbf{n}}^{\text{th}}$-state from the vacuum
\begin{equation}
\mathcal{O}^{\text{opt}}_{\textbf{n}}=\sum_iv_{\mathrm{\textbf{n}}}^i\mathcal{O}_i.
\end{equation}
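The principal-correlator fit of Equation \ref{eq:princorfunc} is a standard two-exponential least-squares problem; a minimal sketch on noiseless toy data (all parameter values invented) could read:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the principal-correlator fit; data below are synthetic and noiseless.
t0 = 3
t = np.arange(t0, 17)

def princorr(t, M, Mp, A):
    # (1 - A) exp(-M (t - t0)) + A exp(-M' (t - t0)); equals 1 at t = t0
    return (1.0 - A) * np.exp(-M * (t - t0)) + A * np.exp(-Mp * (t - t0))

data = princorr(t, 0.53, 1.4, 0.05)
popt, pcov = curve_fit(princorr, t, data, p0=[0.5, 1.0, 0.1])
M_fit = popt[0]                       # ground-state mass M_n
```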
The variational method may equally be applied to correlation matrices composed of three-point functions of different interpolating fields. This would presumably produce a better-determined $\mathcal{O}^{\text{opt}}_{\textbf{n}}$, but owing to the high Wick-contraction cost of distillation we elected to perform the variational method on a correlation matrix of two-point functions, thereby fixing $\mathcal{O}^{\text{opt}}_{\textbf{n}}$ for later use.
\section{Computational Details}\label{sec:details}
Our analysis considers a 350 configuration ensemble of 2+1 flavor QCD
using the clover-Wilson fermion action, where the associated gauge
links are smeared by one application of the stout smearing
\cite{Morningstar:2003gk} algorithm. This smearing yields a
tadpole-improved tree-level clover coefficient that is nearly
identical to the corresponding non-perturbative determination. The
reader is referred to \cite{Yoon:2016dij,Yoon:2016jzj} for a
discussion regarding the gauge action used to generate this
ensemble. Calculations were performed on a $32^3\times64$ lattice with
periodic (spatial) and anti-periodic (temporal) boundary conditions
and an inverse coupling of $\beta=6.3$. In the three flavor theory,
the strange quark mass was set by requiring the ratio
$\left(2M_{K^+}^2-M_{\pi^+}^2\right)/M_{\Omega^-}$ to assume its
physical value. This lattice ensemble was found to have a lattice
spacing of $a=0.09840(4)\text{ fm}$ via the Wilson-flow scale $w_0$,
and a pion mass of $am_\pi\sim0.176803$, or $m_\pi = 356~{\rm MeV}$,
yielding $m_{\pi}L\sim5.658$.
We explore the efficacy of four different types of operators used to
interpolate ground-state nucleons from the vacuum, with particular
consideration given to distillation. In this work we only study
zero-momentum nucleons, polarized along the z-axis. A future work will
study nucleons with non-zero momentum.
\subsubsection*{Jacobi-Smeared Interpolator}
Prior to constructing the quark sources for our selected
interpolators, the background gauge links are smeared via 20
applications of stout smearing with smearing parameters
$\rho_{ij}=0.08$ and $\rho_{\mu4}=\rho_{4\mu}=0$, where
$\rho_{\mu\nu}$ quantifies the weight given to staple links aligned in
the $\mu\nu$-plane when constructing the smeared links. Such gauge
smearing is essential for reducing the noise present in the resulting
correlation functions due to source fluctuations. Before inverting the
Dirac operator against the smeared sources, we apply a single
iteration of stout smearing with $\rho=0.125$ to the gauge links,
thereby avoiding potentially small eigenvalues in the inversion.
As a benchmark against which to compare distillation, we begin with the simplest nucleon interpolator consistent with the nucleon $J^{PC}$
\begin{equation}
\mathcal{N}\left(x\right)=\epsilon^{abc}\left[u^{a\top}\left(x\right)C\frac{\left(1\pm\gamma_4\right)}{2}\gamma_5 d^b\left(x\right)\right]u^c\left(x\right)
\label{eq:standardNuc}
\end{equation}
where $u,d$ are the two flavors of (degenerate) light quarks, $\lbrace a,b,c\rbrace$ are color indices, $C=\gamma_2\gamma_4$ is the charge conjugation matrix, and the free Dirac index is suppressed. The non-relativistic projector $\left(1\pm\gamma_4\right)/2$ is included in the operator construction to reduce the noise-to-signal ratio in forward (backward) propagating states. To make contact with previous work (e.g. \cite{Yoon:2016dij,Yoon:2016jzj}), $\mathcal{N}\left(x\right)$ is smeared with 60 hits of Jacobi smearing, with width $\sigma=5.0$. We refer the reader to \cite{Yoon:2016dij,Yoon:2016jzj} for an extensive analysis of the effect different Jacobi smearing parameters have on the determination of nucleon isovector charges. Herein we refer to the Jacobi smeared interpolator as ``Jacobi-SS''.
Correlation functions that employ $\mathcal{N}$ as the source/sink interpolator are constructed via application of appropriate projection operators
\begin{equation}
C^{2\text{pt}}\left(t\right)=\sum_{\vec{x}}\langle\mathcal{P}^{2\text{pt}}_{\beta\alpha}\mathcal{N}_\alpha\left(\vec{x},t\right)\overline{\mathcal{N}}_\beta\left(0\right)\rangle
\end{equation}
\begin{equation}
C^{3\text{pt}}\left(t,\tau\right)=\sum_{\vec{x},\vec{z}}\langle\mathcal{P}^{3\text{pt}}_{\beta\alpha}\mathcal{N}_\alpha\left(\vec{x},t\right)\mathcal{O}_\Gamma^{u-d}\left(\vec{z},\tau\right)\overline{\mathcal{N}}_\beta\left(0\right)\rangle
\end{equation}
where $\mathcal{P}^{2\text{pt}}=\left(1+\gamma_4\right)/2$ is used to project onto the forward-propagating positive-parity nucleon, and where $\mathcal{P}^{3\text{pt}}=\mathcal{P}^{2\text{pt}}\left(1+i\gamma_5\gamma_3\right)$ is used for the corresponding connected insertions. The spatial sums serve to project each correlation function to zero momentum. A standard spectral decomposition demonstrates that the Dirac structure of $\mathcal{O}_\Gamma^{u-d}$ must be $1$, $\gamma_4$, $\gamma_3\gamma_5$, $\gamma_1\gamma_2$ to extract the scalar, vector, axial, and tensor charges, respectively. Lastly, the sequential source method is implemented to calculate $C^{3\text{pt}}$, thereby minimizing the number of distinct inversions of the Dirac operator.
\subsubsection*{Distilled Interpolators}
\label{subsec:distop_deets}
We use a distillation space of rank 64, from which the
perambulators and solution vectors are constructed. The distillation
space on each time slice is calculated only after 10 iterations of
stout smearing is applied to the gauge links with smearing factor
$\rho_{ij}=0.08$ and $\rho_{\mu4}=\rho_{4\mu}=0$.
When expressed in a form exposing the permutational symmetry of the flavor ($\mathcal{F}$), spatial ($\mathcal{D}$) and Dirac ($\mathcal{S}$) structures, our distilled interpolators take the form
\begin{equation}
\mathcal{O}=\left(\mathcal{F}_{\mathcal{P}\left(\text{F}\right)}\otimes\mathcal{S}_{\mathcal{P}\left(\text{S}\right)}\otimes\mathcal{D}_{\mathcal{P}\left(\text{D}\right)}\right)\lbrace q_1q_2q_3\rbrace
\end{equation}
where $\mathcal{P}\left(\cdots\right)$ expresses the symmetric (S), mixed-symmetric (M), and anti-symmetric (A) character of the given structure. Explicitly our employed distilled interpolators are
\begin{itemize}
\centering
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{1}{2}^+\right)^1_M\otimes D^{[0]}_{L=0,S}\right)^{J^P=\frac{1}{2}^+}=\thinspace^2S_S\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{1}{2}^+\right)^1_M\otimes D^{[2]}_{L=0,M}\right)^{J^P=\frac{1}{2}^+}=\thinspace^2S_M\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{1}{2}^+\right)^1_M\otimes D^{[2]}_{L=0,S}\right)^{J^P=\frac{1}{2}^+}=\thinspace^2S'_S\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{1}{2}^+\right)^1_M\otimes D^{[2]}_{L=1,A}\right)^{J^P=\frac{1}{2}^+}=\thinspace^2P_A\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{1}{2}^+\right)^1_M\otimes D^{[2]}_{L=1,M}\right)^{J^P=\frac{1}{2}^+}=\thinspace^2P_M\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{3}{2}^+\right)^1_S\otimes D^{[2]}_{L=1,M}\right)^{J^P=\frac{1}{2}^+}=\thinspace^4P_M\tfrac{1}{2}^+$}
\item \resizebox{0.3\textwidth}{!}{$\left(N_M\otimes\left(\frac{3}{2}^+\right)^1_S\otimes D^{[2]}_{L=2,M}\right)^{J^P=\frac{1}{2}^+}=\thinspace^4D_M\tfrac{1}{2}^+$}
\end{itemize}
\noindent where the superscript $J^P$ indicates the overall spin-parity quantum numbers of the interpolator \cite{Edwards:2011jj}. For brevity, we have expressed the interpolators in a compact spectroscopic notation $^{2S+1}L_\mathcal{P}J^{P}$, where $S$ represents the Dirac spin, $L$ the angular momentum induced by any derivatives, $\mathcal{P}$ the permutational symmetry of the derivatives, and $J^P$ the total angular momentum and parity of the interpolator. The first distilled interpolator we consider is the $^2S_S\frac{1}{2}^+$ interpolator, which is the closest non-relativistic analogue of the standard nucleon interpolator given in Eq. \ref{eq:standardNuc}.
Our first application of the variational method is to a basis of three distilled interpolators
\begin{equation}
\mathcal{B}_3=\lbrace\thinspace^2S_S\tfrac{1}{2}^+,\thinspace^4P_M\tfrac{1}{2}^+,\thinspace^4D_M\tfrac{1}{2}^+\rbrace
\end{equation}
where we note $\thinspace^4P_M\tfrac{1}{2}^+$ and $\thinspace^4D_M\tfrac{1}{2}^+$ are explicitly of hybrid character. This choice is principally motivated by \cite{Dudek:2012ag} where it was found these hybrid interpolators, in addition to $\thinspace^2S_S\tfrac{1}{2}^+$, had predominant overlap onto the ground-state nucleon. The variational method applied to $\mathcal{B}_3$ leads to a variationally improved interpolator that we define to be $\hat{\mathcal{P}}_3$. The final interpolator we consider is found by expanding the basis $\mathcal{B}_3$ to include distilled interpolators that probe the radial/orbital structure of the nucleon (see above)
\begin{align}
\begin{split}
\mathcal{B}_7&=\lbrace\thinspace^2S_S\tfrac{1}{2}^+,\thinspace^2S_M\tfrac{1}{2}^+,\thinspace^2S'_S\tfrac{1}{2}^+,\thinspace^2P_A\tfrac{1}{2}^+,\\
&\qquad\qquad \thinspace^2P_M\tfrac{1}{2}^+,\thinspace^4P_M\tfrac{1}{2}^+,\thinspace^4D_M\tfrac{1}{2}^+\rbrace.
\end{split}
\end{align}
We refer to this variationally improved interpolator as $\hat{\mathcal{P}}_7$. The superficially redundant $\thinspace^2S_S\tfrac{1}{2}^+$ and $\thinspace^2S'_S\tfrac{1}{2}^+$ interpolators, with the same spectroscopic notation but differing derivative constructions, correspond to different radial extents of the interpolator.
As outlined in Section \ref{subsec:vari}, the construction of a variationally-improved interpolator relies on careful selection of a reference time slice $t_0$ and of the time slice $t_Z$ at which to evaluate the overlap factors. If a single exponential were to saturate the principal correlator, dividing out the ground-state time dependence would yield a plateau at unity. Based on the applied fits of Equation \ref{eq:princorfunc}, a good determination of the ground state within our basis of interpolators is thus indicated by a plateau in the re-scaled correlator around unity, i.e. $A_0\ll1$ and $\Delta m_{n'0}=m_{n'}-m_0\gg1$. For the two variationally-improved interpolators we consider, we found
\begin{align*}
\hat{\mathcal{P}}_3&\longrightarrow\lbrace t_0=3, t_Z=5\rbrace \\
\hat{\mathcal{P}}_7&\longrightarrow\lbrace t_0=6, t_Z=8\rbrace
\end{align*}
led to ideal reconstruction of the original correlator.
\subsection{Matrix Element Extraction}
The extraction of matrix elements on the lattice typically proceeds by
calculation of some 3-point correlator over a range of source-sink
interpolator separations, with an external current inserted
at all intermediate times. Under the presumption of no excited-state
contamination (i.e. in the limits $0\ll\tau\ll t_{\text{sep}}$), the
3-point correlator is then divided, ensemble average by ensemble
average, by a two-point correlator with the same source/sink
interpolators and the same source-sink separation. This division
removes overlap factors, masses, and exponential source/sink time
dependence from the matrix element signal. A plateau in this ratio,
for fixed interpolator separation and varying current insertion time,
should then be the desired matrix element. However, at any finite
source-sink separation this plateau necessarily contains contributions
from matrix elements of excited states, whereas we are interested only
in the ground-state matrix element $\mathcal{J}_{00}$.
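The excited-state bias in the ratio is easy to see in a toy two-state model. In the sketch below all masses, overlap factors, and matrix elements are invented; the midpoint of the ratio approaches $\mathcal{J}_{00}$ only as $t_{\text{sep}}$ grows:

```python
import numpy as np

# Toy illustration of the ratio method; every parameter value is invented.
M0, M1, Z0, Z1 = 0.53, 1.2, 1.0, 0.8
J00, J01, J11 = 1.25, 0.3, 0.9
tsep = 12
tau = np.arange(1, tsep)

def c2pt(t):
    return Z0**2 * np.exp(-M0 * t) + Z1**2 * np.exp(-M1 * t)

def c3pt(tsep, tau):
    # two-state decomposition: diagonal terms plus 0<->1 transition terms
    return (Z0**2 * J00 * np.exp(-M0 * tsep)
            + Z1**2 * J11 * np.exp(-M1 * tsep)
            + Z0 * Z1 * J01 * (np.exp(-M0 * (tsep - tau) - M1 * tau)
                               + np.exp(-M1 * (tsep - tau) - M0 * tau)))

ratio = c3pt(tsep, tau) / c2pt(tsep)
mid = ratio[len(ratio) // 2]          # plateau value at tau = tsep/2
# mid is close to, but biased away from, J00; endpoints are biased further still
```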
\subsubsection{Correlator Behavior}
Our two-point correlation function using Jacobi-smeared interpolators is defined by
\begin{equation*}
C^{2\text{pt}}_{\alpha\beta}(t)=\sum_{\vec{x}}\langle\mathcal{N}_\alpha\left(\vec{x},t\right)\overline{\mathcal{N}}_\beta(\vec{0},0)\rangle.
\end{equation*}
Inserting the non-relativistic projector, and performing a spectral decomposition exposes the competing contributions of all states in the spectrum:
\begin{equation}
C^{2\text{pt}}\left(t\right)=2\sum_n\left|Z_n\right|^2e^{-M_nt},
\label{eq:2ptsimple}
\end{equation}
where the sum is only over eigenstates with quantum numbers of $\mathcal{N}$. To quantify and control excited-state contributions to $C^{2\text{pt}}$, we elect to perform a 2-state fit of the form
\begin{equation}
C^{\text{2pt}}_{\text{fit}}\left(t\right)=e^{-M_0t}\left[\left|\mathbf{a}\right|^2+\left|\mathbf{b}\right|^2e^{-\left(M_1-M_0\right)t}\right]
\label{eq:2ptfit}
\end{equation}
for each of the four distinct interpolators we consider. The factoring
of the ground-state time dependence aids in stabilizing our fits and
in the determination of our extracted charges, as explained later. In
this manner we obtain determinations of $M_0$, $M_1$,
$\left|Z_0\right|^2$ and $\left|Z_1\right|^2$, while simultaneously
quantifying the efficacy of each interpolator to separate the
ground-state from its excitations \textit{viz}.\ $\Delta
m=M_1-M_0$. The correlator behavior when using distilled interpolators
is identical to that above, except for the addition of an overall
volume factor $V_3$ due to the momentum projection at the source
implied by eqn.~\ref{eq:dist2pt}.
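As an illustration, the two-state form of Equation \ref{eq:2ptfit} can be fit with a generic least-squares routine; the data below are synthetic and noiseless, with invented parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the two-state two-point fit on noiseless toy data.
def c2pt_fit(t, M0, M1, a2, b2):
    # exp(-M0 t) [ |a|^2 + |b|^2 exp(-(M1 - M0) t) ]
    return np.exp(-M0 * t) * (a2 + b2 * np.exp(-(M1 - M0) * t))

t = np.arange(2, 17)
data = c2pt_fit(t, 0.535, 1.2, 1.0, 1.1)
popt, _ = curve_fit(c2pt_fit, t, data, p0=[0.5, 1.0, 1.0, 1.0])
M0_fit, M1_fit = popt[0], popt[1]     # ground state and first excitation
```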
A zero momentum projected three-point correlation function using Jacobi-smeared interpolators is given by
\begin{equation}
C_{\alpha\beta}^{3\text{pt}}(t_{\text{sep}};\tau)=\sum_{\vec{x}}\sum_{\vec{z}}\langle\mathcal{N}_\alpha\left(\vec{x},t_{\text{sep}}\right)\mathcal{J}\left(\vec{z},\tau\right)\overline{\mathcal{N}}_\beta(\vec{0},0)\rangle,
\label{eq:3ptsimple}
\end{equation}
where $\tau$ is the insertion time slice, restricted to the time-ordering $0<\tau<t_{\text{sep}}$, and $\mathcal{J}$ the external current. An analogous spectral decomposition with the appropriate projector leads to
\begin{align}
\begin{split}
C^{3\text{pt}}&(t_{\text{sep}};\tau)=\sum_{n,s}\sum_{n',s'}\frac{e^{-M_{n'}\left(t_{\text{sep}}-\tau\right)}e^{-M_n\tau}}{4M_{n'}M_n}\times \\
&\quad\quad\times Z_{n'}Z_n^\dagger\bra{n',p',s'}\mathcal{J}\ket{n,p,s}.
\end{split}
\label{eq:3ptij}
\end{align}
Again retaining the lowest two energy eigenstates in the sum, we have
\begin{align*}
C^{3\text{pt}}&\left(t_{\text{sep}},\tau\right)=\left(\frac{\left|Z_0\right|^2}{4M_0^2}\mathcal{J}_{00}e^{-M_0t_{\text{sep}}}+\frac{\left|Z_1\right|^2}{4M_1^2}\mathcal{J}_{11}e^{-M_1t_{\text{sep}}}\right)+ \\
&+\left(\frac{Z_0Z_1^\dagger}{4M_0M_1}\mathcal{J}_{01}e^{-M_0t_{\text{sep}}}e^{-\left(M_1-M_0\right)\tau}+\right. \\
&\left.\quad\quad\quad\quad\quad\quad+\frac{Z_1Z_0^\dagger}{4M_0M_1}\mathcal{J}_{10}e^{-M_1t_{\text{sep}}}e^{\left(M_1-M_0\right)\tau}\right)
\end{align*}
where $\mathcal{J}_{n'n}=\bra{n',s'}\mathcal{J}\ket{n,s}$, with $n',n\in\mathbb{N}$. By isolating the current insertion time dependence of the three-point correlator, it becomes clear that calculation of a three-point correlation function for a single source-sink separation $t_{\text{sep}}$ is insufficient to reliably extract the matrix elements $\mathcal{J}_{00}$ and $\mathcal{J}_{11}$.
As $Z_{\mathrm{\text{n}}}$ are real and $\mathcal{J}_{01}=\mathcal{J}_{10}$ for zero-momentum states, the above can be reorganized into
\begin{align*}
\begin{split}
C^{\text{3pt}}\left(t_{\text{sep}},\tau\right)&=\left(\frac{\left|Z_0\right|^2}{4M_0^2}\mathcal{J}_{00}e^{-M_0t_{\text{sep}}}+\frac{\left|Z_1\right|^2}{4M_1^2}\mathcal{J}_{11}e^{-M_1t_{\text{sep}}}\right) \\
&+\frac{Z_0Z_1}{2M_0M_1}\mathcal{J}_{01}e^{-\frac{\left(M_1+M_0\right)}{2}t_{\text{sep}}}\times \\
&\qquad\qquad\times\cosh\left[\left(M_1-M_0\right)\left(\tau-\tfrac{t_{\text{sep}}}{2}\right)\right].
\end{split}
\end{align*}
We then apply a 2-state fit to the three-point correlation functions,
\begin{align}
\begin{split}
C^{3\text{pt}}_{\text{fit}}&\left(t_{\text{sep}},\tau\right)=e^{-M_0t_{\text{sep}}}\left(\mathcal{A}+\mathcal{B}e^{-\Delta mt_{\text{sep}}}+\right. \\
&\left.\qquad+\thinspace\mathcal{C}e^{-\Delta m\frac{t_{\text{sep}}}{2}}\cosh\left[\Delta m\left(\tau-\tfrac{t_{\text{sep}}}{2}\right)\right]\right)
\end{split}
\label{eq:3ptfit}
\end{align}
where $\tau$ is the current insertion time and $\Delta m=M_1-M_0$. We
note that we retain $M_0$ and $M_1$ as parameters in our fit, rather
than the difference $\Delta m$. As with the functional form employed
to fit the two-point correlation functions, we factor the ground-state
time dependence from the functional form of the three-point fits. This
factoring makes manifest the desired ground-state matrix element in
the limits $0\ll\tau\ll t_{\text{sep}}$. The correlator behavior when
using distilled interpolators is again identical to that above, except
for the addition of an overall volume factor $V_3$ to Equation
\ref{eq:3ptsimple}.
Determining the precise relationship among the fitted parameters needed to extract the ground-state matrix element $\mathcal{J}_{00}$ requires a more detailed look at our interpolators. Although the constructed distilled interpolators contain no free spinor indices, our use of positive-parity nucleons polarized along the $z$-direction can be viewed as the standard application of projectors on the nucleon interpolating field of Equation \ref{eq:standardNuc}. At zero momentum we again have,
\begin{align*}
C^{\text{2pt}}&\left(t\right)=\sum_{\vec{x}}\langle\mathcal{P}^{\text{2pt}}_{\beta\alpha}\mathcal{N}_\alpha\left(\vec{x},t\right)\overline{\mathcal{N}}_\beta\left(0\right)\rangle \\
C^{\text{3pt}}\left(t_{\text{sep}},\tau\right)&=\sum_{\vec{x},\vec{z}}\langle\mathcal{P}^{\text{3pt}}_{\beta\alpha}\mathcal{N}_\alpha\left(\vec{x},t_{\text{sep}}\right)\mathcal{O}_\Gamma^{u-d}\left(\vec{z},\tau\right)\overline{\mathcal{N}}_\beta\left(0\right)\rangle.
\end{align*}
Proceeding with the spectral decomposition of $C^{2\text{pt}}$ we have,
\begin{align*}
C^{2\text{pt}}&\left(t_{\text{sep}}\right)=\sum_{n,s}\frac{e^{-M_nt_{\text{sep}}}}{2M_n}\mathcal{P}^{\text{2pt}}_{\beta\alpha}\times \\
&\qquad\qquad\qquad\times\bra{\Omega}\mathcal{N}_\alpha\ket{n,p,s}\bra{n,p,s}\mathcal{N}_\beta^\dagger\ket{\Omega} \\
&=\sum_{n,s}\frac{\left|Z_n\right|^2e^{-M_nt_{\text{sep}}}}{2M_n}\mathcal{P}^{\text{2pt}}_{\beta\alpha}u^n_\alpha(\vec{0},s)\overline{u}^n_\beta(\vec{0},s) \\
&=\sum_n\frac{\left|Z_n\right|^2e^{-M_nt_{\text{sep}}}}{2M_n}\text{Tr}\left[\mathcal{P}^{\text{2pt}}\left(-i\slashed{p}_E+M_n\right)\right].
\end{align*}
From this it is easily shown that the constant coefficients of the 2-state fit applied to our two-point correlators are of the form $\left|\mathbf{a}\right|^2=2\left|Z_0\right|^2$ and $\left|\mathbf{b}\right|^2=2\left|Z_1\right|^2$.
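The trace is simple to verify numerically. The check below assumes a Euclidean Dirac--Pauli basis with $\gamma_4=\mathrm{diag}\left(1,1,-1,-1\right)$ and takes $-i\slashed{p}_E\to M\gamma_4$ at rest; these conventions are our assumption for illustration:

```python
import numpy as np

# Check (assumed Euclidean Dirac-Pauli basis) that at zero momentum
# Tr[ P^2pt ( -i pslash_E + M ) ] = 4 M, so that after the 1/(2M) of the
# spectral decomposition the two-point coefficient is 2 |Z_n|^2.
g4 = np.diag([1.0, 1.0, -1.0, -1.0])
P2pt = 0.5 * (np.eye(4) + g4)         # positive-parity projector (1 + g4)/2

M = 0.535
spin_sum = M * g4 + M * np.eye(4)     # -i pslash_E + M at rest (assumed convention)
trace = np.trace(P2pt @ spin_sum)     # equals 4 M
```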
The spectral decomposition of $C^{3\text{pt}}$ proceeds analogously,
\begin{align*}
C^{3\text{pt}}&\left(t_{\text{sep}},\tau\right)=\sum_{n,s}\sum_{n',s'}\frac{e^{-M_{n'}\left(t_{\text{sep}}-\tau\right)}e^{-M_n\tau}}{4M_{n'}M_n}\times \\
&\quad\quad\times Z_{n'}Z_n^\dagger\mathcal{P}^{\text{3pt}}_{\beta\alpha}u^{n'}_\alpha(\vec{0},s')\overline{u}^n_\beta(\vec{0},s)\times \\
&\quad\quad\quad\quad\quad\quad\quad\times\bra{n',p',s'}\mathcal{J}\ket{n,p,s} \\
&=\sum_{n,s}\sum_{n',s'}\frac{e^{-M_{n'}\left(t_{\text{sep}}-\tau\right)}e^{-M_n\tau}}{4M_{n'}M_n}\times \\
&\quad\quad\times Z_{n'}Z_n^\dagger\mathcal{P}^{\text{3pt}}_{\beta\alpha}u^{n'}_\alpha(\vec{0},s')\overline{u}^n_\beta(\vec{0},s)\times \\
&\quad\quad\quad\quad\quad\quad\quad\times\overline{u}^{n'}_\rho(\vec{0},s')\mathcal{J}_{\rho\sigma}u^n_\sigma(\vec{0},s) \\
&=\sum_{n',n}\frac{Z_{n'}Z_n^\dagger}{4M_{n'}M_n}e^{-M_{n'}\left(t_{\text{sep}}-\tau\right)}e^{-M_n\tau}\times \\
&\quad\quad\times\text{Tr}\left[\mathcal{P}^{\text{3pt}}\left(-i\slashed{p}_E\thinspace'+M_{n'}\right)\mathcal{J}_{n'n}\left(-i\slashed{p}_E+M_n\right)\right].
\end{align*}
It then follows that $\mathcal{A}=2g_\Gamma^{00}\left|Z_0\right|^2$ and $\mathcal{B}=2g_\Gamma^{11}\left|Z_1\right|^2$. We can then extract the ground-state matrix element via
\begin{equation*}
g_\Gamma^{00}=\mathcal{A}/\left|\mathbf{a}\right|^2.
\end{equation*}
To extract the masses, overlap factors and matrix elements from our
data, we perform simultaneous fits to the two-point and three-point
correlation functions according to Equations \ref{eq:2ptfit} and
\ref{eq:3ptfit}, respectively.
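A schematic of such a simultaneous fit, sharing $M_0$ and $\Delta m$ between the two- and three-point models of Equations \ref{eq:2ptfit} and \ref{eq:3ptfit} and extracting $g_\Gamma^{00}=\mathcal{A}/\left|\mathbf{a}\right|^2$, might look as follows (all parameter values invented, data noiseless):

```python
import numpy as np
from scipy.optimize import least_squares

t2 = np.arange(2, 17)
tseps = [8, 12, 16]

def model2(t, M0, dm, a2, b2):
    # two-point: exp(-M0 t) [ |a|^2 + |b|^2 exp(-dm t) ]
    return np.exp(-M0 * t) * (a2 + b2 * np.exp(-dm * t))

def model3(tsep, tau, M0, dm, A, B, Cc):
    # three-point: exp(-M0 tsep) [ A + B exp(-dm tsep)
    #   + C exp(-dm tsep/2) cosh(dm (tau - tsep/2)) ]
    return np.exp(-M0 * tsep) * (A + B * np.exp(-dm * tsep)
           + Cc * np.exp(-dm * tsep / 2) * np.cosh(dm * (tau - tsep / 2)))

true = dict(M0=0.535, dm=0.67, a2=1.0, b2=1.1, A=1.25, B=0.9, Cc=0.3)
d2 = model2(t2, true['M0'], true['dm'], true['a2'], true['b2'])
d3 = {ts: model3(ts, np.arange(1, ts), true['M0'], true['dm'],
                 true['A'], true['B'], true['Cc']) for ts in tseps}

def resid(p):
    # stack residuals of all correlators so M0 and dm are shared
    M0, dm, a2, b2, A, B, Cc = p
    r = [model2(t2, M0, dm, a2, b2) - d2]
    for ts in tseps:
        r.append(model3(ts, np.arange(1, ts), M0, dm, A, B, Cc) - d3[ts])
    return np.concatenate(r)

fit = least_squares(resid, x0=[0.5, 0.6, 1.0, 1.0, 1.2, 1.0, 0.2])
g00 = fit.x[4] / fit.x[2]             # g_Gamma^00 = A / |a|^2
```

A production analysis would of course weight the residuals by the data covariance and propagate errors by jackknife or bootstrap; this sketch only shows the shared-parameter structure of the fit.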
\section{Results}\label{sec:results}
We compute the two-point functions averaged over three source
positions. The matrix elements are extracted from
three-point functions computed for $t_{\text{sep}}\in\lbrace8,12,16\rbrace$
and $\tau\in\left[0,t_{\text{sep}}-1\right]$.
Rather than view the calculated two-point correlation functions directly, we judge the quality of the two-point correlators for each interpolator by plotting the effective mass
\begin{equation}
M_{\mathrm{eff}}\left(t+0.5\right)=\frac{1}{a}\ln{\frac{C^{2\text{pt}}\left(t\right)}{C^{2\text{pt}}\left(t+1\right)}}
\end{equation}
as a function of the source-sink separation $t$. The resulting effective masses, averaged over the three source positions, are shown in Figure \ref{fig:meff_t2-16}. The lack of a plateau in the effective mass of the Jacobi-SS interpolator until $t\sim10$ is indicative of excited-state contamination for source-sink separations $\lesssim1\text{ fm}$. Use of the $\thinspace^2S_S\frac{1}{2}^+$ distilled interpolator leads to an earlier onset of a plateau, beginning at $t\sim6$, or $\sim0.6\text{ fm}$, with a statistical uncertainty at least $50\%$ smaller than that of the Jacobi-SS interpolator. It is also worth noting that the expected exponentially increasing noise in the nucleon effective mass is substantially suppressed at larger source-sink separations, when compared to the Jacobi-SS interpolator.
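For reference, the effective mass of a toy two-state correlator (invented parameters, lattice units with $a=1$) can be computed directly from the definition above:

```python
import numpy as np

# Effective mass of a toy two-state correlator; parameters are invented.
M0, M1, a2, b2 = 0.535, 1.2, 1.0, 1.1
t = np.arange(0, 20)
C = a2 * np.exp(-M0 * t) + b2 * np.exp(-M1 * t)

m_eff = np.log(C[:-1] / C[1:])        # M_eff(t + 1/2) in lattice units
# m_eff decreases monotonically toward M0 as the excited state decays away
```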
Use of a variationally improved interpolator derived from bases of distilled interpolators ($\hat{\mathcal{P}}_3$ or $\hat{\mathcal{P}}_7$) leads to a more rapid exponential decay of excited states at early Euclidean times. The effective mass induced by the $\hat{\mathcal{P}}_3$ interpolator exhibits a plateau with nearly the same statistical precision as that of the $\thinspace^2S_S\frac{1}{2}^+$ interpolator, while the excited states are seen to decay more rapidly for $t<5$. Expanding the basis of distilled interpolators, the $\hat{\mathcal{P}}_7$ interpolator leads to an even more rapid decay of excited states for $t<5$, consistent with a variationally improved interpolator receiving excited-state contributions only from states higher in energy than those within the basis. As with the $\thinspace^2S_S\frac{1}{2}^+$ and $\hat{\mathcal{P}}_3$ interpolators, the plateau in the effective mass of $\hat{\mathcal{P}}_7$ begins around $t=6$; however, it is noticeably lower than those of the former. In general, the statistical precision of all distilled interpolators appears to be comparable, except at large Euclidean times, where the determination from the variationally improved interpolators becomes increasingly uncertain. We attribute this increased uncertainty to the variationally improved interpolators being relatively unconstrained at large source-sink separations, where elements of the correlation matrix (Eq. \ref{eq:corrmatrix}) are themselves dominated by noise.
\subsection{Two-State Analysis}
To quantify the discussion above and to guide our future simultaneous
fits to the two- and three-point functions, we explore the efficacy of
one- and two-state fits in describing the calculated effective
masses. All fits are restricted such that $2\le t_{\text{fit}}\le16$,
thereby avoiding contact terms in the clover Wilson action and a
collective fluctuation of the effective masses for $t\gtrsim16$. The
results of these fits are collected in Tables \ref{tab:1pt-fits} and
\ref{tab:2pt-fits}.
\begin{table}[h!]
\centering
\scriptsize{
\begin{tabular}{ | c | c | c | c | c |}
\hline
\multicolumn{5}{|c|}{Fit $C^{2\text{pt}}\left(t\right)=\left|\mathbf{a}\right|^2e^{-M_0t}$}\\
\hline
$\hat{\mathcal{O}}$ & $t_{\text{fit}}$ & $\left|\mathbf{a}\right|^2$ & $M_0$ & $\chi_r^2$ \\
\hline
\multirow{4}{*}{\tiny Jacobi-SS} & $\left[4,16\right]$ & 5.032(095)e-08 & 0.563(4) & 4.334 \\
& $\left[5,16\right]$ & 4.701(100)e-08 & 0.556(3) & 1.458 \\
& $\left[6,16\right]$ & 4.516(122)e-08 & 0.551(4) & 0.932 \\
& $\left[7,16\right]$ & 4.450(148)e-08 & 0.550(4) & 0.977 \\ \hline
\multirow{4}{*}{\tiny $\thinspace^2S_S\frac{1}{2}^+$} & $\left[4,16\right]$ & 1.627(011)e-02 & 0.548(1) & 20.77 \\
& $\left[5,16\right]$ & 1.552(011)e-02 & 0.542(1) & 6.314 \\
& $\left[6,16\right]$ & 1.500(013)e-02 & 0.538(1) & 2.354 \\
& $\left[7,16\right]$ & 1.483(016)e-02 & 0.537(1) & 2.237 \\ \hline
\multirow{4}{*}{\tiny $\hat{\mathcal{P}}_3$} & $\left[4,16\right]$ & 1.159(08)e+00 & 0.546(1) & 9.819 \\
& $\left[5,16\right]$ & 1.114(09)e+00 & 0.541(1) & 3.227 \\
& $\left[6,16\right]$ & 1.084(11)e+00 & 0.538(1) & 1.510 \\
& $\left[7,16\right]$ & 1.073(13)e+00 & 0.537(2) & 1.393 \\ \hline
\multirow{4}{*}{\tiny $\hat{\mathcal{P}}_7$} & $\left[4,16\right]$ & 1.045(09)e+00 & 0.540(1) & 3.094 \\
& $\left[5,16\right]$ & 1.021(10)e+00 & 0.537(2) & 1.509 \\
& $\left[6,16\right]$ & 0.999(12)e+00 & 0.535(2) & 0.613 \\
& $\left[7,16\right]$ & 0.999(15)e+00 & 0.535(2) & 0.688 \\ \hline
\end{tabular}
}
\caption{Parameters of a single-state fit to the two-point correlators. As
the initial time slice over which fits are performed is made
larger, the determined ground-state mass, as expected, is found to
decrease. The ground-state mass of the Jacobi-SS interpolator is
seen to converge toward $M_0\sim0.55$, while for the distilled
interpolators the ground-state mass appears to converge towards
$M_0\sim0.535$. Errors are purely statistical.}
\label{tab:1pt-fits}
\end{table}
Although single-state fits make explicit the improvements gained by using distillation over standard Jacobi smearing, notably a $\sim3\%$ difference in the determination of the ground-state mass, the improvements at this stage hardly seem worth the cost of constructing the required distillation basis. Once a second state is included in the fits performed to the two-point correlators, however, the gains produced by distillation become quite encouraging.
\begin{figure}[h!]
\centering
\includegraphics[width=0.52\textwidth]{jacobi-local-proj-projred_comp_meff_nofits_markers.png}
\caption{Nucleon effective mass when using Jacobi-SS (purple) and distilled interpolators. The non-relativistic $\thinspace^2S_S\frac{1}{2}^+$ (blue) distilled interpolator shows considerable improvement over the Jacobi-SS interpolator, while applying the GEVP to a basis of distilled interpolators of order 3 (green) and of order 7 (red) show further improvement.}
\label{fig:meff_t2-16}
\end{figure}
\begin{table*}[h!]
\scriptsize{
\begin{tabular}{ | c | c | c | c | c | c | c |}
\hline
\multicolumn{7}{|c|}{Fit $C^{2\text{pt}}\left(t\right)=\left|\mathbf{a}\right|^2e^{-M_0t}+\left|\mathbf{b}\right|^2e^{-M_1t}$}\\
\hline
$\hat{\mathcal{O}}$ & $t_{\text{fit}}$ & $\left|\mathbf{a}\right|^2$ & $M_0$ & $\left|\mathbf{b}\right|^2$ & $M_1$ & $\chi_r^2$ \\
\hline
\multirow{3}{*}{\tiny Jacobi-SS} & $\left[2,16\right]$ & 4.12(25)e-08 & 0.543(6) & 3.70(25)e-08 & 1.04(08) & 0.842 \\
& $\left[3,16\right]$ & 3.81(42)e-08 & 0.536(9) & 3.12(35)e-08 & 0.91(11) & 0.753 \\
& $\left[4,16\right]$ & 4.14(48)e-08 & 0.54(01) & 5.3(5.9)e-08 & 1.13(42) & 0.667 \\ \hline
\multirow{3}{*}{\tiny $\thinspace^2S_S\frac{1}{2}^+$} & $\left[2,16\right]$ & 1.45(02)e-02 & 0.536(2) & 1.69(06)e-02 & 1.25(03) & 1.535 \\
& $\left[3,16\right]$ & 1.43(03)e-02 & 0.534(2) & 1.35(14)e-02 & 1.14(06) & 1.268 \\
& $\left[4,16\right]$ & 1.42(04)e-02 & 0.534(3) & 1.31(52)e-02 & 1.13(15) & 1.407 \\ \hline
\multirow{3}{*}{\tiny $\hat{\mathcal{P}}_3$} & $\left[2,16\right]$ & 1.07(1)e+00 & 0.536(2) & 1.21(7)e+00 & 1.32(04) & 1.159 \\
& $\left[3,16\right]$ & 1.05(2)e+00 & 0.535(2) & 0.935(157) & 1.21(08) & 1.069 \\
& $\left[4,16\right]$ & 1.05(3)e+00 & 0.535(2) & 1.00(63)e+00 & 1.23(21) & 1.185 \\ \hline
\multirow{3}{*}{\tiny $\hat{\mathcal{P}}_7$} & $\left[2,16\right]$ & 1.00(1)e+00 & 0.535(2) & 1.08(13)e+00 & 1.43(08) & 0.737 \\
& $\left[3,16\right]$ & 0.98(2)e+00 & 0.533(2) & 0.68(24)e+00 & 1.23(17) & 0.668 \\
& $\left[4,16\right]$ & 0.98(4)e+00 & 0.533(3) & 0.48(53)e+00 & 1.13(39) & 0.729 \\ \hline
\end{tabular}
}
\caption{Parameters of two-state fits to the two-point correlators. There is general agreement in the determination of the ground-state mass, while the mass gap $\Delta m=M_1-M_0$ is seen to become ever larger as distillation, and the GEVP applied to a basis of distilled interpolators, are used in place of the Jacobi-SS interpolator. Fits performed over larger source-sink separations were increasingly consistent with single-state fits, and are not considered further. Errors are purely statistical.}
\label{tab:2pt-fits}
\end{table*}
The inclusion of a second state in fits to the two-point correlators yields slightly smaller determinations for the ground-state mass and overlap factors when using the Jacobi-SS, $\thinspace^2S_S\frac{1}{2}^+$, and $\hat{\mathcal{P}}_3$ interpolators, while little change is seen in the $\hat{\mathcal{P}}_7$ interpolator. The most striking difference between the interpolators comes in the determination of the first-excited-state mass. The determined mass gaps are
\begin{equation*}
\Delta m=\left(M_1-M_0\right)\simeq
\begin{cases}
0.497(80) & \quad \text{Jacobi-SS} \\
0.714(30) & \quad \thinspace^2S_S\frac{1}{2}^+ \\
0.784(40) & \quad \hat{\mathcal{P}}_3 \\
0.895(80) & \quad \hat{\mathcal{P}}_7
\end{cases}
\end{equation*}
Evidently distillation and the variational method lead to greater elimination of excited-state contributions to the two-point correlators: the mass gap is $\sim44\%$, $\sim58\%$, and $\sim80\%$ larger for the $\thinspace^2S_S\frac{1}{2}^+$, $\hat{\mathcal{P}}_3$, and $\hat{\mathcal{P}}_7$ interpolators, respectively, when compared to the Jacobi-SS interpolator. The compounded improvements of the variational method applied to our bases of distilled interpolators are entirely consistent with \cite{Blossier:2009kd}.
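The quoted percentage increases follow directly from the mass gaps above; a short illustrative script (not part of the analysis code) reproduces them:

```python
# Quick check of the quoted percentage increases of the mass gap
# relative to the Jacobi-SS value; label strings are illustrative.
gaps = {
    "2S_S 1/2+": 0.714,
    "P_3": 0.784,
    "P_7": 0.895,
}
ref = 0.497  # Jacobi-SS mass gap

# Fractional increase over the Jacobi-SS reference, in percent
increase_pct = {name: 100.0 * (dm - ref) / ref for name, dm in gaps.items()}
for name, pct in increase_pct.items():
    print(f"{name}: +{pct:.0f}%")  # +44%, +58%, +80%
```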
\subsection{Results for $g_S^{u-d}$, $g_A^{u-d}$, $g_T^{u-d}$, $g_V^{u-d}$}
In this section we seek a more reliable determination of the masses, overlap factors, and likewise nucleon matrix elements, by performing simultaneous fits to three-point and two-point correlators with interpolator $\mathcal{O}$ and current structure $\Gamma$. We fix the window over which the two-point correlators are fit to be $t_{2\text{pt}}^{\text{fit}}\in\left[2,16\right]$, while several three-point fit windows are studied. The three-point fit windows are identified by $\tau_{\text{buff}}$; that is, for a given $t_{\text{sep}}$, $\tau_{\text{fit}}\in\left[0+\tau_{\text{buff}},t_{\text{sep}}-\tau_{\text{buff}}\right]$.
To illustrate the extracted isovector charges and to quantify the
degree of excited-state contamination, we plot an effective charge
defined as
\begin{equation}
g_{\Gamma\text{, eff}}\left(t_{\text{sep}},\tau\right)=\frac{C_\Gamma^{\text{3pt}}\left(t_{\text{sep}},\tau\right)}{C_{\text{fit}}^{\text{2pt}}\left(t_{\text{sep}}\right)}
\end{equation}
where $C_\Gamma^{\text{3pt}}\left(t_{\text{sep}},\tau\right)$ is the
three-point correlation function for a given source-sink separation
and current insertion time, and
$C_{\text{fit}}^{\text{2pt}}\left(t_{\text{sep}}\right)$ is the
corresponding best fit applied to the two-point function of the same
interpolators and source-sink separation. Errors on the effective
charges are computed via a simultaneous jackknife re-sampling of the
two-point fit and three-point correlator, to account for the
correlations between fitted parameters. Shown together with the
effective charges are our extracted values for the isovector charges
using the methodology described above. Error (gray) bands of the
extracted charges (black line) are determined through the ratio of fit
parameters $\mathcal{A}/\left|\mathbf{a}\right|^2$, likewise
accounting for parameter covariance. We highlight that nearly the same
masses for the ground- and first-excited states are found from the
two-state fits to the two-point correlators and from the simultaneous
fits to the two- and three-point correlators.
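The simultaneous jackknife described above can be sketched as follows. This is an illustrative stand-in (not the analysis code): the numerator and denominator are resampled together so that their correlations propagate into the error on the effective charge; the synthetic data and function names are hypothetical.

```python
import math
import random

def jackknife_ratio(num, den):
    """Delete-one jackknife mean and error of mean(num)/mean(den),
    resampling numerator and denominator simultaneously."""
    n = len(num)
    samples = []
    for i in range(n):
        num_i = (sum(num) - num[i]) / (n - 1)  # delete-one average of numerator
        den_i = (sum(den) - den[i]) / (n - 1)  # delete-one average of denominator
        samples.append(num_i / den_i)
    mean = sum(samples) / n
    err = math.sqrt((n - 1) / n * sum((s - mean) ** 2 for s in samples))
    return mean, err

# Synthetic stand-ins for the 3pt correlator at fixed (t_sep, tau) and
# the fitted 2pt value, one entry per configuration.
rng = random.Random(0)
c3pt = [1.0 + 0.05 * rng.gauss(0, 1) for _ in range(50)]
c2pt = [0.9 + 0.04 * rng.gauss(0, 1) for _ in range(50)]

g_eff, g_err = jackknife_ratio(c3pt, c2pt)
```

Because the same delete-one sample is removed from both inputs, fully correlated fluctuations cancel in the ratio, which is the point of resampling the two-point fit and three-point correlator together.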
\subsubsection{$g_S^{u-d}$}
Coupling of the isovector scalar current $S^3=\overline{q}\frac{\tau^3}{2}q$ to the Jacobi-SS interpolator $\mathcal{N}_{\alpha}$ produces an effective matrix element (denoted here as $\mathcal{M}_S^{\text{eff}}$) that is determined with no better than $\sim10\%$ uncertainty. Although $\mathcal{M}_S^{\text{eff}}$ is symmetric about the midpoint $\tau=t_{\text{sep}}/2$ for $t_{\text{sep}}=8$, indicating equal excited-state contamination on the source/sink side of the insertion, $\mathcal{M}_S^{\text{eff}}$ exhibits antisymmetric behavior for $t_{\text{sep}}=12$ and is largely statistical noise for source-sink separations greater than $1.5\text{ fm}$.
Immediately noticeable with the use of a single distilled interpolator
($\thinspace^2S_S\frac{1}{2}^+$) is the considerable reduction in
statistical uncertainty of $\mathcal{M}_S^{\text{eff}}$ at each value
of $t_{\text{sep}}$. Moreover, the determinations of
$\mathcal{M}_S^{\text{eff}}$ for different $t_{\text{sep}}$ are in
greater agreement, with a clear plateau established for
$t_{\text{sep}}\sim1.18\text{ fm}$. Recalling that a simultaneous fit
for all values of $t_{\rm sep}$ has been performed to extract the
matrix element, the fact that the $t_{\text{sep}}=16$ values of
$\mathcal{M}_S^{\text{eff}}$ are not necessarily on the curve reflects
that the fit is largely constrained by data with $t_{\text{sep}}\lesssim12$.
Introduction of two hybrid interpolators to construct $\hat{\mathcal{P}}_3$ appears to return the $t_{\text{sep}}=8,12$ determinations of $\mathcal{M}_S^{\text{eff}}$ to a form structurally similar to that found for the Jacobi-SS interpolator. It is curious to note that the $t_{\text{sep}}=16$ determination of $\mathcal{M}_S^{\text{eff}}$ is nearly identical to the corresponding determination with $\thinspace^2S_S\frac{1}{2}^+$, albeit with inflated error bars. These data suggest the inclusion of only the two hybrid interpolators provides limited improvement in the extraction of $g_S^{u-d}$. Inclusion of the remaining distilled interpolators in the variational analysis yields determinations of $\mathcal{M}_S^{\text{eff}}$ that not only exhibit broad plateaus as a function of $\tau$, but are also consistent within error and with the $\thinspace^2S_S\frac{1}{2}^+$ determination.
\subsubsection{$g_A^{u-d}$}
The isovector axial current
$A_\mu^3=\overline{q}\gamma_\mu\gamma_5\frac{\tau^3}{2}q$ is the
insertion that is most sensitive to excited-state contamination and
finite volume effects. As with $g_S^{u-d}$, determinations of the
effective matrix element (denoted here as
$\mathcal{M}_A^{\text{eff}}$) for different $t_{\text{sep}}$ using the
Jacobi-SS interpolator are plagued with poor statistics. Clearly
spatial Gaussian smearing of $\mathcal{N}_\alpha$ is not sufficient to
suppress excited-state effects.
Employing the local distilled interpolator $\thinspace^2S_S\frac{1}{2}^+$ yet again leads to a dramatic reduction in statistical uncertainties in determinations of $\mathcal{M}_A^{\text{eff}}$, and an increase in the extracted value of $g_A^{u-d}$ by $\sim7\%$. Notably broad plateaus in $\mathcal{M}_A^{\text{eff}}$ can likewise be seen for $t_{\text{sep}}=8,12$ that are consistent within error, thereby reducing the minimal source-sink separation required to reliably extract $g_A^{u-d}$ to $\lesssim1\text{ fm}$.
The behavior of the $\hat{\mathcal{P}}_3$ interpolator is in line with
that found for $g_S^{u-d}$: reduced agreement between the
$t_{\text{sep}}=8,12$ determinations, and fits most heavily
constrained by data at smaller source-sink separations. Curiously, the
slight oscillation seen in the $t_{\text{sep}}=16$
$\thinspace^2S_S\frac{1}{2}^+$ determination of
$\mathcal{M}_A^{\mathrm{eff}}$ is amplified following application of
the variational method. For $\tau-t_{\text{sep}}/2<-2$, the
$t_{\text{sep}}=16$ effective matrix element seems to fall into
agreement with the $t_{\text{sep}}=12$ determination. We speculate
that the abrupt decrease in the $t_{\text{sep}}=16$ effective matrix
element is caused by neglect of backward-propagating positive-parity
states in the corresponding two-point functions.
The $\hat{\mathcal{P}}_7$ interpolator perpetuates the double-plateau feature in the $t_{\text{sep}}=16$ determination of $\mathcal{M}_A^{\mathrm{eff}}$. Remarkably, however, application of the variational method to this enlarged basis of distilled interpolators demonstrates absolute agreement between the $t_{\text{sep}}=8,12$ determinations of $\mathcal{M}_A^{\mathrm{eff}}$.
\subsubsection{$g_T^{u-d}$}
The isovector tensor current $\mathcal{T}_{\mu\nu}^3=\overline{q}\frac{i}{2}\left[\gamma_\mu,\gamma_\nu\right]\frac{\tau^3}{2}q$ is among the best-determined quantities in the nucleon, particularly at zero virtuality, defining $g_T^{u-d}$. This is supported by models showing that excited-state contributions in the nucleon are suppressed when coupling to the tensor current $\mathcal{T}^3_{\mu\nu}$ \cite{Roberts:2018}.
As with previous charges, the $\thinspace^2S_S\frac{1}{2}^+$ interpolator leads to a dramatic reduction in the statistical uncertainty of the effective matrix elements (denoted here as $\mathcal{M}_T^{\mathrm{eff}}$) when compared to Jacobi-SS. By $t_{\text{sep}}\sim1.18\text{ fm}$, a definite plateau is present in $\mathcal{M}_T^{\mathrm{eff}}$ for several insertion times $\tau$; this same plateau is shared with the $t_{\text{sep}}=16$ determination. The absolute agreement of the central values and errors of the $t_{\text{sep}}=12,16$ determinations suggests that, when using distillation, a source-sink separation of roughly $1\text{ fm}$ is sufficient to reliably extract $g_T^{u-d}$.
The variationally improved $\hat{\mathcal{P}}_3$ expands upon the improvements seen with the $\thinspace^2S_S\frac{1}{2}^+$ interpolator. Despite the extracted values of $g_T^{u-d}$ being roughly equal (1.147(13) vs. 1.145(15), respectively) for $\thinspace^2S_S\frac{1}{2}^+$ and $\hat{\mathcal{P}}_3$, the $t_{\text{sep}}=12,16$ determinations depict even broader plateaus that are again entirely consistent within error. These enlarged plateaus for numerous insertion times $\tau$ follow naturally from a determination of $\Delta m$ that is greater than that obtained with $\thinspace^2S_S\frac{1}{2}^+$.
Proceeding to the $\hat{\mathcal{P}}_7$ interpolator continues the trends seen with the other distilled interpolators: the statistical uncertainties are greatly reduced and the plateaus in the effective matrix elements become ever larger. Remarkably, the $t_{\text{sep}}=16$ determination using $\hat{\mathcal{P}}_7$ resembles that of the vector charge, a matrix element that is constant in $\tau$ up to minor statistical fluctuations. We emphasize here that the effective matrix element for $\tau-t_{\text{sep}}/2=-8$ is coincident with the source interpolator and thus should not be considered a reflection of $g_T^{u-d}$.
Recalling the functional form of the fit applied to the three-point correlator, Eq. \ref{eq:3ptfit}, the coefficient $\mathcal{B}$ encodes the first excited-state matrix element and $\mathcal{C}$ captures the ground to first excited-state transition matrix element. As evident from Table \ref{tab:tensorparams}, the determinations of $\mathcal{B}$ for each interpolator are generally consistent with zero, indeed supporting the notion that excited-state contributions to $g_T^{u-d}$ are greatly suppressed. On the other hand, the largest source of contamination appears to stem from the transition matrix element contained in $\mathcal{C}$.
\subsubsection{$g_V^{u-d}$}
As the isovector vector current $V^3_\mu=\overline{q}\gamma_\mu\frac{\tau^3}{2}q$ simply counts the number of quarks within the nucleon, and all its excitations, there is no surprise in the extraction of $g_V^{u-d}$ for each interpolator. We have included the nucleon vector charge here as a useful sanity check, ensuring that an observable independent of state and $\tau$ is indeed recovered. The continuum requirement $1=Z_Vg_V^{u-d}$ trivially sets the vector current renormalization to be $Z_V\sim0.856$, consistent with determinations on finer lattice ensembles with lighter $m_\pi$ \cite{Yoon:2016dij,Yoon:2016jzj}. As for the previous charges, distillation alone affords considerably improved statistics over Jacobi-SS. That said, application of the variational method to a basis of distilled interpolators appears to produce a curious ``spreading'' in the determined effective vector matrix element.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.49\linewidth]{scalar-Jacobi-eff-charge+fits.png}
\centering
\includegraphics[width=0.49\linewidth]{scalar-D0J0S-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector scalar charge using the Jacobi-SS (left) and the $\thinspace^2S_S\frac{1}{2}^+$ (right) distilled interpolators. Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.49\linewidth]{scalar-Proj_r-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{scalar-Proj-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector scalar charge using the projected distilled interpolator from the 3x3 GEVP (left) and the projected distilled interpolator from the 7x7 GEVP (right). Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{table*}[tb]
\centering
\scriptsize{
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c |}
\hline
$\hat{\mathcal{O}}$ & $\tau_{\text{buff}}$ & $\mathcal{A}$ & $\mathcal{B}$ & $\mathcal{C}$ & $M_0$ & $M_1$ & $\left|\mathbf{a}\right|^2$ & $\left|\mathbf{b}\right|^2$ & $g^{u-d}_{S,\text{bare}}$ \\
\hline
\multirow{4}{*}{Jacobi-SS} & 1 & 3.72(40)e-08 & 2.57(3.12)e-07 & -2.93(28)e-08 & 0.546(5) & 1.087(77) & 4.26(20)e-08 & 3.72(26)e-08 & 0.87(10) \\
& 2 & 3.75(44)e-08 & 1.33(2.46)e-07 & -2.63(47)e-08 & 0.544(5) & 1.061(76) & 4.17(22)e-08 & 3.71(25)e-08 & 0.90(11) \\
& 3 & 3.86(51)e-08 & 1.54(2.87)e-07 & -3.08(95)e-08 & 0.543(6) & 1.054(78) & 4.13(23)e-08 & 3.75(25)e-08 & 0.94(13) \\
& 4 & 3.28(65)e-08 & 2.08(3.09)e-07 & 0.34(3.47)e-08 & 0.543(6) & 1.044(75) & 4.11(23)e-08 & 3.75(24)e-08 & 0.80(16) \\ \hline
\multirow{4}{*}{$\thinspace^2S_S\frac{1}{2}^+$} & 1 & 1.54(04)e-02 & 0.48(1.08)e-01 & -9.97(27)e-03 & 0.537(1) & 1.246(26) & 1.46(02)e-02 & 1.67(05)e-02 & 1.051(28) \\
& 2 & 1.54(05)e-02 & 0.47(1.23)e-01 & -1.00(06)e-02 & 0.537(1) & 1.249(28) & 1.46(02)e-02 & 1.68(05)e-02 & 1.051(31) \\
& 3 & 1.52(05)e-02 & 1.27(1.38)e-01 & -1.05(13)e-02 & 0.537(1) & 1.251(28) & 1.46(02)e-02 & 1.69(06)e-02 & 1.042(34) \\
& 4 & 1.52(06)e-02 & 0.18(1.41)e-01 & -4.3(5.5)e-03 & 0.536(1) & 1.247(28) & 1.46(02)e-02 & 1.68(06)e-02 & 1.043(40) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_3$} & 1 & 1.01(4)e+00 & 9.8(14.0)e+00 & -7.44(23)e-01 & 0.537(1) & 1.289(35) & 1.06(1)e+00 & 1.15(5)e+00 & 0.952(35) \\
& 2 & 1.03(4)e+00 & 11.7(19.4)e+00 & -8.43(64)e-01 & 0.538(1) & 1.317(40) & 1.07(1)e+00 & 1.19(6)e+00 & 0.961(38) \\
& 3 & 1.03(5)e+00 & 27.7(25.8)e+00 & -1.06(15)e+00 & 0.538(1) & 1.328(41) & 1.07(1)e+00 & 1.21(6)e+00 & 0.958(43) \\
& 4 & 1.05(5)e+00 & 8.2(24.8)e+00 & -0.77(64)e+00 & 0.537(2) & 1.324(41) & 1.07(1)e+00 & 1.21(6)e+00 & 0.984(49) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_7$} & 1 & 1.08(5)e+00 & 30.5(65.3)e+00 & -6.09(29)e-01 & 0.536(2) & 1.440(69) & 1.00(1)e+00 & 1.09(11)e+00 & 1.079(48) \\
& 2 & 1.06(4)e+00 & 42.4(48.4)e+00 & -5.43(80)e-01 & 0.536(2) & 1.416(71) & 1.00(1)e+00 & 1.06(11)e+00 & 1.061(43) \\
& 3 & 1.05(5)e+00 & 94.8(73.0)e+00 & -0.73(20)e+00 & 0.536(2) & 1.442(72) & 1.00(1)e+00 & 1.10(12)e+00 & 1.051(46) \\
& 4 & 1.05(6)e+00 & 57.0(78.0)e+00 & 0.6(1.1)e+00 & 0.536(2) & 1.443(77) & 1.00(1)e+00 & 1.10(12)e+00 & 1.045(55) \\ \hline
\end{tabular}
}
\caption{Results of simultaneous fits to the two-point and three-point correlators with scalar current insertion. The functional forms are given in Eqns. \ref{eq:2ptfit} and \ref{eq:3ptfit}.}
\end{table*}
\begin{figure*}[tb]
\centering
\centering
\includegraphics[width=0.49\linewidth]{axial-Jacobi-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{axial-D0J0S-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector axial charge using the Jacobi-SS (left) and the $\thinspace^2S_S\frac{1}{2}^+$ (right) distilled interpolators. Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{figure*}[tb]
\centering
\centering
\includegraphics[width=0.49\linewidth]{axial-Proj_r-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{axial-Proj-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector axial charge using the projected distilled interpolator from the 3x3 GEVP (left) and the projected distilled interpolator from the 7x7 GEVP (right). Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{table*}[tb]
\centering
\scriptsize{
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c |}
\hline
$\hat{\mathcal{O}}$ & $\tau_{\text{buff}}$ & $\mathcal{A}$ & $\mathcal{B}$ & $\mathcal{C}$ & $M_0$ & $M_1$ & $\left|\mathbf{a}\right|^2$ & $\left|\mathbf{b}\right|^2$ & $g^{u-d}_{A,\text{bare}}$ \\
\hline
\multirow{4}{*}{Jacobi-SS} & 1 & 4.92(32)e-08 & 2.65(6.23)e-08 & -6.66(1.32)e-09 & 0.539(5) & 0.995(62) & 3.93(23)e-08 & 3.76(20)e-08 & 1.253(52) \\
& 2 & 5.00(33)e-08 & 3.23(7.15)e-08 & -7.66(2.12)e-09 & 0.540(6) & 1.010(67) & 3.96(23)e-08 & 3.81(21)e-08 & 1.261(51) \\
& 3 & 5.03(33)e-08 & 6.83(6.78)e-08 & -1.30(0.44)e-08 & 0.540(6) & 0.999(66) & 3.95(23)e-08 & 3.74(21)e-08 & 1.272(54) \\
& 4 & 5.37(36)e-08 & 1.54(1.15)e-07 & -4.00(1.69)e-08 & 0.542(6) & 1.032(75) & 4.08(24)e-08 & 3.70(23)e-08 & 1.315(59) \\ \hline
\multirow{4}{*}{$\thinspace^2S_S\frac{1}{2}^+$} & 1 & 1.95(03)e-02 & -2.52(5.57)e-02 & -1.38(0.11)e-03 & 0.536(1) & 1.242(28) & 1.46(02)e-02 & 1.67(05)e-02 & 1.342(16) \\
& 2 & 1.96(03)e-02 & -3.03(5.78)e-02 & -1.33(0.23)e-03 & 0.536(1) & 1.244(28) & 1.46(02)e-02 & 1.68(05)e-02 & 1.343(16) \\
& 3 & 1.96(03)e-02 & -1.77(5.85)e-02 & -1.70(0.54)e-03 & 0.536(1) & 1.247(28) & 1.46(02)e-02 & 1.68(06)e-02 & 1.344(16) \\
& 4 & 1.98(04)e-02 & -0.55(6.32)e-02 & -6.11(2.61)e-03 & 0.536(1) & 1.253(28) & 1.46(02)e-02 & 1.69(06)e-02 & 1.357(17) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_3$} & 1 & 1.39(2)e+00 & -3.3(5.9)e+00 & -1.74(0.11)e-01 & 0.535(1) & 1.279(35) & 1.05(1)e+00 & 1.15(5)e+00 & 1.315(17) \\
& 2 & 1.41(3)e+00 & -6.5(7.5)e+00 & -2.31(0.24)e-01 & 0.536(1) & 1.301(38) & 1.06(1)e+00 & 1.18(6)e+00 & 1.328(17) \\
& 3 & 1.42(3)e+00 & -6.8(8.1)e+00 & -3.38(0.59)e-01 & 0.536(1) & 1.308(39) & 1.06(1)e+00 & 1.19(6)e+00 & 1.335(18) \\
& 4 & 1.44(3)e+00 & -3.3(9.5)e+00 & -9.36(3.15)e-01 & 0.537(1) & 1.328(41) & 1.07(1)e+00 & 1.21(6)e+00 & 1.348(19) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_7$} & 1 & 1.33(3)e+00 & 12.5(18.2)e+00 & -2.51(0.13)e-01 & 0.535(1) & 1.403(58) & 9.96(11)e-01 & 1.04(9)e+00 & 1.339(20) \\
& 2 & 1.35(3)e+00 & 11.5(26.9)e+00 & -2.87(0.38)e-01 & 0.535(2) & 1.442(72) & 9.99(12)e-01 & 1.10(12)e+00 & 1.347(22) \\
& 3 & 1.35(3)e+00 & 16.4(27.8)e+00 & -2.94(0.91)e-01 & 0.535(2) & 1.451(76) & 1.00(1)e+00 & 1.11(13)e+00 & 1.345(21) \\
& 4 & 1.35(3)e+00 & 23.7(29.5)e+00 & -6.39(5.15)e-01 & 0.535(2) & 1.443(77) & 1.00(1)e+00 & 1.10(12)e+00 & 1.346(22) \\ \hline
\end{tabular}
}
\caption{Results of simultaneous fits to the two-point and three-point correlators with axial current insertion. The functional forms are given in Eqns. \ref{eq:2ptfit} and \ref{eq:3ptfit}.}
\end{table*}
\begin{figure*}[tb]
\centering
\centering
\includegraphics[width=0.49\linewidth]{tensor-Jacobi-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{tensor-D0J0S-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector tensor charge using the Jacobi-SS (left) and the $\thinspace^2S_S\frac{1}{2}^+$ (right) distilled interpolators. Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{figure*}[tb]
\centering
\centering
\includegraphics[width=0.49\linewidth]{tensor-Proj_r-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{tensor-Proj-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector tensor charge using the projected distilled interpolator from the 3x3 GEVP (left) and the projected distilled interpolator from the 7x7 GEVP (right). Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{table*}[tb]
\centering
\scriptsize{
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c |}
\hline
$\hat{\mathcal{O}}$ & $\tau_{\text{buff}}$ & $\mathcal{A}$ & $\mathcal{B}$ & $\mathcal{C}$ & $M_0$ & $M_1$ & $\left|\mathbf{a}\right|^2$ & $\left|\mathbf{b}\right|^2$ & $g^{u-d}_{T,\text{bare}}$ \\
\hline
\multirow{4}{*}{Jacobi-SS} & 1 & 4.70(29)e-08 & 0.21(1.28)e-07 & 1.39(0.11)e-08 & 0.547(5) & 1.113(71) & 4.25(19)e-08 & 3.95(28)e-08 & 1.105(41) \\
& 2 & 4.49(32)e-08 & 3.36(7.57)e-08 & 1.21(0.19)e-08 & 0.542(6) & 1.043(73) & 4.07(23)e-08 & 3.78(23)e-08 & 1.101(42) \\
& 3 & 4.48(33)e-08 & 5.02(7.93)e-08 & 1.31(0.39)e-08 & 0.543(6) & 1.045(74) & 4.10(23)e-08 & 3.77(24)e-08 & 1.093(43) \\
& 4 & 4.50(33)e-08 & 2.45(9.79)e-08 & 2.04(1.28)e-08 & 0.544(6) & 1.059(79) & 4.15(23)e-08 & 3.76(26)e-08 & 1.084(45) \\ \hline
\multirow{4}{*}{$\thinspace^2S_S\frac{1}{2}^+$} & 1 & 1.65(03)e-02 & 4.10(3.26)e-02 & 5.21(0.09)e-03 & 0.534(1) & 1.204(19) & 1.44(01)e-02 & 1.61(04)e-02 & 1.146(13) \\
& 2 & 1.66(03)e-02 & 5.08(3.88)e-02 & 5.51(0.22)e-03 & 0.535(1) & 1.225(26) & 1.45(02)e-02 & 1.64(05)e-02 & 1.147(13) \\
& 3 & 1.67(03)e-02 & 6.03(4.64)e-02 & 6.51(0.53)e-03 & 0.536(1) & 1.245(28) & 1.46(02)e-02 & 1.68(06)e-02 & 1.147(14) \\
& 4 & 1.67(03)e-02 & 3.83(4.94)e-02 & 8.44(1.91)e-03 & 0.536(1) & 1.246(28) & 1.46(02)e-02 & 1.68(06)e-02 & 1.144(14) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_3$} & 1 & 1.22(2)e+00 & 4.31(5.36)e+00 & 2.94(0.08)e-01 & 0.536(1) & 1.292(31) & 1.06(1)e+00 & 1.16(4)e+00 & 1.144(15) \\
& 2 & 1.22(2)e+00 & 4.19(5.74)e+00 & 3.05(0.20)e-01 & 0.536(1) & 1.301(38) & 1.06(1)e+00 & 1.17(5)e+00 & 1.145(15) \\
& 3 & 1.22(2)e+00 & 4.53(6.51)e+00 & 3.79(0.48)e-01 & 0.537(2) & 1.316(40) & 1.07(1)e+00 & 1.20(6)e+00 & 1.145(15) \\
& 4 & 1.22(3)e+00 & 1.87(7.17)e+00 & 6.62(2.24)e-01 & 0.537(2) & 1.319(41) & 1.07(1)e+00 & 1.20(6)e+00 & 1.139(16) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_7$} & 1 & 1.12(2)e+00 & 25.4(15.6)e+00 & 1.62(0.08)e-01 & 0.534(2) & 1.395(57) & 9.92(12)e-01 & 1.03(8)e+00 & 1.128(16) \\
& 2 & 1.13(3)e+00 & 24.2(17.6)e+00 & 1.55(0.23)e-01 & 0.534(2) & 1.387(70) & 9.94(13)e-01 & 1.01(10)e+00 & 1.133(17) \\
& 3 & 1.13(3)e+00 & 32.3(23.9)e+00 & 2.54(0.63)e-01 & 0.535(2) & 1.427(75) & 9.99(13)e-01 & 1.08(12)e+00 & 1.135(17) \\
& 4 & 1.13(3)e+00 & 21.3(21.0)e+00 & 8.60(3.71)e-01 & 0.535(2) & 1.427(75) & 9.98(13)e-01 & 1.08(12)e+00 & 1.127(18) \\ \hline
\end{tabular}
}
\caption{Results of simultaneous fits to the two-point and three-point correlators with tensor current insertion. The functional forms are given in Eqns. \ref{eq:2ptfit} and \ref{eq:3ptfit}.}
\label{tab:tensorparams}
\end{table*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.49\linewidth]{vector-Jacobi-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{vector-D0J0S-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector vector charge using the Jacobi-SS (left) and the $\thinspace^2S_S\frac{1}{2}^+$ (right) distilled interpolators. Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.49\linewidth]{vector-Proj_r-eff-charge+fits.png}
\includegraphics[width=0.49\linewidth]{vector-Proj-eff-charge+fits.png}
\caption{Extracted effective unrenormalized isovector vector charge using the projected distilled interpolator from the 3x3 GEVP (left) and the projected distilled interpolator from the 7x7 GEVP (right). Displayed plots are for simultaneous fits with $t^{\text{fit}}_{2\text{pt}}\in\left[2,16\right]$ and $\tau_{\text{buff}}=2$.}
\end{figure*}
\begin{table*}[tb]
\centering
\scriptsize{
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c |}
\hline
$\hat{\mathcal{O}}$ & $\tau_{\text{buff}}$ & $\mathcal{A}$ & $\mathcal{B}$ & $\mathcal{C}$ & $M_0$ & $M_1$ & $\left|\mathbf{a}\right|^2$ & $\left|\mathbf{b}\right|^2$ & $g^{u-d}_{V,\text{bare}}$ \\
\hline
\multirow{4}{*}{Jacobi-SS} & 1 & 4.68(31)e-08 & 8.30(9.65)e-08 & -8.62(1.37)e-10 & 0.543(6) & 1.049(77) & 4.13(23)e-08 & 3.73(25)e-08 & 1.131(47) \\
& 2 & 4.70(32)e-08 & 8.34(9.55)e-08 & -9.62(3.94)e-10 & 0.543(6) & 1.047(77) & 4.13(23)e-08 & 3.71(25)e-08 & 1.137(47) \\
& 3 & 4.71(32)e-08 & 8.26(9.28)e-08 & 0.07(1.57)e-09 & 0.543(6) & 1.043(77) & 4.13(23)e-08 & 3.69(24)e-08 & 1.140(48) \\
& 4 & 4.97(97)e-08 & 3.19(8.09)e-07 & -0.52(1.75)e-07 & 0.543(6) & 1.043(76) & 4.12(23)e-08 & 3.70(24)e-08 & 1.21(23) \\ \hline
\multirow{4}{*}{$\thinspace^2S_S\frac{1}{2}^+$} & 1 & 1.73(03)e-02 & 4.90(5.49)e-02 & -1.33(0.13)e-04 & 0.536(1) & 1.255(29) & 1.46(02)e-02 & 1.69(06)e-02 & 1.185(14) \\
& 2 & 1.73(03)e-02 & 4.82(5.31)e-02 & -8.84(3.77)e-05 & 0.536(1) & 1.251(29) & 1.46(02)e-02 & 1.69(06)e-02 & 1.184(14) \\
& 3 & 1.73(03)e-02 & 4.90(5.31)e-02 & 2.52(1.63)e-04 & 0.536(1) & 1.251(29) & 1.46(02)e-02 & 1.69(06)e-02 & 1.184(14) \\
& 4 & 1.65(05)e-02 & -4.01(2.63)e-01 & 3.99(2.17)e-02 & 0.536(1) & 1.249(29) & 1.46(02)e-02 & 1.69(06)e-02 & 1.131(32) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_3$} & 1 & 1.25(2)e+00 & 9.64(7.17)e+00 & -1.60(0.11)e-02 & 0.536(2) & 1.306(39) & 1.06(1)e+00 & 1.19(6)e+00 & 1.174(15) \\
& 2 & 1.25(2)e+00 & 9.56(7.55)e+00 & -1.74(0.34)e-02 & 0.536(2) & 1.310(40) & 1.07(1)e+00 & 1.19(6)e+00 & 1.174(16) \\
& 3 & 1.26(2)e+00 & 9.00(7.61)e+00 & -0.21(1.54)e-02 & 0.536(2) & 1.315(41) & 1.07(1)e+00 & 1.20(6)e+00 & 1.177(15) \\
& 4 & 1.18(8)e+00 & -65.7(75.8)e+00 & 4.99(4.97)e+00 & 0.536(2) & 1.313(41) & 1.07(1)e+00 & 1.20(6)e+00 & 1.107(72) \\ \hline
\multirow{4}{*}{$\hat{\mathcal{P}}_7$} & 1 & 1.18(2)e+00 & 27.9(19.8)e+00 & -1.67(1.16)e-03 & 0.535(2) & 1.412(70) & 9.98(13)e-01 & 1.05(11)e+00 & 1.180(17) \\
& 2 & 1.18(2)e+00 & 26.0(23.2)e+00 & 0.68(3.75)e-03 & 0.535(2) & 1.412(75) & 9.98(13)e-01 & 1.05(11)e+00 & 1.181(18) \\
& 3 & 1.18(3)e+00 & 26.1(23.4)e+00 & 1.82(1.94)e-02 & 0.535(2) & 1.418(75) & 9.99(13)e-01 & 1.06(11)e+00 & 1.183(18) \\
& 4 & 1.07(6)e+00 & -294(237)e+00 & 13.1(7.1)e+00 & 0.535(2) & 1.427(77) & 9.99(13)e-01 & 1.08(12)e+00 & 1.068(53) \\ \hline
\end{tabular}
}
\caption{Results of simultaneous fits to the two-point and three-point correlators with vector current insertion. The functional forms are given in Eqns. \ref{eq:2ptfit} and \ref{eq:3ptfit}.}
\end{table*}
\subsection{Numerical Cost}
Given the substantial benefits incurred by use of distillation and the variational method in such calculations of hadronic structure, it is worthwhile to pause and consider the numerical cost of doing so. We highlight that a true one-to-one comparison of distillation to standard techniques is not entirely possible, as distillation is markedly distinct from traditional spatial smearing techniques via its sampling of entire time slices.
The calculation of point-to-all propagators in standard lattice calculations proceeds by inverting the chosen discretization of the Dirac operator against a point source in coordinate, spinor, and color space
\begin{equation}
S\left(\vec{x},\vec{z}\right)^{\beta\alpha}_{ba}=\sum_{\vec{y},\gamma,c}D^{-1}\left(\vec{x},\vec{y}\right)^{\beta\gamma}_{bc}\delta\left(\vec{y}-\vec{z}\right)\delta_{\gamma\alpha}\delta_{ca}.
\end{equation}
This operation requires 12 distinct inversions of the Dirac operator, one for each spinor and color index $\lbrace\alpha,a\rbrace$. In our case of the nucleon, with degenerate $u$- and $d$-quarks, this captures the propagation of the nucleon from some source point to all other points on the lattice. Implementing the sequential-source method, as we did for the Jacobi-SS interpolator, reduces the number of required inversions by combining the computed point-to-all propagator with the sink interpolator and using this object as a source for further inversions (termed sequential propagators). As the $u$-quarks can be combined into one sequential source and the $d$-quarks another, the computation of a three-point function in the sequential-source framework requires an additional $2\times12$ inversions, one set of 12 for each sequential source. As the sequential propagators must be recomputed for each new source-sink separation $t_{\text{sep}}$, we arrive at
\begin{equation}
N_{\text{src}}\left[12+24\cdot N_{\text{seps}}\right]N_{\text{cfg}}
\end{equation}
total inversions of the Dirac operator for a single Jacobi-smeared interpolator, where $N_{\text{src}}$ is the number of source positions, $N_{\text{seps}}$ the number of source-sink separations, and $N_{\text{cfg}}$ the number of gauge configurations. Were the variational method applied to a two-point correlation matrix of different Jacobi-smeared interpolators, $12\times N_{\text{ops}}$ inversions would be required to construct the requisite variationally-improved interpolator. The total number of inversions in the three-point function remains unchanged, and we would then have
\begin{equation}
N_{\text{src}}\left[12\left(1+N_{\text{ops}}\right)+24\cdot N_{\text{seps}}\right]N_{\text{cfg}}
\end{equation}
inversions, with $N_{\text{ops}}$ the dimension of the interpolator basis. If instead the variational method were applied to a correlation matrix of different Jacobi-smeared three-point functions, $N_{\text{ops}}$ inversions would be needed at the source and an additional $N_{\text{ops}}$ in the construction of the sequential propagators, for a total
\begin{equation}
N_{\text{src}}N_{\text{ops}}\left[12+24\cdot N_{\text{seps}}N_{\text{ops}}\right]N_{\text{cfg}},
\end{equation}
where we note that this is now proportional to $N_{\rm ops}^2$.
In the case of distillation, the inversion of the Dirac operator against a point source is replaced with inversion against an eigenvector on some time slice $t$:
\begin{equation}
S_{\alpha\beta}^{\left(k\right)}\left(\vec{x},t';t\right)=D^{-1}_{\alpha\beta}\left(t',t\right)\xi^{\left(k\right)}\left(t\right).
\end{equation}
As the eigenvectors are determined by solution of
$-\nabla^2\left(t\right)\xi^{\left(k\right)}\left(t\right)=\lambda^{\left(k\right)}\left(t\right)\xi^{\left(k\right)}\left(t\right)$
given some gauge covariant discretization of $\nabla^2$, the
calculation of the $k^{\text{th}}$ solution vector requires 3
inversions of the Dirac operator, one for each (suppressed) color
index. In practice, for a given $t_{\text{sep}}$, we calculate the
solution vectors forward (backward) from the source (sink), where the
solution vectors from the source are used in the two-point and
three-point calculations. Thus for a single distilled interpolator the
total number of inversions becomes
\begin{equation}
3\cdot N_{\text{src}}N_{\text{eigs}}\left(1+N_{\text{seps}}\right)N_{\text{cfg}},
\end{equation}
where $N_{\text{eigs}}$ is the dimension of the distillation
space. With $N_{\text{eigs}}=64$ in our case, this cost at first seems
excessive. However, once these solution vectors have been computed,
any number of interpolating fields, variationally-optimized or
otherwise, can be correlated \textit{without} additional cost. We note
that we have not taken into account the cost of the Wick contractions
when using distillation in this study; a future work will examine
the stability of our extracted matrix elements as the rank of the
distillation space is reduced.
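The distillation count can be set against the Jacobi-based counts above for the $N_{\text{eigs}}=64$ used here. Again this is only a sketch with hypothetical run parameters; the Jacobi three-point variational count is repeated so that the comparison is self-contained.

```python
def distillation(n_src, n_eigs, n_seps, n_cfg):
    """3 inversions per eigenvector: once forward from each source time,
    plus once per source-sink separation backward from the sink."""
    return 3 * n_src * n_eigs * (1 + n_seps) * n_cfg

def jacobi_variational_3pt(n_src, n_seps, n_cfg, n_ops):
    # Jacobi-smeared three-point variational count, for comparison.
    return n_src * n_ops * (12 + 24 * n_seps * n_ops) * n_cfg

# Hypothetical run parameters; N_eigs = 64 as quoted in the text.
n_src, n_seps, n_cfg = 4, 3, 100
cost_dist = distillation(n_src, 64, n_seps, n_cfg)      # independent of N_ops
cost_jacobi = jacobi_variational_3pt(n_src, n_seps, n_cfg, n_ops=5)
print(cost_dist, cost_jacobi)
```

Even at rank 64 the distillation count is smaller than the Jacobi variational three-point count and, crucially, it does not grow as further interpolators are added to the basis.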
\subsection{Conclusions}
We have investigated the use of distillation, and an extended basis of
interpolators, in the calculation of the scalar, axial and tensor
isovector charges of the nucleon, and made comparisons with a
calculation on the same ensemble using the conventional
Jacobi-smearing method with a single smearing radius. We find that
distillation affords a considerable improvement in the statistical
quality of the data when compared with calculations using Jacobi
smearing. We attribute this improvement to
the momentum projection performed at both source
and sink in the case of a two-point function, and at source, sink and
current in the case of a three-point function. More surprisingly,
even the use of a single, local distilled interpolating operator results in a
suppression of the contribution of excited-states relative to that of
the ground-state in both two-point and three-point functions.
For our variational analysis, we employed a basis of operators
comprising the non-relativistic operators that can be constructed with
up to two covariant derivatives, together with so-called ``hybrid''
operators where the gluons play a manifest role. Whilst the
improvement was not as dramatic as that between a single
Jacobi-smeared and a single distillation-smeared operator, the use of
the variational method and the extended basis provided greater consistency and fidelity in the matrix elements across different source-sink separations.
Furthermore, the extended basis can be introduced without further
propagator calculations, in contrast to the case of Jacobi smearing
where the use of the variational method demands a considerable
increase in the number of propagators to be computed.
The next step in our program is to extend our investigation from
matrix elements between nucleons at rest to those in motion, and from
forward matrix elements to off-forward matrix elements. The former is
key to the efficient application of the quasi-PDF, pseudo-PDF and
current-current correlator methods to the calculation of parton distribution functions in the
nucleon, whilst the latter is important for form factors at high
momenta, and off-forward matrix elements such as generalized parton
distributions. Nonetheless, the work presented here clearly
demonstrates the efficacy of distillation as a means both of
decreasing the statistical uncertainty, and of reducing excited-state
contributions, in calculations of nucleon properties.
\subsection*{Acknowledgments}
Calculations were performed using the \textit{Chroma} \cite{Edwards:2004sx}
software suite on the computing clusters at Jefferson Lab.
We are grateful to Jo Dudek and Stefan Meinel for the use
of their fitting codes, and to Robert Edwards for invaluable discussions
on the feasibility of these calculations. CE extends thanks to
Balint Jo\'o for invaluable discussions on building software on the
varied machine architectures on the Jefferson Lab clusters. This
material is based upon work supported by the U.S. Department of
Energy, Office of Science, Office of Nuclear Physics under contract
DE-AC05-06OR23177. Computations for this work were carried out in part
on facilities of the USQCD Collaboration, which are funded by the
Office of Science of the U.S.\ Department of Energy. CE was supported
in part by the U.S. Department of Energy under contract
DE-FG02-04ER41302 and a Department of Energy Office of Science
Graduate Student Research fellowship, through the U.S. Department of
Energy, Office of Science, Office of Workforce Development for
Teachers and Scientists, Office of Science Graduate Student Research
(SCGSR) program. The SCGSR program is administered by the Oak Ridge
Institute for Science and Education (ORISE) for the DOE. ORISE is
managed by ORAU under contract number DE-SC0014664.
\section{Introduction}
\label{intro}
Stochastic volatility models have been widely studied in the literature, and one important class consists of the Heston model \cite{Hes1993} and its extensions. In the standard Heston model, the instantaneous variance is a square-root mean-reverting CIR (Cox-Ingersoll-Ross \cite{CIR85}) process. On one hand, compared to the Black-Scholes framework, the Heston model has the advantage of reproducing some stylized facts in equity and foreign exchange option markets. The model provides analytically tractable pricing formulas, which allow for efficient calibration. On the other hand, the limitations of the Heston model have also been carefully examined. For example, it is unable to produce extreme paths of volatility during crisis periods, even with a very high volatility-of-volatility (vol-vol) parameter. In addition, the Feller condition, which is assumed in the Heston model to ensure that the volatility remains strictly positive, is often violated in practice, see e.g. Da Fonseca and Grasselli \cite{DaFGra11}.
To provide more consistent results with empirical studies, a natural extension is to consider jumps in the stochastic volatility models. In the Heston framework, Bates \cite{Bates1996} adds jumps in the dynamics of the asset, while Sepp \cite{Sepp2008} includes jumps in both asset returns and the variance, both papers using Poisson processes. In Barndorff-Nielsen and Shephard \cite{BNS2001}, the volatility process is the superposition of a family of positive non-Gaussian Ornstein-Uhlenbeck processes. Nicolato et al. \cite{NPS2017} study the case where a jump term is added to the instantaneous variance process which depends on an increasing and driftless L\'evy process, and they analyze the impact of jump diffusions on the realized variance smile and the implied volatility of VIX options. More generally, Duffie et al. \cite{DPS2000} \cite{DFS2003} propose the affine jump-diffusion framework for the asset and stochastic variance processes.
There are also other extensions of the Heston model. Grasselli \cite{G2016} combines the standard Heston model with the so-called $3/2$ model, where the volatility is the inverse of the Heston one. Kallsen et al. \cite{KMV2011} consider the case where the stock evolution includes a time-changed L\'evy process. In the framework of rough volatility models (see for example El Euch et al. \cite{EFR2016} and Gatheral et al. \cite{GJR2014}), El Euch and Rosenbaum \cite{ER2016} propose the rough Heston model, where the Brownian term is replaced by a fractional Brownian motion, and they derive the characteristic function by means of a fractional Riccati equation.
In this paper, we introduce an extension of Heston model, called the $\alpha$-Heston model, by adding a self-exciting jump structure in the instantaneous variance.
On financial markets,
the CBOE's Volatility Index (VIX) was introduced as a measure of the market volatility of the S\&P500 index. Since 2004, this index has been traded via VIX futures,
and its derivatives market has developed quickly over the last decade. Figure \ref{fig:vix} presents the daily closing values of the VIX index from January 2004 to July 2017.
\begin{figure}[h]
\caption{The CBOE's VIX value from January 2004 to July 2017.}
\label{fig:vix}
\begin{center}
\includegraphics[width=0.65\textwidth]{vixglobal1.pdf}
\end{center}
\end{figure} The historical data show clearly that the VIX can exhibit very large variations and jumps, particularly during periods of crisis and partially due to the lack of ``storage''. Moreover, the jumps frequently occur in clusters. We note several significant jump clusters: the first associated with the subprime crisis during 2008-2010, the second with the Greek sovereign debt crisis during 2010-2012, and the last with the Brexit event around 2016-2017. Between the jump clusters, the VIX drops to relatively low levels during normal periods. One way to model the cluster effect in finance is to adopt Hawkes processes \cite{Hawkes}, where one needs to specify the jump process together with its intensity; the inconvenience is that this increases the dimension of the stochastic processes involved. For volatility data, El Euch et al. \cite{EFR2016} emphasize that the market is highly endogenous and justify the use of nearly unstable Hawkes processes in their framework. Furthermore, Jaisson and Rosenbaum \cite{JR15} prove that nearly unstable Hawkes processes converge to a CIR process after suitable rescaling. It is therefore natural to reconcile the Heston framework with a suitable jump structure in order to describe the jump clusters.
Compared to the standard Heston model, the $\alpha$-Heston model includes an $\alpha$-root term and a compensated $\alpha$-stable L\'evy process in the stochastic differential equation (SDE) of the instantaneous variance process $V=(V_t,t\geq 0)$. The number of extra parameters is small, and the single main parameter $\alpha$ determines the jump behavior. This model allows us to describe the cluster effect in a parsimonious and coherent way. We adopt a related approach based on continuous-state branching processes with immigration (CBI processes). With the general integral characterization of SDEs in Dawson and Li \cite{DawsonLi}, $V$ can be seen as a marked Hawkes process with infinite activity influenced by a Brownian noise (see Jiao et al. \cite{JMS2017}), which is suitable for modelling the self-exciting jump property. In this model, the $\alpha$-stable jump process is leptokurtic and heavy-tailed. The parameter $\alpha$ corresponds to the Blumenthal-Getoor index. Hence the model is able to capture both large and small fluctuations, and even extremely high peaks during crisis periods. In addition, the law of jumps follows a Pareto distribution. Empirical regularities in economics and finance often suggest a Pareto law: Liu et al. \cite{Liu99} found that the realized volatility exhibits a power-law tail; more recently, Avellaneda and Papanicolaou \cite{AP2018} showed that the right-tail distribution of the VIX time series can be fitted to a Pareto law. We note that the same Feller condition applies as in the standard Heston case, and this condition is more easily respected by the $\alpha$-Heston model: since the behavior of the small jumps with infinite activity is similar to that of a Brownian motion, the jump part allows one to reduce the vol-vol parameter.
Thanks to the link between CBI and affine processes established by Filipovi\'c \cite{F01}, our model belongs to the class of affine jump-diffusion models in Duffie et al. \cite{DPS2000}, \cite{DFS2003} and the general result on the characteristic functions holds for the $\alpha$-Heston jump structure.
However, the associated generalized Riccati operator is not analytic, which invalidates certain arguments borrowed from complex analysis.
One important point is that, although theoretical results on generalized Riccati operators are established for general affine models, in many explicit examples the generalized Riccati equation associated with the state-dependent variable of $V$ is quadratic.
The $\alpha$-Heston model adds more flexibility to the cumulant generating function, since its generalized Riccati operator contains a supplementary $\alpha$-power term.
We examine the moment explosion behaviors of both asset and variance processes following Keller-Ressel \cite{KR11}.
We are also interested in the implied volatility surface and its asymptotic behaviors based on the model-free result of Lee \cite{L04}. For the asset options, we show that the wing behaviors of the volatility smile at extreme strikes are the sharpest. For the variance options, we first estimate the asymptotic property of tail probability of the variance process.
One of the most interesting features of the $\alpha$-Heston model is that by using the CBI characteristics as in Li and Ma \cite{LM13}, we can thoroughly analyse the jump cluster effect.
Inspired by Duquesne and Labbe \cite{DuqLab}, we provide a decomposition formula for the variance process $V$, which contains a fundamental part together with a sequence of jump cluster processes. This decomposition implies a branching structure in the sense that each cluster process is induced by a ``mother jump'' which is followed by ``child jumps''. The mother jump represents a triggering shock on the market and is generally driven by exogenous news, whereas the child jumps may reflect a contagion effect. We then study relevant properties such as the duration of one cluster and the number of clusters occurring in a given period. We are particularly interested in the role played by the main parameter $\alpha$.
The rest of the paper is organized as follows. We present the model framework in Section \ref{sec:Model}. Section \ref{sec: affine} is devoted to the affine characterization of the model and related properties. In Section \ref{sec: extreme strike}, we study the asymptotic implied volatility behavior of asset and variance options. Section \ref{sec: clusters} deals with the analysis of jump clusters. We conclude the paper by providing the proofs in Appendix.
\section{Model framework}
\label{sec:Model}
Let us fix a probability space $(\Omega, \mathcal A, \mathbb Q)$ equipped with a filtration $\mathbb F=(\mathcal F_t)_{t\geq 0}$ which satisfies the usual conditions. We first present a family of stochastic volatility models by using a general integral representation of SDEs with random fields.
Consider the asset price process $S=(S_t,t\geq 0)$ given by
\begin{equation}\label{Heston-integral}
\frac{d S_t}{S_t} = rdt + \int_0^{V_t} B(dt,du), \quad S_0>0 \end{equation}
where $r\in\mathbb R_+$ is the constant interest rate, $B(ds,du)$ is a white noise on $\mathbb{R}_+^2$ with intensity $dsdu$, and the process $V=(V_t,t\geq 0)$ is given by
\begin{equation}\label{Vol integral}V_t = V_0 + \int_0^t a( b - V_s ) ds + \sigma \int_0^t \int_0^{V_s} W(ds,du) + \sigma_N \int_0^t \int_0^{V_{s-} } \int_{\mathbb{R}^+} \zeta \widetilde{N}(ds,du, d\zeta)
\end{equation}
where $a,b,\sigma, \sigma_N\in\mathbb R_+$, $W(ds,du)$ is a white noise on $\mathbb{R}_+^2$ correlated to $B(ds,du)$ such that $B(ds,du) = \rho W(ds,du) +\sqrt{1-\rho^2} \overline{W}(ds, du)$ with $\overline{W}(ds,du)$ being an independent white noise and $\rho\in (-1,1)$, $\widetilde N(ds,du,d\zeta)$ is an independent compensated Poisson random measure on $\mathbb{R}_+^3$ with intensity $dsdu\nu(d\zeta)$ with $\nu(d\zeta)$ being a L\'evy measure on $\mathbb{R}_+$ and satisfying
$\int_0^\infty (\zeta\wedge\zeta^2)\nu(d\zeta)<\infty $. The measure $\mathbb Q$ stands for the risk-neutral probability measure. We shall discuss in more detail the change of probability in Section \ref{sec: change of probability}.
The variance process $V$ defined above is a CBI process (c.f. Dawson and Li \cite[Theorem 3.1]{DawsonLi}) with the branching mechanism given by
\begin{equation}\label{equ: Psi general}
\Psi(q)=aq +\frac{1}{2}\sigma^2q^2+ \int_0^{\infty}(e^{-q\sigma_N\zeta}-1+q\sigma_N\zeta){\nu}(d\zeta)
\end{equation}
and the immigration rate $\Phi(q)=a b q$. The existence and uniqueness of a strong solution of \eqref{Vol integral} is proved in \cite{DawsonLi} and \cite{LM13}. From the financial viewpoint, Filipovi\'c \cite{F01} has shown how the CBI processes naturally enter the field of affine term structure modelling. The integral representation provides a family of processes where the integral intervals in \eqref{Vol integral} depend on the value of the process itself, which means that the jump frequency will increase when a jump occurs, corresponding to the self-exciting property.
We are particularly interested in the following model, which is called the {$\alpha$-Heston model},
\begin{eqnarray}\label{alpha Heston-root}
\displaystyle \frac{d S_t}{S_t} &=& \displaystyle r dt + \sqrt{V_t} dB_t \\
dV_t &=& a\left ( b - V_t \right) dt + \sigma \sqrt{V_t} dW_t
+\sigma_N \sqrt[\alpha]{V_{t-}} dZ_t \label{alpha CIR}
\end{eqnarray}
where $B=(B_t,t\geq 0)$ and $W=(W_t,t\geq 0)$ are correlated Brownian motions $d\left<B,W\right>_t=\rho dt$ and $Z=(Z_t,t\geq 0)$ is an independent spectrally positive compensated $\alpha$-stable L\'evy process with parameter $\alpha \in (1,2]$ whose Laplace transform is given, for any $q\geq 0$, by
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}\big[e^{-qZ_t}\big]=\exp\Big(-\frac{tq^\alpha}{\cos(\pi\alpha/2)}\Big).
\eeqnn
The equation \eqref{alpha CIR} corresponds to the choice of the L\'evy measure \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{Levymeasure}
\nu_\alpha(d\zeta)=-{1_{\{\zeta>0\}} d\zeta \over \cos(\pi\alpha/2)\Gamma(-\alpha)\zeta^{1+\alpha}}, \quad 1<\alpha<2,
\eeqlb
in \eqref{Vol integral}. Then the solutions of the two systems of SDEs admit the same probability law and are equal almost surely in an expanded probability space by \cite{Li11}.
The {$\alpha$-Heston model} is an extension of standard Heston model in which the jump part of the variance process depends on an $\alpha$-square root jump process. In particular, we call the process $V$ defined in \eqref{alpha CIR} an $\alpha$-{CIR}$(a,b,\sigma,\sigma_N,\alpha)$ process and the existence and uniqueness of the strong solution are established in Fu and Li \cite{FL10}. In this case, by \eqref{equ: Psi general} and \eqref{Levymeasure}, the variance $V$ has the explicit branching mechanism
\begin{equation}\label{equ: Psi SCIR}
\Psi_{\alpha}(q)=a q+\frac{\sigma^2}{2}q^2-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}q^\alpha.
\end{equation} Compared to the standard Heston model, the parameter $\alpha$ characterizes the jump behavior and the tail fatness of the instantaneous variance process $V$. When $\alpha$ is near $1$, $V$ is more likely to have large jumps, but its values between large jumps tend to be small due to deeper negative compensations (c.f. \cite{JMS2017}). When $\alpha$ approaches $2$, there are fewer large jumps but more frequent small jumps. In the case $\alpha=2$, the process $Z$ reduces to an independent Brownian motion scaled by $\sqrt{2}$, and the model reduces to a standard Heston one.
The Feller condition, that is, the inequality $2ab\geq\sigma^2$, is often assumed in the Heston model to ensure the positivity of the process $V$. In the $\alpha$-Heston model, the same condition remains valid. More precisely, for any $\alpha\in(1,2)$, the point $0$ is an inaccessible boundary for \eqref{alpha Heston-root} if and only if $2ab\geq\sigma^2$, for any $\sigma_N\geq 0$ (c.f. \cite[Proposition 3.4]{JMS2017}). From the financial point of view, this means that the jumps have no impact on the possibility for the volatility to reach the origin, which can be explained by the fact that only positive jumps are added and their compensators are proportional to the process itself. When $\alpha=2$, the Feller condition becomes $2ab\geq\sigma^2+2\sigma_N^2$, since $Z$ becomes a scaled Brownian motion. Empirical studies show (see e.g. Da Fonseca and Grasselli \cite{DaFGra11}, Grasselli \cite{G2016}) that, in practice, the Feller condition is often violated, since high vol-vol is required to reproduce large variations when calibrating to equity market data. This point is often seen as a drawback of the Heston model. In the $\alpha$-Heston model, part of the vol-vol parameter is absorbed by the jump part. Indeed, as shown by Asmussen and Rosinski \cite{AR2001}, the small jumps of a L\'evy process can be approximated by a Brownian motion, so that the small jumps induced by the infinite activity of the variance process generate a behaviour similar to that of a Brownian motion. This mechanically reduces the contribution of the Brownian part, and hence the vol-vol parameter. As a consequence, our model is more likely to preserve the Feller condition and the positivity of the volatility process.
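The reduction at $\alpha=2$ can be checked numerically on the branching mechanism \eqref{equ: Psi SCIR}: since $\cos(\pi\alpha/2)\to-1$, the jump term tends to $\sigma_N^2q^2$, so the effective squared vol-vol becomes $\sigma^2+2\sigma_N^2$, consistent with the Feller condition quoted above. A minimal sketch, with arbitrary parameter values:

```python
import math

def psi_alpha(q, a, sigma, sigma_N, alpha):
    """Branching mechanism Psi_alpha of the alpha-CIR variance process."""
    return (a * q + 0.5 * sigma**2 * q**2
            - sigma_N**alpha / math.cos(math.pi * alpha / 2) * q**alpha)

a, sigma, sigma_N, q = 5.0, 0.08, 1.0, 0.7
# Heston-type limit at alpha = 2: a*q + (sigma^2/2 + sigma_N^2) * q^2.
limit = a * q + (0.5 * sigma**2 + sigma_N**2) * q**2
print(psi_alpha(q, a, sigma, sigma_N, 2.0), limit)
```

The two printed values coincide, and $\Psi_\alpha$ approaches the limit continuously as $\alpha\uparrow 2$.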
Figure \ref{fig: variance} provides a simulation of the variance process $V$ defined in \eqref{alpha CIR} over a period of $T=14$ years, to be compared with the empirical VIX data (from 2004 to 2017) in Figure \ref{fig:vix}. The parameters are chosen to be $a=5$, $b=0.14$, $\sigma=0.08$, $\sigma_N=1$ and $\alpha=1.26$. The initial value is fixed at $V_0=0.03$ according to the VIX data on January 2nd, 2004. Note that the Feller condition is largely satisfied with the above choice of parameters, and the values of $V$ are always positive in Figure \ref{fig: variance}. We also observe the cluster phenomenon for jumps, and in particular some large jumps concentrated on a short period. At the same time, the values of the variance process $V$ remain at relatively low levels between the jumps, which corresponds to the normal periods between crises, as shown by the empirical data in Figure \ref{fig:vix}.
\begin{figure}
\caption{Simulation of the variance process $V$.}
\begin{center}
\includegraphics[width=0.6\textwidth]{variance.eps}
\end{center}
\label{fig: variance}
\end{figure}
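A path of this kind can be reproduced qualitatively with a full-truncation Euler scheme. The sketch below is our own illustration and is only indicative: the Chambers-Mallows-Stuck draws are used without the exact scale constant matching the Laplace transform of $Z$, the compensation is left implicit, and paths are clipped at zero, so the normalization is approximate.

```python
import numpy as np

def stable_pos(alpha, size, rng):
    """Totally skewed (beta = 1) alpha-stable draws via the
    Chambers-Mallows-Stuck method; scale normalization only approximate."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    t0 = np.arctan(np.tan(np.pi * alpha / 2)) / alpha
    return (np.sin(alpha * (U + t0)) / np.cos(U)**(1 / alpha)
            * (np.cos(U - alpha * (U + t0)) / W)**((1 - alpha) / alpha))

def alpha_cir_path(V0, a, b, sigma, sigma_N, alpha, T, N, seed=0):
    """Full-truncation Euler sketch of the alpha-CIR variance process."""
    rng = np.random.default_rng(seed)
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), N)
    dZ = dt**(1 / alpha) * stable_pos(alpha, N, rng)
    V = np.empty(N + 1)
    V[0] = V0
    for n in range(N):
        Vp = max(V[n], 0.0)  # full truncation keeps the coefficients real
        V[n + 1] = max(Vp + a * (b - Vp) * dt + sigma * np.sqrt(Vp) * dW[n]
                       + sigma_N * Vp**(1 / alpha) * dZ[n], 0.0)
    return V

# Parameter values of the simulation in the text.
V = alpha_cir_path(0.03, 5.0, 0.14, 0.08, 1.0, 1.26, T=14.0, N=5000)
print(V.min(), V.max())
```

With a strong mean reversion ($a=5$) the simulated path stays at low levels most of the time, punctuated by clustered upward spikes.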
\section{Affine characteristics}\label{sec: affine}
In this section, we give the joint Laplace transform of the log-price, the variance and its integrated process according to Duffie et al. \cite{DPS2000,DFS2003} and Keller-Ressel \cite{KR11}.
We begin by discussing the probability change between the historical and the risk-neutral pricing probability measures. We shall also make comparisons with several other affine models in literature.
\subsection{Change of probability measures}\label{sec: change of probability}
We have assumed that the model dynamics \eqref{Heston-integral}, \eqref{Vol integral} and \eqref{alpha Heston-root} are specified under a risk-neutral probability $\mathbb Q$. However, it is important to establish a link with the physical, or historical, probability measure, generally denoted by $\mathbb{P}$, in order to keep a tractable form for the evolution of the processes describing the market.
The construction of an equivalent historical probability is based on an Esscher-type transformation in Kallsen et al. \cite{KMK2010} which is a natural extension of the class proposed by Heston \cite{Hes1993}.
The next result shows that the general class of tempered Heston-type models is closed under
this change of probability; it is a slight modification of \cite[Proposition 4.1]{JMS2017}.
\begin{Pro}\label{pro:changementprob} Let $(S, V)$ be as in \eqref{Heston-integral} and \eqref{Vol integral} under the probability measure $\mathbb Q$
and assume that the filtration $\mathbb F$ is generated by the random fields $(W, \overline{W})$ and $\widetilde N$. Fix
$(\eta, \overline{\eta})\in\mathbb{R}^2$ and $\theta\in\mathbb{R}_+$, and define
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
U_t:=\eta\int_0^t\int_0^{V_s}W(ds,du) + \overline{\eta}\int_0^t\int_0^{V_s}\overline{W}(ds,du)
+\int_0^t\int_0^{V_{s-}}\int_0^\infty (e^{-\theta\zeta}-1)\widetilde{N}(ds,du,d\zeta).
\eeqnn
Then the Dol\'eans-Dade exponential $\mathcal{E} (U)$ is a martingale and the probability
measure $\mathbb P$ defined by
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\left. \frac{d\mathbb P}{d\mathbb Q}\right|_{\mathcal{F}_t}=\mathcal{E} (U)_t,
\eeqnn
is equivalent to $\mathbb Q$.
Moreover, under $\mathbb P$, $(S, V)$ satisfy \eqref{Heston-integral} and \eqref{Vol integral} with the parameters $\sigma^{\mathbb P}=\sigma$, $\sigma^{\mathbb P}_N=\sigma_N$,
$$a^{\mathbb P}=a-\sigma\eta -\frac{\alpha\sigma_N}{\cos(\pi\alpha/2)}\theta^{\alpha-1}, \quad b^{\mathbb P}=ab/a^{\mathbb P},$$ and the L\'evy measure
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\nu_\alpha^{\mathbb P}(d\zeta)=-\frac{1_{\{\zeta>0\}}e^{-\theta\zeta}}{\cos(\pi\alpha/2)\Gamma(-\alpha)\zeta^{1+\alpha}}d\zeta.
\eeqnn
\end{Pro}
The model under $\mathbb P$ remains in the CBI class of $\alpha $-Heston model and shares similar behaviors.
Note that the parameters $\eta, \overline{\eta}$ and $\theta$ are chosen such that $a^{\mathbb P}\in\mathbb R_+$.
As a direct consequence of the above proposition, the return rate of the price process under $\mathbb P$ becomes
\[\mu^{\mathbb P}_t = r - V_t \left( \rho \eta + \sqrt{1-\rho^2} \overline{\eta} \right).\]
The risk premiums are given by
\begin{eqnarray*}
\lambda_S(t) &:=& \mu_t^{\mathbb P} -r = -\left( \rho \eta + \sqrt{1-\rho^2} \overline{\eta} \right) V_t \\
\lambda_V(t) &:=& (a^{\mathbb P}-a) V_t = - \left(\sigma\eta + \frac{\alpha\sigma_N}{\cos(\pi\alpha/2)}\theta^{\alpha-1}\right) V_t \, .
\end{eqnarray*}
When $\eta<0$, the risk premium $\lambda_V$ is positively correlated with the volatility process $V$.
This positive correlation between the risk premium and the volatility can explain the strongly upward-sloping VIX smile detailed in
\cite{BFG2016}.
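The parameter mapping of Proposition \ref{pro:changementprob} is easy to evaluate. A minimal sketch, with hypothetical Girsanov parameters $(\eta,\theta)$ and the model parameters used in the simulation of Section \ref{sec:Model}:

```python
import math

# Risk-neutral parameters (illustrative values from Section 2).
a, b, sigma, sigma_N, alpha = 5.0, 0.14, 0.08, 1.0, 1.26
eta, theta = -0.5, 0.2  # hypothetical change-of-measure parameters, eta < 0

# Parameter mapping of the Esscher-type change of measure.
a_P = (a - sigma * eta
       - alpha * sigma_N / math.cos(math.pi * alpha / 2) * theta**(alpha - 1))
b_P = a * b / a_P

print(a_P, b_P)  # the product a_P * b_P equals a * b by construction
```

Since $\cos(\pi\alpha/2)<0$ for $\alpha\in(1,2)$, the $\theta$-term increases the mean-reversion speed under $\mathbb P$, while the product $a^{\mathbb P}b^{\mathbb P}=ab$ is preserved.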
\subsection{Joint characteristic function}
In the Heston model, it is well known that the characteristic function plays a crucial role for the pricing of derivatives and the model calibration. We now provide the joint Laplace transform of the triplet: the log-price, the variance and its integrated process. The following result is a direct consequence of \cite{DPS2000} and \cite{KR11} and its proof is postponed to Appendix.
\begin{Pro}\label{Pro: joint laplace transform}
Let $Y_t=\log S_t$.
For any $\xi=(\xi_1,\xi_2,\xi_3)\in i\mathbb{R}\times\mathbb{C}_-^2$,
\begin{equation}
\mathbb E\Big[\exp\big(\xi_1Y_t+\xi_2V_t+\xi_3\int_0^tV_sds\big)\Big]=\exp\Big(\xi_1Y_0+\psi(t,\xi)V_0+\phi(t,\xi)\Big)\end{equation}
where $\phi$ and $\psi$ solve the generalized Riccati equations
\begin{eqnarray}\label{generalized Racci}
\partial_t\phi(t,\xi)&=&F(\xi_1,\psi(t,\xi),\xi_3),\quad \phi(0,\xi)=0;\label{generalized Racci 1}\\
\partial_t\psi(t,\xi)&=&R(\xi_1,\psi(t,\xi),\xi_3),\quad \psi(0,\xi)=\xi_2.\label{generalized Racci 2}
\end{eqnarray}
Moreover, the functions $F$ and $R: i\mathbb{R}\times\mathbb{C}_-^2\rightarrow \mathbb R$ are defined by
\begin{eqnarray}
F(\xi_1,\xi_2,\xi_3)&=&r\xi_1+ab\xi_2,\label{F explicit}\\
R(\xi_1,\xi_2,\xi_3)&=&\frac{1}{2}(\xi_1^2-\xi_1)+\rho\sigma \xi_1\xi_2+\frac{1}{2}\sigma^2\xi_2^2-a\xi_2-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}(-\xi_2)^\alpha+\xi_3.\label{R explicit}
\end{eqnarray}
\end{Pro}
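For real arguments, the Riccati system \eqref{generalized Racci 1}-\eqref{generalized Racci 2} can be integrated directly. The pure-Python sketch below is our own illustration, with the parameter values of the simulation in Section \ref{sec:Model}; it takes $\xi_1=\xi_3=0$, $r=0$ and a real $\xi_2<0$, for which $\psi$ stays in $(\xi_2,0]$ so that the $\alpha$-power term is well defined. The \texttt{max} is a numerical guard we add against tiny overshoot past the root at $0$.

```python
import math

a, b, sigma, sigma_N, alpha = 5.0, 0.14, 0.08, 1.0, 1.26
xi2, V0, T, N = -1.0, 0.03, 1.0, 20000

def R(psi):
    """Main generalized Riccati operator for xi1 = xi3 = 0 and psi <= 0."""
    return (0.5 * sigma**2 * psi**2 - a * psi
            - sigma_N**alpha / math.cos(math.pi * alpha / 2)
            * max(-psi, 0.0)**alpha)

# Classical RK4 for psi' = R(psi), psi(0) = xi2; phi' = a*b*psi by trapezoid.
dt = T / N
psi, phi = xi2, 0.0
for _ in range(N):
    p_old = psi
    k1 = R(psi)
    k2 = R(psi + 0.5 * dt * k1)
    k3 = R(psi + 0.5 * dt * k2)
    k4 = R(psi + dt * k3)
    psi += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    phi += a * b * dt * (p_old + psi) / 2.0

laplace = math.exp(psi * V0 + phi)  # = E[exp(xi2 * V_T)] by the Proposition
print(psi, laplace)
```

As expected, $\psi$ increases monotonically from $\xi_2$ towards the stable root $0$ of $R$, and the resulting Laplace transform value lies in $(0,1)$.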
To compare the $\alpha$-Heston model with other models in the literature, we consider in the remainder of the paper the usual case, as in \cite{DPS2000} and \cite{KR11}, where the third variable $\xi_3$ is omitted and $r=0$. Recall that in the standard Heston model, the generalized Riccati operators are given by
\begin{eqnarray}F_H(\xi_1,\xi_2)=ab\xi_2,\quad { \text{and } }\quad R_H(\xi_1,\xi_2)=\frac{1}{2}(\xi_1^2-\xi_1)+\rho\sigma \xi_1\xi_2+\frac{1}{2}\sigma^2\xi_2^2-a\xi_2.\end{eqnarray}
By Proposition \ref{Pro: joint laplace transform}, the $\alpha$-Heston model admits
\begin{equation}\label{definition-R}
F(\xi_1,\xi_2)=F_H(\xi_1,\xi_2),\quad { \text{and } }\quad R(\xi_1,\xi_2)=R_H(\xi_1,\xi_2)-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}(-\xi_2)^\alpha.
\end{equation}
Note that the function $R$ in \eqref{definition-R} is not analytic and is well defined only for $\xi_2\leq 0$. The difference $R(\xi_1,\xi_2) -R_H(\xi_1,\xi_2) $ is positive since $\cos(\pi \alpha /2)<0$ for $\alpha\in(1,2]$.
As stated in \cite{KR11}, $F$ characterizes the state-independent dynamics of $(S,V)$, while $R$ characterizes the state-dependent dynamics. In order to highlight the primacy of the function $\psi$ in \eqref{generalized Racci 2}, we refer to $R$ as the main generalized Riccati operator.
The main point we highlight is that many models discussed in the literature admit similar forms of $R$. In Barndorff-Nielsen and Shephard \cite{BNS2001}, $R$ is a particular case of the Heston one, i.e.
$\sigma=0$, and the main innovation of their model is to
extend in an interesting way the auxiliary operator $F$. The model of Bates \cite{Bates1996} has a more general Riccati operator $R$, but the new term depends only on the Laplace coefficient
of the stock $S$. The variance process in \cite{Bates1996} therefore follows the CIR diffusion, and hence there is no difference for volatility and variance options compared to the Heston model.
For the stochastic volatility jump model in Nicolato et al. \cite{NPS2017}, the examples share the same Riccati operator as the Heston model. As a consequence, the Laplace transform of the variance process keeps the same affine form. It is then not surprising that ``the specific choice of jump distribution has a minor effect on the qualitative behavior of the skew and the term structure of the implied volatility surface'', as noted in \cite{NPS2017} (see also \cite{NV03}), since the plasticity of the model is limited to the form of the
auxiliary function $\phi(t,\xi)$, which is independent of the level of the initial variance $V_0$ in the cumulant generating function.
Our model exhibits a different behavior due to the supplementary $\alpha$-power term appearing in the main generalized Riccati operator $R$, which adds more flexibility to the coefficient of the variance $\psi(t,\xi)$ in the cumulant generating function. The reason lies in the fact that the new jump part depends on the variance itself, resulting in a non-linear dependence in \eqref{R explicit}.
In other words, the self-exciting property of the jump term introduces a completely different shape of the cumulant generating function.
\section{Asymptotic behaviors and implied volatility}\label{sec: extreme strike}
In this section, we focus on the implied volatility surfaces for both asset and variance options, in particular, on their asymptotic behaviors at small or large strikes. We follow the model-free result in the pioneering paper of Lee \cite{L04} and aim to obtain some refinements for the specific $\alpha$-Heston model. We also provide the moment explosion conditions.
\subsection{Asset options}
We begin by stating the following results on the generalized Riccati operator $R$, taken from \cite{KR11}, and give the moment explosion condition for the asset price $S$.
\begin{Pro}\label{proposition-KR}
We assume $a> \sigma \rho$. Define $w(\xi_1)$ such that $R(\xi_1,w(\xi_1))=0$, and let $T_*(\xi_1):= \sup\{T:\, \mathbb{E}[S_T^{\xi_1}]<\infty\}$.
\begin{enumerate}[(1)]
\item $w(\xi_1)$ has $[0,1]$ as its maximal domain.
\item $\forall \xi_1\in[0,1]$ we have $\lim_{t\rightarrow \infty} \psi(t,\xi_1) = w(\xi_1)$.
\item $\forall \xi_1\in[0,1]$ we have $T_*(\xi_1) =\infty$ and $\forall \xi_1\notin[0,1]$ we have $T_*(\xi_1) =0$.
\end{enumerate}
\end{Pro}
\proof
The couple $(Y_t, V_t)$ is an affine process characterized by (\ref{definition-R}) and $F(u,w) := ab w$. Note that $F(0,0)=R(0,0)=R(1,0)=0$ and
$\chi(q_1):=\frac{\partial R(q_1,q_2)}{\partial q_2}\big\vert_{q_2=0}=\rho\sigma q_1-a<\infty$. Then by Keller-Ressel \cite[Corollary 2.7]{KR11} we have
$\mathbb E[S_T]<\infty$ for any $T>0$. Also note that $\chi(0)<0$ and $\chi(1)<0$ as $a>0$, $\rho<0$ and $\sigma>0$. It follows from \cite[Lemma 3.2]{KR11} that
there exist a maximal interval $I$ and a unique function $w\in C(I)\cap C^1(I^\circ)$ such that $R(q_1,w(q_1))=0$ for all $q_1\in I$ with $w(0)=w(1)=0$. Since $0=\sup\{q_2\geq0: R(q_1,q_2)<\infty\}$,
$R(q_1,q_2)>0$ if $q_1<0$ and $q_2<0$, and $R(q_1,0)=\frac{1}{2}q_1(q_1-1)$, we immediately have that $I=[0,1]$. Then the set $\{q_1\in I: F(q_1,w(q_1))<\infty\}$ coincides with $[0,1]$.
By \cite[Theorem 3.2]{KR11} we have $\mathbb E[S_T^q]=\infty$ for any $q\in\mathbb{R}\setminus[0,1]$.
\finproof
\begin{Cor}The above proposition implies that for any $T>0$, we have
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\sup\{p>0: \mathbb E[S_T^p]<\infty\}=1\quad \mbox{and}\quad \sup\{p>0: \mathbb E[S_T^{-p}]<\infty\}=0.
\eeqnn
In other words, the maximal domain of the moment generating function $\mathbb E[e^{q\log S_T}]$ is $[0,1]$.
\end{Cor}
Let $\Sigma_S(T, k)$ be the implied volatility of a call option written on the asset price $S$ with maturity $T$ and strike $K = e^k$. Combined with a model-free result of Lee \cite{L04}, known as the moment formula, this yields that the asymptotic behavior of the implied volatility at extreme strikes is given by
\begin{equation}\label{Lee formula}
\limsup_{k \rightarrow \pm \infty } \frac{\Sigma_S^2(T, k)}{|k|} = \frac{2}{T},
\end{equation}
which means that the wing behavior of implied volatility for the asset options is
the sharpest possible one by \cite[Theorems 3.2 and 3.4]{L04}.
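The claim can be checked against Lee's moment formula: the wing slope is $\psi(\tilde p)$ with $\psi(q)=2-4(\sqrt{q^2+q}-q)$, and by the Corollary above the critical moments are $\tilde p=\tilde q=0$ on both wings; since $\psi$ is decreasing on $[0,\infty)$ with $\psi(0)=2$, the slope $2/T$ in \eqref{Lee formula} is indeed maximal. A minimal numerical sketch (illustrative, not from the paper):

```python
import math

def lee_psi(q: float) -> float:
    # Lee's slope function: psi(q) = 2 - 4*(sqrt(q^2 + q) - q), q >= 0
    return 2.0 - 4.0 * (math.sqrt(q * q + q) - q)

# psi(0) = 2 gives the maximal wing slope 2/T in the moment formula;
# any finite critical moment q > 0 would give a strictly smaller slope.
max_slope_over_T = lee_psi(0.0)  # equals 2
```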
In the remainder of this subsection, we study the probability tails of $S$, which allows us to replace the ``$\limsup$'' by the usual limit in \eqref{Lee formula} for the left wing of the asset options.
The next technical lemma, whose proof is postponed to the Appendix, shows that the extremal behavior of $V$ is mainly due to one large jump of the driving process $Z$.
\begin{Lem}\label{extremal behavior of V}
Fix $T>0$ and consider the variance process $V$ defined by (\ref{alpha Heston-root}). Then there exists a nonzero boundedly finite measure $\delta$ on $\mathscr{B}(\bar{D}_0[0,T])$ with $\delta(\bar{D}_0[0,T]\backslash D[0,T])=0$ such that, as $u\rightarrow\infty$,
\begin{equation}\label{functional extremal behavior}u^{\alpha}\mathbb P({V}/u\in\cdot)\overset{\widehat{w}}{\longrightarrow}\delta(\cdot)\quad \mbox{on}\quad \mathscr{B}(\bar{D}_0[0,T]),
\end{equation}
where $\delta$ is given by:
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\delta(\cdot)=\sigma_N^\alpha\int_0^T\big(b(1-e^{-as})+xe^{-as}\big)\int_0^\infty \mathbb E\Big[1_{\big\{w_t:=e^{-a(t-s)}y1_{[s,T]}(t)\in\;\cdot\;\big\}}\Big]\nu_{\alpha}(dy)ds,
\eeqnn
and $\nu_{\alpha}$ is defined by (\ref{Levymeasure}). We refer to Hult and Lindskog \cite[page 312]{HL07}
for the definition of $\bar{D}_0[0,T]$ and the vague convergence $\overset{\rm \widehat{w}}{\longrightarrow}$.
\end{Lem}
\begin{Pro}\label{prop1.3} Fix $t>0$. For any $x\ge 0$, we have that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{tail prob of S}
\mathbb P_x(-\log S_t>u)\sim-\Big(\frac{\sigma_N}{2a}\Big)^\alpha
\frac{\iota_\alpha(t)}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)}u^{-\alpha},\quad u\rightarrow+\infty,
\eeqlb
where
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\iota_\alpha(t)=e^{-\alpha at}\int_0^t(b(1-e^{-as})+xe^{-as})(e^{at}-e^{as})^\alpha ds.
\eeqnn
\end{Pro}
\proof
We have by (\ref{alpha Heston-root}) that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}
\label{logS_t}
\log S_t=\log s_0+\int_0^t(r-\frac{1}{2}V_s)ds+\int_0^t\sqrt{V_s}dB_s.
\eeqlb
For any $t>0$, consider the asymptotic behavior of the probability tail of $\int_0^tV_sds$, that is, of $\mathbb P_x(\frac{1}{2}\int_0^tV_sds>u)$ as $u\rightarrow+\infty$.
By Lemma \ref{extremal behavior of V}, as $u\rightarrow+\infty$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} u^{\alpha}\mathbb P({V}/u\in\cdot)\overset{\widehat{w}}{\longrightarrow}\delta(\cdot)\quad \mbox{on}\quad \mathscr{B}(\bar{D}_0[0,t]).
\eeqnn
Define the functional $h: \bar{D}_0[0,t]\longrightarrow \mathbb{R}_+$ by $h(w)=\frac{1}{2}\int_0^t
w_sds$. Let Disc($h$) be the set of discontinuities of $h$. By the definition of $h$ and (\ref{functional extremal behavior}), it is easy to see that $\delta({\rm Disc}(h)) =0$. It follows from \cite[Theorem 2.1]{HL07} that, as $u\rightarrow+\infty$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
u^{\alpha}\mathbb P_x\Big(\frac{1}{2u}\int_0^tV_sds\in\cdot\Big)\overset{v}{\longrightarrow}\delta\circ h^{-1}(\cdot)\quad \mbox{on}\quad \mathscr{B}(\mathbb{R}_+),
\eeqnn
and
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\delta\circ h^{-1}(\cdot)
&=&
\sigma_N^\alpha\int_0^t
\mathbb E[V_s]\int_0^\infty1_{\{\frac{y}{2}\int_s^te^{-a(\zeta-s)}d\zeta\ \in\cdot\ \}}\nu_{\alpha}(dy)ds.
\eeqnn
Thus we have that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb P_x\Big(\frac{1}{2}\int_0^tV_sds>u\Big)\sim-\Big(\frac{\sigma_N}{2a}\Big)^\alpha
\frac{\iota_\alpha(t)}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)}u^{-\alpha},\quad u\rightarrow+\infty.
\eeqnn
Furthermore we note that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb E_x\Big[\Big(\int_0^t\sqrt{V_s}dB_s\Big)^2\Big]=\int_0^t\mathbb E_x[V_s]ds<\infty.
\eeqnn
In view of (\ref{logS_t}), we have that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb P_x(-\log S_t>u)\sim \mathbb P_x\Big(\frac{1}{2}\int_0^tV_sds>u\Big),\quad u\rightarrow+\infty.
\eeqnn
\finproof
\begin{Cor}\label{prop1.4} Let $\Sigma_S(T, k)$ be the implied volatility of the option written on the stock price $S$ with maturity $T$ and strike $K = e^k$.
Then the left wing of $\Sigma_S(T, k)$ has the following
asymptotic shape as $k\rightarrow-\infty$:
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{eq-left-wing of S}
\frac{\sqrt{T}\Sigma_S(T, k)}{\sqrt{2}} &= &\sqrt{-k+\alpha\log(-k)-\frac{1}{2}\log\log(-k)}\nonumber\\
&&-\sqrt{\alpha\log(-k)-\frac{1}{2}\log\log(-k)}+O((\log(-k))^{-1/2}).
\eeqlb
\end{Cor}
\proof Without loss of generality we assume $k<0$. Note that the put option price can be written as
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
P(e^k):=\mathbb E[(e^k-S_T)_+]=\int_{-k}^\infty \mathbb P_x(-\log S_T>u)e^{-u}du.
\eeqnn
By Proposition \ref{prop1.3}, it is not hard to see that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
P(e^k)\sim-\Big(\frac{\sigma_N}{2a}\Big)^\alpha
\frac{\iota_\alpha(T)}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)}e^k (-k)^{-\alpha},\quad k\rightarrow-\infty.
\eeqnn
Then (\ref{eq-left-wing of S}) follows from the above asymptotic equality and \cite[Theorem 3.7]{Guli2010}. \finproof
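The expansion \eqref{eq-left-wing of S} can be evaluated directly. The sketch below (illustrative parameters, not from the paper) also confirms that it recovers the maximal Lee slope, $T\Sigma_S^2(T,k)/|k|\to 2$ as $k\to-\infty$:

```python
import math

def left_wing_iv(k: float, T: float, alpha: float) -> float:
    """Leading-order left-wing implied volatility from (eq-left-wing of S),
    valid for large -k (the O((log(-k))^{-1/2}) remainder is dropped)."""
    assert k < 0 and -k > math.e  # need log(log(-k)) well defined
    A = alpha * math.log(-k) - 0.5 * math.log(math.log(-k))
    return math.sqrt(2.0 / T) * (math.sqrt(-k + A) - math.sqrt(A))
```

For very negative $k$ the correction $A$ is negligible relative to $|k|$, so the slope ratio tends to the Lee bound $2$.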
Figure \ref{fig:impvol asset} presents the implied volatility curves of the asset options.
We use a Monte Carlo method with $10^5$ trajectories and the parameters $V_0= 0.0332$, $a=5$, $b=0.144$, $\sigma = 0.08$ and $\sigma_N = 1$, consistent with those of Nicolato et al. \cite{NPS2017}.
\begin{figure}[h]
\caption{Implied volatilities for asset options}
\label{fig:impvol asset}
\begin{center}
\includegraphics[width=0.65\textwidth]{volatility-stock}
\end{center}
\end{figure}
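The Monte Carlo prices behind Figure \ref{fig:impvol asset} require simulating the variance leg of \eqref{alpha Heston-root}. The following Euler scheme is a minimal sketch, not the authors' implementation: it assumes the normalisation $\nu_\alpha((u,\infty))=-u^{-\alpha}/(\alpha\Gamma(-\alpha)\cos(\pi\alpha/2))$, consistent with \eqref{stable tail estimates}, and approximates the stable driver by its compensated jumps above a truncation level \texttt{eps} (smaller jumps are dropped, introducing a truncation bias):

```python
import math
import random

def poisson(rng, lam):
    # Knuth's inversion method; fine for the small per-step intensities used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_variance_paths(n_paths=2000, T=1.0, n_steps=250,
                            V0=0.0332, a=5.0, b=0.144, sigma=0.08,
                            sigma_N=1.0, alpha=1.5, eps=0.01, seed=42):
    """Euler sketch of the variance process: CIR part plus compensated
    Pareto(alpha) jumps larger than eps, with intensity proportional to V."""
    rng = random.Random(seed)
    dt = T / n_steps
    # assumed normalisation: nu_alpha((u, infinity)) = C_alpha * u**(-alpha)
    C_alpha = -1.0 / (alpha * math.cos(math.pi * alpha / 2.0) * math.gamma(-alpha))
    big_rate = C_alpha * eps ** (-alpha)                           # nu_alpha((eps, inf))
    comp = C_alpha * alpha / (alpha - 1.0) * eps ** (1.0 - alpha)  # int_eps^inf z nu(dz)
    terminal = []
    for _ in range(n_paths):
        v = V0
        for _ in range(n_steps):
            vp = max(v, 0.0)
            dv = a * (b - vp) * dt
            dv += sigma * math.sqrt(vp) * rng.gauss(0.0, math.sqrt(dt))
            # compensated big jumps: Poisson number, Pareto(alpha) sizes above eps
            for _ in range(poisson(rng, vp * big_rate * dt)):
                dv += sigma_N * eps * (1.0 - rng.random()) ** (-1.0 / alpha)
            dv -= sigma_N * vp * comp * dt                         # compensator drift
            v = max(vp + dv, 0.0)
        terminal.append(v)
    return terminal
```

The asset leg then follows by an Euler step of \eqref{logS_t} along each variance path, and option prices are discounted averages of the payoff.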
\subsection{Variance options}
We now consider volatility and variance options, for which a large and growing literature has developed (see for instance \cite{GM2013}, \cite{NPS2017} and \cite{Sepp2008}).
In particular, the upward-sloping implied volatility skew of VIX options is highlighted in \cite{Sepp2008} and \cite{NPS2017}.
In the following, we derive the asymptotic behavior of the tail probability of $V$, which implies the moment explosion condition for $V$ and the extreme-strike behavior of the variance options.
We begin by giving two technical lemmas.
\begin{Lem} \label{Lemma 1.1} Let $X$ be a positive random variable.
\begin{enumerate}[(i)]
\item (Karamata Tauberian Theorem \cite[Theorem 1.7.1]{BGT87}) For constants $C>0$, $\beta>0$ and a slowly varying function (at infinity) $L$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb E[e^{-\lambda X}]\sim C\lambda^{-\beta}L(\lambda),\quad\mbox{as}\quad \lambda\rightarrow\infty,
\eeqnn
if and only if
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb P(X\leq u)\sim\frac{C}{\Gamma(1+\beta)}u^\beta L(1/u),\quad\mbox{as}\quad u\rightarrow 0^+.
\eeqnn
\item (de Bruijn's Tauberian Theorem \cite[Theorem 4]{BT75}) Let $0\leq\beta\leq1$ be a constant, $L$ be a slowly varying function at infinity, and $L^*$ be the
conjugate slowly varying function to $L$. Then
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\log \mathbb E[e^{-\lambda X}]\sim-\lambda^\beta/L(\lambda)^{1-\beta} \quad \mbox{as} \ \lambda\rightarrow\infty,
\eeqnn
if and only if
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\log \mathbb P(X\leq u)\sim-(1-\beta)\beta^{\beta/(1-\beta)}u^{-\beta/(1-\beta)}L^*(u^{-1/(1-\beta)})\quad\mbox{as}\ u\rightarrow0^+.
\eeqnn
\end{enumerate}
\end{Lem}
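As a sanity check of Lemma \ref{Lemma 1.1}(i), the following sketch (not from the paper) verifies the Karamata prediction for a Gamma($\beta$) random variable, whose Laplace transform $(1+\lambda)^{-\beta}\sim\lambda^{-\beta}$ gives $C=1$ and $L\equiv1$, so that $\mathbb P(X\leq u)\sim u^\beta/\Gamma(1+\beta)$ as $u\to0^+$:

```python
import math

def gamma_cdf(beta: float, u: float, terms: int = 60) -> float:
    """P(X <= u) for X ~ Gamma(beta, 1), via the power series for the
    lower incomplete gamma function:
    gamma(beta, u) = u^beta e^{-u} * sum_n u^n / (beta (beta+1) ... (beta+n))."""
    s, term = 0.0, 1.0 / beta
    for n in range(terms):
        s += term
        term *= u / (beta + n + 1)
    return (u ** beta) * math.exp(-u) * s / math.gamma(beta)

def karamata_prediction(beta: float, u: float) -> float:
    # tail prediction from Lemma 1.1(i) with C = 1 and L = 1
    return u ** beta / math.gamma(1.0 + beta)
```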
\begin{Lem}\label{moments of V} For any $0<\beta<\alpha$, there exists a locally bounded function $ C(\cdot)\ge 0$ such that for any $T\geq0$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb {E}_x\Big[\sup_{0\le t\le T}V_t^\beta\Big]\le C(T)(1+x^\beta).
\eeqnn \end{Lem}
\begin{Pro}\label{prop1.2} (probability tails of $V_t$) Fix $t>0$. For any $x\ge 0$, we have that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{tail prob of V}
\mathbb P_x(V_t>u)
\sim
-\frac{\sigma_N^\alpha}{\alpha\Gamma(-\alpha)\cos(\pi\alpha/2)} \big(q_\alpha(t)+p_\alpha(t)x\big)
u^{-\alpha}, \quad \mbox{as}\quad u\to \infty, \eeqlb
where
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}\label{3.2}
&&~ p_\alpha(t) = \frac{1}{a(\alpha-1)}\left(e^{-a t} - e^{-\alpha a t}\right),
\quad
q_\alpha(t) = b\left(\frac{1}{\alpha a}(1 - e^{-\alpha a t}) -
p_\alpha(t)\right).
\eeqnn
Furthermore, \begin{enumerate}[(i)]
\item if $\sigma>0$, then
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{small prob V with sigma}
\mathbb P_x(V_t\leq u)\sim u^{2ab/\sigma^2}\frac{\bar{v}_t^{2ab/\sigma^2}}{\Gamma\left(1+{2ab}/{\sigma^2}\right)}\exp\Big(-x\bar{v}_t-ab\int_{\bar{v}_t}^\infty \Big(\frac{z}{\Psi_\alpha(z)}-\frac{2}{\sigma^2z}\Big)dz\Big), \, \mbox{as}\,\, u\to 0,\eeqlb
where $\bar{v}_t$ is the minimal solution of the ODE
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{ODE barV}
\frac{d}{dt}\bar{v}_t=-\Psi_\alpha(\bar{v}_t),\quad t>0,
\eeqlb
with singular initial condition $\bar{v}_{0+}=\infty$;
\item if $\sigma=0$, then
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{small proba of V 2}
\log \mathbb P_x(V_t\leq u)\sim-\frac{\alpha-1}{2-\alpha}\left(-ab\cos\big(\frac{\pi\alpha}{2}\big)\right)^{\frac{1}{\alpha-1}}\sigma_N^{-\frac{\alpha}{\alpha-1}}u^{-\frac{2-\alpha}{\alpha-1}},\quad\mbox{as}\quad u\rightarrow 0.
\eeqlb
\end{enumerate}
\end{Pro}
\proof We have by (\ref{alpha Heston-root}) that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{Ito transform}
V_t=e^{-at}V_0+ab\int_0^te^{-a(t-s)}ds+\sigma\int_0^te^{-a(t-s)}\sqrt{V_s}dB_s+\sigma_N\int_0^te^{-a(t-s)}V_{s-}^{1/\alpha}dZ_s.
\eeqlb Note that $\mathbb E_x[V_t]=e^{-at}x+b(1-e^{-at})$. By Markov's inequality,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}
\mathbb P_x\Big(\Big|\int_0^te^{-a(t-s)}\sqrt{V_s}dB_s\Big|>u\Big)&\leq&u^{-2}\mathbb E_x\Big[\int_0^te^{-2a(t-s)}V_sds\Big]\nonumber\\
&\leq& \Big(\frac{x}{a}+bt\Big)u^{-2}.\label{variance estimates}
\eeqlb
It follows from Lemma \ref{moments of V} that $\mathbb {E}[\sup_{0\leq t\leq T}(\sqrt[\alpha]{V_t})^{\alpha+\delta}]< \infty$ for
$0<\delta<\alpha(\alpha-1)$. Then by Hult and Lindskog \cite[Theorem~3.4]{HL07},
we have as $u\rightarrow\infty$,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}
\mathbb P_x\Big(\sigma_N\int_0^te^{-a(t-s)}V_{s-}^{1/\alpha} dZ_s> u\Big)
&\sim&
\nu_{\alpha}(u,\infty)\sigma_N^{\alpha}\int_0^te^{-\alpha a(t-s)} \mathbb{E}_x[V_s]
ds\nonumber\\
&\sim&
-\frac{\sigma_N^\alpha}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)}\big(q_\alpha(t) + p_\alpha(t)x\big)u^{-\alpha}.\label{stable tail estimates}
\eeqlb
In view of (\ref{Ito transform}), (\ref{variance estimates}) and (\ref{stable tail estimates}), the extremal behavior of $V_t$ is
determined by the fourth term on the right-hand side of (\ref{Ito transform}). Then we have, as $u\rightarrow \infty$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{P}_x(V_t>u)
&\sim&
\mathbb P_x\Big(\sigma_N\int_0^te^{-a(t-s)}V_{s-}^{1/\alpha} dZ_s> u\Big),
\eeqnn
which gives (\ref{tail prob of V}). On the other hand, by Proposition \ref{Pro: joint laplace transform} we have
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb E_x\left[e^{-\lambda V_t}\right]=\exp\Big(-xv_t(\lambda)-ab\int_0^tv_s(\lambda)ds \Big),
\eeqnn
where $v_t(\lambda)$ is the unique solution of the following ODE:
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{ODE V}
\frac{\partial v_t(\lambda)}{\partial t}=-\Psi_\alpha(v_t(\lambda)),\quad v_0(\lambda)=\lambda.
\eeqlb
It follows from \cite[Theorems 3.5 and 3.8, Corollary 3.11]{Li11} that $\bar{v}_t=\uparrow\lim_{\lambda\rightarrow\infty}v_t(\lambda)$ exists in $(0,\infty)$
for all $t>0$, and $\bar{v}_t$ is the minimal solution of the singular initial value problem (\ref{ODE barV}).
First consider the case of $\sigma>0$. By (\ref{ODE V}),
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\int_0^tv_s(\lambda)ds=\int_{v_t(\lambda)}^\lambda \frac{u}{\Psi_\alpha(u)}du=\int_{v_t(\lambda)}^\lambda \frac{2}{\sigma^2u}du+
\int_{v_t(\lambda)}^\lambda\Big(\frac{u}{\Psi_\alpha(u)}-\frac{2}{\sigma^2u}\Big)du, \quad \lambda>0,\ t>0.
\eeqnn
Note that $\frac{2}{\sigma^2u}-\frac{u}{\Psi_\alpha(u)}=O(u^{-(3-\alpha)})$ as $u\rightarrow\infty$ and
thus $0<\int_{\bar{v}_t}^\infty\Big(\frac{2}{\sigma^2u}-\frac{u}{\Psi_\alpha(u)}\Big)du<\infty$. A simple calculation shows that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}_x \left[e^{-\lambda V_t}\right]
\sim\bar{v}_t^{2ab/\sigma^2}\lambda^{-2ab/\sigma^2}
\exp\left(-x\bar{v}_t-ab\int_{\bar{v}_t}^\infty \Big(\frac{u}{\Psi_\alpha(u)}-\frac{2}{\sigma^2u}\Big)du\right),
\quad \lambda\rightarrow\infty.
\eeqnn
Then Karamata Tauberian Theorem (see Lemma \ref{Lemma 1.1} (i)) gives (\ref{small prob V with sigma}).
Now we turn to the case of $\sigma=0$.
Set $\sigma_1=-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}$. Recall that $\bar{v}_t=\uparrow\lim_{\lambda\rightarrow\infty}v_t(\lambda)\in(0,\infty)$, which is the minimal solution of the singular initial value problem (\ref{ODE barV}) with $\sigma=0$.
Still by (\ref{ODE V}),
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\log \mathbb{E}_x\left[e^{-\lambda V_t}\right]=
-xv_t(\lambda)-ab\int_{v_t(\lambda)}^\lambda \frac{1}{a+\sigma_1u^{\alpha-1}}du\sim
\frac{ab}{\alpha-2}\frac{\lambda}{a+\sigma_1\lambda^{\alpha-1}}\sim\frac{ab}{\sigma_1(\alpha-2)}\lambda^{2-\alpha},\quad \lambda\rightarrow\infty.
\eeqnn
Then de Bruijn's Tauberian Theorem (see Lemma \ref{Lemma 1.1} (ii)) gives (\ref{small proba of V 2}).
\finproof
\begin{Cor}As a consequence of Proposition \ref{prop1.2}, we have, for any $\alpha\in(1,2)$,
\begin{equation}
\{p\in\mathbb R: \mathbb E[V_t^p]<\infty\}=\big(-\frac{2ab}{\sigma^2}, \alpha\big),
\end{equation}
where by convention $2ab/\sigma^2=+\infty$ if $\sigma=0$.
\end{Cor}
\proof By integration by parts, we have, for $p>0$,
\[\mathbb E[V_t^{p}]=-\lim_{u\rightarrow\infty}u^p\mathbb P(V_t>u)+p\int_0^\infty u^{p-1}\mathbb P(V_t>u)du.\]By Proposition \ref{prop1.2}, $\mathbb P(V_t>u)\sim C(t)u^{-\alpha}$ as $u\rightarrow\infty$ for some function $C(t)$. Then we obtain $\mathbb E[V_t^p]<\infty$ for $0\leq p<\alpha$ and $\mathbb E[V_t^p]=\infty$ for $p\geq\alpha$. Similarly, we consider $\mathbb E[(1/V_t)^p]$ and have $\mathbb P(1/V_t>u)\sim D(t)u^{-2ab/\sigma^2}$ as $u\rightarrow \infty$. Then we obtain $\mathbb E[(1/V_t)^p]<\infty$ for $0\leq p<2ab/\sigma^2$ and $\mathbb E[(1/V_t)^p]=\infty$ if $p\geq 2ab/\sigma^2$.
\finproof
\begin{Cor}\label{implied volatility based on variance processes} Let $\Sigma_V(T, k)$ be the implied volatility of a call option written on the variance process $V$ with maturity $T$ and strike $K = e^k$, and let $\psi(q)=2 -4(\sqrt{q^2+q}-q)$.
Then the right wing of $\Sigma_V(T, k)$ has the following
asymptotic shape:
\begin{equation}\label{eq-right-wing}
\Sigma_V(T, k)\sim\Big(\frac{\psi(\alpha)}{T}\Big)^{1/2}\sqrt{k}, \quad k\rightarrow+\infty.
\end{equation}
The left wing satisfies \begin{enumerate}[(i)]
\item if $\sigma>0$, then
\begin{equation}\label{eq-left-wing 1}
\Sigma_V(T, k)\sim\Big(\frac{\psi(\frac{2ab}{\sigma^2})}{T}\Big)^{1/2}\sqrt{-k}, \quad k\rightarrow-\infty;
\end{equation}
\item if $\sigma=0$, then
\begin{equation}\label{eq-left-wing 2}
\Sigma_V(T, k)\sim\frac{1}{\sqrt{2T}}(-k)\Big(\log\frac{e^k}{P(e^k)}\Big)^{-1/2}, \quad k\rightarrow-\infty,
\end{equation}
where $P(e^k)=\mathbb E[(e^k-V_T)^+]$.
\end{enumerate}
\end{Cor}
\proof Combining (\ref{tail prob of V}) and \cite[Proposition 2.2-(a)]{NPS2017}, we directly obtain (\ref{eq-right-wing}). Similarly, (\ref{small prob V with sigma}) and
\cite[Proposition 2.4-(a)]{NPS2017} lead to (\ref{eq-left-wing 1}). In the case where $\sigma=0$, (\ref{small proba of V 2}) implies that $\sup\{p>0: \mathbb E[V_t^{-p}]<\infty\}=\infty$. Then (\ref{eq-left-wing 2}) follows from \cite[Theorem 2.3-(iii)]{NPS2017}.
\finproof
Corollary \ref{implied volatility based on variance processes} gives the explicit behavior of the implied volatility of variance options at extreme strikes far from the money. We note that
the right wing depends only on the parameter $\alpha$, which is the characteristic parameter of the jump term. When $\alpha$
decreases, the tail becomes heavier and the slope in (\ref{eq-right-wing}) increases. In contrast, the left wing depends on the parameters of the pure CIR part with Brownian diffusion, and the governing coefficient $2ab/\sigma^2$ in \eqref{eq-left-wing 1} is linked to the Feller condition. When the Brownian term disappears, i.e. $\sigma=0$, a discontinuity occurs in the left-wing behavior of the variance volatility surface.
\section{Jump cluster behaviour }\label{sec: clusters}
In this section, we study the jump cluster phenomenon by giving a decomposition formula of the variance process $V$ and we analyze some properties of the cluster processes.
\subsection{Cluster decomposition of the variance process}
Let us fix a jump threshold ${y}=\sigma_N\overline{y}$ and denote by $\{\tau_n\}_{n\geq 1}$ the sequence of jump times of $V$ whose sizes are larger than $y$. We call $\{\tau_n\}_{n\geq 1}$ the large jumps.
By separating the large and small jumps, the variance process \eqref{Vol integral} can be written as
\begin{equation}\label{V integral rewritten}
\begin{array}{rcl}
\displaystyle V_t & \displaystyle = V_0 &\displaystyle + \int_0^t a\left(b-\frac{\sigma_N\Theta(\alpha, y)V_s}{a}-V_s\right) ds +
\sigma\int_{0}^t \int_0^{V_s} W(ds,du) \\
& & \displaystyle +\sigma_N \int_{0}^t \int_0^{V_{s-} } \int_0^{\overline{y}}\zeta \widetilde{N}
(ds, du, d\zeta)+\sigma_N \int_{0}^t \int_0^{V_{s-} } \int_{\overline{y}}^{\infty}\zeta {N}
(ds, du, d\zeta)
\end{array}
\end{equation}
where
\begin{eqnarray}\label{def-a-b-theta}
\Theta(\alpha, y) &=& \int_{\overline{y}}^{\infty}\zeta\nu_{\alpha}(d\zeta)=\frac{2}{\pi}\alpha\Gamma(\alpha-1)\sin\left(\frac{\pi\alpha}{2}\right)\overline{y}^{1-\alpha}.
\end{eqnarray}
We set \[\widetilde{a}(\alpha, y) = a+ \sigma_N\Theta(\alpha, y) \quad \text{ and }\quad
\widetilde{b}(\alpha, y) = \frac{ab}{a+ \sigma_N \Theta(\alpha, y)}.\]
Then between two large jump times, that is, for any $t\in[\tau_n, \tau_{n+1})$, we have
\begin{equation}\label{lambda integral-without-compensation}
\begin{array}{rcl}
\displaystyle V_t & \displaystyle = V_{\tau_n} &\displaystyle + \int_{\tau_n}^t \widetilde{a}(\alpha, y) \big(\widetilde{b}(\alpha, y) - V_s\big)ds +
\sigma\int_{\tau_n}^t \int_0^{V_s} W(ds,du) \\
& & \displaystyle +\sigma_N \int_{\tau_n}^t \int_0^{V_{s-} } \int_0^{\overline{y}}\zeta \widetilde{N}
(ds, du, d\zeta).
\end{array}
\end{equation}
The expression \eqref{lambda integral-without-compensation} shows that two phenomena arise between two large jumps. First, the long-term mean level $b$ is reduced.
This effect is standard,
since the mean level $\widetilde{b}(\alpha, y)$ becomes lower to compensate for the large jumps and preserve the global mean level $b$.
Second, and more surprisingly, the mean-reversion speed $a$ is increased; that is, the volatility decays more quickly between two large jumps.
Moreover, this speed grows as the parameter $\alpha$ decreases and tends to infinity as $\alpha$ approaches $1$, since $\Theta(\alpha, y) \sim (\alpha-1)^{-1}$.
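The effect of the threshold on the effective parameters is easy to check numerically. The small sketch below (with $\sigma_N=1$ so that $\overline{y}=y$, an illustrative normalisation) implements $\Theta(\alpha,y)$ from \eqref{def-a-b-theta} and verifies that $\widetilde{a}\,\widetilde{b}=ab$, i.e. the global mean level is preserved:

```python
import math

def theta(alpha: float, ybar: float) -> float:
    # Theta(alpha, y) = (2/pi) * alpha * Gamma(alpha - 1) * sin(pi alpha / 2) * ybar^(1 - alpha)
    return (2.0 / math.pi) * alpha * math.gamma(alpha - 1.0) \
        * math.sin(math.pi * alpha / 2.0) * ybar ** (1.0 - alpha)

def tilde_params(a: float, b: float, sigma_N: float, alpha: float, ybar: float):
    """Effective parameters between two large jumps: a_tilde > a, b_tilde < b."""
    th = theta(alpha, ybar)
    a_t = a + sigma_N * th
    b_t = a * b / a_t
    return a_t, b_t
```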
We introduce the truncated process of $V$ up to the jump threshold, which will serve as the fundamental part in the decomposition, as
\begin{equation}\label{rhat1}\begin{split}
{V}^{(y)}_t =V_0 &+ \int_0^t \widetilde{a}(\alpha, y) \big( \widetilde{b}(\alpha, y) - {V}_s^{(y)} \big) ds
+ \sigma \int_0^t \int_0^{{V}_s^{(y)}} W(ds,du)\\
&+ \sigma_N \int_0^t \int_0^{{V}_{s-}^{(y)} } \int_0^{\overline{y}}\zeta \widetilde{N}
(ds,du,d\zeta), \quad t\geq 0.
\end{split}\end{equation}
Like $V$, the process $V^{(y)}$ is also a CBI process. By definition, the jumps of the process $V^{(y)}$ are all smaller than $y$. In addition, $V^{(y)}$ coincides with $V$ before the first large jump time $\tau_1$. The next result studies the first large jump and its size, which will be useful for the decomposition.
\begin{Lem}\label{large jump size} We have
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{tau1}
\mathbb{P}(\tau_1>t)=\mathbb{E}\Big[\exp{\Big\{-\Big(\int_{\overline y}^\infty\nu_\alpha(d\zeta)\Big)\Big(\int_0^t{V}^{(y)}_sds\Big)\Big\}}\Big].
\eeqlb
The jump $\Delta V_{\tau_1}:=V_{\tau_1}-V_{\tau_1-}$ is independent of
$\tau_1$ and $V^{(y)}$, and satisfies
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{size distribution}
\mathbb{P}(\Delta V_{\tau_1}\in d\zeta)=1_{\{\zeta>{y}\}}\,\frac{ \alpha{y}^\alpha} {\zeta^{1+\alpha}}d\zeta.
\eeqlb
\end{Lem}
It is not hard to
see that $\mathbb{P}(V_t\geq V^{(y)}_t,\ \forall t\geq0)=1$. Then the large-jump term in \eqref{V integral rewritten} can be separated into two parts as
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{large jumps}
\int_0^t\int_0^{V_{s-}}\int_{\bar{y}}^\infty N(ds,du,d\zeta)=\int_0^t\int_0^{V^{(y)}_{s-}}\int_{\bar{y}}^\infty N(ds,du,d\zeta)+\int_0^t\int_{V^{(y)}_{s-}}^{V_{s-}}\int_{\bar{y}}^\infty N(ds,du,d\zeta).
\eeqlb
Let
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{mother jump intensity}
J_t^{(y)}=\int_0^t\int_0^{V^{(y)}_{s-}}\int_{\bar{y}}^\infty N(ds,du,d\zeta),\quad t\geq 0,\eeqlb
which is a point process whose arrival times $\{T_n\}_{n\geq 1}$ form a subset of the large jump times; these jumps are called the mother jumps. Each mother jump induces a cluster process $v^{(n)}$ which starts from time $T_n$ with initial value $\Delta V_{T_n}=V_{T_n}-V_{T_n-}$ and is given recursively by
\begin{equation}\label{cluster}\begin{split}
v_t^{(n)}=\Delta V_{T_n} &-a \int_{T_n}^t v^{(n)}_s ds + \sigma \int_{T_n}^t
\int_{V^{(y)}_s+\sum_{i=1}^{n-1}v^{(i)}_s}^{V^{(y)}_s+\sum_{i=1}^{n}v^{(i)}_s} W(ds, du)\\
&+ \sigma_N \int_{T_n}^t \int_{V^{(y)}_{s-}+\sum_{i=1}^{n-1}v^{(i)}_{s-}}^{V^{(y)}_{s-}+\sum_{i=1}^{n}v^{(i)}_{s-}} \int_{\mathbb{R}^+}
\zeta \widetilde{N}(ds,du, d\zeta),\quad t\in[T_n,\infty).\end{split}\end{equation}
The next result provides the decomposition of $V$ as the sum of the fundamental process $V^{(y)}$ and a sequence of cluster processes. The decomposition form is inspired by Duquesne and Labbe \cite{DuqLab}.
\begin{Pro}\label{Pro: decomposition}
The variance process $V$ given by \eqref{Vol integral} has the decomposition:
\begin{equation}\label{decomposition}{V_t=V^{(y)}_t+\sum_{n=1}^{J^{(y)}_t}u_{t-T_n}^{(n)}},\quad t\geq 0, \end{equation}
where $u^{(n)}_t=v^{(n)}_{T_n+t}$ with $v^{(n)}$ given by (\ref{cluster}).
Moreover, we have that
\begin{enumerate}[(1)]
\item $\{u^{(n)}: n=1,2,\cdots\}$ is a sequence of independent, identically distributed processes and, for each $n$, $u^{(n)}$ has the same distribution as an $\alpha$-$\mathrm{CIR}(a,0,\sigma,\sigma_N,\alpha)$ process given by
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{CB}
u_t=u_0-a\int_0^tu_sds + \sigma\int_0^t \sqrt{u_s} dB_s
+\sigma_N\int_0^t\sqrt[\alpha]{u_{s-}}dZ_s,
\eeqlb
where $u_0\overset{d}{=}\Delta V_{\tau_1}$ and its distribution is given by \eqref{size distribution}.
\item The pair $({V}^{(y)}, J^{(y)})$ is independent of $\{u^{(n)}\}$. Conditional on ${V}^{(y)}$, $J^{(y)}$ is a time-inhomogeneous Poisson process with intensity function $\big(\int_{\bar y}^\infty\nu_\alpha(d\zeta)\big){V}^{(y)}_\cdot$.
\end{enumerate}
\end{Pro}
Note that each cluster process has the same distribution as an $\alpha$-square root jump process similar to \eqref{alpha Heston-root} but with parameter $b=0$, that is, an $\alpha$-$\mathrm{CIR}(a,0,\sigma,\sigma_N,\alpha)$ process, also known as a CB process without immigration. The jumps given by $(J^{(y)}_t,t\geq 0)$ are called mother jumps in the sense that each mother jump $T_n$ induces a cluster of jumps, the so-called child jumps, via its cluster (branching) process $u^{(n)}$. Conversely, any jump from $\big(\int_0^t\int_{V^{(y)}_{s-}}^{V_{s-}}\int_{\bar{y}}^\infty N(ds,du,d\zeta), t\geq0\big)$ in \eqref{large jumps}, that is, a large jump which is not a mother jump, is a child jump of some mother jump.
\subsection{The cluster processes}
We finally focus on the cluster processes and present some of their properties. We are particularly interested in two quantities. The first one is the number of clusters before a given time $t$, which is equal to the number of mother jumps. The second one is the duration of each cluster process.
\begin{Pro}\label{Pro: decomposition2} \begin{enumerate}[(1)]
\item The expected number of clusters during $[0,t]$ is
\begin{equation}\label{number cluster}\mathbb E[J_t^{(y)}]=
\frac{(1-\alpha)\sigma_N^\alpha}{\cos(\pi\alpha/2)\Gamma(2-\alpha)y^\alpha}\Big(\widetilde{b}(\alpha, y)t+\frac{V_0-\widetilde{b}(\alpha, y)}{\widetilde{a}(\alpha, y)}(1-e^{-\widetilde{a}(\alpha, y)t})\Big).\end{equation}
\item Let $\theta_n:=\inf\{t\geq0: u_t^{(n)}=0\}$ be the duration of the cluster $u^{(n)}$. We have
$ \mathbb P(\theta_n<\infty)=1$ and
\begin{equation}\label{duration cluster}\mathbb E[\theta_n]= \alpha y^\alpha\int_0^\infty\frac{dz}{\Psi_{\alpha}(z)}\int_{y}^{\infty}\frac{1-e^{-\zeta z}}{\zeta^{1+\alpha}}d\zeta.
\end{equation}
\end{enumerate}
\end{Pro}
We note that the expected durations of all cluster processes are equal, which means that the initial value of $u^{(n)}$, that is, the size of the triggering mother jump, has no impact on the expected duration. By \eqref{duration cluster}, we have
\begin{equation*}\mathbb E[\theta_n]=
\alpha \int_0^\infty\frac{dz}{\Psi_{\alpha}(z)}\int_{1}^{\infty}\frac{1-e^{-\zeta y z}}{\zeta^{1+\alpha}}d\zeta,
\end{equation*}
which implies that $\mathbb{E}[\theta_n]$ is increasing in $y$. This is natural, since larger jumps induce longer-lasting effects. The duration is nevertheless typically short, so $\theta_n$ exhibits no long-range property, because we have the following estimate:
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{light tails}
\mathbb{P}(\theta_n>t)\leq\frac{\alpha y}{\alpha-1}q_1e^{-a(t-1)},\quad t>1,
\eeqlb
for some constant $0<q_1<\infty$.
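Formula \eqref{duration cluster} can be evaluated numerically. The sketch below assumes the branching mechanism $\Psi_\alpha(z)=az+\frac{\sigma^2}{2}z^2-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}z^\alpha$ (consistent with the $\sigma=0$ case used in the proof of Proposition \ref{prop1.2}; the parameter values are illustrative) and uses the substitution $\zeta=y/s$, which turns \eqref{duration cluster} into $\mathbb E[\theta_n]=\alpha\int_0^\infty\frac{dz}{\Psi_\alpha(z)}\int_0^1(1-e^{-yz/s})s^{\alpha-1}ds$:

```python
import math

def psi_alpha(z, a, sigma, sigma_N, alpha):
    # assumed branching mechanism: a z + (sigma^2/2) z^2 + sigma_1 z^alpha,
    # with sigma_1 = -sigma_N^alpha / cos(pi alpha / 2) > 0 for alpha in (1, 2)
    sigma1 = -sigma_N ** alpha / math.cos(math.pi * alpha / 2.0)
    return a * z + 0.5 * sigma ** 2 * z ** 2 + sigma1 * z ** alpha

def expected_duration(y, a, sigma, sigma_N, alpha, nz=400, ns=200):
    """Numerical value of E[theta_n]: trapezoid rule on a log grid in z,
    midpoint rule in s on (0, 1)."""
    logz = [math.log(1e-6) + i * (math.log(1e6) - math.log(1e-6)) / nz
            for i in range(nz + 1)]
    zs = [math.exp(l) for l in logz]

    def inner(z):
        h = 1.0 / ns
        return sum((1.0 - math.exp(-y * z / ((j + 0.5) * h)))
                   * ((j + 0.5) * h) ** (alpha - 1.0)
                   for j in range(ns)) * h

    # integrate f(z) dz = f(z) z dlog(z)
    f = [inner(z) / psi_alpha(z, a, sigma, sigma_N, alpha) * z for z in zs]
    dlog = logz[1] - logz[0]
    total = sum(0.5 * (f[i] + f[i + 1]) * dlog for i in range(nz))
    return alpha * total
```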
We illustrate in Figure \ref{fig: cluster} the behavior of the jump cluster processes described by the above proposition. The parameters are similar to those in Figure \ref{fig: variance}, except that we compare three different values $\alpha=1.2$, $1.5$ and $1.8$. The first graph shows the expected number of clusters given by \eqref{number cluster}, as a function of $y$, for a period of $t=14$. We see that when the jump threshold $y$ increases, there are fewer clusters; in other words, one has to wait longer for a very large mother jump. However, once such a jump occurs, more large child jumps may be induced, so the cluster duration increases with $y$. For large enough $y$, the number of clusters is decreasing in $\alpha$; in this case, the large jumps play a dominant role. For small values of $y$, there is a mixed impact of both small and large jumps which breaks the monotonicity in $\alpha$. The second graph illustrates the duration of one cluster, given by \eqref{duration cluster}.
Although the duration increases with $y$, it remains relatively short, due to its finite expectation and the exponentially decreasing probability tail
given by (\ref{light tails}).
\begin{figure}
\caption{The expected number of clusters (left) and the duration of one cluster (right) as a function of the jump threshold $y$, for different values of $\alpha$.}
\begin{center}
\includegraphics[width=0.49\textwidth]{number_cluster.eps}
\includegraphics[width=0.49\textwidth]{duration_cluster.eps}
\end{center}
\label{fig: cluster}
\end{figure}
When the jump threshold $y$ becomes extremely large, the point process $\{J_t^{(y)}\}$ is asymptotically a Poisson process and the expected number of clusters converges to a fixed level, as shown by the following result.
\begin{Pro}\label{Pro: decomposition3}\; Let $\{y_n\}_{n\geq 1}$ be a sequence of positive thresholds with $y_n\sim cn^{1/\alpha}$ as $n\rightarrow\infty$, where $c$ is some positive constant. Then for each $t\geq0$,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{convergence of J_t}
J^{(y_n)}_{nt}\overset{w}{\longrightarrow} J_t,
\eeqlb
as $n\rightarrow\infty$, where $J$ is a Poisson process with parameter $\lambda$ given by
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\lambda=-\frac{\sigma_N^\alpha b}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)c^\alpha},
\quad 1<\alpha<2.
\eeqnn
\end{Pro}
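Both the finite-$y$ formula \eqref{number cluster} and this Poisson limit can be checked numerically. The sketch below (with $\overline{y}=y/\sigma_N$ and the parameter values of the earlier numerical section, used here as an illustration) verifies in particular that $\mathbb E[J^{(y_n)}_{nt}]/(\lambda t)\to1$ along $y_n=cn^{1/\alpha}$:

```python
import math

def theta(alpha, ybar):
    # Theta(alpha, y) from (def-a-b-theta)
    return (2.0 / math.pi) * alpha * math.gamma(alpha - 1.0) \
        * math.sin(math.pi * alpha / 2.0) * ybar ** (1.0 - alpha)

def expected_clusters(t, y, V0, a, b, sigma_N, alpha):
    """E[J_t^(y)] from (number cluster)."""
    th = theta(alpha, y / sigma_N)
    a_t = a + sigma_N * th
    b_t = a * b / a_t
    pref = (1.0 - alpha) * sigma_N ** alpha / (
        math.cos(math.pi * alpha / 2.0) * math.gamma(2.0 - alpha) * y ** alpha)
    return pref * (b_t * t + (V0 - b_t) / a_t * (1.0 - math.exp(-a_t * t)))

def poisson_rate(c, b, sigma_N, alpha):
    """Limiting Poisson intensity lambda of Proposition (decomposition3)."""
    return -sigma_N ** alpha * b / (
        alpha * math.cos(math.pi * alpha / 2.0) * math.gamma(-alpha) * c ** alpha)
```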
\section{Appendix}
{\bf Proof of Proposition \ref{Pro: joint laplace transform}. } As a direct consequence of \cite{DPS2000} and \cite{KR11}, the proof mainly serves to provide the explicit form of the generalized Riccati equations.
By (\ref{Heston-integral}) we have
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
dY_t=(r-\frac{1}{2}V_t)dt+\rho\int_0^{V_t}W(dt,du)+\sqrt{1-\rho^2}\int_0^{V_t}\overline{W}(dt,du).
\eeqnn
By Ito's formula, we have that the process $(Y_t, V_t, \int_0^tV_sds)$ is an affine process with generator given by
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
Af(y,v,u)&=& (r-\frac{1}{2}v)f'_y(y,v,u)+ a(b-v)f'_v(y,v,u)+vf'_u(y,v,u)\\
& &+\frac{1}{2}vf''_{yy}(y,v,u)+\rho\sigma vf''_{yv}(y,v,u)+\frac{1}{2}\sigma^2vf''_{vv}(y,v,u)\\
& &+\sigma_N^\alpha v\int_0^\infty \big(f(y,v+\zeta,u)-f(y,v,u)-f'_v(y,v,u)\zeta\big)\nu_{\alpha}(d\zeta).
\eeqnn
Denote by $X_t=(Y_t,V_t,\int_0^tV_sds)$. We aim to find some functions $(\phi,\tilde{\Psi})\in\mathbb{C}\times\mathbb{C}^3$
with $\phi(0,\xi)=0$ and $\tilde{\Psi}(0,\xi)=\xi$ such that the following duality holds
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{duality}
{\mathbb E}\left[e^{\langle\xi,X_T\rangle}\right]=\exp\Big(\phi(T,\xi)+\langle\tilde{\Psi}(T,\xi),X_0\rangle\Big).
\eeqlb
In fact, if
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
M_t=f(t,X_t)=\exp\Big(\phi(T-t,\xi)+\langle\tilde{\Psi}(T-t,\xi),X_t\rangle\Big)
\eeqnn
is a martingale, then we immediately have that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}[e^{\langle \xi,X_T\rangle}]=\mathbb{E}[M_T]=M_0=\exp\Big(\phi(T,\xi)+\langle\tilde{\Psi}(T,\xi),X_0\rangle\Big),
\eeqnn
which implies (\ref{duality}). Now assume that $(\phi,\tilde{\Psi})$ are sufficiently differentiable. Applying the Ito formula to
$f(t,X_t)$, we have that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
M_T-M_0&=&\mbox{local martingale part}-\int_0^Tf(t,X_t)\Big(\dot{\phi}(T-t,\xi)+\langle X_t,\dot{\tilde{\Psi}}(T-t,\xi)\rangle\Big)dt\\
&&+\int_0^Tf(t,X_t)\tilde{\Psi}_1(T-t,\xi)(r-\frac{1}{2}V_t)dt+\int_0^Tf(t,X_t)\tilde{\Psi}_2(T-t,\xi)a(b-V_t)dt\\
&&+\int_0^Tf(t,X_t)\tilde{\Psi}_3(T-t,\xi)V_tdt+\frac{1}{2}\int_0^T
f(t,X_t)\tilde{\Psi}^2_1(T-t,\xi)V_tdt\\
&&+\rho\sigma\int_0^Tf(t,X_t)\tilde{\Psi}_1(T-t,\xi)\tilde{\Psi}_2(T-t,\xi)V_tdt+\frac{1}{2}\sigma^2\int_0^Tf(t,X_t)\tilde{\Psi}_2^2(T-t,\xi)
V_tdt\\
&&+\sigma_N^{\alpha}\int_0^Tf(t,X_t)V_t\int_0^\infty \Big[\exp\{\tilde{\Psi}_2(T-t,\xi)z\}-1-\tilde{\Psi}_2(T-t,\xi)z\Big]\nu_{\alpha}(dz)dt,
\eeqnn
where $\tilde{\Psi}=(\tilde{\Psi}_1,\tilde{\Psi}_2,\tilde{\Psi}_3)$. Then $f(t,X_t)$ is a local martingale, if
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\dot{\phi}(T-t,\xi)=r\tilde{\Psi}_1(T-t,\xi)+ab\tilde{\Psi}_2(T-t,\xi),\quad\dot{\tilde{\Psi}}_1(T-t,\xi)=0, \quad\dot{\tilde{\Psi}}_3(T-t,\xi)=0,
\eeqnn
and
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\dot{\tilde{\Psi}}_2(T-t,\xi)&=&-\frac{1}{2}\tilde{\Psi}_1(T-t,\xi)-a\tilde{\Psi}_2(T-t,\xi)+\tilde{\Psi}_3(T-t,\xi)\\
&&+\frac{1}{2}\tilde{\Psi}_1^2(T-t,\xi)+\rho\sigma\tilde{\Psi}_1(T-t,\xi)\tilde{\Psi}_2(T-t,\xi)+\frac{1}{2}\sigma^2\tilde{\Psi}^2_2(T-t,\xi)\\
&&+\sigma_N^{\alpha}\int_0^\infty \Big(e^{z\tilde{\Psi}_2(T-t,\xi)}-1-z\tilde{\Psi}_2(T-t,\xi)\Big)\nu_{\alpha}(dz).
\eeqnn
Then we have that $\tilde{\Psi}_1(t,\xi)=\xi_1$ and $\tilde{\Psi}_3(t,\xi)=\xi_3$ for $0\leq t\leq T$. Furthermore $\tilde{\Psi}_2(t,\xi)$ solves the ODE
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\dot{\tilde{\Psi}}_2(t,\xi)&=&-\frac{1}{2}\xi_1-a\tilde{\Psi}_2(t,\xi)+\xi_3+\frac{1}{2}\xi_1^2+\rho\sigma\xi_1\tilde{\Psi}_2(t,\xi)+\frac{1}{2}\sigma^2\tilde{\Psi}^2_2(t,\xi)\\
&&+\sigma_N^{\alpha}\int_0^\infty \Big(e^{z\tilde{\Psi}_2(t,\xi)}-1-z\tilde{\Psi}_2(t,\xi)\Big)\nu_{\alpha}(dz)\\
&=&-\frac{1}{2}\xi_1-a\tilde{\Psi}_2(t,\xi)+\xi_3+\frac{1}{2}\xi_1^2+\rho\sigma\xi_1\tilde{\Psi}_2(t,\xi)+\frac{1}{2}\sigma^2\tilde{\Psi}^2_2(t,\xi)
-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}(-\tilde{\Psi}_2(t,\xi))^{\alpha}.
\eeqnn
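The last equality evaluates the compensated jump integral in closed form. Assuming the normalization $\nu_\alpha(dz)=-\big(\cos(\pi\alpha/2)\Gamma(-\alpha)\big)^{-1}z^{-1-\alpha}dz$, $1<\alpha<2$, which is consistent with the constants appearing above and in the proof of Proposition \ref{Pro: decomposition3}, one has for $q\leq0$:

```latex
\int_0^\infty \big(e^{zq}-1-zq\big)\nu_{\alpha}(dz)
=-\frac{1}{\cos(\pi\alpha/2)\Gamma(-\alpha)}
\int_0^\infty \big(e^{-(-q)z}-1+(-q)z\big)\frac{dz}{z^{1+\alpha}}
=-\frac{(-q)^{\alpha}}{\cos(\pi\alpha/2)},
```

where the last step uses $\int_0^\infty(e^{-\lambda z}-1+\lambda z)z^{-1-\alpha}dz=\lambda^{\alpha}\Gamma(-\alpha)$ for $\lambda\geq0$; this is applied with $q=\tilde{\Psi}_2(t,\xi)\leq0$.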
Now let $\Psi(t,\xi)=\tilde{\Psi}_2(t,\xi)$, which obviously satisfies the ODE (\ref{generalized Racci}), and $$\phi(t,\xi)=\int_0^t\big(r\xi_1+ab\Psi(s,\xi)\big)ds.$$ The proof is thus complete.
\finproof
\noindent {\bf Proof of Lemma \ref{extremal behavior of V}. } Consider (\ref{Ito transform}). By Doob's inequality,
$$
\mathbb E_x\Big[\sup_{0\leq t\leq T}\Big|\int_0^t e^{-a(t-s)}\sqrt{V_s}dB_s\Big|^2\Big]
\leq 4 \mathbb E_x \Big[\int_0^T e^{2as} V_s ds\Big] \leq \frac{2x+b}{2a}e^{2aT},
$$
which implies that $u^\alpha \mathbb P_x(\sup_{0\leq t\leq T}|\int_0^t e^{-a(t-s)}\sqrt{V_s}dB_s|>u)\rightarrow0$ as $u\rightarrow\infty$.
Then, in view of (\ref{Ito transform}), the extremal behavior of $V_t$ in the sense of (\ref{functional extremal behavior}) is determined by
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\sigma_N\int_0^te^{-a(t-s)}\sqrt[\alpha]{V_{s-}} dZ_s=e^{-at}\cdot \sigma_N\int_0^te^{as}\sqrt[\alpha]{V_{s-}}dZ_s:=X_t\cdot Y_t.
\eeqnn
Note that $\mathbb{E}[\sup_{0\leq t\leq T}(\sqrt[\alpha]{V_t})^{\alpha+\delta}]< \infty$ for
$0<\delta<\alpha(\alpha-1)$ from Lemma \ref{moments of V}. Then by \cite[Theorem 3.4]{HL07},
we have as $u\rightarrow\infty$,
\begin{equation}\label{functional extremal behavior2}u^{\alpha}\mathbb P({Y}/u\in\cdot)\overset{\widehat{w}}{\longrightarrow}\delta_Y(\cdot)\quad \text{on}\quad \mathscr{B}(\bar{D}_0([0,T])),
\end{equation}
where $\delta_Y$ is given by:
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\delta_Y(\cdot)=T \mathbb E\Big[\int_0^\infty 1_{\{w_t:=\sigma_N e^{a\tau}\sqrt[\alpha]{V_\tau}y1_{[\tau,T]}(t)\in\cdot\}}\nu_{\alpha}(dy)\Big],
\eeqnn
where $\tau$ is uniformly distributed on $[0, T]$ and independent of $V$.
Furthermore, by \cite[Theorem 3.1]{HL07}, we have as $u\rightarrow\infty$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}u^{\alpha}\mathbb P({XY}/u\in\cdot)\overset{\widehat{w}}{\longrightarrow}\delta_Y(w\in\bar{D}_0[0,T]: Xw\in\cdot)\quad \text{on}\quad \mathscr{B}(\bar{D}_0([0,T])),
\eeqnn
A simple calculation shows that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\delta(\cdot):=\delta_Y(w\in\bar{D}_0[0,T]: Xw\in\cdot)=\sigma_N^\alpha\int_0^T
\mathbb E[V_s]\int_0^\infty1_{\{w_t=e^{-a(t-s)}y1_{[s,T]}(t)\in\cdot\}}\nu_{\alpha}(dy)ds.
\eeqnn
\finproof
\noindent{\bf Proof of Lemma \ref{moments of V}. } By \eqref{Ito transform}, an elementary inequality shows that there exists a locally bounded function $C_1(\cdot)$ such that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{supV}
\mathbb {E}_x\Big[\sup_{0\le t\le T}V_t^\beta\Big]
&\leq&
C_1(T)\Big(x^\beta + b^{\beta} +\sigma^\beta \mathbb E_x\Big[\sup_{0\leq t\leq T}\big|\int_0^te^{-a(t-s)}\sqrt{V_s}dB_s\big|^\beta\Big]\nonumber\\
&&+\sigma^\beta_N \mathbb{E}_x\Big[\sup_{0\le t\le T} \big|\int_0^te^{-a(t-s)}V_{s-}^{1/\alpha} dZ_s\big|^\beta\Big]\Big).
\eeqlb
By H\"older's inequality and Doob's martingale inequality, there exists a locally bounded function $C_2(\cdot)$ such that
\begin{equation}
\begin{split}
&\quad\mathbb E_x\Big[\sup_{0\leq t\leq T}\Big|\int_0^te^{-a(t-s)}\sqrt{V_s}dB_s\Big|^\beta\Big]\leq
2^\beta\mathbb{E}_x\Big[\Big(\int_0^Te^{2 as}V_s ds\Big)^{\beta/2} \Big]\\
&\le
2^\beta\Big(\int_0^T\mathbb{E}_x[e^{2 as}V_s] ds\Big)^{\beta/2} \le
C_2(T)\left(x^{\beta/2}e^{\beta aT/2} + e^{\beta aT}\right). \label{integral inequality 1}
\end{split} \end{equation}
Moreover, by Long \cite[Lemma 2.4]{Lon10}, which is a generalization of Rosinski and Woyczynski \cite[Theorem 3.2]{RW85}, there exist locally bounded functions $C_3(\cdot)$ and $C_4(\cdot)$ such that
\begin{equation}\begin{split}
&\quad\mathbb{E}_x\Big[\sup_{0\le t\le T}\Big|\int_0^t e^{-a(t-s)}
V_{s-}^{1/\alpha} dZ_s\Big|^\beta\Big]
\leq
C_3(T)\mathbb{E}_x\Big[\Big(\int_0^T e^{\alpha as} V_s
ds\Big)^{\beta/\alpha}\Big]\\
&\le
C_3(T)\Big(\int_0^T\mathbb{E}_x[e^{\alpha as}V_s] ds\Big)^{\beta/\alpha} \le
C_4(T)\left(x^{\beta/\alpha}e^{\beta a(1-1/\alpha)T} + e^{\beta aT}\right).
\label{integral inequality 2}
\end{split}\end{equation}
By combining (\ref{supV}), (\ref{integral inequality 1}) and (\ref{integral inequality 2}), we have the lemma.
\finproof
\noindent{\bf Proof of Lemma \ref{large jump size}.}
By (\ref{Vol integral}), we note that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\{\tau_1>t\}=\Big\{\tau_1>t, \ \int_0^t\int_0^{V_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)=0\Big\}.
\eeqnn
Since $V^{(y)}$ coincides with $V$ up to
$\tau_1$, the comparison between (\ref{Vol integral}) and (\ref{rhat1}) implies that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\{\tau_1>t\}=\Big\{\tau_1>t, \ \int_0^t\int_0^{V^{(y)}_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)=0\Big\}\; a.s.
\eeqnn
If $\tau_1\leq t$, we immediately have \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\int_0^t\int_0^{V^{(y)}_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)&\geq&
\int_0^{\tau_1}\int_0^{V^{(y)}_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)\\
&=&\int_0^{\tau_1}\int_0^{V_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)>0.
\eeqnn
Thus
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{first large jump time}
\{\tau_1>t\}=\Big\{\int_0^t\int_0^{V^{(y)}_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)=0\Big\}\; a.s.
\eeqlb
Recall that $1_{\{\zeta>\overline{y}\}}N(ds,du,d\zeta)$ is the restriction of $N(ds,du,d\zeta)$ to $(0,\infty)\times (0,\infty)\times (\overline{y},\infty)$, which is
independent of $1_{\{\zeta\leq\overline{y}\}}N(ds,du,d\zeta)$.
By (\ref{rhat1}) we have that $1_{\{\zeta>\overline{y}\}}N(ds,du,d\zeta)$ is independent of $(V^{(y)}_t, t\geq 0)$. Then
conditional on $(V^{(y)}_t, t\geq 0)$, $\int_0^t \int_0^{V^{(y)}_{s-} } \int_{\overline y}^\infty {N}(ds,du,d\zeta)$ is a time inhomogeneous Poisson process
with intensity function $\big(\int_{\overline y}^\infty\nu_\alpha(d\zeta)\big)V^{(y)}_{.}$.
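This conditional Poisson structure can be recorded explicitly; the following is a sketch of the standard first-jump computation, writing $\lambda(y)=\int_{\overline{y}}^\infty\nu_\alpha(d\zeta)$:

```latex
\mathbb P\big(\tau_1>t\,\big|\,V^{(y)}\big)
=\exp\Big\{-\lambda(y)\int_0^t V^{(y)}_s\,ds\Big\},
\qquad
\mathbb P\big(\Delta V_{\tau_1}\in dx\,\big|\,\tau_1,V^{(y)}\big)
=\frac{\alpha y^{\alpha}}{x^{1+\alpha}}1_{\{x>y\}}\,dx,
```

the second identity holding because the jump size $\sigma_N\zeta$ is an independent mark, with $\zeta$ drawn from $\nu_\alpha$ restricted to $(\overline{y},\infty)$ and normalized.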
Note that $\tau_1$ is the first jump time of $\sigma_{Z}\int_0^t \int_0^{V^{(y)}_{s-} } \int_{\overline y}^\infty {N}(ds,du,d\zeta)$, and $\Delta V_{\tau_1}$
is the first jump size of $\sigma_{Z}\int_0^t \int_0^{V^{(y)}_{s-} } \int_{\overline y}^\infty {N}(ds,du,d\zeta)$. Then we have
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb P\big[\tau_1\in dt,\ \Delta V_{\tau_1}\in d\zeta \,\big|\, V_{.}^{(y)}\big]
=\Big(\int_{\overline y}^\infty\nu_\alpha(dx) \Big) V^{(y)}_{t}\,
\exp\Big\{-\Big(\int_{\overline y}^\infty\nu_\alpha(dx)\Big)\int_0^t V^{(y)}_s ds\Big\}\,dt \, \Big(\frac{\alpha y^{\alpha}1_{\{\zeta>y\}}}{\zeta^{1+\alpha}}d\zeta\Big),
\eeqnn
which implies that $\Delta V_{\tau_1}$ is independent of $\tau_1$ and $V^{(y)}$. \finproof
{\noindent\bf Proof of Proposition \ref{Pro: decomposition}.}
Step 1. Recall that
$
\tau_1=\inf\{t>0:\Delta V_t>y\}
$
and $T_1$ is the first jump time of the point process $\{J_t:t\geq0\}$ given by (\ref{mother jump intensity}).
By (\ref{first large jump time}), we immediately get $\tau_1=T_1$ a.s. Thus by Lemma \ref{large jump size}, we have
that $V^{(y)}$ coincides with $V$ up to
$T_1$ and $\Delta V_{T_1}$ is independent of $V^{(y)}$. Note that $V^{(y)}_{T_1}=V_{T_1-}$ and
\begin{equation}\label{T1V}\begin{split}
{V}^{(y)}_t =V_{T_1-} &+ \int_{T_1}^t \widetilde{a}(\alpha, y) \big( \widetilde{b}(\alpha, y) - {V}_s^{(y)} \big) ds
+ \sigma \int_{T_1}^t \int_0^{{V}_s^{(y)}} W(ds,du)\\
&+ \sigma_N \int_{T_1}^t \int_0^{{V}_{s-}^{(y)} } \int_0^{\overline{y}}\zeta \widetilde{N}
(ds,du,d\zeta), \quad t\geq T_1.
\end{split}\end{equation}
By taking $k=1$ in (\ref{cluster}), we have
\begin{equation}\label{v1}\begin{split}
v_t^{(1)}=\Delta V_{T_1} &-a \int_{T_1}^t v^{(1)}_s ds + \sigma \int_{T_1}^t
\int_{{V}^{(y)}_s}^{{V}^{(y)}_s+ v^{(1)}_s} W(ds, du)\\
&+ \sigma_N \int_{T_1}^t \int_{{V}^{(y)}_{s-}}^{{V}^{(y)}_{s-}+ v^{(1)}_{s-} } \int_{\mathbb{R}^+}
\zeta \widetilde{N}(ds,du, d\zeta), \quad t\geq T_1. \end{split}\end{equation}
As mentioned above, $\Delta V_{T_1}$ is independent of $V_{T_1-}$. By using the property of independent and stationary increments of $W$ and $N$, we have that $v^{(1)}$ and $V^{(y)}$ are independent of each other and $\{u^{(1)}_t:=v^{(1)}_{T_1+t},\ t\geq0\}$ is a CB process which has the same distribution as $u$ given by
(\ref{CB}); see e.g., \cite[Theorems 3.2 and 3.3]{DawsonLi}. Now set
\begin{equation}\label{T1Vbar}\begin{split}
\bar{V}_t^{(1)} = V_{T_1-}
&+ \int_{T_1}^t a \left( b - \bar{V}_s^{(1)} \right) ds +
\sigma \int_{T_1}^t \int_0^{\bar{V}_s^{(1)}} W(ds, du)\\
&+ \sigma_N \int_{T_1}^t \int_0^{\bar{V}_{s-}^{(1)} } \int_{\mathbb{R}^+} \zeta \widetilde{N}
(ds,du, d\zeta),\quad t\geq T_1.
\end{split}\end{equation}
It is easy to see $\bar{V}^{(1)}$ is of the same type as $V$ but with initial value $V_{T_1-}$ and starting from time $T_1$. Define
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\bar{\tau}_1:=\inf\{t>T_1: \Delta \bar{V}_t^{(1)}>y\},
\eeqnn
which is the first jump time of $\bar{V}^{(1)}$ whose jump size is larger than $y$. Then a comparison of (\ref{T1V}) and (\ref{T1Vbar}) shows that $\bar{V}^{(1)}_t=V_t^{(y)}$ for $t\in[T_1,\bar\tau_1)$. Furthermore, an argument similar to the proof of Lemma \ref{large jump size} shows that for any $t>0$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\{\bar{\tau}_1-T_1>t\}=\Big\{\int_{T_1}^{T_1+t}\int_0^{V^{(y)}_{s-}}\int_{\overline{y}}^\infty\zeta N(ds,du,d\zeta)=0\Big\}\; a.s.,
\eeqnn
which implies that $\bar\tau_1=T_2$ a.s. Thus $\Delta\bar{V}^{(1)}_{\bar\tau_1}=\Delta V_{T_2}$ and
$\Delta V_{T_2}$ is independent of $V^{(y)}$ and $\Delta V_{T_1}$. Furthermore $\bar{V}^{(1)}_t=V_t^{(y)}$ for $t\in[T_1,T_2)$. We get that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{first interval}
V^{(y)}_t+v^{(1)}_t=\bar{V}_t^{(1)}+v^{(1)}_t=V_t, \ a.s.\quad t\in[T_1,T_2).
\eeqlb
The second equality follows from (\ref{T1Vbar}), (\ref{v1}) and (\ref{Vol integral}).
Step 2. By taking $k=2$ in (\ref{cluster}), we have
\begin{equation}\label{v2}\begin{split}
v_t^{(2)}=\Delta V_{T_2} &-a \int_{T_2}^t v^{(2)}_s ds + \sigma \int_{T_2}^t
\int_{{V}^{(y)}_s+v^{(1)}_s}^{{V}^{(y)}_s+ v^{(1)}_s+v^{(2)}_s} W(ds, du)\\
&+ \sigma_N \int_{T_2}^t \int_{{V}^{(y)}_{s-}+v^{(1)}_{s-}}^{{V}^{(y)}_{s-}+ v^{(1)}_{s-}+v^{(2)}_{s-}} \int_{\mathbb{R}^+}
\zeta \widetilde{N}(ds,du, d\zeta), \quad t\geq T_2. \end{split}\end{equation}
Since $\Delta V_{T_2}$ is independent of $V^{(y)}_{T_2}$ and $\Delta V_{T_1}$, still by using the property of independent and stationary increments of $W$ and $N$, we have that $v^{(2)}$ is independent of $V^{(y)}$ and $v^{(1)}$, and $\{u^{(2)}_t:=v^{(2)}_{T_2+t},\ t\geq0\}$ is also a CB process which has the same distribution as $u$. Now set
\begin{equation}\label{T2Vbar}\begin{split}
\bar{V}_t^{(2)} = V^{(y)}_{T_2}
&+ \int_{T_2}^t a \left( b - \bar{V}_s^{(2)} \right) ds +
\sigma \int_{T_2}^t \int_0^{\bar{V}_s^{(2)}} W(ds, du)\\
&+ \sigma_N \int_{T_2}^t \int_0^{\bar{V}_{s-}^{(2)} } \int_{\mathbb{R}^+} \zeta \widetilde{N}
(ds,du, d\zeta),\quad t\geq T_2.
\end{split}\end{equation}
Define
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\bar{\tau}_2:=\inf\{t>T_2: \Delta \bar{V}_t^{(2)}>y\}.
\eeqnn
As proved in Step 1, we have that $\bar\tau_2=T_3$ a.s. and $\bar{V}^{(2)}_t=V_t^{(y)}$ for $t\in[T_2,T_3)$. Note that $V_{T_2-}=V^{(y)}_{T_2}+\Delta v^{(1)}_{T_2}$ by (\ref{first interval}). We get that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
V^{(y)}_t+v^{(1)}_t+v^{(2)}_t=\bar{V}_t^{(2)}+v^{(1)}_t+v^{(2)}_t=V_t, \ a.s.\quad t\in[T_2,T_3).
\eeqnn
Step 3. By induction, it is not hard to prove that $V_t=V^{(y)}_t+\sum_{k=1}^nv^{(k)}_t$ holds for any $t\in[T_n,T_{n+1})$ and $n\geq1$, and the processes $\{u^{(n)}\}$ are i.i.d. with the same distribution as $u$. Furthermore, $\{u^{(n)}\}$ is independent of $V^{(y)}$. Then we have this proposition. \finproof
{\noindent\bf Proof of Proposition \ref{Pro: decomposition2}.} \;(1) Note that $J_t^{(y)}\overset{d}{=}\int_0^t\int_0^{V_{s-}^{(y)}}\int_{D}M(ds,du, d\omega)$. Then
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}[J_t^{(y)}]=\int_0^t\mathbb{E}[V_s^{(y)}]ds\int_{\bar y}^\infty\nu_{\alpha}(d\zeta).
\eeqnn
A simple computation shows (\ref{number cluster}). (2) By Proposition \ref{Pro: decomposition}, $u^{(n)}$ is a subcritical CB process without immigration, i.e. the branching mechanism is
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\Psi_{\alpha}(q)=a q+\frac{\sigma^2}{2}q^2-\frac{\sigma_N^\alpha}{\cos(\pi\alpha/2)}q^\alpha
\eeqnn
and the immigration rate is $\Phi(q)=0$. Then $0$ is an absorbing point of $u^{(n)}$ and $\theta_n$ is the extinction time of the CB process $u^{(n)}$. Since
$\int_1^{\infty}1/\Psi_{\alpha}(u)du<\infty$, the so-called Grey condition is satisfied, and it follows from Grey \cite[Theorem 1]{Grey74} that
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{P}(\theta_n<\infty)=\int_0^\infty \mathbb{P}_x(\theta_n<\infty) \mathbb{P}(\Delta V_{T_n}\in dx)=1.
\eeqnn
Furthermore, still by \cite[Theorem 1]{Grey74}, we have that
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{ODE2}
\mathbb{P}(\theta_n> t)=\mathbb{E}[1-e^{-{\Delta V_{T_n} q_t}}]=\alpha y^\alpha\int_{y}^{\infty}(1-e^{-xq_t})x^{-(1+\alpha)}dx,
\eeqlb
where $q_t$ is the minimal solution of the ODE
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\frac{d}{dt}q_t=-\Psi_\alpha(q_t),\quad t>0,
\eeqnn
with $q_{0+}=\infty$. In this case, $0<q_t<\infty$ for $t\in(0,\infty)$. Then, by the tail formula $\mathbb{E}[\theta_n]=\int_0^\infty\mathbb P(\theta_n>t)dt$ for nonnegative random variables,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}[\theta_n]=\alpha y^\alpha\int_0^\infty\int_{y}^{\infty}(1-e^{-xq_s})x^{-(1+\alpha)}dxds,
\eeqnn
which gives (\ref{duration cluster}) by (\ref{ODE2}).\finproof
{\noindent\bf Proof of Proposition \ref{Pro: decomposition3}.}
By (\ref{mother jump intensity}), we have
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
J_{nt}^{(y_n)}=\int_0^{nt}\int_0^{V^{(y_n)}_{s-}}\int_{\bar{y}_n}^\infty N(ds,du,d\zeta),
\eeqnn
where $\bar{y}_n=y_n/\sigma_N$. It follows from Proposition \ref{Pro: decomposition}-(2) that for any $\theta>0$,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{Prop5.3 equality}
\mathbb{E}\Big[e^{-\theta J_{nt}^{(y_n)}}\Big]&=&\mathbb{E}\bigg[\exp\bigg\{
\Big(\int_{\bar{y}_n}^\infty \nu_\alpha(d\xi)\Big)\int_0^{nt}V_s^{(y_n)}ds\Big(e^{-\theta}-1\Big)\bigg\}\bigg]\nonumber\\
&=&
\mathbb{E}\bigg[\exp\bigg\{
\Big(n\int_{\bar{y}_n}^\infty \nu_\alpha(d\xi)\Big)\frac{1}{n}\int_0^{nt}V_s^{(y_n)}ds\Big(e^{-\theta}-1\Big)\bigg\}\bigg].
\eeqlb
Based on (\ref{rhat1}), for fixed $y_n$, $\{V_t^{(y_n)}:t\geq0\}$ is a CBI process. By \cite[Remark 5.3]{JMS2017}, for $\theta>0$,
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{LS of truncated processes}
\mathbb{E}\bigg[e^{-\frac{\theta}{n}\int_0^{nt} V_s^{(y_n)}ds}\bigg]=\exp\bigg\{-v_n(\theta,nt)V_0-ab\int_0^{nt}v_n(\theta,s)ds\bigg\}
\eeqlb
where $v_n(\theta,t)$ is the unique solution of
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{ODE of n}
\frac{dv_n(\theta,t)}{dt}=\frac{\theta}{n}-\Psi_n(v_n(\theta,t)),
\eeqlb
with $v_n(\theta,0)=0$, and
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\Psi_n(q)=\Big(a+\sigma_N^\alpha\int_{y_n}^\infty\xi\nu_\alpha(d\xi)\Big)q+\frac{\sigma^2}{2}q^2+\sigma_N^\alpha\int_0^{y_n}
(e^{-q\xi}-1+q\xi)\nu_\alpha(d\xi).
\eeqnn
Then we have $-\Psi_n(v_n(\theta,t))\leq\frac{d v_n(\theta,t)}{dt}\leq \frac{\theta}{n}-av_n(\theta,t)$, which implies that $0\leq v_n(\theta,t)\leq \frac{\theta}{
an}(1-e^{-at})$. By (\ref{ODE of n}),
\begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{ODE of n 2}
nv_n(\theta,nt)=\frac{\theta}{a_n}(1-e^{-na_nt})-\int_0^{nt}e^{-a_n(nt-s)}n\widehat{\Psi}_n(v_n(\theta,s))ds,
\eeqlb
where
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
a_n=a+\sigma_N^\alpha\int_{y_n}^\infty\xi\nu_\alpha(d\xi),\quad \widehat{\Psi}_n(q)=\frac{\sigma^2}{2}q^2+\sigma_N^\alpha\int_0^{y_n}
(e^{-q\xi}-1+q\xi)\nu_\alpha(d\xi).
\eeqnn
Note that $a_n\rightarrow a$, and for all $t\geq 0$ and $n\geq1$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
0\leq nv_n(\theta,t)\leq \frac{\theta}{a}, \quad n\widehat{\Psi}_n(v_n(\theta,t))\leq \frac{\sigma^2\theta^2}{2a^2n}-\frac{\sigma_N^\alpha\theta^\alpha}{\cos(\pi\alpha/2)a^\alpha n^{\alpha-1}}.
\eeqnn
By (\ref{ODE of n 2}), we have $nv_n(\theta,nt)\rightarrow\frac{\theta}{a}$ and then
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\int_0^{nt}v_n(\theta,s)ds=\int_0^tnv_n(\theta,ns)ds\rightarrow\frac{\theta t}{a}.
\eeqnn
Thus by (\ref{LS of truncated processes}), we have for any $t\geq0$,
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\frac{\int_0^{nt}V_s^{(y_n)}ds}{n}\overset{p}{\rightarrow} bt.
\eeqnn
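For completeness, the last implication can be spelled out: the Laplace transforms converge to that of the constant $bt$,

```latex
\mathbb{E}\Big[e^{-\frac{\theta}{n}\int_0^{nt}V_s^{(y_n)}ds}\Big]
=\exp\Big\{-v_n(\theta,nt)V_0-ab\int_0^{nt}v_n(\theta,s)ds\Big\}
\longrightarrow e^{-\theta bt},\qquad \theta>0,
```

because $v_n(\theta,nt)\to0$ and $ab\int_0^{nt}v_n(\theta,s)ds\to b\theta t$; convergence of Laplace transforms to that of a constant yields convergence in probability.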
Recall that $y_n\sim cn^{1/\alpha}$. Then $n\int_{\bar{y}_n}^\infty \nu_\alpha(d\xi)\rightarrow-
\frac{\sigma_N^\alpha}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha)c^\alpha}$. By (\ref{Prop5.3 equality}),
\begin{eqnarray*}}\def\eeqnn{\end{eqnarray*}
\mathbb{E}\Big[e^{-\theta J_{nt}^{(y_n)}}\Big]\rightarrow \exp\bigg\{-\frac{\sigma_N^\alpha bt}{\alpha\cos(\pi\alpha/2)\Gamma(-\alpha) c^\alpha}(e^{-\theta}-1)\bigg\}.
\eeqnn
We are done. \finproof
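The algebraic identity behind the limit of $n\int_{\bar{y}_n}^\infty\nu_\alpha(d\xi)$ can be checked numerically. The sketch below assumes the normalization $\nu_\alpha(d\zeta)=K\zeta^{-1-\alpha}d\zeta$ with $K=-1/(\cos(\pi\alpha/2)\Gamma(-\alpha))$, the constant consistent with the limit stated above; the function names are ours.

```python
import math

def scaled_tail(n, alpha, sigma_N, c):
    # n times the tail mass nu_alpha((y_n/sigma_N, infty)) with y_n = c n^{1/alpha},
    # under the assumed normalization nu_alpha(dz) = K z^{-1-alpha} dz.
    K = -1.0 / (math.cos(math.pi * alpha / 2) * math.gamma(-alpha))
    y_n = c * n ** (1.0 / alpha)
    b = y_n / sigma_N                      # \bar{y}_n = y_n / sigma_N
    return n * K * b ** (-alpha) / alpha   # tail integral of K z^{-1-alpha}

def stated_limit(alpha, sigma_N, c):
    # -sigma_N^alpha / (alpha cos(pi alpha/2) Gamma(-alpha) c^alpha), as in the text
    return -sigma_N ** alpha / (
        alpha * math.cos(math.pi * alpha / 2) * math.gamma(-alpha) * c ** alpha
    )
```

Since $n\,\bar{y}_n^{-\alpha}=\sigma_N^\alpha/c^\alpha$ for every $n$, the prelimit already equals the limit; the check confirms the bookkeeping of constants and that the limiting Poisson intensity is positive.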
\bibliographystyle{spbasic}
\section{Introduction}
\paragraph{}Theory of fractional order systems has gained remarkable significance during the last few decades due to its real world applications in seemingly diverse and widespread fields of applied mathematics, physics and engineering. The monographs \cite{hr,kst,vl,mf,pi} are devoted to such practical problems in control theory, modeling and relaxation, and serve as a foundation of fractional order theory in physics and applied sciences. Recently, Hilfer \cite{hr,rh} and Mainardi \cite{mf} discussed various applications of fractional differential equations in their works. Many complex phenomena in nature can be described more accurately using various fractional operators and are characterized by rapid change in their state.
Nowadays, numerous fractional differential operators are present in the literature, but the Riemann-Liouville (R-L) \cite{kst} and Caputo \cite{q,mf} derivatives are the universally accepted approaches. The R-L operator places fewer constraints on the function concerned but fails to be physically consistent with practically applicable initial conditions. To avoid such a difficulty, scientists adopted the Caputo approach, which inherits many properties from classical calculus. But in \cite{dw}, it has been shown that the Caputo derivative also has some defects in applications. Concretely, as in \cite{dw}, one has
\begin{equation*}
\lim_{\delta\to0} {^{C}D_{0^+}^{n-\delta}}x(t)=x^{(n)}(t),\qquad
\lim_{\delta\to0} {^{C}D_{0^+}^{n+\delta}}x(t)=x^{(n)}(t)-x^{(n)}(a),\quad \delta>0.
\end{equation*}
Observe that, if $x^{(n)}(a)\neq0$, then near an integer $n$ a very small error of measurement in the fractional order may result in totally different results; this is a common situation for fractional dynamic systems that start from a non-constant state. Such a problem does not arise in the R-L sense. Additionally, when the Caputo fractional derivative is applied to describe Nutting's law \cite{df,mf,n},
\begin{equation*}
\sigma(t)=\nu D^{k}\epsilon(t),
\end{equation*}
a constant strain $\epsilon$ implies that the stress is independent of time $t$, i.e. $\sigma\equiv0$. This violates the physical properties of real viscoelastic materials, while in the R-L theory a constant strain $\epsilon$ does not lead to a constant stress \cite{sb}. Accordingly, we prefer the Hilfer (generalized Riemann-Liouville) derivative operator, which interpolates between the R-L and Caputo senses.
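The contrast can be made concrete with the standard closed forms for the fractional derivative of a constant with $0<k<1$: the R-L derivative of a constant $\epsilon$ is $\epsilon t^{-k}/\Gamma(1-k)$, while the Caputo derivative vanishes. A minimal numerical sketch (function names are ours):

```python
import math

def rl_derivative_of_constant(eps, k, t):
    # Riemann-Liouville: D^k eps = eps * t^{-k} / Gamma(1-k), for 0 < k < 1
    return eps * t ** (-k) / math.gamma(1.0 - k)

def caputo_derivative_of_constant(eps, k, t):
    # Caputo: the derivative of a constant is identically zero
    return 0.0
```

Under Nutting's law $\sigma(t)=\nu D^{k}\epsilon(t)$, the Caputo choice thus forces $\sigma\equiv0$ for a constant strain, whereas the R-L stress decays in $t$ but never vanishes.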
In recent investigations, many researchers have studied the existence and uniqueness of solutions of nonlinear fractional differential equations; see \cite{bal,cj,dr,df,kmf,lv,jd3,pi,ps,zr} and the references therein. In fact, the global existence of solutions of fractional differential equations is one of their elementary properties; see \cite{kst,pi}. In this paper we mainly focus on developing the theory of existence and uniqueness. First we obtain local existence results, followed by continuation theorems which extend the existence of solutions globally. The results obtained in this paper generalize the existing results \cite{cz,ls} in the literature. In particular, the works by C. Kou, H. Zhou, C. P. Li \cite{cz} and C. Li, S. Sarwar \cite{ls} and the references therein follow as particular cases of our main results.
The rest of the article is organized as follows: in Section 2, we collect all the useful definitions and previously known lemmas which are used in the construction of our main results. Section 3 is devoted to the local existence of solutions, followed by continuation results and global existence theorems in Section 4. Concluding remarks are given in the last section.
\section{Prerequisites}
This section is devoted to basic definitions and lemmas from the theory of fractional calculus \cite{kst} which are used in subsequent sections. Let $C_{1-\gamma}[0,T]$ be the complete metric space of all continuous functions mapping $(0,T]$ into $\R$ with the metric $d$ defined by \cite{kmf}
\begin{equation*}
d(x_1,x_2)={\|x_1-x_2\|}_{C_{1-\gamma}[0,T]}:=\max_{t\in[0,T]}|t^{1-\gamma}[x_1(t)-x_2(t)]|,
\end{equation*}
where
\begin{equation*}
C_{1-\gamma}[0,T]=\{x(t):(0,T]\to\R:{t}^{1-\gamma}x(t)\in C[0,T]\}.
\end{equation*}
\begin{definition} \cite{cj}
Let $\Omega=(0,T]$ and let $f:(0,\infty)\to\R$ be a real-valued continuous function. The Riemann-Liouville fractional integral of a function $f$ of order $\alpha\in{\R}^{+}$ is denoted by $I_{0^+}^{\alpha}f$ and defined by
\begin{equation}\label{d1}
I_{0^+}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}\frac{f(s)ds}{(t-s)^{1-\alpha}},\quad t>0,
\end{equation}
where $\Gamma(\alpha)$ is the Euler's Gamma function.
\end{definition}
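As a sanity check on \eqref{d1}, the power rule $I_{0^+}^{\alpha}t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\alpha+1)}t^{\mu+\alpha}$, used repeatedly below, can be verified numerically. The sketch uses the substitution $v=(1-u)^{\alpha}$ to remove the endpoint singularity of the Beta integral; names and tolerances are ours.

```python
import math

def rl_integral_power(alpha, mu, t, n=200_000):
    # I^alpha t^mu = t^{alpha+mu} B(alpha, mu+1) / Gamma(alpha), where the Beta
    # integral int_0^1 (1-u)^{alpha-1} u^mu du becomes, after v = (1-u)^alpha,
    #   (1/alpha) int_0^1 (1 - v^{1/alpha})^mu dv   (no endpoint singularity),
    # evaluated here by the midpoint rule.
    h = 1.0 / n
    s = sum((1.0 - ((i + 0.5) * h) ** (1.0 / alpha)) ** mu for i in range(n))
    beta = s * h / alpha
    return t ** (alpha + mu) * beta / math.gamma(alpha)

def rl_integral_power_exact(alpha, mu, t):
    # Closed form from the power rule
    return math.gamma(mu + 1.0) / math.gamma(mu + alpha + 1.0) * t ** (mu + alpha)
```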
\begin{definition} \cite{kst}
Let $\Omega=(0,T]$ and let $f:(0,\infty)\to\R$ be a real-valued continuous function. The Riemann-Liouville fractional derivative of the function $f$ of order $\alpha\in{\R}_{0}^{+}=[0,+\infty)$ is denoted by $D_{0^+}^{\alpha}f$ and defined by
\begin{equation}\label{d2}
D_{0^+}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_{0}^{t}\frac{f(s)ds}{(t-s)^{\alpha-n+1}},
\end{equation}
where $n=[\alpha]+1,$ and $[\alpha]$ denotes the integer part of $\alpha,$ provided the right hand side is pointwise defined on $(0,\infty).$
\end{definition}
\begin{definition} \cite{hr} The Hilfer fractional derivative $D_{0^+}^{\alpha,\beta}$ of function $f\in L^{1}(0,T)$ of order $n-1<\alpha<n$ and type $0\leq\beta\leq1$ is defined by
\begin{equation}\label{d3}
D_{0^+}^{\alpha,\beta}f(t)=I_{0^+}^{\beta(n-\alpha)}D^{n}I_{0^+}^{(1-\beta)(n-\alpha)}f(t),
\end{equation}
where $I_{0^+}^{\alpha}$ and $D_{0^+}^{\alpha}$ are the Riemann-Liouville fractional integral and derivative defined by \eqref{d1} and \eqref{d2}, respectively.
\end{definition}
\begin{remark}
The Hilfer fractional derivative interpolates between the R-L and Caputo fractional derivative since
\begin{equation*}
D_{0^+}^{\alpha,\beta}=\begin{cases}
DI_{0^+}^{1-\alpha}=D_{0^+}^{\alpha}, \quad\quad \beta=0,\\
I_{0^+}^{1-\alpha}D= {^C D_{0^+}^{\alpha}}, \,\,\, \quad \beta=1.
\end{cases}
\end{equation*}
\end{remark}
Let $0<\alpha<1$ and $0\leq\beta\leq1.$ For the analysis we consider the initial value problem
\begin{equation}\label{a}\begin{cases}
D_{0^+}^{\alpha,\beta}x(t)&=f(t,x),\quad t\in(0,+\infty),\\
I_{0^+}^{1-\gamma}x(0^+)&=x_0, \quad \gamma=\alpha+\beta-\alpha\beta,
\end{cases}
\end{equation}
and the initial value problem (IVP) for the system of differential equations
\begin{equation}\label{s1}\begin{cases}
D_{0^+}^{\alpha,\beta}x_{1}(t)&=f_{1}(t,x_{1},x_{2},..,x_{n}), \\
D_{0^+}^{\alpha,\beta}x_{2}(t)&=f_{2}(t,x_{1},x_{2},..,x_{n}), \\
&\cdots\\
D_{0^+}^{\alpha,\beta}x_{n}(t)&=f_{n}(t,x_{1},x_{2},..,x_{n}), \\
I_{0^+}^{1-\gamma}x_{i}(0^+)&=x_0, \quad \gamma=\alpha+\beta-\alpha\beta, i=1,2,..,n,
\end{cases}
\end{equation}
where $f:{\R}^{+}\times{\R}\to\R$ in IVP \eqref{a} and $f_{i}:{\R}^{+}\times{\R}^{n}\to\R$ in IVP \eqref{s1} may have weak singularities with respect to $t$ and satisfy the Lipschitz conditions $$|f(t,x)-f(t,y)|\leq L|x-y|,\,\, L>0,$$
\begin{equation*}
|f_{k}(t,x_{1},x_{2},..,x_{n})-f_{k}(t,y_{1},y_{2},..,y_{n})|\leq\sum_{k=1}^{n}L_{k}|x_{k}-y_{k}|,\,\,L_{k}>0, k=1,2,..,n,
\end{equation*}
respectively to ensure the existence of unique solutions.
Furthermore, the equivalence between the Hilfer fractional IVP and its integral equation is established in \cite{kmf} by the following lemma.
\begin{lemma}\cite{kmf}
Let $\gamma=\alpha+\beta-\alpha\beta$ where $0<\alpha<1$ and $0\leq\beta\leq1.$ Let $f:(0,T]\times\R\to\R$ such that $f(t,x(t))\in C_{1-\gamma}[0,T]$ for any $x\in C_{1-\gamma}[0,T].$ If $x\in C_{1-\gamma}^{\gamma}[0,T],$ then $x$ satisfies IVP \eqref{a} if and only if $x$ satisfies the Volterra fractional integral equation of the second kind
\begin{equation}\label{b}
x(t)=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,x(s))ds, \quad t\in(0,\infty).
\end{equation}
\end{lemma}
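The first term in \eqref{b} reflects the fact that $t^{\gamma-1}$ lies in the kernel of $D_{0^+}^{\alpha,\beta}$: by the power rule $I_{0^+}^{\mu}t^{\nu-1}=\frac{\Gamma(\nu)}{\Gamma(\nu+\mu)}t^{\nu+\mu-1}$, applying $I_{0^+}^{(1-\beta)(1-\alpha)}$ to $t^{\gamma-1}$ yields a constant, which the subsequent classical derivative annihilates. The exponent bookkeeping is an identity in $(\alpha,\beta)$; a small sketch (function name is ours):

```python
def hilfer_kernel_exponent(alpha, beta):
    # Exponent of t after applying I^{(1-beta)(1-alpha)} to t^{gamma-1},
    # where gamma = alpha + beta - alpha*beta; it is identically zero.
    gamma = alpha + beta - alpha * beta
    return (gamma - 1.0) + (1.0 - beta) * (1.0 - alpha)
```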
\begin{lemma}\cite{cz} Let $M$ be a subset of $C_{1-\gamma}[0,T].$ Then $M$ is precompact if and only if the following conditions hold:
\begin{itemize}
\item[(i)] $\{t^{1-\gamma}x(t):x\in M\}$ is uniformly bounded,
\item[(ii)] $\{t^{1-\gamma}x(t):x\in M\}$ is equicontinuous on $[0,T].$
\end{itemize}
\end{lemma}
\begin{lemma}\cite{cj}
Let $a<b<c, 0\leq\mu<1,$ $x\in C_{\mu}[a,b],$ $y\in C[b,c]$ and $x(b)=y(b).$ Define
\begin{equation*}
z(t)=\begin{cases}
x(t), & \mbox{if }\,\, t\in (a,b], \\
y(t), & \mbox{if} \,\,\, t\in [b,c].
\end{cases}
\end{equation*}
Then $z\in C_{\mu}[a,c].$
\end{lemma}
\begin{lemma}
\textbf{Schauder Fixed Point Theorem:} \cite{gd}
Let $U$ be a closed bounded convex subset of a Banach space $X$ and $T:U\to U$ is completely continuous. Then $T$ has a fixed point in $U.$
\end{lemma}
\begin{lemma} \cite{kst}
Let $\alpha>0$ and $0\leq\mu<1.$ If $\mu>\alpha,$ then the fractional integral operator $I_{0^+}^{\alpha}$ is bounded from $C_{\mu}[0,T]$ into $C_{\mu-\alpha}[0,T].$ If $\mu\leq\alpha,$ then $I_{0^+}^{\alpha}$ is bounded from $C_{\mu}[0,T]$ into $C[0,T].$
\end{lemma}
\section{Local existence}
In this section, we obtain the local existence of solutions of IVPs \eqref{a} and \eqref{s1}. For this, let us make the following two hypothesis.
$(H_1).$ Let $f:{\R}^{+}\times\R\to\R$ in IVP \eqref{a} be a continuous function and suppose there exists a constant $0\leq\delta<1$ such that $(Ax)(t)=t^{\delta}f(t,x(t))$ is a continuous bounded map from $C_{1-\mu}[0,T]$ into $C[0,T],$ where $T$ is a positive constant.
$(H_2).$ Let $f_{i}:{\R}^{+}\times{\R}^{n}\to\R$ in IVP \eqref{s1} be continuous functions and suppose there exist constants $0\leq{\delta}_{i}<1$ such that $(A_{i}x_{i})(t)=t^{{\delta}_{i}}f_{i}(t,x_{1},x_{2},..,x_{n})$, $i=1,2,..,n,$ are continuous bounded maps from $C_{1-\mu}[0,T]$ into $C[0,T],$ where $T$ is a positive constant.
\begin{theorem}
Suppose that $(H_1)$ holds. Then IVP \eqref{a} has at least one solution $x\in C_{1-\gamma}[0,h]$ for some $(T\geq)h>0.$
\end{theorem}
\textbf{Proof:} Let
\begin{equation*}
E=\bigg\{x\in C_{1-\gamma}[0,T]:{\big\|x-\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}\big\|}_{C_{1-\gamma}[0,T]}=\sup_{0\leq t\leq T}\big|t^{1-\gamma}x(t)-\frac{x_0}{\Gamma(\gamma)}\big|\leq b\bigg\},
\end{equation*}
where $b>0$ is a constant. Since the operator $A$ is bounded, there exists a constant $M>0,$ such that
$\sup{\big\{|(Ax)(t)|:t\in [0,T],x\in E\big\}}\leq M.$
\begin{equation*}
\hspace{-1cm}\text{Again let}\qquad D_h=\bigg\{x:x\in C_{1-\gamma}[0,h],\sup_{0\leq t\leq h}\big|t^{1-\gamma}x(t)-\frac{x_0}{\Gamma(\gamma)}\big|\leq b\bigg\},
\end{equation*}
where $h=\min\bigg\{{\big(\frac{b\Gamma(\alpha-\delta+1)}{M\Gamma(1-\delta)}\big)}^{\frac{1}{\alpha-\delta}},T\bigg\}.$ Obviously, $D_h\subset C_{1-\gamma}[0,h]$ is a nonempty, closed, bounded and convex subset.
Noting that $h\leq T,$ define the operator $B$ as follows:
\begin{equation}\label{c}
(Bx)(t)=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,x(s))ds, \quad t\in[0,h].
\end{equation}
It follows from $(H_1)$ and Lemma 2.5 that $B(C_{1-\gamma}[0,h])\subset C_{1-\gamma}[0,h].$
On the other hand by relation \eqref{c}, for any $x\in C_{1-\gamma}[0,h],$ we have
\begin{align*}
\bigg|t^{1-\gamma}(Bx)(t)-\frac{x_0}{\Gamma(\gamma)}\bigg|&=\bigg|\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}[s^{\delta}f(s,x(s))]ds\bigg|\\
& \leq\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}Mds \\
& \leq Mt^{1-\gamma} {I_{0^+}^{\alpha}(t^{-\delta})}\\
& \leq \frac{Mh^{\alpha-\gamma-\delta+1}\Gamma(1-\delta)}{\Gamma(\alpha-\delta+1)}\leq b,
\end{align*}
which means $BD_h\subset D_h.$
Next we show that $B$ is continuous. Let $x_n,x\in D_h, {\|x_n-x\|}_{C_{1-\gamma}[0,h]}\to 0$ as $n\to +\infty.$
In view of continuity of $A,$ we have ${\|Ax_n-Ax\|}_{[0,h]}\to 0$ as $n\to +\infty.$ Now noting that
\begin{align*}
\bigg|t^{1-\gamma}(Bx_n)(t)-t^{1-\gamma}&(Bx)(t)\bigg|=\bigg|\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,x_n(s))ds\\
&\hspace{2cm}-\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,x(s))ds\bigg|\\
&\leq\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}[s^{\delta}|f(s,x_n(s))-f(s,x(s))|]ds\\
&\leq\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}|(Ax_n)(s)-(Ax)(s)|ds\\
&\leq {\|(Ax_n)(s)-(Ax)(s)\|}_{[0,h]}\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}ds
\end{align*}
we have
\begin{equation*}
{\|(Bx_n)(t)-(Bx)(t)\|}_{C_{1-\gamma}[0,h]}\leq{\|(Ax_n)(s)-(Ax)(s)\|}_{[0,h]}\frac{\Gamma(1-\delta)h^{\alpha-\gamma-\delta+1}}{\Gamma(\alpha-\delta+1)}.
\end{equation*}
Then ${\|(Bx_n)(t)-(Bx)(t)\|}_{C_{1-\gamma}[0,h]}\to 0$ as $n\to+\infty.$ Thus $B$ is continuous. Furthermore, we shall prove that the functions in $BD_h$ are equicontinuous. Let $x\in D_h,$ and $0\leq t_1<t_2\leq h.$ For any $\epsilon>0,$ note that
\begin{equation*}
\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}ds=\frac{\Gamma(1-\delta)}{\Gamma(\alpha-\delta+1)}t^{\alpha-\gamma-\delta+1}\to 0 \,\,\text{as}\,\, t\to {0}^{+}, \quad 0\leq\delta<1,
\end{equation*}
there exists a $(h>)\delta_1>0$ such that, for $t\in [0,\delta_1],$
\begin{equation*}
\frac{2Mt^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}ds<\epsilon \qquad \text{holds.}
\end{equation*}
In the case with $t_1,t_2\in [0,\delta_1],$ we have
\begin{equation}\label{d}
\begin{split}
\big|&\frac{{t}_{1}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f(s,x(s))ds-\frac{t_{2}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds\big|\\
&\leq\frac{Mt_{1}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}s^{-\delta}ds+\frac{Mt_{2}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}s^{-\delta}ds<\epsilon.
\end{split}
\end{equation}
In the case with $t_1,t_2\in [\frac{\delta_1}{2},h],$ we get
\begin{align*}
\big|{t}_{1}^{1-\gamma}&(Bx)(t_1)-{t}_{2}^{1-\gamma}(Bx)(t_2)\big|\\
=&\big|\frac{{t}_{1}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f(s,x(s))ds-\frac{t_{2}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds\big|\\
=&\big|\frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}[{t}_{1}^{1-\gamma}(t_1-s)^{\alpha-1}-t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}] f(s,x(s))ds\\
&\hspace{1cm}-\frac{1}{\Gamma(\alpha)}\int_{t_1}^{t_2}t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}f(s,x(s))ds\big|
\end{align*}
Using the fact that, if $0\leq\mu_1<\mu_2\leq h,$ then $\mu_{1}^{1-\gamma}(\mu_1-s)^{\alpha-1}>\mu_{2}^{1-\gamma}(\mu_2-s)^{\alpha-1}$ for $0\leq s<\mu_1,$ we obtain
\begin{align*}
\big|\frac{1}{\Gamma(\alpha)}&\int_{0}^{t_1}[{t}_{1}^{1-\gamma}(t_1-s)^{\alpha-1}-t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}] f(s,x(s))ds\big|\\
&\leq \frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}|[{t}_{1}^{1-\gamma}(t_1-s)^{\alpha-1}-t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}]s^{-\delta}|s^{\delta}f(s,x(s))ds\\
&\leq\frac{M}{\Gamma(\alpha)}\int_{0}^{\frac{\delta_1}{2}}|[{t}_{1}^{1-\gamma}(t_1-s)^{\alpha-1}-t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}]s^{-\delta}|ds\\
&\hspace{.4cm}+{({\frac{\delta_1}{2}})}^{-\delta}\frac{M}{\Gamma(\alpha)}\int_{\frac{\delta_1}{2}}^{t_1}[{t}_{1}^{1-\gamma}(t_1-s)^{\alpha-1}-t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}]ds\\
&\leq\frac{2M{(\frac{\delta_1}{2})}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{\frac{\delta_1}{2}}(\frac{\delta_1}{2}-s)^{\alpha-1}s^{-\delta}ds\\
&\hspace{.3cm}+\frac{M{(\frac{\delta_1}{2})}^{-\delta}}{\Gamma(\alpha+1)}[{t_2}^{1-\gamma}(t_2-t_1)^{\alpha}-{t_2}^{1-\gamma}(t_2-\frac{\delta_1}{2})^{\alpha}+{t_1}^{1-\gamma}(t_1-\frac{\delta_1}{2})^{\alpha}]\\
&\leq\epsilon+\frac{M{(\frac{\delta_1}{2})}^{-\delta}}{\Gamma(\alpha+1)}[{h}^{1-\gamma}(t_2-t_1)^{\alpha}+{t_2}^{1-\gamma}(t_2-\frac{\delta_1}{2})^{\alpha}+{t_1}^{1-\gamma}(t_1-\frac{\delta_1}{2})^{\alpha}]
\end{align*}
On the other hand,
\begin{align*}
\big|\frac{{t_2}^{1-\gamma}}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds\big|&\leq\frac{{(\frac{\delta_1}{2})}^{-\delta}M}{\Gamma(\alpha)}\int_{t_1}^{t_2}t_{2}^{1-\gamma}(t_2-s)^{\alpha-1}ds\\
&=\frac{{(\frac{\delta_1}{2})}^{-\delta}M}{\Gamma(\alpha+1)}t_{2}^{1-\gamma}(t_2-t_1)^{\alpha}\\
&\leq \frac{{(\frac{\delta_1}{2})}^{-\delta}M{h}^{1-\gamma}}{\Gamma(\alpha+1)}(t_2-t_1)^{\alpha}.
\end{align*}
Clearly, there exists a $\lambda\in(0,\frac{\delta_1}{2})$ such that, for $t_1,t_2\in[\frac{\delta_1}{2},h],$ $|t_1-t_2|<\lambda$ implies
\begin{equation}\label{e}
|{t_1}^{1-\gamma}(Bx)(t_1)-{t_2}^{1-\gamma}(Bx)(t_2)|<2\epsilon.
\end{equation}
It follows from equations \eqref{d} and \eqref{e} that $\{{t}^{1-\gamma}(Bx)(t):x\in D_h\}$ is \textit{equicontinuous.} It is clear that $\{{t}^{1-\gamma}(Bx)(t):x\in D_h\}$ is \textit{uniformly bounded}, since $BD_h\subset D_h.$ By Lemma 2.2, $BD_h$ is precompact. Therefore $B$ is completely continuous. By the Schauder fixed point theorem and Lemma 2.1, the IVP \eqref{a} has a local solution. The proof is thus complete.
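The estimates above repeatedly use the Beta-integral identity $\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta}ds=\frac{\Gamma(1-\delta)}{\Gamma(\alpha-\delta+1)}t^{\alpha-\gamma-\delta+1}.$ As a quick numerical sanity check (ours, not part of the original argument), the following Python sketch compares a midpoint-rule evaluation of the integral with the closed form; the parameter values of $\alpha,\delta,\gamma,t$ below are illustrative choices of our own.

```python
import math

# Numerical check (ours, not from the paper) of the identity
#   t^{1-gamma}/Gamma(alpha) * int_0^t (t-s)^{alpha-1} s^{-delta} ds
#     = Gamma(1-delta)/Gamma(alpha-delta+1) * t^{alpha-gamma-delta+1}.
# The substitution s = t*u turns the integral into
#   t^{alpha-delta} * int_0^1 (1-u)^{alpha-1} u^{-delta} du   (a Beta integral).

alpha, delta, gam, t = 0.7, 0.3, 0.8, 1.5   # illustrative values, 0 <= delta < 1

# Midpoint rule on (0,1); the endpoint singularities u^{-delta} and
# (1-u)^{alpha-1} are integrable, so the midpoint sums converge.
N = 200_000
h = 1.0 / N
beta_integral = h * sum(((1.0 - (k + 0.5) * h) ** (alpha - 1.0))
                        * (((k + 0.5) * h) ** (-delta)) for k in range(N))

lhs = (t ** (1.0 - gam) / math.gamma(alpha)) * (t ** (alpha - delta)) * beta_integral
rhs = (math.gamma(1.0 - delta) / math.gamma(alpha - delta + 1.0)
       * t ** (alpha - gam - delta + 1.0))
rel_err = abs(lhs - rhs) / abs(rhs)
```

For these parameters the midpoint rule reproduces the closed form to a small relative error, consistent with the identity.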
\begin{theorem}
Suppose that $(H_2)$ holds. Then IVP \eqref{s1} has at least one solution $x_{i}\in C_{1-\gamma}[0,h]$ for some $h\in(0,T].$
\end{theorem}
\textbf{Proof:} Let
\begin{equation*}
E_{s}=\bigg\{x_{i}\in C_{1-\gamma}[0,T]:{\|x_{i}-\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}\|}_{C_{1-\gamma}[0,T]}=\sup_{0\leq t\leq T}|t^{1-\gamma}x_{i}(t)-\frac{x_0}{\Gamma(\gamma)}|\leq b_{i}\bigg\},
\end{equation*}
where $b_{i}>0$ $(i=1,2,\ldots,n)$ are constants. Since the operators $A_{i}$ $(i=1,2,\ldots,n)$ are bounded, there exist constants $M_{i}>0$ $(i=1,2,\ldots,n)$ such that $$\sup{ \big\{ |(A_{i}x_{i})(t)|:t\in[0,T], x_{i}\in E_{s}\big\}}\leq M_{i}, \quad i=1,2,\ldots,n.$$
\begin{equation*}
\hspace{-1cm}\text{Again let}\quad D_{ih}=\bigg\{x_{i}:x_{i}\in C_{1-\gamma}[0,h],\sup_{0\leq t\leq h}|t^{1-\gamma}x_{i}(t)-\frac{x_0}{\Gamma(\gamma)}|\leq b_{i}\bigg\},
\end{equation*}
\begin{equation*}
\text{where}\quad h=\min{ \Bigg\{{\bigg(\frac{b_1\Gamma(\alpha-\delta_1+1)}{M_1\Gamma(1-\delta_1)}\bigg)}^{\frac{1}{\alpha-\delta_1}},\cdots, {\bigg(\frac{b_n\Gamma(\alpha-\delta_n+1)}{M_n\Gamma(1-\delta_n)}\bigg)}^{\frac{1}{\alpha-\delta_n}}, \, T \Bigg\}}
\end{equation*}
with $\alpha>{\delta}_{i},\,i=1,2,\ldots,n.$ Clearly, the $D_{ih}\subset C_{1-\gamma}[0,h]$ are nonempty, closed, bounded, and convex subsets. Note that $h\leq T.$ For $t\in[0,h],$ define operators $B_{i}$ as follows.
\begin{equation}\label{s2}
\begin{cases}
(B_{1}x_{1})(t)&=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f_{1}(s,x_{1}(s),x_{2}(s),\ldots,x_{n}(s))ds,\\
(B_{2}x_{2})(t)&=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f_{2}(s,x_{1}(s),x_{2}(s),\ldots,x_{n}(s))ds,\\
&\cdots\\
(B_{n}x_{n})(t)&=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f_{n}(s,x_{1}(s),x_{2}(s),\ldots,x_{n}(s))ds.
\end{cases}
\end{equation}
By \eqref{s2}, for $x_{i}\in C_{1-\gamma}[0,h],$ we have
\begin{align*}
|t^{1-\gamma}(B_{1}x_{1})(t)-\frac{x_0}{\Gamma(\gamma)}|&=|\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta_1}
[s^{\delta_1}f_{1}(s,x_{1}(s),x_{2}(s),\ldots,x_{n}(s))]ds|\\
&\leq\frac{M_1}{\Gamma(\alpha)}t^{1-\gamma} {_{0}I_{t}^{\alpha}(t^{-\delta_1})}=\frac{M_1\Gamma(1-\delta_1)}{\Gamma(\alpha-\delta_1+1)}t^{\alpha-\delta_1-\gamma+1},\\
|t^{1-\gamma}(B_{1}x_{1})(t)-\frac{x_0}{\Gamma(\gamma)}|&\leq \frac{M_1\Gamma(1-\delta_1)}{\Gamma(\alpha-\delta_1+1)}h^{\alpha-\delta_1-\gamma+1}\leq b_{1},\\
|t^{1-\gamma}(B_{2}x_{2})(t)-\frac{x_0}{\Gamma(\gamma)}|&\leq \frac{M_2\Gamma(1-\delta_2)}{\Gamma(\alpha-\delta_2+1)}h^{\alpha-\delta_2-\gamma+1}\leq b_{2},\\
&\cdots\\
|t^{1-\gamma}(B_{n}x_{n})(t)-\frac{x_0}{\Gamma(\gamma)}|&\leq \frac{M_n\Gamma(1-\delta_n)}{\Gamma(\alpha-\delta_n+1)}h^{\alpha-\delta_n-\gamma+1}\leq b_{n},
\end{align*}
which shows that, $B_{i}D_{ih}\subset D_{ih}, i=1,2,..,n.$
Next we show that the operators $B_{i}$ are continuous. Let $x_{m},x_{i}\in D_{ih}$, $i=1,2,\ldots,n,$ be such that $\|x_{m}-x_{i}\|\to0$ as $m\to+\infty.$
In view of continuity of operators $A_{i},$ we have ${\|A_{i}x_{m}-A_{i}x_{i}\|}_{[0,h]}\to0$ as $m\to+\infty.$ Now noting that
\begin{align*}
|t^{1-\gamma}(B_{i}x_{m})&(t)-t^{1-\gamma}(B_{i}x_{i})(t)|=\\
&|\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f_{i}(s,x_{m}(s))ds-\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds|\\
&\leq \frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}|f_{i}(s,x_{m}(s))-f_{i}(s,x_{i}(s))|ds\\
&\leq \frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta_{i}}|A_{i}(x_{m})(s)-A_{i}(x_{i})(s)|ds\\
&\leq \frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta_{i}}ds{\|A_{i}(x_{m})(s)-A_{i}(x_{i})(s)\|}_{[0,h]}.
\end{align*}
\begin{equation*}
\hspace{-1cm}\text{we have}\qquad{\|(B_{i}x_{m})(s)-(B_{i}x_{i})(s)\|}_{[0,h]}\leq {\|A_{i}(x_{m})(s)-A_{i}(x_{i})(s)\|}_{[0,h]}\frac{\Gamma(1-\delta_i)}{\Gamma(\alpha-\delta_i+1)}h^{\alpha-\delta_i-\gamma+1}.
\end{equation*}
Then ${\|(B_{i}x_{m})(s)-(B_{i}x_{i})(s)\|}_{[0,h]}\to0$ as $m\to+\infty.$ Thus the $B_{i}$ are continuous. Furthermore, we prove that the sets $B_{i}D_{ih}$ are precompact. Let $x_{i}\in D_{ih}$ and $0\leq t_1<t_2\leq h.$ For any $\epsilon>0,$ note that
\begin{equation*}
\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta_i}ds=\frac{\Gamma(1-\delta_i)}{\Gamma(\alpha-\delta_i+1)}t^{\alpha-\delta_i-\gamma+1}\to0\quad\text{as}\quad t\to{0}^{+},
\end{equation*}
where $0\leq\delta_i<1.$ There exists $\tilde{{\delta}_{i}}\in(0,h)$ such that, for $t\in[0,\tilde{{\delta}_{i}}],$
\begin{equation*}
\frac{{2M_{i}}t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}s^{-\delta_i}ds<\epsilon
\end{equation*}
holds. In this case, for $t_1,t_2\in[0,\tilde{{\delta}_{i}}],$ we have
\begin{align}\label{e1}
|&\frac{{t_1}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds-\frac{{t_2}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds|\\
&\leq\frac{{M_{i}}{t_{1}}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}s^{-\delta_i}ds+\frac{{M_{i}}{t_{2}}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}s^{-\delta_i}ds<\epsilon.\nonumber
\end{align}
In the case for $t_1,t_2\in[\frac{\tilde{{\delta}_{i}}}{2},h],$ we get
\begin{align}\label{e2}
|{t_1}^{1-\gamma}&(B_{i}x_{i})(t_1)-{t_2}^{1-\gamma}(B_{i}x_{i})(t_2)|\nonumber\\
=&|\frac{{t_1}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds-\frac{{t_2}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds|\nonumber\\
\leq&|\frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}[{t_1}^{1-\gamma}(t_1-s)^{\alpha-1}-{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}]f_{i}(s,x_{i}(s))ds|\nonumber\\
&\hspace{1cm}+|\frac{{t_2}^{1-\gamma}}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds|
\end{align}
Using the fact that, if $0\leq\mu_1<\mu_2\leq h,$ then $\mu_{1}^{1-\gamma}(\mu_1-s)^{\alpha-1}>\mu_{2}^{1-\gamma}(\mu_2-s)^{\alpha-1}$ for $0\leq s<\mu_1,$ we obtain from the first term on the right-hand side of inequality \eqref{e2} that
\begin{align*}
|\frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}[&{t_1}^{1-\gamma}(t_1-s)^{\alpha-1}-{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}]f_{i}(s,x_{i}(s))ds|\\
\leq& \frac{M_{i}}{\Gamma(\alpha)}\int_{0}^{t_1}\big|[{t_1}^{1-\gamma}(t_1-s)^{\alpha-1}-{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}]s^{-\delta_i}\big|ds\\
\leq&\frac{M_{i}}{\Gamma(\alpha)}\int_{0}^{\frac{\tilde{{\delta}_{i}}}{2}}\big|[{t_1}^{1-\gamma}(t_1-s)^{\alpha-1}-{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}]s^{-\delta_i}\big|ds\\
&\hspace{0.3cm}+\frac{M_{i}{(\frac{\tilde{{\delta}_{i}}}{2})}^{-\delta_i}}{\Gamma(\alpha)}\int_{\frac{\tilde{{\delta}_{i}}}{2}}^{t_1}[{t_1}^{1-\gamma}(t_1-s)^{\alpha-1}-{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}]ds\\
\leq&\frac{2M_{i}{(\frac{\tilde{{\delta}_{i}}}{2})}^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{\frac{\tilde{{\delta}_{i}}}{2}}(\frac{\tilde{{\delta}_{i}}}{2}-s)^{\alpha-1}s^{-\delta_i}ds\\
&\hspace{0.3cm}+\frac{M_i{(\frac{\tilde{{\delta}_{i}}}{2})}^{-\delta_i}}{\Gamma(\alpha+1)}[{t_2}^{1-\gamma}(t_2-t_1)^{\alpha}-{t_2}^{1-\gamma}(t_2-\frac{\tilde{{\delta}_{i}}}{2})^{\alpha}+{t_{1}}^{1-\gamma}(t_1-\frac{\tilde{{\delta}_{i}}}{2})^{\alpha}]\\
\leq&\epsilon+\frac{M_i{(\frac{\tilde{{\delta}_{i}}}{2})}^{-\delta_i}}{\Gamma(\alpha+1)}[{h}^{1-\gamma}(t_2-t_1)^{\alpha}+{t_2}^{1-\gamma}(t_2-\frac{\tilde{{\delta}_{i}}}{2})^{\alpha}+{t_{1}}^{1-\gamma}(t_1-\frac{\tilde{{\delta}_{i}}}{2})^{\alpha}]
\end{align*}
On the other hand, for the second term on the right-hand side of \eqref{e2}, we have
\begin{align*}
|\frac{{t_2}^{1-\gamma}}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}f_{i}(s,x_{i}(s))ds|
&\leq\frac{M_i{(\frac{\tilde{{\delta}_{i}}}{2})}^{-\delta_i}}{\Gamma(\alpha)}\int_{t_1}^{t_2}{t_2}^{1-\gamma}(t_2-s)^{\alpha-1}ds\\
&=\frac{M_i{(\frac{\tilde{{\delta}_{i}}}{2})}^{-\delta_i}}{\Gamma(\alpha+1)}{t_2}^{1-\gamma}(t_2-t_1)^{\alpha}.
\end{align*}
From the above discussion, there exists a $\lambda\in(0,\frac{\tilde{{\delta}_{i}}}{2})$ such that, for $t_1,t_2\in[\frac{\tilde{{\delta}_{i}}}{2},h],$ $|t_1-t_2|<\lambda$ implies
\begin{equation}\label{e3}
|{t_1}^{1-\gamma}(B_{i}x_{i})(t_1)-{t_2}^{1-\gamma}(B_{i}x_{i})(t_2)|<2\epsilon.
\end{equation}
It follows from equations \eqref{e1} and \eqref{e3} that $\{t^{1-\gamma}(B_{i}x_{i})(t):x_{i}\in{D_{ih}}\}$ are \textit{equicontinuous.} It is also clear that $\{t^{1-\gamma}(B_{i}x_{i})(t):x_{i}\in{D_{ih}}\}$ is \textit{uniformly bounded}, since $B_{i}D_{ih}\subset{D_{ih}}.$ Therefore the $B_{i}D_{ih}$ are precompact, and so the operators $B_{i}$ are completely continuous. By the Schauder fixed point theorem and Lemma 2.1, the IVP \eqref{s1} has a local solution. This completes the proof.
\begin{remark}
If we let $\beta=0$ in IVP \eqref{a}, Theorem 3.1 above yields the local existence result (\cite{cz}, Theorem 3.1) associated with the R-L IVP (\cite{cz}, equation (1)).
\end{remark}
\begin{remark}
If we let $\beta=1$ in IVP \eqref{a}, Theorem 3.1 above yields the local existence result (\cite{ls}, Theorem 3.1) associated with the Caputo IVP (\cite{ls}, equation (1.1)).
\end{remark}
\section{Continuation and global existence} In this section, we are concerned with the continuation of solutions of IVP \eqref{a}, and then we obtain global existence. We first need the following definition.
\begin{definition}\cite{cz}
Let $x(t)$ on $(0,\nu)$ and $\tilde{x}(t)$ on $(0,\tilde{\nu})$ both be solutions of IVP \eqref{a}. If $\nu<\tilde{\nu}$ and $x(t)=\tilde{x}(t)$ for $t\in(0,\nu),$ we say that $\tilde{x}(t)$ is a continuation of $x(t),$ or that $x(t)$ can be continued to $(0,\tilde{\nu}).$ A solution $x(t)$ is noncontinuable if it has no continuation. The existing interval of a noncontinuable solution $x(t)$ is called the maximum existing interval of $x(t).$
\end{definition}
\begin{lemma}
\cite{cz} Let $0<\alpha<1,\nu>0,h>0,0\leq\sigma<1, u\in C_\sigma[0,\frac{\nu}{2}]$ and $v\in C[\frac{\nu}{2},\nu].$ Then
\begin{equation*}
I_1(t)=\int_{0}^{\frac{\nu}{2}}(t-s)^{\alpha-1}u(s)ds,\quad I_2(t)=\int_{\frac{\nu}{2}}^{{\nu}}(t-s)^{\alpha-1}v(s)ds
\end{equation*}
are continuous on $[\nu,\nu+h].$
\end{lemma}
\begin{theorem}[Continuation Theorem 1]
Assume that $(H_1)$ holds. Then $x=x(t), t\in (0,\nu),$ is noncontinuable if and only if for some $\tau\in (0,\frac{\nu}{2})$ and any bounded closed subset $D\subset[\tau,+\infty)\times\R,$ there exists a $t^{*}\in[\tau,\nu)$ such that $(t^{*},x(t^{*}))\notin D.$
\end{theorem}
\textbf{Proof:} We prove this theorem by contradiction. If possible, suppose that $x=x(t)$ is continuable. Then there exists a solution $\tilde{x}(t)$ defined on $(0,\tilde{\nu}),$ such that $x(t)=\tilde{x}(t)$ for $t\in (0,\nu),$ which implies $\lim_{t\to\nu^{-}}x(t)=\tilde{x}(\nu).$ Define $x(\nu)=\tilde{x}(\nu).$ Evidently, $K=\{(t,x(t)):t\in[\tau,\nu)\}$ is a compact subset of $[\tau,+\infty)\times\R.$ However, there exists no $t^*\in[\tau,\nu)$ such that $(t^*,x(t^*))\notin K.$ This contradiction shows that $x(t)$ is noncontinuable.\\
We prove the converse in two steps. Suppose that there exists a compact subset $\Omega\subset[\tau,+\infty)\times\R,$ such that $\{(t,x(t)):t\in[\tau,\nu)\}\subset\Omega.$ The compactness of $\Omega$ implies $\nu<+\infty.$ By $(H_1),$ there exists a $K>0$ such that $\sup_{(t,x)\in\Omega}|f(t,x)|\leq K.$\\
\textbf{Step 1.} We now show that $\lim_{t\to\nu^{-}}x(t)$ exists. Let
\begin{equation}\label{g1}
G(s,t)=|\frac{x_0}{\Gamma(\gamma)}s^{\gamma-1}-\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}|,\quad (s,t)\in[2\tau,\nu]\times[2\tau,\nu],
\end{equation}
\begin{equation}\label{g2}
J(t)=\int_{0}^{\tau}(t-s)^{\alpha-1}s^{-\delta}ds, \quad t\in[2\tau,\nu].
\end{equation}
Easily we can see that $G(s,t)$ and $J(t)$ are uniformly continuous on $[2\tau,\nu]\times[2\tau,\nu]$ and $[2\tau,\nu],$ respectively.
For all $t_1,t_2\in[2\tau,\nu),t_1<t_2,$ by equation \eqref{g1} we have
\begin{align*}
|x(t_1)-x(t_2)|&=|\frac{x_0}{\Gamma(\gamma)}{t_1}^{\gamma-1}-\frac{x_0}{\Gamma(\gamma)}{t_2}^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f(s,x(s))ds\\
&\hspace{1cm}-\frac{1}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq G(t_1,t_2)+|\frac{1}{\Gamma(\alpha)}\int_{0}^{t_1}(t_1-s)^{\alpha-1}f(s,x(s))ds\\
&\hspace{1cm}-\frac{1}{\Gamma(\alpha)}\int_{0}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq G(t_1,t_2)+|\frac{1}{\Gamma(\alpha)}\int_{0}^{\tau}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]s^{-\delta}[s^{\delta}f(s,x(s))]ds|\\
&\hspace{1cm}+|\frac{1}{\Gamma(\alpha)}\int_{\tau}^{t_1}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]f(s,x(s))ds|\\
&\hspace{1cm}+|\frac{1}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq G(t_1,t_2)+|\frac{1}{\Gamma(\alpha)}\int_{0}^{\tau}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]s^{-\delta}(Ax)(s)ds|\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}\int_{\tau}^{t_1}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]|f(s,x(s))|ds\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}|f(s,x(s))|ds\\
&\leq G(t_1,t_2)+\frac{{\|Ax\|}_{[0,\tau]}}{\Gamma(\alpha)}\int_{0}^{\tau}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]s^{-\delta}ds\\
&\hspace{1cm}+\frac{K}{\Gamma(\alpha)}\int_{\tau}^{t_1}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]ds+\frac{K}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}ds\\
|x(t_1)-x(t_2)|&\leq G(t_1,t_2)+{\|Ax\|}_{[0,\tau]}|J(t_1)-J(t_2)|\\
&\hspace{1cm}+\frac{K}{\Gamma(\alpha+1)}\bigg[\big[(t_2-s)^{\alpha}-(t_1-s)^{\alpha}\big]{\bigg|}_{\tau}^{t_1}-\big[(t_2-s)^{\alpha}\big]{\bigg|}_{t_1}^{t_2}\bigg]\\
&\leq G(t_1,t_2)+{\|Ax\|}_{[0,\tau]}|J(t_1)-J(t_2)|\\
&\hspace{1cm}+\frac{K}{\Gamma(\alpha+1)}\big[2(t_2-t_1)^{\alpha}+(t_1-\tau)^{\alpha}-(t_2-\tau)^{\alpha}\big].
\end{align*}
From the continuity of $G(s,t)$ and $J(t)$ together with the Cauchy convergence criterion, we obtain $\lim_{t\to\nu^{-}}x(t)=x^{*}.$\\
\textbf{Step 2.} Now we show that $x(t)$ is continuable. Since $\Omega$ is a closed subset, we have $(\nu,x^{*})\in\Omega.$ Define $x(\nu)=x^{*}.$ Then $x(t)\in C_{1-\gamma}[0,\nu].$ We define the operator
\begin{equation*}
(Ny)(t)=x_1(t)+\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}f(s,y(s))ds, \qquad t\in[\nu,\nu+1],
\end{equation*}
where $y\in C[\nu,\nu+1]$ and
\begin{equation*}
x_1(t)=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{\nu}(t-s)^{\alpha-1}f(s,x(s))ds, \qquad t\in[\nu,\nu+1].
\end{equation*}
In view of Lemma 2.5 and Lemma 4.1, we have $N(C[\nu,\nu+1])\subset C[\nu,\nu+1].$ Let
\begin{equation*}
E_b=\big\{ (t,y):\nu\leq t\leq\nu+1, |y|\leq{\max_{\nu\leq t\leq\nu+1}}|x(t)|+b \big\}, \,\, b>0.
\end{equation*}
In view of continuity of $f$ on $E_b,$ we denote $M={\max}_{(t,y)\in E_b}|f(t,y)|.$ Let
\begin{equation*}
E_h=\big\{ y\in C[\nu,\nu+h]:{\max_{t\in[\nu,\nu+h]}}|y(t)-x_1(t)|\leq b, y(\nu)=x_1(\nu) \big\},
\end{equation*}
where $h=\min\bigg\{ 1,\big(\frac{\Gamma(\alpha+1)b}{M} \big)^{\frac{1}{\alpha}}\bigg\}.$\\
We claim that $N$ is completely continuous on $E_h.$ First we show the operator $N$ is continuous. In fact, let $\{y_n\}\subseteq C[\nu,\nu+h],$ ${\|y_n-y\|}_{[\nu,\nu+h]}\to 0$ as $n\to+\infty.$ Then we have
\begin{align*}
|(Ny_n)(t)-(N&y)(t)|=|\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}[f(s,y_n(s))-f(s,y(s))]ds|\\
&\leq\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}|f(s,y_n(s))-f(s,y(s))|ds \\
&\leq {\|f(s,y_n(s))-f(s,y(s))\|}_{[\nu,\nu+h]}\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}ds\\
&= {\|f(s,y_n(s))-f(s,y(s))\|}_{[\nu,\nu+h]}\frac{(t-\nu)^{\alpha}}{\Gamma(\alpha+1)}\\
&\leq{\|f(s,y_n(s))-f(s,y(s))\|}_{[\nu,\nu+h]}\frac{h^{\alpha}}{\Gamma(\alpha+1)}.
\end{align*}
By virtue of the continuity of $f$ on $E_b,$ we have ${\|Ny_n-Ny\|}_{[\nu,\nu+h]}\to 0$ as $n\to+\infty,$ which implies that $N$ is continuous.\\
Secondly, we prove that $NE_h\subset E_h.$ For any $y\in E_h,$ we have $(Ny)(\nu)=x_1(\nu)$ and
\begin{align*}
|(Ny)(t)-x_1(t)|&=|\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}f(s,y(s))ds|\\
&\leq\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}|f(s,y(s))|ds \\
&\leq\frac{M}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}ds\\
&\leq \frac{M(t-\nu)^{\alpha}}{\Gamma(\alpha+1)}\\
&\leq\frac{Mh^{\alpha}}{\Gamma(\alpha+1)}\leq b.
\end{align*}
Thus $NE_h\subset E_h.$\\
Set $I(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{\nu}(t-s)^{\alpha-1}f(s,x(s))ds.$ By Lemma 4.1, $I(t)$ is continuous on $[\nu,\nu+h].$ For every $y\in E_h$, $\nu\leq t_1<t_2\leq\nu+h,$ we have
\begin{align}\label{f}
|(Ny)(t_1)-&(Ny)(t_2)|\leq G(t_1,t_2)+\frac{1}{\Gamma(\alpha)}|\int_{0}^{\nu}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]f(s,x(s))ds|\nonumber\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}|\int_{\nu}^{t_1}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]f(s,y(s))ds|\nonumber\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}|\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}f(s,y(s))ds|\nonumber\\
&\leq G(t_1,t_2)+\frac{1}{\Gamma(\alpha)}|\int_{0}^{\nu}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]f(s,x(s))ds|\nonumber\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t_1}[(t_1-s)^{\alpha-1}-(t_2-s)^{\alpha-1}]|f(s,y(s))|ds\nonumber\\
&\hspace{1cm}+\frac{1}{\Gamma(\alpha)}\int_{t_1}^{t_2}(t_2-s)^{\alpha-1}|f(s,y(s))|ds\nonumber\\
&\leq G(t_1,t_2)+|I(t_1)-I(t_2)|\nonumber\\
&\hspace{1cm}+\frac{M}{\Gamma(\alpha+1)}\bigg[\big[(t_2-s)^{\alpha}-(t_1-s)^{\alpha}\big]{\bigg|}_{\nu}^{t_1}-\big[(t_2-s)^{\alpha}\big]{\bigg|}_{t_1}^{t_2}\bigg]\nonumber\\
&\leq G(t_1,t_2)+|I(t_1)-I(t_2)|\nonumber\\
&\hspace{.5cm}+\frac{M}{\Gamma(\alpha+1)}\big[(t_1-\nu)^{\alpha}+(t_2-t_1)^{\alpha}-(t_2-\nu)^{\alpha}+(t_2-t_1)^{\alpha}\big]\nonumber\\
&\leq G(t_1,t_2)+|I(t_1)-I(t_2)|\nonumber\\
&\hspace{.5cm}+\frac{M}{\Gamma(\alpha+1)}\bigg[2(t_2-t_1)^{\alpha}+(t_1-\nu)^{\alpha}-(t_2-\nu)^{\alpha}\bigg]
\end{align}
In view of uniform continuity of $I(t)$ on $[\nu,\nu+h]$ and relation \eqref{f}, we obtain $\{(Ny)(t):y\in E_h\}$ is equicontinuous.
Therefore $N$ is completely continuous. By Schauder's fixed point theorem, $N$ has a fixed point $\tilde{x}(t)\in E_h,$ that is,
\begin{align*}
\tilde{x}(t)&=x_1(t)+\frac{1}{\Gamma(\alpha)}\int_{\nu}^{t}(t-s)^{\alpha-1}f(s,\tilde{x}(s))ds\\
&=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,\bar{x}(s))ds,
\end{align*}
\begin{equation*}
\hspace{-5cm}\text{where}\qquad\qquad\bar{x}(t)=\begin{cases}
x(t), & \mbox{if }\,\, t\in(0,\nu],\\
\tilde{x}(t), & \mbox{if}\,\,\,t\in[\nu,\nu+h].
\end{cases}
\end{equation*}
It follows from Lemma 2.3 that $\bar{x}\in C_{1-\gamma}[0,\nu+h]$ and
\begin{equation*}
\bar{x}(t)=\frac{x_0}{\Gamma(\gamma)}t^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,\bar{x}(s))ds.
\end{equation*}
Therefore, in view of Lemma 2.5, $\bar{x}(t)$ is a solution of IVP \eqref{a} on $(0,\nu+h].$ This yields a contradiction, since $x(t)$ is noncontinuable. This completes the proof.
Now we present another continuation theorem, which is more convenient for application purposes.
\begin{theorem}[Continuation Theorem 2]
Assume that $(H_1)$ holds. Then $x=x(t), t\in(0,\nu),$ is noncontinuable if and only if
\begin{equation}\label{g}
\limsup_{t\to\nu^{-}}{|M(t)|}=+\infty,
\end{equation}
where $\qquad M(t)=(t,x(t)),\qquad |M(t)|=(t^2+x^2(t))^{\frac{1}{2}}.$
\end{theorem}
\textbf{Proof:} We prove this theorem by contradiction. If possible, suppose $x=x(t)$ is continuable. Then there exists a solution $\tilde{x}(t)$ of IVP \eqref{a} defined on $(0,\tilde{\nu}),$ $\nu<\tilde{\nu},$ such that $x(t)=\tilde{x}(t)$ for $t\in(0,\nu),$ which implies ${\lim}_{t\to\nu^{-}}x(t)=\tilde{x}(\nu).$\\
Thus $|M(t)|\to|M(\nu)|$ as $t\to\nu^{-},$ which contradicts \eqref{g}.\\
Conversely, suppose that relation \eqref{g} is not true. Then there exists a sequence $\{t_n\}$ and constant $L>0$ such that
\begin{align}\label{h}
t_n<t_{n+1}&, \,\,n\in \N,\nonumber\\
{\lim}_{n\to\infty}t_n&=\nu, \,\, |M(t_n)|\leq L,\\
\text{i.e.,} \,\, {t_n}^2+x^2(t_n)&\leq {L}^{2}.\nonumber
\end{align}
Since $\{x(t_n)\}$ is bounded, it has a convergent subsequence; without loss of generality, set
\begin{equation}\label{i}
{\lim}_{n\to+\infty}x(t_n)=x^{*}.
\end{equation}
Now we show that, for any given $\epsilon>0,$ there exists $T\in(0,\nu)$ such that $|x(t)-x^{*}|<\epsilon$ for $t\in(T,\nu),$ that is,
\begin{equation}\label{j}
{\lim}_{t\to\nu^{-}}x(t)=x^{*}.
\end{equation}
For sufficiently small $\tau>0,$ let $E_1=\big\{ (t,x):t\in [\tau,\nu], |x|\leq{\sup}_{t\in[\tau,\nu)}|x(t)|\big\}.$
Since $f$ is continuous on $E_1,$ denote $M={\max}_{(t,y)\in E_1}|f(t,y)|.$
It follows from equations \eqref{h} and \eqref{i} that there exists $n_0$ such that $t_{n_0}>\tau$ and for $n\geq n_0,$ we have $$|x(t_n)-x^{*}|\leq{\frac{\epsilon}{2}}.$$
If equation \eqref{j} is not true, then for $n\geq n_0,$ there exists $\eta_{n}\in(t_n,\nu)$ such that for $t\in(t_n,\eta_n),$ $|x(t)-x^{*}|<\epsilon$ and $|x(\eta_n)-x^{*}|\geq\epsilon$. Thus
\begin{align*}
\epsilon&\leq |x(\eta_n)-x^{*}|\leq |x(t_n)-x^{*}|+|x(\eta_n)-x(t_n)|\\
&\leq\frac{\epsilon}{2}+|\frac{1}{\Gamma(\alpha)}\int_{0}^{t_n}(t_n-s)^{\alpha-1}f(s,x(s))ds-\frac{1}{\Gamma(\alpha)}\int_{0}^{\eta_n}(\eta_n-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq\frac{\epsilon}{2}+\frac{1}{\Gamma(\alpha)}|\int_{0}^{\tau}[(t_n-s)^{\alpha-1}-(\eta_n-s)^{\alpha-1}]f(s,x(s))ds|\\
&\hspace{0.5cm}+\frac{1}{\Gamma(\alpha)}|\int_{\tau}^{t_n}[(t_n-s)^{\alpha-1}-(\eta_n-s)^{\alpha-1}]f(s,x(s))ds|\\
&\hspace{0.5cm}+\frac{1}{\Gamma(\alpha)}|\int_{t_n}^{\eta_n}(\eta_n-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq\frac{\epsilon}{2}+\frac{{\|Ax\|}_{[0,\tau]}}{\Gamma(\alpha)}|J(t_n)-J(\eta_n)|\\
&\hspace{0.5cm}+\frac{M}{\Gamma(\alpha+1)}\bigg[\big[(\eta_n-s)^{\alpha}-(t_n-s)^{\alpha}\big]{\bigg|}_{\tau}^{t_n}-\big[(\eta_n-s)^{\alpha}\big]{\bigg|}_{t_n}^{\eta_n} \bigg]\\
&\leq\frac{\epsilon}{2}+\frac{{\|Ax\|}_{[0,\tau]}}{\Gamma(\alpha)}|J(t_n)-J(\eta_n)|\\
&\hspace{0.5cm}+\frac{M}{\Gamma(\alpha+1)}\bigg[2(\eta_n-t_n)^{\alpha}+(t_n-\tau)^{\alpha}-(\eta_n-\tau)^{\alpha}\bigg],
\end{align*}
where $J(t)$ is defined by \eqref{g2}. In view of the continuity of $J(t)$ on $[t_{{n}_0},\nu],$ for sufficiently large $n\geq n_0,$ we have
\begin{equation*}
\frac{{\|Ax\|}_{[0,\tau]}}{\Gamma(\alpha)}|J(t_n)-J(\eta_n)|+\frac{M}{\Gamma(\alpha+1)}\bigg[2(\eta_n-t_n)^{\alpha}+(t_n-\tau)^{\alpha}-(\eta_n-\tau)^{\alpha}\bigg]<\frac{\epsilon}{2}
\end{equation*}
implies $\epsilon\leq|x(\eta_n)-x^{*}|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.$ This is a contradiction; hence ${\lim}_{t\to{\nu^{-}}}x(t)$ exists.\\
By using arguments similar to those in the proof of Theorem 4.1, we can easily find the continuation of $x(t).$ The proof is complete.
\begin{remark}
For $\beta=0,$ Continuation Theorems 1 and 2 reduce to the continuation theorems for the R-L IVP (\cite{cz}, Theorems 4.1 and 4.2, respectively).
\end{remark}
\begin{remark}
For $\beta=1,$ Continuation Theorems 1 and 2 reduce to the continuation theorems for the Caputo IVP (\cite{ls}, Theorems 4.2 and 4.4, respectively).
\end{remark}
Now we study the global existence of solutions of IVP \eqref{a}, based on the results obtained in the earlier sections.
Applying Continuation Theorem 2, we obtain the following conclusion about the existence of a global solution of IVP \eqref{a}.
\begin{theorem}
Suppose that $(H_1)$ holds. Let $x(t)$ be a solution of IVP \eqref{a} on $(0,\nu).$ If $x(t)$ is bounded on $[\tau,\nu)$ for some $\tau>0,$ then $\nu=+\infty.$
\end{theorem}
Continuing our discussion, we first need the following lemma for the further results in our analysis.
\begin{lemma}\cite{cz}
Let $v:[0,b]\to[0,+\infty)$ be a real function, and let $w(\cdot)$ be a nonnegative locally integrable function on $[0,b].$ Assume that there exist $a>0$ and $0<\alpha<1$ such that
\begin{equation*}
v(t)\leq w(t)+a\int_{0}^{t}(t-s)^{-\alpha}v(s)ds.
\end{equation*}
Then there exists a constant $k=k(\alpha)$ such that for $t\in[0,b],$ we have
\begin{equation*}
v(t)\leq w(t)+ka\int_{0}^{t}(t-s)^{-\alpha}w(s)ds.
\end{equation*}
\end{lemma}
\begin{theorem}
Suppose that $(H_1)$ holds and there exist three nonnegative continuous functions $l(t),m(t),p(t):[0,\infty)\to[0,\infty)$ such that $|f(t,x)|\leq l(t)m(|x|)+p(t),$ where $m(r)\leq r$ for $r\geq0.$ Then IVP \eqref{a} has a solution in $C_{1-\gamma}[0,\infty).$
\end{theorem}
\textbf{Proof:} The existence of a local solution $x(t)$ of IVP \eqref{a} can be concluded from Theorem 3.1. By Lemma 2.1, $x(t)$ satisfies the integral equation \eqref{b}. Suppose that $[0,\nu),\, \nu<+\infty,$ is the maximum existing interval of $x(t).$ Then
\begin{align*}
|t^{1-\gamma}x(t)|&=|\frac{x_0}{\Gamma(\gamma)}+\frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,x(s))ds|\\
&\leq\frac{x_0}{\Gamma(\gamma)}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}[l(s)m(s^{1-\gamma}|x(s)|)+p(s)]ds\\
&\leq\frac{x_0}{\Gamma(\gamma)}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}l(s)m(s^{1-\gamma}|x(s)|)ds\\
&\hspace{1cm}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}p(s)ds\\
&\leq\frac{x_0}{\Gamma(\gamma)}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}{\|l\|}_{[0,\nu]}\int_{0}^{t}(t-s)^{\alpha-1}m(s^{1-\gamma}|x|)ds\\
&\hspace{1cm}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}p(s)ds\\
\end{align*}
Take $v(t)=t^{1-\gamma}|x(t)|,$ $w(t)=\frac{x_0}{\Gamma(\gamma)}+\frac{\nu^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}p(s)ds,$ and $a=\frac{\nu^{1-\gamma}{\|l\|}_{[0,\nu]}}{\Gamma(\alpha)}.$
By Lemma 4.2, we know that $v(t)=t^{1-\gamma}|x(t)|$ is bounded on $[0,\nu).$ Thus, for any $\tau\in(0,\nu),$ $x(t)$ is bounded on $[\tau,\nu),$ which by Theorem 4.3 contradicts $\nu<+\infty.$ Hence the IVP \eqref{a} has a solution $x(t)$ on $[0,\infty).$
The following result guarantees the existence and uniqueness of a global solution of IVP \eqref{a} on ${\R}^{+}.$
\begin{theorem}
Suppose that $(H_1)$ is satisfied and there exists a nonnegative continuous function $l(t)$ defined on $[0,\infty)$ such that $|f(t,x)-f(t,y)|\leq l(t)|x-y|.$ Then IVP \eqref{a} has a unique solution in $C_{1-\gamma}[0,\infty).$
\end{theorem}
The existence of a global solution can be obtained by arguments similar to those above. From the Lipschitz-type condition and Lemma 4.2, we can conclude the uniqueness of the global solution. We omit the proof here.
\section{Concluding remarks} In this paper, the global existence of a unique solution of a nonlinear IVP with the Hilfer fractional derivative is proved with the help of fixed point techniques and continuation theorems. Continuation Theorem 2 is the more convenient of the two for practical applications. Our results generalize existing results in the literature.
\setcounter{equation}{0}
Let $(\varepsilon_n)_{n \geq 1}$ be independent random matrices taking values in $G= GL_d(\mathbb R)$, $d \geq 2$ (the group of invertible $d$-dimensional real matrices) with common distribution $\mu$. Let $\Vert \cdot \Vert$ be the Euclidean norm on ${\mathbb R}^d$, and for every $A \in GL_d(\mathbb R)$, let $\|A\|=\sup_{x, \|x\|=1} \|A x \|$. We shall say that $\mu $ has a moment of order $p \geq 1$ if
\[
\int_G (\log N(g) )^p d \mu(g) < \infty \, ,
\]
where $N(g) := \max ( \Vert g \Vert , \Vert g^{-1} \Vert)$.
Let $A_n= \varepsilon_n \cdots \varepsilon_1$. It follows from Furstenberg and Kesten \cite{FK} that, if $\mu$ admits a moment of order $1$ then
\beq \label{SL1}
\lim_{n \rightarrow \infty} \frac{1}{n} \log \Vert A_n \Vert = \lambda_{\mu} \, \text{ ${\mathbb P}$-a.s.},
\eeq
where $ \lambda_{\mu} := \lim_{n \rightarrow \infty} n^{-1} \E \log \Vert A_n \Vert $ is the so-called first Lyapunov exponent.
Let now $X:= P({\mathbb R}^d)$ be the projective space of ${\mathbb R}^d $ and write ${\bar x}$ as the projection of $x \in {\mathbb R}^d -\{0\}$ to $X$. An element $A$ of $G= GL_d(\mathbb R)$ acts on the projective space $X$ as follows: $A \bar x = \overline{Ax}$. Let $\Gamma_\mu$ be the closed semi-group generated by the support of $\mu$. We say that $\mu$ is proximal if $\Gamma_\mu$ contains a matrix that admits a unique (with multiplicity $1$) eigenvalue of maximal modulus. We say that $\mu$ is strongly irreducible if no proper union of subspaces of ${\mathbb R}^d$ is invariant by $\Gamma_\mu$. Throughout the paper, we assume that $\mu$ is strongly irreducible and proximal.
In particular, there exists a unique invariant measure $\nu$ on ${\mathcal B} (X)$ with respect to $ \mu$, meaning that for any continuous and bounded function $h$ from $X$ to $\mathbb R$,
\beq \label{defnu}
\int_X h(x) d \nu(x) = \int_G \int_X h( g \cdot x ) d \mu(g) d \nu(x) \, .
\eeq
Note that, since $\mu$ is assumed to be strongly irreducible, the following strong law holds (see for instance \cite{BL}, Proposition 7.2 page 72): for any $x \in {\mathbb R}^d -\{0\}$,
\beq \label{SL2}
\lim_{n \rightarrow \infty} \frac{1}{n} \log \Vert A_n x \Vert = \lambda_{\mu} \, \text{ ${\mathbb P}$-a.s.}
\eeq
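As a small numerical illustration (ours, not from the cited works), the strong laws \eqref{SL1} and \eqref{SL2} can be checked by simulation. The sketch below takes $d=2$ and lets each $\varepsilon_n$ have i.i.d. $N(0,1)$ entries — an assumption made only for this example (such matrices are almost surely invertible, and the resulting law is strongly irreducible and proximal). By rotation invariance, $\varepsilon u$ is a standard two-dimensional Gaussian vector for every unit vector $u$, so the increments $\log\Vert\varepsilon_k v_{k-1}\Vert$ along the normalised iterates are i.i.d. $\log\chi_2$ variables and $\lambda_\mu=\tfrac12(\log 2-\gamma_E)$ in closed form, $\gamma_E$ being Euler's constant.

```python
import math
import random

# Monte Carlo sketch (ours) of (1/n) log ||A_n x|| -> lambda_mu for 2x2
# matrices with i.i.d. N(0,1) entries.  In this rotation-invariant example
# lambda_mu = E[log chi_2] = (log 2 - Euler gamma)/2 exactly.

EULER_GAMMA = 0.5772156649015329
LAMBDA_EXACT = 0.5 * (math.log(2.0) - EULER_GAMMA)

def lyapunov_estimate(n, x, seed):
    """Estimate (1/n) log ||A_n x|| with renormalisation at every step."""
    rng = random.Random(seed)
    v = (x[0], x[1])                     # unit starting vector
    total = 0.0
    for _ in range(n):
        a, b, c, d = (rng.gauss(0.0, 1.0) for _ in range(4))
        w0, w1 = a * v[0] + b * v[1], c * v[0] + d * v[1]
        norm = math.hypot(w0, w1)
        total += math.log(norm)          # log ||eps_k v_{k-1}||, v normalised
        v = (w0 / norm, w1 / norm)
    return total / n

n = 200_000
est1 = lyapunov_estimate(n, (1.0, 0.0), seed=0)
est2 = lyapunov_estimate(n, (0.6, 0.8), seed=0)  # same matrices, other direction
```

With $n=2\cdot 10^{5}$ the estimate matches the closed form to within a few thousandths, and the two starting directions give nearly identical values, as \eqref{SL2} predicts.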
To specify the rate of convergence in the laws of large numbers \eqref{SL1} and \eqref{SL2}, it is then natural to address the question of the Central Limit Theorem for the two sequences
$\log \| A_n \| - n\lambda_\mu$ and $\log \| A_n x \| - n\lambda_\mu$. To specify the limiting variance in these central limit theorems, let us introduce some notations: $W_0$ will denote a random variable with values in the projective space $X$, independent of $(\varepsilon_n)_{n \geq 1}$ and with distribution $\nu$. By the invariance of $\nu$, we see that the process $(A_n W_0)_{n \geq 1}$ is a strictly stationary process. Denote also by
$V_0$ a random variable such that $\Vert V_0 \Vert =1 $ and ${\bar V}_0 = W_0$. Setting, $S_n = \log \| A_nV_0 \| - n\lambda_\mu$, Benoist and Quint \cite{BQ} proved that if $\mu$ has a moment of order $2$, then
\beq \label{definitions2}
\lim_{n \rightarrow \infty} \frac{1}{n} \E ( S_n^2) = s^2 >0 \, ,
\eeq
\beq \label{BQCLT}
\lim_{n \rightarrow \infty} \sup_{t \in {\mathbb R}} \sup_{x, \|x\|=1}\left | {\mathbb P} \left (
\log \| A_n x \| - n \lambda_\mu \leq t \sqrt n \right ) - \Phi (t/ s) \right | =0 \, ,
\eeq
and
\beq \label{BQCLT2}
\lim_{n \rightarrow \infty} \sup_{t \in {\mathbb R}} \left | {\mathbb P} \left (
\log \| A_n \| - n \lambda_\mu \leq t \sqrt n \right ) - \Phi (t/ s) \right | =0 \, ,
\eeq
where $\Phi$ is the cumulative distribution function of a standard normal distribution. Let us mention that \eqref{BQCLT} was first established by Le Page \cite{LP} under an exponential moment for $\mu$ (meaning that $\int_G (N(g))^\alpha d \mu(g) < \infty$ for some $\alpha>0$, see also \cite{GR}) and then by Jan \cite{Jan} under the condition that $\mu$ has a moment of order $p>2$.
In the present paper, we are interested in Berry-Esseen type bounds in these central limit theorems, under polynomial moments for $\mu$ (more precisely we shall focus on the case of moments of order $q \in ]2,3]$ or $q=4$). Before giving our main results, let us briefly describe the previous works on this subject.
When $\mu$ has an exponential moment, Le Page \cite{LP} proved the following inequality: there exists a positive constant $C$ such that
\beq \label{Lepage}
\sup_{t \in {\mathbb R}} \sup_{x, \|x\|=1}\left | {\mathbb P} \left (
\log \| A_n x \| - n \lambda_\mu \leq t \sqrt n \right ) - \Phi (t/ s) \right | \leq C v_n \, \text{ with } \, v_n= \frac{1}{\sqrt{n}} \, .
\eeq
Still in the case of exponential moments, Edgeworth expansions (a strengthening of the Berry-Esseen theorem) have recently been obtained by Fernando and P\`ene \cite{FP20} and Xiao et al. \cite{XGLeuropean}. In these last three papers, the assumption that $\mu$ has an exponential moment is crucial, since it makes it possible to use the strength of the so-called Nagaev-Guivarc'h perturbation method. Indeed, in the case of exponential moments, the associated complex perturbed transfer operator has spectral gap properties.
Now, under the assumption that $\mu$ has finite moments of all orders, Jan \cite{Jan} obtained the rate $v_n= n^{-1/2+ \varepsilon}$ for any $\varepsilon >0$ in \eqref{Lepage}. Next,
Cuny et al. \cite{CDJ} gave an upper bound of order $v_n = n^{-1/4} \sqrt{\log n}$ in \eqref{Lepage} provided $\mu$ has a moment of order $3$ (as a consequence of an upper bound of order
$ n^{-1/2} \log n$ for the Kantorovich metric). More recently, Jirak \cite{Ji20} proved that, if $\mu$ has a moment of order $p>8$, then
there exists a positive constant $C$ such that
\beq \label{Jirak}
\sup_{t \in {\mathbb R}} \left | {\mathbb P} \left (
\log \| A_n V_0 \| - n \lambda_\mu \leq t \sqrt n \right ) - \Phi (t/ s) \right | \leq C v_n \, \text{ with } \, v_n= \frac{1}{\sqrt{n}} \, .
\eeq
This result is based on some refinements of the arguments developed in a previous paper of the same author (see \cite{Ji}), and then on a completely different method than the perturbation method for the transfer operator.
Since our proofs will use a similar scheme, let us briefly explain it. First, due to the cocycle property (see the beginning of Section \ref{BEbounds}), $\log \| A_n V_0 \| - n \lambda_\mu$ is written as a partial sum associated with functions of a stationary Markov chain, which can also be viewed as a function of iid random elements (see also \cite{CDM}). Using conditional expectations, the underlying random variables are then approximated by $m$-dependent variables, say $X_{k,m}$. Next, to break the dependence, a blocking procedure is used and the partial sum $\sum_{k=1}^n X_{k,m}$ is decomposed into two terms. The first one can be rewritten as a sum of random variables, say $Y_j^{(1)}$, defined as blocks of size $2m$ of the $X_{k,m}$'s. These random blocks have the following property: conditionally on ${\mathbb F}_m $ (a particular $\sigma$-algebra generated by a part of the $\varepsilon_k$'s), they are independent. In addition, for any bounded measurable function $h$, the random variables $Z_j= \E (h ( Y_j^{(1)}) | {\mathbb F}_m) $ are one-dependent. On the other hand, the second term in the decomposition of $\sum_{k=1}^n X_{k,m}$ is ${\mathbb F}_m $-measurable and can be written as a sum of independent blocks of the initial random variables. For both terms in the decomposition, the conditional independence of the blocks comes from the independence of the $\varepsilon_k$'s. The next steps of the proof consist in working conditionally on ${\mathbb F}_m $ and then giving suitable upper bounds for the conditional characteristic function of the blocks $Y_j^{(1)}$.
Concerning matrix norms, we first note that the Berry-Esseen bound of order $n^{-1/4} \sqrt{\log n}$ under a moment of order $3$ is still valid for $\log \| A_n \| -n \lambda_\mu$ instead of $\log \| A_n x \| - n \lambda_\mu$ (see the discussion in Section 8 of \cite{CDJ}). Moreover, if $\mu$ has an exponential moment, Xiao et al. \cite{XGL} proved that there exists a positive constant $C$ such that
\beq \label{Liu}
\sup_{t \in {\mathbb R}} \left | {\mathbb P} \left (
\log \| A_n \| - n \lambda_\mu \leq t \sqrt n \right ) - \Phi (t/ s) \right | \leq C w_n \, \text{ with } \,w_n = \frac{ \log n}{\sqrt{n}} \, .
\eeq
Note that in \cite{XGL}, the authors also proved a similar upper bound for $\log (\rho (A_n) )$ where $\rho (A_n)$ is the spectral radius of $A_n$.
\smallskip
In the present paper, we prove that:
\begin{itemize}
\item If $\mu$ has a moment of order $q \in ]2,3]$, then the rate in \eqref{Lepage} (and hence in \eqref{Jirak}) is $ v_n = ( \log n/n)^{q/2-1}$ and the rate in
\eqref{Liu} is $w_n =( \log n/n)^{q/2-1}$.
\item If $\mu$ has a moment of order 4, then the rate in \eqref{Lepage} (and hence in \eqref{Jirak}) is $ v_n= n^{-1/2} $ and the rate in
\eqref{Liu} is $w_n = n^{-1/2} $.
\end{itemize}
To prove these results, we follow the blocking approach used in Jirak \cite{Ji,Ji20} (and described above), but with substantial changes. We refer to Comment \ref{comment31} for a flavor of these changes. One of the main changes is the use of the dependency coefficients defined in
\cite{CDJ} (see also \eqref{delta-def} in Section \ref{sectionproofs}) which are well adapted to the study of the process $( \log \| A_n x \| - n \lambda_\mu)_{n \geq 1}$, instead of the coupling coefficients used in \cite{Ji20}.
\smallskip
The paper is organized as follows. In Section \ref{BEbounds}, we state our main results about Berry-Esseen type bounds in the context of left random walks when $\mu$ has either a moment of order $q \in ]2,3]$ or a moment of order $4$. All the proofs are postponed to Section \ref{sectionproofs}. Some technical lemmas used in the proofs are stated and proved in Section \ref{TL}.
\smallskip
In the rest of the paper, we shall use the following notations: for two sequences $(a_n)_{n \geq 1}$ and $(b_n)_{n \geq 1}$ of positive reals, $a_n \ll b_n$ means that there exists a positive constant $C$ not depending on $n$ such that $a_n \leq C b_n$ for any $n\geq 1$. Moreover, given a $\sigma$-algebra ${\mathcal F}$, we shall often use the notation $\E_{{\mathcal F}} ( \cdot) = \E ( \cdot |{\mathcal F}) $.
\begin{Remark}
After this article was submitted, we became aware of the paper
by Dinh, Kaufmann and Wu \cite{DKW}, in which the authors obtain the bound \eqref{Lepage} with $v_n = n^{-1/2}$
when $\mu$ has a moment of order $3$, but only in the case $d = 2$. Note that, in the same paper and still
in the case $d = 2$, a Local Limit Theorem is also established for $\log \Vert A_n x \Vert$.
\end{Remark}
\section{Berry-Esseen bounds} \label{BEbounds}
\setcounter{equation}{0}
Recall the notations in the introduction: let $(\varepsilon_n)_{n \geq 1}$ be independent random matrices taking values in $G= GL_d(\mathbb R)$, $d \geq 2$, with common distribution $\mu$. Let $A_n= \varepsilon_n \cdots \varepsilon_1$ for $n\geq 1$, and $A_0= \mathrm{Id}$. We assume that $\mu$ is strongly irreducible and proximal, and we denote by $\nu$ the unique distribution on $X=P({\mathbb R}^d)$ satisfying \eqref{defnu}.
Let now $V_0$ be a random variable independent of $(\varepsilon_n)_{n \geq 1}$, taking values in ${\mathbb R}^d$, such that $\Vert V_0 \Vert =1$ and ${\overline {V_0}}$ is distributed according to $\nu$.
The behavior of $\log \| A_n V_0 \| - n \lambda_\mu $ (where $\lambda_\mu$ is the first Lyapunov exponent defined right after \eqref{SL1}) can be handled with the help of an additive cocycle, which can also be viewed as a function of a stationary Markov chain. More precisely, let $W_0=\overline {V_0}$ (so that $W_0$ is distributed according to $\nu$), and let $W_n = \varepsilon_n W_{n-1}=A_n W_0$ for any integer $n \geq 1$. By definition of $\nu$, the sequence $(W_n)_{n \geq 0}$ is a strictly stationary Markov chain with values in $X$. Let now, for any integer $k \geq 1$,
\beq \label{defMCXk}
X_k := \sigma (\varepsilon_k, W_{k-1} ) - \lambda_{\mu} = \sigma (\varepsilon_k, A_{k-1} W_0 ) - \lambda_{\mu} \, ,
\eeq
where, for any $g \in G$ and any ${\bar x} \in X$,
\[
\sigma( g , {\bar x} ) = \log \Big ( \frac{\Vert g \cdot x \Vert }{ \Vert x \Vert }\Big ) \, .
\]
Note that $\sigma$ is an additive cocycle in the sense that $\sigma ( g_1 g_2, {\bar x}) = \sigma ( g_1, g_2 {\bar x}) + \sigma ( g_2, {\bar x}) $. Consequently
\beq \label{defofSn}
S_n = \sum_{k=1}^n X_k = \log \Vert A_n V_0 \Vert - n \lambda_{\mu}\, .
\eeq
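For the reader's convenience, let us spell out how the cocycle property yields this identity: iterating $\sigma ( g_1 g_2, {\bar x}) = \sigma ( g_1, g_2 {\bar x}) + \sigma ( g_2, {\bar x})$ along $A_n = \varepsilon_n A_{n-1}$ and using that $\Vert V_0 \Vert =1$, we get
\[
\log \Vert A_n V_0 \Vert = \sigma ( A_n , W_0 ) = \sum_{k=1}^n \sigma ( \varepsilon_k , A_{k-1} W_0 ) \, ,
\]
and subtracting $n \lambda_{\mu}$ term by term gives $S_n = \sum_{k=1}^n X_k$.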
With the above notations, the following Berry-Esseen bounds hold.
\begin{Theorem} \label{thmq=3}
Let $\mu$ be a proximal and strongly irreducible probability measure on ${\mathcal B} (G)$. Assume that $\mu$ has a finite moment of order $q \in ]2,3]$. Then $n^{-1} \E (S_n^2) \rightarrow s^2>0$ as $n \rightarrow \infty$ and, setting $ \displaystyle v_n = \Big ( \frac{\log n}{n } \Big )^{q/2-1}$, we have
\beq \label{ineBE1}
\sup_{y \in {\mathbb R}} \Big | {\mathbb P} \big ( S_n \leq y \sqrt{n} \big ) - \Phi (y/ s) \Big | \ll v_n \, , \eeq
\beq \label{ineBE1bis}
\sup_{y \in {\mathbb R}} \Big | {\mathbb P} \big ( \log (\Vert A_n \Vert ) - n \lambda_\mu \leq y \sqrt{n} \big ) - \Phi (y/ s) \Big | \ll v_n \, ,
\eeq
and
\beq \label{ineBE1ter}
\sup_{x , \Vert x \Vert =1}\sup_{y \in {\mathbb R}} \Big | {\mathbb P} \big ( \log \Vert A_n x \Vert - n \lambda_{\mu} \leq y \sqrt{n} \big ) - \Phi (y/ s) \Big | \ll v_n \, . \eeq
\end{Theorem}
\begin{Remark} \label{remonvar}
As mentioned in the introduction, the fact that
$n^{-1} \E (S_n^2) \rightarrow s^2 >0$ has been proved by Benoist and Quint \cite{BQ} (see Item $(c)$ of their Theorem 4.11). Let us mention that we also have
$s^2 =\E (X_1^2) + 2 \sum_{k \geq 2} \E (X_1X_k) $, which follows for instance from the proof of item $(ii)$ of Theorem 1 in \cite{CDJ}.
\end{Remark}
\begin{Remark} \label{CRAS}
The results of Theorem \ref{thmq=3} are used in the article \cite{CDMP} to obtain Berry-Esseen type bounds for the matrix coefficients and for the spectral radius, that is for the quantities
\begin{align*}
\sup_{ \Vert x \Vert= \Vert y \Vert=1}\sup_{t \in {\mathbb R}} \Big | {\mathbb P} \big ( \log |\langle A_n x, y\rangle |- n \lambda_\mu \leq t \sqrt{n} \big ) - \Phi (t/ s) \Big | \, ,\\
\text{and} \quad \sup_{t \in {\mathbb R}} \Big | {\mathbb P} \big ( \log \lambda_1(A_n) - n \lambda_\mu \leq t \sqrt{n} \big ) - \Phi (t/ s) \Big | \, ,
\end{align*}
where $\lambda_1(A_n)$ is the greatest modulus of the eigenvalues of the matrix $A_n$. In \cite{CDMP}, only the case of polynomial moments of order $q\geq 3$ is considered, but it is actually possible to obtain bounds for moments $q>2$ thanks to Theorem \ref{thmq=3}. More precisely, for both quantities, the rates are
\begin{itemize}
\item $v_n=(\log n /n)^{q/2-1}$ if $\mu$ has a finite moment of order $q \in ]2, (3+\sqrt 5)/2]$;
\item $v_n=1/n^{(q-1)/2q}$ if $\mu$ has a finite moment of order $q> (3+\sqrt 5)/2$.
\end{itemize}
\end{Remark}
\medskip
Now, if $\mu$ has a finite moment of order $4$, then the following result holds:
\begin{Theorem} \label{thmq=4}
Let $\mu$ be a proximal and strongly irreducible probability measure on ${\mathcal B} (G)$. Assume that $\mu$ has a finite moment of order $4$. Then $n^{-1} \E (S_n^2) \rightarrow s^2>0$ as $n \rightarrow \infty$ and \eqref{ineBE1},
\eqref{ineBE1bis} and \eqref{ineBE1ter} hold with $v_n= 1/\sqrt{n}$.
\end{Theorem}
Recall that the classical Berry-Esseen theorem for independent random variables, which corresponds to the case $d=1$ in our setting, provides the rate
$1/{\sqrt n}$ under a finite moment of order $3$. For $q=3$, Theorem \ref{thmq=3} provides the rate $\sqrt{(\log n) /n}$, so one may wonder whether the conclusion of Theorem \ref{thmq=4} holds when $\mu$ has a moment of order $3$ only.
Note also that we have chosen to focus on the cases where $\mu$ has a finite moment of order $q \in ]2,3]$ (since it corresponds to the usual moment assumptions for the Berry-Esseen theorem in the iid case) or a finite moment of order $4$ (since in this case we reach the rate $1/\sqrt{n}$), but we infer from the proofs that if $\mu$ has a finite moment of order $q \in ]3,4[$ then the above results hold with $v_n= ( \log n )^{(4-q)/2} /{\sqrt{n}} $.
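As a quick consistency check (a simple substitution, not part of the statements above), this interpolating rate matches both endpoints:
\[
\frac{( \log n )^{(4-q)/2}}{\sqrt n} \Big |_{q=3} = \Big ( \frac{\log n}{n} \Big )^{1/2} = \Big ( \frac{\log n}{n} \Big )^{q/2-1} \Big |_{q=3} \quad \text{and} \quad \frac{( \log n )^{(4-q)/2}}{\sqrt n} \Big |_{q=4} = \frac{1}{\sqrt n} \, ,
\]
in accordance with Theorems \ref{thmq=3} and \ref{thmq=4}.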
\section{Proofs} \label{sectionproofs}
\setcounter{equation}{0}
\subsection{Proof of Theorem \ref{thmq=3}}
As usual, we shall denote by $X_{k,{\bar x}}$ the random variable $X_k$ defined by \eqref{defMCXk} when the Markov chain $(W_n)_{n \geq 0}$ starts from $\bar x\in X$. We then define $S_{n,{\bar x}}:= \log \big(\|A_n x\|/\|x\|\big) -n \lambda_\mu= \sum_{k=1}^n X_{k,{\bar x}}$. We shall first prove the upper bound \eqref{ineBE1} in Section \ref{subsection1} and then the upper bounds \eqref{ineBE1bis} and \eqref{ineBE1ter} in Sections
\ref{subsection2} and \ref{subsection3} respectively.
\subsubsection{Proof of the upper bound \eqref{ineBE1}} \label{subsection1}
$ $
\smallskip
As usual, the proof is based on the so-called Berry-Esseen smoothing inequality (see e.g. \cite[Ineq. (3.13) p. 538]{Fe71}), stating that
there exists $C>0$ such that for any positive $T$ and any integer $n\ge 1$,
\begin{equation} \label{IneBE}
\sup_{x \in {\mathbb R}} \Big | {\mathbb P} \big ( S_{n} \leq x \sqrt{n} \big ) - \Phi (x/ s) \Big | \le C\Big( \int_{-T}^T \frac{ \big | \E \big ( {\rm e}^{{\rm i} \xi S_n /{\sqrt n} }\big ) - {\rm e}^{-\xi^2s^2/2} \big |}{|\xi|} {\rm d}\xi + T^{-1}\, \Big)\, ,
\end{equation}
where we recall that $S_n$ has been defined in \eqref{defofSn}.
To take care of the characteristic function of $S_n /{\sqrt n} $ we shall take advantage of the fact that $X_k $ is a function of a stationary Markov chain generated by the iid random elements $(\varepsilon_i)_{i \geq 1}$. As in \cite{Ji}, the first steps of the proof consist in approximating the $X_k$'s by $m$-dependent random variables $X_{k,m}$, and then in suitably decomposing the partial sum associated with the $X_{k,m}$'s. This is the subject of the following paragraph.
\smallskip
\noindent {\it Step 0. Notations and Preliminaries.} We shall adopt most of the time the same notations as in Jirak \cite{Ji}. Let ${\mathcal E}_{i}^j = \sigma (\varepsilon_i, \ldots, \varepsilon_j)$ for $i \leq j$, and $m$ be a positive integer that will be specified later. For any $k \geq m$, let
\beq \label{defXkm}
X_{k,m} = \E ( X_k| {\mathcal E}_{k-m+1}^k ) := f_m ( \varepsilon_{k-m+1}, \ldots, \varepsilon_k) \, ,
\eeq
where $f_m$ is a measurable function. More precisely, we have
\[
X_{k,m} = \int_X \sigma ( \varepsilon_k, A_{k-1}^{k-m+1} {\bar x} ) d \nu ({\bar x}) - \lambda_{\mu} \, ,
\]
where we used the notation $A_{j}^i = \varepsilon_j \cdots \varepsilon_i$ for $i \leq j$. Note that $\E(X_{k,m}) =0$.
Next, let $N$ be the positive integer such that $n = 2 N m + m' $ with $0 \leq m' \leq 2m-1$.
The integers $N$ and $m$ are such that $N \sim \kappa_1 \log n$ (where $\kappa_1$ is a positive constant specified later) and $m \sim ( 2\kappa_1)^{-1} n (\log n)^{-1}$ (see \eqref{selectionofN} for the selection of $\kappa_1$).
Define now the following $\sigma$-algebra
\beq \label{defbbFm}
{\mathbb F}_m = \sigma ( ( \varepsilon_{(2j-1)m +1}, \ldots, \varepsilon_{2j m}) , j \geq 1 ) \, .
\eeq
Let $U_1 = \sum_{k=1}^m X_k$ and, for any integer $j \in [2,N]$, define
\beq \label{defUj}
U_j = \sum_{k=(2j-2) m+1}^{(2j-1)m} ( X_{k,m} - \E ( X_{k,m}| {\mathbb F}_m ) ) \, .
\eeq
For any integer $j \in [ 1,N]$, let
\beq \label{defRj}
R_j = \sum_{k=(2j-1)m+1 }^{2jm} ( X_{k,m}- \E ( X_{k,m}| {\mathbb F}_m ) ) \, ,
\eeq
\beq \label{notaSm1}
Y_j^{(1)}= U_j+ R_j \ \mbox{ and } \ S_{|m}^{(1)} = \sum_{j=1}^N Y_j^{(1)} \, .
\eeq
Let also
\[
U_{N+1} = \sum_{k=2N m+1}^{ \min(n, (2N+1)m)} ( X_{k,m} - \E (X_{k,m}| {\mathbb F}_m ) )
\]
and
\[R_{N+1} = \sum_{k=(2N+1)m+1 }^{n} ( X_{k,m}- \E ( X_{k,m}| {\mathbb F}_m ))\, , \] where an empty sum has to be interpreted as $0$.
Note that under ${\mathbb P}_{{\mathbb F}_m}$ (the conditional probability given ${\mathbb F}_m$), the random vectors $(U_j,R_j)_{1 \leq j \leq N+1}$ are independent. Moreover, by stationarity, the r.v.'s
$(U_j,R_j)_{2 \leq j \leq N}$ have the same distribution (as well as the r.v.'s
$(R_j)_{1 \leq j \leq N}$).
Next, setting $ S_{|m}^{(2)} = \sum_{k=m+1}^n \E ( X_{k,m}| {\mathbb F}_m )$, the following decomposition is valid:
\[
S_{n,m}:= \sum_{k=1}^m X_k + \sum_{k=m+1}^{n} X_{k,m} = S_{|m}^{(1)} + S_{|m}^{(2)} + U_{N+1} + R_{N+1} \, .
\]
{\it To simplify the exposition, assume in the rest of the proof that $n=2Nm$} (so that $m'=0$). There is no loss of generality in making such an assumption: the only difference would be that, since $(U_{N+1}, R_{N+1}) $ does not have the same law as the $(U_j, R_j) $'s, $2 \leq j \leq N$, its contribution would have to be treated separately.
Therefore, from now we consider $m'=0$ and then the following decomposition
\beq \label{decSnm}
S_{n,m}= S_{|m}^{(1)} + S_{|m}^{(2)} \, .
\eeq
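As a sanity check on this identity (a direct expansion of the definitions, with $n = 2Nm$), note that the conditional centerings subtracted in the blocks $U_j$ and $R_j$ are exactly restored by $S_{|m}^{(2)}$:
\[
S_{|m}^{(1)} + S_{|m}^{(2)} = \sum_{k=1}^m X_k + \sum_{k=m+1}^{2Nm} \big ( X_{k,m} - \E ( X_{k,m}| {\mathbb F}_m ) \big ) + \sum_{k=m+1}^{2Nm} \E ( X_{k,m}| {\mathbb F}_m ) = S_{n,m} \, .
\]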
We are now in position to give the main steps of the proof. We start by writing
\[
\big | \E \big ( {\rm e}^{{\rm i} \xi S_n /{\sqrt n} }\big ) - {\rm e}^{-\xi^2s^2/2} \big |
\leq \big | \E \big ( {\rm e}^{{\rm i} \xi S_{n} /{\sqrt n} }\big ) -\E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) \big |
+ \big | \E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) - {\rm e}^{-\xi^2s^2/2} \big | \, .
\]
Next
\begin{multline*}
\big | \E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) - {\rm e}^{-\xi^2s^2/2} \big | \\
= \Big | \E \Big ( {\rm e}^{{\rm i} \xi S_{|m}^{(2)} /{\sqrt n} } \Big [ \E_{{\mathbb F}_m} \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(1)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \Big ] \Big )
+ {\rm e}^{-\xi^2s^2/4} \Big ( \E \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(2)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \Big ) \Big | \\
\leq \Big \Vert \E_{{\mathbb F}_m} \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(1)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \Big \Vert_{1}
+ \Big | \E \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(2)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \Big | \, .
\end{multline*}
Hence, starting from \eqref{IneBE} and selecting $T =1 /v_n $ where $ \displaystyle v_n = \Big ( \frac{\log n}{n } \Big )^{q/2-1}$, Inequality \eqref{ineBE1} of Theorem \ref{thmq=3} will follow if one can prove that
\beq \label{inesmooth1}
\int_{-T}^T \frac{ \big | \E \big ( {\rm e}^{{\rm i} \xi S_{n} /{\sqrt n} }\big ) -\E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) \big | }{|\xi|} {\rm d}\xi \ll v_n \, ,
\eeq
\beq \label{inesmooth2}
\int_{-T}^T \frac{ \big \Vert \E_{{\mathbb F}_m} \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(1)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \big \Vert_{1} }{|\xi|} {\rm d}\xi \ll v_n
\eeq
and
\beq \label{inesmooth3}
\int_{-T}^T \frac{ \big | \E \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(2)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \big | }{|\xi|} {\rm d}\xi \ll v_n \, .
\eeq
The objective is then to prove these three upper bounds, and the main differences compared to \cite{Ji,Ji20} lie in the intermediate steps and the technical tools developed for this purpose. They will be based on the following dependence coefficients that are well adapted to our setting. Let $p \geq 1$. For every $k\ge 1$, define
\beq\label{delta-def}
\delta_{p, \infty}^p (k) = \sup_{{\bar x}, {\bar y} \in X} \E \big | X_{k,{\bar x} } - X_{k,{\bar y} } \big |^p \, .
\eeq
If $\mu$ has a finite moment of order $q >1$, then, by \cite[Prop. 3]{CDJ}, we know that
\beq \label{estimatedelta}
\sum_{k \geq 1} k^{q-p-1} \, \delta_{p, \infty}^p (k) < \infty
\qquad \forall p \in [1, q)\, .
\eeq
Hence, since $(\delta_{p, \infty} (k))_{k \geq 1}$ is
nonincreasing, it follows that (if $\mu$ has a moment of order $q>1$)
\beq \label{estimatedeltabis}\delta_{p, \infty} (k) = o \big ( 1/k^{q/p-1} \big)
\qquad \forall p\in [1,q)\, .
\eeq
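To make \eqref{estimatedeltabis} concrete, here are the two instances used repeatedly below (direct specializations to $p=1$ and $p=2$, valid as soon as $q>2$):
\[
\delta_{1, \infty} (k) = o \big ( k^{-(q-1)} \big ) \quad \text{and} \quad \delta_{2, \infty} (k) = o \big ( k^{-(q/2-1)} \big ) \, .
\]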
In the following commentary, we list the places where it is essential to use the coefficients $\delta_{p, \infty} (k)$ rather than the coupling coefficients used by Jirak \cite{Ji20} in order to obtain the most accurate bounds possible. Note that this list is not exhaustive.
\begin{Comment} \label{comment31}
Denote by $\vartheta'_k(p)$ and $\vartheta^*_k(p)$ the coupling coefficients defined in \cite[Eq. (7)]{Ji20}. Note that in the Markovian case (which is our setting), these two coefficients are of the same order and can be bounded by
$\delta_{p, \infty} (k) $. As we shall see in Lemma \ref{lmaR1normep}, by using a suitable Rosenthal-type inequality and the strength of the $\delta_{p, \infty} $ coefficients, allowing to control also the infinite norm of conditional expectations (see for instance \eqref{condexpectationrkm}), we obtain, in particular, $ \Vert R_1 \Vert_{p} \ll 1$ for $p \geq 2 $ provided that $\mu$ has a moment of order $q=p+1$. As a counterpart, Lemma 5.4 in \cite{Ji20} entails that $\Vert R_1 \Vert_p \ll \sum_{k=1}^m \delta_{p, \infty} (k) $, and then $ \Vert R_1 \Vert_{p} \ll 1$ as soon as $\mu$ has a moment of order $q >2p$. A suitable control of $ \Vert R_1 \Vert_{p} $ for some $p \geq 2$ is a key ingredient to take care of the characteristic function of the $Y_j^{(1)} $'s conditionally to ${\mathbb F}_m$ that we will denote by $\varphi_j(t)$ in what follows (see the definition \eqref{defvarphijx}). More precisely, if the condition (among others) $ \Vert R_1 \Vert_{p} \ll 1$ holds for $p=2$, then we get the upper bound \eqref{conslma9} with $q=3$, and if it holds for $p=3$ then we get the better upper bound \eqref{q4varphij} (this difference in the upper bounds is the reason why in the statements of Theorem \ref{thmq=3} (with $q=3$) we have an extra logarithmic term compared to Theorem \ref{thmq=4}). Note that the upper bounds \eqref{conslma9} and \eqref{q4varphij} come from Lemmas \ref{lma4.9}, \ref{lma4.9q=4} and \ref{lma4.9q=4bis}. Another crucial fact that we would like to point out is the following: Imposing that $\mu$ has a moment of order $q=3$ implies $ \Vert R_1 \Vert_{p} \ll 1$ only for $p=2$ and then, when $q \leq 3$, Lemma 4.5 in \cite{Ji} cannot be used to get the upper bound \eqref{conslma4.5} which is essential to prove
\eqref{inesmooth2}. Indeed, in order for \cite[Lemma 4.5]{Ji} to be applied, it is necessary that $ \Vert R_1 \Vert_{p} \ll 1$ for some $p>2$. The role of our Lemma \ref{lma4.5} is then to overcome this drawback (see the step 2 below and in particular the control of both $ I_{1,N}(\xi) $ and $ I_{3,N}(\xi) $).
On the other hand, in view of \eqref{estimatedeltabis}, it is clear that, as $k \to \infty$, for any $r \in [1, p[$, the coefficient $\delta_{r, \infty} (k) $ behaves better than $\delta_{p, \infty} (k)$. Hence, in some cases, it is preferable to deal with the ${\mathbb L}^r$-norm rather than with the $ {\mathbb L}^p$-norm. For instance, in our case, it is much more efficient to control $\Vert S_n - S_{n,m} \Vert_1$ (see the forthcoming upper bounds \eqref{step1P1S} and \eqref{step1P2S}) rather than $\Vert S_n - S_{n,m} \Vert_p^p$ as done in Jirak \cite{Ji20} (see his upper bound (50)). This is the reason why we can start directly from Inequality \eqref{IneBE} and work with the characteristic function rather than using the decomposition given in \cite[Lemma 5.11]{Ji20}.
\end{Comment}
\smallskip
Let us now come back to the proof. The next steps will consist in proving the upper bounds \eqref{inesmooth1}-\eqref{inesmooth3}.
\medskip
\noindent \textit{Step 1. Proof of \eqref{inesmooth1}.} Note that
\[
\int_{-T}^T \frac{ \big | \E \big ( {\rm e}^{{\rm i} \xi S_n /{\sqrt n} }\big ) -\E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) \big | }{|\xi|} {\rm d}\xi \leq \frac{T}{\sqrt{n}}\Vert S_n - S_{n,m} \Vert_{1} \, .
\]
But, by stationarity and \cite[Lemma 24]{CDM} (applied with $M_k = +\infty$),
\begin{equation} \label{step1P1S}
\Vert S_n - S_{n,m} \Vert_1 \leq n \Vert X_{m+1}-X_{m+1,m} \Vert_1 \leq n \delta_{1, \infty} (m) \, .
\end{equation}
Hence, by \eqref{estimatedeltabis} and the fact that $\mu$ has a moment of order $q>1$, we derive
\begin{equation}\label{step1P2S}
\Vert S_n - S_{n,m} \Vert_1 \ll n m^{-(q-1)} \, .
\end{equation}
So, overall, since $T \ll m^{q/2-1}$, it follows that
\[
\int_{-T}^T \frac{ \big | \E \big ( {\rm e}^{{\rm i} \xi S_n /{\sqrt n} }\big ) -\E \big ( {\rm e}^{{\rm i} \xi S_{n,m} /{\sqrt n} }\big ) \big | }{|\xi|} {\rm d}\xi \ll \frac{n^{1/2}}{m^{q/2}} \, .
\]
The upper bound \eqref{inesmooth1} follows from the fact that $m \sim \kappa_2 n (\log n)^{-1}$.
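Indeed, to see why $n^{1/2} m^{-q/2} \ll v_n$, note that with $m \asymp n (\log n)^{-1}$,
\[
\frac{n^{1/2}}{m^{q/2}} \asymp \frac{\log n}{\sqrt n} \, \Big ( \frac{\log n}{n} \Big )^{q/2-1} = \frac{\log n}{\sqrt n} \, v_n \ll v_n \, .
\]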
\medskip
\noindent \textit{Step 2. Proof of \eqref{inesmooth2}.} For any $x\in {\mathbb R}$ and any integer $j \in [1,N]$, let
\beq \label{defvarphijx}
\varphi_j (x)= \E \Big ( {\rm e}^{{\rm i} x Y_j^{(1)} / \sqrt{ 2 m} } | {\mathbb F}_m\Big ) \, . \eeq
Since, under ${\mathbb P}_{{\mathbb F}_m}$, the $Y_j^{(1)} $'s are independent, we write
\beq \label{ineBEclassic}
\big \Vert \E_{{\mathbb F}_m} \big ( {\rm e}^{{\rm i} \xi S_{|m}^{(1)} /{\sqrt n} } \big ) - {\rm e}^{-\xi^2s^2/4} \big \Vert_{1} = \E \Big [ \Big | \prod_{j=1}^N \varphi_j \Big (\frac{ \xi}{\sqrt{N}} \Big ) - \prod_{j=1}^N {\rm e}^{- \xi^2s^2/(4N) } \Big | \Big ] \, .
\eeq
As in \cite[Section 4.1.1]{Ji}, we use the following basic identity: for
any complex numbers $(a_j)_{1\le j\le N}$ and $(b_j)_{1\le j\le N}$,
$
\prod_{j=1}^N a_j - \prod_{j=1}^N b_j = \sum_{i=1}^N ( \prod_{j=1}^{i-1} b_j) (a_i-b_i) ( \prod_{j=i+1}^{N} a_j)
$ to handle the right-hand side of \eqref{ineBEclassic}. Taking into account that $(\varphi_j(t))_{1 \leq j \leq N}$ forms a one-dependent sequence and that the r.v.'s
$(U_j,R_j)_{2 \leq j \leq N}$ have the same distribution, we then infer that
\begin{equation} \label{ineBE4P0}
\E \Big [ \Big | \prod_{j=1}^N \varphi_j \Big (\frac{ \xi}{\sqrt{N}} \Big ) - \prod_{j=1}^N {\rm e}^{- \xi^2 s^2/(4N) }\Big | \Big ] \leq I_{1,N}(\xi) +I_{2,N}(\xi) + I_{3,N}(\xi) \, ,
\end{equation}
where
\[
I_{1,N}(\xi) = (N -1) \Vert \varphi_2 (\xi /{\sqrt N}) - {\rm e}^{- \xi^2s^2/(4N) } \Vert_{1} \Big \Vert \prod_{j=N/2}^{N-1} \Big | \varphi_j \Big (\frac{ \xi}{\sqrt{N}} \Big ) \Big | \Big \Vert_{1} \, ,
\]
\[
I_{2,N}(\xi) = N {\rm e}^{- \xi^2s^2 ( N-6) /(8N) } \Vert \varphi_2 (\xi /{\sqrt N}) - {\rm e}^{- \xi^2s^2/(4N) } \Vert_{1} \]
and
\[
I_{3,N}(\xi) = \Vert \varphi_1 (\xi /{\sqrt N}) - {\rm e}^{- \xi^2s^2/(4N) } \Vert_{1} \Big \Vert \prod_{j=N/2}^{N-1} \Big | \varphi_j \Big (\frac{ \xi}{\sqrt{N}} \Big ) \Big | \Big \Vert_{1} \, . \]
To integrate the above quantities, we need to give suitable upper bounds for the two terms $\Vert \varphi_j (t) - {\rm e}^{-s^2 t^2/ 4 } \Vert_{1}$ and $\Vert \prod_{j=N/2}^{N-1} | \varphi_j (t) | \Vert_{1}$. Applying the first part of Lemma \ref{lma4.9} and using stationarity, we derive that for any $2 \leq j \leq N$,
\beq \label{conslma9}
\Vert \varphi_j (t) - {\rm e}^{- s^2 t^2/ 4 } \Vert_{1} \ll \frac{ t^2 }{m^{q/2 -1}} + \frac{|t|}{m^{q-3/2}} \, .
\eeq
Moreover the second part of Lemma \ref{lma4.9} implies that
\beq \label{conslma9j=1}
\Vert \varphi_1 (t) - {\rm e}^{- s^2 t^2/ 4 } \Vert_{1} \ll \frac{ t^2 }{m^{q/2 -1}} \, .
\eeq
\noindent On the other hand, according to \cite[Inequality (4.14)]{Ji}, for any integer $\ell \in [1, m]$,
\[
\Big \Vert \prod_{j=N/2}^{N-1} | \varphi_j (t) | \Big \Vert_{1} \leq \Big \Vert \prod_{j \in {\mathcal J}} \big | \varphi^{(\ell)}_j (t\sqrt{(m-\ell )/(2m)}) \big | \Big \Vert_{1} \, ,
\]
where ${\mathcal J} = [N/2, N-1] \cap 2 {\mathbb N}$,
\[
\varphi^{(\ell)}_j ( x ) = \E\Big ( {\rm e}^{{\rm i} x H^{(\ell)}_{j,m} } \big | {{\mathcal H}^{(\ell)}_{j,m}} \Big )
\]
with ${\mathcal H}^{(\ell)}_{j,m}= {\mathbb F}_m \vee \sigma( \varepsilon_{ 2(j-1)m +1}, \ldots, \varepsilon_{ 2(j-1)m +\ell}) $ and
\[
H^{(\ell)}_{j,m} = \frac{1}{\sqrt{m- \ell}} \Big ( \sum_{k=2(j-1)m + \ell +1}^{(2j-1)m}( X_{k,m} - \E ( X_{k,m}| {\mathcal H}^{(\ell)}_{j,m} ) ) + R_j - \E (R_j| {\mathcal H}^{(\ell)}_{j,m} ) \Big ) \, .
\]
We shall apply Lemma \ref{lma4.5} with
\[A_j = \frac{1}{\sqrt{m- \ell}} \sum_{k=2(j-1)m + \ell +1}^{(2j-1)m}( X_{k,m} - \E ( X_{k,m}| {\mathcal H}^{(\ell)}_{j,m} ) ), \ B_j = \frac{R_j - \E (R_j| {\mathcal H}^{(\ell)}_{j,m} ) }{m^{(3-q)/2}} \]
and
$\displaystyle a = \frac{ m^{(3-q)/2} }{(m- \ell)^{1/2} } $. By stationarity, for any $j \in {\mathcal J}$,
\begin{multline*}
{\mathbb P} \big ( \E_{H^{(\ell)}_{j,m} } (A^2_j ) \leq s^2/4 \big ) = {\mathbb P} \big ( \E_{H^{(\ell)}_{2,m} } (A^2_2 ) \leq s^2/4 \big ) \\
= {\mathbb P} \Big ( (m-\ell)^{-1} \E_{m} \Big ( \Big ( \sum_{k=m+1}^{2m-\ell} (X_{k,m} - \E_m ( X_{k,m} ) ) \Big )^2 \Big ) \leq s^2/4 \Big ) \, ,
\end{multline*}
where $\E_m (\cdot)$ means $\E(\cdot | {\mathcal G}_m )$ with ${\mathcal G}_m = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_m)$. Let $K$ be a positive integer and note that
\begin{align*}
& \Big | \Big \Vert \sum_{k=m+1}^{m+K } (X_{k,m} - \E_m ( X_{k,m} ) ) \Big \Vert_2 - \Big \Vert \sum_{k=m+1}^{m+K } X_{k} \Big \Vert_2 \Big | \\ & \quad \quad \leq
\Big \Vert \sum_{k=m+1}^{m+K } ( X_{k,m} - X_k ) \Big \Vert_2 + \sum_{k=m+1}^{m+K } \Vert \E_m ( X_{k,m} ) \Vert_{\infty} \, .
\end{align*}
But, by using the remark after \cite[Prop. 3]{CDJ}, we infer that, for $k \geq m+1$,
\beq \label{Borne1condexpect}
\Vert \E_m ( X_{k,m} ) \Vert_{\infty} \leq \delta_{1, \infty} (k-m) \, .
\eeq
Next, by \cite[Lemma 24]{CDM} (applied with $M_k = +\infty$),
\begin{align*}
\Big \Vert \sum_{k=m+1}^{m+K } ( X_{k,m} & - X_k ) \Big \Vert^2_2 = \sum_{k=m+1}^{m+K } \Vert X_{k,m} - X_k \Vert_2^2 \\ & \quad \quad + 2 \sum_{k=m+1}^{m+K-1 } \sum_{\ell=k+1}^{m+K }
\E \Big ( (X_{k,m} - X_k) \E_{k} (X_{\ell,m} - X_\ell) \Big ) \\
& \leq K \delta^2_{2 , \infty} (m)+ 2 \sum_{k=m+1}^{m+K-1 } \sum_{\ell=k+1}^{m+K } \Vert \E_{k} (X_{\ell,m} - X_\ell) \Vert_{\infty} \Vert X_{k,m} - X_k \Vert_1 \\
& \leq K \delta^2_{2 , \infty} (m)+ 2 \sum_{k=m+1}^{m+K-1 } \sum_{\ell=k+1}^{m+K } \delta_{1, \infty} (\ell - k) \delta_{1, \infty} (m) \, .
\end{align*}
Therefore, by taking into account \eqref{estimatedeltabis} and the fact that $\mu$ has a moment of order $q >2$, we get that
\[
\Big \Vert \sum_{k=m+1}^{m+K } ( X_{k,m} - X_k ) \Big \Vert^2_2 = o ( K m^{2-q } ) \, ,
\]
which combined with \eqref{Borne1condexpect} implies that
\[
K^{-1/2}\Big | \Big \Vert \sum_{k=m+1}^{m+K } (X_{k,m} - \E_m ( X_{k,m} ) ) \Big \Vert_2 - \Big \Vert \sum_{k=m+1}^{m+K } X_{k} \Big \Vert_2 \Big | \ll m^{1-q/2 } + K^{-1/2} \, .
\]
But, using stationarity, $K^{-1/2} \big \Vert \sum_{k=m+1}^{m+K } X_{k} \big \Vert_2 = K^{-1/2} \big \Vert \sum_{k=1}^{K } X_{k} \big \Vert_2 \rightarrow s >0 $. Hence provided that $(m-\ell) $ is large enough, we have
\beq \label{restrictioonell}
(m-\ell)^{-1} \E \Big ( \Big ( \sum_{k=m+1}^{2m-\ell} (X_{k,m} - \E_m ( X_{k,m} ) )\Big )^2 \Big ) > s^2/2\, .
\eeq
So, overall, setting ${\bar X}_{k,m} := X_{k,m} - \E_m ( X_{k,m} ) $, for $(m-\ell) $ large enough, we get
\begin{multline*}
{\mathbb P} \big ( \E_{H^{(\ell)}_{2,m} } (A^2_2 ) \leq s^2/4 \big ) \\
\leq {\mathbb P} \Big ( (m- \ell)^{-1} \Big | \E_{m} \Big ( \Big ( \sum_{k=m+1}^{2m-\ell} {\bar X}_{k,m} \Big )^2 \Big ) - \E \Big ( \Big ( \sum_{k=m+1}^{2m-\ell} {\bar X}_{k,m} \Big )^2\Big ) \Big | \geq \frac{s^2}{4 } \Big ) \, .
\end{multline*}
Using Markov's inequality and the same arguments as those used in the proof of Lemma \ref{lma4.7}, and since $q >2$, we derive that, for $(m-\ell) $ large enough and any $j \in {\mathcal J}$,
\[
{\mathbb P} \big ( \E_{H^{(\ell)}_{j,m} } (A^2_j ) \leq s^2/4 \big ) \ll (m - \ell)^{-\varepsilon} \text{ for some $\varepsilon >0$.}
\]
Hence, provided that $m-\ell$ is large enough, Item (ii) of Lemma \ref{lma4.5} is satisfied with $u^- = s^2/4$. Note now that by stationarity, for any $j \in {\mathcal J}$,
\[
\E(B_j^2) \leq 4 \frac{\E (R_j^2) } {m^{3-q}}= 4 \frac{\E (R_1^2) } {m^{3-q}} \ll 1 \, ,
\]
by using Lemma \ref{lmaR1normep} with $p=2$. This proves Item (iv) of Lemma \ref{lma4.5}. Next, for $p\geq 2$, using stationarity and \cite[Cor. 3.7]{MPU19}, we get that for any $j \in {\mathcal J}$,
\beq \label{momentAJ}
\E(|A_j|^p) \leq 2^p (m- \ell)^{-p/2} \Big \Vert \sum_{k=m+1}^{2m - \ell} X_{k,m} \Big \Vert_{p}^p \ll \Big [ \Vert X_{1+m,m} \Vert_{p} + \sum_{k=m+1}^{2m-\ell} k^{-1/2} \Vert \E_m (X_{k,m}) \Vert_{p} \Big ]^p \, .
\eeq
But $\Vert X_{1+m,m} \Vert_{p} \leq \Vert X_{1} \Vert_{p} < \infty$ if $p \leq q$ (recall that $\mu$ is assumed to have a moment of order $q$) and, by \eqref{Borne1condexpect}, $\Vert \E_m (X_{k+m,m}) \Vert_{p} \leq \delta_{1, \infty} (k)$. Hence, by \eqref{estimatedelta} and since $\mu$ has a moment of order $q>2$, Item (iii) of Lemma \ref{lma4.5} is satisfied for $p=q$. So, overall, noticing that $|{\mathcal J}| \geq N/8\ge 16$, we can apply Lemma \ref{lma4.5} to derive that there exist positive finite constants $c_1$, $c_2$ and $c_3$ depending in particular on $s^2$ but not on $(m,n)$ such that for $(m-\ell)$ large enough (at least such that $a = \frac{ m^{(3-q)/2} }{(m- \ell)^{1/2} } \leq c_1$), we have
\[
\Big \Vert \prod_{j \in {\mathcal J}} \big | \varphi^{(\ell)}_j (x) \big | \Big \Vert_{1} \le {\rm e}^{-c_3 x^2 N/8} + {\rm e}^{- N/256}\ \ \text{for $x^2 \leq c_2$, }
\]
implying overall that, for $(m-\ell)$ large enough and for $t^2 (m-\ell)/(2m) \leq c_2$,
\beq \label{conslma4.5}
\Big \Vert \prod_{j=N/2}^{N-1} | \varphi_j (t) | \Big \Vert_{1} \le {\rm e}^{-c_3 t^2 (m-\ell) N/ ( 16 m) } + {\rm e}^{- N/256} \, .
\eeq
The bounds \eqref{conslma9}, \eqref{conslma9j=1} and \eqref{conslma4.5} allow us to bound the terms $I_{1,N}(\xi)$, $I_{2,N}(\xi)$ and $I_{3,N}(\xi)$, and then to integrate them over $[-T,T]$ after dividing by $|\xi |$. Hence the computations in \cite[Sect. 4.1.1., Step 4]{Ji} are replaced by the following ones. First, as in \cite{Ji}, we select
\beq \label{selectl}
\ell = \ell(\xi) = {\bf 1}_{ \{\xi^2 < N c_2\}} + ( m - [nc_2/(2\xi^2)] +1 ) {\bf 1}_{\{\xi^2 \geq N c_2\}} \, .
\eeq
Therefore $m - \ell $ is either equal to $m-1$ or to $ [nc_2/(2\xi^2)] -1 $. Since $|\xi | \leq T = \big ( n / (\log n ) \big )^{q/2-1}$, it follows that $nc_2/(2\xi^2) \geq 2^{-1}c_2 (\log n)^{q-2} n^{3-q }$. Hence
\[
a = \frac{ m^{(3-q)/2} }{(m- \ell)^{1/2} } \leq \frac{ m^{(3-q)/2} }{(m-1)^{1/2} } + \frac{ 2 m^{(3-q)/2} }{ \big ( c_2 n^{3-q } ( \log n )^{q-2} \big )^{1/2} } \, ,
\]
which goes to zero as $n \rightarrow \infty$ by the selection of $m$. Hence, for any $c_1 >0$, we have $a <c_1$ for $n$ large enough. This justifies the application of Lemma \ref{lma4.5}.
So, starting from \eqref{conslma4.5} and taking into account the selection of $\ell$, we get that for any $|\xi| \leq T$ and $n$ large enough,
\beq \label{conslma4.5bis}
\Big \Vert \prod_{j=N/2}^{N-1} | \varphi_j (\xi/{\sqrt N}) | \Big \Vert_{1} \ll {\rm e}^{-c_3 \xi^2 /32} {\bf 1}_{ \{\xi^2 < N c_2\}} + {\rm e}^{- c_3 c_2 N/32 } {\bf 1}_{ \{\xi^2 \geq N c_2\}} + {\rm e}^{- N/256} \, .
\eeq
Select now
\beq \label{selectionofN}
N = [\kappa \log n ] \ \ \text{with $\kappa >2 \max( 256, 32 (c_2c_3)^{-1}) $ }
\eeq
and then $m \sim (2 \kappa )^{-1}n / \log n$. Taking into account \eqref{conslma9}, \eqref{conslma9j=1} and \eqref{conslma4.5bis}, we get, for $n $ large enough,
\begin{multline} \label{ineBE4P1}
\int_{-T}^T ( I_{1,N}(\xi) + I_{3,N}(\xi) ) / |\xi | \, {\rm d}\xi \ll N \int_{0}^T \Big ( \frac{|\xi|}{ N m^{q/2-1}} + \frac{1}{ \sqrt{N } m^{q-3/2}} \Big ) \Big ( {\rm e}^{-c_3 \xi^2 /32} +n^{-2} \Big ) \, {\rm d}\xi \\
\ll \frac{1}{m^{q/2-1} } + \frac{\sqrt{N}}{ m^{(q-1)/2}m^{q/2-1}} + \frac{ T^2 }{n^2 m^{q/2-1}} + \frac{ T \sqrt{N}}{ n^2 m^{(q-1)/2}m^{q/2-1} } \ll \Big ( \frac{\log n }{n} \Big )^{q/2-1} \, .
\end{multline}
Next, using \eqref{conslma9}, we derive
\begin{align*}
I_{2,N}(\xi) \ll \Big ( \frac{\xi^2}{m^{q/2-1} } + \frac{ \sqrt{N}|\xi |}{m^{q-3/2}} \Big ) \times \ {\rm e}^{- s^2 \xi^2/ 16 } \, .
\end{align*}
Therefore, by the selection of $m$ and $N$,
\begin{equation} \label{ineBE4P2}
\int_{-T}^T I_{2,N}(\xi) / |\xi | \, {\rm d}\xi \ll \Big ( \frac{\log n }{n} \Big )^{q/2-1} \, .
\end{equation}
Starting from \eqref{ineBEclassic} and taking into account \eqref{ineBE4P0}, \eqref{ineBE4P1} and \eqref{ineBE4P2}, the upper bound in \eqref{inesmooth2} follows.
\medskip
\noindent \textit{Step 3. Proof of \eqref{inesmooth3}.} Recall that $S_{|m}^{(2)} = \sum_{k=m+1}^n \E ( X_{k,m}| {\mathbb F}_m )$, and recall that we assume that $2Nm = n$. Denoting
\[ Y_j^{(2)} = U_j^{(2)} + R_j^{(2)} \ \ \text{for} \ j =1, \ldots, N \, , \]
where $ U_N^{(2)} = \sum_{k=(2N-1)m+1}^{n} \E ( X_{k,m}| {\mathbb F}_m )$, $R_N^{(2)} =0$,
\[
U_j^{(2)} = \sum_{k=(2j-1)m+1}^{2jm} \E ( X_{k,m}| {\mathbb F}_m ) \, \mbox{ and } \, R_j^{(2)} = \sum_{k=2jm+1}^{(2j+1)m} \E ( X_{k,m}| {\mathbb F}_m )\ \ \text{for}\ j =1, \ldots, N-1 \, ,
\]
we have $S_{|m}^{(2)} = \sum_{j=1}^{N} Y_j^{(2)}$. Note that the random vectors $( U_j^{(2)} ,R_j^{(2)} )_{1 \leq j \leq N}$ are independent. The proof of \eqref{inesmooth3} can be carried out by arguments similar to (but even simpler than) those developed in Step 2. One important fact here is that the $R^{(2)}_j$'s also have a negligible contribution. Indeed, for any
$2m+1 \leq k \leq 3m$,
\begin{multline*}
\Vert \E ( X_{k,m}| {\mathbb F}_m ) \Vert_{\infty}
= \Big \Vert \int \! \! \int \Big ( f_m ( \varepsilon_{k-m+1} , \ldots, \varepsilon_{2m}, a_{2m+1}, \ldots, a_{k}) \\ - f_m ( b_{k-m+1} , \ldots, b_{2m}, b_{2m+1}, \ldots, b_{k}) \Big ) \prod_{i=2m+1}^{k} d \mu (a_i) \prod_{i=k-m+1}^{k} d \mu (b_i) \Big \Vert_{\infty} \\
\leq \sup_{{\bar x}} \Big | \E (X_{k-2m} | W_0={\bar x}) - \int \E (X_{k-2m} | W_0={\bar y}) d\nu ({\bar y}) \Big | \leq \delta_{1, \infty} (k-2m) \, .
\end{multline*}
Hence, by stationarity, \eqref{estimatedelta} and the fact that $q \geq 2$, we derive that $\Vert R^{(2)}_j \Vert_{ \infty} \ll 1$ for any $j=1, \ldots, N$.
\medskip
To complete the proof of the upper bound \eqref{ineBE1}, we just have to put together the results in the steps 1, 2 and 3. \qed
\subsubsection{Proof of the upper bound \eqref{ineBE1bis}} \label{subsection2} Recall the notation $S_{n,{\bar u}}:= \sum_{k=1}^n X_{k,{\bar u}}$ where $X_{k,{\bar u}}$ denotes the random variable $X_k$ defined by \eqref{defMCXk} when the Markov chain $(W_n)_{n \geq 0}$ starts from $\bar u \in X$. Our starting point is the following upper bound:
\begin{equation}\label{lemme-bougerol}
\sup_{n \geq 1} \Big \Vert \log (\Vert A_n \Vert) -n \lambda_\mu - \int_X S_{n,{\bar u}} d \nu ({\bar u}) \Big \Vert_{\infty} < \infty \, .
\end{equation}
The proof of \eqref{lemme-bougerol} is outlined in Section 8.1 in \cite{CDJ} but, since it is a key ingredient in the proof of \eqref{ineBE1bis}, we shall provide more details here.
Let $g\in G$ and $\bar u\in X$. By item $(i)$ of
Lemma 4.7 in \cite{BQ}, there exists $\bar v(g)$ such that
\[
\log \|g\|- \sigma(g,\bar u)\le -\log \delta \big(\bar u, \bar v(g)\big) \, ,
\]
where $\delta (\bar u,\bar v):= \frac{|\langle u,v\rangle|}{\|u\|\, \|v\|}$.
Integrating with respect to $\nu$, it follows that
\begin{equation}\label{BQ-est}
0\le \log \|g\| -\int_X \sigma(g, \bar u)\, d\nu(\bar u)\le \sup_{\bar v\in X} \int_X|\log \delta(\bar u, \bar v)|\, d\nu(\bar u) \, .
\end{equation}
But, according to Proposition 4.5 in \cite{BQ}, since $\mu$ has a polynomial moment of order $q \geq 2$, $ \sup_{\bar v \in X}\int_X \big| \log \delta({\bar u}, \bar v)\, \big|\, d \nu ({\bar u}) < \infty$. Therefore,
\eqref{lemme-bougerol} follows from an application of \eqref{BQ-est} with $g= A_n$.
Now, using \eqref{lemme-bougerol} and Lemma 1 in \cite{Bo82}, the upper bound \eqref{ineBE1bis} will follow if one can prove that
\beq \label{ineBE1bispr}
\sup_{y \in {\mathbb R}} \Big | {\mathbb P} \Big ( \int_X S_{n,{\bar u}} d \nu ({\bar u}) \leq y \sqrt{n} \Big ) - \Phi (y/ s) \Big | \ll \Big ( \frac{\log n }{n} \Big )^{q/2-1} \, .
\eeq
We proceed as in the proof of the upper bound \eqref{ineBE1} with the following differences. First we consider
\[
S_{n,m} = \sum_{k=1}^m \int_X X_{k,{\bar u}} d \nu ({\bar u}) + \sum_{k=m+1}^n X_{k,m} \, ,
\]
where $X_{k,m}$ is defined by \eqref{defXkm}.
Hence
\[
\Big \Vert \int_X S_{n,{\bar u}} d \nu ({\bar u}) - S_{n,m} \Big \Vert_1 \leq \int_X \sum_{k=m+1}^n \Vert X_{k,{\bar u}}- X_{k,m} \Vert_1 d \nu ({\bar u}) \leq n \delta_{1,\infty} (m) \, .
\]
It follows that Step 1 of the previous subsection is unchanged. Next, we use the same notation as in Subsection \ref{subsection1} with the following change: $U_1$ is now defined by
\beq \label{defU1bis}
U_1 = \sum_{k=1}^m \int_X X_{k,{\bar u}} d \nu ({\bar u}) \, ,
\eeq
and then, when $n =2mN$, the decomposition \eqref{decSnm} is still valid for
$S_{n,m}$. Step 3 is also unchanged. Concerning Step 2, the only difference concerns the upper bound of the quantity $\Vert \varphi_1 (t) - {\rm e}^{- s^2 t^2/ 4 } \Vert_{1} $ since the definition of $U_1$ is now given by \eqref{defU1bis}. To handle this term, we note that for $f(x) \in \{ \cos x, \sin x \}$, by the arguments used in the proof of \cite[Lemma 24]{CDM},
we have
\begin{multline*}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{ \sum_{k=1}^m \int_X X_{k,{\bar u}} d \nu ({\bar u}) +R_1 }{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{ \sum_{k=1}^m X_{k} +R_1 }{\sqrt{2m}} \Big ) \Big ] \Big \Vert_{1}
\\ \leq \frac{|t|}{\sqrt{2m}} \int_X \sum_{k=1}^m \Vert X_{k,{\bar u}}- X_{k} \Vert_1 d \nu ({\bar u}) \leq \frac{|t|}{\sqrt{2m}} \sum_{k= 1}^m \delta_{1, \infty} (k) \ll \frac{|t|}{\sqrt{m}} \, .
\end{multline*}
The last upper bound follows from \eqref{estimatedelta} together with the fact that $\mu$ is assumed to have a moment of order at least $2$.
Next, by taking into account \eqref{conslma4.5bis}, note that
\[
\int_{-T}^{T} \frac{|\xi|}{ \sqrt{N}\sqrt{m}} \Big \Vert \prod_{j=N/2}^{N-1} | \varphi_j (\xi/{\sqrt N}) | \Big \Vert_{1} d \xi \ll 1/\sqrt{n} \, .
\]
This implies in particular that \eqref{ineBE4P1} still holds. Compared with Subsection \ref{subsection1}, the rest of the proof is unchanged. \qed
\subsubsection{Proof of the upper bound \eqref{ineBE1ter}} \label{subsection3}
Once again we highlight the differences with respect to the proof given in Subsection \ref{subsection1}. For $x \in S^{d-1}$, we consider
\[
S_{n,m,\bar x} = \sum_{k=1}^m X_{k,\bar x} + \sum_{k=m+1}^n X_{k,m} \, ,
\]
and we note that \[
\sup_{\bar x \in X} \Vert S_{n,\bar x} -S_{n,m,\bar x} \Vert_1 \leq \sum_{k=m+1}^n \sup_{\bar x\in X} \Vert X_{k,\bar x}- X_{k,m} \Vert_1 \leq n \delta_{1,\infty} (m) \, .
\]
Once again Step 1 of Subsection \ref{subsection1} is unchanged. Next, $U_1$ is now defined by
\beq \label{defU1ter}
U_{1,\bar x} =U_1= \sum_{k=1}^m X_{k,\bar x} \, ,
\eeq
and Step 3 is also unchanged. Concerning Step 2, due to the new definition \eqref{defU1ter} of $U_1$, the only difference again concerns the upper bound of the quantity $\Vert \varphi_{1} (t) - {\rm e}^{- s^2 t^2/ 4 } \Vert_{1} $. To handle this term, we note that for $f(y) \in \{ \cos y, \sin y \}$,
we have, by using \eqref{estimatedelta} together with the fact that $\mu$ is assumed to have a moment of order at least $2$,
\begin{multline*}
\sup_{\bar x\in X}\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{ \sum_{k=1}^m \ X_{k,\bar x} +R_1 }{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{ \sum_{k=1}^m X_{k} +R_1 }{\sqrt{2m}} \Big ) \Big ] \Big \Vert_{1}
\\ \leq \frac{|t|}{\sqrt{2m}} \sum_{k=1}^m \sup_{\bar x \in X}\Vert X_{k,\bar x}- X_{k} \Vert_1 \leq \frac{|t|}{\sqrt{2m}} \sum_{k= 1}^m \delta_{1, \infty} (k) \ll \frac{|t|}{\sqrt{m}} \, .
\end{multline*}
We then end the proof as in Subsection \ref{subsection2}. \qed
\subsection{ Proof of Theorem \ref{thmq=4}}
Let us point out the differences compared to the proof of Theorem \ref{thmq=3} (the selections of $N$ and $m$ being identical). To get the upper bound \eqref{conslma4.5bis}, we still establish an upper bound similar to
\eqref{conslma4.5} valid for any $\ell \in [1, m]$ and any $t$ such that $t^2 (m-\ell)/ (2m) \leq C$ for some positive constant $C$. Since $\mu$ has a finite moment of order $q=4$, according to Lemma \ref{lmaR1normep}, $ \Vert R_1 \Vert_{3} \ll 1$. Hence, using Lemma
\ref{lma4.5} with
\[A_j = \frac{1}{\sqrt{m- \ell}} \Big ( \sum_{k=2(j-1)m + \ell +1}^{(2j-1)m}( X_{k,m} - \E ( X_{k,m}| {\mathcal H}^{(\ell)}_{j,m} ) ) + R_j - \E (R_j| {\mathcal H}^{(\ell)}_{j,m} ) \Big ) \]
and $a=0$ (here Lemma 4.5 in \cite{Ji} can also be used), the desired upper bound follows and the constant $C$ appearing above in the restriction for $t$ can be taken equal to $c_2$ (which is the constant appearing in Lemma
\ref{lma4.5}). The fact that $a=0$ implies that we do not need to verify, as in the proof of Theorem \ref{thmq=3}, that $m^{(3-q)/2} (m- \ell)^{-1/2} \leq c_1$. Next, we select $\ell$ as in \eqref{selectl}. This selection makes sense if $\xi^2 \leq n c_2 /2$. Therefore, we use \eqref{IneBE} by selecting
$
T = \eta \sqrt{n}$ with $\eta$ small enough (more precisely such that $ c_2 /(2 \eta^2) $ is large enough for \eqref{restrictioonell} to be satisfied when $m-\ell$ is of order $ c_2 /(2 \eta^2) $). Therefore, for any $|\xi| \leq T$, the upper bound \eqref{conslma4.5bis} is still valid. The second difference, in addition to the choice of $T$, is that instead of using Lemma \ref{lma4.9}, we use Lemmas \ref{lma4.9q=4} and \ref{lma4.9q=4bis} with $r=3$ which then entail that for any $j \geq 1$,
\beq \label{q4varphij}
\Vert \varphi_j (\xi / \sqrt{N}) - {\rm e}^{- s^2 \xi^2/ (4N) } \Vert_{1} \ll N^{-1} |\xi|^3 n^{-1/2} + |\xi | n^{-1/2} m^{-3/10} \, . \quad \qed
\eeq
Note that the upper bound (42) in Jirak \cite{Ji20} with $p=3$ has the same order as \eqref{q4varphij} and is obtained provided $\sum_{k \geq 1} k^a \delta_{3, \infty } (k) < \infty$ for some $a>0$ (indeed \cite[Lemma 5.8 (iii)]{Ji20} is a key ingredient to get (42)). Now, using \eqref{estimatedelta}, we see that $\sum_{k \geq 1} k^a \delta_{3, \infty } (k) < \infty$ for some $a>0$ as soon as $\mu$ has a moment of order $q >6$. Actually \cite[Lemma 5.8]{Ji20} is not needed in its full generality to get an upper bound such as \eqref{q4varphij}. Indeed our Lemmas \ref{lma4.9q=4} and \ref{lma4.9q=4bis} are rather based on an estimate such as \eqref{zolotarev3-step3P2}, which involves the ${\mathbb L}^1$-norm rather than the ${\mathbb L}^{3/2}$-norm.
\section{Technical lemmas} \label{TL}
\setcounter{equation}{0}
Suppose that we have a sequence of random vectors $\{(A_j , B_j )\}_{1 \leq j \leq J} $ and a filtration $\{{\mathcal H}_j \}_{1 \leq j \leq J} $ such that
$$\Big(\E_{{\mathcal H}_j} (A_j^2), \E_{{\mathcal H}_j} (|A_j|^p), \E_{{\mathcal H}_j} (B_j^2)\Big)_{1\le j\le J}
$$ is a sequence of independent random vectors (with values in ${\mathbb R}^3$).
For any real $a$, let
\[
H_j (a) = A_j+ a B_j \, \text{ and } \, \varphi_{j,a}^{\mathcal H} (x) = \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) \, .
\]
With the notation above, the following modification of \cite[Lemma 4.5]{Ji} holds:
\begin{Lemma} \label{lma4.5} Let $p >2$. Let $J\ge 16$ be an integer. Assume the following:
(i) $\E_{{\mathcal H}_j} (A_j )=\E_{{\mathcal H}_j} (B_j )= 0 $,
for any $1\le j\le J$,
(ii) there exists $u^- >0$ such that ${\mathbb P} ( \E_{{\mathcal H}_j} (A^2_j ) \leq u^- )< 1/2$, for any $1\le j\le J$,
(iii) $\sup_{j \geq 1} \E(|A_j|^p) < \infty$,
(iv) $\sup_{j \geq 1} \E(B_j^2) < \infty$.
\noindent Then there exist positive finite constants $c_1$, $c_2$ and $c_3$ depending only on $p$, $u^-$, $\sup_{j \geq 1} \E(|A_j|^p) $ and $\sup_{j \geq 1} \E(B_j^2) $ such that for any $a \in [0,c_1]$ and any
$x^2 \leq c_2$,
\[
\E \Big ( \prod_{j=1}^J | \varphi_{j,a}^{\mathcal H} (x)|\Big ) \le {\rm e}^{-c_3 x^2 J} + {\rm e}^{- J/32} \, .
\]
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{lma4.5}.} The beginning of the proof proceeds as the proof of \cite[Lemma 4.5]{Ji} but with substantial modifications.
\medskip
Let $1\le j\le J$ be fixed for the moment. Using a Taylor expansion we have
\[
\E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) = 1 - \E_{{\mathcal H}_j} (H^2_j (a)) x^2/2 + x^2/2 \int_0^1 (1-s) I(s,x) ds \, ,
\]
where, for any $h >0$ and any $s \in [0,1]$,
\begin{align*}
| I(s,x) | & \leq 4 a^2 \E_{{\mathcal H}_j} (B^2_j ) + 2 \E_{{\mathcal H}_j} \big ( A^2_j \big | ( \cos (sxH_j (a) ) - \cos (0) ) + {\rm i} ( \sin (sxH_j (a) ) - \sin(0) ) \big | \big ) \\
& \leq 4 a^2 \E_{{\mathcal H}_j} (B^2_j ) + 8 \E_{{\mathcal H}_j} (A^2_j ) |xh| + 4 \E_{{\mathcal H}_j} (A^2_j {\bf 1}_{|H_j (a) | \geq 2 h} ) \, .
\end{align*}
Using the fact that for any reals $u$ and $v$, $u^2 {\bf 1}_{|u+v| \geq 2 h} \leq u^2 {\bf 1}_{|u| \geq h} + v^2$, we get
\begin{align*}
| I(s,x) | & \leq 8 a^2 \E_{{\mathcal H}_j} (B^2_j ) + 8 \E_{{\mathcal H}_j} (A^2_j ) |xh| + 4 \E_{{\mathcal H}_j} (A^2_j {\bf 1}_{|A_j | \geq h} ) \\
& \leq 8 a^2 \E_{{\mathcal H}_j} (B^2_j ) + 8 \E_{{\mathcal H}_j} (A^2_j ) |xh| + 4 h^{2-p} \E_{{\mathcal H}_j} (|A_j|^{p} ) \, .
\end{align*}
Now, for any $\alpha >0$,
\[
\big \vert \E_{{\mathcal H}_j}(H^2_j (a)) - \big ( \E_{{\mathcal H}_j}(A_j^2
) + a^2 \E_{{\mathcal H}_j} (B^2_j ) \big ) \big \vert \leq \alpha^{-1} \E_{{\mathcal H}_j}(A_j^2) + \alpha a^2 \E_{{\mathcal H}_j}( B^2_j) \, .
\]
So, overall, for any $h >0$ and any $\alpha >0$,
\begin{multline*}
\Big | \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) - 1 + \E_{{\mathcal H}_j} (A^2_j ) x^2/2 \Big |
\leq x^2 ( 3 a^2 + \alpha a^2 ) \E_{{\mathcal H}_j} (B^2_j ) /2 \\ +
\E_{{\mathcal H}_j} (A^2_j ) ( x^2 \alpha^{-1} /2 + 2 h |x|^3 ) + x^2 h^{2-p} \E_{{\mathcal H}_j} (|A_j|^{p} ) \, .
\end{multline*}
Let us take $h = |x|^{-1/(p-1)} $. Set $\delta (p) := (p-2)/(p-1)$.
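With this choice of $h$, both remainder terms above are of the same order; indeed
\[
2 h |x|^3 = 2 |x|^{3 - 1/(p-1)} = 2 |x|^{2+\delta(p)} \quad \text{and} \quad x^2 h^{2-p} = |x|^{2 + (p-2)/(p-1)} = |x|^{2+\delta(p)} \, .
\]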
\smallskip Let $\tilde u, u^+$ be positive numbers to be chosen later.
\smallskip
Recall that by the conditional Jensen inequality,
$\E_{{\mathcal H}_j} (A_j^2)\le \big(\E_{{\mathcal H}_j} (|A_j|^p)\big)^{2/p}$ ${\mathbb P}$-almost surely. For the sake of simplicity, we shall assume that this inequality takes place everywhere.
\smallskip
From the above computations, we infer that, on the set $
\{\E_{{\mathcal H}_j} (B_j^2)\le \tilde u\} \cap \{\E_{{\mathcal H}_j} (|A_j|^p)\le u^+\}
$, one has, for any $\alpha >0$,
\begin{multline*}
\Big | \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) - 1 + \E_{{\mathcal H}_j} (A^2_j ) x^2/2 \Big | \\
\leq x^2 ( 3 a^2 + \alpha a^2 ) {\tilde u} /2 +
x^2 (u^+)^{2/p} \alpha^{-1} /2 + |x|^{2 + \delta(p)} ( 2 (u^+)^{2/p} + u^+ ) \, .
\end{multline*}
Set
\[
u(x) : = a^2 ( 3 + \alpha) {\tilde u} /2+
(u^+)^{2/p} \alpha^{-1} /2 + |x|^{ \delta(p)} ( 2 (u^+)^{2/p} + u^+ ) \, .
\]
Let $u^-$ be a positive number ($u^-$ will be given by $(ii)$ but it is unimportant at this stage). We infer that, for every $x$ such that $x^2\le 2/u^-$ and $x^2\le 2/(u^+)^{2/p}$, on the set
$$
\Gamma_j:= \{\E_{{\mathcal H}_j} (B_j^2)\le \tilde u\}\cap
\{\E_{{\mathcal H}_j} (A_j^2)> u^-\} \cap \{\E_{{\mathcal H}_j} (|A_j|^p)\le u^+\}\,
$$
one has
$$
\Big | \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) \Big|\le 1-u^-x^2/2+x^2u(x)\, .
$$
Select now $\alpha = 8 (u^+)^{2/p} / u^-$. Since $0 < u^-\! , u^+, {\tilde u} < \infty$, note that there exist positive
constants $c_1, c_2 < \infty$ (depending only on $( u^-, u^+, {\tilde u})$) such that
\begin{gather*}
a \leq c_1 \Rightarrow a^2 ( 3 + \alpha ) {\tilde u} /2 \leq u^-/16 \, ,\\
x^2\le c_2 \Rightarrow |x|^{ \delta(p)} ( 2 (u^+)^{2/p} + u^+ )
\le u^-/8\, .
\end{gather*}
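With this choice of $\alpha$, the middle term of $u(x)$ equals $(u^+)^{2/p} u^- /(16 (u^+)^{2/p}) = u^-/16$, so that, for $a \leq c_1$ and $x^2 \leq c_2$,
\[
u(x) \le \frac{u^-}{16} + \frac{u^-}{16} + \frac{u^-}{8} = \frac{u^-}{4} \, , \quad \text{whence} \quad 1-u^-x^2/2+x^2u(x) \le 1 - u^- x^2/4 \, .
\]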
Therefore, there exist constants $0 < c_1,c_2 < \infty$ (depending only on $(\tilde u, u^-\! , u^+)$) such that for any $a \leq c_1$ and any $x^2 \leq c_2$, we have, on the set $\Gamma_j$,
\[
\big | \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) \big | \le 1-u^-x^2/4 \leq e^{- u^- x^2/4} \, .
\]
Set also $\Sigma_J:= \sum_{j=1}^J{\bf 1}_{\Gamma_j}$ and
$\Lambda_J:= \{ \Sigma_J\ge J/8\}$.
\smallskip
From the previous computations and the trivial bound $\big | \E \big ( {\rm exp} ( {\rm i} x H_j(a) ) | {\mathcal H}_j \big ) \big | \le 1$, we see that, for any
$0<\tilde u, u^-\! , u^+ <\infty$, there exist positive constants
$c_1,c_2, c_3$ such that for every $x^2\le c_2$ and every $a\le c_1$, one has (recall that $J\ge 16$),
\[
\Big ( \prod_{j=1}^J | \varphi_{j,a}^{\mathcal H} (x)| \Big ) {\bf 1}_{ \Lambda_J } \le
{\rm e}^{-u^-x^2 \Sigma_J /4} \, {\bf 1}_{ \Lambda_J } \le {\rm e}^{-u^-x^2J/32}\, .
\]
Using the above trivial bound again, the lemma will be proved if, with $u^-$ given by $(ii)$, one can choose
$\tilde u, u^+>0$ such that $\BBP(\Lambda^c_J)\le {\rm e}^{-J/32}$.
\smallskip
By Markov's inequality and condition $(iv)$,
\[\BBP(\E_{{\mathcal H}_j} (B_j^2)> \tilde u)
\le \frac{\sup_{j\geq 1}\E(B_j^2)}{\tilde u}
\underset{\tilde u\to +\infty}\longrightarrow 0\, .
\]
Hence there exists
$\tilde u>0$ such that, for any $1\le j\le J$, $\BBP(\E_{{\mathcal H}_j} (B_j^2)> \tilde u)\le 1/8$.
Similarly, by condition $(iii)$, there exists $u^+>0$ such that, for any $1\le j\le J$, $\BBP(\E_{{\mathcal H}_j} (|A_j|^p)> u^+)\le 1/8$.
On another hand, by condition $(ii)$ and by definition of $\tilde u$ and $u^+$, we have
\[
\E(\Sigma_J)\ge \sum_{j=1}^J(1- (1/2 + 1/8 + 1/8))=J/4\, .
\]
Hence,
\begin{multline*}
\BBP(\Lambda^c_J)= \BBP (\Sigma_J < J/8) = \BBP (\Sigma_J - \E(\Sigma_J)< J/8- \E(\Sigma_J))\\ \le
\BBP (\Sigma_J - \E(\Sigma_J)< - J/8) = \BBP ( -\Sigma_J + \E(\Sigma_J) > J/8) \, .
\end{multline*}
Therefore, using Hoeffding's inequality (see \cite[Theorem 2]{Hoeffding}),
\[
\BBP(\Lambda_J^c)\le {\rm e}^{\frac{-2(J/8)^2}{J}} = {\rm e}^{-J/32}
\, ,
\]
which ends the proof of the lemma. \qed
\smallskip
For the next lemma, let us introduce the following notation: for any real $\beta$, let
\beq \label{defkappalma}
\kappa_\beta = \frac{(\beta +1) (q-3/2)}{ q-1/2} \, .
\eeq
\begin{Lemma} \label{lma4.7} Assume that $\mu$ has a moment of order $q >2 $. Let $X_{k,m}$ be defined by \eqref{defXkm}. Then, setting ${\bar X}_{k,m} = X_{k,m} - \E_m (X_{k,m})$, for any real $\beta$ such that $-1 < \beta < q-3 + 1/q$, we have
\[
\Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 - \E \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 \Big \Vert_{1}\ll 1 + m^{3-q} {\bf 1}_{q \leq 3}+ m^{ 1 - \kappa_{\beta}} {\bf 1}_{ \beta < (q-3/2)^{-1}} \, ,
\]
where $\kappa_\beta$ is defined in \eqref{defkappalma} and $\E_m (\cdot)$ means $\E(\cdot | {\mathcal G}_m )$ with ${\mathcal G}_m = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_m)$. In particular, if $q>3$, then
\[
\Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 - \E \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 \Big \Vert_{1}\ll m^{1/5 } \, .
\]
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{lma4.7}.} Note first that
\begin{align} \label{step1lma47}
& \Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 - \E \Big ( \sum_{k=m+1}^{2m} {\bar X}_{k,m} \Big )^2 \Big \Vert_{1} \\ & \leq \Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} X_{k,m} \Big )^2 - \E \Big ( \sum_{k=m+1}^{2m} X_{k,m} \Big )^2 \Big \Vert_{1} + 2 \Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} X_{k,m} \Big ) \Big \Vert^2_{2} \nonumber \\
& := I_m + I\!\!I_m \nonumber \, .
\end{align}
Taking into account \eqref{Borne1condexpect}, \eqref{estimatedelta} and the fact that $q\geq 2$, we get
\beq \label{P1lma47}
\Big \Vert \E_m \Big (\sum_{k=m+1}^{2m} X_{k,m} \Big ) \Big \Vert_{2} \ll \sum_{k=m+1}^{2m} \Vert \E_m ( X_{k,m} ) \Vert_{2} \ll \sum_{k=1}^m \delta_{1, \infty} (k) \ll 1 \, .
\eeq
It remains to handle $I_m$. With this aim, we first write the following decomposition: for any $\gamma \in (0,1]$
\begin{multline} \label{decIm}
I_m \leq \sum_{k=1}^{m} \Vert \E_m ( X^2_{k+m,m})- \E ( X_{k+m,m} ^2 ) \Vert_{1} \\ + 2 \sum_{\ell=1}^{m} \ell^{\gamma} \sup_{\ell \leq j < i \leq \min ( 2 \ell , m)} \Vert \E_m ( X_{i+m,m} X_{j+m,m})- \E ( X_{i+m,m} X_{j+m,m})\Vert_{1} \\
+ 2 \sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell}\Vert \E_m ( X_{\ell +m,m} X_{\ell+k +m,m})- \E ( X_{\ell+m,m} X_{\ell+k+m,m})\Vert_{1} \, .
\end{multline}
Note that for $1 \leq i,j \leq m$,
\[
\Vert \E_m ( X_{i+m,m} X_{j+m,m})- \E ( X_{i+m,m} X_{j+m,m})\Vert_{1} \leq \sup_{{\bar x}_1 , {\bar x}_2 \in X \atop{{\bar y}_1 , {\bar y}_2 \in X}} \E \big | X_{i, {\bar x}_1 } X_{j, {\bar x}_2 } - X_{i, {\bar y}_1 } X_{j, {\bar y}_2 } \big | \, .
\]
With the same arguments as those developed in the proof of \cite[Prop. 4]{CDJ}, and since $\mu $ has a moment of order $q>2$, we then infer that
\beq \label{pr4.5prime}
\sum_{k \geq 1 } k^{q-3} \Vert \E_m ( X^2_{k+m,m})- \E ( X_{k+m,m}^2 ) \Vert_{1} \ll 1 \, ,
\eeq
and, for every $\beta < q-3+1/q$,
\beq \label{item2prop4}
\sum_{\ell \geq 1} \ell^{ \beta} \sup_{\ell \leq j < i \leq \min (2 \ell,m) } \Vert \E_m ( X_{i+m,m} X_{j+m,m})- \E ( X_{i+m,m} X_{j+m,m})\Vert_{1} \ll 1 \, .
\eeq
On the other hand, with the same arguments as those used to prove \cite[Relation (34)]{CDJ}, we first write
\begin{multline*}
\sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell}\Vert \E_m ( X_{\ell +m,m} X_{\ell+k +m,m})- \E ( X_{\ell+m,m} X_{\ell+k+m,m})\Vert_{1} \\
\ll \Big ( \sum_{\ell=m+1}^{2m} \Vert \E_m(X_{\ell,m} )\Vert_{2} \Big )^2 + \sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell} \sum_{u=1}^{\ell} \Vert P_{m+1} ( X_{u + m,m} ) \Vert_2 \Vert P_{m+1} ( X_{u +k+ m,m} ) \Vert_2 \, ,\\
\ll \Big ( \sum_{\ell=m+1}^{2m} \Vert \E_m(X_{\ell,m} )\Vert_{2} \Big )^2 + \Big(\sum_{v = 1}^m a(0,v) \Big)\Big(
\sup_{u\ge 1} \sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell} a(k,u) \, \Big) \, ,
\end{multline*}
where we have used the notations $P_{m+1}( \cdot) = \E_{m+1} ( \cdot) - \E_{m}( \cdot) $ and $a(k,u) = \Vert P_{m+1} ( X_{u +k+ m,m} ) \Vert_2$. Note first that
\begin{align*}
\sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell} a(k,u) & \ll \sum_{k=2}^{m-1} ( k^{1/\gamma} \wedge m ) a(k,u) \\& \ll
\sum_{k=2}^{[m^{\gamma} ] } k^{-1} a(k,u) \sum_{ \ell =1}^k \ell^{1 / \gamma} + m \sum_{k = [m^{\gamma} ] +1}^m k^{-1}a(k,u) \sum_{ \ell =1}^k 1 \, . \end{align*}
Changing the order of summation and using Cauchy-Schwarz's inequality, it follows that
\begin{multline*}
\sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell} a(k,u)
\ll \sum_{\ell =1}^{[m^{\gamma} ] } \ell^{1 / \gamma - 1/2} \Big ( \sum_{ k \geq \ell } a^2(k,u) \Big )^{1/2} \\ + m \sum_{\ell = [m^{\gamma} ]+1 }^{m } \ell^{ - 1/2} \Big ( \sum_{ k \geq \ell } a^2(k,u) \Big )^{1/2} + m^{ 1 + \gamma/2} \Big ( \sum_{ k \geq [m^{\gamma} ]+1 } a^2(k,u) \Big )^{1/2} \, .
\end{multline*}
But, for any $u \geq 1$, by stationarity,
\[
\Big ( \sum_{ k \geq \ell } a^2(k,u) \Big )^{1/2} \leq \Vert \E_{m+1} ( X_{u +\ell+ m,m} ) \Vert_2 \leq \Vert \E_{m+1} ( X_{u +\ell+ m,m} ) \Vert_{\infty} \leq \delta_{1, \infty} ( \ell) \, .
\]
Notice also that
\[
\sum_{v= 1}^m a(0,v)\le \sum_{v= 1}^m
\Vert \E_{m+1}(X_{v+m,m})\Vert_2 \le \sum_{v= 1}^m
\delta_{1,\infty}(v) \, .
\]
\smallskip
Hence, from the above considerations and taking into account \eqref{P1lma47}, \eqref{estimatedelta}, \eqref{estimatedeltabis} and the fact that $\mu$ has a moment of order $q \geq 2$, we infer that
\begin{multline} \label{pr4.7}
\sum_{\ell=1}^{m} \sum_{k=[\ell^{\gamma} ]+1}^{m-\ell}\Vert \E_m ( X_{\ell +m,m} X_{\ell+k +m,m})- \E ( X_{\ell+m,m} X_{\ell+k+m,m})\Vert_{1} \\
\ll 1 + m^{1 - \gamma ( q - 3/2 ) } {\bf 1}_{1 /\gamma > q-3/2 } \, .
\end{multline}
Starting from \eqref{decIm} and considering the estimates \eqref{pr4.5prime}, \eqref{item2prop4} and \eqref{pr4.7},
we get, for any $\gamma \in (0,1]$ and any $\beta$ such that $-1 < \beta < q-3 + 1/q$,
\beq \label{Imlma47new}
I_m \ll 1 + m^{3-q} {\bf 1}_{q \leq 3}+ m^{\gamma- \beta} {\bf 1}_{\gamma > \beta} +m^{1 - \gamma ( q - 3/2 ) } {\bf 1}_{1 /\gamma > q-3/2 } \, .
\eeq
Let us select now $ \gamma$ such that $\gamma - \beta = 1 - \gamma ( q - 3/2 )$. This gives $ \gamma = ( \beta +1) / (q-1/2)$. Since $\beta >-1$, $\beta < q-3 + 1/q$ and $q >2$ we have $\gamma \in (0, 1]$. Moreover $1 /\gamma > q-3/2 $ and $\gamma > \beta$ provided $ \beta < (q-3/2)^{-1}$. Starting from \eqref{step1lma47} and taking into account \eqref{P1lma47}, \eqref{Imlma47new} and the above selection of $\gamma$, which entails that $\kappa_\beta = \gamma ( q-3/2)$, the lemma follows. \qed
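Let us also briefly justify the last assertion of Lemma \ref{lma4.7}. When $q >3$, one can let $\beta$ increase to $\min \big ( q-3+1/q \, , \, (q-3/2)^{-1} \big )$. If $\beta$ can be taken larger than or equal to $(q-3/2)^{-1}$, the last indicator vanishes and the bound is $\ll 1$. Otherwise, letting $\beta \uparrow q-3+1/q$,
\[
\kappa_\beta \uparrow \frac{(q-2+1/q) (q-3/2)}{q-1/2} \, ,
\]
and one can check that the right-hand side is larger than $4/5$ for every $q>3$ (its value at $q=3$ being exactly $4/5$). Hence, taking $\beta$ close enough to this endpoint, $1 - \kappa_\beta \le 1/5$, and the bound $\ll m^{1/5}$ follows in both cases.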
\begin{Lemma} \label{lmaR1normep} Let $p \geq 2$. Assume that $\mu$ has a moment of order $q$ in $ ]p, p+1]$. Then $ \Vert R_1 \Vert^p_{p} \ll m^{p+1-q}$, where $R_1$ is defined by \eqref{defRj}.
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{lmaR1normep}.} Let ${\tilde X}_{k,m} = X_{k,m} - \E_{{\mathbb F}_m} (X_{k,m})$ and
$\E_\ell (\cdot):=\E(\cdot | {\mathcal G}_\ell )$ with ${\mathcal G}_\ell = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_\ell)$. We write
\[
{\tilde X}_{k,m} = ({\tilde X}_{k,m} - \E_{k-1} ( {\tilde X}_{k,m} )) + \E_{k-1} ( {\tilde X}_{k,m} ) := d_{k,m} + r_{k,m} \, ,
\]
and then
\begin{equation} \label{dec1R1mart}
\Vert R_1 \Vert_{p} \leq \Big \Vert \sum_{k=m+1}^{2m} d_{k,m} \Big \Vert_p + \Big \Vert \sum_{k=m+1}^{2m} r_{k,m} \Big \Vert_p \, .
\end{equation}
Note that $(d_{k,m})_{k \geq 1} $ is a sequence of ${\mathbb L}^q$-martingale differences with respect to the filtration $({\mathcal G}_{k})_{k \geq 1} $. Moreover, for any $r \geq 1$,
$\Vert d_{k,m} \Vert_r \leq 2 \Vert {\tilde X}_{k,m} \Vert_r$ and, for any integer $k \in [m+1, 2m]$,
\begin{multline*}
\E \big | {\tilde X}_{k,m} \big |^r \\
= \E \Big | f_m (\varepsilon_{k-m+1}, \ldots, \varepsilon_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) - \int f_m (v_{k-m+1}, \ldots, v_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) \prod_{i=k-m+1}^{m} d\mu (v_i) \Big |^r \\
\leq \int \E \Big | f_m (\varepsilon_{k-m+1}, \ldots, \varepsilon_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) - f_m (v_{k-m+1}, \ldots, v_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) \Big |^r \prod_{i=k-m+1}^{m} d\mu (v_i)
\, .
\end{multline*}
Hence, for any integer $k \in [m+1, 2m]$ and any $r \geq 1$,
\begin{align} \label{P1lmaR1norme2}
\E \big | {\tilde X}_{k,m} \big |^r
& \leq \int \!\! \int \E \Big | f_m (u_{k-m+1}, \ldots, u_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) \nonumber \\& \quad \quad - f_m (v_{k-m+1}, \ldots, v_{m}, \varepsilon_{m+1}, \ldots, \varepsilon_{k}) \Big |^r \prod_{i=k-m+1}^{m} d\mu (v_i) \prod_{i=k-m+1}^{m} d\mu (u_i) \nonumber \\
& \leq \sup_{{\bar x} , {\bar y} \in X} \E | X_{k-m,{\bar x}} - X_{k-m,{\bar y}}|^r= \delta^r_{r,\infty} (k-m) \, .
\end{align}
On the other hand, $(r_{k,m})_{k \geq 1} $ is a sequence of centered random variables such that
\[
\Vert r_{k,m} \Vert_{\infty} \leq 2 \Vert \E ( |X_k| | {\mathcal G}_{k-1} ) \Vert_{\infty} \leq 2 \int_G \log (N(g)) \mu (dg ):= K < \infty \, .
\]
To handle the first term in the right-hand side of \eqref{dec1R1mart}, we use the Rosenthal-Burkholder inequality for martingales (see \cite{Bu73}). Hence, there exists a positive constant $c_p$ only depending on $p$ such that
\begin{equation*}
\Big \Vert \sum_{k=m+1}^{2m} d_{k,m} \Big \Vert_p^p \le c_p\Big \{ \Big \Vert \sum_{k=m+1}^{2m} \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Big \Vert_{p/2}^{p/2} +\sum_{k=m+1}^{2m} \Vert d_{k,m } \Vert_p^p\Big \} \, .
\end{equation*}
Taking into account \eqref{P1lmaR1norme2}, \eqref{estimatedelta} and the fact that $\mu$ has a moment of order $q \in \, ]p, p+1]$, it follows that
\begin{equation*}
\sum_{k=m+1}^{2m} \Vert d_{k,m } \Vert_p^p \leq 2^p \sum_{k=m+1}^{2m} \delta^p_{p,\infty} (k-m) \ll m^{p+1-q}\, .
\end{equation*}
On the other hand, by the properties of conditional expectation, note that
\[
\Vert \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Vert_\infty \leq \Vert \E ( X^2_k | {\mathcal G}_{k-1} ) \Vert_{\infty} \leq \int_G ( \log (N(g)) )^2 \mu (dg ):= L < \infty \, .
\]
Hence, by using \eqref{P1lmaR1norme2},
\[
\Vert \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Vert_{p/2}^{p/2} \le L^{(p-2)/2} \Vert d_{k,m} \Vert_2^2 \leq 4L^{(p-2)/2} \Vert {\tilde X}_{k,m} \Vert_2^2 \leq 4L^{(p-2)/2} \delta^2_{2,\infty}\ (k-m) \, .
\]
It follows that
\[
\Big \Vert \sum_{k=m+1}^{2m} \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Big \Vert_{p/2}^{p/2}
\le 4 L^{(p-2)/2}
\Big ( \sum_{k= 1}^m \delta^{4/p}_{2,\infty} (k) \Big )^{p/2} \, .
\]
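For the reader's convenience, this step can be spelled out: by the triangle inequality in ${\mathbb L}_{p/2}$ and the bound on each summand,
\[
\Big \Vert \sum_{k=m+1}^{2m} \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Big \Vert_{p/2} \leq \sum_{k=m+1}^{2m} \big \Vert \E(d_{k,m}^2| {\mathcal G}_{k-1}) \big \Vert_{p/2} \leq \big ( 4 L^{(p-2)/2} \big )^{2/p} \sum_{k=1}^{m} \delta^{4/p}_{2,\infty} (k) \, ,
\]
and raising both sides to the power $p/2$ gives the display above.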
By taking into account \eqref{estimatedelta} (when $p=2$) and \eqref{estimatedeltabis} (when $p >2$), and since $q \in ]p, p+1]$, we get
\[
\Big \Vert \sum_{k=m+1}^{2m} \E(d_{k,m}^2| {\mathcal G}_{k-1}) \Big \Vert_{p/2}^{p/2}
\le 4 L^{(p-2)/2} m^{p+1-q} \, .
\]
So, overall,
\begin{equation}\label{burkholder-2}
\Big \Vert \sum_{k=m+1}^{2m} d_{k,m} \Big \Vert_p^p \ll m^{p+1-q} \, .
\end{equation}
We now handle the second term on the right-hand side of \eqref{dec1R1mart}. By using the Burkholder-type inequality stated in \cite[Proposition 4]{DD03}, we get
\[
\Big \Vert \sum_{k=m+1}^{2m} r_{k,m} \Big \Vert_p^2 \leq 2 p \sum_{i= m+1}^{2m} \sum_{k=i}^{2m} \Vert r_{i,m} \E ( r_{k,m} | {\mathcal G}_{i-1} ) \Vert_{p/2} \, .
\]
For any $ k \geq i$, by the computations leading to the upper bound \cite[(63)]{CDM}, we have
\beq \label{condexpectationrkm}
\Vert \E ( r_{k,m} | {\mathcal G}_{i-1} ) \Vert_{\infty} \leq \delta_{1, \infty} (k-i +1) \, ,
\eeq
implying that
\[
\Vert r_{i,m} \E ( r_{k,m} | {\mathcal G}_{i-1} ) \Vert_{p/2} \ \leq \Vert r_{i,m} \Vert_{p/2} \delta_{1, \infty} (k-i +1) \, .
\]
Since $\mu$ has a moment of order at least $2$, by \eqref{estimatedelta}, $\sum_{\ell \geq 1} \delta_{1, \infty} (\ell) < \infty$. Hence
\[
\Big \Vert \sum_{k=m+1}^{2m} r_{k,m} \Big \Vert_p^2 \ll \sum_{i= m+1}^{2m} \Vert r_{i,m} \Vert_{p/2} \, .
\]
But, for any $r \geq 1$, $\Vert r_{i,m} \Vert^r_{r} \leq K^{r-1} \Vert r_{i,m} \Vert_1 \leq 2 K^{r-1} \Vert {\tilde X}_{i,m} \Vert_1 $. Hence, by using \eqref{P1lmaR1norme2}, it follows that,
for any $r \geq 1$, $\Vert r_{i,m} \Vert^r_{r} \leq 2 K^{r-1} \delta_{1, \infty} (i-m ) $. Therefore, by \eqref{estimatedeltabis} and the fact that $q-1 >p/2$ (since $q >p$ and $p \geq 2$), we derive that
\begin{equation}\label{burkholder-3}
\Big \Vert \sum_{k=m+1}^{2m} r_{k,m} \Big \Vert_p^p
\ll \Big (
\sum_{i= 1}^{m} \delta^{2/p}_{1, \infty} (i ) \Big )^{p/2} \ll 1 \, .
\end{equation}
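In more detail, \eqref{burkholder-3} is obtained as follows: since $\Vert r_{i,m} \Vert_{p/2} \leq \big ( 2 K^{p/2-1} \delta_{1, \infty} (i-m) \big )^{2/p} \ll \delta^{2/p}_{1, \infty} (i-m)$, we have
\[
\Big \Vert \sum_{k=m+1}^{2m} r_{k,m} \Big \Vert_p^2 \ll \sum_{i= m+1}^{2m} \Vert r_{i,m} \Vert_{p/2} \ll \sum_{i=1}^{m} \delta^{2/p}_{1, \infty} (i) \, ,
\]
and raising to the power $p/2$ yields the first bound in \eqref{burkholder-3}; the last one comes from \eqref{estimatedeltabis} and the fact that $q-1 > p/2$, which make the series convergent.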
Starting from \eqref{dec1R1mart} and considering the upper bounds \eqref{burkholder-2} and \eqref{burkholder-3}, the lemma follows. \qed
\begin{Lemma} \label{lmamomentp} Assume that $\mu$ has a finite moment of order $q \geq 2$. Then $
\big \Vert \sum_{k=m+1}^{2m } X_{k} \big \Vert_q \ll \sqrt{m}
$ and
$
\big \Vert \sum_{k=m+1}^{2m } X_{k,m} \big \Vert_q \ll \sqrt{m}
$.
\end{Lemma}
\noindent{{\bf Proof of Lemma \ref{lmamomentp}}.} The two upper bounds are proved similarly. Let us prove the second one. As in the derivation of \eqref{momentAJ}, we use \cite[Cor. 3.7]{MPU19} to derive that
\[
\Big \Vert \sum_{k=m+1}^{2m } X_{k,m} \Big \Vert_q \ll \sqrt{m} \Big [ \Vert X_{1+m,m} \Vert_{q} + \sum_{k=m+1}^{2m} k^{-1/2} \Vert \E_m (X_{k,m}) \Vert_{q} \Big ] \, ,
\]
where $\E_m (\cdot)$ means $\E(\cdot | {\mathcal G}_m )$ with ${\mathcal G}_m = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_m)$.
But $\Vert X_{1+m,m} \Vert_{q} \leq \Vert X_{1} \Vert_{q} < \infty$ and $\Vert \E_m (X_{k+m,m}) \Vert_{q} \leq \Vert \E_m (X_{k+m,m}) \Vert_{\infty}\leq \delta_{1, \infty} (k)$. Hence, since $k^{-1/2} \leq 1$, the bracketed term is bounded by $\Vert X_{1} \Vert_{q} + \sum_{k \geq 1} \delta_{1, \infty} (k)$, which is finite by \eqref{estimatedelta} since $\mu$ has a moment of order $q \geq 2$, and the lemma follows. \qed
\medskip
For the next lemma, we recall the notations \eqref{defbbFm} and \eqref{notaSm1} for ${\mathbb F}_m$ and $Y_j^{(1)}$.
\begin{Lemma} \label{lma4.9} Assume that $\mu$ has a finite moment of order $q \in ]2,3]$. Then for $f(x) \in \{ \cos x, \sin x \}$, we have
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \ll \frac{ t^2 }{m^{q/2 -1}} + \frac{|t|}{m^{q-3/2}} \, .
\]
In addition
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_1^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \ll \frac{ t^2 }{m^{q/2 -1}} \, .
\]
\end{Lemma}
\noindent{{\bf Proof of Lemma \ref{lma4.9}.} Since the derivative of $x \mapsto f(tx)$ is $t^2$-Lipschitz, making use of a Taylor expansion as done in the proof of Item (2) of \cite[ Lemma 5.2]{DMR09}, we have
\begin{multline} \label{P1lma49}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \\
\leq \Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{U_2}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} + \frac{ t^2 }{ 2m} \big ( \Vert R_2 \Vert_{2} \Vert U_2 \Vert_{2} + \Vert R_2 \Vert^2_{2} \big ) \, .
\end{multline}
Now recall that $U_2= \sum_{k=2m+1}^{3m} {\tilde X}_{k,m}$ where ${\tilde X}_{k,m}= X_{k,m} - \E_{{\mathbb F}_m} (X_{k,m} ) $ with
$X_{k,m} = \E ( X_{k} | {\mathcal E}_{k-m+1}^{k} ) :=f_m ( \varepsilon_{k-m+1}, \ldots, \varepsilon_{k})$.
Let $(\varepsilon^*_k)_k$ be an independent copy of $(\varepsilon_k)_k$ and independent of $W_0$. Define
\beq \label{defU2*}
{X}^*_{k,m}= f_{m} (\varepsilon^*_{k-m+1}, \ldots,\varepsilon_{2m}^*, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) \mbox{ and } U_2^*= \sum_{k=2m+1}^{3m} { X}^*_{k,m} \, .
\eeq
Clearly $U_2^*$ is independent of $ {\mathbb F}_m $. Using again the fact that the derivative of $x \mapsto f(tx)$ is $t^2$-Lipschitz and a Taylor expansion as in the proof of \cite[ Lemma 5.2]{DMR09}, we get
\begin{multline} \label{P2lma49}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{U_2}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \\ \ll
\Big |\E \Big [ f \Big ( t \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /\sqrt{2}) \big ] \Big | + \frac{ t^2 }{2 m } \big ( \Vert U_2 - U_2^* \Vert_{2} \Vert U_2^* \Vert_{2} + \Vert U_2 - U_2^* \Vert^2_{2} \big ) \, .
\end{multline}
Setting
${\mathcal G}_{k,m} = \sigma ( \varepsilon^*_{m+2}, \ldots,\varepsilon_{2m}^*, \varepsilon_{m+2}, \ldots,\varepsilon_{2m}, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) $, we have
\begin{multline*}
\Vert U_2 - U_2^* \Vert^2_{2} \leq 2 \Big ( \sum_{k=2m+1}^{3m} \Vert \E_{{\mathbb F}_m} (X_{k,m} ) \Vert_{2} \Big )^2 +2 \sum_{k=2m+1}^{3m} \Vert { X}_{k,m} - { X}^*_{k,m}\Vert_{2}^2 \\
+ 4 \sum_{k=2m+1}^{3m} \sum_{\ell = k+1}^{3m} \Vert ( {X}_{k,m} - { X}^*_{k,m}) \E ( { X}_{\ell,m} - { X}^*_{\ell,m} | {\mathcal G}_{k,m}) \Vert_1 \, .
\end{multline*}
But, for any integer $k$ in $[2m+1,3m]$ and any $r \geq 1$,
\beq \label{new1EFmr}
\Vert \E_{{\mathbb F}_m} (X_{k,m} ) \Vert_{r} \leq \Vert \E(X_{k,m} | {\mathcal G}_{2m}) \Vert_{r} \leq \Vert \E(X_{k,m} | {\mathcal G}_{2m}) \Vert_{\infty} \leq \delta_{1, \infty} (k-2m) \, ,
\eeq
where ${\mathcal G}_{2m} = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_{2m})$.
On the other hand, proceeding as in the proof of \cite[Lemma 24]{CDM}, we get that, for any $k \geq 2m$ and any $r \geq 1$,
\beq \label{new2EFmr}
\Vert { X}_{k,m} - { X}^*_{k,m} \Vert^r_{r} \leq \delta_{r, \infty}^r (k-2m) \, .
\eeq
Let us now handle the quantity $\Vert \E ( { X}_{\ell,m} - { X}^*_{\ell,m} | {\mathcal G}_{k,m}) \Vert_{\infty} $ for $\ell >k$. To this end, let $(\varepsilon'_k)_k$ be an independent copy of $(\varepsilon_k)_k$, also independent of $ ( (\varepsilon^*_k)_k,W_0)$. With the notation ${\mathcal H}_{k,m }= \sigma ( (\varepsilon_{i})_{i\leq k}, W_0, \varepsilon^*_{m+1}, \ldots, \varepsilon^*_{2m} )$, one has, for any integers $k, \ell$ in $ [2m+1, 3m]$ such that $\ell >k$,
\begin{multline*}
\E ( { X}_{\ell,m} - { X}^*_{\ell,m} | {\mathcal G}_{k,m}) = \E \big (f_{m} (\varepsilon_{\ell-m+1}, \ldots,\varepsilon_{2m}, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}, \varepsilon'_{k+1} \ldots, \varepsilon'_{\ell}) | {\mathcal H}_{k,m } \big ) \\
-\E \big (f_{m} (\varepsilon^*_{\ell-m+1}, \ldots,\varepsilon_{2m}^*, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}, \varepsilon'_{k+1}, \ldots, \varepsilon'_{\ell}) | {\mathcal H}_{k,m } \big ) \, .
\end{multline*}
Therefore, by simple arguments and using stationarity, we infer that, for $ k, \ell$ in $ [2m+1, 3m]$ such that $\ell >k$,
\beq \label{new3EFmr}
\Vert \E ( { X}_{\ell,m} - { X}^*_{\ell,m} | {\mathcal G}_{k,m}) \Vert_{\infty} \leq \sup_{{\bar x}, {\bar y} \in X} |\E (X_{ \ell - k, {\bar x}} ) - \E (X_{\ell -k , {\bar y}} )| \leq \delta_{1, \infty} (\ell-k) \, .
\eeq
So, overall,
\[
\Vert U_2 - U_2^* \Vert^2_{2} \ll \sum_{k=1}^m \delta_{2, \infty}^2 (k) + \Big ( \sum_{k=1}^m \delta_{1, \infty} (k) \Big )^2 \, .
\]
Taking into account \eqref{estimatedelta} and the fact that $\mu$ has a moment of order $q \in ]2,3]$, it follows that
\begin{equation} \label{P4lma49}
\Vert U_2 - U_2^* \Vert^2_{2} \ll m^{3-q} \, .
\end{equation}
On the other hand, by stationarity, $ \Vert R_2 \Vert_{2} = \Vert R_1 \Vert_{2}$, and by Lemma \ref{lmaR1normep}, since $\mu$ has a moment of order $q\in ]2,3]$, we have $ \Vert R_1 \Vert_{2} \ll m^{(3-q)/2}$.} Moreover, by using \eqref{P4lma49}, Lemma \ref{lmamomentp}
and the fact that ${ X}^*_{k,m}$ is distributed as $X_{k,m}$, we get that $\Vert U_2 \Vert_{2} + \Vert U^*_2 \Vert_{2} \ll \sqrt{m}$.
So, the inequalities \eqref{P1lma49}, \eqref{P2lma49} and \eqref{P4lma49} together with the above considerations, lead to
\begin{equation} \label{P3lma49}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \ll
\Big |\E \Big [ f \Big ( t \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /\sqrt{2}) \big ] \Big | + \frac{ t^2 }{m^{q/2 -1}} \, .
\end{equation}
Next, taking into account that $x \mapsto f(tx)$ is $t$-Lipschitz and the fact that $U_2^* \stackrel{\mathcal D}{=} \sum_{k=1}^{m} X_{k+m,m}$ and $S_m \stackrel{\mathcal D}{=} S_{2m}-S_m$, we get
\[
\Big | \E \Big [ f \Big ( t \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] - \E \Big [ f \Big ( t \frac{S_{m}}{\sqrt{2m}} \Big ) \Big ] \Big | \leq \frac{|t|}{\sqrt{2m}} \Big \Vert \sum_{k=1}^m ( X_{k+m,m} - X_{k+m} ) \Big \Vert_{1} \, .
\]
But, by stationarity, \cite[Lemma 24]{CDM} and \eqref{estimatedelta}, we have
\[
\Big \Vert \sum_{k=1}^m ( X_{k+m,m} - X_{k+m} ) \Big \Vert_{1} \leq m \delta_{1, \infty} (m) \ll 1/m^{q-2} \, ,
\]
implying that
\beq \label{P5lma49}
\Big | \E \Big [ f \Big ( t \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] - \E \Big [ f \Big ( t \frac{S_{m}}{\sqrt{2m}} \Big ) \Big ] \Big | \ll \frac{|t|}{m^{q-3/2}} \, .
\eeq
Hence starting from \eqref{P3lma49} and taking into account \eqref{P5lma49}, we derive that
\begin{multline} \label{P6lma49}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t sN / \sqrt{2} ) \big ] \Big \Vert_{1}
\\ \ll \Big |\E \Big [ f \Big ( t \frac{S_m}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /\sqrt{2}) \big ] \Big | +\frac{ t^2 }{m^{q/2 -1}} + \frac{ |t| }{m^{q-3/2}} \, .
\end{multline}
Next note that the first derivative of $ x \mapsto f(tx)$ is $t^2$-Lipschitz. Hence, by the definition of the Zolotarev distance of order $2$ (see for instance the introduction of \cite{DMR09} for the definition of these distances),
\[
\Big |\E \Big [ f \Big ( t \frac{S_m}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2}) \big ] \Big | \leq t^2 \zeta_2 \big ( P_{S_m/{\sqrt {2 m}}} , G_{s^2/2} \big) \, .
\]
We apply \cite[Theorems 3.1 and 3.2]{DMR09} and, since $\mu$ has a finite moment of order $q \in ]2,3]$, we derive
\[
\zeta_2 \big ( P_{S_m/{\sqrt{2 m}}} , G_{s^2/2} \big) \ll m^{- ( q/2-1)} \, .
\]
Note that the fact that the conditions (3.1), (3.2), (3.4) and (3.5) required in \cite[Theorems 3.1 and 3.2]{DMR09} hold when $\mu$ has a finite moment of order $q \in ]2,3]$ has been established in the proof of \cite[Theorem 2]{CDJ}.
Hence
\begin{equation} \label{P7lma49}
\Big |\E \Big [ f \Big ( t \frac{S_m}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /\sqrt{2}) \big ] \Big | \ll \frac{ t^2 }{m^{q/2 -1}} \, .
\end{equation}
Starting from \eqref{P6lma49} and considering \eqref{P7lma49}, the first part of Lemma \ref{lma4.9} follows. Now to prove the second part, we note that
\begin{multline*}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_1^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \\
\leq \Big \Vert \E \Big [ f \Big ( t \frac{S_m}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} + \frac{ t^2 }{ 2m} \big ( \Vert R_1 \Vert_{2} \Vert S_m \Vert_{2} + \Vert R_1 \Vert^2_{2} \big ) \, ,
\end{multline*}
where we used the fact that $S_m$ is independent of ${\mathbb F}_m$. Hence the second part of Lemma \ref{lma4.9} follows by using \eqref{P7lma49}, Lemma \ref{lmaR1normep} and the fact that, by Lemma \ref{lmamomentp}, $\Vert S_m \Vert_2 \ll \sqrt{m}$. \qed
\begin{Lemma} \label{lmaU1U1starnormep} Let $p \geq 2$. Assume that $\mu$ has a moment of order $q$ in $ ]p, p+1]$. Then $ \Vert U_2 - U_2^* \Vert^p_{p} \ll m^{p+1-q}$, where $U_2$ is defined by \eqref{defUj} and $U^*_2$ is defined by \eqref{defU2*}.
\end{Lemma}
\noindent{\bf Proof of Lemma \ref{lmaU1U1starnormep}.} When $p=2$, the lemma has been proved in \eqref{P4lma49}. Let us complete the proof for any $p \geq 2$. We shall follow the same strategy as in the proof of Lemma \ref{lmaR1normep}. Let
$Z_{k,m}:= X_{k,m} - { X}^*_{k,m}$ where ${X}^*_{k,m}$ is defined by \eqref{defU2*}. Setting ${\mathcal F}^Z_j = \sigma ( \varepsilon_{m+2}, \ldots, \varepsilon_{j}, \varepsilon^*_{m+2}, \ldots, \varepsilon^*_{2m})$,
\[
d_{k,m}^Z := Z_{k,m} - \E ( Z_{k,m} | {\mathcal F}^Z_{k-1} ) \, \text{ and } \, r_{k,m}^Z = \E ( Z_{k,m} | {\mathcal F}^Z_{k-1} ) \, ,
\]
we have
\begin{equation} \label{dec1UUstarmart}
\Vert U_2 - U_2^* \Vert_{p} \leq \sum_{k=2m+1}^{3m} \Vert \E ( X_{k,m} | {\mathbb F}_m )\Vert_{p} + \Big \Vert \sum_{k=2m+1}^{3m} d^Z_{k,m} \Big \Vert_p + \Big \Vert \sum_{k=2m+1}^{3m} r^Z_{k,m} \Big \Vert_p \, .
\end{equation}
Recall the notation $ {\mathcal G}_{\ell} = \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_{\ell})$. Note that
\begin{equation} \label{dec1UUstarmart-1}
\Vert \E ( ( d^Z_{k,m} )^2 | {\mathcal F}^Z_{k-1} ) \Vert_{\infty} \leq 4 \Vert \E ( X^2_{k,m} | {\mathcal G}_{k-1} ) \Vert_{\infty} \leq 4 \int_G ( \log (N(g)) )^2 \mu (dg ) < \infty
\end{equation}
and
\begin{equation} \label{dec1UUstarmart-2}
\Vert r_{k,m}^Z\Vert_{\infty} \leq 2 \Vert \E ( |X_{k,m} | | {\mathcal G}_{k-1} ) \Vert_{\infty} \leq 2 \int_G \log (N(g)) \mu (dg ) < \infty \, .
\end{equation}
Next, by \eqref{new3EFmr}, for any integers $k,i$ in $[2m+1, 3m]$ such that $k \geq i$,
\begin{equation} \label{dec1UUstarmart-3}
\Vert \E ( r_{k,m}^Z | {\mathcal F}^Z_{i-1} ) \Vert_{\infty} \leq \delta_{1, \infty} (k - i+1) \, .
\end{equation}
In addition, for any $r \geq 1$, $\Vert d^Z_{k,m} \Vert_r \leq 2 \Vert Z_{k,m} \Vert_r$ and, for any integer $k \in [2m+1, 3m]$,
\begin{align*}
\E \big | {Z}_{k,m} \big |^r
& = \E \Big | f_m (\varepsilon_{k-m+1}, \ldots, \varepsilon_m, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) - f_m (\varepsilon^*_{k-m+1}, \ldots, \varepsilon^*_{2m}, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) \Big |^r \nonumber \\
& \leq \int \!\! \int \E \Big | f_m (u_{k-m+1}, \ldots, u_{2m}, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) \nonumber \\& \quad \quad - f_m (v_{k-m+1}, \ldots, v_{2m}, \varepsilon_{2m+1}, \ldots, \varepsilon_{k}) \Big |^r \prod_{i=k-m+1}^{2m} d\mu (v_i) \prod_{i=k-m+1}^{2m} d\mu (u_i) \nonumber \\
& \leq \sup_{{\bar x} , {\bar y} \in X} \E | X_{k-2m,{\bar x}} - X_{k-2m,{\bar y}}|^r= \delta^r_{r,\infty}\ (k-2m) \, ,
\end{align*}
implying that
\begin{equation} \label{P1lmaU1normer}
\Vert d^Z_{k,m} \Vert_r \leq 2 \delta_{r,\infty}\ (k-2m) \, .
\end{equation}
Starting from \eqref{dec1UUstarmart}, considering the upper bound \eqref{new1EFmr} and proceeding as in the proof of Lemma \ref{lmaR1normep} by taking into account the upper bounds
\eqref{dec1UUstarmart-1}-\eqref{P1lmaU1normer}, the lemma follows. \qed
\medskip
For the lemmas below, we recall the definitions \eqref{defUj}, \eqref{defRj}, \eqref{notaSm1} and \eqref{defU2*} for $U_2$, $R_2$, $Y_2^{(1)}$ and $U_2^*$.
\begin{Lemma} \label{zolotarev3-step1} Let $r \in ]2,3]$. Assume that $\mu $ has a finite moment of order $r+1$. Let $\alpha_m = \sqrt{\frac{\E_{{\mathbb F}_m} ( (U_2+R_2)^2)}{\E_{{\mathbb F}_m} ( (U_2^{*})^2)}}$. Then for $f(x) \in \{ \cos x, \sin x \}$, we have
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{U_2^{*}}{\sqrt{2m}} \Big ) \Big ] \Big \Vert_{1} \ll |t|^r m^{-1/2} \, .
\]
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{zolotarev3-step1}.} Note that $h = f/2^{3-r} $ is such that $| h''(x) - h''(y) | \leq |x-y|^{r-2}$. Using the arguments developed in the proof of \cite[Lemma 5.2, Item 3]{DMR09} and setting $V = U_2+R_2 - U_2^{*}$ and ${\tilde V} = V + (1 - \alpha_m) U_2^*$, we get
\begin{multline} \label{p1zolotarev3}
2^{r-3} (r-1) \times (2m)^{r/2} \Big | \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{U_2^{*}}{\sqrt{2m}} \Big ) \Big ] \Big | \\ \leq |t|^r \Big \{ \alpha_m^{r-1} \big ( \E_{{\mathbb F}_m} (|{\tilde V}|^r)\big )^{1/r}
\big ( \E (|U_2^*|^r)\big )^{(r-1)/r} \\+ \alpha^{r-2}_m \big ( \E_{{\mathbb F}_m} (|{\tilde V}|^r)\big )^{2/r}
\big ( \E (|U_2^*|^r)\big )^{(r-2)/r} +
\E_{{\mathbb F}_m} (|{\tilde V}|^r) \Big \}
\, .
\end{multline}
Next, note that, by H\"older's inequality,
\begin{align*}
\E \big ( \alpha_m^{r-1} \big ( \E_{{\mathbb F}_m} (|{\tilde V}|^r)\big )^{1/r} \big ) & \leq \E \big ( \alpha_m^{r-1} \big ( \E_{{\mathbb F}_m} (|V|^r)\big )^{1/r} \big ) + \E \big ( \alpha_m^{r-1} \times |1 - \alpha_m| \big ) \Vert U_2^* \Vert_r \\
& \leq \Vert \alpha_m \Vert_r^{r-1} \Vert V \Vert_r + \Vert \alpha_m \Vert_r^{r-1} \Vert 1 - \alpha_m \Vert_r \Vert U_2^* \Vert_r \, .
\end{align*}
Proceeding similarly for the last two terms in \eqref{p1zolotarev3} and taking the expectation, we derive
\begin{align*}
2^{r-3} (r-1) \times (2m)^{r/2} & \Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] \Big \Vert_1 \\ & \leq |t|^r \Vert \alpha_m \Vert_r^{r-1} \Vert V \Vert_r \Vert U_2^* \Vert^{r-1}_r+ |t|^r \Vert \alpha_m \Vert_r^{r-1} \Vert 1 - \alpha_m \Vert_r \Vert U_2^* \Vert^r_r \\ & \quad + 2 |t|^r \Vert \alpha_m \Vert_r^{r-2} \Vert V \Vert^{2}_r \Vert U_2^* \Vert^{r-2}_r+ 2 |t|^r \Vert \alpha_m \Vert^{r-2}_r \Vert 1 - \alpha_m \Vert^2_r \Vert U_2^* \Vert^r_r \\ & \quad + 2^{r-1} |t|^r \Vert V \Vert^r_r + 2^{r-1} |t|^r \Vert 1 - \alpha_m \Vert^r_r \Vert U_2^* \Vert^r_r
\, .
\end{align*}
According to Lemmas \ref{lmaR1normep} and \ref{lmaU1U1starnormep}, since $\mu $ has a moment of order $r+1$, $\Vert V \Vert_r \ll 1$. Moreover, by Lemma \ref{lmamomentp}, $ \Vert U_2^* \Vert_r = \Vert \sum_{k=1}^m X_{k+m,m}\Vert_r \ll \sqrt{m} $. On the other hand,
\begin{multline*}
\Vert U_2^* \Vert_2 \times \Vert 1 - \alpha_m \Vert_r = \Big \Vert \sqrt{\E_{{\mathbb F}_m} ( (U_2+R_2)^2) } - \sqrt{\E_{{\mathbb F}_m} ( (U^*_2)^2) } \Big \Vert_r \\ \leq \Big \Vert \sqrt{\E_{{\mathbb F}_m} ( (U_2+R_2 - U_2^*)^2) } \Big \Vert_r \leq \Vert V \Vert_r \ll 1 \, .
\end{multline*}
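The first inequality in the display above is a consequence of the reverse triangle inequality for the conditional ${\mathbb L}_2$-norm, applied pointwise, namely
\[
\Big | \sqrt{\E_{{\mathbb F}_m} \big ( (U_2+R_2)^2 \big ) } - \sqrt{\E_{{\mathbb F}_m} \big ( (U_2^{*})^2 \big ) } \Big | \leq \sqrt{\E_{{\mathbb F}_m} \big ( V^2 \big ) } \quad \text{with } V = U_2+R_2 - U_2^{*} \, ,
\]
combined with $\big \Vert \sqrt{\E_{{\mathbb F}_m} ( V^2 ) } \big \Vert_r \leq \Vert V \Vert_r$, which follows from the conditional Jensen inequality since $r \geq 2$.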
Since $ \lim_{m \rightarrow \infty} m^{-1} \Vert U_2^* \Vert_2^2 = s^2>0$, it follows that for $m$ large enough
\beq \label{diff1alpha} \Vert 1 - \alpha_m \Vert_r \ll m^{-1/2} \, .
\eeq The lemma follows from all the above considerations. \qed
\begin{Lemma} \label{zolotarev3-step2} Let $r \in ]2,3]$. Assume that $\mu $ has a finite moment of order $q=r+1$. Recall the notation $\alpha_m = \sqrt{\frac{\E_{{\mathbb F}_m} ( (U_2+R_2)^2)}{\E_{{\mathbb F}_m} ( (U^*_2)^2)}}$. Then for $f(x) \in \{ \cos x, \sin x \}$, we have
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{s_m N}{\sqrt{2}} \Big ) \Big ] \Big \Vert_{1} \ll |t|^r m^{-1/2} + |t| m^{-(r-1/2)} \, ,
\]
where $s_m^2 = \E (S_m^2) /m$ and $N $ is a standard Gaussian random variable independent of ${\mathbb F}_m$.
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{zolotarev3-step2}.} Let $W_0^*$ be distributed as $W_0$ and independent of $W_0$. Let $(\varepsilon^*_k)_{k \geq 1}$ be an independent copy of $(\varepsilon_k)_{k \geq 1}$, independent of $(W_0^* ,W_0)$. Define $S^*_m = \sum_{k=m+1}^{2m} X_k^*$ where $X_k^*= \sigma ( \varepsilon_k^*, W^*_{k-1}) - \lambda_{\mu}$ with $W^*_{k}=\varepsilon_k^* W^*_{k-1}$, for $k \geq 1$. Note that $S^*_m$ is independent of
${\mathbb F}_m$ and has the same law as $S_m$. In addition, by stationarity, \cite[Lemma 24]{CDM} (applied with $M_k = +\infty$) and \eqref{estimatedeltabis},
\begin{multline} \label{zolotarev3-step2-p1}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{S_m^{*}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{U^*_2}{\sqrt{2m}} \Big ) \Big ] \Big \Vert_{1} \\
\ll
\frac{|t|}{\sqrt{2m}} \E |\alpha_m | \times \sum_{k=m+1}^{2m} \Vert X_{k,m} -X_k \Vert_1 \ll \frac{|t|}{\sqrt{m}} \times m \delta_{1, \infty} (m) \ll |t| m^{-(r-1/2) } \, .
\end{multline}
On the other hand, let $h = f/2^{3-r} $ and note that $| h''(x) - h''(y) | \leq |x-y|^{r-2}$. Hence, by the definition of the Zolotarev distance of order $r$,
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{S_m^{*}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{s_m N}{\sqrt{2}} \Big ) \Big ] \Big \Vert_{1} \leq 2^{3-r} |t|^r \times \Vert \alpha_m \Vert_r^r \zeta_r \big ( P_{S_m/{\sqrt{ 2 m}}} , G_{s_m^2/2} \big) \, .
\]
Next we apply \cite[Theorem 3.2, Item 3.]{DMR09} and derive that, since $\mu$ has a moment of order $q>3$,
\[
\zeta_r \big ( P_{S_m/{\sqrt{2 m}}} , G_{s_m^2/2} \big) \ll m^{-1/2} \, .
\]
As we mentioned before, the fact that the conditions (3.1), (3.4) and (3.5) required in \cite[Theorem 3.2]{DMR09} hold when $\mu$ has a moment of order $q>3$ has been proved in the proof of \cite[Theorem 2]{CDJ}.
Hence, since $ \Vert \alpha_m \Vert_r \ll 1$ (see \eqref{diff1alpha}),
\begin{equation} \label{zolotarev3-step2-p2}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{S_m^{*}}{\sqrt{2m}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{s_m N}{\sqrt{2}} \Big ) \Big ] \Big \Vert_{1} \ll \frac{| t|^r }{\sqrt{m}} \, .
\end{equation}
Considering the upper bounds \eqref{zolotarev3-step2-p1} and \eqref{zolotarev3-step2-p2}, the lemma follows. \qed
\begin{Lemma} \label{zolotarev3-step3} Let $r \in ]2,3]$. Assume that $\mu $ has a finite moment of order $q=r+1$. Recall the notations $\alpha_m = \sqrt{\frac{\E_{{\mathbb F}_m} ( (U_2+R_2)^2)}{\E_{{\mathbb F}_m} ( (U^*_2)^2)}}$ and $s_m^2 = \E (S_m^2) /m$. Then, for $f(x) \in \{ \cos x, \sin x \}$,
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{s_m N}{\sqrt{2}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{s N}{\sqrt{2}} \Big ) \Big ] \Big \Vert_{1} \ll \frac{ |t| }{ m^{1/2 + \eta } } \, ,
\]
where $\eta = \min ( \frac{3}{10} , \frac{r-2}{2} , \frac{r-2}{2r-3} )$ and $N $ is a standard Gaussian random variable independent of ${\mathbb F}_m$.
\end{Lemma}
\noindent {\bf Proof of Lemma \ref{zolotarev3-step3}.} We have
\begin{multline} \label{zolotarev3-step3P1}
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \alpha_m \frac{s_m N}{\sqrt{2}} \Big ) \Big ] - \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{s N}{\sqrt{2}} \Big ) \Big ] \Big \Vert_{1} \\
\leq |t | \E |N| \big ( \Vert \alpha_m \Vert_1 | s - s_m | + s \times\Vert 1 - \alpha_m \Vert_1 \big ) \, .
\end{multline}
But, since $ \lim_{m \rightarrow \infty} m^{-1} \Vert U_2^* \Vert_2^2 = s^2>0$,
\[
\Vert 1 - \alpha_m \Vert_1 \leq \Vert 1 - \alpha^2_m \Vert_1 \sim \frac{1}{s^2 m} \big \Vert \E_{{\mathbb F}_m} ( (U_2+R_2)^2) - \E_{{\mathbb F}_m} ( (U_2^*)^2)\big \Vert_1 \, .
\]
On the other hand,
\[
\big \Vert \E_{{\mathbb F}_m} ( (U_2+R_2)^2) - \E_{{\mathbb F}_m} ( (U_2^*)^2)\big \Vert_1 \leq \big \Vert \E_{{\mathbb F}_m} (U_2^2) - \E ( (U_2^*)^2) \big \Vert_1 + \Vert R_2 \Vert_2^2 + 2 \Vert \E_{{\mathbb F}_m} ( U_2R_2) \Vert_1 \, .
\]
But, by stationarity,
\begin{multline*}
\big \Vert \E_{{\mathbb F}_m} (U_2^2) - \E ( (U_2^*)^2) \big \Vert_1 \leq \Big \Vert \E_m \Big ( \sum_{k=m+1}^{2m} { \bar X}_{k,m} \Big )^2 - \E \Big ( \sum_{k=m+1}^{2m} { \bar X}_{k,m} \Big )^2 \Big \Vert_{1} \\+ \Big ( \sum_{k=2m+1}^{3m} \Vert \E_{{\mathbb F}_m} (X_{k,m} ) \Vert_2 \Big )^2 \, ,
\end{multline*}
where ${ \bar X}_{k,m} = X_{k,m} - \E_m (X_{k,m} )$ and $\E_m (\cdot) = \E(\cdot | \sigma (W_0, \varepsilon_1, \ldots, \varepsilon_m))$.
Hence, by \eqref{new1EFmr} and Lemma \ref{lma4.7}, since $q=r+1 $ and $r >2$,
\[
\big \Vert \E_{{\mathbb F}_m} (U_2^2) - \E \big ( (U_2^*)^2 \big ) \big \Vert_1 \ll m^{1/5}\, .
\]
By stationarity and Lemma \ref{lmaR1normep}, we also have $\Vert R_2 \Vert_2= \Vert R_1 \Vert_2 \ll 1$. Therefore
\begin{equation*} \label{zolotarev3-step3P1bis}
\big \Vert \E_{{\mathbb F}_m} ( (U_2+R_2)^2) - \E_{{\mathbb F}_m} ( (U_2^*)^2)\big \Vert_1 \ll m^{1/5} + \Vert \E_{{\mathbb F}_m} ( U_2R_2) \Vert_1 \, .
\end{equation*}
Next, note that
\[
\Vert \E_{{\mathbb F}_m} ( U_2R_2) \Vert_1 = \Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k=2m+1}^{3m} X_{k,m}\Big ) \Big \Vert_1 \, .
\]
Let $h(m) $ be a positive integer less than $m$. Using stationarity, Lemma \ref{lmaR1normep} and arguments similar to those developed in the proof of Lemma \ref{lmamomentp}, we first notice that
\[
\Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k=3m-h(m) +1}^{3m} X_{k,m}\Big ) \Big \Vert_1 \leq \Vert R_2 \Vert_2 \Big \Vert \sum_{k=3m-h(m) +1}^{3m} X_{k,m} \Big \Vert_2 \ll \sqrt{h(m) } \, .
\]
We now handle the term $ \Vert \E_{{\mathbb F}_m} \big ( R_2 \sum_{k= 2m+1}^{3m-h(m) }X_{k,m}\big ) \Vert_1$.
For $2m+1 \leq k \leq 3m$, define $X_{k,m}^* $ as in \eqref{defU2*}. Using \eqref{new2EFmr} and \eqref{estimatedeltabis}, note that
\[
\sum_{k= 2m +1}^{3m - h(m) } \Vert X_{k,m} -X_{k,m}^* \Vert_{2} \leq \sum_{k=2m+1}^{3m} \delta_{2, \infty} (k-2m) \ll \sum_{k=1}^{m} k^{-(q/2-1)} \, . \]
Hence
\[
\sum_{k= 2m +1}^{3m - h(m) } \Vert X_{k,m} -X_{k,m}^* \Vert_{2} \ll m^{(3-r) /2} {\bf 1}_{r < 3} + {\bf 1}_{r = 3} \log (m) \, .
\]
This estimate combined with $\Vert R_2 \Vert_2 \ll 1$ entails
\[
\Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k= 2m +1}^{3m - h(m) } X_{k,m}\Big ) \Big \Vert_1 \ll m^{(3-r) /2} {\bf 1}_{r < 3} + {\bf 1}_{r = 3} \log (m) + \Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k= 2m +1}^{3m - h(m) } X^*_{k,m}\Big ) \Big \Vert_1 .
\]
Since $(X_{k,m}^* )_{2m+1 \leq k \leq 3m}$ is independent of ${\mathbb F}_m$, we have $\E ( X_{k,m}^* | {\mathbb F_m} )=0$ for any $2m+1 \leq k \leq 3m$. Hence
\[ \Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k= 2m +1}^{3m - h(m) } X^*_{k,m}\Big ) \Big \Vert_1 = \Big \Vert \E_{{\mathbb F}_m} \Big ( \sum_{k=2m+1}^{3m-h(m)} X^*_{k,m} \sum_{\ell=3m + 1}^{4m} X_{\ell,m} \Big ) \Big \Vert_1 \, .
\]
Next, note that if $\ell-m+1 \geq k+1$, then, conditionally on ${\mathbb F}_m$, $X^*_{k,m} $ is independent of $ X_{\ell,m} $, which implies that $ \E_{{\mathbb F}_m}( X^*_{k,m} X_{\ell,m} ) =0$. Hence
\[
\Big \Vert \E_{{\mathbb F}_m} \Big ( \sum_{k=2m+1}^{3m-h(m) } X^*_{k,m} \sum_{\ell=3m+1}^{4m} X_{\ell,m} \Big ) \Big \Vert_1 = \Big \Vert \E_{{\mathbb F}_m} \Big ( \sum_{k=2m+1}^{3m-h(m)} X^*_{k,m} \sum_{\ell=3m+1}^{4m-h(m) -1} X_{\ell,m} \Big ) \Big \Vert_1 \, .
\]
Now, for any $3m+1 \leq \ell \leq 4m - h(m) -1$, let
\[
X_{\ell,m}^{ (h(m) ,*)} = f_m ( \varepsilon^*_{\ell-m+1}, \ldots, \varepsilon^*_{3m-h(m)}, \varepsilon_{3m-h(m) +1}, \ldots, \varepsilon_\ell ) \, ,
\]
and note that $ \E_{{\mathbb F}_m} ( X^*_{k,m} X_{\ell,m}^{ (h(m),*)} ) =0$ for any $k \leq 3m-h(m)$ and any $\ell \geq 3m+1$. So, overall, setting $q' = q/(q-1)$,
\begin{multline*}
\Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k=2m+1}^{3m-h(m)} X^*_{k,m}\Big ) \Big \Vert_1 = \Big \Vert \E_{{\mathbb F}_m} \Big ( \sum_{k=2m+1}^{3m-h(m) } X^*_{k,m} \sum_{\ell=3m+1}^{4m - h(m) -1} ( X_{\ell,m} - X_{\ell,m}^{ (h(m) ,*)} ) \Big ) \Big \Vert_1 \\
\leq \Big \Vert \sum_{k=2m +1}^{3m-h(m) } X^*_{k,m} \Big \Vert_q \sum_{\ell=3m+1}^{4m - h(m) -1} \Vert X_{\ell,m} - X_{\ell,m}^{ (h(m),*)} \Vert_{q'} \, .
\end{multline*}
Proceeding as in the proof of \cite[Lemma 24]{CDM}, we infer that the following inequality holds: $\Vert X_{\ell,m} - X_{\ell,m}^{ (h(m),*)} \Vert_{q'} \leq \delta_{q', \infty} (\ell - 3m + h(m))$.
Hence, taking into account \eqref{estimatedeltabis} and Lemma \ref{lmamomentp}, we get
\[
\Big \Vert \E_{{\mathbb F}_m} \Big ( R_2 \sum_{k=2m+1}^{3m-h(m) } X^*_{k,m}\Big ) \Big \Vert_1 \ll \sqrt{m} \sum_{\ell \geq h(m)}\frac{1}{\ell^{q-2}} \ll \sqrt{m} (h(m))^{2-r} \, .
\]
Taking into account all the above considerations and selecting $h(m) = m^{1/(2r-3)}$, we derive
\begin{equation} \label{zolotarev3-step3P2}
m \Vert 1 - \alpha_m \Vert_1 \ll m^{(3-r)/2} {\bf 1}_{r < 3} + m^{1/ (4r-6)} + m^{1/5} \, .
\end{equation}
On the other hand, since $s^2 >0$, $ | s-s_m | \leq s ^{-1} | s^2-s^2_m |$. Hence by using Remark \ref{remonvar}, the definition of $s^2_m$ and stationarity, we derive that
\[
| s-s_m | \leq \frac{2 }{s m } \sum_{k\geq 1} k |{\rm Cov} (X_0,X_k)| \, .
\]
By the definition of $\delta_{1, \infty}(k)$, $ |{\rm Cov} (X_0,X_k)| \leq \Vert X_0 \Vert_1 \delta_{1, \infty} (k) $. So, by using \eqref{estimatedelta} and the fact that $q \geq 2$, we get
\begin{equation} \label{zolotarev3-step3P3}
| s-s_m | \ll m^{-1} \, .
\end{equation}
Starting from \eqref{zolotarev3-step3P1} and taking into account \eqref{zolotarev3-step3P2} and \eqref{zolotarev3-step3P3}, the lemma follows. \qed
\medskip
Combining Lemmas \ref{zolotarev3-step1}, \ref{zolotarev3-step2} and \ref{zolotarev3-step3}, we derive
\begin{Lemma} \label{lma4.9q=4} Let $r \in ]2,3]$. Assume that $\mu $ has a finite moment of order $q=r+1$. Then, for $f(x) \in \{ \cos x, \sin x \}$,
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{Y_2^{(1)}}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \ll |t|^r m^{-1/2} + |t| m^{-(1/2 + \eta )} \,,
\]
where $\eta = \min ( \frac{3}{10} , \frac{r-2}{2} , \frac{r-2}{2r-3} )$.
\end{Lemma}
Let $R_1$ be defined by \eqref{defRj}. Proceeding similarly to the proof of the previous lemma, we get
\begin{Lemma} \label{lma4.9q=4bis}Let $r \in ]2,3]$. Assume that $\mu $ has a finite moment of order $q=r+1$. Then for $f(x) \in \{ \cos x, \sin x \}$,
\[
\Big \Vert \E_{{\mathbb F}_m} \Big [ f \Big ( t \frac{ \sum_{k=1}^m X_{k} +R_1}{\sqrt{2m}} \Big ) \Big ] - \E \big [ f ( t s N /{\sqrt 2} ) \big ] \Big \Vert_{1} \ll |t|^r m^{-1/2} + |t| m^{-(1/2 + \eta )} \, ,
\]
where $\eta = \min ( \frac{3}{10} , \frac{r-2}{2} , \frac{r-2}{2r-3} )$.
\end{Lemma}
\section*{Acknowledgements}
This research was partially supported by the NSF grant DMS-2054598. The authors would like to thank two anonymous referees for their valuable suggestions, which improved the presentation of the paper.
Large momentum transfer processes have a long history of providing information
on the substructure of hadrons, the nature of their constituents, and how they
interact\cite{JO:review}. Photons provide an excellent probe for such purposes
due to their pointlike electromagnetic coupling to the quark constituents of
the colliding hadrons. The production of large momentum transfer photons
has played dual roles of providing information on parton distribution
functions (PDFs) \cite{ABFOW} and testing the adequacy of the perturbative
techniques used to calculate the hard scattering subprocesses
\cite{Huston:gamma, Aurenche:old_study, Aurenche:new_study}.
The single photon cross section, either inclusive or subject to photon
isolation cuts, provides the basic observable for direct photon studies.
The calculation of this cross section involves integrations over the phase
space variables of the accompanying partons, thereby limiting the information
which can be obtained about the underlying subprocesses. More information
can be obtained if one can measure an associated jet as has been done
recently by the D\O\ collaboration \cite{Dzero:photonjet}. Even here,
however, one is summing over many subprocesses with various flavors of
partons. Additional information can be obtained if the flavor of the
produced jets is tagged. Identifying jets which contain a heavy quark
provides exactly this opportunity.
In this paper we investigate in detail one particular piece of the direct
photon calculation, namely the associated production of direct photons and
heavy quarks, where the heavy quarks are either charm or bottom. Some of the
contributing subprocesses are dependent on the charge of the heavy quark
while others are not. By considering both charm and bottom quarks one can
examine the relative roles of the two contributions. In some kinematic
regions the results are dependent on the heavy quark PDFs, opening the
possibility of testing the current calculation of such PDFs.
New measurements of this process by the D\O\ and CDF groups are in progress
and should be available in the near future. A comparison with these
measurements will be presented in a forthcoming paper.
The production of heavy quarks at high-$p_T$ has the potential to generate
logarithms of the form $\ln(p_T/m_Q)$ as a result of collinear configurations
involving $Q\rightarrow Qg$ and $g\rightarrow Q \bar Q$. These logarithms can
be resummed via the DGLAP equations for appropriately defined PDFs and
fragmentation functions (FFs). This is commonly referred to as a variable
flavor scheme with either four or five flavors. In such schemes the heavy
quarks are treated as massless. The dominant remaining mass effect is due to
the imposition of a threshold constraint such that the PDFs and FFs are taken
to be zero when the hard scattering scales are smaller than the quark mass.
The calculation presented here has been performed using the variable flavor
scheme.
There have been previous studies of this process
\cite{Bailey:charm, Berger:analyt,Vogel:mass,Berger:spin}. In Refs.
\cite{Bailey:charm, Berger:analyt} results are shown for the production of a
direct photon plus charm. Here we also provide results for direct photon plus
bottom production, and a comparison between the charm and bottom case.
As noted above, this comparison depends on the relative roles of terms which
depend on the heavy quark charge and those which do not. We also extend the
calculation by treating the photon fragmentation contribution to
next-to-leading-order (NLO). The previously cited references treated this
component only in leading order (LO). One further technical point is related
to the treatment of final state collinear singularities in the case when a
gluon is emitted collinearly to a final state heavy quark.
In Ref.\cite{Bailey:charm} this singularity is factored into a charm FF. The
present calculation is for the case of a photon produced in association with a
jet which has been tagged as containing a heavy quark, so the fragmentation
function is replaced by an appropriate jet definition.
In order to be able to work in the massless approximation the heavy quarks and
photons produced need to carry a transverse momentum, $p_T$, which is a few
times larger than the mass of the heavy quark $m_Q$, i.e.
$p_T \geq 10 {\rm\ GeV}$. Since the lower bounds for the values of the
transverse momenta for direct photons and heavy quarks measurable at both
the D\O\ and CDF collaborations at Fermilab are at or above
$p_T = 10 {\rm\ GeV}$, a comparison with a massless calculation is
appropriate. If however we are close to the threshold region for production
of heavy quarks, {\itshape i.e.} $m_Q\sim p_T$, their mass needs to be
retained, and they are assumed to be only produced externally in pairs,
as end products of the hard-scattering. This is called the Fixed Flavor
Number scheme (FFN), as the number of flavors that compose the nucleon
remains fixed and does not depend on the center of mass energy of the hard
scattering. Here the proton is assumed to be composed only of light flavors,
and in lowest order there are only two subprocesses in which the direct
photon and heavy quarks can be produced. These are
$gg\rightarrow \gamma Q\bar Q$, and $q\bar q\rightarrow \gamma Q\bar Q$. A
study of the case when the quark masses are retained has been done at LO
\cite{Vogel:mass}, where a comparison between the LO massive and massless
approaches has been shown to give very similar results. In Ref.
\cite{Vogel:mass} a differential cross section in the transverse momentum of
the photon up to values of $p_{T\gamma} \sim 50 {\rm\ GeV}$ is presented.
There the difference between the two approaches in the LO is minimal.
This paper proceeds as follows: in Section II a description of the theory and
techniques for the calculation are outlined. In Section III results for the
differential cross sections are shown. Predictions for both the Tevatron and
LHC are presented and compared. The effects of including the NLO
fragmentation terms are shown, as well as the effect of the use of different
charm PDFs on the cross section. In Section IV we summarize our findings.
\section{Theory}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0,angle=0]{Compton}
\end{center}
\caption{\label{Compton} \small{Compton Scattering}}
\end{figure}
Denoting the electromagnetic and strong couplings by $\alpha \mbox{\rm \ and }
\alpha_s$, respectively, the lowest order subprocess for the production of a
photon plus a heavy quark is the Compton subprocess,
$g+Q\rightarrow \gamma +Q$, shown in Fig.\ref{Compton}. This subprocess is of order
$\alpha \alpha_s$ and in the variable flavor scheme employed here there is
only this one subprocess to this order. When one considers higher order
subprocesses such as $qQ \rightarrow qQ\gamma $, for example, there will be
a region of phase space where the photon may be emitted collinear with
the final state $q$, giving rise to a collinear singularity. This singular
contribution may be absorbed into a photon fragmentation function
$D_{\gamma/q}$. The photon fragmentation functions satisfy a set of
inhomogeneous DGLAP equations, the solutions of which are of order
$\alpha/\alpha_s$. More specifically, one has
\begin{eqnarray*}
\frac{d D_{\gamma/q}(z,t)}{dt} &=& \frac{\alpha}{2\pi}P_{\gamma/q}(z) +
\frac{\alpha_s}{2 \pi}[D_{\gamma/q}\otimes P_{qq} + D_{\gamma/g}\otimes P_{gq}] \\
\frac{d D_{\gamma/g}(z,t)}{dt} &=& \frac{\alpha_s}{2 \pi}[D_{\gamma/q}\otimes
P_{qg} + D_{\gamma/g}\otimes P_{gg}]
\end{eqnarray*}
where $t=\ln(Q^2/\Lambda_{QCD}^2)$ and $\otimes$ denotes a convolution.
Writing $\alpha_s(t)=1/bt$, it is easy to see that the solutions for both
$D_{\gamma/q} \mbox{\rm \ and } D_{\gamma/g}$ are proportional to both $t
\mbox{\rm \ and } \alpha$, so that the fragmentation functions may be thought
of as being $\mathcal{O}(\alpha/\alpha_s)$. Therefore, another class of
contributions of order $\alpha \alpha_s$ consists of $2\rightarrow 2$
QCD subprocesses with at least one heavy quark in the final state convoluted
with the appropriate photon FF. An example is shown in Fig.\ref{Fragm_LO}.
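
The order counting can be made explicit with a schematic leading-logarithm
estimate: keeping only the inhomogeneous term in the quark equation and
neglecting the homogeneous (resummation) terms, one finds
\[
D_{\gamma/q}(z,t) \approx \frac{\alpha}{2\pi}\, P_{\gamma/q}(z)\, t
= \frac{\alpha}{2\pi b}\, \frac{P_{\gamma/q}(z)}{\alpha_s(t)} \,,
\]
so the fragmentation functions indeed scale as $\mathcal{O}(\alpha/\alpha_s)$;
the neglected terms modify the $z$ dependence but not this counting.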
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0,angle=0]{Fragm_LO}
\end{center}
\caption{\label{Fragm_LO} \small{An example of Leading Order Fragmentation
Contributions 1) $gg \rightarrow Q\bar Q \gamma$, where the photon can
fragment off from either one of the final state heavy quarks,
2) $gQ\rightarrow gQ \gamma$, where again the photon can fragment off from
either one of the final state partons, the gluon or the heavy quark}}
\end{figure}
At the next order in perturbation theory, $\alpha \alpha_s^2$, the phase space
for producing a photon in association with a heavy quark increases and now
there are seven possible subprocesses, which are listed in Table 1. As in the
LO case, there are fragmentation contributions that need to be taken into
account in order to have a complete NLO calculation. Thus all
$2\rightarrow 3$ QCD subprocesses of order $\alpha_s^3$, once convoluted
with $D_{\gamma/q,g}(z,Q)$, give a contribution of NLO order:
$\mathcal{O}(\alpha_s^3)\otimes D_{\gamma/q,g}\sim\alpha_s^3\,\alpha/\alpha_s=
\alpha\alpha_s^2$, as illustrated in Fig.\ref{Fragm_NLO}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|} \hline
\em NLO subprocesses \\\hline
$gg\rightarrow \gamma Q\bar Q$\\
$gQ\rightarrow \gamma gQ$\\
$Qq\rightarrow \gamma qQ$ \\
$Q\bar q\rightarrow \gamma qQ$ \\
$q\bar q\rightarrow \gamma Q\bar Q$\\
$Q\bar Q\rightarrow \gamma Q\bar Q$\\
$QQ\rightarrow \gamma QQ$ \\\hline
\end{tabular}
\caption{\small{a list of all $2\rightarrow 3$ NLO hard-scattering
subprocesses}}
\end{center}
\end{table}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0,angle=0]{Fragm_NLO}
\end{center}
\caption{\label{Fragm_NLO} \small {An example of Next-to-Leading Order
Fragmentation Contributions 1) $gg\rightarrow Q\bar Q g \gamma$, where
again the photon is produced by fragmenting from either of the final state
partons produced in the hard scattering, 2) another example of NLO
fragmentation $gQ\rightarrow ggQ \gamma$}}
\end{figure}
As soon as we go beyond LO, ultraviolet (UV), soft, and collinear divergences
appear. The UV singularities arise from virtual diagrams when the momenta of
the virtual particles go to infinity. To take care of these the theory is
renormalized, and the singularities are absorbed into the now renormalized
strong coupling $\alpha_s$. The soft singularities appear in the case where
the energy of a massless particle like the gluon goes to zero, and the
collinear singularities arise when two massless particles are emitted
collinearly. Since generally the energies that are considered are much larger
than those of the quark masses, the quarks are treated as massless, and the
calculation is done in the massless approximation. To take care of the
divergences the calculation is regularized. The regularization scheme that is
used here is Dimensional Regularization (DR). In DR the scattering amplitudes
are computed in $d=4-\epsilon$ dimensions, and the singularities are exposed
as poles in $1/\epsilon$. These poles either cancel once the virtual, soft, and
collinear contributions are added, or are absorbed into the PDFs
and FFs with the use of the DGLAP evolution
equations. Once this is done it is safe to return to $4$ dimensions.
To perform the NLO calculation the two cutoff phase space slicing method
\cite{Owens:PS}
is used. In it the phase space is divided into $2\rightarrow2$
and $2\rightarrow3$ body contributions. The $2\rightarrow3$ body phase space
is further divided into a hard region, where no singularities are present, a
collinear region, where the collinear singularities are present and a soft
region, where the soft singularities occur. The separation between the
different regions is done with the help of two parameters: the soft
cutoff $\delta_s$ and the collinear cutoff $\delta_c$. In the phase
space slicing method a gluon is considered to be soft if
$E_g<\delta_s\sqrt{\hat s}/2$, where $\sqrt{\hat s}$ is the hard scattering
center of mass energy. In order to simplify the integration in this region
the double pole or soft gluon approximation is used and the 4-momentum of the
gluon is set to zero, when it appears in the numerator. The collinear region
is taken to be where either $s_{ij}$ or $|t_{ij}|<\delta_c\hat s$, where
$s_{ij}, t_{ij}$ are the Mandelstam variables. In the collinear region the
leading pole approximation is used, so the relative transverse momentum of the
two collinear particles in the numerators of the expansion is neglected. The
integration over phase space is done using Monte Carlo techniques. This is
very useful, as the cross section can be calculated differentially in any
variable of interest, such as the transverse momentum $p_T$ or the
rapidity $y$ of a given particle, without having to
calculate different Jacobians. There will be a dependence on the cutoff
parameters in both the $2\rightarrow2$ contributions (which include the
collinear and soft regions) and the $2\rightarrow3$ contributions, but this
dependence will disappear once the two contributions are added together.
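
Schematically, the resulting classification of a $2\rightarrow3$ phase space
point in this method is
\[
\mbox{region} = \left\{
\begin{array}{ll}
\mbox{soft}, & E_g < \delta_s \sqrt{\hat s}/2 \,,\\
\mbox{collinear}, & E_g \geq \delta_s \sqrt{\hat s}/2 \ \mbox{and}\ s_{ij} \ \mbox{or}\ |t_{ij}| < \delta_c \hat s \,,\\
\mbox{hard}, & \mbox{otherwise,}
\end{array}
\right.
\]
with the soft and collinear regions integrated analytically in their
respective approximations and only the hard region integrated fully
numerically.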
One final point needs to be addressed concerning the subprocess $q \overline q
\rightarrow \gamma Q \overline Q$. There is a collinear singularity
associated with the region where the final state $Q \mbox{\rm \ and }\overline
Q$ are collinear. Physically, this corresponds to a $\gamma g$ final state
with the gluon splitting into the $Q\overline Q$ pair. Normally, this singular
region would be integrated over yielding a two-body contribution dependent on
$\delta_c$ which would be proportional to the subprocess $q \overline q
\rightarrow \gamma g$. This would be added to the one-loop corrections
for the $q \overline q \rightarrow \gamma g$ subprocess, the poles in
$\epsilon$ would cancel and there would be a residual $\delta_c$ contribution
to the $q \overline q \rightarrow \gamma g$ subprocess. This would cancel
against a similar contribution from $q \overline q \rightarrow \gamma Q
\overline Q$ once a suitable jet definition was implemented in the
calculation. However, once one tags the jet as containing a heavy quark, the
problem arises in that there is no contribution from the subprocess
$q \overline q \rightarrow \gamma g$. Hence, there is an uncanceled
$\delta_c$ dependence in the $2 \rightarrow 3$ contribution. This problem
is addressed by realizing that physically the final state gluon can not
fragment into a $Q \overline Q$ pair unless its invariant mass exceeds
$4m_Q^2$. Imposing this constraint on the events generated for $q \overline q
\rightarrow \gamma Q \overline Q$ avoids the problem of the uncanceled
$\delta_c$ dependence.
\section{Results}
\subsection{Tevatron Predictions}
For the numerical results shown below the CTEQ6.6M PDFs \cite{Cteq:66M} were
used, unless otherwise stated, with a 2-loop $\alpha_s$ corresponding to
$\alpha_s(M_Z)=0.118$. The cross section was calculated for a center of mass
energy of $\sqrt{S}=1.96{\rm\ TeV}$ corresponding to the measurements
being made at the Tevatron. The cuts applied reproduce the ones used by the
D\O\ experiment, where the lower bounds for the transverse momenta of the
photon and heavy quark are as follows: $p_{T \gamma}>30 {\rm\ GeV},
p_{TQ}>15 {\rm\ GeV}$, and their rapidities are limited to the central region
of the detector $|y_{\gamma}|<1,|y_b|<0.8$. If two final state partons lie
within a cone of radius $\Delta R=0.5$, where
$\Delta R=\sqrt{(\Delta \eta)^2 +(\Delta \phi)^2}$, then they are merged into a single
jet. If a final state has two heavy quark jets within the detectable region,
it is counted only once, taking into account the transverse momentum of the
more energetic jet. To be experimentally detectable, a photon needs to be
isolated. This means that it should not be surrounded by hadronic energy of
more than $E_h=\epsilon E_{\gamma}$ in a cone of radius $R=R_{iso}$ around it.
The photon isolation requirements imposed model those used in the D\O\
detection of a direct photon and are: $R_1<0.2$, $\epsilon_1<0.04$ and
$R_2<0.4$, $\epsilon_2<0.07$.
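
Denoting by $E_{\rm had}(R)$ the hadronic energy inside a cone of radius $R$
around the photon (a notation introduced here for convenience), this
double-cone criterion amounts to requiring
\[
E_{\rm had}(R_1) < \epsilon_1 E_{\gamma} \quad \mbox{and} \quad
E_{\rm had}(R_2) < \epsilon_2 E_{\gamma} \,,
\]
with the cone radii and energy fractions quoted above; a photon is accepted
only if both conditions hold.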
The differential cross section for the process $p\bar p\rightarrow \gamma b X$
as a function of the transverse momentum of the photon is shown in
Fig.\ref{NLO_b}. It is interesting to note in Fig.\ref{NLO_b} that as
$p_{T\gamma}$ grows, the difference between the LO and NLO curves increases
substantially.
To understand the origin of this effect it is necessary to look at how
the different subprocesses listed in Table 1 contribute to the cross section.
This decomposition is shown in Fig.\ref{parts}.
\begin{figure}
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_nloff_fy_paper}
\caption{\label{NLO_b} \small{The differential cross section,
$d\sigma /dp_{T\gamma}$ for the production of a direct photon and a bottom
quark as a function of $p_{T\gamma}$ for $\sqrt{S}=1.96{\rm\ TeV}$, at NLO -
solid line, and at LO - dashed line }}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_parts_fy_paper}
\caption{\label{parts} \small{contributions of the different subprocesses to
the differential cross section, NLO - solid line, annihilation
$q\bar q\rightarrow Q\bar Q\gamma$ - dashed line,
$qQ\rightarrow qQ \gamma$ - dotted line, $gQ\rightarrow gQ\gamma $ -
dot dashed line, $gg\rightarrow Q\bar Q\gamma + $LO - dash dot dotted line,
$Q\bar Q\rightarrow \gamma Q\bar Q$, and $QQ\rightarrow \gamma QQ$ - dot dash
dotted line}}
\end{center}
\end{minipage}
\end{figure}
It is apparent from Fig.\ref{parts} that the effect shown in Fig.\ref{NLO_b} is
driven by the annihilation subprocess, $q\bar q\rightarrow \gamma Q\bar Q$,
which overtakes the Compton contribution and starts dominating the cross
section at
$p_{T\gamma}\sim 70 {\rm\ GeV}$. The $gQ\rightarrow \gamma gQ$ and
$Qq\rightarrow \gamma qQ$ /$Q\bar q\rightarrow \gamma qQ$ subprocesses
contribute to the cross section about equally, with $gQ\rightarrow \gamma gQ$
prevailing over $Qq\rightarrow \gamma qQ$ /$Q\bar q\rightarrow \gamma qQ$ at
small $p_{T\gamma}$, where the gluon PDF is larger than the light quark PDF,
and then at large $p_{T\gamma}$,
$Qq\rightarrow \gamma qQ$ /$Q\bar q\rightarrow \gamma qQ$ takes over when
the light quark PDFs become larger than the gluon PDFs. The Compton
subprocess and $gg\rightarrow \gamma Q\bar Q$ are added together, since the
$gg\rightarrow \gamma Q\bar Q$ contribution is negative. This negative
contribution is what remains after the appropriate collinear terms are
subtracted. The size of the $2\rightarrow3 \ gg $ contribution is scale
dependent, with the compensating collinear terms contributing to the
$2\rightarrow2$ component. The role of the
$Q\bar Q\rightarrow \gamma Q\bar Q$ / $QQ\rightarrow \gamma QQ$ subprocesses
is almost negligible, as the heavy quark PDFs are much smaller than the light
quark and gluon PDFs.
Since the annihilation subprocess dominates
the cross section at large $p_{T\gamma}$, it is useful to look at some of
the Feynman diagrams contributing to it, as shown in Fig.\ref{qqb}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0,angle=0]{qqbar}
\end{center}
\caption{\label{qqb} \small{Some typical Feynman diagrams for the
annihilation subprocess
$q\bar q\rightarrow \gamma Q\bar Q$ where 1) the photon is emitted from the
final
state heavy quarks and 2) the photon is emitted from the initial state light
quarks}}
\end{figure}
There are two channels through which the annihilation subprocess can be
produced, an s-channel shown in diagram 1) and a t-channel in diagram 2) of
Fig.\ref{qqb}. Since the photon couples to the final state heavy quarks in
diagram
1), this diagram is proportional to the heavy quark charge,
$e_Q$, whereas
in diagram 2) the photon couples to the initial state light quarks, and thus
this diagram is proportional to the light quark's charge,
$e_q$. Diagram 2) begins to
dominate as $p_{T\gamma}$ grows and, since it does not depend on the heavy quark
charge, we expect the difference between the bottom and charm cross sections
to decrease as $p_{T\gamma}$ increases. This indeed is the case as can be seen
from Fig.\ref{b_c}, where the NLO differential cross section for the charm
quark (solid line) and the one for the bottom quark (dashed line) tend to come
closer as the value of the transverse momentum increases. However, the
difference
between the LO cross sections stays about the same as can be seen from
the dot-dashed (charm) and dotted curves (bottom) in Fig.\ref{b_c}, and also
from Fig.\ref{b_c-ratio}, where the ratio of the NLO and LO charm and bottom
differential cross sections is shown. The ratio of the LO cross sections
stays almost constant since the main contribution to the LO cross section
comes from the Compton subprocess, with the difference between the charm and
bottom curves arising from the difference in the charges of the charm and
bottom quarks and the relative sizes of the heavy quark PDFs. The ratio of
the two LO cross sections depends on the ratio of
the charges squared, which is $e_c^2/e_b^2=4$, and is driven up from that value
to about $\sim 7$ because the charm PDF is larger than the bottom PDF.
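
Explicitly, with $e_c=2/3$ and $e_b=-1/3$ in units of the proton charge,
\[
\frac{e_c^2}{e_b^2} = \frac{(2/3)^2}{(1/3)^2} = 4 \,.
\]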
The fact that the annihilation subprocess dominates the cross section at large
$p_{T\gamma}$ also increases the scale dependence of the cross section in that
region. There is no Born term which involves a $q \bar q$ initial state
and, therefore, the contributions from the annihilation subprocess
start at $\mathcal{O}(\alpha \alpha_s^2)$. As such, the typical compensation
between LO and NLO contributions for this subprocess is missing and the
annihilation subprocess can be thought of as effectively leading order.
\begin{figure}
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_c_nloff_fy_paper}
\caption{\label{b_c} \small {a comparison between the differential cross
sections, $d\sigma /dp_{T\gamma}$ for the production of a direct photon and a
bottom quark, and that of a direct photon plus a charm quark at NLO and LO,
charm at NLO - solid line and for bottom at NLO- dashed line, charm at LO -
dot dashed line, bottom at LO - dotted line}}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_c_nloff_lo_ratio_fy_paper}
\caption{\label{b_c-ratio} \small{the ratio of the charm and bottom
differential cross sections versus $p_{T\gamma}$, at NLO - solid line and at
LO - dashed line}}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_nloff_scales_fy_paper}
\end{center}
\caption{\label{scale} \small{Scale dependence of the differential cross
section, $d\sigma /dp_{T\gamma}$ for the production of a direct photon and a
bottom quark, where the three different scales have been set to be equal
$\mu=\mu_r=\mu_f=\mu_F$, $\mu=p_{T\gamma}$ - solid line, $\mu=p_{T\gamma}/2$ -
dashed line, $\mu=2p_{T\gamma}$ - dotted line }}
\end{figure}
As can be seen from Fig.\ref{scale} the scale dependence increases at large
$p_{T\gamma}$, where the annihilation starts to dominate. The renormalization,
$\mu_r$, factorization, $\mu_f$ and fragmentation, $\mu_F$ scales have been
set to be equal and are denoted by $\mu$.
\begin{figure}
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_nloff_LHC_fy_paper}
\caption{\label{LHC} \small{the differential cross section versus the
transverse momentum of the photon $d\sigma /dp_{T\gamma}$ for the production of
a direct photon and a bottom quark at LHC center of mass energies,
$\sqrt{S}=14{\rm\ TeV}$, NLO - solid line, LO - dashed line}}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{k_factor_b_LHC_fy_paper}
\caption{\label{Kfac} K factor, or the ratio of the NLO to the LO differential
cross section for $pp\rightarrow b \gamma X$ at $\sqrt{S}=14{\rm\ TeV}$}
\end{center}
\end{minipage}
\end{figure}
\subsection{LHC Predictions}
When data become available from the Large Hadron Collider (LHC), it will be
very important to have a good understanding of what Standard Model (SM)
processes are going to look like at center of mass energies of
$\sqrt{S}=14{\rm\ TeV}$, as these processes will provide important
means of calibrating and understanding the detectors and, ultimately, are
likely to provide significant backgrounds to new physics signals. The
differential cross section versus the transverse
momentum of the photon is shown in Fig.\ref{LHC}. It is apparent that the
difference between the NLO (solid line) and the LO
(dashed line) results grows much less rapidly with
increasing $p_{T\gamma}$ than was the case for the Tevatron. Fig.\ref{Kfac}
shows the K-factor, the ratio of the NLO to the LO cross section for $b$
quarks, which
stays stable at around $2$. To understand the difference between the
LHC and Tevatron curves, the contributions of the different parts contributing
to the LHC cross section are shown in Fig.\ref{LHC-p}.
From Fig.\ref{LHC-p} it can be seen that the annihilation subprocess no
longer drives the cross section at high $p_{T\gamma}$, and now it is the LO,
and the $gQ\rightarrow \gamma gQ$ subprocesses that are the most prominent.
These differences come about for two reasons. As the LHC collides two beams
of protons, instead of the proton and antiproton beams at Fermilab, there are
no longer any valence light antiquarks present. Hence, the relative
contribution of the annihilation subprocess is decreased. Also, because the
LHC will ultimately operate at a center of
mass energy which is about seven times larger than that of the Tevatron,
lower values of $x \sim p_T/\sqrt{S}$ are probed at the LHC.
For the kinematic region shown in Fig. \ref{LHC-p} the gluon
PDF is dominant, accounting for the continued importance of the $gQ$
initiated subprocesses.
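
As a rough illustration of this point, for a centrally produced photon with
$p_{T\gamma}=50{\rm\ GeV}$,
\[
x \sim \frac{p_{T\gamma}}{\sqrt S} \approx \frac{50}{1960} \approx 0.026
\ \ \mbox{(Tevatron)}, \qquad
x \sim \frac{50}{14000} \approx 0.0036 \ \ \mbox{(LHC)},
\]
consistent with the dominance of gluon-initiated subprocesses noted above.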
An interesting consequence of this pattern of subprocess contributions is
that the dominant parts are all proportional to the heavy quark PDFs.
Such was not the case for the Tevatron curves, except for the low end of the
$p_{T\gamma}$ range. Accordingly, heavy quark + photon measurements at the LHC
will have the potential to provide important cross checks on the
perturbatively calculated heavy quark PDFs. These PDFs are likely to provide
important contributions to other physics signals -- either standard model
or new physics -- and such checks will be an important part of the search for
new physics.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_parts_LHC_fy_paper}
\end{center}
\caption{\label{LHC-p} \small{contributions of the different subprocesses to
the differential cross section, $d\sigma /dp_{T\gamma}$ for $pp\rightarrow b
\gamma X$ at $\sqrt{S}=14{\rm\ TeV}$, NLO - solid line, annihilation $q\bar
q\rightarrow Q\bar Q\gamma$ - dashed line, $qQ\rightarrow qQ \gamma$ - dotted
line, $gQ\rightarrow gQ\gamma $ - dot dashed line, $gg\rightarrow Q\bar
Q\gamma + $LO - dash dot dotted line, $Q\bar Q\rightarrow \gamma Q\bar Q$,
and $QQ\rightarrow \gamma QQ$ - dot dash dotted line }}
\end{figure}
\subsection{NLO Fragmentation and Photon Isolation}
It is interesting to investigate what effect the NLO Fragmentation
contributions have upon the cross section. Fig.\ref{ratio_iso} shows the ratio
between the full NLO calculation and the cross section with only LO
fragmentation. If there are no isolation
requirements imposed on the photon, the cross section increases up to
$\sim 30\%$, solid curve in Fig.\ref{ratio_iso}. As mentioned above a photon
needs to be isolated in order to give a clear signal at a detector. The
isolation requirements affect photons produced by fragmentation
most strongly, as such a photon is emitted in close proximity to the parton
from which it fragments. This can be seen from the dashed line in Fig.
\ref{ratio_iso}, where the NLO fragmentation contribution has now decreased to
a few $\%$. Fig.\ref{iso} shows the comparison between the differential cross
section with the inclusion of isolation and without it. As can be seen this
difference is larger at low $p_{T\gamma}$, but the two curves come close to
each other as the photon's transverse momentum increases, where, as
seen from Fig.\ref{parts}, the $q\bar q\rightarrow \gamma Q\bar Q$ subprocess
takes over the cross section.
\begin{figure}
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{ratio_nloff_b_fy_paper}
\caption{\label{ratio_iso} \small{ratio between the differential cross section
$d\sigma /dp_{T\gamma}$, with NLO fragmentation contribution included and the
differential cross section with just LO fragmentation included, solid line - no
isolation required, dashed line - isolation}}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{b_nloff_iso_no_iso_fy_paper}
\caption{\label{iso} \small{comparison between the differential cross section,
$d\sigma /dp_{T\gamma}$ without isolation requirements and with them, no
isolation - solid line , isolation - dashed line}}
\end{center}
\end{minipage}
\end{figure}
\subsection{Intrinsic Charm}
In the CTEQ6.6M PDFs used in the previous sections, the charm quark is
radiatively generated from the gluon's PDF with the use of the DGLAP
equations. Thus it follows that there are no charm quarks present at scales
below the charm mass, $m_c$, or that the charm PDF vanishes, $c(x,\mu)=0$, when
$\mu<m_c$. This however does not need to be the case, and there are models
that study the possibility for an intrinsic charm component of the nucleon
\cite{Cteq:IC}. Two such models are the BHPS model, which is a light-cone
model, and the sea-like model in which the charm distribution follows the
shape of the light flavor sea quarks. The difference between the three cases
is shown in Fig.\ref{ic}, where the solid curve shows the CTEQ6.6M or
radiatively generated charm scenario, the dashed curve is the CTEQ6.6C2, or
BHPS model, and the dotted curve is CTEQ6.6C4 PDF or the sea-like model. The
difference between the BHPS distribution and the radiatively generated case
is most noticeable at large $x$, whereas the sea-like model is larger than
the CTEQ6.6M PDF by a roughly uniform amount at all values of $x$. How these different PDFs
affect the cross section can be seen from Fig.\ref{iso_nloff}. The dotted
curve shows the cross section generated with the use of the sea-like intrinsic
charm PDF, and it is larger than the solid curve by about the same amount
at all values
of $p_{T\gamma}$. The difference between the radiatively generated charm cross
section and the BHPS charm however is not great at small transverse momentum,
but it increases at large $p_{T\gamma}$, as is expected given the differences
between the CTEQ6.6M and CTEQ6.6C2 PDFs at large $x$.
\begin{figure}
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{cteq_charm_ic_fy_paper}
\caption{\label{ic} \small{comparison between the three different charm PDFs
at scale $Q=40 {\rm\ GeV}$, CTEQ6.6M - solid line, CTEQ6.6C2 - dashed line,
CTEQ6.6C4 - dotted line}}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{8cm}
\begin{center}
\includegraphics[scale=0.3,angle=270]{c_nloff_ic_2_fy_paper}
\caption{\label{iso_nloff} \small{the differential cross section,
$d\sigma /dp_{T\gamma}$, for the production of a direct photon and a charm
quark for the three different PDF cases, CTEQ6.6M - solid line, CTEQ6.6C2 -
dashed line, CTEQ6.6C4 - dotted line}}
\end{center}
\end{minipage}
\end{figure}
\section{Conclusion}
We have presented the results for the inclusive cross section for the
production of a direct photon in association with a heavy quark,
$p \bar p / pp \rightarrow \gamma Q X$, up to $\mathcal{O}(\alpha \alpha_s^2)$ with NLO
fragmentation included. The inclusion of NLO
fragmentation has a noticeable effect on the cross section if no isolation is
imposed. However, this effect decreases if isolation cuts,
needed for a clean photon signal, are imposed.
Predictions were presented for $p\bar p$ collisions at $\sqrt{S}=1.96{\rm\ TeV}$ and
for $pp$ collisions at $\sqrt{S}=14{\rm\ TeV}$.
At the Tevatron, due to the $p \bar p$ beams, the valence quarks and
antiquarks are dominant, and thus it is the annihilation subprocess
$q\bar q\rightarrow \gamma Q\bar Q$ that dominates the cross section at large
$p_{T\gamma}$. Therefore the sensitivity to the initial state heavy quarks and
their content in the nucleon decreases, and the difference between the bottom
and charm differential cross sections, $d\sigma /dp_{T\gamma}$, also
diminishes. At the LHC, where two beams of protons are colliding, there are
no longer any valence antiquarks present, and processes with initial gluons
and heavy quarks dominate. Thus there should be a greater possibility to
learn more about the heavy quark role in the nucleon. In particular, the
perturbatively calculated heavy quark PDFs may be checked using such data.
The propagation of elastic waves in rotating media has been a subject
of continuous interest in the last three decades or so.
Ever since the publication of a seminal article by Schoenberg and
Censor \cite{ScCe73},
numerous workers have studied how uniform rotation affects
time-dependent solutions to the governing equations (pointers to such
studies can be found in recent articles on waves in rotating media
such as Refs.\cite{AhKh01, ZhJi01,Auri04,Dest04}.)
The starting point of these studies is the inclusion
of the Coriolis and centrifugal accelerations into the equations of
motion:
\begin{equation} \label{1.1}
\text{div } \mathbf{T} = \rho \ddot{\mathbf{y}} + 2 \rho
\mbox{\boldmath
$\Omega$} \times \dot{\mathbf{y}} + \rho \mbox{\boldmath $\Omega$}
\times (\mbox{\boldmath $\Omega$} \times \mathbf{y}).
\end{equation}
Here $\mathbf{T}$ is the Cauchy stress tensor,
$\rho$ is the mass density,
$\mathbf{y} = \mathbf{y}(\mathbf{x},t)$ denotes the current position
of a particle in the material initially at $\mathbf{x}$ in the
reference configuration, and $\mbox{\boldmath $\Omega$}$ is the constant rotation
rate vector.
Also, a dot denotes differentiation with respect to
time $t$ in a \textit{fixed} (non-rotating) frame;
in other words, if ($\mathbf{e_1}$, $\mathbf{e_2}$, $\mathbf{e_3}$)
is one such frame, then $\mathbf{y} = y_i \mathbf{e_i}$ and
$\dot{\mathbf{y}} := (\partial y_i/\partial t)\mathbf{e_i}$.
The second term on the right hand-side of Eq.~\eqref{1.1} is the
Coriolis force and the third term is the centrifugal force.
This latter term is the source of an obvious concern in a linearly
elastic material with infinite dimension(s) because it grows linearly
with the distance between the particle and the axis of rotation.
Most (and perhaps all) previous works on the subject have dealt with
this potential problem simply by focusing on the so-called
``time-dependent'' part of the equations of motion.
In this approach, the solution $\mathbf{y}$ is split into a
``time-independent'' part and a ``time-dependent'' part as
$\mathbf{y}(\mathbf{x},t)
= \mathbf{y^s}(\mathbf{x}) + \mathbf{u}(\mathbf{x},t)$ (say).
Then, the constitutive equation of the elastic material being linear,
the Cauchy stress can also be split:
$\mathbf{T}(\mathbf{x},t) = \mathbf{T^s}(\mathbf{x})
+ \mbox{\boldmath $\sigma$}(\mathbf{x},t)$ (say) and the linearization
of the equations of motion allows for the \textit{separate} resolution
of a time-independent problem and of a time-dependent problem,
\begin{align}
& \text{div } \mathbf{T^s}(\mathbf{x})
= \rho \mbox{\boldmath $\Omega$}
\times (\mbox{\boldmath $\Omega$} \times \mathbf{y^s}(\mathbf{x})),
\label{1.2}
\\
& \text{div } \mbox{\boldmath $\sigma$}(\mathbf{x},t)
= \rho \ddot{\mathbf{u}}(\mathbf{x},t)
+2\rho \mbox{\boldmath $\Omega$} \times \dot{\mathbf{u}}(\mathbf{x},t)
+ \rho \mbox{\boldmath $\Omega$}
\times (\mbox{\boldmath$\Omega$}\times \mathbf{u}(\mathbf{x},t)).
\label{1.3}
\end{align}
Although the resolution of Eq.~\eqref{1.3} has generated a
wealth of results in a variety of contexts, the resolution of
Eq.~\eqref{1.2} seems to have been left aside, at least as long as
potentially infinite distances from the rotation axis are involved.
This paper aims at providing a context in which the \textit{global}
equations of motion in a rotating elastic medium, Eq.~\eqref{1.1},
possibly inclusive of finite strain effects and of a nonlinear
constitutive equation, can be posed and solved.
Because large strains might appear in a rotating elastic solid, we
place ourselves in the framework of finite nonlinear elasticity.
We focus on materials subject to the internal constraint of
incompressibility, first because many actual materials with a
nonlinear elastic response such as rubber or biological soft tissue
can be considered to be incompressible, and second because the
inherent introduction of an arbitrary scalar quantity
(the ``pressure'') leads to an immediate simplification of the
equations of motion Eq.~\eqref{1.1}.
Indeed, as we show in the next Section, the arbitrariness of the
$p\mathbf{1}$ term in the constitutive equation of an
incompressible body allows for the centrifugal force to be absorbed by
this pressure term.
Once this manipulation is done, the resolution of the
equations of motion can be conducted quite naturally.
As noted by Schoenberg and Censor \cite{ScCe73}, two features
characterize waves in rotating bodies as opposed to waves in
non-rotating bodies: a new direction of anisotropy
(linked to the rotation axis) and more dispersion (linked to the
rotation frequency).
To illustrate these features, we revisit some classic results on
finite amplitude elastic motion due to Carroll \cite{C1,C1a,C2,C3} and
extend them to the case of a body in rotation.
The exact solutions of Carroll are versatile in their fields of
application because they are valid not only for nonlinearly
elastic solids, but also for viscoelastic solids
\cite{C4}, Reiner-Rivlin fluids \cite{C4, C5}, Stokesian fluids
\cite{C4}, Rivlin-Ericksen fluids \cite{C5}, liquid crystals
\cite{C6}, dielectrics \cite{C7}, magnetic materials \cite{C8}, etc.
They also come in a great variety of forms,
as circularly-polarized harmonic progressive waves, as motions with
sinusoidal time dependence, as motions with sinusoidal space
dependence, etc.
In revisiting his findings, we note
a striking analogy between the equations of motion obtained for a
motion general enough to include all of the above motions, and the
equations obtained in the problem of the motion of a
nonlinear string, as considered by Rosenau and Rubin \cite{RR}.
Then we show how the method of \cite{RR} can be used to derive all
(and more) of the different results obtained by Carroll, which turn out
to be a direct consequence of material isotropy and of the Galilean
invariance of the field equations.
The paper is organized in the following manner.
In the next Section the basic equations for motions in a rotating
nonlinearly elastic incompressible solid
and their specialization to finite amplitude transverse waves are
given.
In Section 3 we recast the determining
equations in a general complex form and we show that they admit
some special separable solutions.
In Section 4 we investigate in detail the case of
circularly-polarized harmonic progressive waves.
We give the dispersion relation and solve it
for Mooney-Rivlin materials and for some other strain energy density
functions relevant to the modelling of rubberlike materials (some of
these results are new even in the non-rotating case).
Next we show that in rotating solids, motions with a sinusoidal time
dependence (Section 5) and motions with a sinusoidal spatial
dependence (Section 6) are determined by solving a reduced system of
ordinary differential equations, equivalent to that of a central
motion problem.
The main difference with Carroll's results for the non-rotating case
is that, for special values of the angular velocity, the central force
may be repulsive;
this possibility is ruled out in the non-rotating case by the
empirical inequalities \cite{BB}.
\section{Preliminaries}
\subsection{Equations of motion in a rotating elastic solid}
Let the initial and current coordinates of a point of the body,
referred to the same fixed rectangular Cartesian system of axes,
be denoted by $x_{i}$ and $y_{i}$, respectively, where the indices
take the values $1$, $2$, $3$.
A motion of the body is defined by
\begin{equation} \label{1}
\mathbf{y}=\mathbf{y}(\mathbf{x},t).
\end{equation}
The response of a homogeneous isotropic incompressible elastic solid to
deformations from an undistorted reference configuration is described
by the constitutive relation,
\begin{equation} \label{2}
\mathbf{T} = -\widetilde{p} \mathbf{1}
+ \alpha \mathbf{B}-\beta \mathbf{B}^{-1},
\end{equation}
where $\mathbf{T}$ is the Cauchy stress tensor, $\mathbf{1}$ is the
unit tensor, and $\mathbf{B}$ is the left Cauchy-Green strain tensor,
defined by
\begin{equation} \label{3}
\mathbf{B} := \mathbf{FF}^{T},
\end{equation}
$\mathbf{F} := \partial \mathbf{y}/\partial \mathbf{x}$ being the
deformation gradient tensor.
Also in Eq.~\eqref{2}, $\widetilde{p}$ is an arbitrary scalar
function associated with the internal constraint of incompressibility
\begin{equation} \label{4}
\det \mathbf{F} = 1,
\end{equation}
to be determined from the equations of motion and from any
boundary and initial conditions.
The response parameters $\alpha $ and $\beta $ are functions of
the first and second invariants of $\mathbf{B}$:
$\alpha = \alpha(I,II)$, $\beta = \beta(I,II)$, where
\begin{equation} \label{5}
I=\text{tr }\mathbf{B}, \quad II=\text{tr }\mathbf{B}^{-1}.
\end{equation}
For a hyperelastic material, a strain energy density per unit of
volume $W=W(I,II)$ is defined and $\alpha$, $\beta$ are given by
\begin{equation} \label{6}
\alpha = 2\frac{\partial W}{\partial I}, \quad
\beta = 2\frac{\partial W}{\partial II}.
\end{equation}
Now we consider that the elastic medium rotates with a uniform
rotation vector $\mathbf{\Omega }$, about a given axis.
In the absence of body forces, the equations of motions relative to a
rotating frame (see for instance \cite[pp.60--61]{Liu02}) are given by
Eq.\eqref{1.1}.
Using the constitutive equation Eq.~\eqref{2}, we obtain
\begin{equation} \label{7}
-\text{grad } \widetilde{p}
+ \text{div } (\alpha \mathbf{B} - \beta \mathbf{B}^{-1})
= \rho \ddot{\mathbf{y}}
+ 2 \rho \mbox{\boldmath $\Omega$} \times \dot{\mathbf{y}}
+ \rho \mbox{\boldmath $\Omega$} \times (\mbox{\boldmath$\Omega$}
\times \mathbf{y}).
\end{equation}
Now write $\widetilde{p}$ in the form
\begin{equation} \label{9}
\widetilde{p} = p
- \textstyle{\frac{1}{2}}\rho [\mathbf{\Omega \times}
(\mathbf{\Omega\times y})] \mathbf{\cdot y},
\end{equation}
where $p = p(\mathbf{x},t)$ is yet another arbitrary pressure scalar.
Then Eq.~\eqref{7} reduces to
\begin{equation} \label{10}
-\text{grad } p
+ \text{div } (\alpha \mathbf{B} - \beta \mathbf{B}^{-1})
= \rho \ddot{\mathbf{y}}
+ 2 \rho \mbox{\boldmath $\Omega$} \times \dot{\mathbf{y}}.
\end{equation}
Hence the equations of motion can be tackled independently of the
centrifugal acceleration, which no longer appears.
Once Eq.~\eqref{10} is solved, the solution $\mathbf{y}$ leads to
a pressure field $\widetilde{p}$, given by Eq.~\eqref{9},
which does depend on the centrifugal force.
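This absorption rests on an elementary vector identity.
Indeed,
\begin{equation}
[\mbox{\boldmath $\Omega$} \times (\mbox{\boldmath $\Omega$}
\times \mathbf{y})] \cdot \mathbf{y}
= (\mbox{\boldmath $\Omega$} \cdot \mathbf{y})^2
- |\mbox{\boldmath $\Omega$}|^2 |\mathbf{y}|^2
= -|\mbox{\boldmath $\Omega$} \times \mathbf{y}|^2,
\end{equation}
so that Eq.~\eqref{9} reads $\widetilde{p} = p
+ \frac{1}{2}\rho |\mbox{\boldmath $\Omega$} \times \mathbf{y}|^2$;
the gradient of the added term with respect to the current position
$\mathbf{y}$ is
\begin{equation}
\text{grad}\, (\textstyle{\frac{1}{2}}\rho
|\mbox{\boldmath $\Omega$} \times \mathbf{y}|^2)
= \rho [|\mbox{\boldmath $\Omega$}|^2 \mathbf{y}
- (\mbox{\boldmath $\Omega$} \cdot \mathbf{y})
\mbox{\boldmath $\Omega$}]
= -\rho \mbox{\boldmath $\Omega$}
\times (\mbox{\boldmath $\Omega$} \times \mathbf{y}),
\end{equation}
which cancels exactly the centrifugal term in Eq.~\eqref{7}.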
\subsection{Finite amplitude shearing motions}
Following Carroll \cite{C1}, we study for the remainder of the paper
the propagation of plane transverse waves in a bi-axially deformed
incompressible material.
Thus we consider the following class of shearing motions,
\begin{equation}
y_1 = \mu x_1 + u(z,t), \quad
y_2 = \mu x_2 + v(z,t), \quad
y_3 = \lambda x_3 =:z, \label{11}
\end{equation}
that is, a transverse wave polarized in the ($x_1 x_2$) plane and
propagating in the $x_3$-direction of a material subject to a pure
homogeneous pre-stretch with constant principal stretch ratios $\mu $,
$\mu$, $\lambda$ ($\mu^2 \lambda =1$) in the $x_{1}$, $x_{2}$, $x_{3}$
directions, respectively.
For these motions, we find
\begin{equation}
\mathbf{B}=
\begin{bmatrix}
\mu^2 + \lambda^2 u_z^2 & \lambda^2 u_z v_z & \lambda^2 u_z \\
\lambda^2 u_z v_z & \mu^2 + \lambda^2 v_z^2 & \lambda^2 v_z \\
\lambda^2 u_z & \lambda^2 v_z & \lambda^2
\end{bmatrix},
\quad
\mathbf{B}^{-1}=
\begin{bmatrix}
\lambda & 0 & -\lambda u_z \\
0 & \lambda & -\lambda v_z \\
-\lambda u_z & -\lambda v_z & \lambda (u_z^2 + v_z^2)+\mu^4
\end{bmatrix}.
\end{equation}
Here and henceforward, a subscript letter denotes partial
differentiation (i.e. $u_{z}:=\partial u/\partial z$,
$v_{tt}:=\partial ^{2}v/\partial t^{2}$, etc.)
It follows from Eq.~\eqref{5} that
\begin{equation} \label{12}
I = 2 \mu^2 + \lambda^2 (1+u_z^2 + v_z^2), \quad
II = \mu^4 + \lambda (2 + u_z^2 + v_z^2),
\end{equation}
so that both invariants, and consequently the response parameters
$\alpha$, $\beta$, are functions of $u_z^2 + v_z^2$ alone,
\begin{equation}
\alpha = \alpha (u_z^2 + v_z^2),\quad
\beta = \beta (u_z^2 + v_z^2).
\end{equation}
Then the equations of motion Eq.~\eqref{10} read
\begin{align}
& -p_{y_1} + (Qu_z)_z = \rho (u_{tt} - 2\Omega_3 v_t),
\notag \label{13} \\
& -p_{y_2} + (Qv_z)_z = \rho (v_{tt} + 2\Omega _3 u_t),
\notag \\
& -p_z + [\alpha \lambda^2 - \beta \mu^4
- \beta \lambda (u_z^2 + v_z^2)]_z
= 2\rho (\Omega_1 v_t - \Omega_2 u_t),
\end{align}
where the function $Q=Q(u_z^2 + v_z^2)$ is defined by
\begin{equation} \label{15}
Q:= \alpha \lambda^2 + \beta \lambda.
\end{equation}
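Indeed, the function $Q$ is simply the shear stress coefficient:
from Eq.~\eqref{2} and the expressions of $\mathbf{B}$ and
$\mathbf{B}^{-1}$ above,
\begin{equation}
T_{13} = \alpha B_{13} - \beta (\mathbf{B}^{-1})_{13}
= (\alpha \lambda^2 + \beta \lambda) u_z = Q u_z,
\quad
T_{23} = Q v_z,
\end{equation}
and Eqs.~\eqref{13}$_{1,2}$ follow directly.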
By inspection of Eqs.~\eqref{13}, we find that $p$ can be taken in the
form
\begin{equation}
p = p(z,t) =
\alpha \lambda^2 - \beta \mu^4 - \beta \lambda (u_z^2 + v_z^2)
- 2 \rho \textstyle{\int }(\Omega_1 v_t - \Omega_2 u_t) \text{d}z.
\label{16}
\end{equation}
Then Eq.~\eqref{13}$_{3}$ is satisfied and Eqs.~\eqref{13}$_{1,2}$
reduce to
\begin{equation} \label{motion}
(Qu_z)_z = \rho (u_{tt} - 2\Omega_3 v_t),
\quad
(Qv_z)_z = \rho (v_{tt} + 2\Omega_3 u_t).
\end{equation}
Eqs.~\eqref{motion} form a system of two coupled nonlinear hyperbolic
partial differential equations, generalizing the system derived by
Carroll in \cite{C1} for a non-rotating body.
\section{Separable solutions}
\subsection{Link with another problem (string motion)}
By inspection of the system Eqs.~\eqref{motion}, an analogy can be
drawn with the system of equations governing the motion of a nonlinear
string, as treated by Rosenau and Rubin \cite{RR}.
Indeed, if the position of a particle in a string is denoted by the
rectangular Cartesian coordinates $x(\xi ,t)$, $y(\xi ,t)$, where
$\xi$ is a curvilinear abscissa, then the equations of
motion of the string can be put in the form,
\begin{equation} \label{string}
\left[ (T/a) x_\xi \right]_\xi = \rho_0 (x_{tt} - f_1), \quad
\left[ (T/a) y_\xi \right]_\xi = \rho_0 (y_{tt} - f_2).
\end{equation}
Here, $T$ is the internal tension in the string (acting along the
tangent to the string curve), $f_{1}$ and $f_{2}$ are the components
of the body force per unit mass, $\rho_0 = \rho_0(\xi)$ is the mass
density, and $a$ is the metric associated with the stretch of the
string: $a=\sqrt{ x_\xi^2 + y_\xi^2}$.
Finally, a constitutive equation $T=T(a)$ for the internal tension
characterizes the string material.
The similarity between the two systems Eqs.~\eqref{motion} and
Eqs.~\eqref{string} is striking.
Accordingly we now adapt the analysis devised by Rosenau and Rubin
\cite{RR} for a nonlinear string to our system of governing equations.
\subsection{Separation of variables}
Seeking some exact solutions, we follow Rosenau's and Rubin's \cite{RR}
steps.
First we differentiate Eqs.~\eqref{motion} with respect to $z$,
and obtain
\begin{equation} \label{motion1}
[ Q U]_{zz} = \rho (U_{tt} - 2\Omega_3 V_t), \quad
[Q V]_{zz} = \rho (V_{tt} + 2\Omega_3 U_t),
\end{equation}
where $U := u_z$ and $V := v_z$.
Next, we define the complex function $Z$ as
\begin{equation} \label{ses}
Z(z,t) =\eta (z,t) \text{e}^{i\xi (z,t)} := U + iV,
\end{equation}
so that
\begin{equation}
U = \Re(Z) = \eta \cos \xi, \quad V = \Im(Z) = \eta \sin \xi.
\label{ses1}
\end{equation}
Then, we rewrite the system Eqs.~\eqref{motion1} as a single complex
equation,
\begin{equation}
[Q (\eta^2) Z]_{zz} = \rho (Z_{tt} + 2i \Omega_3 Z_t).
\label{ses2}
\end{equation}
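Taking the real and imaginary parts of Eq.~\eqref{ses2} recovers
Eqs.~\eqref{motion1}, since
\begin{equation}
\Re (Z_{tt} + 2i \Omega_3 Z_t) = U_{tt} - 2\Omega_3 V_t,
\quad
\Im (Z_{tt} + 2i \Omega_3 Z_t) = V_{tt} + 2\Omega_3 U_t.
\end{equation}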
To reduce further this equation to a set of ordinary differential
equations, we look for a class of solutions admitting the separable
forms:
\begin{equation} \label{ses3}
\eta(z, t) = \eta_1(z) \eta_2 (t), \quad
\xi(z,t) = \xi_1(z) + \xi_2 (t),
\end{equation}
where $\eta_1$ and $\xi_1$ ($\eta_2$ and $\xi_2$) are functions of
space (time) only.
Then Eq.~\eqref{ses2} can be cast in the form
\begin{equation} \label{ses4}
\dfrac{[Q(\eta_1^2 \eta_2^2) \eta_1 \text{e}^{i\xi_1}]_{zz}}
{\eta_1 \text{e}^{i \xi_1}}
= \rho \dfrac{(\eta_2 \text{e}^{i\xi_2})'' +
2i\Omega_3 (\eta_2 \text{e}^{i \xi_2})'}
{\eta_2 \text{e}^{i\xi_2}},
\end{equation}
where the prime denotes differentiation with respect to the argument
of a single-variable function.
Rosenau and Rubin \cite{RR} noted that a sufficient condition to ensure
complete separation of time functions from space functions in this
equation is that the material response function $Q$ be itself
separable.
Indeed if
\begin{equation} \label{ses5}
Q(\eta_1^2 \eta_2^2) = Q_1(\eta_1^2) Q_2(\eta_2^2),
\end{equation}
(say) then we end up with the two ordinary differential equations,
\begin{align} \label{Q1Q2}
& [Q_1 (\eta_1^2) \eta_1 \text{e}^{i\xi_1}]''
= h \eta_1 \text{e}^{i\xi_1},
\notag \\
& \rho [(\eta_2 \text{e}^{i\xi_2})''
+ 2i \Omega_3 (\eta_2 \text{e}^{i\xi_2})']
= h Q_2(\eta_2^2) \eta_2 \text{e}^{ i\xi_2},
\end{align}
for some constant $h$.
The separation condition Eq.~\eqref{ses5} is however rather strong and
might be fulfilled only for very specific constitutive equations.
Another possibility, not mentioned by Rosenau and Rubin,
for the separation of space functions from time functions
arises when either $\eta_1(z)$ or $\eta_2(t)$ is a constant function
(independent of its argument).
Hence, when $\eta_1 = k_1$ (say), Eq.~\eqref{ses4} yields
\begin{equation} \label{eta1Const}
( \text{e}^{i\xi_1})'' = h \text{e}^{i\xi_1}, \quad
\rho [(\eta_2 \text{e}^{i\xi_2})''
+ 2i \Omega_3 (\eta_2 \text{e}^{i\xi_2})']
= h Q (k_1^2\eta_2^2) \eta_2 \text{e}^{i\xi_2},
\end{equation}
and when $\eta_2 = k_2$ (say), it yields
\begin{equation} \label{eta2Const}
[Q (k_2^2\eta_1^2) \eta_1 \text{e}^{i\xi_1}]''
= h \eta_1 \text{e}^{i\xi_1},
\quad
\rho [(\text{e}^{i\xi_2})'' + 2i \Omega_3 (\text{e}^{i\xi_2})']
= h \text{e}^{ i\xi_2}.
\end{equation}
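For example, a sinusoidal time dependence $\xi_2(t) = \omega t$ in
Eq.~\eqref{eta2Const}$_2$ fixes the separation constant, because
\begin{equation}
\rho [(\text{e}^{i\omega t})'' + 2i \Omega_3 (\text{e}^{i\omega t})']
= -\rho (\omega^2 + 2\Omega_3 \omega) \text{e}^{i\omega t},
\quad \text{so that} \quad
h = -\rho (\omega^2 + 2\Omega_3 \omega).
\end{equation}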
The conditions $\eta_1=$const. or $\eta_2 =$const. do not impose any
restriction on the strain energy function.
Thus, the solutions to Eqs.~\eqref{eta1Const} or Eqs.~\eqref{eta2Const}
are valid for any type of material,
in contrast to the solutions to Eqs.~\eqref{Q1Q2}, which require
Eq.~\eqref{ses5} to be satisfied.
For instance, consider the solution
\begin{equation}
Z(z,t) = [\psi(t) + i \phi(t)] k \text{e}^{i(kz + \theta(t))},
\label{ses13}
\end{equation}
where $k$ is a constant and $\psi$, $\phi$, $\theta$ are arbitrary real
functions of time.
A simple check shows that $Z$ is indeed of the form given by
Eqs.~\eqref{ses} and Eqs.~\eqref{ses3}, with the following
identifications:
$\eta_1(z) = k = $ const.,
$\eta_2(t) = [\phi^2 + \psi^2]^{\textstyle{\frac{1}{2}}}$,
$\xi_1(z) = kz$, and $\xi_2(t) = \theta + \tan^{-1}(\phi/\psi)$.
Once the ordinary differential equations Eqs.~\eqref{eta1Const} are
solved, the displacement field is given by
\begin{align} \label{field1}
& u(z,t) = \phi(t) \cos (kz+\theta (t)) + \psi(t) \sin(kz+\theta (t)),
\notag \\
& v(z,t) = \phi (t)\sin (kz+\theta (t)) - \psi(t) \cos (kz+\theta (t)).
\end{align}
On the other hand, consider the solution
\begin{equation}
Z(z,t) =
[(i\phi(z) + \psi(z)) \theta'(z) + (\phi'(z) - i\psi'(z))]
\text{e}^{i(\omega t+\theta(z))},
\label{ses14}
\end{equation}
where $\omega$ is a constant and $\psi$, $\phi$, $\theta$ are arbitrary
functions of space.
Here $Z$ is of the form given by Eqs.~\eqref{ses} and
Eqs.~\eqref{ses3}, with the identifications
$\eta_1(z) =
[(\phi' + \psi \theta')^2
+ (\phi \theta' - \psi')^2]^{\textstyle{\frac{1}{2}}}$,
$\eta_2(t) = 1 =$ const.,
$\xi_1(z) = \theta + \tan^{-1}[(\phi \theta' - \psi')/
(\psi \theta' + \phi')]$,
and $\xi_2(t) = \omega t$.
Once the ordinary differential equations Eqs.~\eqref{eta2Const}
are solved, the displacement field is given by
\begin{align}
& u(z,t) = \phi(z) \cos(\omega t + \theta(z))
+ \psi(z)\sin(\omega t + \theta(z)),
\notag \label{field2} \\
& v(z,t) = \phi(z) \sin(\omega t + \theta(z))
- \psi(z) \cos(\omega t + \theta(z)).
\end{align}
The two sets of displacement fields Eqs.~\eqref{field1} and
Eqs.~\eqref{field2} provide a great variety of possible finite
amplitude motions, valid in every deformed incompressible nonlinearly
elastic solid.
They are inclusive of the solutions discovered and analyzed by Carroll
over the years.
Thus the motion Eqs.~\eqref{field1} written at $\psi (t)=0$
corresponds to the ``oscillatory shearing motions'' treated in
\cite{C1a};
the motion Eqs.~\eqref{field2} written
at $\psi (z)=0$ corresponds to the ``motions with time-independent
invariants'' treated in \cite{C1a};
the motion Eqs.~\eqref{field2} written at $\theta (z)=0$ corresponds
to the ``motions with sinusoidal time dependence''
or ``finite amplitude circularly-polarized standing waves'' treated in
\cite{C2,C3};
the motion Eqs.~\eqref{field1} written at $\phi (t)=$const.,
$\psi (t)=0$, $\theta (t)=-\omega t$,
or equivalently the motion Eqs.~\eqref{field2} written at
$\phi (z)=$const., $\psi (z)=0$, $\theta (z)=-kz$, corresponds
to the celebrated finite-amplitude circularly-polarized harmonic
progressive waves of \cite{C1}.
Before we consider in turn each of these finite-amplitude motions for a
rotating body, we sum up the main results established in this
Section.
We used a formalism proposed by Rosenau and Rubin \cite{RR} for the
plane motion of a nonlinear string to derive separable solutions to
the equations of motion of a deformed rotating solid in which
finite-amplitude shearing motions might propagate.
In the process, we noticed that two classes of
solutions, not considered by Rosenau and Rubin, were valid for any
form of the strain energy function.
Each class provided solutions which generalize
those proposed by Carroll \cite{C1,C1a,C2,C3,C4,C5,C6,C7,C8} and which
put them into a wider context.
On the other hand, the complex formalism makes it clear
that the solutions considered here are related to natural symmetry
properties of the coupled wave equations Eqs.~\eqref{motion1}.
These properties are natural because they come out from material
symmetries and frame indifference requirements \cite{BB}.
We refer to the works of Olver \cite{O} and of Vassiliou \cite{V}
for further information on the application of group analysis to
coupled wave equations.
\section{Circularly-polarized harmonic waves}
First we consider a finite amplitude circularly-polarized harmonic
progressive wave propagating in the $z$-direction,
\begin{equation} \label{circle}
u(z,t) = A \cos (kz-\omega t), \quad v(z,t) = \pm A \sin (kz-\omega t),
\end{equation}
which is a subcase of Eqs.~\eqref{field1} or of Eqs.~\eqref{field2}.
Here the amplitude $A$, the wave number $k$, and the frequency
$\omega$ are real positive constants,
and the plus (minus) sign for $v(z,t)$ corresponds to a
left (right) circularly-polarized wave.
For the choice of motion Eqs.~\eqref{circle}, we have
\begin{equation}
u_z^2 + v_z^2 = A^2 k^2,
\end{equation}
and Eqs.~\eqref{motion} reduce to the following dispersion
equation,
\begin{equation} \label{dispersion}
k^2 Q (A^2 k^2) = \rho (\omega^2 \mp 2\Omega_3 \omega).
\end{equation}
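This relation follows by direct substitution: for the motion
Eqs.~\eqref{circle} the argument of $Q$ is constant, so that
\begin{equation}
(Q u_z)_z = -k^2 Q(A^2 k^2) u, \quad
u_{tt} = -\omega^2 u, \quad
v_t = \mp \omega u,
\end{equation}
and each of Eqs.~\eqref{motion} reduces to
$-k^2 Q(A^2 k^2) u = -\rho (\omega^2 \mp 2\Omega_3 \omega)u$
(and similarly with $v$).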
The explicit form of the dispersion relation depends on the
constitutive equation.
However we recall that, according to considerations by Carroll
\cite{C1} pertaining to the non-rotating case, $k^2 Q (A^2 k^2)$ must
be a positive, monotonically increasing function tending to infinity
with $k^2$.
It follows from the dispersion equation Eq.~\eqref{dispersion} that a
left circularly-polarized wave of frequency $\omega$ exists only for
rotation rates below the cut-off value $\Omega_3 = \omega /2$;
beyond this value the wave cannot propagate.
We now treat in turn three types of constitutive equations, which have
proved useful for the modelling of some incompressible rubberlike and
soft biological materials.
\subsection{Waves in deformed Mooney-Rivlin materials}
As a first illustration we consider a Mooney-Rivlin hyperelastic
material, with strain energy density,
\begin{equation} \label{MR}
W_\text{MR} = C(I-3)/2 + D(II-3)/2,
\end{equation}
where $C$ and $D$ are constants, satisfying \cite{BoHa95}
$C>0$, $D\geq 0$ or $C \geq 0$, $D > 0$.
It follows at once from Eqs.~\eqref{6} that $\alpha =C$, $\beta =D$,
and, by Eq.~\eqref{15}, that $Q$ is constant.
Introducing the speed $c$ of circularly-polarized waves
in a bi-axially deformed, non-rotating Mooney-Rivlin
material \cite{C1,BoHa95},
\begin{equation}
\rho c^2 := Q = C\lambda^2 + D\lambda,
\end{equation}
we find that the dispersion equation Eq.~\eqref{dispersion} reads here,
\begin{equation}
c^2 k^2 = \omega^2 \mp 2 \Omega_3 \omega.
\end{equation}
From this equation we easily deduce the phase speed
$v_\varphi :=\omega/k$ and the group speed
$v_g:=\partial \omega /\partial k$, as well as their
Taylor expansion to third-order for small ratios of the rotation rate
$\Omega_3$ with respect to the wave frequency $\omega$.
Introducing $\delta$, the ratio of these two frequencies,
$\delta := \Omega_3/\omega$, we find
\begin{align}
& \dfrac{v_\varphi}{c} = \dfrac{1}{\sqrt{1 \mp 2\delta}}
= 1 \pm \delta + \textstyle{\frac{3}{2}} \delta^2 + O( \delta^3) ,
\notag \\
& \dfrac{v_g}{c} = \dfrac{\sqrt{1 \mp 2\delta}}{1 \mp \delta}
= 1 - \textstyle{\frac{1}{2}} \delta^2 + O(\delta^3).
\end{align}
Clearly, the right circularly-polarized wave is defined for any value
of the rotation rate, whereas the left circularly-polarized wave
exists only for $\Omega_3 < \omega/2$.
Note also that a left circularly-polarized wave is accelerated when the
Mooney-Rivlin material is put into rotation and that a right
circularly-polarized wave is slowed down.
To investigate further nonlinear stress-strain responses, we consider
two types of incompressible materials belonging to the class of
`neo-Hookean generalized materials'.
These are materials whose strain-energy function depends only on the
first invariant: $W = W(I)$.
For simplicity, we consider that the solids are not
prestressed ($\lambda = \mu =1$) prior to the rotation and wave propagation
although this assumption is not essential.
\subsection{Waves in undeformed Gent materials}
Consider the following strain energy density:
\begin{equation} \label{Gent}
W_\text{G} = - \dfrac{CJ_m}{2} \ln \left( 1- \dfrac{I-3}{J_m}\right),
\end{equation}
where $C(>0)$ is the infinitesimal shear modulus and $J_m$ is a
material parameter.
Gent \cite{Gent} introduced the strain energy function $W_\text{G}$
to take into account the finite length of the
macromolecular chains composing elastomeric materials
(see also \cite{HS}).
Hence, the parameter $J_m$ has a physical interpretation: it is the
constant limiting value for $I-3$, and it reflects the mesoscopic
finite chain length limiting effect.
As $J_m \rightarrow \infty$, the limiting effect vanishes and the
strain energy density Eq.~\eqref{Gent} tends to that of a neo-Hookean
solid (Eq.~\eqref{MR} with $D=0$.)
For the motion considered in this Section, $I = 3 + A^2k^2$, and so the
limiting chain condition imposes $A^2k^2<J_m$. From the strain energy
density Eq.~\eqref{Gent} we find that the response parameters $\alpha$
and $\beta$ defined in Eqs.~\eqref{6} are:
\begin{equation}
\alpha = C\frac{J_{m}}{J_m - A^2 k^2}, \quad \beta = 0.
\end{equation}
It follows from the definition Eq.~\eqref{15} of $Q$, written at
$\lambda = 1$, that the dispersion equation Eq.~\eqref{dispersion}
reads, for finite-amplitude circularly-polarized harmonic waves in a
rotating undeformed Gent material, as
\begin{equation}
C \frac{J_m}{J_m - A^2 k^2} k^2 = \rho (\omega^2 \mp 2\Omega_3 \omega).
\end{equation}
With $\delta := \Omega_3/\omega$ as before, we find that the phase
velocity $v_\varphi := \omega /k$ is given by
\begin{equation} \label{GH6}
\rho v_\varphi^2 =
\frac{CJ_m + \rho \omega^2 A^2 (1 \mp 2\delta)} {J_m(1 \mp 2\delta)},
\end{equation}
and is defined for all rotation rates for the right wave, but only
below the cut-off value for the left wave.
The group velocity, $v_g := \partial \omega /\partial k$, is found as
\begin{equation}
v_g =
\frac{ \rho v_\varphi^3}{C} \frac{(1 \mp 2\delta)^2}{1 \mp \delta}.
\end{equation}
In contrast to the case of a Mooney-Rivlin material, the waves are also
dispersive when the body is not rotating; then $\Omega_3 = 0$ and
\begin{equation}
v_\varphi = \sqrt{\dfrac{CJ_m + \rho \omega^2 A^2}{\rho J_m}},
\quad
v_g = \frac{ \rho v_\varphi^3}{C}.
\end{equation}
These latter results are worth mentioning because in \cite{C1},
Carroll treated explicitly only the case of Mooney-Rivlin materials.
Moreover they may be used as benchmarks for an acoustical
determination of the limiting chain parameter $J_m$.
Acoustical evaluation is non-invasive and non-destructive, and is
therefore appropriate for an estimation \textit{in vivo}
of $J_m$, whose numerical value can be linked to the ageing and
stiffening of a soft biological tissue such as an arterial wall
\cite{HS}.
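For instance, inverting the non-rotating phase speed formula above
gives the limiting chain parameter explicitly in terms of measurable
quantities,
\begin{equation}
J_m = \dfrac{\rho \omega^2 A^2}{\rho v_\varphi^2 - C},
\end{equation}
a formula meaningful as long as the measured phase speed exceeds the
linear shear wave speed $\sqrt{C/\rho}$.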
\subsection{Waves in undeformed power-law materials}
Now consider the following strain energy density,
\begin{equation} \label{power}
W_\text{K} =
\frac{C}{b}\left[ \left( 1+\frac{b}{n}\left( I-3\right) \right)
^{n}-1\right] ,
\end{equation}
where $C(>0)$, $b$, and $n$ are constitutive parameters.
Knowles \cite{Kn} proposed that this strain energy could account for
\textit{strain softening} when $n<1$ and for \textit{strain hardening}
when $n>1$.
These effects have been observed for many real materials.
Here we find that the dispersion equation Eq.~\eqref{dispersion} is
given by
\begin{equation}
C\left( 1 + \frac{b}{n} A^2 k^2 \right)^{n-1} k^2
= \rho (\omega^2 \mp 2 \Omega_3 \omega).
\end{equation}
Taking $n=2$ in Eq.~\eqref{power} as an example of strain energy for a
hardening material, we find that the corresponding phase and group
velocities are given by
\begin{align}
& \rho v_\varphi^2
= \frac{C}{2 (1 \mp 2\delta)}
\left[1 + \sqrt{1+2\frac{\rho\omega^2}{C} bA^2(1 \mp 2\delta)}\right],
\notag \\
& v_g = \frac{C}{\rho v_\varphi (1 \mp \delta)}
\sqrt{1 + 2\frac{\rho \omega^2}{C} bA^2(1 \mp 2\delta)}.
\end{align}
The choice $n = \textstyle{\frac{1}{2}}$ in Eq.~\eqref{power} provides
an example of strain energy for a softening material.
As pointed out by Knowles \cite{Kn}, this choice is a borderline value
for $n$, as the material is elliptic but not uniformly elliptic.
We compute the corresponding phase speed as
\begin{equation}
\rho v_\varphi^2 = \frac{C}{1 \mp 2 \delta}
\sqrt{1 + \left[\frac{\rho \omega^2}{C}bA^2(1 \mp 2\delta) \right]^2}
- \rho \omega^2 b A^2,
\end{equation}
and we omit the expression for the group speed, which is too
cumbersome to display.
\section{Motions with sinusoidal time dependence}
In this section we consider finite-amplitude shearing motions with a
sinusoidal time dependence,
\begin{equation}
u(z,t) = \phi (z)\cos (\omega t)+\psi (z)\sin (\omega t),
\quad
v(z,t)=\phi (z)\sin (\omega t)-\psi (z)\cos (\omega t),
\label{std1}
\end{equation}
which are a subcase of Eqs.~\eqref{field2}. For these solutions we have
\begin{equation}
u_z^2 + v_z^2 = \phi'^2 + \psi'^2,
\end{equation}
and so the strain invariants Eq.~\eqref{12} are spatially nonuniform
and constant in time \cite{C1a}.
The governing equations Eqs.~\eqref{motion} reduce to
\begin{equation}
(Q\phi')' = -\rho (\omega^2 + 2\Omega_3 \omega)\phi,
\quad
(Q\psi')' = -\rho (\omega^2 + 2\Omega_3 \omega)\psi .
\label{cm0}
\end{equation}
These equations are consistent at $\Omega _{3}=0$ with those derived
by Carroll \cite{C2}.
Following his lead, we reduce them to a problem in central
force motion.
We introduce the functions $\Phi(z)$ and $\Psi(z)$ defined by
\begin{equation}
\Phi := Q \phi', \quad \Psi := Q\psi'. \label{cm1}
\end{equation}
We assume that these latter equalities are invertible as
\begin{equation}
\phi' = \nu \Phi , \quad \psi' = \nu \Psi , \label{cm2}
\end{equation}
where the generalized shear compliance $\nu$ ($>0$) \cite{C2,C3} is a
function of the shear stress $\sigma$, itself given by
$\sigma^2 = \Phi^2 + \Psi^2$.
For example, in the case of a bi-axially deformed Mooney-Rivlin
material with strain energy Eq.~\eqref{MR}, $\nu$ is constant:
$\nu_\text{MR} = 1/(C \lambda^2 + D \lambda)$;
in the case of an undeformed Gent material with
strain energy Eq.~\eqref{Gent}, we find that $\nu$ is given by
$\nu_\text{G}
= (CJ_m / 2 \sigma^2) (\sqrt{1 + (4 \sigma^2)/(C^2J_m)} - 1)$.
Note that Carroll \cite{C3} proposed expressions for $\nu$ when the
strain-energy density is expanded up to sixth order in the invariants
$(I-3)$ and $(II-3)$.
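The two compliance expressions quoted above can be checked numerically. In particular, as $\sigma \to 0$ the Gent compliance should reduce to the neo-Hookean value $1/C$, since $\sqrt{1+x} - 1 \sim x/2$ for small $x$ (the sketch below uses made-up parameter values for illustration):

```python
import math

def nu_gent(sigma, C, Jm):
    """Generalized shear compliance for the undeformed Gent material
    (expression quoted in the text); sigma is the shear stress."""
    return (C * Jm / (2 * sigma**2)) * (math.sqrt(1 + 4 * sigma**2 / (C**2 * Jm)) - 1)

def nu_mooney_rivlin(C, D, lam):
    """Constant compliance for the bi-axially deformed Mooney-Rivlin material."""
    return 1.0 / (C * lam**2 + D * lam)

# illustrative values: as sigma -> 0 the Gent compliance tends to 1/C
C, Jm = 2.0, 30.0
print(nu_gent(1e-3, C, Jm))        # ~ 1/C = 0.5
print(nu_mooney_rivlin(2.0, 1.0, 1.0))  # constant: 1/3
```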
Substitution of Eq.~\eqref{cm2} into the derivative with respect to
$z$ of Eqs.~\eqref{cm0} leads to the system of coupled ordinary
differential equations,
\begin{equation}
\Phi'' + \rho (\omega^2 + 2\Omega_3 \omega)\nu \Phi = 0,
\quad
\Psi'' + \rho (\omega^2 + 2\Omega_3 \omega )\nu \Psi = 0.
\label{cm3}
\end{equation}
This system is formally equivalent to the one governing the motion of a
particle in a plane under a field of central forces, after
identification of $\Phi $ and $\Psi $ with the rectangular Cartesian
coordinates and of $z$ with time.
The usual change of variables from rectangular Cartesian to polar
coordinates,
\begin{equation}
\Phi =r\cos \theta ,\quad \Psi =r\sin \theta , \label{cm4}
\end{equation}
leads to
\begin{equation}
r'' - r \theta'^{\,2} +
\rho (\omega^2 + 2 \Omega_3 \omega) \nu(r^2) r = 0,
\quad
r \theta'' + 2 r' \theta' = 0. \label{cm5}
\end{equation}
These equations coincide at $\Omega_3 = 0$ with those of Carroll
\cite{C2}.
Eq.~\eqref{cm5}$_2$ is
integrated as $r^2 \theta' = A$, a constant.
Substituting this new equation into Eq.~\eqref{cm5}$_1$, multiplying
across by $r'$, and integrating yields
\begin{equation}
r^{' 2} + A r^{-2}
- \rho (\omega^2 + 2\Omega_3 \omega )\textstyle{\int} \nu(s)ds
= B,
\label{en}
\end{equation}
another constant.
For a further treatment and discussions on the interpretation
of the solution to this equation, we refer to the papers by
Carroll \cite{C1a,C2,C3,C4,C5}, at least as long as
$\Omega_3 > -\omega/2$.
We note that the nature of this equation and of its solutions is
dramatically altered as $\Omega_3$ tends to $-\omega/2$ and
beyond, where it is reasonable to expect that (for example)
what was a periodic solution to Eq.~\eqref{en} for
$\Omega_3 > -\omega/2$ has turned into an unbounded solution
for $\Omega_3 < -\omega/2$ because then, the central force of
Eq.~\eqref{cm5}$_1$ is repulsive instead of attractive.
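The boundedness claim for the attractive regime can be illustrated numerically. The sketch below (illustrative constants only) takes a constant compliance $\nu$, as for the Mooney-Rivlin material, writes the radial equation with $\theta' = A/r^2$, and verifies along an integrated orbit that the corresponding first integral stays constant and the radius stays bounded:

```python
def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Attractive regime (Omega_3 > -omega/2): r'' = A^2/r^3 - c*nu*r, where
# c = rho*(omega^2 + 2*Omega_3*omega) > 0 and nu is taken constant
# (Mooney-Rivlin-like). All numbers below are illustrative.
A, c_coef, nu = 1.0, 1.0, 1.0

def rhs(y):
    r, rp = y
    return [rp, A**2 / r**3 - c_coef * nu * r]

def first_integral(y):
    """B = r'^2 + A^2/r^2 + c*nu*r^2, constant along solutions."""
    r, rp = y
    return rp**2 + A**2 / r**2 + c_coef * nu * r**2

y = [1.5, 0.0]
B0 = first_integral(y)
for _ in range(20000):
    y = rk4_step(rhs, y, 1e-3)
print(abs(first_integral(y) - B0))  # ~0: bounded, periodic orbit
```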
\section{Motions with sinusoidal spatial dependence}
Finally, we consider a plane wave motion with sinusoidal spatial
variations,
\begin{equation}
u(z,t) = \phi(t) \cos(kz) + \psi(t) \sin(kz),
\quad
v(z,t) = \phi(t) \sin(kz) - \psi(t) \sin(kz).
\end{equation}
This standing wave \cite{C2,C3} generalizes the superposition of two
circularly-polarized waves propagating in opposite directions.
It is a subcase of Eqs.~\eqref{field1}.
Here,
\begin{equation}
u_z^2 + v_z^2 = k^2 (\phi^2 + \psi^2),
\end{equation}
so that $I$, $II$, $\alpha$, $\beta$, and $Q$ are independent of $z$.
The governing equations Eqs.~\eqref{motion} reduce to the system of
ordinary differential equations,
\begin{equation}
\rho \phi'' + 2 \rho \Omega_3 \psi' + k^2 Q \phi = 0,
\quad
\rho \psi'' - 2 \rho \Omega_3 \phi' + k^2 Q \psi =0.
\label{ss0}
\end{equation}
This system coincides at $\Omega_3 = 0$ with the system
established by Carroll \cite{C2}.
It is worth noting that the change of variables,
\begin{equation}
k \phi(t) = r(t) \cos(\theta(t) + \Omega_3 t),
\quad
k \psi(t) = r(t) \sin(\theta(t) + \Omega_3 t),
\label{ss1}
\end{equation}
leads to a \emph{modified} central field problem
\begin{equation}
r'' - r \theta^{'2} + [(k^2/\rho) Q(r^2) + \Omega_3^2] r = 0,
\quad
r \theta'' + 2 r' \theta' = 0. \label{ss4}
\end{equation}
Again, integration of the second equation Eq.~\eqref{ss4}$_2$
leads to $r^2 \theta' = A$, a constant.
Then, substitution into Eq.~\eqref{ss4}$_1$, multiplication, and
integration leads to
\begin{equation}
r'^{\,2} + A^2 r^{-2}
+ (k^2/\rho) \textstyle{\int} Q(s)ds
+ \Omega_3^2 r^2
= B,
\end{equation}
another constant.
Here the presence of rotation ($\Omega_3 \neq 0$) always alters the
nature of the solution with respect to the non-rotating case.
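For completeness, the multiplication-and-integration step can be made explicit: with $\theta' = A/r^2$ from Eq.~\eqref{ss4}$_2$, multiplying Eq.~\eqref{ss4}$_1$ by $2r'$ turns each term into an exact derivative,

```latex
\begin{equation}
2 r' r'' - \frac{2 A^2 r'}{r^3}
+ \left[(k^2/\rho)\, Q(r^2) + \Omega_3^2\right] 2 r r'
= \left( r'^{\,2} + \frac{A^2}{r^2}
+ (k^2/\rho) \textstyle{\int}^{r^2} Q(s)ds
+ \Omega_3^2\, r^2 \right)' = 0,
\end{equation}
```

so that the quantity in parentheses is the constant $B$.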
\section{Concluding remarks}
Incompressible nonlinear elasticity provided a coherent framework where the
equations of motion could be written in full, and possibly solved, for a rotating
elastic body, without having to be split into a ``time-dependent'' solution and
a hypothetical ``time-independent'' solution.
The internal constraint of incompressibility played a crucial role in the
writing of these equations, because the arbitrary pressure term can absorb
the possibly troublesome centrifugal force.
As an illustration, the equations of motion were solved using
the finite amplitude motions introduced and developed
by Carroll in non-rotating elastic bodies.
Because his solutions constitute one of the few examples of finite amplitude
exact solutions, much emphasis was placed on how to derive them.
In particular, it was shown how the search for separable solutions
could recover and extend Carroll's results.
For circularly polarized harmonic waves, the dispersion equation was derived
explicitly and solved for the Mooney-Rivlin, Gent, and power-law strain
energy functions.
For motions with sinusoidal time dependence and for motions with
sinusoidal space dependence, the procedure of reduction to a set of
ordinary differential equations was outlined.
Their eventual resolution can be adapted from Carroll's works,
but is beyond the scope of this contribution.
The resolution of the full equations of motion in a rotating hyperelastic
\textit{compressible} material is also left open.
\section{Introduction}\label{sec:intro}
The backpressure routing and scheduling paradigm has emerged from the pioneering work \cite{tass_eph1}, \cite{tass_eph2}, which showed that, in wireless networks where nodes route and schedule packets based on queue backlog differences, one can stabilize the queues for any feasible traffic. This seminal idea has generated a lot of research interest. Moreover, it has been shown that backpressure can be combined with flow control to provide utility-optimal operation \cite{neely_mod}.
The strengths of these techniques have recently increased the interest in practical implementation of the backpressure framework over wireless networks as summarized in Section~\ref{sec:related}. One important practical problem that remains open, and is the focus of this paper, is the performance of backpressure with Transmission Control Protocol (TCP) flows.
TCP is the dominant transport protocol in the Internet today and is likely to remain so for the foreseeable future. Therefore, it is crucial to exploit the throughput improvement potential of backpressure routing and scheduling for TCP flows. However, TCP flows are not compatible with backpressure.
Their joint behavior is so detrimental that some flows may never get a chance to transmit. To better illustrate this point, we first discuss the operation of backpressure in the following example.
\begin{figure}[t!]
\vspace{-10pt}
\centering
\subfigure[\scriptsize Backpressure with random arrivals with rates $A_1(t)$, $A_2(t)$]{ \label{fig:intro_example_a} \scalebox{.65}{\includegraphics[bb=0 0 165 180]{example_fig_a.eps}} }
\subfigure[\scriptsize Backpressure with TCP arrivals]{ \scalebox{.65}{\label{fig:intro_example_b}\includegraphics[bb=0 0 165 180]{example_fig_b.eps}} }
\vspace{-5pt}
\caption{\scriptsize Example one-hop downlink topology consisting of a transmitter node $I$, and two receiver nodes; $R_1$ and $R_2$. The two flows; $1$ and $2$ are destined to $R_1$ and $R_2$, respectively. $U_{I}^{1}(t)$ and $U_{I}^{2}(t)$ are per-flow queue sizes at time $t$. (a) Backpressure with random arrivals with rates $A_1(t)$ and $A_2(t)$. (b) Backpressure with TCP arrivals. }
\vspace{-20pt}
\label{fig:intro_example}
\end{figure}
\begin{example}\label{ex1}
Let us consider Fig.~\ref{fig:intro_example}, which shows an example one-hop downlink topology consisting of a transmitter node $I$, and two receiver nodes; $R_1$ and $R_2$. The two flows; $1$ and $2$ are destined to $R_1$ and $R_2$, respectively. $U_{I}^{1}(t)$ and $U_{I}^{2}(t)$ are per-flow queue sizes at time $t$. Let us focus on Fig.~\ref{fig:intro_example}(a). At time $t$, packets from the two flows arrive according to random arrival rates; $A_1(t)$ and $A_2(t)$, respectively. The packets are stored in per-flow queues.
The backpressure scheduling algorithm, also known as max-weight scheduling, determines the queue (hence the flow) from which packets should be transmitted at time $t$. The decision is based on queue backlog differences, {\em i.e., } $U_{I}^{1}(t) - U_{R_1}^{1}(t)$ and $U_{I}^{2}(t) - U_{R_2}^{2}(t)$, where $U_{R_1}^{1}(t)$ and $U_{R_2}^{2}(t)$ are per-flow queue sizes at $R_1$ and $R_2$, respectively. Since $R_1$ and $R_2$ are the destination nodes, the received packets are immediately passed to the higher layers, so $U_{R_1}^{1}(t) = U_{R_2}^{2}(t) = 0$, $\forall t$. Therefore, the scheduling algorithm makes the scheduling decision based on $U_{I}^{1}(t)$ and $U_{I}^{2}(t)$. In particular, the scheduling decision is $s^{*} = \argmax \{U_{I}^{1}(t),U_{I}^{2}(t)\}$ such that $s^{*} \in \{1,2\}$. Note that a packet(s) from flow $s^{*}$ is transmitted at time $t$. It was shown in \cite{tass_eph1}, \cite{tass_eph2} that if the arrival rates $A_1(t)$ and $A_2(t)$ are inside the stability region, the scheduling algorithm stabilizes the queues. Note that the arrival rates $A_1(t)$ and $A_2(t)$ are independent of the scheduling decisions, {\em i.e., } the scheduling decisions do not affect $A_1(t)$ and $A_2(t)$. However, this is not true if the flows are regulated by TCP as explained next.\hfill $\blacksquare$
\end{example}
The fundamental goal of TCP, which applies to all TCP variants, is to achieve as much bandwidth as possible while maintaining some level of long-term rate fairness across competing flows. By their very design, all TCP algorithms (both the widely employed loss-based versions and the delay-based ones) have their own ``clock'', which relies on end-to-end acknowledgement (ACK) packets. Based on the received ACKs, TCP determines whether and how many packets should be injected into the network by updating its window size.
{\em Example 1 - continued:} Let us consider Fig.~\ref{fig:intro_example}(b) to illustrate the interaction of backpressure and TCP. In Fig.~\ref{fig:intro_example}(b), packet arrivals are controlled by TCP. Let us consider that a loss-based TCP flavor, {\em e.g., } TCP-Reno or TCP-SACK, is employed. Assume that at time $t$, the TCP congestion window size of the first flow, {\em i.e., } $W_1(t)$, is small, {\em e.g., } $W_1(t) = 1$ segment, (note that a 1-segment window size may be seen at the beginning of a connection or after a re-transmit timeout), while the TCP congestion window size of the second flow is $W_2(t) > 1$ ({\em e.g., } it may be the case that flow 2 has been transmitting for some time until $t$, and it has already increased its window size). As depicted in the figure, the example queue occupancies at time $t$ are $U_{I}^{1}(t) = 1$ and $U_{I}^{2}(t) = 3$. Since $U_{I}^{2}(t) > U_{I}^{1}(t)$, a packet(s) from the second flow is transmitted. $R_2$ receives the transmitted packet, and passes it to TCP. TCP generates an ACK, and transmits it back to node $I$. The TCP source of flow $2$ at node $I$ increases its window size after receiving an ACK. Therefore, more packets are passed to $U_{I}^{2}(t)$. On the other hand, since $U_{I}^{1}(t) < U_{I}^{2}(t)$, no packets are transmitted from flow $1$. Thus, TCP does not receive any ACKs for the $1$st flow, does not increase its window size, and no (or sporadic) new packets are passed to $U_{I}^{1}(t)$. Eventually, the size of $U_{I}^{1}(t)$ almost never increases, so no packets are transmitted from flow $1$. Possible sample paths showing the evolution of $W_1$ and $W_2$ as well as $U_{I}^{1}$ and $U_{I}^{2}$ over time are shown in Fig.~\ref{fig:window_queue_evolution}. As can be seen, the joint behavior of TCP and backpressure is so detrimental that flow $1$ does not get any chance to transmit. We confirm this observation via simulations in Section~\ref{sec:performance}.\hfill $\blacksquare$
The incompatibility of backpressure is not limited to the loss-based versions of TCP. Delay-based TCP flavors, {\em e.g., } TCP Vegas, are also incompatible with backpressure, as TCP-Vegas has its own clock, which relies on end-to-end ACK packets to calculate round-trip times (RTTs). If some packets are trapped in buffers due to backpressure as in the above example, sporadic or no ACK packets are received. This increases RTTs, and reduces the end-to-end rate of TCP Vegas, as there is an inverse relationship between RTT and rate. Furthermore, backpressure leads to timeouts, which reduce the end-to-end rate in both loss-based and delay-based TCP versions, including the newer TCP versions TCP-Compound \cite{tcp_compound} and TCP-Cubic \cite{tcp_cubic}.
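The starvation effect in Example 1 can be reproduced with a few lines of simulation. The sketch below is purely illustrative (no real TCP dynamics are modelled): it couples each flow's arrivals to an idealized window that only grows when that flow is served, and runs classical max-weight scheduling over the two queues of Fig.~\ref{fig:intro_example}:

```python
def run(slots=200, coupled=True):
    """Classical max-weight over two queues (Fig. 1 topology).

    coupled=True mimics an idealized loss-based TCP: a flow's window --
    and hence its arrivals -- grows only when that flow is served.
    coupled=False uses fixed, scheduler-independent arrivals instead.
    """
    U = [1, 3]          # initial per-flow queue sizes U_I^1, U_I^2
    W = [1, 3]          # idealized congestion windows W_1, W_2
    served = [0, 0]     # packets transmitted per flow
    for _ in range(slots):
        s = 0 if U[0] > U[1] else 1      # max-weight: serve longer queue
        if U[s] > 0:
            U[s] -= 1
            served[s] += 1
            if coupled:
                W[s] += 1                # ACK received -> window grows
        # arrivals for the next slot
        if coupled:
            U[s] += W[s]                 # only the served flow sends more
        else:
            U[0] += 1
            U[1] += 1                    # scheduler-independent arrivals
    return served

print(run(coupled=True))   # flow 1 is never served: starvation
print(run(coupled=False))  # both flows get served
```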
\begin{figure}
\vspace{-10pt}
\centering
\scalebox{.4}{\includegraphics[bb=0 0 556 193]{window_queue_evolution_v2.eps}}
\vspace{-5pt}
\caption{\scriptsize Sample paths that show the evolution of $W_1$, $W_2$ and $U_{I}^{1}$, $U_{I}^{2}$ over time. Note that $W_1$, $W_2$ are the congestion window sizes of the TCP flows, and $U_{I}^{1}$, $U_{I}^{2}$ are the corresponding queue sizes for the example presented in Fig.~\ref{fig:intro_example}(b). Due to backpressure, $W_1$ does not increase, $U_{I}^{1}$ does not receive or transmit any packets, and its size stays the same; $U_{I}^{1}(t) = 1, \forall t$.}
\label{fig:window_queue_evolution}
\vspace{-20pt}
\end{figure}
In this paper, we propose ``TCP-aware backpressure'' that helps TCP and backpressure operate in harmony. In particular, TCP-aware backpressure takes into account the behavior of TCP flows, and gives a transmission opportunity to flows with short queues. This lets all TCP flows transmit their packets, so the TCP clock, which relies on packet transmissions and end-to-end ACKs, continues to operate.
Furthermore, the throughput of TCP flows improves by exploiting the routing and scheduling capabilities of backpressure. We note that backpressure introduces additional challenges when combined with TCP, such as out-of-order delivery, high-jitter RTTs, and packet losses due to corruption over wireless links. However, these challenges are not specific to backpressure, and exist whenever a multiple-path routing scheme over wireless networks is combined with TCP. We address these challenges by employing network coding (in Section~\ref{sec:algs}). Yet, the main focus of this paper is the incompatibility of TCP and backpressure and developing a TCP-aware backpressure framework.
The following are the key contributions of this work:
\begin{itemize}
\item We identify the mismatch between TCP and the backpressure framework; {\em i.e., } their joint behavior is so detrimental that some flows may never get a chance to transmit. In order to address the mismatch between TCP and backpressure, we develop ``TCP-aware backpressure routing and scheduling''.
\item We show that (i) TCP-aware backpressure routing and scheduling stabilizes queues for any feasible traffic as the classical backpressure \cite{tass_eph1}, \cite{tass_eph2}, (ii) TCP-aware backpressure routing and scheduling provides the same utility-optimal operation guarantee when combined with a flow control algorithm as the classical backpressure \cite{neely_mod}.
\item We provide implementation details and explain how to tune TCP-aware backpressure in practice so that it complies with TCP. Moreover, we combine network coding and TCP-aware backpressure to address the additional challenges such as out of order delivery, packet loss, and jitter. Thanks to employing network coding, which makes TCP flows sequence agnostic (with respect to packet IDs), TCP-aware backpressure fully complies with TCP.
\item We evaluate our schemes in a multi-hop setting, using ns-2 \cite{ns2}. The simulation results (i) confirm the mismatch of TCP and backpressure, (ii) show that TCP-aware backpressure is compatible with TCP, and significantly improves throughput as compared to existing adaptive routing schemes, (iii) demonstrate that TCP-aware backpressure provides fairness across competing TCP flows.
\end{itemize}
The structure of the rest of the paper is as follows. Section~\ref{sec:system} gives an overview of the system model. Section~\ref{sec:opt} presents TCP-aware backpressure design and analysis. Section~\ref{sec:algs} presents the implementation details of TCP-aware backpressure as well as its interaction with TCP. Section~\ref{sec:performance} presents simulation results.
Section~\ref{sec:related} presents related work. Section~\ref{sec:conclusion} concludes the paper.
\section{System Model}\label{sec:system}
We consider a general network model presented in Fig.~\ref{fig:main-example}, where flows may originate from a source in the Internet and traverse multiple hops to reach their destination in a wireless network. An end-to-end TCP connection is set up for each flow.
Our goal in this paper is to develop TCP-aware backpressure routing and scheduling algorithms that operate in the wireless network. In this direction, we first develop our algorithms using the Lyapunov optimization framework (which is presented in Section~\ref{sec:opt}) by taking into account the incompatibility of TCP and classical backpressure. In this section, we provide an overview of the system model and assumptions that we use to develop the TCP-aware backpressure. Note that the interaction and implementation of TCP-aware backpressure routing and scheduling with actual TCP flows are presented in Section~\ref{sec:algs}.
\begin{figure}
\vspace{-10pt}
\centering
\scalebox{.58}{\includegraphics[bb=0 0 400 165]{main_example_v2.eps}}
\vspace{-5pt}
\caption{\scriptsize A general network model that we consider in this paper. A flow may originate from a source in the Internet and traverse multiple hops to reach its destination in a wireless network. An end-to-end TCP connection is set up for each flow. We explore the performance of backpressure for TCP flows in the wireless network.}
\label{fig:main-example}
\vspace{-20pt}
\end{figure}
{\em Wireless Network Setup:} The wireless network consists of $N$ nodes and $L$ links, where $\mathcal{N}$ is the set of nodes and $\mathcal{L}$ is the set of links in the network. In this setup, each wireless node is able to perform routing and scheduling. Let $\mathcal{S}$ be the set of unicast flows between source-destination pairs in the network.
We consider in our formulation and analysis that time is slotted, and $t$ refers to the beginning of slot $t$.
{\em Channel Model:} At slot $t$, $\boldsymbol C(t)$ $= \{C_{1}(t),$ $...,$ $C_{l}(t),$ $..., C_{L}(t)\}$ is the channel state vector, where $l$ represents the edges such that $l = (i,j)$, $(i,j) \in \mathcal{L}$ and $i \neq j$. For the sake of analysis, we assume that $C_{l}(t)$ is the state of link $l$ at time $t$ and takes values from the set $\{ON,OFF\}$ according to a probability distribution which is i.i.d. over time slots. If $C_{l}(t) = ON$, packets can be transmitted with rate $R_l$. Otherwise; ({\em i.e., } if $C_{l}(t) = OFF$), no packets are transmitted. Note that our analysis can be extended to more general channel state models \cite{neely_book}. We also consider a Rayleigh fading model in our simulations.
Let $\Gamma_{\boldsymbol C(t)}$ denote the set of the link transmission rates feasible at time slot $t$ and for channel state $\boldsymbol C(t)$ and interference among wireless links. In particular, at every time slot $t$, the link transmission vector $\boldsymbol f(t) = \{f_1(t), ..., f_l(t), ... f_L(t)\}$ should be constrained such that $\boldsymbol f(t)$ $\in \Gamma_{\boldsymbol C(t)}$. Hence, $f_l(t)$ takes a value from the set $\{R_l,0\}$ depending on the channel state and interference among multiple wireless nodes. Also note that $\boldsymbol f(t)$ is determined by the scheduling algorithm.
{\em Stability Region:}
Let $(\lambda_s)$ be the vector of arrival rates $\forall s \in \mathcal{S}$. The network stability region $\Lambda$ is defined as the closure of all arrival rate vectors that can be stably transmitted in the network, considering all possible routing and scheduling policies \cite{tass_eph1}, \cite{tass_eph2}, \cite{neely_mod}. $\Lambda$ is fixed and depends only on channel statistics and interference.
{\em Flow Rates and Queue Evolution:} Each flow $s \in \mathcal{S}$ is generated at its source node according to an arrival process $A_s(t)$, $\forall s \in \mathcal{S}$ at time slot $t$. The arrivals are i.i.d. over the slots and $\lambda_s = E[A_s(t)]$, $\forall s \in \mathcal{S}$. We assume that $E[A_s(t)]$ and $E[A_s(t)^{2}]$ are finite. Note that we make the i.i.d. arrivals assumption for the purpose of designing and analyzing our algorithms in the Lyapunov optimization framework. This assumption is relaxed in the practical setup when we combine our algorithms with TCP flows in Section~\ref{sec:algs}.
Each node $i$ constructs a per-flow queue $\mathcal{U}_{i}^{s}$ for each flow $s \in \mathcal{S}$. The size of the per-flow queue $\mathcal{U}_{i}^{s}$ at time $t$ is $U_{i}^{s}(t)$.
Let $o(s)$ be the source node of flow $s$. The packets generated according to the arrival process $A_s(t)$ are inserted in the per-flow queue at node $o(s)$, {\em i.e., } in $\mathcal{U}_{o(s)}^{s}$. These queues only store packets from flow $s \in \mathcal{S}$. Each node $i$ such that $i \in \mathcal{N}$ and $i \neq o(s)$, may receive packets from its neighboring nodes and insert them in $\mathcal{U}_{i}^{s}$. The transmission rate of flow $s$ from node $i$ to node $j$ is $f_{i,j}^{s}(t)$. Since the link transmission rate over link $(i,j)$ is $f_{i,j}(t)$ at time $t$, multiple flows could share the available rate, {\em i.e., } $\sum_{s \in \mathcal{S}} f_{i,j}^{s}(t) \leq f_{i,j}(t)$. Accordingly, at every time slot $t$, the size of per-flow queues, {\em i.e., } $U_{i}^{s}(t)$ evolves according to the following dynamics.
\begin{align} \label{eq:queue_U}
& U_{i}^{s}(t+1) \leq \max [U_{i}^{s}(t) - \sum_{j \in \mathcal{N}} f_{i,j}^{s}(t), 0] + \sum_{j \in \mathcal{N}} f_{j,i}^{s}(t) \nonumber \\
& + A_{s}(t)1_{[i=o(s)]},
\end{align} where $1_{[i=o(s)]}$ is an indicator function, which is $1$ if $i=o(s)$, and $0$, otherwise. Note that Eq.~(\ref{eq:queue_U}) is inequality, because the number of packets in the queue $U_{j}^{s}(t)$ may be less than $ f_{j,i}^{s}(t)$.
\section{TCP-Aware Backpressure: Design and Analysis}\label{sec:opt}
In this section, we design and analyze the TCP-aware backpressure scheme. In particular, we provide a stochastic control strategy including routing and scheduling to address the incompatibility between TCP and classical backpressure.
\textbf{\underline{TCP-Aware Backpressure:}}
\begin{itemize}
\item \textbf{Routing \& Intra-Node Scheduling.} The routing \& intra-node scheduling part of TCP-aware backpressure determines a flow $s$ from which packets should be transmitted at slot $t$ from node $i$, as well as the next hop node $j$ to which packets from flow $s$ should be forwarded. The algorithm works as follows.
Node $i$ observes per-flow queue backlogs in all neighboring nodes at time $t$, and determines queue backlog difference according to:
\begin{align} \label{eq:per_flow_difference}
D_{i,j}^{s}(t) = \max\{K,U_{i}^{s}(t)\} - U_{j}^{s}(t),
\end{align} where $K$ is a non-negative finite constant. Let $l = (i,j)$ s.t. $j \in \mathcal{N}$ and $j \neq i$, and write $D_{l}^{s}(t)$ for $D_{i,j}^{s}(t)$. The maximum queue backlog difference among all flows over link $l \in \mathcal{L}$ is;
\begin{align} \label{eq:per_link_difference}
D_{l}^{*}(t) = \max_{[s \in \mathcal{S} | l \in \mathcal{L}_{s}]} \{ D_{l}^{s}(t) \}.
\end{align}
The flow that maximizes the queue backlog difference over link $l$ is $s_{l}^{*}(t)$ and is expressed as;
\begin{align} \label{eq:selected_flow}
s_{l}^{*}(t) = \argmax_{[s \in \mathcal{S} | l \in \mathcal{L}_{s}]} \{ D_{l}^{s}(t) \}.
\end{align}
At time slot $t$, one or more packets are selected from the queue $\mathcal{U}_{i}^{s_{l}^{*}(t)}$ if $D_{l}^{*}(t)$ $>$ $0$ and $\mathcal{U}_{i}^{s_{l}^{*}(t)}$ has enough packets for transmission. The transmission of the selected packets depends on the channel conditions and interference constraints, and is determined by inter-node scheduling.
Note that TCP-aware backpressure uses queue backlog difference $\max\{K,U_{i}^{s}(t)\} - U_{j}^{s}(t)$ in Eq.~(\ref{eq:per_flow_difference}) instead of $U_{i}^{s}(t) - U_{j}^{s}(t)$ in the classical backpressure. The advantage of using Eq.~(\ref{eq:per_flow_difference}) in TCP-aware backpressure is that node $i$ may select packets from flow $s$ even if queue size $U_{i}^{s}(t)$ is small.\footnote{\scriptsize Note that place-holder backlogs, such as using $U_{i}^{s}(t)+K$ instead of $U_{i}^{s}(t)$ has been considered in the literature \cite{neely_book}. Although place-holder algorithms are beneficial to improve end-to-end delay, they do not solve the problem we consider in this paper as they do not give transmission opportunity to small queues.} This advantage is clarified through an illustrative example later in this section.
\item \textbf{Inter-Node Scheduling.} The inter-node scheduling (also called resource allocation \cite{neely_mod}) part of TCP-aware backpressure determines link transmission rates considering the link state information and interference constraints.
Each node $i$ observes the channel state $\boldsymbol C(t)$ at time $t$, and determines a transmission vector $\boldsymbol f(t) = \{f_1(t), ...,$ $f_l(t), ... f_L(t)\}$ by maximizing $\sum_{l \in \mathcal{L}} D_{l}^{*}(t) f_l(t)$. Note that $\boldsymbol f(t)$ should be constrained such that $\boldsymbol f(t) \in \Gamma_{\boldsymbol C(t)}$, {\em i.e., } interference among multiple nodes should be taken into account. The resulting transmission rate $f_{l}(t)$ is used to transmit packets of flow $s_{l}^{*}(t)$ over link $l$.
\end{itemize}
\begin{theorem}\label{theorem1}
If channel states are i.i.d. over time slots, the arrival rates $\lambda_s$, $\forall s \in \mathcal{S}$ are interior to the stability region $\Lambda$, and $K$ is a non-negative finite constant, then TCP-aware backpressure stabilizes the network and the total average queue size is bounded.
\end{theorem}
{\em Proof:} The proof is provided in Appendix A. \hfill $\blacksquare$
\begin{example}\label{ex2}
Let us consider again Fig.~\ref{fig:intro_example}(b) for the operation of TCP-aware backpressure. The example queue occupancies at time $t$ are $U_{I}^{1}(t) = 1$ and $U_{I}^{2}(t) = 3$. Assume that $K$ in Eq.~(\ref{eq:per_flow_difference}) is chosen as $K=10$. According to TCP-aware backpressure, the scheduling algorithm makes a decision based on the rule $s^{*} = \argmax \{\max \{K,U_{I}^{1}(t)\},\max$ $\{K,U_{I}^{2}(t)\}\}$ such that $s^{*} \in \{1,2\}$. Since $\max$ $\{K,U_{I}^{s}(t)\}$ $=$ $10$, $s=1,2$, both flows get an equal chance for transmission. Thus, the congestion window sizes of both TCP flows evolve in time, and the TCP flows can transmit their packets. We note that one can extend this example to the case $U_{I}^{1}(t) = 7$ and $U_{I}^{2}(t) = 12$. In this case, as $K=10$, packets from the first flow may not get any chance for transmission. Therefore, it is crucial to determine $K$ in practice, which we explain in Section~\ref{sec:algs}. \hfill $\blacksquare$
\end{example}
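The flow selection of Example 2 can be reproduced with a small sketch (the function names and $K=10$ are illustrative; destination queues are empty as in the example):

```python
K = 10  # illustrative value; choosing K in practice is discussed later

def backlog_diff(U_i, U_j, tcp_aware=True):
    """TCP-aware difference max{K, U_i^s} - U_j^s, or the classical
    difference U_i^s - U_j^s when tcp_aware is False."""
    return (max(K, U_i) if tcp_aware else U_i) - U_j

def pick_flow(U_i, U_j, tcp_aware=True):
    """Return the set of flows maximizing the backlog difference on a
    link (i, j), together with the maximal difference D*."""
    D = {s: backlog_diff(U_i[s], U_j[s], tcp_aware) for s in U_i}
    best = max(D.values())
    return {s for s in D if D[s] == best}, best

# Example 1/2 queue sizes: U_I^1 = 1, U_I^2 = 3; destination queues empty
U_i, U_j = {1: 1, 2: 3}, {1: 0, 2: 0}
print(pick_flow(U_i, U_j, tcp_aware=False))  # ({2}, 3): flow 1 starves
print(pick_flow(U_i, U_j, tcp_aware=True))   # ({1, 2}, 10): both eligible
```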
Note that we propose TCP-aware backpressure; its routing, intra-node scheduling, and inter-node scheduling parts, to work with TCP and TCP's end-to-end flow control mechanism. In the next section, we provide implementation details. However, TCP-aware backpressure can also be combined with flow control schemes other than TCP's, which is important for two reasons: (i) it may be possible or preferable to use personalized flow control mechanisms instead of TCP's in some systems, (ii) there may be both TCP and non-TCP flows in some systems, where a TCP-friendly flow control mechanism for the non-TCP flows is crucial to accommodate both types. We consider the following flow control algorithm, developed in \cite{neely_mod}, to complement TCP-aware backpressure for non-TCP flows.
The flow control algorithm at node $i$ determines the number of packets from flow $s$ that should be passed to the per-flow queues; $\mathcal{U}_{i}^{s}$ at every time slot $t$ according to;
\begin{align} \label{eq:flow_control}
\max_{\boldsymbol x} & \sum_{[s \in \mathcal{S} | i=o(s)]} [Mg_{s}(x_s(t)) - U_{i}^{s}(t) x_{s}(t) ] \nonumber \\
\mbox{s.t. } & \sum_{[s \in \mathcal{S} | i=o(s)]} x_{s}(t) \leq R_{i}^{max}
\end{align} where $R_{i}^{max}$ is a constant larger than the maximum outgoing rate from node $i$, $M$ is a positive constant, $x_s(t)$ is the rate of packets that will be inserted to the per-flow queue $\mathcal{U}_{i}^{s}$, and $g_{s}(.)$ is the utility function of flow $s$.
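For concreteness, with logarithmic utilities $g_{s}(x) = \log x$ (an assumption for illustration; the text allows general $g_s$), the KKT conditions of the above problem give $x_s = M/(U_{i}^{s} + \mu)$ with multiplier $\mu \geq 0$ chosen so that the rate budget holds, which a simple bisection can find:

```python
def flow_control(U, M, R_max, iters=100):
    """Maximize sum_s [M*log(x_s) - U_s*x_s] s.t. sum_s x_s <= R_max.
    KKT: x_s = M / (U_s + mu), with mu >= 0 picked by bisection so the
    budget binds when the unconstrained solution is infeasible."""
    x = [M / u for u in U]                  # unconstrained maximizer
    if sum(x) <= R_max:
        return x
    lo, hi = 0.0, M * len(U) / R_max        # at hi: sum M/(U_s+hi) <= R_max
    for _ in range(iters):
        mu = (lo + hi) / 2
        if sum(M / (u + mu) for u in U) > R_max:
            lo = mu
        else:
            hi = mu
    return [M / (u + hi) for u in U]

# longer queues throttle admission: the flow with U=8 gets a lower rate
print(flow_control(U=[2.0, 8.0], M=10.0, R_max=4.0))
```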
\begin{theorem}\label{theorem2}
If there are only non-TCP flows in the system and they employ the flow control algorithm in Eq.~(\ref{eq:flow_control}) and TCP-aware backpressure (with a non-negative finite value of $K$ in Eq.~(\ref{eq:per_flow_difference})), then the admitted flow rates converge to the utility-optimal operating point (as with classical backpressure) in the stability region $\Lambda$ with increasing $M$.
\end{theorem}
{\em Proof:}
The proof of Theorem~\ref{theorem2} directly follows when Appendix A and the drift-plus-penalty approach \cite{neely_mod} are combined. \hfill $\blacksquare$
\section{TCP-Aware Backpressure: Implementation \& Interaction with TCP}\label{sec:algs}
We present practical implementation details of TCP-aware backpressure as well as its interaction with different layers in the protocol stack (summarized in Fig.~\ref{fig:protocol_stack}).
\begin{figure}
\hspace{-5pt}
\centering
\scalebox{.38}{\includegraphics[bb=0 0 576 272]{protocol_stack_v2.eps}}
\caption{\scriptsize TCP-aware backpressure operations at edge-points and intermediate nodes. The {\em inter-node scheduling} and {\em routing and intra-node scheduling} parts of TCP-aware backpressure are implemented on top of 802.11 MAC and in the network layer, respectively. The {\em NC layer} is implemented as a slim layer above the network layer at the edge points. The transport layer, {\em i.e., } TCP, only exists if the edge point is a TCP source.
}
\vspace{-10pt}
\label{fig:protocol_stack}
\vspace{-20pt}
\end{figure}
\subsection{Implementation}
\subsubsection{Inter-Node Scheduling}
The inter-node scheduling part of TCP-aware backpressure determines which links should be activated at time $t$. Inter-node scheduling is a hard problem \cite{tutorial_doyle}, \cite{lin_schroff_tutorial}, so its practical implementation is challenging. Therefore, we implement a low-complexity version in our system on top of IEEE 802.11 MAC, as seen in Fig.~\ref{fig:protocol_stack}. The implementation details are as follows.
Each node uses 802.11 MAC to access the wireless medium. When a node $i$ is assigned a channel by the MAC protocol, inter-node scheduling determines the neighboring node to which a selected packet should be forwarded. Let us assume that a packet is selected from flow $s_{i,j}^{*}(t)$ to be forwarded to node $j$ by the routing and intra-node scheduling algorithm, which we explain later in this section. The next hop to which the selected packet should be forwarded, $j^{*}$, is determined by $j^{*} = \argmax_{j \in \mathcal{N}} \{D_{i,j}^{*} \tilde{R}_{i,j} (1 - \tilde{p}_{i,j})\}$, where $\tilde{p}_l$ and $\tilde{R}_l$ are the estimated values of $p_l$ (loss probability) and $R_l$ (link transmission rate) over link $l=(i,j)$, respectively.\footnote{\scriptsize $\tilde{p}_l$ is calculated as one minus the ratio of successfully transmitted packets over all transmitted packets during a time interval $T$ on link $l$. $\tilde{R}_l$ is calculated as the average of the recent (over an interval) link rates over link $l$.}
Then, a packet from flow $s_{i,j^{*}}^{*}(t)$, {\em i.e., } from the network layer queue $U_{i}^{s_{i,j^{*}}^{*}(t)}$, is removed and passed to the MAC layer for transmission. The MAC layer transmits the packet to node $j^{*}$.
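For concreteness, the next-hop rule above can be sketched as follows. This is an illustrative sketch, not the actual implementation: the container names (`backlog_diff`, `rate_est`, `loss_est`) are assumptions, and the backlog differences $D_{i,j}^{*}$ are taken as given inputs computed elsewhere.

```python
def estimate_loss(acked, sent):
    """p~_l: one minus the ratio of successfully transmitted packets
    over all transmitted packets on link l during an interval."""
    return 0.0 if sent == 0 else 1.0 - acked / sent

def select_next_hop(backlog_diff, rate_est, loss_est):
    """Return the neighbor j maximizing D*_{i,j} * R~_{i,j} * (1 - p~_{i,j}).
    All three arguments are dicts keyed by neighbor id (illustrative)."""
    return max(backlog_diff,
               key=lambda j: backlog_diff[j] * rate_est[j] * (1.0 - loss_est[j]))
```

A neighbor with a large backlog difference but a lossy or slow link can thus lose to a neighbor with a smaller difference over a clean, fast link.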
\subsubsection{Routing and Intra-Node Scheduling}
This algorithm determines the next hop(s) to which packets should be forwarded, and the packets that should be transmitted.
We construct per-flow queues, {\em i.e., } $\mathcal{U}_{i}^{s}$, at the network layer\footnote{\scriptsize Note that constructing per-flow queues at each node may not be feasible in some systems. However, this aspect is orthogonal to the focus of this paper, and the techniques developed in the literature \cite{diffmax}, \cite{locbui} to address this problem are complementary to our TCP-aware backpressure framework.}, where the routing and intra-node scheduling algorithm operates, as seen in Fig.~\ref{fig:protocol_stack}. The algorithm requires each node to know the queue sizes of its neighbors. To achieve this, each node $i$ transmits a message containing its per-flow queue sizes $U_{i}^{s}$ at time $t$. These messages are piggy-backed onto data packets. If there is no data transmission for some time duration, our algorithm uses independent control packets to exchange the queue size information. The transmitted message is overheard by all nodes in the neighborhood.
The queue size information is extracted from the overheard messages and recorded for future decisions.
At node $i$ at time $t$, the queue backlog difference is calculated according to Eq.~(\ref{eq:per_flow_difference}). Note that, although the algorithm exactly knows $U_{i}^{s}(t)$ at time $t$, it is difficult to exactly know $U_{j}^{s}(t)$ at time $t$. Therefore, the most recent report (until time $t$) of the size of $\mathcal{U}_{j}^{s}$ is used instead of $U_{j}^{s}(t)$.
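The bookkeeping of overheard reports can be sketched as below; the class and method names are illustrative assumptions, not the authors' data structures. The key point from the text is that the most recent report (until time $t$) of $U_{j}^{s}$ stands in for the true $U_{j}^{s}(t)$.

```python
class NeighborQueueView:
    """Tracks the most recent overheard report of each neighbor's
    per-flow queue size (illustrative sketch)."""

    def __init__(self):
        # (neighbor j, flow s) -> (report time, queue size)
        self._latest = {}

    def record(self, j, s, t, size):
        """Record an overheard report unless a newer one is already stored."""
        key = (j, s)
        if key not in self._latest or t >= self._latest[key][0]:
            self._latest[key] = (t, size)

    def queue_size(self, j, s):
        """Most recent report of U_j^s; 0 if nothing overheard yet."""
        return self._latest.get((j, s), (None, 0))[1]
```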
When a transmission opportunity for link $(i,j)$ arises via the inter-node scheduling algorithm, a packet from flow $s_{i,j}^{*}(t)$ is selected and passed to the MAC layer for transmission.
\subsubsection{Network Coding}
Out-of-order delivery, high jitter in RTTs, and packet losses over wireless links are among the challenges when backpressure and TCP are combined. We address these challenges by employing network coding \cite{NC_meets_TCP}, \cite{multipath_tcp_toledo}, \cite{i2nc}. This is an effective solution thanks to properties of network coding such as masking wireless losses and making packets sequence agnostic in terms of packet IDs. We summarize our implementation in the following.
We implement the generation based network coding \cite{practical_NC} at the edge points of the wireless network ({\em e.g., } access point, base station, proxy, or TCP source itself) as a slim network coding layer (NC layer) above the network layer as shown in Fig.~\ref{fig:protocol_stack}. Note that we do not make any updates to TCP, which makes our approach amenable to practical deployment.
The NC layer at the edge point receives and packetizes the data stream into packets $\eta_1^{s}, \eta_2^{s}, ...$ of flow $s \in \mathcal{S}$. The stream of packets is divided into blocks of size $H_s$, which is set to the TCP congestion window size (or its average). The packets within the same block are linearly combined (assuming a large enough field size) to generate $H_s$ network coded packets; $a_1^{s} = \alpha_{1,1} \eta_1^{s}$, $a_2^{s} = \alpha_{2,1}\eta_1^{s} + \alpha_{2,2}\eta_2^{s}$, $...$, $a_{H_{s}}^{s} = \alpha_{H_{s},1}\eta_1^{s} + ... + \alpha_{H_{s},H_{s}}\eta_{H_{s}}^{s}$, where $\alpha_{i,j}$, $\forall i,j$ are network coding coefficients from a finite field. Note that network coded packets are generated incrementally to avoid coding delay \cite{practical_NC}, \cite{i2nc}. The NC layer adds a network coding header including block ID, packet ID, block size, and coding coefficients. The network coded packets are routed and scheduled by TCP-aware backpressure.
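The incremental, lower-triangular encoding $a_k^{s} = \sum_{m \le k} \alpha_{k,m}\,\eta_m^{s}$ can be sketched as follows. This is a toy: practical network coding uses $GF(2^8)$ byte arithmetic, while here we use the prime field $GF(257)$ with one symbol per packet purely for illustration.

```python
P = 257  # toy prime field; real implementations typically use GF(2^8)

def incremental_encode(block, coeffs):
    """a_k = sum_{m<=k} alpha_{k,m} * eta_m (mod P).
    Each coded packet mixes only the source packets seen so far,
    so coding can proceed incrementally without waiting for the
    whole block (avoiding coding delay).
    block:  list of source symbols eta_1..eta_H
    coeffs: lower-triangular coefficients, coeffs[k][m] = alpha_{k+1,m+1}
    """
    coded = []
    for k in range(len(block)):
        coded.append(sum(coeffs[k][m] * block[m] for m in range(k + 1)) % P)
    return coded
```

Note that the first coded packet depends only on $\eta_1$, matching $a_1^{s} = \alpha_{1,1}\eta_1^{s}$ above.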
At the receiver node, when the NC layer receives a packet from a new block, it considers the received packet as the first packet in the block. It generates an ACK, sends the ACK back to the NC layer at the edge point, which matches this ACK to packet $\eta_1$, converts this ACK to $\eta_1$'s ACK, and transmits the ACK information to the TCP source. Similarly, ACKs are generated at the receiver side for the second, third, etc. received packets. As long as the NC layer at the receiver transmits ACKs, the TCP clock moves, and the window continues to advance.
The NC layer stores the received network coded packets in a buffer. When the last packet from a block is received, the packets are decoded and passed to the application layer. If some packets are lost in the wireless network, the receiver-side NC layer makes a request with the block ID and the number of missing packets; the edge-point-side NC layer then generates additional network coded packets from the requested block and sends them to the receiver. Note that the missing packet IDs are not mentioned in the request, since network coding makes the packets sequence agnostic in terms of packet IDs.
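Decoding a block amounts to solving the linear system formed by the received coded packets once $H_s$ linearly independent ones have arrived. A minimal sketch over the same toy prime field (assuming a full-rank coefficient matrix; real systems work in $GF(2^8)$):

```python
P = 257  # toy prime field, as in the encoding sketch

def solve_mod(A, b, p=P):
    """Gaussian elimination over GF(p): recover the source symbols eta
    from coded symbols b = A @ eta (mod p). Assumes A is invertible,
    i.e., the received coded packets are linearly independent."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        # Find a pivot row and swap it into place.
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        # Normalize the pivot row (modular inverse via Fermat's little theorem).
        inv = pow(M[col][col], p - 2, p)
        M[col] = [x * inv % p for x in M[col]]
        # Eliminate the column from all other rows.
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]
```

This also illustrates why the retransmission request only needs a count: any fresh independent combination adds one equation, regardless of which original packets were lost.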
Network coding makes packets sequence agnostic, which solves the out-of-order delivery problem and eliminates jitter. Network coding also corrects packet losses in the wireless network as explained above. We explain how our system and the NC layer react to congestion-based losses later in this section.
\subsection{Interaction with TCP}
\subsubsection{Congestion Control}
Now, let us consider the interaction of TCP congestion control and TCP-aware backpressure using well-known classical TCP analysis \cite{twsly_tcp}, \cite{low_tcp}. Using a similar approach to \cite{twsly_tcp}, \cite{low_tcp}, and as detailed in \cite{tcp_aware_bp_tech_rep}, we find the steady-state TCP throughput for flow $s$ as $x_{s}^{2} = \frac{(1-q_{o(s)}^{s})}{T_{s}^{3}q_{o(s)}^{s}}$, where $q_{o(s)}^{s}$ is the buffer overflow probability at the TCP source/edge node $o(s)$, and $T_s$ is the constant RTT.\footnote{\scriptsize The constant RTT is a common assumption in classical TCP analysis \cite{twsly_tcp}, \cite{low_tcp}, and is also valid in our setup thanks to employing network coding, which reduces jitter in RTT and makes the constant-RTT assumption valid.}
Note that the steady-state TCP throughput depends on the buffer overflow probability only at the source/edge node, unlike in \cite{twsly_tcp}, \cite{low_tcp}, where TCP throughput depends on the buffer overflow probability at all nodes along the path of the TCP flow.\footnote{\scriptsize Note that steady-state TCP throughput does not depend on packet trapping events thanks to employing Eq.~(\ref{eq:per_flow_difference}). This does not hold for classical backpressure, because some packets may be trapped in buffers, which reduces TCP throughput and should be taken into account in the steady-state TCP throughput analysis.} The reason is that congestion in the wireless network is controlled by TCP-aware backpressure, and we do not expect losses due to congestion (buffer overflow) at the intermediate nodes. In particular, as TCP-aware backpressure makes transmission decisions based on queue backlog differences according to Eq.~(\ref{eq:per_flow_difference}), it would not transmit packets if the next-hop queue is congested. Therefore, congestion-based losses only occur at the source/edge node. In our implementation, if the buffer at the source/edge node is congested, then a packet from the flow with the largest queue size is dropped. This congestion-based loss information is passed to the NC layer. The NC layer creates a loss event by not masking the dropped packet so that TCP can detect the congestion-based loss event and back off.
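The steady-state relation $x_{s}^{2} = (1-q_{o(s)}^{s})/(T_{s}^{3}q_{o(s)}^{s})$ is easy to evaluate numerically; the small helper below is an illustration of that formula, with $q$ and $T$ as the overflow probability and constant RTT.

```python
def steady_state_throughput(q, T):
    """x_s = sqrt((1 - q) / (T^3 * q)), the steady-state TCP throughput
    as a function of the source/edge overflow probability q (0 < q < 1)
    and the constant RTT T."""
    return ((1.0 - q) / (T ** 3 * q)) ** 0.5
```

As expected, throughput falls monotonically as the overflow probability at the source/edge node grows, and falls with longer RTTs.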
\subsubsection{Selection of $K$}
TCP-aware backpressure uses queue backlog difference in Eq.~(\ref{eq:per_flow_difference}), which depends on $K$, to make routing and scheduling decisions. As noted in Section~\ref{sec:opt}, the selection of $K$ is crucial in practice to make TCP and backpressure fully comply.
In particular, if $K$ is selected too small, the number of packets that are trapped in the buffers, {\em i.e., } the number of packets that do not get a transmission opportunity, increases. This reduces TCP throughput. On the other hand, if $K$ is too large, TCP-aware backpressure may not exploit the throughput improvement benefit of backpressure routing and scheduling, as the ability to identify good routing and scheduling policies diminishes for large $K$ values.
Our intuition is that flows passing through node $i$, {\em i.e., } $s \in \mathcal{S}_{i}$, should share the available buffer fairly. Assume that $B_i$ is the available buffer size at node $i$. In order to give a transmission opportunity to all TCP flows and provide some level of fairness across the competing TCP flows, we set $K = B_{i} / |\mathcal{S}_{i}|$ at node $i$. In this setting, if per-flow queue sizes are smaller than $K$, it is highly likely that packets from all TCP flows are transmitted. On the other hand, if some per-flow queue sizes are larger than $K$, packets from the flows with smaller queue sizes may still be trapped in the buffers. However, in this case, since the total buffer occupancy is large, the buffer overflow probability at the source/edge node increases. Upon buffer overflow, the TCP flow with the largest queue size reduces its rate (since upon congestion a packet from the largest per-flow queue is dropped). This reduces the queue sizes, and packets from all flows can be transmitted again.
{\em Example 2 - continued:} Let us consider again Fig.~\ref{fig:intro_example}(b). If the queue occupancies are $U_{I}^{1}(t) = 7$, $U_{I}^{2}(t) = 12$, and $K=10$, packets only from the second flow are transmitted. Since $K=10$ and we set $K = B_{I} / |\mathcal{S}_{I}|$, and $|\mathcal{S}_{I}|=2$, the buffer size is $B_{I}=20$. The total queue occupancy is $U_{I}^{1}(t)+U_{I}^{2}(t)=19$. This means that the buffer at node $I$ is about to overflow, which will lead to back-off for the second flow (since a packet from the largest queue will be dropped). Thus, the TCP rate and queue size of the second flow will reduce, and the first flow will get transmission opportunity.
\hfill $\blacksquare$
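The $K$ selection and the drop-from-largest-queue policy from the example can be captured in two one-line helpers; these are illustrative sketches of the stated rules, not the authors' code, and the numbers in the test reproduce Example 2 ($B_I=20$, two flows, queues 7 and 12).

```python
def choose_K(buffer_size, num_flows):
    """K = B_i / |S_i|: share node i's buffer fairly among its flows."""
    return buffer_size / num_flows

def drop_victim(queues):
    """On buffer overflow, drop a packet from the flow with the
    largest per-flow queue (queues: flow id -> queue size)."""
    return max(queues, key=queues.get)
```

With $B_I=20$ and two flows, $K=10$; when the occupancies reach $7+12=19$, the buffer is about to overflow, and the victim is the second flow, as in the example.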
We have observed through simulations that TCP-aware backpressure, when $K$ is set to $B_{i} / |\mathcal{S}_{i}|$, significantly reduces the number of packets trapped in the buffers. Yet, very few packets may still be trapped. Such packets are easily masked thanks to the error-correction capability of network coding. Note that network coding does not help if a large number of packets is trapped in the buffers ({\em e.g., } when $K$ is selected too small), as a large number of trapped packets increases end-to-end delay substantially, which leads to multiple timeouts and reduces TCP throughput.
\section{Performance Evaluation}\label{sec:performance}
We simulate our scheme, TCP-aware backpressure (TCP-aware BP), as well as classical backpressure (classical BP), in ns-2 \cite{ns2}. The simulation results (i) confirm the mismatch of TCP and classical BP, (ii) show that TCP-aware BP is compatible with TCP and significantly improves throughput as compared to existing routing schemes such as Ad-hoc On-Demand Distance Vector (AODV) \cite{aodv}, and (iii) demonstrate that TCP-aware BP provides fairness across competing TCP flows. Next, we present the simulator setup and results in detail.
\begin{figure*}[t!]
\vspace{-5pt}
\centering
\subfigure[\scriptsize Tree topology]{ \label{fig:topologies_a} \scalebox{.40}{\includegraphics[bb=0 0 246 164]{tree_topology.eps}} }
\subfigure[\scriptsize Diamond topology]{ \label{fig:topologies_b} \scalebox{.45}{\includegraphics[bb=0 0 246 164]{diamond_topology.eps}} }
\subfigure[\scriptsize Grid topology]{ \label{fig:topologies_c} \scalebox{.35}{\includegraphics[bb=0 0 299 179]{grid_topology.eps}} }
\vspace{-5pt}
\caption{\scriptsize Topologies used in simulations; (a) tree topology, (b) diamond topology, (c) grid topology.}
\vspace{-15pt}
\label{fig:topologies}
\end{figure*}
\subsection{Simulation Setup}
We consider three topologies: a tree topology, a diamond topology, and a grid topology, shown in Fig.~\ref{fig:topologies}. The nodes are placed over a $500m \times 500m$ terrain, and $S_1$, $S_2$ and $R_1$, $R_2$ are possible source-receiver pairs in the tree and diamond topologies. In the grid topology, $4 \times 3$ cells are placed over an $800m \times 600m$ terrain. A gateway, which is connected to the Internet, passes flows to the nodes. Each node communicates with other nodes in its cell or neighboring cells, and there are $12$ nodes randomly placed in the cells.
We consider FTP/TCP traffic, and employ TCP-SACK and TCP-Vegas in our simulations. TCP flows start at random times within the first $5sec$ of the simulation and are on until the end of the simulation, which is $200sec$. IEEE 802.11b is used in the MAC layer. In terms of the wireless channel, we simulate the two-ray path loss model and a Rayleigh fading channel with average loss rates of $0, 20, 30, 40, 50\%$.
The channel capacity is $2Mbps$, the buffer size at each node is set to $100$ packets, and the packet size is set to $1000B$. We repeated each $200sec$ simulation with 10 seeds.
We compare our scheme, TCP-aware BP, to classical BP and AODV. For a fair comparison, we employ the network coding mechanism explained in Section~\ref{sec:algs} in classical BP as well as in AODV. The comparisons are in terms of per-flow and total transport-level throughput (summed over all flows) as well as fairness. For the fairness calculation, we use Jain's fairness index \cite{fairness_index}: $F = \frac{(\sum_{s \in \mathcal{S}} \bar{x}_s)^2}{|\mathcal{S}|(\sum_{s \in \mathcal{S}} (\bar{x}_s)^2)}$, where $\mathcal{S}$ is the set of flows and $\bar{x}_s$ is the average throughput of flow $s$.
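Jain's index is a direct translation of the formula above into code; $F=1$ when all flows get equal throughput, and $F$ approaches $1/|\mathcal{S}|$ when one flow takes everything.

```python
def jain_fairness(throughputs):
    """F = (sum x)^2 / (|S| * sum x^2) for a list of average
    per-flow throughputs; returns a value in (0, 1]."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))
```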
\subsection{Simulation Results}
Fig.~\ref{fig:tree_thrpt_time_results} shows throughput vs. time graphs for TCP-aware BP and classical BP.
There are two flows; Flow 1 is transmitted from node $A$ to node $B$, and Flow 2 is transmitted from node $A$ to node $D$. The links are not lossy. Fig.~\ref{fig:tree_thrpt_time_results}(a) and (b) are the results for TCP-SACK, while Fig.~\ref{fig:tree_thrpt_time_results}(c) and (d) are for TCP-Vegas. Fig.~\ref{fig:tree_thrpt_time_results}(b) shows that while Flow 1 is able to transmit, Flow 2 does not get any transmission opportunity in classical BP due to the mismatch between the congestion window size update mechanism of TCP and the queue-size-based routing and scheduling of backpressure. On the other hand, in TCP-aware BP, both flows get a chance to transmit. In particular, Flow 1 and Flow 2 achieve average throughputs of $205.76 kbps$ and $203.36 kbps$, respectively. Fig.~\ref{fig:tree_thrpt_time_results}(c) and (d) show throughput vs. time graphs of TCP-aware BP and classical BP for TCP-Vegas. Although classical BP performs better with TCP-Vegas than with TCP-SACK due to the delay-based mechanism of TCP-Vegas, its performance is still quite poor, as the throughput of Flow 2 frequently goes to 0 as seen in Fig.~\ref{fig:tree_thrpt_time_results}(d). On the other hand, TCP-aware BP improves the throughput of both flows as seen in Fig.~\ref{fig:tree_thrpt_time_results}(c), where Flow 1 and Flow 2 achieve $469.36 kbps$ and $324.64 kbps$, respectively. Similar results are presented in Fig.~\ref{fig:diamond_thrpt_time_results} for the diamond topology.
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize{TCP-Aware BP with TCP-SACK}]{{\includegraphics[width=4cm]{tree_topology_tcp_bp_thrpt_vs_time.eps}}}
\subfigure[\scriptsize{BP with TCP-SACK}]{{\includegraphics[width=4cm]{tree_topology_bp_thrpt_vs_time.eps}}} \hspace{-0pt} \\
\subfigure[\scriptsize{TCP-Aware BP with TCP-Vegas}]{{\includegraphics[width=4cm]{tree_topology_tcp_bp_thrpt_vs_time_vegas.eps}}}
\subfigure[\scriptsize{BP with TCP-Vegas}]{{\includegraphics[width=4cm]{tree_topology_bp_thrpt_vs_time_vegas.eps}}} \hspace{-0pt}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:tree_thrpt_time_results} \scriptsize Throughput vs. time in the tree topology for TCP-SACK and TCP-Vegas. There are two flows; Flow 1 is transmitted from node $A$ to node $B$, and Flow 2 is transmitted from node $A$ to node $D$. The links are not lossy.
}
\vspace{-15pt}
\end{center}
\vspace{-10pt}
\end{figure}
Fig.~\ref{fig:diamond_thrpt_vs_loss_sack} demonstrates throughput and fairness vs. average loss rate results of TCP-aware BP and AODV in the diamond topology. There are two flows transmitted from node $A$ to $B$ (Flow 1) and $A$ to $D$ (Flow 2). The link $A-B$ is a lossy link. The version of TCP is TCP-SACK.
Fig.~\ref{fig:diamond_thrpt_vs_loss_sack}(a) shows that TCP-aware BP improves throughput significantly as compared to AODV thanks to adaptive routing and scheduling. The throughput improvement of TCP-aware BP over AODV increases as the loss probability increases, thanks to the loss-aware routing and scheduling mechanism of TCP-aware BP. Moreover, Fig.~\ref{fig:diamond_thrpt_vs_loss_sack}(b) shows that the fairness index is close to $F=1$ (note that $F=1$ is the highest possible fairness index) when TCP-aware BP is employed. This means that both TCP flows are able to survive in TCP-aware BP. Note that the fairness index of TCP-aware BP is 0.94, while the fairness index of AODV is 0.98 when the packet loss probability is 0.5. This is due to the fact that TCP-aware BP exploits loss-free links better, and slightly favors the flows transmitted over such links. However, the throughput improvement of both flows as compared to AODV is higher. In particular, TCP-aware BP improves throughput as compared to AODV by 10\% and 40\% for the first and second flows, respectively. These results confirm the compatibility of TCP and TCP-aware BP.
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize TCP-Aware BP with TCP-SACK]{{\includegraphics[width=4cm]{diamond_topology_tcp_bp_thrpt_vs_time.eps}}}
\subfigure[\scriptsize BP with TCP-SACK]{{\includegraphics[width=4cm]{diamond_topology_bp_thrpt_vs_time.eps}}} \hspace{-0pt} \\
\subfigure[\scriptsize TCP-Aware BP with TCP-Vegas]{{\includegraphics[width=4cm]{diamond_topology_tcp_bp_thrpt_vs_time_vegas.eps}}}
\subfigure[\scriptsize BP with TCP-Vegas]{{\includegraphics[width=4cm]{diamond_topology_bp_thrpt_vs_time_vegas.eps}}} \hspace{-0pt}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:diamond_thrpt_time_results} \scriptsize Throughput vs. time in the diamond topology for TCP-SACK and TCP-Vegas. There are two flows; Flow 1 is transmitted from node $A$ to node $B$, and Flow 2 is transmitted from node $A$ to node $D$. The links are not lossy.
}
\vspace{-15pt}
\end{center}
\end{figure}
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize Throughput]{{\includegraphics[width=4cm]{diamond_tcp_sack_thrpt_loss.eps}}}
\subfigure[\scriptsize Fairness]{{\includegraphics[width=4cm]{diamond_tcp_sack_fairn_loss.eps}}}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:diamond_thrpt_vs_loss_sack} \scriptsize Throughput and fairness vs. average packet loss rate for TCP-aware BP and AODV in the diamond topology. There are two TCP flows transmitted from node $A$ to $B$ (Flow 1) and $A$ to $D$ (Flow 2). The link $A-B$ is a lossy link. The version of TCP is TCP-SACK.}
\end{center}
\vspace{-15pt}
\end{figure}
Let us consider the grid topology shown in Fig.~\ref{fig:topologies}. Four flows are transmitted from the gateway to four distinct nodes, which are randomly chosen. Half of the links, chosen at random, are lossy with loss probability ranging between $0-0.5$. Fig.~\ref{fig:grid_thrpt_time_results} shows throughput vs. time graphs for TCP-aware BP and classical BP. It is seen that all four flows could survive in TCP-aware BP for both TCP-SACK and TCP-Vegas, while one or more flows do not survive in classical BP. Fig.~\ref{fig:grid_thrpt_vs_loss_sack} shows throughput and fairness vs. average loss probability results for TCP-aware BP and AODV for TCP-SACK. TCP-aware BP improves throughput significantly as compared to AODV without violating fairness.
Fig.~\ref{fig:grid_thrpt_vs_loss_vegas} shows that TCP-aware BP improves throughput significantly as compared to AODV when TCP-Vegas is employed. This shows the effectiveness of our scheme in delay-based TCP versions.
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize TCP-Aware BP with TCP-SACK]{{\includegraphics[width=4cm]{grid_topology_tcp_bp_thrpt_vs_time.eps}}}
\subfigure[\scriptsize BP with TCP-SACK]{{\includegraphics[width=4cm]{grid_topology_bp_thrpt_vs_time.eps}}} \hspace{-0pt} \\
\subfigure[\scriptsize TCP-Aware BP with TCP-Vegas]{{\includegraphics[width=4cm]{grid_topology_tcp_bp_thrpt_vs_time_vegas.eps}}}
\subfigure[\scriptsize BP with TCP-Vegas]{{\includegraphics[width=4cm]{grid_topology_bp_thrpt_vs_time_vegas.eps}}} \hspace{-0pt}
\end{center}
\begin{center}
\vspace{-15pt}
\caption{\label{fig:grid_thrpt_time_results} \scriptsize Throughput vs. time in the grid topology for TCP-SACK and TCP-Vegas. There are four flows and the links are not lossy.
}
\end{center}
\vspace{-15pt}
\end{figure}
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize Throughput]{{\includegraphics[width=4cm]{grid_tcp_sack_thrpt_loss.eps}}}
\subfigure[\scriptsize Fairness]{{\includegraphics[width=4cm]{grid_tcp_sack_fairn_loss.eps}}}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:grid_thrpt_vs_loss_sack} \scriptsize Throughput and fairness vs. average packet loss rate for TCP-aware BP and AODV in the grid topology. There are four TCP flows transmitted from the gateway to four distinct nodes. Half of the links are lossy. The version of TCP is TCP-SACK.
}
\end{center}
\vspace{-15pt}
\end{figure}
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize Throughput]{{\includegraphics[width=4cm]{grid_tcp_vegas_thrpt_loss.eps}}}
\subfigure[\scriptsize Fairness]{{\includegraphics[width=4cm]{grid_tcp_vegas_fairn_loss.eps}}}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:grid_thrpt_vs_loss_vegas} \scriptsize Throughput and fairness vs. average packet loss rate for TCP-aware BP and AODV in the grid topology. There are four TCP flows transmitted from the gateway to four distinct nodes. Half of the links are lossy. The version of TCP is TCP-Vegas.}
\end{center}
\vspace{-15pt}
\end{figure}
As mentioned in Section~\ref{sec:opt}, there may be both TCP and non-TCP flows in the system, and non-TCP flows should be controlled in a TCP-friendly manner so that TCP flows can survive when non-TCP flows are on. Therefore, a flow control algorithm is presented in Eq.~(\ref{eq:flow_control}) for non-TCP flows. Now, we evaluate this scenario in the diamond topology with two flows. Flow 1 is a TCP flow (TCP-SACK) transmitted from node $A$ to node $B$, and Flow 2 is a non-TCP flow transmitted from node $A$ to node $D$. In our TCP-aware BP framework, the non-TCP flow is regulated by Eq.~(\ref{eq:flow_control}). The parameters in Eq.~(\ref{eq:flow_control}) are set as $M=50$, $g(x_s(t))=\log(x_s(t))$, $\forall t, s \in \mathcal{S}$. The implementation details, including TCP-friendly parameter selection, are provided in \cite{tcp_aware_bp_tech_rep}.
Fig.~\ref{fig:diamond_thrpt_time_results_tcp_udp} shows throughput vs. time graphs of TCP-aware BP, classical BP, and AODV. The TCP flow does not survive in classical BP as packets are trapped in the buffers. It does not survive under AODV either, because uncontrolled non-TCP flows ({\em i.e., } UDP flows) occupy the buffers and TCP packets are constantly dropped from the buffers, which reduces TCP throughput. Yet, both TCP and non-TCP flows survive together in TCP-aware BP thanks to TCP-aware routing and scheduling, and TCP-friendly flow control for non-TCP flows. Fig.~\ref{fig:diamond_thrpt_vs_loss_sack_UDP_TCP} shows the throughput improvement of TCP-aware BP as compared to AODV in the same setup for different packet loss probabilities. At low loss probabilities, although the throughput of AODV is better than that of TCP-aware BP, the fairness graph (and Fig.~\ref{fig:diamond_thrpt_time_results_tcp_udp} for the no-loss case) shows that the fairness of AODV is very low, which means that the TCP flow does not survive. At higher loss probabilities, TCP-aware BP is better than AODV thanks to choosing better routes and schedules.
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize TCP-Aware BP]{{\includegraphics[width=2.7cm]{diamond_topology_tcp_bp_thrpt_vs_time_UDP_TCP_v2.eps}}}
\subfigure[\scriptsize Classical BP]{{\includegraphics[width=2.8cm]{diamond_topology_bp_thrpt_vs_time_UDP_TCP.eps}}} \hspace{-0pt}
\subfigure[\scriptsize AODV]{{\includegraphics[width=2.8cm]{diamond_topology_aodv_thrpt_vs_time_UDP_TCP.eps}}}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:diamond_thrpt_time_results_tcp_udp} \scriptsize Throughput vs. time in the diamond topology for TCP-SACK. There are two flows; Flow 1 is a TCP flow, transmitted from node $A$ to node $B$, and Flow 2 is a non-TCP flow, transmitted from node $A$ to node $D$. The links are not lossy.
}
\end{center}
\vspace{-15pt}
\end{figure}
\begin{figure}[t!]
\vspace{-0pt}
\begin{center}
\subfigure[\scriptsize Throughput]{{\includegraphics[width=4cm]{diamond_tcp_sack_thrpt_loss_UDP_TCP.eps}}}
\subfigure[\scriptsize Fairness]{{\includegraphics[width=4cm]{diamond_tcp_sack_fairn_loss_UDP_TCP.eps}}}
\end{center}
\begin{center}
\vspace{-5pt}
\caption{\label{fig:diamond_thrpt_vs_loss_sack_UDP_TCP} \scriptsize Throughput and fairness vs. average packet loss rate for TCP-aware BP and AODV in the diamond topology. There are two flows transmitted from node $A$ to $B$ (Flow 1, {\em i.e., } TCP flow) and $A$ to $D$ (Flow 2, {\em i.e., } non-TCP flow). The link $A-B$ is a lossy link. The version of TCP is TCP-SACK.
}
\end{center}
\vspace{-25pt}
\end{figure}
\section{Related Work}\label{sec:related}
Backpressure, a routing and scheduling framework for communication networks \cite{tass_eph1}, \cite{tass_eph2}, has generated a lot of research interest \cite{neely_book}, mainly in wireless ad-hoc networks.
It has also been shown that backpressure can be combined with flow control to provide utility-optimal operation guarantee \cite{neely_mod}, \cite{stolyar_greedy}.
The strengths of backpressure have recently increased interest in practical implementations of backpressure over wireless networks. Backpressure has been implemented over sensor networks \cite{routing_wtht_routes} and wireless multi-hop networks \cite{xpress}. Multi-receiver diversity has been explored in wireless networks using backpressure in \cite{javidi_diversity}. An 802.11-compliant version of enhanced backpressure is evaluated in \cite{choumas}. Backpressure routing and rate control for intermittently connected networks was developed in \cite{backpressure_for_icns}.
Backpressure routing and (max-weight) scheduling with TCP over wireless has been considered in the literature. At the link layer, \cite{DiffQ}, \cite{umut_stolyar} propose, analyze, and evaluate link-layer backpressure-based implementations with queue prioritization and congestion window size adjustment. The interaction of TCP with backpressure in \cite{DiffQ} and \cite{umut_stolyar} is handled by updating the TCP congestion window evolution mechanism. In particular, if the queue size (at the TCP source) increases, the window size is reduced; otherwise, the window size is increased. A multi-path TCP scheme is implemented over wireless mesh networks in \cite{horizon} for routing and scheduling packets using a backpressure-based heuristic, which avoids incompatibility with TCP.
Max-weight scheduling is updated in \cite{ghaderi_tcp_theoric} to make decisions based only on MAC level queue size information. Although \cite{ghaderi_tcp_theoric} considers window based flow control mechanism similar to TCP, it does not consider existing TCP flavors.
The main differences in our work are: (i) we consider the incompatibility of TCP with backpressure, and develop TCP-aware backpressure framework to address the incompatibilities, (ii) TCP-aware backpressure provides the same stability and utility-optimal operation guarantees as classical backpressure, (iii) we do not make any changes at the TCP source, (iv) we employ network coding to gracefully combine TCP and TCP-aware backpressure.
Maximum weight matching (MWM) is a switch scheduling algorithm with properties similar to those of the max-weight scheduling algorithm and backpressure. Similar to backpressure, there is an incompatibility between TCP and MWM \cite{TCP_MWM_switch1}, \cite{TCP_MWM_switch2}. Yet, we consider backpressure routing and scheduling over wireless networks rather than switch scheduling, and we take a holistic approach to address this problem; {\em i.e., } we propose TCP-aware backpressure to make TCP and backpressure compatible.
\section{Conclusion}\label{sec:conclusion}
We proposed TCP-aware backpressure routing and scheduling to address the incompatibility of TCP and backpressure while exploiting the performance benefits of backpressure routing and scheduling over wireless networks. TCP-aware backpressure is developed by taking into account the behavior of TCP flows, and gracefully combines TCP and backpressure without making any changes to the TCP protocol. Simulations in ns-2 demonstrate that TCP-aware backpressure significantly improves the throughput of TCP flows and provides fairness across competing TCP flows.
\bibliographystyle{IEEEtran}
Understanding the precise structure of pore space is integral to fully understanding flow through porous media. This understanding is essential in research on single-phase incompressible flow in situations where another dominant flow factor is the fluid-wall surface tension \cite{lind1}. Determining the relationship between pore structure and bulk flow properties is complicated, both because the structure of pore space is highly random and because of the challenge of precisely measuring pore space.
A throat is the smallest cross-sectional area corresponding to a branch-branch medial axis path. For non-crossed throats, the outer perimeter voxels must lie on the boundary grain voxels. The 3DMA-Rock software package \cite{lind2} has three major throat-finding algorithms: (1) the wedge-based algorithm \cite{shin}, (2) the Dijkstra-based shortest length algorithm \cite{lind1}, and (3) the planar dilation algorithm \cite{jw}. The wedge-based algorithm yields acceptable measures only in low porosity samples. It uses wedges to find the nearest boundary grain voxels and connects voxels in a way that allows the determination of border voxels that connect each wedge. For high porosity samples (over 30$\%$), it is very difficult to find connected throat-perimeter paths on the boundary grain voxels. The primary deficiency of the Dijkstra-based shortest length algorithm is that the shortest path perimeter does not reveal the smallest cross-sectional area (Figure 1). The main deficiency of the planar dilation algorithm lies in finding approximated throats. In general, the throat shape is non-planar. The throat determined by the planar dilation algorithm can spread through pore space without any restriction.
The goal in this paper is to find the accurate throat area (Figure 2, right panel) and calculate the circumference based on known mathematical formulae. Here, I present new throat-finding algorithms using concepts from vector spaces and the spherical coordinate system (described in Section 2). In order to find an accurate throat, I classify throat shapes into three main types: (1) the simply-connected planar type, (2) the simply-connected non-planar type, and (3) the non-simply-connected non-planar type. For these throat shapes, I construct five algorithms: two for the simply-connected planar type, two for the simply-connected non-planar type, and one for the non-simply-connected non-planar type. The first algorithm is designed to reduce the total calculation time. It indicates the simply-connected planar throat perimeter, and only uses the visible path (the member voxels of the path are visible from the MA path; those of an invisible path are not) from the pertinent MA path. Usually the perimeter found by the first algorithm is not a throat, but we can reduce the total CPU time (across our four other algorithms) using this candidate perimeter. The second algorithm indicates the simply-connected planar throat perimeter. It uses visible paths and invisible paths (from the medial axis voxel) that together make an enclosed loop. All 26-connected perimeter voxel paths are on the plane. The third algorithm is appropriate for determining the simply-connected non-planar throat perimeter. The perimeter voxels in this case consist of visible voxels and invisible voxels from the pertinent medial axis path. The visible voxels are on the plane, and the third algorithm connects discontinuous voxels using non-planar voxels (generally, this voxel set is not on the same plane as the visible voxel set).
The fourth algorithm is designed to determine largely undulating throat perimeters (see Fig. 2). This algorithm makes four vertical and horizontal wedges in each neighboring direction (based on the plane normal to the medial axis path). For each wedge, this algorithm finds the nearest voxels to the MA voxel, and connects eight perimeter voxels using Dijkstra's algorithm \cite{dij}. The fifth algorithm is designed to determine non-simply-connected non-planar throats. It is also useful when a small number of visible voxels diverge from the perimeter. After finding the visible continuous voxel sets, the fifth algorithm tries to connect them. When it fails to connect voxels, we delete those voxels from the candidate perimeter voxel set; when there are two or more paths, we keep the path with the shortest distance from the MA path. To deal with discontinuous voxels in the perimeter voxel set, this algorithm uses Dijkstra's algorithm. When Dijkstra's algorithm finds several paths, our algorithm uses the innermost path from the medial axis voxel, which yields the smallest throat area. These algorithms can detect more than 98$\%$ of the throats in samples with porosity higher than 29$\%$.
\section{Throat-Finding Algorithms}
\subsection{Simply Connected Planar Throat-Finding Algorithms}
The first algorithm finds throats that lie on a plane, called the planar throat type. In this algorithm, both medial axis modification \cite{lind2} and medial axis extraction \cite{lee}, which are computed within the void space of a segmented \cite{oh} 3D XCMT image, are used. In constructing the first throat-finding algorithm, I need to check all possible perimeters of a throat from the first to the last voxel in the medial axis path, since the throat area does not change continuously. The area of a throat varies with the position and direction of the medial axis voxels in three-dimensional space. Here, I present two simply connected planar throat-finding algorithms. The first algorithm can be used when the boundary grain voxel set is visible from the medial axis path. The second algorithm is used when some voxels in the boundary grain voxel set are not visible from the medial axis path.
\subsubsection{The First Simply Connected Planar Throat-Finding Algorithm}
The first algorithm consists of the following steps. Here, I define \newline
(1) $\nu_{k}$ : $k$-th voxel of the medial axis path, \newline
(2) $\overrightarrow{n_{k}}$ : unit tangent vector of the medial axis path, \newline
(3) $\overrightarrow{n_{k,i,j}}$ : unit directional vector, where $i$ is the polar angle in degrees from $\overrightarrow{n_{k}}$ and $j$ is the azimuth angle in degrees in the spherical coordinate system \newline
(4) $P_{k,i,j}$ : a plane that has the normal vector $\overrightarrow{n_{k,i,j}}$ at the center of $\nu_{k}$ \newline
(5) $\overrightarrow{u_{k,i,j,\theta}}$ : unit directional vector on the plane $P_{k,i,j}$
Step 1 : Compute the tangent vector $\overrightarrow{n_{k}}$ at the center of the voxel $\nu_{k}$ in the path
Step 2 : Make the unit directional vector set $\{\overrightarrow{n_{k,i,j}}\}$ with zenith vector $\overrightarrow{n_{k}}$
Step 3 : Make a plane $P_{k,i,j}$ with a point at the center of $\nu_{k}$ and a normal vector $\overrightarrow{n_{k,i,j}}$
Step 4 : Make a set of 360 unit directional vectors $\{\overrightarrow{u_{k,i,j,\theta}}\}$ on $P_{k,i,j}$
Step 5 : Find the boundary grain voxel set using $\overrightarrow{u_{k,i,j,\theta}}$ (the finding method is discussed below)
Step 6 : Check whether the boundary grain voxel set from Step 5 has 26-connectivity
Step 7 : If the boundary grain voxel set has 26-connectivity, extend it to a 6-connected closed-loop voxel set; otherwise, use a different throat-finding algorithm (these algorithms are discussed in Sections 2.1.2 and 2.2.1)
Step 8 : Calculate the area of the closed-loop voxel set using the 26-connected perimeter voxel set from Step 6.
Step 9 : Repeat Steps 1 to 8 for each $\nu_{k}$ to find the local minimum throat
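Step 1 above can be sketched as a standard least-squares direction fit; a minimal sketch (numpy assumed; the function name is illustrative) that takes a short run of medial axis voxel centers and returns the unit tangent as the first principal direction:

```python
import numpy as np

def tangent_direction(centers):
    """Least-squares direction of a short run of medial-axis voxel
    centers (e.g. 5 consecutive voxels): the first right-singular
    vector of the centered coordinate matrix (first principal axis)."""
    pts = np.asarray(centers, float)
    pts = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0]  # unit vector along the fitted line
```

For interior voxels the five consecutive centers around $\nu_{k}$ are passed in; at the ends of the path the five outermost voxels of the path are used instead, as described below.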
The main idea of this algorithm is to use the boundary grain voxels visible from the medial axis voxel on a given plane. All voxels originate as specific points within each coordinate. For the medial axis voxel $\nu_{k}$, $k = 1, 2, 3, \cdots, n$, where $n$ is the last voxel number in the medial axis path, we choose 5 consecutive medial axis voxels (the four proximal voxels, two on each side) to find the zenith direction $\overrightarrow{n_{k}}$, which is the solution of the least-squares problem for the 5 involved voxels. If the medial axis voxel lies within the two outermost voxels on either side, I choose the 5 outermost voxels of the medial axis path that include the medial axis voxel (Figure 4 B). If the total number of voxels in the path is less than 5, I use all of them to calculate the directional vector as the least-squares solution. Using the center of $\nu_{k}$ = $(\nu_{x_{k}},\nu_{y_{k}},\nu_{z_{k}})$ and $\overrightarrow{n_{k}}$ = $<n_{x_{k}},n_{y_{k}},n_{z_{k}}>$, I can make the sequence $\overrightarrow{n_{k,i,j}}$ (Figure 4 C), defined by the polar and azimuth angles $i$ and $j$, $i$=1,2,$\cdots$, 45, $j$ = 1,2,$\cdots$, 45, including $\overrightarrow{n_{k}}$ itself. The center of $\nu_{k}$ and the normal vector $\overrightarrow{n_{k,i,j}}$ define the plane. I now need to make two unit vector sets, $\{u_{k,i,j,\theta}\}$ and $\{s_{k,i,j,\theta}\}$, where $\theta$ = $0,1, \cdots, 359$ (Figure 4 D). The ray set $\{u_{k,i,j,\theta}\}$ on the plane $P_{k,i,j}$ is obtained by normalizing the orthogonal projection of the unit vector set $\{s_{k,i,j,\theta}\}$. The unit ray set $\{s_{k,i,j,\theta}\}$ has an angle spacing of $1^{o}$, and lies on the coordinate plane spanned by the two coordinate axes corresponding to the two smallest absolute components of $\overrightarrow{n_{k,i,j}}$. The set $\{s_{k,i,j,\theta}\}$ uses the formula of one of the three different sets:
\begin{eqnarray*}
\{(\cos(\theta^{o}), \sin(\theta^{o}),0)\}, \{(\cos(\theta^{o}), 0, \sin(\theta^{o}))\}, \{(0, \cos(\theta^{o}), \sin(\theta^{o}))\}
\end{eqnarray*}
where $\theta$ = $0,1,\cdots, 359$.
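The construction of the in-plane ray set $\{u_{k,i,j,\theta}\}$ from $\{s_{k,i,j,\theta}\}$ can be sketched as follows (numpy assumed; the function name is illustrative). The coordinate plane is chosen by leaving out the axis with the largest absolute normal component, which reproduces the three-case selection of $\{s_{k,i,j,\theta}\}$:

```python
import numpy as np

def in_plane_rays(n, thetas_deg):
    """Rays u_theta on the plane with unit normal n: take the 1-degree
    ray set s_theta on the coordinate plane of the two axes with the
    smallest |n| components, project orthogonally onto the plane, and
    normalize."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    dominant = int(np.argmax(np.abs(n)))  # axis left out of s_theta
    axes = [a for a in range(3) if a != dominant]
    rays = []
    for theta in thetas_deg:
        t = np.radians(theta)
        s = np.zeros(3)
        s[axes[0]], s[axes[1]] = np.cos(t), np.sin(t)
        u = s - (s @ n) * n  # orthogonal projection onto the plane
        rays.append(u / np.linalg.norm(u))
    return np.array(rays)
```

For a normal close to a coordinate axis the chosen coordinate plane is nearly parallel to $P_{k,i,j}$, so the projected rays stay well conditioned.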
I choose the set $\{(\cos(\theta^{o}), \sin(\theta^{o}),0)\}$ when $|n_{z_{k,i,j}}| \ge |n_{x_{k,i,j}}|$ and $|n_{z_{k,i,j}}| \ge |n_{y_{k,i,j}}|$; the two others are selected similarly. The directional vector $\overrightarrow{u_{k,i,j,\theta}}$ = $<u_{x_{k,i,j,\theta}},u_{y_{k,i,j,\theta}},u_{z_{k,i,j,\theta}}>$ starts from the center of $\nu_{k}$ and spreads until it touches the respective boundary grain voxel (Figure 5). Each ray yields three coefficient sets $\{c_{x_{\alpha_{1}}}\}$, $\{c_{y_{\alpha_{2}}}\}$, and $\{c_{z_{\alpha_{3}}}\}$, $\alpha_{i}$ = $1,2,3,\cdots$ and $i$ = 1,2,3, that satisfy $|u_{\beta_{k,i,j,\theta}}| \cdot c_{\beta_{\alpha_{t}}}$ = $t-0.5$, $t=1,2,\cdots$ and $\beta = x, y, z$, when $u_{\beta_{k,i,j,\theta}}$ is not zero. If $u_{\beta_{k,i,j,\theta}}$ is zero, the ray does not move in the $\beta$ coordinate direction. These coefficients represent the boundary points at which the ray touches voxel faces. Merging the sets $\{c_{x_{\alpha_{1}}}\}$, $\{c_{y_{\alpha_{2}}}\}$, and $\{c_{z_{\alpha_{3}}}\}$ in increasing order determines the order of the voxels that the pertinent ray touches; at each point, the position in the merged set indicates the direction of the voxel movement from $\nu_{k}$. If two coefficients in the merged set are equal, the ray touches the edge of the next voxel, with the related coordinates indicating the coefficients from the current position. If three coefficients are equal, the ray touches the vertex of the next voxel, with the related coordinates indicating the coefficients from the current position. When the ray touches an edge or a vertex, we only check the next position on the ray to see whether the voxel is grain or void, because the void space has 26-connectivity. When we construct a throat barrier with 6-connectivity using a ray, we add all touching void voxels. If a ray touches an edge and all contiguous voxels are void, then the three void voxels are counted among the throat barrier voxels.
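The half-integer crossing coefficients and their merge can be sketched as a simple voxel traversal (numpy assumed; the function name and the fixed step count are illustrative):

```python
import numpy as np

def ray_voxels(start, u, n_steps=50):
    """Voxels visited by a ray from a voxel center, using the
    half-integer crossing coefficients c = (t - 0.5) / |u_beta|.
    Equal coefficients mean the ray hits an edge (two equal) or a
    vertex (three equal), in which case several axes advance at once."""
    u = np.asarray(u, float)
    voxel = np.array(start, int)
    crossings = []  # (coefficient, axis)
    for axis in range(3):
        if u[axis] != 0.0:
            for t in range(1, n_steps + 1):
                crossings.append(((t - 0.5) / abs(u[axis]), axis))
    crossings.sort()
    visited = [tuple(voxel)]
    k = 0
    while k < len(crossings):
        c = crossings[k][0]
        # advance every axis whose crossing coefficient ties with c
        while k < len(crossings) and abs(crossings[k][0] - c) < 1e-12:
            axis = crossings[k][1]
            voxel[axis] += 1 if u[axis] > 0 else -1
            k += 1
        visited.append(tuple(voxel))
    return visited
```

In the throat search the traversal stops at the first grain voxel; here the stopping test is left to the caller.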
If a ray touches a vertex and all contiguous voxels are void, then the seven void voxels are counted among the throat barrier voxels. The ray set $\overrightarrow{u_{k,i,j,\theta}}$ is used to search for the boundary grain voxels that are candidates for the perimeter voxel set on $P_{k,i,j}$ (Figure 4 E; blue voxels). Usually the found grain voxel set has 26-connectivity, so we convert it to the grain's 6-connectivity by adding the neighboring diagonally positioned boundary grain voxels on the plane. Sometimes an added voxel is not on $P_{k,i,j}$. The candidate perimeter voxel set (Figure 4 F; green voxels) consists of piecewise 6-connected parts. When the 6-connected perimeter voxel set makes a closed loop, we calculate the area with the 26-connected voxel set. After we find all possible perimeters for all of the planes $P_{k,i,j}$, we compare them to find a possible candidate throat perimeter.
\subsubsection{The Second Simply Connected Planar Throat-Finding Algorithm}
The second planar throat-finding algorithm follows the same process as the first in constructing the 6-connected perimeter voxel set. This algorithm calculates the triangular areas using the centers of the found 26-connected boundary grain voxel set and $\nu_{k}$, and proceeds to the next step only when the summation of the triangular areas is smaller than the minimum area from the first planar throat-finding algorithm (Figure 6 B). It then finds the invisible boundary grain voxels (for the unseen, distended perimeter) using Dijkstra's algorithm. The candidate perimeter voxel set (green voxels in Figure 6) consists of piecewise 6-connected parts. For each discontinuous path (Figure 6 C; two orange voxels), we use Dijkstra's algorithm to connect the end-path voxels through the boundary grain voxels. This connection can be understood conceptually as follows: the boundary grain voxels are on the plane $P_{k,i,j}$. The basic criterion used to determine that a grain voxel is on $P_{k,i,j}$ is $P_{k,i,j}$$(\nu_{x_{k}},\nu_{y_{k}},\nu_{z_{k}})$ * $P_{k,i,j}$$(\nu_{x_{k}}+\epsilon_{x},\nu_{y_{k}}+\epsilon_{y},\nu_{z_{k}}+\epsilon_{z})$ $\le 0$, where $\epsilon_{x}$, $\epsilon_{y}$, and $\epsilon_{z}$ are 0 or 1, and at least one of them is 1. The region of interest for Dijkstra's algorithm is conceptually a cube of variable side length. The side length varies from $l_{c}+4$ to $l_{c}+8$, where $l_{c}$ is the side length of the minimum inclusion cube sharing the same center (Figure 6 C; $l_{c}$ is the side length of the rose square). For each discontinuity, if there is no voxel connection in the cube, we assume it is a non-perimeter and try to apply other algorithms. When we have found the perimeter voxel set with 6-connectivity making a single encirclement (Figure 6 D; green voxels), we calculate the area with the 26-connected perimeter voxel set to find the minimum surface area using 3DMA.
The 26-connected perimeter voxel set is selected within the 6-connected perimeter voxel set. After we find all possible perimeters for all of the planes $P_{k,i,j}$, we compare them to find a possible candidate throat perimeter. Our algorithm requires exactly one rounding number, and applies Dijkstra's algorithm to the small discontinuous runs of consecutive voxels.
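The Dijkstra connection step restricted to the region-of-interest cube can be sketched as follows; a minimal sketch assuming a `passable` predicate for boundary grain voxels and Euclidean step costs over 26-connected moves (the cost choice is an assumption, used to prefer short detours):

```python
import heapq
import itertools

def dijkstra_in_cube(passable, start, goal, lo, hi):
    """Shortest 26-connected path between two voxels, restricted to a
    cubic region of interest [lo, hi]^3. `passable(v)` says whether a
    voxel may be part of the path (e.g. a boundary grain voxel)."""
    moves = [m for m in itertools.product((-1, 0, 1), repeat=3) if m != (0, 0, 0)]
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == goal:
            path = [v]
            while v in prev:
                v = prev[v]
                path.append(v)
            return path[::-1]
        if d > dist[v]:
            continue
        for m in moves:
            w = (v[0] + m[0], v[1] + m[1], v[2] + m[2])
            if not all(lo <= c <= hi for c in w) or not passable(w):
                continue
            nd = d + sum(x * x for x in m) ** 0.5  # 1, sqrt(2) or sqrt(3)
            if nd < dist.get(w, float("inf")):
                dist[w], prev[w] = nd, v
                heapq.heappush(heap, (nd, w))
    return None  # no connection in the cube: treat as a non-perimeter
```

A `None` result corresponds to the case above where no voxel connection exists in the cube and the discontinuity is treated as a non-perimeter.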
\subsection{Simply Connected Non-Planar Throat-Finding Algorithms}
I construct two throat-finding algorithms for the undulating throat type. The first undulating throat-finding algorithm is designed to find simply connected throats. Sometimes this algorithm cannot find the smallest throat area, but it can indicate the throat perimeter when the other three throat-finding algorithms fail to find it.
\subsubsection{The First Simply Connected Non-Planar Throat-Finding Algorithm}
The first non-planar throat-finding algorithm is the same as the planar throat-finding algorithm, except for the region of interest in applying Dijkstra's algorithm, which deals with the nearest boundary grain voxel from the medial axis voxel $\nu_{k}$ at each step. For this algorithm, we select the possible candidate perimeter within a (conceptual) cube whose side length is $l_{c}+4$, where $l_{c}$ is the side length of the minimum cube that envelops the two voxels with the same center in 3D (Figure 7 A). 3DMA then calculates the area with the 26-connected perimeter (Figure 7 B), derived from the 6-connected perimeter. After we find all possible perimeters for all of the planes $P_{k,i,j}$, we compare them to find a possible candidate throat perimeter.
\subsubsection{The Second Simply Connected Non-Planar Throat-Finding Algorithm}
The second non-planar algorithm can be used to analyze largely undulating throats. The maximum fluctuation angle this algorithm allows is $\pm45^{o}$ from the base plane. Our assumption is that some perimeter voxels are situated nearest to an MA voxel. If the throat undulates strongly and satisfies our assumptions, then this algorithm can deduce its perimeter.
Before constructing the plane $P_{k,i,j}$, we use the same procedure as for the plane $P_{k,i,j}$ of the planar algorithm. We draw the 91-unit ray set $\{u_{k,i,j,\theta}\}$, $\theta = 0,1,\cdots,90$, on the plane $P_{k,i,j}$, obtained from the projection and normalization of the 91-unit ray set $\{s_{k,i,j,\theta}\}$, $\theta$ = $0,1, \cdots, 90$, on the coordinate axis plane (Figure 8 A). The set $\{s_{k,i,j,\theta}\}$ is selected among the three different sets:
\begin{eqnarray*}
\{(\cos(\theta^{o}), \sin(\theta^{o}),0)\}, \{(\cos(\theta^{o}), 0, \sin(\theta^{o}))\}, \{(0, \cos(\theta^{o}), \sin(\theta^{o}))\}
\end{eqnarray*}
where $\theta$ = $0,1,\cdots, 90$. This 90-degree angle makes the first wedge. The order of selection of the set $\{s_{k,i,j,\theta}\}$ follows the maximum component value of $\overrightarrow{n_{k,i,j}}$. The set $\{u_{k,i,j,\theta}\}$ on $P_{k,i,j}$ is generated by the orthogonal projection of $\{s_{k,i,j,\theta}\}$ onto the plane. Using the two related ray sets $\{u_{k,i,j,\theta}\}$ and $\{s_{k,i,j,\theta}\}$, we find the nearest grain boundary voxel $p_{1}$ and its angle ($\epsilon$ = $\alpha_{1}$) from $\nu_{k}$. The voxel $p_{1}$ is on $P_{k,i,j}$ and in the first wedge with the $90^{o}$ angle. The next step is making three horizontal wedges on $P_{k,i,j}$ (Figure 8 B). Their center angles are $\epsilon+90^{o}$, $\epsilon+180^{o}$, and $\epsilon+270^{o}$, and their wedge angles are each $60^{o}$. The three 61-unit ray sets are selected among $\{s_{k,i,j,\theta}\}$ (Figure 8 B). The selected angle sets are:
\begin{eqnarray*}
\{\theta_{2} |\theta_{2} = \epsilon+60^{o},\epsilon+61^{o}, \cdots, \epsilon+120^{o}\},\\
\{\theta_{3} |\theta_{3} = \epsilon+150^{o},\epsilon+151^{o}, \cdots, \epsilon+210^{o}\},\\
\{\theta_{4}|\theta_{4}= \epsilon+240^{o},\epsilon+241^{o}, \cdots, \epsilon+300^{o}\}
\end{eqnarray*}
For each horizontal wedge, we find the three nearest boundary grain voxels $p_{3}$, $p_{5}$, $p_{7}$ and their angles $\alpha_{i}$ ($i$=2,3,4) from $\nu_{k}$ on $P_{k,i,j}$ using the 61 rays $\{u_{k,i,j,\theta}\}$ (Figure 8 B). We add four vertical wedges with center angles $\{\beta_{j}\}$, $j$=1,2,3,4, where each angle bisects two consecutive angles of the set $\{\alpha_{i}\}$, $i$ = 1,2,3,4. Each vertical wedge's azimuth angle varies from $45^{o}$ to $135^{o}$ from the new zenith $\overrightarrow{n_{k,i,j}}$. We make a 91-ray set in each vertical wedge, where each unit vector's azimuth angle varies from $45^{o}$ to $135^{o}$ (Figure 8 C). The first vertical unit ray set $\{u_{k,i,j,\theta}\}$ is $\overrightarrow{n_{k,i,j}}$, where $i = \alpha_{1}$ and $j$ = $\varphi+45^{o},\varphi+46^{o}, \cdots, \varphi+135^{o}$. The other three vertical unit ray sets are chosen similarly. Using the four 91-ray sets, we find the four nearest boundary grain voxels $p_{2}$, $p_{4}$, $p_{6}$, and $p_{8}$ to $\nu_{k}$. There are then eight candidate perimeter voxels. The final step is to connect those 8 voxels using Dijkstra's algorithm, dealing with the nearest boundary grain voxel from the medial axis voxel $\nu_{k}$ at each step (Figure 8 D). If there is a complete 6-connected perimeter, 3DMA calculates the area with the 26-connected perimeter (Figure 8 D).
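The wedge bookkeeping of this algorithm can be sketched as follows; a minimal sketch in degrees (function names are illustrative), covering the three horizontal 61-ray wedges around $\epsilon$ and the bisected vertical wedge centers $\beta_{j}$:

```python
def horizontal_wedge_angles(eps):
    """The three 61-ray horizontal wedges: centred at eps+90, eps+180,
    and eps+270 degrees, each spanning +/-30 degrees about its centre."""
    return [[(eps + c + d) % 360 for d in range(-30, 31)] for c in (90, 180, 270)]

def vertical_wedge_centres(alphas):
    """Centre angles beta_j of the four vertical wedges: bisectors
    between consecutive nearest-voxel angles alpha_i (degrees)."""
    a = sorted(x % 360 for x in alphas)
    centres = []
    for i in range(len(a)):
        lo, hi = a[i], a[(i + 1) % len(a)]
        if hi < lo:
            hi += 360  # wrap around 360 degrees
        centres.append(((lo + hi) / 2.0) % 360)
    return centres
```

Each horizontal wedge supplies one nearest boundary grain voxel, and each vertical wedge centre defines the 91-ray fan tilted out of the plane.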
\subsection{Non-Simply Connected Non-Planar Throat-Finding Algorithm}
The MA algorithm only preserves the topological structure of the given void space, so a throat is sometimes not simply connected: for example, when there is a large peak extending from the bottom. If the path length is shorter than the peak, and the peak contains the projection of the path and its cluster voxels, then the throat has to be considered a non-simply-connected type. This algorithm is well suited to finding such additional throat types. It can also be used when parts of the perimeter indicate the boundaries of other pores; in that case, the wrong path parts can be removed.
When the throat type is the same as the previous type, the fourth algorithm can find similar perimeters. For the non-simply-connected non-planar throat type, we use the same procedure as the second planar algorithm to find the plane $P_{k,i,j}$ and the 6-connected candidate perimeter set (blue voxels). The set consists of piecewise continuous paths (Figure 9 C). This algorithm starts with the continuous path in the set at the shortest voxel distance from $\nu_{k}$; alternatively, the starting voxel can be selected from the longest path or the largest angle. The algorithm connects each piece using Dijkstra's algorithm, dealing with the nearest boundary grain voxel from the medial axis voxel $\nu_{k}$ at each step. When this algorithm fails to connect two pieces (Figure 9 D), it deletes a part of the set (Figure 9 D; brown voxels). By deleting the discontinuous path, we connect to the next path (Figure 9 E). This algorithm's candidate perimeter consists of the compositions of the piecewise continuous parts of $\{p_{i}\}$ ($\{q_{i}\}$ is the visible perimeter voxel set, and $\{p_{i}\}$ is the 6-connected voxel set derived from $\{q_{i}\}$) and their connections (Figure 9 F). Sometimes this algorithm deletes many continuous paths of $\{p_{i}\}$, so it requires checking the rounding number. The rounding number is defined as the sum of the angles of the projected perimeter voxels on $P_{k,i,j}$; the perimeter has to have a rounding number of one. The algorithm then converts the perimeter to 26-connectivity, and 3DMA calculates the outer area.
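The rounding-number check can be sketched as a winding-number computation over the projected perimeter; a minimal sketch (function name illustrative), where a valid single encirclement of the medial axis point gives 1:

```python
import math

def rounding_number(points_2d, centre=(0.0, 0.0)):
    """Winding ('rounding') number of a closed perimeter projected
    onto the throat plane: sum of signed angle increments about the
    medial-axis point, divided by a full turn. A single encirclement
    gives 1 (or -1 for the opposite orientation)."""
    cx, cy = centre
    angles = [math.atan2(y - cy, x - cx) for x, y in points_2d]
    total = 0.0
    for i in range(len(angles)):
        d = angles[(i + 1) % len(angles)] - angles[i]
        # wrap each increment into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return total / (2 * math.pi)
```

A perimeter that doubles back on itself (out and back) sums to 0 rather than 1, which is exactly the failure mode the check is meant to reject.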
If the set $\{o_{i}\}$, $i = 1,2,\cdots$, the total number of the perimeter voxels, is the outer perimeter voxel set of this type, the subtraction of the inner grain voxels' area is necessary to calculate the throat area. We make a unit ray set $\{\overrightarrow{b_{i}}\}$, $i = 1,2,\cdots$, the total number of the perimeter voxels. A unit ray $\overrightarrow{b_{i}}$ indicates the position of the respective perimeter voxel relative to the MA voxel: each $\overrightarrow{b_{i}}$ spreads to the perimeter voxel $o_{i}$ from $\nu_{k}$. Our algorithm also handles the cases where $\overrightarrow{b_{i}}$ touches the edge or vertex of voxels on the ray. When $\overrightarrow{b_{j}}$ and $\overrightarrow{b_{j+1}}$ go through grain voxels (Figure 9 G), our algorithm saves the initial and final positions, and we subtract the quadrilateral area from the outer area (Figure 9 H).
\section{Point Set Algorithm}
The perimeter is used for simulations of drainage processes; the entry-condition equation involves the area and the perimeter. Digitized images consist of discrete values for each voxel: after segmentation, we only know from the data whether a voxel is grain or not. Since the segmented images consist of cubes, we have to consider the difference between lengths and areas calculated from the segmented images and the real lengths and areas. Constructing the polygon bounding the object in the CT image by linearly connecting the mid-points yields a relative error in its length for any random boundary, because the mid-point connection contains only horizontal, vertical, and diagonal lines. A line integral needs the slope between two points $a_{n}$ = ($x_{n}$,$y_{n}$) and $a_{n+1}$ = ($x_{n}$+$\Delta x$,$y_{n}$+$\Delta y$) so that the calculation expresses the real boundary length. The mid-point calculation of length in CT images may yield a relative error of more than 8$\%$; we therefore need a new method to calculate length with a new point set in the CT images. We use the 6-connected perimeter voxel set to calculate the exact length, and the 26-connected perimeter voxel set to calculate the area in 3D space. The mathematical concepts behind the point set are: i) differentiability, ii) the implicit function theorem, and iii) the line integral. For non-differentiable parts of the perimeter, we follow the inner part of the perimeter. When we apply a line integral to a CT image, we require only one mid-point on each $\Delta x$ instead of multiple points (the existing mid-point method involves several horizontal and vertical points on each $\Delta x$). For each $\Delta x$, the length is a multiple of the voxel size and all $y$ values are the same. The mid-point is located on the adjacent face of the perimeter voxel, and the point used is the middle of the $\Delta x$ run within the perimeter voxel set.
We only use the perimeter voxel parts that face the 6-connected barrier voxel set in 3D space. To make the algorithm easier to understand, an example of a line integral in a CT image is given in Figure 13 A and B.
We have to know which parts of the perimeter voxel set are related to the mathematical concepts of differentiability, the line integral, and the implicit function theorem. Using these concepts, we select the point set on the boundary between perimeter voxels and throat barrier voxels, because the perimeter voxels are located outside of the throat. We divide the perimeter voxel set into several parts to apply these three concepts. The first step of the new algorithm distinguishes which perimeter parts correspond to differentiable and non-differentiable functions, and we start with the latter. For the non-differentiable parts (U, T, and L shapes), we select points that identify the shape of the perimeter section. Those shapes consist of combinations of continuous horizontal and vertical voxel runs, and each continuous vertical or horizontal run has at least 3 voxels. The first point is on the inner-facing part of the initial voxel of the non-differentiable part in 2D space. From the second point on, points are selected at each right angle in the non-differentiable perimeter part. Let $\{a_{n}\}$ be a perimeter array, $\{p_{i}\}$ a point set, and $\{a_{k_{n}}\}$ a subset of $\{a_{n}\}$. U- and T-shaped parts consist of horizontal-vertical-horizontal or vertical-horizontal-vertical voxel runs, and each run has the same $x$ or $y$ value in each of at least three voxels. To classify the U- and T-shaped parts, a possible condition is $k_{2}-k_{1} \geq 2$, $k_{3}-k_{2} \geq 1$, and $k_{4}-k_{3} \geq 2$. The points $a_{k_{i}}$ (1 $\leq$ $i$ $\leq$ 4) indicate vertex points (Figure 10 A and B). If the distances between the created points from each pair of sequential voxels are greater than or equal to 2 voxels, then the part satisfies the U- and T-shape conditions and four points are added to the point set. There are eight possible basic U and T shapes.
The L-shaped perimeter voxel part consists of horizontal-vertical or vertical-horizontal voxel runs, and each run has the same $x$ or $y$ value in each of at least three voxels. To classify the L-shaped perimeter voxel part, one possible condition is $k_{2}-k_{1} \geq 2$ and $k_{3}-k_{2} \geq 2$. This process is similar to the process of finding the U and T shapes. Three points $a_{k_{1}}$, $a_{k_{2}}$, and $a_{k_{3}}$ are the vertex points. If the distances between the created points from each pair of sequential voxels are greater than or equal to 2 voxels, then the part satisfies the L-shape condition and three points are added to the point set (Figure 10 C). The second step of the new algorithm distinguishes the perimeter parts related to the implicit function theorem, which give us the C shape. The distinguishing condition of the C shape is that the two edge voxels of the vertically altered perimeter part change the $y$-coordinate direction so as to connect in the same direction in 2D space. Some of the U shapes are also constituents of the C-shape group; when a perimeter part satisfies the U- and C-shape conditions together, the U-shape condition is given priority. The point within the point set of a C-shaped voxel set, related to the implicit function theorem, is selected in the middle of the vertical voxels. The last classifying step selects points related to the mathematical concept of the line integral. The line integral is understood as a linear connection of the points on the curve. For each $\Delta x$ that continues with the same $y$ value in the perimeter voxel set, we select the mid-point of an 8-connected voxel set in 2D (26-connected voxels in 3D space). Each line-integral part starts and ends with either a U, T, L, or C shape. When our understanding of the boundary involves the implicit function theorem, the boundary point uses a point in the C shape.
When our understanding of the boundary involves a non-differentiable part, the boundary point uses a point in one of the U, T, or L shapes.
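The index-gap conditions for the U/T and L shapes can be sketched as a small classifier; a minimal sketch (function name illustrative) that only encodes the gap conditions $k_{2}-k_{1} \geq 2$, $k_{3}-k_{2} \geq 1$, $k_{4}-k_{3} \geq 2$ (U/T) and $k_{2}-k_{1} \geq 2$, $k_{3}-k_{2} \geq 2$ (L), leaving the U-versus-T distinction and the C-shape priority aside:

```python
def classify_shape(vertex_indices):
    """Classify a non-differentiable perimeter part from the indices
    k_1 < k_2 < ... of its right-angle vertex voxels in the perimeter
    array {a_n}, using the gap conditions given in the text."""
    k = list(vertex_indices)
    if len(k) >= 4:
        k1, k2, k3, k4 = k[:4]
        if k2 - k1 >= 2 and k3 - k2 >= 1 and k4 - k3 >= 2:
            return "U/T"
    if len(k) >= 3:
        k1, k2, k3 = k[:3]
        if k2 - k1 >= 2 and k3 - k2 >= 2:
            return "L"
    return None
```

Parts that fail both conditions fall through to the differentiable (line-integral) treatment.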
\section{Conclusions}
\subsection{Throat-Finding Algorithms}
To find accurate throats, we calculate all possible areas with the five algorithms; the results can also classify the throat types. In natural samples, the non-simply-connected type is rare. We test our new algorithms first with hypothetical samples and then with real samples of porosity above 20$\%$. When two cylinders cross each other with a small cross-sectional area, the throat is saddle-shaped (Figure 11). Our algorithms suggest two possible throat candidates: a planar throat and a non-planar throat (Figure 11 A, B, and C). The throat we select is the smallest one (Figure 11 D), which is the actual throat. The second test sample is a sphere pack of $9\times9\times9$ spheres with a rectangular boundary; my algorithm can find all of its throats (Figure 12). We also test 8 of the S3 samples and 4 sections of the Handford wet inlet t241 samples. The results for these twelve tests are shown in Table 1. For cubic samples ($512\times512\times512$ voxels), the total CPU time consumed by the throat-finding algorithms and the probability density functions of throat area, pore volume, and coordination number is 4 to 10 hours. Our new algorithms include a crossed-throat algorithm [4]. The PDFs of throat area, pore volume, and coordination number are shown in Figure 13.
\begin{center}
\epsfig{figure=123.png,height=4.6in,width=6.0in} \ \ \ \ \ \ \ \small{Table 1 : Throat type and success ratio of the new algorithm}
\end{center}
\subsection{Point Set Algorithm}
To select a point set in 3D space, we project the real planar perimeter voxel set (in 3D space) onto the axial plane (2D) spanned by the two coordinates with the lowest two absolute values among the three components of the plane's normal vector. To test the new algorithm, we apply it to lines with different slopes and circles with different radii. For the linear boundary, we draw the line $y=\tan(k^{o})x$, where $k$=1,2,$\cdots$,89, with the domain from 0 to [50,000 $\cos(k^{o})$], where [x] is the largest integer less than or equal to x. We make digitized images of void and grain voxel areas; grain voxels are selected when more than half of the voxel area is below the linear boundary. The mid-point algorithm's maximum relative error for the linear boundary is 8.23$\%$, while the new algorithm's relative error is 1.29$\%$ (Figure 14 C). The circular boundary has two perimeter parts related to the C shape, and the boundary has no U, T, or L shapes; we can apply two line integrals indicating the upper and the lower boundary. The mid-point algorithm's relative error oscillates around 5.5$\%$, but the new algorithm has less than a 1$\%$ relative error (Figure 14 D). For the non-planar throat type, a triangular throat-area calculation that includes a medial axis voxel can overestimate the area (Figure 15: the orange ellipse marks the area of interest; B, C, and D show the same image with the triangular throat calculation including a medial axis voxel; E is the preferred solution).
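The 8.23$\%$ figure for the mid-point method on digitized lines can be checked numerically; a minimal sketch assuming the mid-point path reduces to an 8-connected chain (horizontal, vertical, and diagonal unit steps), so a digitized segment with integer extents has chain length $|dx-dy| + \sqrt{2}\,\min(dx,dy)$:

```python
import math

def chaincode_relative_error(k_deg, extent=50000):
    """Relative length error of the mid-point (8-connected chain)
    approximation of the digitized line y = tan(k degrees) * x."""
    dx = extent
    dy = int(round(extent * math.tan(math.radians(k_deg))))
    true_len = math.hypot(dx, dy)
    # chain: diagonal steps for min(dx, dy), straight steps for the rest
    chain_len = abs(dx - dy) + math.sqrt(2) * min(dx, dy)
    return chain_len / true_len - 1.0
```

The worst slope is near $22.5^{o}$, where the relative error is roughly 8.2$\%$, consistent with the maximum quoted above; at $0^{o}$, $45^{o}$, and $90^{o}$ the chain is exact.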
\newpage
\Large{\bf{Figures}}
\begin{center}
\epsfig{figure=area2.png,height=1.3in,width=4.7in}
\end{center}
\small{Figure 1 : The shortest path does not always indicate the smallest area. The area (light blue) of the throat in the right panel is smaller than that in the left panel, while the perimeter length of the boundary voxels (green) is the opposite. (Brown: grain voxels; green: perimeter and boundary grain voxels; light blue: throat barrier voxels.)}
\begin{center}
\epsfig{figure=cyl_example2.png,height=1.8in,width=2.8in}
\epsfig{figure=Throat2.png,height=1.8in,width=2.8in}
\end{center}
\small{Figure 2 : Example of two crossed cylinders: the left figure is an example of the non-planar throat type and the right figure shows the real throat (blue) of the left figure.}
\begin{center}
\epsfig{figure=3_1.png,height=1.8in,width=2.1in}
\epsfig{figure=3_2.png,height=1.8in,width=2.1in}
\epsfig{figure=3_3.png,height=1.8in,width=2.1in}
\end{center}
\small{Figure 3 : Test example for Dijkstra's algorithm and possible paths. Blue voxels are possible path grain voxels. Green indicates the initial and terminal grain voxels. The left panel shows all possible paths using Dijkstra's algorithm, with numbers indicating the step number. The middle panel shows an example of an errant approach using Dijkstra's algorithm with the smallest triangular area, and the right panel illustrates the real boundary grain path using Dijkstra's algorithm with the shortest distance at each step voxel. }
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=1.png, height=2.0in, width=3.1in}&
\epsfig{file=2.png, height=2.0in, width=3.1in}\\
{\bf C}&{\bf D}\\
\epsfig{file=3.png, height=2.0in, width=3.1in}
&\epsfig{file=4.png, height=2.0in,
width=3.1in}\\
{\bf E}&{\bf F}\\
\epsfig{file=5.png, height=2.0in, width=3.1in}
&\epsfig{file=6.png, height=2.0in,
width=3.1in}\\
\end{tabular}
\end{center}
\small{Figure 4 : The first algorithm processing diagram. A : vertical subsection slice of the 3D image (dark red, green, and white indicate the grain, the medial axis, and void space, respectively). B : finding the tangent vector $\protect\overrightarrow{n_{k}}$ at the center of $\nu_{k}$ using five consecutive voxels. C : finding the unit directional vector $\protect\overrightarrow{n_{k,i,j}}$, a unit directional vector in the spherical coordinate system whose zenith direction coincides with $\protect\overrightarrow{n_{k}}$. D : constructing the plane $P_{k,i,j}$ from a point (the center of $\nu_{k}$) and a normal vector $\protect\overrightarrow{n_{k,i,j}}$, together with the unit directional vector set \{$\protect\overrightarrow{u_{k,i,j,\theta}}$\} on the plane $P_{k,i,j}$. E : finding the boundary grain voxel set using the unit directional vector set \{$\protect\overrightarrow{u_{k,i,j,\theta}}$\}. F : changing the connectivity of the boundary grain voxel set found in E from 26- to 6-connectivity. }
\begin{center}
\epsfig{figure=7.png,height=3.2in,width=6.2in}
\end{center}
\small{Figure 5 : A unit ray spread in the digitized image. Yellow colored vector is in the direction of the unit vector $\protect\overrightarrow{u_{k,i,j,\theta}}$ and touches the boundary grain voxel.}
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=2_1.png, height=2.2in, width=3.3in}&
\epsfig{file=2_2.png, height=2.2in, width=3.3in}\\
{\bf C}&{\bf D}\\
\epsfig{file=2_3.png, height=2.2in, width=3.3in}
&\epsfig{file=2_4.png, height=2.2in,
width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 6 : The second algorithm processing diagram. A : $\protect\overrightarrow{u_{k,i,j,\theta}}$ detects the visible boundary grain voxels (blue colored voxels). B : finding the triangular area using the two centers of boundary grain voxels and the center of the medial voxel (red colored voxel in the middle). C : the minimal cube (pink rectangle) encompassing two non-connected (orange) voxels. The side length of the cube is $l_{c}$. The region of interest lies between $l_{c}+4$ and $l_{c}+8$ to find accurate throats. D : final boundary grain voxels using the second algorithm. }
\newpage
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=4_1.png, height=2.2in, width=3.3in}&
\epsfig{file=4_2.png, height=2.2in, width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 7 : Detecting the boundary grain path in space using Dijkstra's algorithm. A : Dijkstra's algorithm is applied to the region-of-interest cube in space. B : detected boundary grain path in space. }
\newpage
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=5_1.png, height=2.5in, width=3.3in}&
\epsfig{file=5_2.png, height=2.5in, width=3.3in}\\
{\bf C}&{\bf D}\\
\epsfig{file=5_3.png, height=2.5in, width=3.3in}
&\epsfig{file=5_4.png, height=2.5in,
width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 8 : The processing of the non-planar algorithm. A : detection of the nearest boundary grain voxel using $\protect\overrightarrow{u_{k,i,j,\theta}}$, where $\theta$ is between $0^{o}$ and $90^{o}$. B : detecting another nearest boundary grain voxel with a different $\theta$. C : making four vertical wedges (green rays) and detecting the boundary grain voxels not on the plane $P_{k,i,j}$. D : detected boundary grain voxels in space. }
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=6_1.png, height=1.7in, width=2.7in}&
\epsfig{file=6_2.png, height=1.7in, width=2.7in}\\
{\bf C}&{\bf D}\\
\epsfig{file=6_3.png, height=1.7in, width=2.7in}
&\epsfig{file=6_4.png, height=1.7in,
width=2.7in}\\
{\bf E}&{\bf F}\\
\epsfig{file=6_5.png, height=1.8in, width=2.7in}&
\epsfig{file=6_6.png, height=1.8in, width=2.7in}\\
{\bf G}&{\bf H}\\
\epsfig{file=6_7.png, height=1.8in, width=2.7in}
&\epsfig{file=6_8.png, height=1.8in,
width=2.7in}\\
\end{tabular}
\end{center}
\small{Figure 9 : Non-planar algorithm processing diagram. A : visible voxel sets (blue) from the medial axis on a plane. B : calculating the sum of triangular areas using both the visible voxel sets and a medial voxel ($\nu_{k}$). C : connecting disjoint consecutive voxels using Dijkstra's algorithm. D : nixed voxel set (brown voxels). E : new region of interest. F : outer 26-connected perimeter voxel set. G : finding 4 points on the boundary of inner grain voxels. H : the inner grain boundary and the outer perimeter voxel set. }
\begin{center}
\begin{tabular}{ll}
{\bf A}\\
\epsfig{file=7_1.png, height=2.1in, width=5.2in}\\
{\bf B}\\
\epsfig{file=7_2.png, height=2.1in, width=5.1in}\\
{\bf C}\\
\epsfig{file=7_3.png, height=3.3in, width=5.0in}\\
\end{tabular}
\end{center}
\small{Figure 10 : The method for finding points on non-differentiable sections. A : perimeter voxels and their point set in the T shape. B : perimeter voxels and their point set in the U shape. C : perimeter voxels and their point set in the L shape.}
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=10_1.png, height=2.5in, width=3.3in}&
\epsfig{file=10_2.png, height=2.5in, width=3.6in}\\
{\bf C}&{\bf D}\\
\epsfig{file=10_3.png, height=2.1in, width=3.4in}
&\epsfig{file=10_4.png, height=2.5in,
width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 11 : Planar and non-planar throat types. The algorithms presented in this paper detect both planar and non-planar throats (red and light blue in A and B); the ideal throat is then chosen by taking the minimum of the two detected throats. Red : planar throat barrier voxels; light blue : non-planar throat barrier voxel set; purple : final throat barrier voxel set. }
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=case1.png, height=2.0in, width=3.3in}&
\epsfig{file=case2.png, height=2.0in, width=3.3in}\\
{\bf C}&{\bf D}\\
\epsfig{file=case3.png, height=2.0in, width=3.3in}
&\epsfig{file=case4.png, height=2.0in,width=3.3in}\\
{\bf E}\\
\epsfig{file=case5.png, height=2.0in, width=3.3in}
\\
\end{tabular}
\end{center}
\small{Figure 12 : Sphere pack sample with 5 representative throats. The blue area indicates one of the five representative throats. This sample has 3 layers of spheres and each layer has 9 spheres. }
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=pdf1.png, height=2.0in, width=3.1in}&
\epsfig{file=pdf2.png, height=2.0in, width=3.1in}\\
{\bf C}\\
\epsfig{file=pdf3.png, height=2.0in, width=3.1in}\\
\end{tabular}
\end{center}
\small{Figure 13 : Throat area (A), pore volume distribution (B), and coordination number (C) computed from the XCMT image of the Handford wet inlet t241 sample using 5 algorithms. }
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=7_4.png, height=2.5in, width=3.3in}&
\epsfig{file=7_5.png, height=2.5in, width=3.3in}\\
{\bf C}&{\bf D}\\
\epsfig{file=1234_2.png, height=2.5in, width=3.3in}
&\epsfig{file=1234_1.png, height=2.5in,
width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 14 : A, B : Example of calculating the exact linear distance (A) and circumference (B) using the point set algorithm. C, D : relative error in the digitized image of the real boundary: (C) a line at different angles; (D) circles with different radii.}
\begin{center}
\begin{tabular}{ll}
{\bf A}&{\bf B}\\
\epsfig{file=9_1.png, height=2.3in, width=3.3in}&
\epsfig{file=9_2.png, height=2.3in, width=3.3in}\\
{\bf C}&{\bf D}\\
\epsfig{file=9_3.png, height=2.3in, width=3.3in}
&\epsfig{file=9_4.png, height=2.3in,
width=3.3in}\\
{\bf E}\\
\epsfig{file=9_5.png, height=2.3in, width=3.3in}\\
\end{tabular}
\end{center}
\small{Figure 15 : Counterexample for finding the throat area using the sum of triangular areas (the 3DMA throat area calculation algorithm). The existing algorithm cannot calculate the exact throat area of this surface. A : the blue curve shows a perimeter of the surface; the 3DMA throat algorithm cannot calculate the exact throat in the orange region. C, D : different points of view of B. E : desired throat surface. }
\section{Introduction}
Finite hypergeometric functions were introduced independently, and with
different motivations, by John Greene \cite{greene} and Nick Katz \cite[p.258]{katz}
by the end of the 1980s.
They are complex-valued functions on finite fields and an analogue of the classical
analytic hypergeometric functions in one variable, also known as Thomae's functions.
To display the analogy, we first recall the definition of the analytic hypergeometric functions.
Consider two multisets
(sets with possibly repeating elements) $\bm{\alpha}=(\alpha_1,\ldots,\alpha_d)$
and $\bm{\beta}=(\beta_1,\ldots,\beta_d)$, where $\alpha_i,\beta_j\in{\msy Q}$ for all
$i,j$. We assume for the moment that none of the $\alpha_i,\beta_j$ is in ${\msy Z}_{\le0}$
and $\beta_d=1$.
The hypergeometric function with parameter sets $\bm{\alpha},\bm{\beta}$ is
the analytic function defined by the power series in $z$
$$_dF_{d-1}(\bm{\alpha},\bm{\beta}|z)=\sum_{n=0}^{\infty}
{(\alpha_1)_n\cdots(\alpha_d)_n\over(\beta_1)_n\cdots(\beta_d)_n}\ z^n\;.$$
Here $(x)_n=x(x+1)\cdots(x+n-1)$ denotes the so-called Pochhammer symbol
or rising factorial. Using the $\Gamma$-function we can rewrite the series as
$${\Gamma(\beta_1)\cdots\Gamma(\beta_d)\over\Gamma(\alpha_1)\cdots\Gamma(\alpha_d)}
\sum_{n=0}^{\infty}{\Gamma(\alpha_1+n)\cdots\Gamma(\alpha_d+n)\over\Gamma(\beta_1+n)
\cdots\Gamma(\beta_d+n)}\ z^n\;.$$
In order to display the similarity with the upcoming definition of finite hypergeometric
functions, using the identity $\Gamma(x)\Gamma(1-x)=\pi/\sin(\pi x)$ we rewrite this as
$$\sum_{n=0}^{\infty}\prod_{i=1}^d\left({\Gamma(\alpha_i+n)(1-\beta_i-n)
\over\Gamma(\alpha_i)\Gamma(1-\beta_i)}\right)\ (-1)^{dn}z^n\;.$$
If $\beta$ is a positive integer we interpret $\Gamma(1-\beta-n)/\Gamma(1-\beta)$ as
$(-1)^n(\beta-1)!/(n+\beta-1)!$, the limit of $\Gamma(1-x-n)/\Gamma(1-x)$ as $x\to\beta$.
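For illustration, the truncated series can be evaluated directly from the Pochhammer symbols; this is a small Python sketch with our own naming, keeping the convention above that $\beta_d=1$ is included among the lower parameters.

```python
from fractions import Fraction

def pochhammer(x, n):
    """Rising factorial (x)_n = x (x+1) ... (x+n-1)."""
    result = Fraction(1)
    for k in range(n):
        result *= x + k
    return result

def hyper_partial_sum(alphas, betas, z, terms):
    """Partial sum of sum_n  prod_i (alpha_i)_n / prod_i (beta_i)_n * z^n."""
    total = 0.0
    for n in range(terms):
        num = Fraction(1)
        den = Fraction(1)
        for a in alphas:
            num *= pochhammer(Fraction(a), n)
        for b in betas:
            den *= pochhammer(Fraction(b), n)
        total += float(num / den) * z ** n
    return total
```

For instance, with $\bm{\alpha}=(2)$, $\bm{\beta}=(1)$ the ratio of Pochhammer symbols is $n+1$ and the series sums to $(1-z)^{-2}$.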
We now define finite hypergeometric functions. Again take two multisets $\bm{\alpha}$
and $\bm{\beta}$, each consisting of $d$ elements in ${\msy Q}$. Assume from now on that
$\alpha_i\not\equiv\beta_j\mod{{\msy Z}}$ for all $i,j$. In the analytic case this condition
is equivalent to the irreducibility of the hypergeometric differential equation. In the finite case
we avoid certain degeneracies with this assumption. We need not assume $\beta_d=1$ any more.
Let ${\msy F}_q$ be the finite field with $q$ elements.
Let $\psi_q$ be a non-trivial additive character on ${\msy F}_q$
which we fix once and for all throughout this paper. For any multiplicative character
$\chi:\bbbf_q^{\times}\to{\msy C}^{\times}$ we define the Gauss sum
$$g(\chi)=\sum_{x\in\bbbf_q^{\times}}\chi(x)\psi_q(x)\;.$$
Let $\omega$ be a generator of the character group on $\bbbf_q^{\times}$
which we also fix throughout the paper. We use the notation
$g(m)=g(\omega^m)$ for any $m\in{\msy Z}$. Note that $g(m)$ is periodic in $m$ with
period $q-1$.
Very often we shall need characters on $\bbbf_q^{\times}$ of a given order.
For that we use the notation ${\mathbbm{q}}=q-1$ so that a character of order $d$ can be given by
$\omega^{{\mathbbm{q}}/d}$ for example, provided that $d$ divides ${\mathbbm{q}}$ of course.
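Over a prime field these Gauss sums are easy to compute numerically. The sketch below (the function name is ours) fixes $\psi_p(x)=e^{2\pi ix/p}$ and $\omega(g^k)=e^{2\pi ik/(p-1)}$ for a primitive root $g$; the basic identities $g(0)=-1$ and $g(m)g(-m)=\omega(-1)^mq$, recalled as Theorem \ref{cancelgauss} below, then serve as a check.

```python
import cmath

def gauss_sum_table(p, g):
    """Gauss sums g(m), m = 0..p-2, over F_p with psi(x) = e^{2 pi i x/p}
    and omega the character sending the primitive root g to e^{2 pi i/(p-1)}."""
    # discrete logarithms: dlog[g^k mod p] = k
    dlog, x = {}, 1
    for k in range(p - 1):
        dlog[x] = k
        x = x * g % p
    assert len(dlog) == p - 1          # g really is a primitive root
    table = []
    for m in range(p - 1):
        s = sum(cmath.exp(2j * cmath.pi * (m * dlog[y] / (p - 1) + y / p))
                for y in range(1, p))
        table.append(s)
    return table
```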
Now we define finite hypergeometric sums. Let again $\bm{\alpha}$ and $\bm{\beta}$ be multisets
of $d$ rational numbers each, and disjoint modulo ${\msy Z}$.
Suppose in addition that $q$ is such that
$$(q-1)\alpha_i,(q-1)\beta_j\in{\msy Z}$$
for all $i$ and $j$.
\begin{definition}[Finite hypergeometric sum]\label{definitionHq}
Keep the above notation. We define for any $t\in{\msy F}_q$,
$$H_q(\bm{\alpha},\bm{\beta}|t)={1\over 1-q}\sum_{m=0}^{q-2}
\prod_{i=1}^d\left({g(m+\alpha_i{\mathbbm{q}})g(-m-\beta_i{\mathbbm{q}})
\over g(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}})}\right)\ \omega((-1)^dt)^m\;.$$
\end{definition}
Note the analogy with the analytic hypergeometric function.
These sums were considered without the normalizing factor $(\prod_{i=1}^dg(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}}))^{-1}$
by Katz in \cite[p258]{katz}. Greene, in \cite{greene}, has a definition involving Jacobi sums which, after some
elaboration, amounts to
$$\omega(-1)^{|\bm{\beta}|{\mathbbm{q}}}q^{-d}\prod_{i=1}^d{g(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}})\over g(\alpha_i{\mathbbm{q}}-\beta_i{\mathbbm{q}})}
\ H_q(\bm{\alpha},\bm{\beta}|t)\;,$$
where $|\bm{\beta}|=\beta_1+\cdots+\beta_d$.
The normalization we adopt in this paper coincides with that of McCarthy, \cite[Def 3.2]{mccarthy}.
It took some time before people realized that Greene's finite hypergeometric functions were closely related
to point counting results on algebraic varieties over finite fields. As an example we mention the following
result, adapted to our notation.
\begin{theorem}[K.~Ono, 1998]\label{onotheorem}
Let $q$ be an odd prime power and $\lambda\in{\msy F}_q$ and $\lambda\ne0,1$. Let $E_{\lambda}$ be the projective
elliptic curve given by the affine equation $y^2=x(x-1)(x-\lambda)$ and $E_{\lambda}({\msy F}_q)$
the set of ${\msy F}_q$-rational points (including infinity). Then
$$|E_{\lambda}({\msy F}_q)|=q+1-(-1)^{(q-1)/2}H_q(1/2,1/2;1,1|\lambda)\;.$$
\end{theorem}
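Theorem \ref{onotheorem} can be verified numerically for small $q$. The sketch below (all helper names are ours) implements Definition \ref{definitionHq} directly over a prime field, with $\psi_q(x)=e^{2\pi ix/q}$ and $\omega$ fixed by a primitive root, and compares against a brute-force point count of $E_\lambda$.

```python
from fractions import Fraction
import cmath

def finite_hyp_H(p, g, alphas, betas, t):
    """H_p of Definition 1.1 over the prime field F_p, with
    psi(x) = e^{2 pi i x/p} and omega(g^k) = e^{2 pi i k/(p-1)}."""
    q1 = p - 1
    dlog, x = {}, 1
    for k in range(q1):
        dlog[x] = k
        x = x * g % p
    def gs(m):                          # Gauss sum g(omega^m); period q1 in m
        m %= q1
        return sum(cmath.exp(2j * cmath.pi * (m * dlog[y] / q1 + y / p))
                   for y in range(1, p))
    A = [int(Fraction(a) * q1) for a in alphas]   # (q-1) alpha_i, assumed integral
    B = [int(Fraction(b) * q1) for b in betas]
    d = len(alphas)
    s = ((-1) ** d * t) % p
    total = 0.0
    for m in range(q1):
        term = cmath.exp(2j * cmath.pi * m * dlog[s] / q1)  # omega((-1)^d t)^m
        for a, b in zip(A, B):
            term *= gs(m + a) * gs(-m - b) / (gs(a) * gs(-b))
        total += term
    return total / (1 - p)
```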
In this paper we propose a generalization of this theorem which applies to all hypergeometric sums which
are {\it defined over ${\msy Q}$}. By that we mean that both $\prod_{j=1}^d(x-e^{2\pi i\alpha_j})$ and
$\prod_{j=1}^d(x-e^{2\pi i\beta_j})$ are polynomials with coefficients in ${\msy Z}$. In other words, they
are products of cyclotomic polynomials. Let us assume we are in this case. Then we can find natural numbers
$p_1,\ldots,p_r$ and $q_1,\ldots,q_s$ such that
$$\prod_{j=1}^d{x-e^{2\pi i\alpha_j}\over x-e^{2\pi i\beta_j}}={\prod_{j=1}^r x^{p_j}-1\over
\prod_{j=1}^s x^{q_j}-1}\;.$$
Note that $p_1+\cdots+p_r=q_1+\cdots+q_s$. It is a small exercise to show that
$${(\alpha_1)_n\cdots(\alpha_d)_n\over(\beta_1)_n\cdots(\beta_d)_n}
=M^{-n}{(p_1n)!\cdots(p_rn)!\over(q_1n)!\cdots(q_sn)!},
\quad M={p_1^{p_1}\cdots p_r^{p_r}\over q_1^{q_1}\cdots q_s^{q_s}}\;.$$
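For instance, take $\bm{\alpha}=(1/2,1/2)$, $\bm{\beta}=(1,1)$, so that the quotient of cyclotomic products is $(x^2-1)^2/(x-1)^4$, giving $p_1=p_2=2$, $q_1=\cdots=q_4=1$ and $M=16$. The identity can then be checked exactly in rational arithmetic (a small sketch; the names are ours).

```python
from fractions import Fraction
from math import factorial

def pochhammer(x, n):
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

# alpha = (1/2, 1/2), beta = (1, 1)  <->  (x+1)^2/(x-1)^2 = (x^2-1)^2/(x-1)^4,
# so p = (2, 2), q = (1, 1, 1, 1) and M = 2^2 * 2^2 = 16.
def lhs(n):
    h = Fraction(1, 2)
    return pochhammer(h, n) ** 2 / pochhammer(Fraction(1), n) ** 2

def rhs(n):
    M = Fraction(16)
    return M ** -n * Fraction(factorial(2 * n) ** 2, factorial(n) ** 4)
```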
It turns out that when the hypergeometric parameters are defined over ${\msy Q}$, it is possible to
extend the definition of $H_q$ to all prime powers $q$ which are relatively prime to the common
denominator of the $\alpha_i,\beta_j$.
Let $D(X)$ be the greatest common divisor of
the polynomials $\prod_{i=1}^r(X^{p_i}-1)$ and $\prod_{j=1}^s(X^{q_j}-1)$.
We rewrite our Katz sum in a different shape.
\begin{theorem}\label{rewriteZ}
With the above notation we have
$$H_q(\bm{\alpha},\bm{\beta}|t)={(-1)^{r+s}\over 1-q}
\sum_{m=0}^{q-2}q^{-s(0)+s(m)}g(\v pm,-\v qm)
\ \omega(\epsilon M^{-1}t)^m\;,$$
where
$$g(\v pm,-\v qm)=g(p_1m)\cdots g(p_rm)g(-q_1m)\cdots g(-q_sm),
\quad M=\prod_{j=1}^rp_j^{p_j}\prod_{j=1}^s q_j^{-q_j}$$
and $\epsilon=(-1)^{\sum_iq_i}$
and $s(m)$ is the multiplicity of the zero $e^{2\pi im/{\mathbbm{q}}}$ in $D(X)$.
\end{theorem}
This theorem is proven in Section \ref{overQ}.
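As a sanity check (ours, with the combinatorial data of one example hard-coded), the right-hand side of Theorem \ref{rewriteZ} can be compared numerically with Definition \ref{definitionHq} for $\bm{\alpha}=(1/2,1/2)$, $\bm{\beta}=(1,1)$ over ${\msy F}_5$; here $D(X)=(X-1)^2$, so $s(0)=2$ and $s(m)=0$ otherwise.

```python
import cmath

P, Q_EXP = (2, 2), (1, 1, 1, 1)        # (x^2-1)^2/(x-1)^4 = (x+1)^2/(x-1)^2
M = 16                                  # 2^2 * 2^2 / 1
p = 5                                   # work over F_5; gcd(M, p) = 1
q1 = p - 1
g0 = 2                                  # primitive root mod 5
dlog, x = {}, 1
for k in range(q1):
    dlog[x] = k
    x = x * g0 % p

def gs(m):                              # Gauss sum g(omega^m)
    m %= q1
    return sum(cmath.exp(2j * cmath.pi * (m * dlog[y] / q1 + y / p))
               for y in range(1, p))

def omega_pow(a, m):                    # omega(a)^m
    return cmath.exp(2j * cmath.pi * m * dlog[a % p] / q1)

def H_def(t):                           # Definition 1.1, alpha=(1/2,1/2), beta=(1,1)
    A, B = [q1 // 2] * 2, [q1] * 2      # (q-1)/2 and q-1
    tot = 0
    for m in range(q1):
        term = omega_pow(t, m)          # d = 2, so omega((-1)^d t) = omega(t)
        for a, b in zip(A, B):
            term *= gs(m + a) * gs(-m - b) / (gs(a) * gs(-b))
        tot += term
    return tot / (1 - p)

def H_rewrite(t):                       # right-hand side of Theorem 1.3
    # D(X) = gcd((X^2-1)^2, (X-1)^4) = (X-1)^2: s(0) = 2, s(m) = 0 otherwise
    s = lambda m: 2 if m % q1 == 0 else 0
    eps = (-1) ** sum(Q_EXP)            # = +1 here
    Minv_t = pow(M, -1, p) * t % p
    tot = 0
    for m in range(q1):
        term = p ** (s(m) - s(0)) * omega_pow(eps * Minv_t, m)
        for pi in P:
            term *= gs(pi * m)
        for qi in Q_EXP:
            term *= gs(-qi * m)
        tot += term
    return (-1) ** (len(P) + len(Q_EXP)) * tot / (1 - p)
```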
\begin{assumption}
From now on, when we work over ${\msy Q}$,
we adopt the right-hand side of Theorem \ref{rewriteZ} as definition of $H_q$.
\end{assumption}
Let $\lambda\in\bbbf_q^{\times}$ and let $V_{\lambda}$ be the affine variety defined by the projective equations
\begin{equation}\label{Vequation}
x_1+x_2+\cdots+x_r-y_1-\cdots-y_s=0,\qquad \lambda x_1^{p_1}\cdots x_r^{p_r}=y_1^{q_1}\cdots y_s^{q_s}
\end{equation}
and $x_j,y_j\ne0$. The main theorem of this paper reads as follows.
\begin{theorem}\label{main}
Let the notation for $p_i,q_j,M$ be as above. Suppose that the greatest common divisor of
$p_1,\ldots,p_r,q_1,\ldots,q_s$ is one and suppose that $M\lambda\ne 1$. Then there
exists a suitable non-singular completion of $V_{\lambda}$, denoted by $\overline{V_{\lambda}}$, such that
$$|\overline{V_{\lambda}}({\msy F}_q)|=P_{rs}(q)+(-1)^{r+s-1}q^{\min(r-1,s-1)}H_q(\bm{\alpha},\bm{\beta}|M\lambda)\;,$$
where
$$P_{rs}(q)=\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}{q^{r+s-m-2}-q^m\over q-1}$$
and $\overline{V_{\lambda}}({\msy F}_q)$ is the set of ${\msy F}_q$-rational points on $\overline{V_{\lambda}}$.
\end{theorem}
What we mean by a suitable non-singular completion of $V_{\lambda}$ is elaborated in
Section \ref{completion}, more precisely Definition \ref{completiondefinition}. Furthermore,
one easily checks that the condition $M\lambda\ne0,1$ corresponds to the non-singularity
of $V_{\lambda}$.
Here we give a few examples to illustrate the theorem.
\begin{corollary}Let $f(x)=x^3+3x^2-4t$ with $t\in{\msy F}_q$ and $t\ne0,1$. Let $N_f(t)$ be the
number of zeros of $f(x)$ in ${\msy F}_q$. Suppose that $q$ is not divisible by $2$ or $3$. Then
$$N_f(t)=1+H_q(1/3,2/3;1,1/2|t)\;.$$
\end{corollary}
\begin{proof}{} We take $\alpha_1=1/3,\alpha_2=2/3,\beta_1=1,\beta_2=1/2$ and note that
$${(x-e^{2\pi i/3})(x-e^{4\pi i/3})\over(x-1)(x+1)}={x^3-1\over(x-1)(x^2-1)}\;.$$
So $p_1=3,\ q_1=1,\ q_2=2$, and $M=27/4$. The variety $V_{\lambda}$ is given by the equations
$x-y_1-y_2=0,\lambda x^3=y_1y_2^2$. Eliminate $y_1$ and set $y_2=1$ (dehomogenization) to get
$x-\lambda x^3-1=0$. Replace $x$ by $-3/x$ to get $x^3+3x^2-27\lambda=0$. Application of
Theorem \ref{main} and the relation $t=27\lambda/4$ gives the desired result.
\end{proof}
\begin{corollary}Consider the elliptic curve $y^2+xy+y=\lambda x^3$ with $\lambda\in{\msy F}_q$.
Denote its completion with the point at infinity by $E_{\lambda}$. Suppose that $q$ is not divisible
by $2$ or $3$. Then
$$|E_{\lambda}({\msy F}_q)|=q+1-H_q(1/3,2/3;1,1|27\lambda)\;.$$
\end{corollary}
\begin{proof}{}
We take $\alpha_1=1/3,\alpha_2=2/3,\beta_1=1,\beta_2=1$ and note that
$${(x-e^{2\pi i/3})(x-e^{4\pi i/3})\over(x-1)^3}={x^3-1\over(x-1)^3}\;.$$
So $p_1=3,\ q_1=q_2=q_3=1$, and $M=27$. The variety $V_{\lambda}$ is given by the equations
$$x_1-y_1-y_2-y_3=0,\lambda x_1^3=y_1y_2y_3\;.$$
Eliminate $y_3$ and set $x_1=1$ to dehomogenize. We get
$$1-y_1-y_2-\lambda y_1^{-1}y_2^{-1}=0\;.$$
Introduce new coordinates $x,y$ via
$$y_1=-y/x,\qquad y_2=-1/x\;.$$
Note that this is a birational map. In the new coordinates we get the curve
$$y^2+xy+y=\lambda x^3\;.$$
Theorem \ref{main} tells us that the number of ${\msy F}_q$-rational points (including $\infty$) equals
$$q+1-H_q(1/3,2/3;1,1|27\lambda)\;.$$
\end{proof}
\begin{corollary} Consider the rational elliptic surface $S_{\lambda}$
given by the affine equation
$$y^2-xyz+x^3+z^5-\lambda^{-1}=0$$
with $\lambda\in\bbbf_q^{\times}$. Suppose that $q$ is not divisible by $2,3$ or $5$ and
$2^{14}3^95^5\lambda\ne 1$. Let $\overline{S_{\lambda}}$ be a suitable completion of $S_{\lambda}$.
Then
$$|\overline{S_{\lambda}}({\msy F}_q)|=q^2+3q+1+qH_q(\bm{\alpha},\bm{\beta}|2^{14}3^95^5\lambda)\;,$$
where
$$\bm{\alpha}=(1/30,7/30,11/30,13/30,17/30,19/30,23/30,29/30)$$ and
$$\bm{\beta}=(1/5,1/3,2/5,1/2,3/5,2/3,4/5,1)\;.$$
Moreover, $H_q(\bm{\alpha},\bm{\beta}|t)$ is integer valued.
\end{corollary}
From \cite{beukersheckman} it follows that the analytic function $_8F_7(\bm{\alpha},\bm{\beta}|z)$
is an algebraic function. Fernando Rodriguez-Villegas computed its degree over ${\msy C}(z)$
\cite{frv}, which is $483840$. He also noted that
$$_8F_7(\bm{\alpha},\bm{\beta}| 2^{14}3^95^5t)=\sum_{n\ge0}{(30n)!n!\over(15n)!(10n)!(6n)!}\ t^n\;.$$
The coefficients turn out to be integers and they were essentially used by Chebyshev
in the proof of his estimates for the prime counting function $\pi(x)$.
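The integrality of these coefficients is easy to check directly for small $n$ (a sketch; the function name is ours).

```python
from math import factorial

def chebyshev_ratio(n):
    """The coefficient (30n)! n! / ((15n)! (10n)! (6n)!), computed exactly."""
    num = factorial(30 * n) * factorial(n)
    den = factorial(15 * n) * factorial(10 * n) * factorial(6 * n)
    assert num % den == 0              # divisibility holds exactly
    return num // den
```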
Another remark is that $y^2-xyz+x^3+z^5-\lambda^{-1}=0$ is a rational elliptic surface for any given
$\lambda$. The $\zeta$-function of such surfaces has been computed extensively by Shioda
in \cite{shioda}. There it turns out that the global $\zeta$-function of such a surface has the form
$\zeta(s)\zeta(s-1)^2\zeta(s-2)L(\rho,s-1)$, where $\zeta(s)$ is the Riemann $\zeta$ function and
$L(\rho,s)$ is the Artin $L$-series corresponding to a finite representation of ${\rm Gal}(\overline{{\msy Q}}/{\msy Q})$
of dimension $8$.
\medskip
\begin{proof}{}We
take the elements of $\bm{\alpha}$ for the $\alpha_i$ and the elements of $\bm{\beta}$ for the
$\beta_j$. We verify that
$$\prod_{j=1}^8 {x-e^{2\pi i\alpha_j}\over x-e^{2\pi i\beta_j}}=
{(x^{30}-1)(x-1)\over(x^{15}-1)(x^{10}-1)(x^6-1)}\;.$$
Compare the exponents with the remarks above. We have $M=2^{14}3^95^5$. Theorem \ref{main} implies that
$$q^2+3q+1+qH_q(\bm{\alpha},\bm{\beta}|M\lambda)$$
equals the number of ${\msy F}_q$-rational points on a suitable completion of
$$x_1+x_2-y_1-y_2-y_3=0,\qquad \lambda x_1^{30}x_2=y_1^{15}y_2^{10}y_3^6\;.$$
Eliminate $x_2$ and set $x_1=1$. We obtain
$$1+\lambda^{-1}y_1^{15}y_2^{10}y_3^6-y_1-y_2-y_3=0\;.$$
Substitute $y_1=x^{-1}yz^{-1},y_2=x^2y^{-1}z^{-1},y_3=x^{-1}y^{-1}z^4$. Note that
this monomial substitution is reversible because the matrix of exponents has determinant
$-1$. We obtain
$$-xyz-\lambda^{-1}+y^2+x^3+z^5=0\;.$$
The integrality of the values of $H_q$ follows from Theorem \ref{denominatorHq}.
\end{proof}
\medskip
Surprisingly enough, Theorem \ref{main} does not immediately imply Ono's Theorem \ref{onotheorem}.
In this case we get the parameters $p_1=p_2=2,q_1=q_2=q_3=q_4=1$ and the threefold
$$x_1+x_2-y_1-y_2-y_3-y_4=0,\qquad \lambda x_1^2x_2^2=y_1y_2y_3y_4\;,$$
instead of the expected Legendre family of elliptic curves. However in this case we
have the relations $p_1=q_1+q_2,p_2=q_3+q_4$ besides the overall relation $p_1+p_2=q_1+\cdots+q_4$.
In such a case we can construct another variety whose point count also yields the desired hypergeometric
sum. We indicate how this works in Theorem \ref{legendrecount} in the last section of this paper.
This theorem does yield Legendre's family for our example.
{\bf Acknowledgements:} We are greatly indebted to Fernando Rodriguez-Villegas who inspired the subject,
and who co-organized the AIM/ICTP meeting on 'Hypergeometric motives' at Trieste in
July 2012. During this stimulating meeting the idea of reverse engineering hypergeometric
motives on an industrial scale arose. We thank both AIM and ICTP for
their hospitality.
We would also like to thank the Mathematisches Forschungsinstitut Oberwolfach
for providing the wonderful atmosphere during the workshop 'Explicit methods in number theory'
in July 2013. Here the final form of the present paper was conceived.
Finally we thank the anonymous referee for some pertinent questions which led to a
substantial improvement of the manuscript.
\section{Gauss sums, basic properties}
We use the notation given in the introduction and recall some basic theorems.
The references we use are the second author's books on Number
Theory, \cite{cohen1}, \cite{cohen2} (where the notation $\tau(\chi,\psi)$ is
used instead of $g(\chi)$, though).
\begin{theorem}\label{cancelgauss}We have
\begin{enumerate}
\item $g(0)=-1$
\item $g(m)g(-m)=\omega(-1)^mq$ if $m\not\is0\mod{{\mathbbm{q}}}$.
\end{enumerate}
\end{theorem}
This is Lemma 2.5.8 and Proposition 2.5.9 of \cite{cohen1}.
\begin{theorem}\label{jacobisum}
Define for any two integers $m,n$ the Jacobi sum
$$J(m,n)={g(m)g(n)\over g(n+m)}\;.$$
Then,
$$J(m,n)=\cases{\sum_{x\in{\msy F}_q\setminus\{0,1\}}\omega(1-x)^m\omega(x)^n & \mbox{if $m,n,m+n\not\is0\mod{{\mathbbm{q}}}$}\cr
-1 & \mbox{if $m\is0\mod{{\mathbbm{q}}}$ or $n\equiv 0\mod{{\mathbbm{q}}} $}\cr
\omega(-1)^mq & \mbox{if $m\equiv-n\mod{{\mathbbm{q}}}$ and $m\not\is0\mod{{\mathbbm{q}}}$}\cr}$$
\end{theorem}
This follows from Corollary 2.5.17 of \cite{cohen1}, although we slightly
perturbed the definition of $J$.
A simple consequence is the following.
\begin{lemma}\label{integralJ}
For every pair of integers $m,n$ the quotient $g(m)g(n)/g(m+n)$ is an algebraic
integer in ${\msy Q}(\zeta_{{\mathbbm{q}}})$, where $\zeta_{{\mathbbm{q}}}$ is a primitive $(q-1)$-st root of
unity.
\end{lemma}
In Theorems \ref{denominatorHqgeneral} and \ref{denominatorHq}
(in the case over ${\msy Q}$),
we will compute a bound for the denominator of $H_q$. To do so we need a statement on the prime ideal factorization of Gauss sums. This is provided by Stickelberger's theorem.
We use Theorem 3.6.6 of \cite{cohen1}, in a weaker form.
\begin{theorem}[Stickelberger] \label{stickelberger}
Let ${\mathfrak{p}}$ be the ideal divisor of $p$ in ${\msy Z}[\zeta_{{\mathbbm{q}}}]$ such that $\omega^{-1}(x)\equiv x\mod{{\mathfrak{p}}}$
for all $x\in{\msy F}_q$. Note that ${\mathfrak{p}}$ has degree $f$, where $p^f=q$.
Let ${\mathfrak{P}}$ be the totally ramified prime ideal above ${\mathfrak{p}}$ in ${\msy Z}[\zeta_{{\mathbbm{q}}},\zeta_p]$.
Let $0\le r<q-1$. Then
$g(r)$ is exactly divisible by ${\mathfrak{P}}^{\sigma(r)}$, where $\sigma(r)$ is the sum of the digits $r_i$ in the base
$p$ decomposition $r=r_0+r_1p+\cdots+r_{f-1}p^{f-1}$.
An alternative description of $\sigma(r)$, without the restriction $0\le r<q-1$, is given by
$$ \sigma(r)=(p-1)\sum_{i=1}^f\left\{{p^ir\over q-1}\right\}\;.$$
\end{theorem}
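The equivalence of the two descriptions of $\sigma(r)$ is elementary arithmetic and can be tested directly (a small sketch; the function names are ours).

```python
from fractions import Fraction

def digit_sum(r, p, f):
    """Sum of the base-p digits of r, for 0 <= r < p^f - 1."""
    return sum((r // p ** i) % p for i in range(f))

def sigma_formula(r, p, f):
    """(p-1) * sum_{i=1}^f {p^i r / (q-1)} with q = p^f, in exact arithmetic."""
    q = p ** f
    total = sum(Fraction(p ** i * r, q - 1) % 1 for i in range(1, f + 1))
    return (p - 1) * total
```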
In order to rewrite $H_q$ when it is defined over ${\msy Q}$ we use the following result.
\begin{theorem}[Hasse-Davenport]\label{hassedavenport}
For any $N\in{\msy N}$ dividing ${\mathbbm{q}}$ we have
$$g(Nm)=-\omega(N)^{Nm}\prod_{j=0}^{N-1}{g(m+j{\mathbbm{q}}/N)\over g(j{\mathbbm{q}}/N)}\;.$$
\end{theorem}
This is Theorem 3.7.3 of \cite{cohen1}. Note the analogy with Euler's identity
$$\Gamma(Nz)=N^{Nz-1}{\prod_{j=0}^{N-1}\Gamma(z+j/N)\over
\prod_{j=1}^{N-1}\Gamma(j/N)}$$
for the $\Gamma$-function. The following proposition will be used
repeatedly. It is Fourier inversion on the multiplicative characters.
\begin{proposition}\label{fourier} Let $G:\bbbf_q^{\times}\to{\msy C}$ be a function. Then,
for any $\lambda\in\bbbf_q^{\times}$ we have
$$G(\lambda)={1\over q-1}\sum_{m=0}^{q-2}G_m\omega(\lambda)^m\;,$$
where
$$G_m=\sum_{\lambda\in\bbbf_q^{\times}}G(\lambda)\omega(\lambda)^{-m}\;.$$
\end{proposition}
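This inversion is character orthogonality and is easy to verify numerically; the following sketch (naming ours) recovers an arbitrary function on $\bbbf_7^{\times}$ from its coefficients $G_m$.

```python
import cmath

p, g0 = 7, 3                           # F_7 with primitive root 3
q1 = p - 1
dlog, x = {}, 1
for k in range(q1):
    dlog[x] = k
    x = x * g0 % p

def omega_pow(a, m):                   # omega(a)^m
    return cmath.exp(2j * cmath.pi * m * dlog[a % p] / q1)

# An arbitrary test function G on F_7^x
G = {a: complex(a * a % p, a % 3) for a in range(1, p)}

# G_m = sum_lambda G(lambda) omega(lambda)^{-m}
coeffs = [sum(G[lam] * omega_pow(lam, -m) for lam in range(1, p))
          for m in range(q1)]

# G(lambda) = (1/(q-1)) sum_m G_m omega(lambda)^m
recovered = {lam: sum(coeffs[m] * omega_pow(lam, m) for m in range(q1)) / q1
             for lam in range(1, p)}
```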
Here is an application for later use.
\begin{lemma}\label{gaussproduct}
Let $\v a=(a_1,\ldots,a_n)\in{\msy Z}^n$. Define $a_{n+1}=-a_1-\cdots-a_n$ and
$a=\gcd(a_1,\ldots,a_n)$.
Then, for any integer $m$,
$$\sum_{v\in{\msy F}_q,\v x\in(\bbbf_q^{\times})^n}\psi_q(v(1+x_1+\cdots+x_n))\omega(\v x^{\v a})^m
=(q-1)^n\delta(\v a m)+g(a_1m)\cdots g(a_{n+1}m)\;,$$
where we use the vector notations $\v x=(x_1,\ldots,x_n)$, $\v x^{\v a}=x_1^{a_1}\cdots x_n^{a_n}$ and
$\delta(\v x)=\delta(x_1)\cdots\delta(x_n)$ and $\delta(x)=1$ if $x\is0\mod{{\mathbbm{q}}}$ and $0$ otherwise.
\end{lemma}
\begin{proof}{}We
carry out the summation over $x_1,\ldots,x_n$ and $v=0$ to get
$$\sum_{\v x\in(\bbbf_q^{\times})^n}\omega(\v x^{\v a})^m=(q-1)^n\prod_{i=1}^n \delta(a_im)=(q-1)^n\delta(\v a m)\;.$$
The summation over $x_1,\ldots,x_n$ and $v\in\bbbf_q^{\times}$ yields
\begin{eqnarray*}
\sum_{v,x_i\in\bbbf_q^{\times}}\psi_q(v(1+x_1+\cdots+x_n))\omega(\v x^{\v a})^m
&=&\sum_{v\in\bbbf_q^{\times}}g(a_1m)\cdots g(a_nm)\psi_q(v)\omega(v^{(-a_1-\cdots-a_n)m})\\
&=&g(a_1m)\cdots g(a_{n+1}m).
\end{eqnarray*}
From these two summations our result follows.
\end{proof}
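Lemma \ref{gaussproduct} can likewise be verified by brute force over a small field (a sketch with our naming, over ${\msy F}_5$).

```python
import cmath
from itertools import product

p, g0 = 5, 2                           # F_5 with primitive root 2
q1 = p - 1
dlog, x = {}, 1
for k in range(q1):
    dlog[x] = k
    x = x * g0 % p

def psi(a):                            # additive character psi(a) = e^{2 pi i a/p}
    return cmath.exp(2j * cmath.pi * (a % p) / p)

def omega_pow(a, m):                   # omega(a)^m
    return cmath.exp(2j * cmath.pi * m * dlog[a % p] / q1)

def gs(m):                             # Gauss sum g(omega^m)
    return sum(omega_pow(y, m) * psi(y) for y in range(1, p))

def lemma_lhs(a, m):                   # a = (a_1, ..., a_n)
    n = len(a)
    tot = 0
    for v in range(p):
        for xs in product(range(1, p), repeat=n):
            w = v * (1 + sum(xs)) % p
            chi = 1
            for ai, xi in zip(a, xs):
                chi *= omega_pow(xi, ai * m)
            tot += psi(w) * chi
    return tot

def lemma_rhs(a, m):
    n = len(a)
    delta = all(ai * m % q1 == 0 for ai in a)   # delta(a m), componentwise
    a_full = list(a) + [-sum(a)]                # append a_{n+1}
    prod_g = 1
    for ai in a_full:
        prod_g *= gs(ai * m)
    return (q1 ** n if delta else 0) + prod_g
```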
\section{Katz hypergeometric functions}\label{katzfunction}
Let $\bm{\alpha}=(\alpha_1,\ldots,\alpha_d)$ and $\bm{\beta}=(\beta_1,\ldots,\beta_d)$ be
disjoint multisets in ${\msy Q}/{\msy Z}$. We assume that both $(q-1)\bm{\alpha}$ and
$(q-1)\bm{\beta}$ are contained in ${\msy Z}$. In \cite[p.258]{katz}
Katz considered for any $t\in\bbbf_q^{\times}$ the exponential sum
$$Hyp_q(\bm{\alpha},\bm{\beta}|t)=
\sum_{(\v x,\v y)\in T_{t}}\psi_q(x_1+\cdots+x_d-y_1-\cdots-y_d)
\ \omega(\v x)^{\bm{\alpha}{\mathbbm{q}}}\omega(\v y)^{-\bm{\beta}{\mathbbm{q}}}\;,$$
where $T_{t}$ is the toric variety defined by $tx_1\cdots x_d=y_1\cdots y_d$ and
$\omega(\v x)^{\bm{\alpha}{\mathbbm{q}}}$ stands for $\omega(x_1)^{\alpha_1{\mathbbm{q}}}\cdots \omega(x_d)^{\alpha_d{\mathbbm{q}}}$, etc.
Note that we have taken Katz's formula for the case $n=m=d$. Define
$$S_q(\bm{\alpha},\bm{\beta}|t)={1\over q-1}
\sum_{m=0}^{q-2}\prod_{i=1}^d\ g(m+\alpha_i{\mathbbm{q}})g(-m-\beta_i{\mathbbm{q}})
\ \omega((-1)^dt)^m\;.$$
\begin{proposition}[Katz]\label{exponentialsum}
With the above notation we have
$$Hyp_q(\bm{\alpha},\bm{\beta}|t)=\omega(-1)^{|\bm{\beta}|{\mathbbm{q}}}S_q(\bm{\alpha},\bm{\beta}|t)$$
for all $t\in\bbbf_q^{\times}$, where $|\bm{\beta}|=\sum_i\beta_i$.
\end{proposition}
\begin{proof}{}We consider
$Hyp_q(\bm{\alpha},\bm{\beta}|t)$ as a function of $t$ and determine the $m$-th coefficient of its
Fourier expansion. Using Proposition \ref{fourier}, this reads
$${1\over q-1}\sum_{t,x_i,y_j\in\bbbf_q^{\times},tx_1\cdots x_d=y_1\cdots y_d}
\psi_q(x_1+\cdots+x_d-y_1-\cdots-y_d)
\ \omega(\v x)^{\bm{\alpha}{\mathbbm{q}}}\omega(\v y)^{-\bm{\beta}{\mathbbm{q}}}\omega(t)^{-m}\;.$$
We substitute $t=(y_1\cdots y_d)(x_1\cdots x_d)^{-1}$ to get
$${1\over q-1}\sum_{x_i,y_j\in\bbbf_q^{\times}}
\psi_q(x_1+\cdots+x_d-y_1-\cdots-y_d)
\ \omega(\v x)^{m+\bm{\alpha}{\mathbbm{q}}}\omega(\v y)^{-m-\bm{\beta}{\mathbbm{q}}}\;.$$
Summation over the $2d$ variables $x_i,y_j$ then gives the desired result.
\end{proof}
\medskip
Notice that $Hyp_q$ (and $S_q$) are algebraic integers in the field
${\msy Q}(\zeta_p,\zeta_{q-1})$, where $\zeta_n$ denotes a primitive $n$-th
root of unity. A simple consideration shows that $S_q$ is in
${\msy Q}(\zeta_{q-1})$ if $\sum_{i=1}^d(\alpha_i-\beta_i)\in {\msy Z}$.
Moreover we have the following divisibility properties.
\begin{proposition}
For any integer $m$ the product $\prod_{i=1}^dg(m+\alpha_i{\mathbbm{q}})g(-m-\beta_i{\mathbbm{q}})$
is divisible by $g(|\bm{\alpha}-\bm{\beta}|{\mathbbm{q}})$, where $|\bm{\alpha}-\bm{\beta}|=
\sum_{i=1}^d(\alpha_i-\beta_i)$, and by
$\prod_{i=1}^dg((\alpha_i-\beta_i){\mathbbm{q}})$. Moreover, the quotients are in ${\msy Q}(\zeta_{q-1})$.
\end{proposition}
This is a direct consequence of Lemma \ref{integralJ}.
Consequently $S_q(\bm{\alpha},\bm{\beta}|t)$ is divisible by these
numbers for any $t\in\bbbf_q^{\times}$. This proposition suggests that we
might normalize $S_q$ by the factor $1/g(|\bm{\alpha}-\bm{\beta}|{\mathbbm{q}})$
or by $1/\prod_ig((\alpha_i-\beta_i){\mathbbm{q}})$. The latter normalization
is taken by John Greene in his definition of finite
hypergeometric sums, see \cite{greene}. More precisely, Greene uses the
function
$${\pm q^{1-d}\over \prod_ig((\alpha_i-\beta_i){\mathbbm{q}})}S_q(\bm{\alpha},\bm{\beta}|t)\;.$$
We will adopt the normalization from the introduction. It keeps the symmetry in the $\alpha_i$ and
$\beta_j$, but accepts the possibility
that the function values may not be integers. Namely,
\begin{equation}\label{normalize}
H_q(\bm{\alpha},\bm{\beta}|t)=-\prod_{i=1}^d{1\over g(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}})}
S_q(\bm{\alpha},\bm{\beta}|t)\;.
\end{equation}
To get a bound on the denominator of $H_q$ we introduce the {\it Landau function}
$$\lambda(\bm{\alpha},\bm{\beta},x)=\sum_{i=1}^d\left(\{x+\alpha_i\}
-\{\alpha_i\}+\{-x-\beta_i\}-\{-\beta_i\}\right)\;,$$
where $\{x\}$ denotes the fractional part of the real number $x$.
\begin{theorem}\label{denominatorHqgeneral} Keep the above notation and let
$N$ be the common denominator of the $\alpha_i,\beta_j$. Define
$$\lambda=-\min_{x\in[0,1],\gcd(k,N)=1}\lambda(k\bm{\alpha},k\bm{\beta},x)\;.$$
Then $q^{\lambda}H_q(\bm{\alpha},\bm{\beta}|t)$ is an algebraic integer.
\end{theorem}
\begin{proof}{}The
definition of $Hyp_q$ and Proposition \ref{exponentialsum} imply that $S_q(\bm{\alpha},
\bm{\beta}|t)$ is an algebraic integer. Since $H_q$ is a normalization of $S_q$ by a
factor $\prod_{i=1}^dg(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}})$ the denominator of $H_q$ is a power of
$p$. We determine the $p$-adic valuation of the coefficients of the expansion of $H_q$.
Let ${\mathfrak{p}}$ be defined as in Theorem \ref{stickelberger}. Then the ${\mathfrak{p}}$-adic valuation
of the $m$-th coefficient of $H_q(\bm{\alpha},\bm{\beta}|t)$ equals
$$\sum_{j=1}^d\sum_{i=0}^{f-1}\left(\left\{p^i{m\over q-1}+\alpha_j\right\}+\left\{p^i{-m\over q-1}-\beta_j\right\}
-\{\alpha_j\}-\{-\beta_j\}\right)\;.$$
Note that this is greater than or equal to $-f\lambda$. Let $\sigma\in{\rm Gal}({\msy Q}(\zeta_{q-1})/{\msy Q})$ and $k$
such that $\sigma(\zeta_{q-1})=\zeta_{q-1}^k$. Then the $\sigma({\mathfrak{p}})$-adic valuations of the
coefficients of $H_q(\bm{\alpha},\bm{\beta}|t)$ are equal to the
${\mathfrak{p}}$-adic valuations of the coefficients of
$H_q(k^*\bm{\alpha},k^*\bm{\beta}|t)$, where $kk^*\is1\mod{q-1}$.
The latter valuations are also bounded below by $-f\lambda$, so our
theorem follows.
\end{proof}
\medskip
We end by noting some obvious identities for $S_q$.
\begin{theorem}\label{functional1}
Let $\bm{\alpha},\bm{\beta}$ be as before.
Let $\mu\in{\msy Q}$ be such that $\mu{\mathbbm{q}}\in{\msy Z}$ and denote
$(\alpha_1+\mu,\ldots,\alpha_d+\mu)$ by $\mu+(\alpha_1,\ldots,\alpha_d)$, etc.
For any $t\in\bbbf_q^{\times}$ we have
$$\omega(t)^{\mu{\mathbbm{q}}}S_q(\mu+\bm{\alpha},\mu+\bm{\beta}|t)=S_q(\bm{\alpha},\bm{\beta}|t)\;.$$
Hence, by \fref{normalize},
$$\omega(t)^{\mu{\mathbbm{q}}}H_q(\mu+\bm{\alpha},\mu+\bm{\beta}|t)=\prod_{i=1}^d
{g((\mu+\alpha_i){\mathbbm{q}})g(-(\mu+\beta_i){\mathbbm{q}})\over g(\alpha_i{\mathbbm{q}})g(-\beta_i{\mathbbm{q}})}
H_q(\bm{\alpha},\bm{\beta}|t).$$
\end{theorem}
This follows directly from the definition of $S_q$ and the replacement of the summation variable
$m$ by $m-\mu{\mathbbm{q}}$. We recall the analytic functions
$_2F_1(\alpha,\beta,\gamma|z)$ and $z^{1-\gamma}
\ _2F_1(\alpha+1-\gamma,\beta+1-\gamma,2-\gamma|z)$ (when $\gamma$ is not an integer)
which form a basis of solutions around $z=0$ for the Gauss hypergeometric equation.
Note that they can be considered as analogues of the functions in Theorem \ref{functional1}
with $\mu=1-\gamma$. The latter, however, differ only by a factor independent of $t$;
in contrast with the finite case, the analytic functions are linearly independent over the
complex numbers.
We also have
\begin{theorem}\label{functional2}
Let $\bm{\alpha},\bm{\beta}$ be as before
and suppose $t\in\bbbf_q^{\times}$. Then
$$S_q(\bm{\alpha},\bm{\beta}|t)=S_q(-\bm{\beta},-\bm{\alpha}|1/t)\;.$$
\end{theorem}
Again the proof follows directly from the definition of $S_q$ and replacing
the summation variable $m$ by $-m$.
\ifx
Some remarks on weight and Hodge polynomial. Suppose that the $\alpha_i,\beta_j$ are
in the interval $[0,1)$. Let $m_j=\#\{i|\beta_i<\alpha_j\}$ for $j=1,\ldots,d$.
Then, up to a power of $T$ the Hodge polynomial is given by
$$h(T)=\sum_{j=1}^d T^{m_j-j}\;.$$
The weight of the motive is $\max_j(m_j-j)-\min_j(m_j-j)-1$.
It follows from \cite[Thm 4.5]{beukersheckman} that the signature of the invariant
hermitian form is $|h(-1)|$. And of course $d=h(1)$.
\fi
\section{Hypergeometric motive over ${\msy Q}$}\label{overQ}
From now on we concentrate on the case when the Katz sums are defined over ${\msy Q}$.
In other words, as in the introduction, we assume the following:
\begin{assumption}
There exist natural numbers $p_1,\ldots,p_r,q_1,\ldots,q_s$ such that
$$\prod_{j=1}^d{X-e^{2\pi i\alpha_j}\over X-e^{2\pi i\beta_j}}
={\prod_{i=1}^r(X^{p_i}-1)\over \prod_{j=1}^s(X^{q_j}-1)}\ {\rm and}\ \gcd(p_1,\ldots,q_s)=1\;.$$
\end{assumption}
We now prove Theorem \ref{rewriteZ}, which tells us that we can rewrite our Katz sum
in a different shape.
\medskip
\begin{proof}{of Theorem \ref{rewriteZ}}We
use the notation given in the introduction.
Let $\delta$ be the degree of $D(X)$.
Suppose that the zeros of $D(X)$ are given by $e^{2\pi ic_j/{\mathbbm{q}}}$ with $j=1,\ldots,\delta$, where possible repetitions of the roots are allowed. The coefficient
of $\omega(t)^m$ (without the $1/(1-q)$) in $H_q$ can be rewritten as
$$\omega(-1)^{dm}\left(\prod_{i=1}^r\prod_{j=0}^{p_i-1}{g(m+j{\mathbbm{q}}/p_i)\over g(j{\mathbbm{q}}/p_i)}\right)
\left(\prod_{i=1}^s\prod_{j=0}^{q_i-1}{g(-m-j{\mathbbm{q}}/q_i)\over g(-j{\mathbbm{q}}/q_i)}\right)
\prod_{j=1}^{\delta}{g(c_j)g(-c_j)\over g(m+c_j)g(-m-c_j)}\;.$$
Using the Hasse--Davenport relation, the first product can be rewritten as
$$\prod_{i=1}^r\left(-\omega(p_i)^{-p_im}g(p_im)\right)\;.$$
Similarly, the second product is equal to
$$\prod_{i=1}^s\left(-\omega(q_i)^{q_im}g(-q_im)\right)\;.$$
To rewrite the third product, we use the evaluation $g(n)g(-n)=\omega(-1)^nq$
if $n\not\is0\mod{{\mathbbm{q}}}$ and $g(0)g(0)=1$. We get
$$\omega(-1)^{\delta m}q^{-s(0)+s(m)}\;.$$
Finally notice that $d+\delta$ equals the degree of $\prod_{i=1}^s(X^{q_i}-1)$,
in particular $d+\delta=\sum_{i=1}^sq_i$. Hence the coefficient (without the factor
$1/(1-q)$) of $\omega(t)^m$ becomes
$$(-1)^{r+s}q^{-s(0)+s(m)}\prod_{i=1}^r g(p_im)\prod_{j=1}^s g(-q_jm)
\omega\left((-1)^{\sum_iq_i}{\prod_i q_i^{q_i}\over\prod_j p_j^{p_j}}\right)^m\;,$$
as asserted.
\end{proof}
\medskip
As we noted in the introduction, the new summation makes sense for any choice of $q$ relatively
prime to the common denominator of the $\alpha_i,\beta_j$. We now continue to work with
$H_q$ as defined by the new summation.
Before we proceed we can now be more specific about the arithmetic nature of $H_q$.
We need our main Theorem \ref{main} for this, which we will prove in
Section \ref{completion}.
\begin{theorem}\label{denominatorHq}
Suppose $H_q$ is defined over ${\msy Q}$. Define
$$\lambda=\min_{x\in(0,1)}\{p_1x\}+\cdots+\{p_rx\}+\{-q_1x\}+\cdots+\{-q_sx\}\;,$$
where as usual $\{x\}$ denotes the fractional part of $x$. Then $\lambda\ge1$ and
$$q^{\min(r,s)-\lambda}H_q(\bm{\alpha},\bm{\beta}|t)\in{\msy Z}\;.$$
\end{theorem}
In most cases it turns out that $\lambda-\min(r,s)$ is the exact $p$-adic
valuation of $H_q$, but we have not tried to prove this.
\medskip
\begin{proof}{}From Theorem \ref{main} it follows that the values of $H_q$ have a
denominator dividing $q^{\min(r,s)-1}$. To determine the $p$-adic valuation of $H_q$
we use Theorem \ref{stickelberger} and the notation therein.
To be more precise, we determine the ${\mathfrak{p}}$-adic valuation of each coefficient
$g(p_1m)\cdots g(p_rm)g(-q_1m)\cdots g(-q_sm)$. Because $p_1+\cdots p_r-q_1-\cdots-q_s=0$
each such coefficient is in ${\msy Q}(\zeta_{q-1})$. According to Theorem \ref{stickelberger} its
${\mathfrak{p}}$-adic valuation is given by
$$\sum_{i=0}^{f-1}\left(\left\{p^i{p_1m\over q-1}\right\}+\cdots+\left\{p^i{p_rm\over q-1}\right\}
+\left\{p^i{-q_1m\over q-1}\right\}+\cdots+\left\{p^i{-q_sm\over q-1}\right\}\right)\;.$$
This is bounded below by $f\lambda$. Hence the ${\mathfrak{p}}$-adic valuation of $H_q$ is bounded
below by $f\lambda$, and since the values of $H_q$ are in ${\msy Q}$ our theorem follows.
\end{proof}
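Since $\sum_ip_i=\sum_jq_j$, the function $x\mapsto\{p_1x\}+\cdots+\{-q_sx\}$ has slope zero between consecutive breakpoints $k/L$, where $L$ is the lowest common multiple of the $p_i,q_j$, so $\lambda$ can be computed exactly by sampling at $x=k/(2L)$, $0<k<2L$. The following sketch (in Python, with exact rational arithmetic; it merely illustrates the definition and is not part of the argument) does this.

```python
from fractions import Fraction
from math import lcm

def frac_part(x):
    # fractional part {x} of a rational number, also correct for x < 0
    return x - (x.numerator // x.denominator)

def lam(p, q):
    """min over x in (0,1) of {p_1 x}+...+{p_r x}+{-q_1 x}+...+{-q_s x}.

    Between consecutive breakpoints k/L, with L = lcm of all p_i, q_j,
    the function is constant (its slope is sum(p) - sum(q) = 0), so
    sampling at every x = k/(2L), 0 < k < 2L, attains the minimum."""
    L = lcm(*p, *q)
    return min(sum(frac_part(pi * x) for pi in p) +
               sum(frac_part(-qj * x) for qj in q)
               for x in (Fraction(k, 2 * L) for k in range(1, 2 * L)))
```

For instance, one finds $\lambda=1$ for $p=(2)$, $q=(1,1)$, in accordance with the bound $\lambda\ge1$.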
\medskip
The main purpose of this paper is to show that the Katz hypergeometric function occurs
naturally in the point counting numbers of the completion and normalization
of the variety $V_{\lambda}$, which is defined by the points of ${\msy P}^{r+s-1}$ satisfying
the equations
$$x_1+x_2+\cdots+x_r-y_1-\cdots-y_s=0,\qquad \lambda x_1^{p_1}\cdots x_r^{p_r}=y_1^{q_1}\cdots y_s^{q_s}$$
with $\lambda\in\bbbf_q^{\times}$.
We have the following preliminary result.
\begin{proposition}\label{affinepointcount}
Assume that $\gcd(p_1,\ldots,p_r,q_1,\ldots,q_s)=1$.
Let $V_{\lambda}(\bbbf_q^{\times})$ be the set of points on
$V_{\lambda}$ with coordinates in $\bbbf_q^{\times}$. Then
$$|V_{\lambda}(\bbbf_q^{\times})|={1\over q}(q-1)^{r+s-2}+
{1\over q(q-1)}\sum_{m=0}^{q-2}g(\v pm,-\v qm)\omega(\epsilon\lambda)^m\;,$$
where
$$g(\v pm,-\v qm)=g(p_1m)\cdots g(p_rm)g(-q_1m)\cdots g(-q_sm)
\ {\rm and}\ \epsilon=(-1)^{\sum_iq_i}\;.$$
\end{proposition}
\begin{proof}{}We will use Lemma \ref{gaussproduct}. Let us take $n+1=r+s$ and write
$\v a=(a_1,a_2,\ldots,a_{n+1})=(p_1,\ldots,p_r,-q_1,\ldots,-q_s)$. In particular $n=r+s-1$.
We introduce the new variables $x_{r+j}=-y_j$ for $j=1,\ldots,s$. The equations for
$V_{\lambda}$ now read
$$x_1+x_2+\cdots+x_{n+1}=0,\qquad \lambda x_1^{a_1}\cdots x_{n+1}^{a_{n+1}}=\epsilon\;.$$
In vector notation the latter equation reads $\lambda \v x^{\v a}=\epsilon$. We dehomogenize
by setting $x_{n+1}=1$. Then
$$q|V_{\lambda}(\bbbf_q^{\times})|
=\sum_{v\in{\msy F}_q,\v x\in(\bbbf_q^{\times})^n,\lambda\v x^{\v a}=\epsilon}\psi_q(v(1+x_1+\cdots+x_n))\;.$$
This is a function of $\lambda\in\bbbf_q^{\times}$.
Let $\sum_{m=0}^{q-2}c_m\omega(\lambda)^m$ be its Fourier series.
The coefficient $c_m$ is given by
$$c_m={1\over q-1}\sum_{v\in{\msy F}_q,\lambda,x_i\in\bbbf_q^{\times},\lambda
\v x^{\v a}=\epsilon}\psi_q(v(1+x_1+\cdots+x_n))
\ \omega(\lambda)^{-m}\;.$$
Substitute $\lambda=\epsilon\v x^{-\v a}$ to get
$$c_m={1\over q-1}\sum_{v\in{\msy F}_q,x_i\in\bbbf_q^{\times}}\psi_q(v(1+x_1+\cdots+x_n))
\ \omega(\epsilon\v x^{\v a})^m\;.$$
Then Lemma \ref{gaussproduct}, using $\gcd(a_1,\ldots,a_n)=1$, yields
$$c_m=(q-1)^{r+s-2}\delta(m)+{1\over q-1}\prod_{i=1}^{r+s} g(a_im)\omega(\epsilon)^m\;.$$
Putting everything together and dividing by $q$ gives us
$$|V_{\lambda}(\bbbf_q^{\times})|={1\over q}(q-1)^{r+s-2}+{1\over q(q-1)}
\sum_{m=0}^{q-2}g(a_1m)\cdots g(a_{r+s}m)\omega(\epsilon\lambda)^m\;,$$
which yields the desired statement.
\end{proof}
\medskip
Notice that most terms in the summation in Proposition \ref{affinepointcount} and
in the summation of Theorem \ref{rewriteZ} agree except for a few, which
differ by a factor which is a power of $q$. The final goal of
this paper is to show that this difference is caused by the difference between $V_{\lambda}$ and
its completion. In other words, the extra factors of $q$ arise from the addition of
the components of the completion of $V_{\lambda}$.
In the next section we shall elaborate on this idea and prove our main Theorem \ref{main}.
\section{Proof of the main theorem}\label{completion}
Let notation be as in the introduction. In order to complete the variety $V_{\lambda}$
given by equations (\ref{Vequation}), we need to fill in the points where one or more of the coordinates
are zero. In this section we shall do that quite explicitly, but first we would like to motivate
our approach by linking it to the theory of toric varieties. Our main reference
is the book by Cox, Little, and Schenck \cite{cls}. First of all, consider the variety $X_{\lambda}$
in ${\msy P}^{r+s-1}$ defined by the projective equation
$\lambda\prod_{i=1}^r x_i^{p_i}=\prod_{j=1}^sy_j^{q_j}$. Take any solution $(a_1,\ldots,a_r;
b_1,\ldots,b_s)$ with non-zero coordinates. Then $x_i/a_i,y_j/b_j$ are the coordinates of
points on the projective variety $X_1$. The variety $X_1$ is a toric variety, the
points with non-zero coordinates form an open subvariety which is a torus (of rank $r+s-2$)
and the action of this torus can be extended to all points of $X_1$, see \cite[Def 3.1.1]{cls}.
To a toric variety one can associate a so-called fan (see \cite[Def 3.2.1]{cls}),
which we shall denote by $\Sigma$. To compute it
we use \cite[Thm 3.2.6]{cls} which gives a one-to-one correspondence between limits of group homomorphisms ${\msy C}^{\times}\to X_1$ and the cones of a fan. Concretely, choose
$(\xi_1,\ldots,\xi_r;\eta_1,\ldots,\eta_s)\in{\msy R}^{r+s}$ such that $\sum_{i=1}^rp_i\xi_i
=\sum_{j=1}^sq_j\eta_j$. Call this hyperplane $H$. We consider $H$ modulo shifts
with the vector $(1,1,\ldots,1)$. Any vector in $H/(1,1,\ldots,1)$ has a representing
element with $\min_{i,j}(\xi_i,\eta_j)=0$. Consider the embedding ${\msy R}_{>0}^{\times}\to X_1$
given by $t\mapsto (t^{\xi_1},\ldots,t^{\xi_r};t^{\eta_1},\ldots,t^{\eta_s})$. Now let
$t\downarrow 0$. The limit is a point all of whose coordinates are ones and zeros.
A coordinate is zero if and only if the corresponding $\xi_i$ or $\eta_j$ is positive.
Two points in $H/(1,\ldots,1)$ lie in the same cone of $\Sigma$ if and only if their limits
are the same. Therefore, an open cone in $\Sigma$
is characterised by the index sets $S_x,S_y$ defined by $i\in S_x\iff \xi_i>0$
and $j\in S_y\iff \eta_j>0$.
It turns out that usually the toric variety $X_1$ corresponding to $\Sigma$
is highly singular.
We would like to construct a partial blow-up of $X_1$.
To do this one can construct a refinement $\Sigma'$ of $\Sigma$
and consider the toric variety corresponding to $\Sigma'$ (which may not be projective
any more). The refinement we choose is a triangulation of $\Sigma$, which need not completely remove
the singularities from $X_1$, but it will be enough for our purposes.
Denote the blown up variety by $\hat{X}_1$.
Completeness of our blow-up is guaranteed because $\Sigma$ is a fan whose support is
all of $H/(1,\ldots,1)$, see \cite[Thm 3.1.19]{cls}.
Denote the corresponding completion of the translated variety $X_{\lambda}$
by $\hat{X}_{\lambda}$.
The variety $V_{\lambda}$ is the intersection of $\sum_{i=1}^rx_i=\sum_{j=1}^sy_j$ with the
affine part of $X_{\lambda}$ consisting of all points with non-zero coordinates.
\begin{definition}\label{completiondefinition}
Let notations be as above.
We define $\overline{V_{\lambda}}$ as the completion of $V_{\lambda}$ in $\hat{X}_{\lambda}$.
\end{definition}
We have seen above that the fan $\Sigma$ of $X_1$ has its support in $H/(1,\ldots,1)$.
We choose representatives of $H/(1,\ldots,1)$ in the set ${\cal H}$ defined by
$$\xi_i,\eta_j\in{\msy R}:\ \sum_{i=1}^r p_i\xi_i=\sum_{j=1}^sq_j\eta_j,
\ {\rm and}\ \min_{i,j}(\xi_i,\eta_j)=0.$$
The (closed) cones of $\Sigma$ are then characterized by sets $S_x\subset\{1,\ldots,r\}$ and
$S_y\subset\{1,\ldots,s\}$ via
$i\not\in S_x\Rightarrow \xi_i=0$ and $j\not\in S_y\Rightarrow \eta_j=0$.
We now proceed to choose a refinement of $\Sigma$. Let $\overline{\cal H}$ be
the convex closure of ${\cal H}$.
Note that it is the ${\msy R}_{\ge0}$ span of
the vectors $\v v_{ij},\ i=1,\ldots,r;j=1,\ldots,s$ defined by
$$\v v_{ij}=(0,\ldots,q_j,\ldots,0;0,\ldots,p_i,\ldots,0)\in{\msy R}^{r+s},$$
where $q_j$ is at the $i$-th coordinate position of ${\msy R}^r$ and
$p_i$ at the $j$-th position of ${\msy R}^s$.
Let $\Delta^{r-1}$ be the standard simplex defined by $p_1\xi_1+\cdots+p_r\xi_r=1$
and $\xi_i\ge0$ and $\Delta^{s-1}$ the standard simplex defined by
$q_1\eta_1+\cdots+q_s\eta_s=1,\ \eta_j\ge0$. Then we
see that the points corresponding to the vectors $\v v_{ij}/(p_iq_j)$ are
vertices of a polytope isomorphic to $\Delta^{r-1}\times\Delta^{s-1}$. Thus
$\overline{\cal H}$ is a cone over the $(r+s-2)$-dimensional product simplex
$\Delta^{r-1}\times\Delta^{s-1}$.
A {\it simplicial subcone} of $\overline{\cal H}$ is the ${\msy R}_{\ge0}$-span of any set of
vectors $\v v_{i_1j_1},\ldots,\v v_{i_{r+s-1}j_{r+s-1}}$ such that
$\{i_1,\ldots,i_{r+s-1}\}=\{1,\ldots,r\}$ and $\{j_1,\ldots,j_{r+s-1}\}=
\{1,\ldots,s\}$. By abuse of language we will call the corresponding set of
index pairs
$$(i_1,j_1),\ldots,(i_{r+s-1},j_{r+s-1})$$
a {\it simplex}. We now triangulate $\overline{\cal H}$. That is, we write
$\overline{\cal H}$ as a union of simplicial cones whose interiors do not intersect. To do
this we take a triangulation of $\Delta^{r-1}\times\Delta^{s-1}$ and consider the
cones over the simplices of this triangulation. The triangulation we choose
is the so-called {\it staircase triangulation}, see \cite[Lemma 6.2.8]{lrs}.
\begin{proposition}A triangulation of $\overline{\cal H}$ is given by the set of cones over
all simplices $(i_1,j_1),\ldots,(i_{r+s-1},j_{r+s-1})$ with $i_1\le i_2\le\cdots\le i_{r+s-1}$
and $j_1\le j_2\le\cdots\le j_{r+s-1}$.
\end{proposition}
Thus simplices of the staircase triangulation are in one-to-one correspondence with
lattice paths in the plane that go from $(1,1)$ to $(r,s)$ by making steps of size $1$ in north
or east direction.
To illustrate this proposition we give the staircase triangulation for $\Delta^2\times \Delta^1$,
which is a triangular prism. In this case $r=3$, $s=2$. The sequences $(i_1,j_1),\ldots,
(i_4,j_4)$ are depicted as dots in the following pictures,
\centerline{\includegraphics[height=2cm]{prismtriangulation.pdf}}
There are three possible pictures, corresponding to the fact that a triangulation of
a triangular prism consists of three simplices.
Any subset of a simplex is called a {\it cell}. The cone spanned by the corresponding vectors
$\v v_{ij}$ is called a {\it cellular cone}. From now on we restrict to cells which are subset
of the simplices belonging to the staircase triangulation. They are characterized by
sequences $(i_1,j_1),\ldots,(i_l,j_l)$ with $i_1\le\cdots\le i_l$ and $j_1\le\ldots\le j_l$, but
$\{i_1,\ldots,i_l\}$ need not equal $\{1,2,\ldots,r\}$ anymore and similarly for $\{j_1,\ldots,j_l\}$.
The corresponding cellular cones have dimension $l$ and can be pictured in a similar way as above.
To any cell $C$ we can associate the supports $S_x(C)=\{i_1,\ldots,i_l\}$ and
$S_y(C)=\{j_1,\ldots,j_l\}$. Let $S(C)$ be the disjoint union of $S_x$ and $S_y$.
Then clearly, $l+1\le |S(C)|\le\min(r+s,2l)$. We say that the support of $C$ is {\it maximal} if
$|S(C)|=r+s$. The cellular cone corresponding to a maximal cell is not contained in ${\cal H}$,
the cone corresponding to a non-maximal cell is contained in ${\cal H}$.
Furthermore, if $S(C)$ is not maximal, then $|S(C)|\le r+s-2$, since points
with only one non-zero coordinate cannot exist in ${\cal H}$ because of the equation
$\sum_ip_i\xi_i=\sum_jq_j\eta_j$.
\begin{definition}
The fan $\Sigma'$ is defined by all
cellular cones whose support is not maximal.
\end{definition}
Choose a cell $C$ given by $(i_1,j_1),\ldots,(i_l,j_l)$. To it we shall associate a component,
denoted by $W_{C,\lambda}$, of the completion of $V_{\lambda}$.
Choose an index $\sigma$ not in $S_x$, or if that is not possible, $\sigma\not\in S_y$.
Assume the first case happens. Then set $x_{\sigma}=1$ (dehomogenization). However,
for the sake of elegance, we continue to write $x_{\sigma}$.
We replace the coordinates $x_i,i\in S_x,y_j,j\in S_y$ by monomials
in $|S|$ new variables, namely, $u_1,\ldots,u_l, w_1,\ldots,w_{|S|-l-1},z$. It could happen that
there are no variables $w_i$, namely if $|S|$ has the minimal value $l+1$. We begin by choosing
$l$ lattice points $\v u_1,\ldots,\v u_l$ in the ${\msy R}_{\ge0}$-span of
$\v v_{i_1j_1},\ldots,\v v_{i_lj_l}$ such that
they form a basis of the lattice points in the ${\msy R}$-span of
$\v v_{i_1j_1},\ldots,\v v_{i_lj_l}$.
We next extend this basis to a basis of the lattice points in the space
$p_1\xi_1+\cdots+p_r\xi_r=q_1\eta_1+\cdots+q_s\eta_s$
with support in $S$. That is, for any such lattice point we have $\xi_i=0$
if $i\not\in S_x$ and $\eta_j=0$ if
$j\not\in S_y$. The additional vectors are denoted by $\v w_1,\ldots,\v w_{|S|-l-1}$.
Finally we extend this
basis to a basis of all lattice points with support in $S$ by adding a suitable vector $\v z$.
Let us denote the $m$-th component of $\v u_k$ by $u_{km}$, and similarly for the
components of the $\v w_k$ and of $\v z$.
Now replace $x_{i_1}$ in equation (\ref{Vequation}) by
$$u_1^{u_{1i_1}}u_2^{u_{2i_1}}\cdots u_l^{u_{li_1}}w_1^{w_{1i_1}}\cdots z^{z_{i_1}}$$
and similarly for the other variables $x_m,m\in S_x$ and $y_m,m\in S_y$. It is important to notice that
for each index in $S$ at least one of the corresponding components of $\v u_1,\ldots,\v u_l$ is positive. The component
$W_{C,\lambda}$ to
be added consists of the points with $u_1=\cdots=u_l=0$ and all other coordinates non-zero.
Setting $u_1=u_2=\cdots=u_l=0$ in our equation will render $x_i,i\in S_x$ and $y_j,j\in S_y$ zero. Furthermore, since the $\v u_i$ and $\v w_i$ all satisfy the
equation $p_1\xi_1+\cdots+p_r\xi_r=q_1\eta_1+\cdots+q_s\eta_s$ the variables $u_i,w_i$ will
be absent from the equation $\lambda x_1^{p_1}\cdots x_r^{p_r}=y_1^{q_1}\cdots y_s^{q_s}$
after we made the substitution. The variable $z$ will occur in the equation with exponent $\pm a_S$
where $a_S=\gcd(p_{i_1},\ldots,p_{i_l},q_{j_1},\ldots,q_{j_l})$.
After the variable substitution and setting $u_1=\cdots=u_l=0$ we are left with the equations
\begin{equation}\label{Wequation}
\sum_{i\not\in S_x}x_i=\sum_{j\not\in S_y}y_j,
\quad \lambda z^{a_S}\prod_{i\not\in S_x}x_i^{p_i}=\prod_{j\not\in S_y}y_j^{q_j}.
\end{equation}
Notice that the variables $w_i$ have disappeared from the equations. We choose them arbitrarily
and non-zero. The variables $x_i,i\not\in S_x$ and $y_j,j\not\in S_y$ are also taken
non-zero. So the component $W_{C,\lambda}$ is ${\msy G}_m^{|S|-l-1}$ times the affine variety
of points with non-zero coordinates given by the equations (\ref{Wequation}).
Before giving the point count on $W_{C,\lambda}$ we present an example of the
evaluation of some components. Consider the case $p_1=6,p_2=3,p_3=2,p_4=1,q_1=8,q_2=4$.
We have $r=4,s=2$ and the equations of $V_{\lambda}$ read
\begin{equation}\label{Vexample}
x_1+x_2+x_3+x_4=y_1+y_2,\quad \lambda x_1^6x_2^3x_3^2x_4=y_1^8y_2^4.
\end{equation}
We would like to construct the components $W_{C,\lambda}$ for the following
cells $C$:
\centerline{\includegraphics[height=1.5cm]{superprismtriangulation.pdf}}
In all three examples we shall set $x_4=1$.
First consider the cell given by $(1,1)$. The relevant variables that become $0$ are $x_1,y_1$.
Choose $\v u_1=(4,3)$ (actually $\v u_1\in{\msy R}^6$, but we write down only the non-zero coordinates
at the positions corresponding to $x_1,y_1$). We choose $\v z=(1,1)$.
This gives rise to the substitution $x_1=u_1^4z,y_1=u_1^3z$. After this substitution equations
(\ref{Vexample}) become
$$u_1^4z+x_2+x_3+1=u_1^3z+y_2,\quad \lambda x_2^3x_3^2=z^2y_2^4.$$
After setting $u_1=0$ we get
$$x_2+x_3+1=y_2,\qquad \lambda x_2^3x_3^2=z^2y_2^4.$$
Now consider the cell given by $(1,1),(2,1)$. The relevant variables that become $0$ are $x_1,x_2,y_1$.
For $\v u_1,\v u_2$ we have to find a lattice basis in the cone spanned by $(4,0,3),(0,8,3)$.
Choose $\v u_1=(4,0,3),\v u_2=(3,2,3)$. Finally we choose $\v z=(0,3,1)$. Our substitution becomes
$x_1=u_1^4u_2^3,x_2=u_2^2z^3,y_1=u_1^3u_2^3z$. Equations (\ref{Vexample}) become
$$u_1^4u_2^3+u_2^2z^3+x_3+1=u_1^3u_2^3z+y_2,\quad \lambda zx_3^2=y_2^4.$$
Setting $u_1=u_2=0$ leaves us with
$$x_3+1=y_2,\qquad \lambda zx_3^2=y_2^4.$$
Now consider the cell given by $(1,1),(2,2)$. That means $x_1,x_2,y_1,y_2$ are the relevant variables
that go to $0$. We choose $\v u_1=(4,0,3,0),\v u_2=(0,4,0,3)$. Since $|S|-l-1=1$ in this case we can choose
a vector $\v w_1$, say $\v w_1=(1,-2,1,-2)$. Note that all three vectors satisfy
$6t_1+3t_2=8t_3+4t_4$. Finally we take $\v z=(0,1,0,1)$. This yields the substitution
$$x_1=u_1^4w_1,\ x_2=u_2^4w_1^{-2}z,\ y_1=u_1^3w_1,\ y_2=u_2^3w_1^{-2}z$$
and equations (\ref{Vexample}) become
$$u_1^4w_1+u_2^4w_1^{-2}z+x_3+1=u_1^3w_1+u_2^3w_1^{-2}z,\quad
\lambda x_3^2=z.$$
Setting $u_1=u_2=0$ yields $x_3+1=0,\lambda x_3^2=z$. We see that $w_1$ has disappeared from the equations.
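Since all substitutions are monomial, the three computations above reduce to bookkeeping with exponent vectors and can be checked mechanically. The following sketch (in Python; the representation of monomials by exponent dictionaries is ours, and the scalar $\lambda$ is not tracked) verifies the second and third example.

```python
def apply_sub(mono, sub):
    # mono: {variable: exponent}; sub: {variable: replacement monomial}.
    # Variables absent from sub are kept; an empty replacement means "set to 1".
    out = {}
    for v, e in mono.items():
        for w, f in sub.get(v, {v: 1}).items():
            out[w] = out.get(w, 0) + e * f
    return out

def reduce_pair(lhs, rhs):
    # divide both sides of an equality of monomials by their common factor
    keys = set(lhs) | set(rhs)
    com = {v: min(lhs.get(v, 0), rhs.get(v, 0)) for v in keys}
    def strip(m):
        return {v: m.get(v, 0) - com[v] for v in keys if m.get(v, 0) != com[v]}
    return strip(lhs), strip(rhs)

# cell (1,1),(2,1): x1=u1^4 u2^3, x2=u2^2 z^3, y1=u1^3 u2^3 z, x4=1
sub2 = {'x1': {'u1': 4, 'u2': 3}, 'x2': {'u2': 2, 'z': 3},
        'y1': {'u1': 3, 'u2': 3, 'z': 1}, 'x4': {}}
eq2 = reduce_pair(apply_sub({'x1': 6, 'x2': 3, 'x3': 2, 'x4': 1}, sub2),
                  apply_sub({'y1': 8, 'y2': 4}, sub2))

# cell (1,1),(2,2): x1=u1^4 w1, x2=u2^4 w1^-2 z, y1=u1^3 w1, y2=u2^3 w1^-2 z
sub3 = {'x1': {'u1': 4, 'w1': 1}, 'x2': {'u2': 4, 'w1': -2, 'z': 1},
        'y1': {'u1': 3, 'w1': 1}, 'y2': {'u2': 3, 'w1': -2, 'z': 1}, 'x4': {}}
eq3 = reduce_pair(apply_sub({'x1': 6, 'x2': 3, 'x3': 2, 'x4': 1}, sub3),
                  apply_sub({'y1': 8, 'y2': 4}, sub3))
```

In both cases the reduced pair of monomials should agree with the equations displayed above.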
\begin{proposition}
The points of $W_{C,\lambda}$ are non-singular.
\end{proposition}
\begin{proof}{}Let
$V$ be an affine variety given by two equations
$$F(x_1,\ldots,x_n)=G(x_1,\ldots,x_n)=0\;.$$
Then a point $P\in V$ is
non-singular if and only if the vectors $(F_1(P),\ldots,F_n(P))$ and $(G_1(P),\ldots,G_n(P))$ are independent.
Here $F_i,G_i$ denote the partial derivatives of $F$, resp.\ $G$, with respect to $x_i$. For $F$ and $G$ we take the equations (\ref{Vequation})
with $x_{\sigma}=1$ and where we have made the necessary change of variables in the construction of $W_{C,\lambda}$.
Let us say $F$ comes from the linear equation. Then $F$ has the form $\Sigma+1+\sum_{i\not\in S_x\cup\sigma}x_i-
\sum_{j\not\in S_y}y_j$, where $\Sigma$ is the sum of all terms containing powers of the $u_j$.
The polynomial $G$ has the form $\lambda z^{a_S}\prod_{i\not\in S_x\cup\sigma}x_i^{p_i}-\prod_{j\not\in S_y}y_j^{q_j}$.
Choose a point $P\in W_{C,\lambda}$ and consider the derivatives of $F,G$ at that point. In particular the derivative of $F$ with respect
to $z$ is $0$, whereas $G_z(P)=\lambda a_Sz^{a_S-1}\prod_ix_i^{p_i}\ne0$. If the vectors of derivatives were dependent we conclude
that all derivatives of $F$ are zero at $P$. However, since $|S|\le r+s-2$,
there exists $i\not\in S_x\cup\sigma$ or $j\not\in S_y$ and
the derivative of $F$ with respect to the corresponding variable is $\pm1$. So we have a contradiction and $P$ is non-singular.
\end{proof}
\begin{proposition}\label{componentcount}
Let $C$ be a cell as above, $l$ its number of elements and
$S$ its support. Let $W_{C,\lambda}$ be the product of ${\msy G}_m^{|S|-l-1}$
and the variety defined by the equations (\ref{Wequation}). Then
\begin{eqnarray*}
|W_{C,\lambda}(\bbbf_q^{\times})|&=&{1\over q}(q-1)^{r+s-l-2}\\
&&+{(-1)^{|S|}\over q}(q-1)^{|S|-l-1}\sum_{m=0}^{q-2}
\delta(a_Sm)g(\v pm,-\v qm)\omega(\epsilon\lambda)^m,
\end{eqnarray*}
where
$$g(\v pm,-\v qm)=\prod_{i=1}^rg(p_im)\prod_{j=1}^sg(-q_jm)\ {\rm and}\ \epsilon=(-1)^{\sum_{j=1}^sq_j}.$$
In addition, if we take $C$ empty, then $|S|=l=0$ and the point count coincides with
Proposition \ref{affinepointcount} if we set $a_S=0$.
\end{proposition}
\begin{proof}{}We work with a non-empty cell. First we count the number $N$ of solutions of
equations (\ref{Wequation}) and then multiply the result by $(q-1)^{|S|-l-1}$.
Let us rewrite the equations (\ref{Wequation}) in the more manageable form
$$1+x_1+\cdots+x_n=0,\qquad \epsilon'=\lambda z^{a_S}\prod_{i=1}^nx_i^{a_i}.$$
To arrive at this we set $x_{\sigma}=1$, replace the $-y_j$ by variables $x_m$,
and take the set $\{a_1,\ldots,a_n\}$ to consist of the $p_i,i\not\in S_x\cup\sigma$
and the $-q_j,j\not\in S_y$. Finally, $\epsilon'=(-1)^{\sum_{j\not\in S_y}q_j}$. Notice
that $n=r+s-|S|-1$.
The cardinality $N$ is computed using
$$qN=\sum_{z,x_i\in\bbbf_q^{\times},v\in{\msy F}_q,\epsilon'=\lambda z^{a_S}\prod_ix_i^{a_i}}
\psi_q(v(1+\sum_ix_i))\;,$$
where the summation and product with index $i$ are to be taken over $i=1,2,\ldots,n$.
We determine the Fourier coefficient $c_m$ of this expression.
$$c_m={1\over q-1}\sum_{z,x_i,\lambda\in\bbbf_q^{\times},v\in{\msy F}_q,\epsilon'=\lambda z^{a_S}\prod_ix_i^{a_i}}
\psi_q(v(1+\sum_ix_i))\ \omega(\lambda)^{-m}\;.$$
Substitute $\lambda=\epsilon' z^{-a_S}\prod_ix_i^{-a_i}$ to get
$$c_m={1\over q-1}\sum_{z,x_i\in\bbbf_q^{\times},v\in{\msy F}_q}
\psi_q(v(1+\sum_ix_i))\ \omega (z^{a_S}\prod_ix_i^{a_i})^m\omega(\epsilon')^m\;.$$
Summation over $z$ yields the factor $(q-1)\delta(a_Sm)$. Then, using Lemma
\ref{gaussproduct}, summation over $v$ and the $x_i$ yields
$$c_m=\delta(a_Sm)\left((q-1)^{r+s-|S|-1}\delta(am)
+g(-\sum_ia_im)\prod_{i}g(a_im)\right)\omega(\epsilon')^m\;,$$
where $a=\gcd_{i=1,\ldots,n}(a_i)$.
We rewrite this expression for $c_m$ in terms of our original data.
First of all $a$ is the gcd of all $p_i,q_j$ with $i\not\in S_x\cup\sigma$ and
$j\not\in S_y$. Note that any divisor $d$ of both $a$ and $a_S$ divides all
$p_i,q_j$ except possibly $p_{\sigma}$. Because the sum of all $p_i$
equals the sum of all $q_j$, $d$ also divides $p_{\sigma}$. Since
the gcd of all $p_i,q_j$ is $1$ we conclude that $\gcd(a,a_S)=1$, hence
$\delta(am)\delta(a_Sm)=\delta(m)$.
Secondly, if $a_Sm\is0\mod{{\mathbbm{q}}}$ then
$p_im\is0\mod{{\mathbbm{q}}}$, hence $g(p_im)=-1$, for all $i\in S_x$, and similarly
$g(-q_jm)=-1$ for all $j\in S_y$. Furthermore,
$-\sum_ia_im\equiv-\sum_{i=1}^rp_im+\sum_{j=1}^sq_jm+p_{\sigma}m\mod{{\mathbbm{q}}}\equiv p_{\sigma}m\mod{{\mathbbm{q}}}$. Hence
$$\delta(a_Sm)g(-\sum_ia_im)\prod_ig(a_im)=(-1)^{|S|}
\delta(a_Sm)\prod_{i=1}^rg(p_im)\prod_{j=1}^sg(-q_jm)\;.$$
Thirdly, if $\delta(a_Sm)=1$, then $m\sum_{j\in S_y}q_j$ is divisible by $a_Sm$, hence divisible
by $q-1$. If $q$ is odd this means that $m\sum_{j\not\in S_y}q_j\equiv m\sum_{j=1}^sq_j\mod{2}$,
hence $(\epsilon')^m=\epsilon^m$. If $q$ is even we work in characteristic $2$ and thus
$1=-1$. We obtain
$$c_m=(q-1)^{r+s-|S|-1}\delta(m)+
(-1)^{|S|}\delta(a_Sm)\prod_{i=1}^rg(p_im)\prod_{j=1}^sg(-q_jm)\omega(\epsilon)^m.$$
Division by $q$, multiplication by $(q-1)^{|S|-l-1}\omega(\lambda)^m$,
and summation over $m$ yields the desired result.
\end{proof}
\medskip
Denote by ${\cal T}_{rs}$ the set of all cells of the staircase triangulation of
$\Delta^{r-1}\times \Delta^{s-1}$.
Recall that any such cell $C$ of ${\cal T}_{rs}$ is
given by a sequence $(i_1,j_1),\ldots,(i_l,j_l)$ of distinct pairs such
that $i_{m+1}-i_m,j_{m+1}-j_m\ge0$ for $m=1,\ldots,l-1$. The number $l=l(C)$ is called the
length of the cell. As before, the support $S(C)$ is the disjoint union of the index sets
$\{i_1,\ldots,i_l\},\{j_1,\ldots,j_l\}$. Notice that the difference
$|S(C)|-l(C)-1$ is equal to the number of indices $m$ such that $i_{m+1}-i_m,j_{m+1}-j_m>0$.
We note specifically that the sequence may be empty, i.e. $l=0$, in which case we speak
of the empty cell.
\begin{proposition}\label{summationterm}
With the above notation we have
$$\sum_C (q-1)^{|S(C)|-l(C)}(-1)^{|S(C)|}=q^{\min(r,s)}\;,$$
where the summation extends over all cells $C$ of ${\cal T}_{rs}$, including the
empty one.
\end{proposition}
\begin{proof}{}
We divide the cells into two types. The set $A$ of cells for which there exists
$m$ such that $i_m\ne j_m$ and the set $B$ of cells for which $i_m=j_m$ for all
$m$, including the empty one.
We show that the total contribution in the sum coming from the cells in $A$
is zero. Denote by $p$ the minimum of all
$\min(i_m,j_m)$ for which $i_m\ne j_m$. We distinguish two types in the set $A$.
\begin{itemize}
\item[(i)] The point $(p,p)$ is contained in the cell.
\item[(ii)] The point $(p,p)$ is not contained in the cell.
\end{itemize}
We can map the cells from $A(i)$ one-to-one to $A(ii)$ by deletion of the
point $(p,p)$. In the process the difference $|S(C)|-l(C)$ does not change but the
value of $|S(C)|$ changes by one. Hence the terms that are paired by this
mapping cancel. So it remains to compute the contribution from the cells
of $B$. They are all of the form $(i_1,i_1),\ldots,(i_l,i_l)$ and the
length is at most $\min(r,s)$. For each cell in $B$ the number $|S|=2l$ is
even and also, $|S|-l=l$. The number of cells in $B$ of length $l$
equals ${\min(r,s)\choose l}$. Hence the sum reads
$$\sum_{l=0}^{\min(r,s)}{\min(r,s)\choose l}(q-1)^{l}=(q-1+1)^{\min(r,s)}=q^{\min(r,s)}\;,$$
as asserted.
\end{proof}
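Since the cells are exactly the chains of distinct pairs in $\{1,\ldots,r\}\times\{1,\ldots,s\}$ with both coordinate sequences nondecreasing, the identity can also be verified by direct enumeration for small $r,s$. A sketch (in Python; the function names are ours):

```python
def cells(r, s):
    """All cells of the staircase triangulation of Delta^{r-1} x Delta^{s-1}:
    chains of distinct pairs with nondecreasing coordinates, incl. the empty one."""
    result = [()]
    def extend(chain):
        result.append(chain)
        i0, j0 = chain[-1]
        for i in range(i0, r + 1):
            for j in range(j0, s + 1):
                if (i, j) != (i0, j0):
                    extend(chain + ((i, j),))
    for i in range(1, r + 1):
        for j in range(1, s + 1):
            extend(((i, j),))
    return result

def support_size(chain):
    # |S(C)|: distinct first indices plus distinct second indices
    return len({i for i, _ in chain}) + len({j for _, j in chain})

def cell_sum(r, s, q):
    # left-hand side of the proposition
    return sum((q - 1) ** (support_size(c) - len(c)) * (-1) ** support_size(c)
               for c in cells(r, s))
```

For each sampled $(r,s)$ and $q$ the sum equals $q^{\min(r,s)}$.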
\begin{proposition}\label{summationmain}
With the above notations we have
$$\sum_C (q-1)^{r+s-l(C)-1}=\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}q^{r+s-m-1}\;,$$
where the summation extends over all cells $C$ of ${\cal T}_{rs}$, including the
empty one.
\end{proposition}
\begin{proof}{}
Denote the sum on the left by $A_{rs}$ and the sum on the right by $B_{rs}$.
When $r=1$ the number of cells
of length $l$ is equal to ${s\choose l}$. Hence
$$A_{1s}=\sum_{l=0}^s{s\choose l}(q-1)^{1+s-l-1}=q^s\;,$$
which equals $B_{1s}$, the desired answer when $r=1$. Similarly we have
$A_{r1}=B_{r1}=q^r$ for all $r$.
We claim that for all $r,s>1$,
$$A_{rs}=qA_{r-1,s}+qA_{r,s-1}-(q^2-q)A_{r-1,s-1}\;.$$
The summation over the cells of ${\cal T}_{rs}$ splits into the cells
of ${\cal T}_{r-1,s}$, the cells of ${\cal T}_{r,s-1}$ and the cells
with $(i_l,j_l)=(r,s)$. A cell of ${\cal T}_{r-1,s}$ keeps its length, but the exponent
$r+s-l(C)-1$ increases by $1$, so the contribution to
$A_{rs}$ from ${\cal T}_{r-1,s}$ equals $(q-1)A_{r-1,s}$.
Similarly we get a contribution $(q-1)A_{r,s-1}$ from ${\cal T}_{r,s-1}$.
However, cells in ${\cal T}_{r-1,s-1}$ have
been counted twice. So we have to subtract the latter contribution once, which
equals $(q-1)^2A_{r-1,s-1}$. This gives the contribution
$$(q-1)A_{r-1,s}+(q-1)A_{r,s-1}-(q-1)^2A_{r-1,s-1}\;.$$
The cells with endpoint $(r,s)$ have
not been counted yet. They arise by adding $(r,s)$ to the cells
in ${\cal T}_{r-1,s-1},{\cal T}_{r-1,s}$ and ${\cal T}_{r,s-1}$. For cells coming from
${\cal T}_{r-1,s}$ or ${\cal T}_{r,s-1}$ the length increases by $1$, so the exponent
$r+s-l(C)-1$ is unchanged, whereas for cells coming from ${\cal T}_{r-1,s-1}$
this exponent increases by $1$. Using inclusion-exclusion we arrive at a contribution
$$A_{r-1,s}+A_{r,s-1}-(q-1)A_{r-1,s-1}\;.$$
Our claim now follows. We see that the $A_{rs}$ are uniquely determined
by the recursion and $A_{r1}=q^r,A_{1s}=q^s$.
It remains to verify
that $B_{rs}$ satisfies the same recursion.
Consider $B_{rs}-qB_{r-1,s}-qB_{r,s-1}+(q^2-q)B_{r-1,s-1}$ for $r,s>1$.
It equals
\begin{eqnarray*}
&&\sum_{m\ge0}\left({r-1\choose m}{s-1\choose m}-{r-2\choose m}{s-1\choose m}
-{r-1\choose m}{s-2\choose m}+{r-2\choose m}{s-2\choose m}\right)q^{r+s-m-1}\\
&&-\sum_{m\ge0}{r-2\choose m}{s-2\choose m}q^{r+s-m-2}\;,
\end{eqnarray*}
where we use the convention that ${a\choose m}=0$
if $a$ is an integer $<m$.
The terms of the sum on the first line are easily seen to equal
${r-2\choose m-1}{s-2\choose m-1}q^{r+s-m-1}$ if $m\ge1$ and $0$ if $m=0$. So we
are left with
$$\sum_{m\ge1}{r-2\choose m-1}{s-2\choose m-1}q^{r+s-m-1}
-\sum_{m\ge0}{r-2\choose m}{s-2\choose m}q^{r+s-m-2}\;,$$
which equals $0$. Hence $B_{rs}$ satisfies the recurrence and the equality $A_{rs}=B_{rs}$ holds
for all $r,s$.
\end{proof}
\begin{proposition}\label{summationmaximal}
With the above notation we have
$$\sum_{S(C)=r+s} (q-1)^{r+s-l(C)-1}=\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}q^m\;,$$
where the summation extends over all cells $C$ of ${\cal T}_{rs}$ with $|S(C)|=r+s$.
\end{proposition}
We call cells with $|S(C)|=r+s$ {\it maximal cells}. They correspond to the
cellular cones that are not contained in ${\cal H}$.
Although these maximal cells do not contribute to the point count on
$\overline{V_{\lambda}}$ it is convenient to include them in summations over the
cells of ${\cal T}_{rs}$.
\medskip
\begin{proof}{}The cells $C$ with $|S(C)|=r+s$ are given by $(i_1,j_1),\cdots,(i_l,j_l)$
with $(i_{m+1},j_{m+1})-(i_m,j_m)\in\{(1,0),(0,1),(1,1)\}$ for all $m$
and $(i_1,j_1)=(1,1)$ and $(i_l,j_l)=(r,s)$.
Let $D_{rs}$ be the corresponding
sum. We claim that
$$D_{rs}=D_{r-1,s}+D_{r,s-1}+(q-1)D_{r-1,s-1}$$
for all $r,s>1$. This is a consequence of the fact that maximal cells in ${\cal T}_{rs}$
arise by adding $(r,s)$ to a maximal cell in ${\cal T}_{r-1,s}$,
${\cal T}_{r,s-1}$ or ${\cal T}_{r-1,s-1}$. It is clear that $D_{1s}=D_{r1}=1$.
Together with the recurrence and an argument as in the previous proposition
we arrive at our assertion.
\end{proof}
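For instance, when $r=s=2$ the recurrence gives
$$D_{22}=D_{12}+D_{21}+(q-1)D_{11}=q+1\;,$$
in agreement with $\sum_{m=0}^{1}{1\choose m}{1\choose m}q^m=1+q$.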
According to Proposition \ref{componentcount} we associate to any cell $C$ the counting number
$$
N(C)={1\over q}(q-1)^{r+s-l(C)-2}+{(q-1)^{|S(C)|-l(C)}\over q(q-1)}\sum_{m=0}^{q-2}
(-1)^{|S(C)|}\delta(a_{S(C)}m)g(\v pm,-\v qm)\omega(\epsilon\lambda)^m\;,$$
where
$$g(\v pm,-\v qm)=\prod_{i=1}^rg(p_im)\prod_{j=1}^sg(-q_jm).$$
In particular, if $|S(C)|=l(C)=0$ ($C$ empty), then this number coincides
with $|V_{\lambda}(\bbbf_q^{\times})|$ according to Proposition \ref{affinepointcount}.
To obtain the cardinality of $\overline{V_{\lambda}}({\msy F}_q)$, we take the sum of $N(C)$ over
all cells $C$ with $|S(C)|\le r+s-2$. A straightforward check shows that our definition
gives $N(C)=0$ if $|S(C)|=r+s-1$. Hence we can take the sum of $N(C)$ over all cells $C$ and subtract
the contribution from the maximal cells.
We first sum over all $C$ and do this term by term in the above expression for $N(C)$.
The summation of ${1\over q}(q-1)^{r+s-l(C)-2}$ yields, according to Proposition
\ref{summationmain},
$${1\over q-1}\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}q^{r+s-m-2}\;.$$
Now consider the $m$-th term in the Fourier sum. Let $I(m)$ be the set of $i$ such that $p_im\is0
\mod{{\mathbbm{q}}}$ and $J(m)$ the set of $j$ with $q_jm\is0\mod{{\mathbbm{q}}}$.
Since $\delta(a_{S(C)}m)$ is non-zero only if
$S_x(C)\subset I(m)$ and $S_y(C)\subset J(m)$, we must sum over all such $C$.
According to Proposition \ref{summationterm} the sum of $(-1)^{|S(C)|}(q-1)^{|S(C)|-l(C)}$ over such cells is equal
to $q^{\min(|I(m)|,|J(m)|)}$. The total contribution in the sum of the $N(C)$ reads
$${1\over q-1}q^{\min(|I(m)|-1,|J(m)|-1)}g(p_1m)\cdots g(p_rm)g(-q_1m)\cdots g(-q_sm)\;.$$
Note that $\min(|I(m)|,|J(m)|)$ equals $s(m)$, which occurs in Theorem \ref{rewriteZ}.
We now subtract the contributions coming from the maximal cells. If $C$ is maximal,
then $a_{S(C)}=1$ since the gcd of all $p_i,q_j$ is $1$.
Hence $\delta(a_{S(C)}m)=1$ if and only if $m=0$, and $0$ otherwise.
So, for a maximal cell $C$ we get
$$N(C)={(q-1)^{r+s-l(C)-1}\over q}((q-1)^{-1}+1)=(q-1)^{r+s-l(C)-2}\;.$$
According to Proposition \ref{summationmaximal} summation over these terms
gives
$${1\over q-1}\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}q^m\;.$$
Together these contributions yield
$$\sum_{m=0}^{\min(r-1,s-1)}{r-1\choose m}{s-1\choose m}{q^{r+s-m-2}-q^m\over q-1}
+{1\over q-1}\sum_{m=0}^{q-2}q^{s(m)-1}g(\v pm,-\v qm)\omega(\epsilon\lambda)^m\;.$$
Combining this result with Theorem \ref{rewriteZ} yields Theorem \ref{main}.
\section{Some alternative varieties}
Let $k=r+s$. Notice that the dimension of the variety $V_{\lambda}$ equals $k-3$.
In some cases this
is higher than expected. For example, if $\bm{\alpha}=\{1/2,1,2\}$ and
$\bm{\beta}=\{1,1\}$ one would get $p_1=p_2=2$, $q_1=q_2=q_3=q_4=1$.
So $k=6$ and the dimension of $V_{\lambda}$ equals $3$. However, one would expect
that the Legendre family of elliptic curves is associated to this particular choice
of $\bm{\alpha}$ and $\bm{\beta}$. One can remedy this by noticing that the
set $\{2,2,-1,-1,-1,-1\}$ can be divided into two sets $\{2,-1,-1\}$ both of
whose elements sum up to $0$. In such cases the dimension of $V_{\lambda}$ can be
reduced by considering an alternative variety.
Let us write $(a_1,\ldots,a_k)=(p_1,\ldots,p_r,-q_1,\ldots,-q_s)$.
Suppose $\{1,\ldots,k\}$ is a union of disjoint subsets $K_1,\ldots,K_l$ such
that $\sum_{r\in K_i}a_r=0$ for $i=1,\ldots,l$. Let $A_i=(a_j)_{j\in K_i}$ for $i=1,\ldots,l$.
Define the variety ${\cal V}_{\lambda}$ as the subvariety of ${\msy P}^{|K_1|-1}\times
\cdots\times{\msy P}^{|K_l|-1}$ given by the equations
$$\sum_{i\in K_1}x_i=\cdots=\sum_{i\in K_l}x_i=0,\qquad \lambda x_1^{a_1}\cdots x_k^{a_k}=\epsilon\;.$$
Note that ${\cal V}_{\lambda}$ has dimension $k-2l-1$. In our particular example we get
dimension one.
\begin{theorem}\label{legendrecount} We use the above notation. In addition we assume that
the gcd of the elements $\{a_i|i\in K_j\}$ is one for all $j=1,\ldots,l$. Then
$$|{\cal V}_{\lambda}(\bbbf_q^{\times})|={1\over q-1}\prod_{j=1}^l Q_{|K_j|}(q)
+{1\over q^l(q-1)}\sum_{m=1}^{q-2}g(a_1m)\cdots g(a_km)
\ \omega(\epsilon\lambda)^m\;,$$
where $Q_r(x)=((x-1)^{r-1}+(-1)^r)/x$.
\end{theorem}
\begin{proof}{}For each $K_i$ we choose $k_i\in K_i$ and set $x_{k_i}=1$ (dehomogenization).
We introduce the short-hand notation $\Sigma_j=1+\sum_{i\in K_j,i\ne k_j}x_i$. Then
$$q^l|{\cal V}_{\lambda}(\bbbf_q^{\times})|=
\sum_{u_1,\ldots,u_l\in{\msy F}_q,\v x\in(\bbbf_q^{\times})^k,\lambda\v x^{\v a}=\epsilon}
\psi_q(u_1\Sigma_1+\cdots+u_l\Sigma_l)\;.$$
The latter sum is a function of $\lambda\in\bbbf_q^{\times}$ and the $m$-th coefficient of its
Fourier series reads
$$c_m={1\over q-1}\sum_{u_1,\ldots,u_l\in{\msy F}_q,\lambda,x_i\in\bbbf_q^{\times},\lambda\v x^{\v a}=\epsilon}
\psi_q(u_1\Sigma_1+\cdots+u_l\Sigma_l)\ \omega(\lambda)^{-m}\;.$$
In the summation over $\v x$ we let $x_{k_i}=1$ for $i=1,\ldots,l$.
Substitution of $\lambda=\epsilon\v x^{-\v a}$ yields
$$c_m={1\over q-1}\sum_{u_1,\ldots,u_l\in{\msy F}_q,x_i\in\bbbf_q^{\times}}
\psi_q(u_1\Sigma_1+\cdots+u_l\Sigma_l)\ \omega(\epsilon\v x^{\v a})^{m}\;.$$
We now apply Lemma \ref{gaussproduct} to each of the sets $K_j$ to get
$$c_0={q^{l}\over q-1}(q-1)^l\prod_{j=1}^lQ_{|K_j|}(q)$$
and for $m\not\is0\mod{{\mathbbm{q}}}$,
$$c_m=(q-1)^{l-1}g(a_1m)\cdots g(a_km)\;.$$
Thus we get
$$|{\cal V}_{\lambda}(\bbbf_q^{\times})|=
{1\over q-1}\prod_{j=1}^lQ_{|K_j|}(q)+{1\over q^l(q-1)}\sum_{m=1}^{q-2}g(a_1m)\cdots
g(a_km)\omega(\epsilon\lambda)^m\;,$$
as asserted.
\end{proof}
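In the Legendre example above, where $l=2$ and $A_1=A_2=(2,-1,-1)$, we have
$Q_3(q)=((q-1)^2-1)/q=q-2$, so Theorem \ref{legendrecount} specializes to
$$|{\cal V}_{\lambda}(\bbbf_q^{\times})|={(q-2)^2\over q-1}
+{1\over q^2(q-1)}\sum_{m=1}^{q-2}g(2m)^2g(-m)^4\,\omega(\epsilon\lambda)^m\;.$$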
\section{Introduction}
A transport system is used to move goods from sources to targets. In
building such a system, one typically aims at minimizing the total
transportation cost. This consideration has motivated the theoretical
studies of many optimal transport problems. For instance, the well-known
Monge-Kantorovich problem (e.g. \cite{Ambrosio}, \cite{Brenier}, \cite%
{caffarelli}, \cite{evan2}, \cite{mccann}, \cite{otto}, \cite{mtw}, \cite{monge}, \cite%
{villani}) studies how to find an optimal transport map or transport plan
between two general probability measures with the optimality being measured
by minimizing some cost function. Applications of the Monge-Kantorovich problem
to economics may be found in the literature; see, e.g.,
\cite{kantorovich}, \cite{buttazzo} and \cite{mccann1}. The present paper gives
another application by introducing the economics notion of an ``exchange
value'' which is suitable for a ramified transport system.
\textit{Ramified optimal transportation}
has been recently proposed and studied (e.g. \cite{gilbert}, \cite{xia1}, \cite{msm}, \cite{xia2}
, \cite{BCM}, \cite{buttazzo}, \cite{xia4}, \cite{Solimini}, \cite{paolini}, %
\cite{book}, \cite{xia5}, \cite{xia6}) to model a branching transport
system. An essential feature of such a transportation is to favor
transportation in groups via a cost function which depends concavely on
quantity. Transport systems with such branching structures are observable
not only in nature as in trees, blood vessels, river channel networks,
lightning, etc., but also in efficiently designed transport systems such as those
used in railway configurations and postage delivery networks. Those studies
have focused on the cost value of a branching transport system in terms of
its effectiveness in reducing transportation cost.
In this article, we show that there is another value, named as \textit{\
exchange value}, embedded in some ramified transport systems.
\begin{figure}[h]
\centering
\subfloat[
$G_1$]{\label{G_1}\includegraphics[width=0.4\textwidth,
height=2.25in]{Fig1a.PNG}} \hspace{0.5in}
\subfloat[
$G_2$]{\label{G_2}\includegraphics[width=0.4\textwidth, height=2.25in]{Fig1b.PNG}}
\caption{Unlike a traditional transport system $G_{1}$, a ramified transport
system $G_{2}$ provides an exchange value.}
\end{figure}
As an illustration, we consider a spatial economy with two goods located at two
distinct points $\left\{ x_{1},x_{2}\right\} $ and two consumers living at
two different locations $\left\{ y_{1},y_{2}\right\} $. The spatial
distribution is shown in Figure 1. Suppose consumer 1 favors good 2 more
than good 1. However, good 2 may be more expensive than good 1 for some
reason such as a higher transportation fee. As a result, she buys good 1
despite the fact that it is not her favorite. On the contrary, consumer 2
favors good 1 but ends up buying good 2, as good 1 is more expensive than
good 2 for him. Given this purchase plan, a traditional transporter will
ship the ordered items in a transport system like $G_{1}$ (see Figure \ref%
{G_1}). However, as shown in \cite{xia1} etc, a transport system like $G_{2}$
(see Figure \ref{G_2}) with some branching structure might be more cost
efficient than $G_{1}$. One may save some transportation cost by using a
transport system like $G_{2}$ instead of $G_{1}$. Now, we observe another
very interesting phenomenon about $G_{2}$. When using this transport system,
one can simply switch the items, which leads to consumer 1 getting good 2 and
consumer 2 getting good 1. This exchange of items makes both consumers
better off since they both get what they prefer. More importantly, no extra
transportation cost is incurred during this exchange process. In other
words, a ramified transport system like $G_{2}$ may possess an exchange
value, which cannot be found in other transport systems like $G_{1}$.
The \textit{exchange value} concept of a transport system that we propose
here is valuable for both economics and mathematics. Existing market
theories (e.g. \cite{Arrow}, \cite{Debreu}, \cite{Debreu2}, \cite{Eaton}, %
\cite{Koopmans}, \cite{Lange}, \cite{Mas-Colell}, \cite{Samuelson}) focus on
the mechanism of exchanges between economic agents on an abstract market
with relatively few discussions on its form. Our study complements the
existing theories by showing that a transport system actually serves as a
concrete market whose friction for exchange depends on the structure of the
transport system as well as factors like preferences, prices, spatial
distribution, etc. The existence of such an exchange value is due to the
fact that the transport system provides a medium for potential exchange
between agents. From the perspective of mathematical theory on optimal
transport problem, our study provides another rationale for ramified
structure which usually implies a potential exchange value. Furthermore, a
new optimality criterion needs to be considered when building a transport
system which leads to a new mathematical problem. Instead of simply
minimizing the transportation cost, one might have to minimize the
difference between transportation cost and exchange value.
The remainder of this paper is organized as follows. Section 2 describes the
model environment with a brief review of consumer's problem and related
concepts from ramified optimal transportation. Sections 3 and 4 contain the
main results of the paper. Section 3 proposes an explicit valuation formula
to measure the exchange value for a given compatible transport system. The
exchange value is defined by solving a maximization problem, which has a
unique solution under suitable conditions. Criteria based on transport
structures, preferences and prices are provided to determine the existence
of a positive exchange value. We show that a reasonable combination of
these factors guarantees a positive exchange value. Section 4 studies a new
optimal transport problem with an objective taking into account both
transportation cost and exchange value.
In this paper, we will use the following notations:
\begin{itemize}
\item $X$: a compact convex subset of a Euclidean space $\mathbb{R}^{m}$.
\item $\mathbb{R}_{+}^{k}$: a subset of $\mathbb{R}^{k}$ defined as $\left\{
\left( x_{1},...,x_{k}\right) \in \mathbb{R}^{k}:x_{i}\geq 0,\text{ }
i=1,...,k\right\} .$
\item $\mathbb{R}_{++}^{k}$: a subset of $\mathbb{R}^{k}$ defined as $
\left\{ \left( x_{1},...,x_{k}\right) \in \mathbb{R}^{k}:x_{i}>0,\text{ }
i=1,...,k\right\} .$
\item $p_{j}$: a price vector in $\mathbb{R}_{++}^{k}$ faced by consumer $j$
, $j=1,...,\ell .$
\item $q_{j}$: a consumption vector in $\mathbb{R}_{+}^{k}$ of consumer $j$,
$j=1,...,\ell .$
\item $\mathcal{E}$: an economy as defined in (\ref{economy}).
\item $\bar{q}$: the consumption plan as defined in (\ref{q_bar}).
\item $e_{j}\left( p_{j},\tilde{u}_{j}\right) $: the expenditure function of
consumer $j$, $j=1,...,\ell $, as defined in (\ref{ex_min}).
\item $\mathbf{a}$: the atomic measure representing sources of goods, see ( %
\ref{source}).
\item $\mathbf{b}$: the atomic measure representing consumers, see (\ref%
{consumer}).
\item $G$: a transport path from $\mathbf{a}$ to $\mathbf{b}$.
\item $q$: a transport plan from $\mathbf{a}$ to $\mathbf{b}$.
\item $S(q)$: the total expenditure function as defined in (\ref{S_function}%
).
\item $\Omega \left( \bar{q}\right) $: the set of all transport paths
compatible with $\bar{q}$, as defined in (\ref{Omega}).
\item $\mathcal{F}_{G}$: the set of all feasible transport plans of $G$ as
defined in (\ref{feasible}).
\item $\mathcal{V}\left( G\right) $: the exchange value of a transport path $%
G$, as defined in (\ref{main_problem}).
\item $\mathbf{M}_{\alpha }\left( G\right) $: the transportation cost of a
transport path $G$ as defined in (\ref{M_a_cost}).
\end{itemize}
\section{Consumer's Problem and Ramified Optimal Transportation}
\subsection{Consumer's Problem}
Suppose there are $k$ sources of different goods which could be purchased by
$\ell $ consumers distributed on $X$. Each source $x_{i}\in X$ supplies only
one type of goods, $i=1,...,k$. Each consumer $j$ located at $y_{j}\in X$
derives utility from consuming $k$ goods according to a utility function $%
u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}:\left(
q_{1j},...,q_{kj}\right) \mapsto u_{j},$ $j=1,...,\ell ,$ where $u_{j}:%
\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is \textit{continuous, concave and
increasing}$,$ $j=1,...,\ell .$ Each consumer $j$ has an initial wealth $%
w_{j}>0$ and faces a price vector $p_{j}=\left( p_{1j},...,p_{kj}\right) \in
\mathbb{R}_{++}^{k},$ $j=1,...,\ell .$ We allow the prices to vary across
consumers to accommodate the situation where consumers on different
locations may have to pay different prices for the same good. This variation
could be possibly due to different transportation fees. We denote this
economy as
\begin{equation}
\mathcal{E}=\left( U,P,W;x,y\right) , \label{economy}
\end{equation}%
where $U=\left( u_{1},...,u_{\ell }\right) $, $P=\left( p_{1},...,p_{\ell
}\right) $, $W=\left( w_{1},...,w_{\ell }\right) $, $x=\left(
x_{1},...,x_{k}\right) $ and $y=\left( y_{1},...,y_{\ell }\right) .$
Now, we give a brief review of a consumer's decision problem. Discussions of
these materials can be found in most advanced microeconomics texts (e.g. %
\cite{Mas-Colell}). Each consumer $j$ will choose a utility maximizing
consumption plan given the price $p_{j}$ and wealth $w_{j}.$ More precisely,
the consumption plan $\bar{q}_{j}$ is derived from the following utility
maximizing problem:
\begin{equation}
\bar{q}_{j}\in \arg \max \left\{ u_{j}\left( q_{j}\right) \text{ }|\text{ }%
q_{j}\in \mathbb{R}_{+}^{k},\text{ }p_{j}\cdot q_{j}\leq w_{j}\right\} .
\label{q_bar}
\end{equation}%
Given the continuity of $u_{j}$ and the compactness of the budget set, this
problem has a solution.
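As a simple illustration with hypothetical data, let $k=2$, $u_{j}\left(
q_{1j},q_{2j}\right) =\sqrt{q_{1j}q_{2j}}$ (which is continuous, concave and
increasing), $p_{j}=\left( 1,2\right) $ and $w_{j}=4.$ Then
\begin{equation*}
\bar{q}_{j}=\arg \max \left\{ \sqrt{q_{1j}q_{2j}}\text{ }|\text{ }%
q_{1j}+2q_{2j}\leq 4\right\} =\left( 2,1\right) ,
\end{equation*}%
so the consumer spends half of her wealth on each good.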
As will be used in defining the exchange value, we also consider the
expenditure minimizing problem for a given utility level $\tilde{u}%
_{j}>u_{j}\left( \mathbf{0}\right) $:
\begin{equation}
e_{j}\left( p_{j},\tilde{u}_{j}\right) =\min \left\{ p_{j}\cdot q_{j}\text{ }%
|\text{ }q_{j}\in \mathbb{R}_{+}^{k},\text{ }u_{j}\left( q_{j}\right) \geq
\tilde{u}_{j}\right\} , \label{ex_min}
\end{equation}%
which is actually a problem dual to the above utility maximization problem.
The continuity and concavity of $u_{j}$ guarantee a solution to this
minimization problem. Here, $e_{j}\left( p_{j},\tilde{u}_{j}\right) $
represents the minimal expenditure needed for consumer $j$ to reach a
utility level $\tilde{u}_{j}.$ Since $\tilde{u}_{j}>u_{j}\left( \mathbf{0}%
\right) $, we know that $e_{j}\left( p_{j},\tilde{u}_{j}\right) >0.$ Lemma %
\ref{Properties_e} (see \cite{Mas-Colell}) shows several standard properties
of the expenditure function $e_{j}$.
\begin{lemma}
\label{Properties_e}Suppose that $u_{j}$ is a continuous, increasing utility
function on $\mathbb{R}_{+}^{k}.$ The expenditure function $e_{j}\left(
p_{j},\tilde{u}_{j}\right) $ is
\begin{enumerate}
\item Homogeneous of degree one in $p_{j}.$
\item Strictly increasing in $\tilde{u}_{j}$ and nondecreasing in $p_{ij}$
for any $i=1,...,k.$
\item Concave in $p_{j}.$
\item Continuous in $p_{j}$ and $\tilde{u}_{j}.$
\end{enumerate}
\end{lemma}
The following lemma shows a nice property of $e_{j}$ when $u_{j}$ is
homogeneous. This property will be used in the next section to characterize
the solution set of the maximization problem defining exchange value.
\begin{lemma}
\label{homogeneous}If $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is
homogeneous of degree $\beta _{j}>0$, then $e_{j}\left( p_{j},\tilde{u}%
_{j}\right) $ is homogeneous of degree $\frac{1}{\beta _{j}}$ in $\tilde{u}%
_{j}$, which implies
\begin{equation*}
e_{j}\left( p_{j},\tilde{u}_{j}\right) =e_{j}\left( p_{j},1\right) \left(
\tilde{u}_{j}\right) ^{\frac{1}{\beta _{j}}}.
\end{equation*}
\end{lemma}
\begin{proof}
For any $\lambda >0$, since $u_{j}$ is homogeneous of degree $\beta _{j}$,
we have
\begin{eqnarray*}
e_{j}\left( p_{j},\lambda \tilde{u}_{j}\right) &=&\min \left\{ p_{j}\cdot
q_{j}\text{ }|\text{ }q_{j}\in \mathbb{R}_{+}^{k},\text{ }u_{j}\left(
q_{j}\right) \geq \lambda \tilde{u}_{j}\right\} \\
&=&\min \left\{ p_{j}\cdot q_{j}\text{ }|\text{ }q_{j}\in \mathbb{R}_{+}^{k},%
\text{ }u_{j}\left( \left( 1/\lambda \right) ^{1/\beta _{j}}q_{j}\right)
\geq \tilde{u}_{j}\right\} \\
&=&\min \left\{ \left( \lambda \right) ^{1/\beta _{j}}p_{j}\cdot \tilde{q}%
_{j}\text{ }|\text{ }\tilde{q}_{j}\in \mathbb{R}_{+}^{k}\text{, }u_{j}\left(
\tilde{q}_{j}\right) \geq \tilde{u}_{j}\right\} \text{, where }\tilde{q}%
_{j}=(1/\lambda) ^{1/\beta _{j}}q_{j}, \\
&=&\left( \lambda \right) ^{1/\beta _{j}}e_{j}\left( p_{j},\tilde{u}%
_{j}\right) .
\end{eqnarray*}%
Therefore, $e_{j}\left( p_{j},\tilde{u}_{j}\right) $ is homogeneous of
degree $\frac{1}{\beta _{j}}$ in $\tilde{u}_{j}.$
\end{proof}
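For the hypothetical utility $u_{j}\left( q_{1j},q_{2j}\right) =\sqrt{%
q_{1j}q_{2j}}$, which is homogeneous of degree $\beta _{j}=1$, a direct
computation gives
\begin{equation*}
e_{j}\left( p_{j},\tilde{u}_{j}\right) =\min \left\{
p_{1j}q_{1j}+p_{2j}q_{2j}\text{ }|\text{ }\sqrt{q_{1j}q_{2j}}\geq \tilde{u}%
_{j}\right\} =2\sqrt{p_{1j}p_{2j}}\ \tilde{u}_{j}=e_{j}\left( p_{j},1\right)
\tilde{u}_{j},
\end{equation*}%
illustrating the lemma.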
\subsection{Ramified Optimal Transportation}
Let $X$ be a compact convex subset of a Euclidean space $\mathbb{R}^{m}$.
Recall that a Radon measure $\mathbf{a}$ on $X$ is \textit{atomic} if $%
\mathbf{a}$ is a finite sum of Dirac measures with positive multiplicities.
That is
\begin{equation*}
\mathbf{a}=\sum\limits_{i=1}^{k}m_{i}\delta _{x_{i}}
\end{equation*}%
for some integer $k\geq 1$ and some points $x_{i}\in X$, $m_{i}>0$ for each $%
i=1,\cdots ,k$.\
In the environment of the previous section, the $k$ sources of goods can be
represented as an atomic measure on $X$ by
\begin{equation}
\mathbf{a}=\sum\limits_{i=1}^{k}m_{i}\delta _{x_{i}},\text{ where }%
m_{i}=\sum_{j=1}^{\ell }\bar{q}_{ij}\text{,} \label{source}
\end{equation}%
where $\bar{q}_{j}=\left( \bar{q}_{1j},\cdots ,\bar{q}_{kj}\right) $ is given by (%
\ref{q_bar}). Also, the $\ell $ consumers can be represented by another
atomic measure on $X$ by
\begin{equation}
\mathbf{b}=\sum\limits_{j=1}^{\ell }n_{j}\delta _{y_{j}}\text{, where }%
n_{j}=\sum_{i=1}^{k}\bar{q}_{ij}\text{.} \label{consumer}
\end{equation}%
Without loss of generality, we may assume that
\begin{equation*}
\sum_{ij}\bar{q}_{ij}=1,
\end{equation*}%
and thus both $\mathbf{a}$ and $\mathbf{b}$ are probability measures on $X$.
\begin{definition}
(\cite{xia1}) A \textit{transport path from }$\mathbf{a}$\textit{\ to }$%
\mathbf{b}$ is a weighted directed graph $G$ consisting of a vertex set $%
V\left( G\right) $, a directed edge set $E\left( G\right) $ and a weight
function $w:E\left( G\right) \rightarrow \left( 0,+\infty \right) $ such
that $\{x_{1},x_{2},...,x_{k}\}\cup \{y_{1},y_{2},...,y_{\ell }\}\subseteq
V(G)$ and for any vertex $v\in V(G)$, there is a balance equation
\begin{equation}
\sum_{e\in E(G),e^{-}=v}w(e)=\sum_{e\in E(G),e^{+}=v}w(e)+\left\{
\begin{array}{c}
m_{i},\text{\ if }v=x_{i}\text{\ for some }i=1,...,k \\
-n_{j},\text{\ if }v=y_{j}\text{\ for some }j=1,...,\ell \\
0,\text{\ otherwise }%
\end{array}%
\right. \label{balance}
\end{equation}%
where each edge $e\in E\left( G\right) $ is a line segment from the starting
endpoint $e^{-}$ to the ending endpoint $e^{+}$.
\end{definition}
Note that the balance equation (\ref{balance}) simply means the conservation
of mass at each vertex. Viewing $G$ as a one dimensional polyhedral chain,
we have the equation $\partial G=\mathbf{b}-\mathbf{a}$.
Let
\begin{equation*}
Path\left( \mathbf{a,b}\right)
\end{equation*}%
be the space of all transport paths from $\mathbf{a}$ to $\mathbf{b}$.
\begin{definition}
(e.g. \cite{Ambrosio}, \cite{villani}) A \textit{transport plan} from $%
\mathbf{a}$ to $\mathbf{b}$ is an atomic probability measure
\begin{equation}
q=\sum_{i=1}^{k}\sum_{j=1}^{\ell }q_{ij}\delta _{\left( x_{i},y_{j}\right) }
\label{transport_plan}
\end{equation}%
in the product space $X\times X$ such that
\begin{equation}
\sum_{i=1}^{k}q_{ij}=n_{j}\text{ and }\sum_{j=1}^{\ell }q_{ij}=m_{i}
\label{margins}
\end{equation}%
for each $i$ and $j$. Denote $Plan\left( \mathbf{a},\mathbf{b}\right) $ as
the space of all transport plans from $\mathbf{a}$ to $\mathbf{b}$.
\end{definition}
For instance, the $\bar{q}$ given by (\ref{q_bar}) is a transport plan in $%
Plan\left( \mathbf{a,b}\right) $, since the marginal conditions (\ref{margins}%
) follow directly from (\ref{source}) and (\ref{consumer}).
Now, we want to consider the compatibility between a pair of transport path
and transport plan (\cite{xia1}, \cite{book}). Let $G$ be a given transport
path in $Path\left( \mathbf{a,b}\right) $. From now on, we assume that for
each $x_{i}$ and $y_{j}$, there exists at most one directed polyhedral curve
$g_{ij}$ from $x_{i}$ to $y_{j}$. In other words, there exists a list of
distinct vertices
\begin{equation}
V\left( g_{ij}\right) :=\left\{ v_{i_{1}},v_{i_{2}},\cdots ,v_{i_{h}}\right\}
\label{V_g}
\end{equation}%
in $V\left( G\right) $ with $x_{i}=v_{i_{1}}$, $y_{j}=v_{i_{h}}$, and each $%
\left[ v_{i_{t}},v_{i_{t+1}}\right] $ is a directed edge in $E\left(
G\right) $ for each $t=1,2,\cdots ,h-1$. For some pairs of $\left(
i,j\right) $, such a curve $g_{ij}$ from $x_{i}$ to $y_{j}$ may fail to
exist, due to reasons like geographical barriers, legal restrictions, etc. If
such a curve does not exist, we set $g_{ij}=0$ to denote the {\it empty} directed polyhedral curve. By doing so, we construct a
matrix
\begin{equation}
g=\left( g_{ij}\right) _{k\times \ell } \label{g_matrix}
\end{equation}%
with each element of $g$ being a polyhedral curve. A very simple example
satisfying these conditions is illustrated in Figure \ref{simple_graph}.
\begin{figure}[h]
\centering \includegraphics[width=0.6\textwidth, height=1.5in]{Fig4.PNG}
\caption{A transport path from $4\delta_{x_1}+3\delta_{x_2}+4\delta_{x_3}$ to
$3\delta_{y_1}+5\delta_{y_2}+3\delta_{y_3}$ with $g_{13}=0, g_{31}=0$.}
\label{simple_graph}
\end{figure}
\begin{definition}
A pair $\left( G,q\right) $ of a transport path $G\in Path\left( \mathbf{a,b}%
\right) $ and a transport plan $q\in Plan\left( \mathbf{a,b}\right) $ is
compatible if $q_{ij}=0$ whenever $g_{ij}$ does not exist and
\begin{equation}
G=q\cdot g. \label{compatible_pair}
\end{equation}
\end{definition}
Here, the equation (\ref{compatible_pair}) means
\begin{equation*}
G=\sum_{i=1}^{k}\sum_{j=1}^{\ell }q_{ij}g_{ij}.
\end{equation*}
In terms of edges, it says that for each edge $e\in E\left( G\right) $, we
have
\begin{equation*}
\sum_{e\subseteq g_{ij}}q_{ij}=w\left( e\right) .
\end{equation*}
\begin{example}
\label{G_bar} Let $x^{\ast }\in X\setminus \left\{ x_{1},\cdots
,x_{k},y_{1},\cdots ,y_{\ell }\right\} $. We may construct a path $\bar{G}%
\in Path\left( \mathbf{a,b}\right) $ as follows. Let
\begin{eqnarray*}
V\left( \bar{G}\right) &=&\left\{ x_{1},\cdots ,x_{k}\right\} \cup \left\{
y_{1},\cdots ,y_{\ell }\right\} \cup \left\{ x^{\ast }\right\} , \\
E\left( \bar{G}\right) &=&\left\{ \left[ x_{i},x^{\ast }\right] :i=1,\cdots
,k\right\} \cup \left\{ \left[ x^{\ast },y_{j}\right] :j=1,\cdots ,\ell
\right\} ,
\end{eqnarray*}%
and
\[
w\left( \left[ x_{i},x^{\ast }\right] \right) =m_{i},
w\left( \left[ x^{\ast },y_{j}\right] \right) =n_{j}
\]
for each $i$ and $j$. In this case, each $g_{ij}$ is the union of two edges $%
\left[ x_{i},x^{\ast }\right] \cup \left[ x^{\ast },y_{j}\right] $. Then,
each transport plan $q\in Plan\left( \mathbf{a,b}\right) $ is compatible
with $\bar{G}$ because
\begin{equation*}
\sum_{\left[ x_{i^{\ast }},x^{\ast }\right] \subseteq
g_{ij}}q_{ij}=\sum_{j=1}^{\ell }q_{i^{\ast }j}=m_{i^{\ast }}=w\left( \left[
x_{i^{\ast }},x^{\ast }\right] \right)
\end{equation*}%
and
\begin{equation*}
\sum_{\left[ x^{\ast },y_{j^{\ast }}\right] \subseteq
g_{ij}}q_{ij}=\sum_{i=1}^{k}q_{ij^{\ast }}=n_{j^{\ast }}=w\left( \left[
x^{\ast },y_{j^{\ast }}\right] \right) .
\end{equation*}
\end{example}
\section{Exchange Value of a Transport System}
In a transport system, a transporter can simply ship the desired bundle to
consumers as they have initially planned. This is a universal strategy.
However, we will see that allowing the exchange of goods between consumers
may make them better off without incurring any additional transportation
cost. In other words, there is an \textit{exchange value} embedded in some
transport system.
\subsection{Exchange Value}
For each probability measure $q=\left( q_{ij}\right) \in \mathcal{P}\left(
X\times X\right) $, we define
\begin{equation}
S\left( q\right) =\sum_{j=1}^{\ell }e_{j}\left( p_{j},u_{j}\left(
q_{j}\right) \right) =\sum_{j=1}^{\ell }\min \left\{ p_{j}\cdot t_{j}\text{ }%
|\text{ }t_{j}\in \mathbb{R}_{+}^{k},\text{ }u_{j}\left( t_{j}\right) \geq
u_{j}\left( q_{j}\right) \right\} , \label{S_function}
\end{equation}%
where $q_{j}=\left( q_{1j},q_{2j},...,q_{kj}\right) $ for each $j=1,\cdots
,\ell $. Here, $S\left( q\right) $ represents the least total expenditure
for each individual $j$ to reach utility level $u_{j}\left( q_{j}\right) .$
One can simply use Lemmas \ref{Properties_e} and \ref{homogeneous} to prove
the following lemma which shows several properties of this function $S.$
\begin{lemma}
\label{Properties_S}Suppose each $u_{j}$ is continuous, concave, and
increasing on $\mathbb{R}_{+}^{k},$ $j=1,...,\ell .$ The function $S\left(
q\right) $ is
\begin{enumerate}
\item Homogeneous of degree one in $p=\left( p_{1},...,p_{\ell }\right) .$
\item Increasing in $q$ and nondecreasing in $p_{ij}$ for any $i=1,...,k,$ $%
j=1,...,\ell .$
\item Concave in $p.$
\item Continuous in $p$ and $q.$
\end{enumerate}
\end{lemma}
Let $\bar{q}\in Plan\left( \mathbf{a,b}\right) $ be the initial plan given
by (\ref{q_bar}). Denote
\begin{equation}
\Omega \left( \bar{q}\right) =\left\{ G\in Path\left( \mathbf{a,b}\right)
\text{ }|\text{ }\left( G,\bar{q}\right) \text{ is compatible}\right\} .
\label{Omega}
\end{equation}
Let $G\in \Omega \left( \bar{q}\right) $ be fixed and $g=\left(
g_{ij}\right) $ be the corresponding matrix of $G$ as given in (\ref%
{g_matrix}). That is,
\begin{equation*}
G=g\cdot \bar{q}.
\end{equation*}%
Then, we introduce the following definition:
\begin{definition}
Each transport plan in the set%
\begin{equation}
\mathcal{F}_{G}=\left\{ q\in \mathcal{P}\left( X\times X\right) \left|
\begin{array}{c}
q\text{ is compatible with }G \\
u_{j}\left( q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) ,\text{ }%
j=1,...,\ell .%
\end{array}%
\right. \right\} \label{feasible}
\end{equation}%
is called a feasible plan for $G.$
\end{definition}
Recall that $q$ is compatible with $G$ means that
\begin{equation}
q_{ij}=0\text{ if }g_{ij}\text{ does not exist} \label{zero_g_ij}
\end{equation}%
and
\begin{equation*}
g\cdot q=g\cdot \bar{q},
\end{equation*}%
in the sense that for each edge $e\in E\left( G\right) $, we have an
equality
\begin{equation}
\sum_{e\subseteq g_{ij}}q_{ij}=w\left( e\right) \text{, where }w\left(
e\right) =\sum_{e\subseteq g_{ij}}\bar{q}_{ij}. \label{edge_equation}
\end{equation}%
For any feasible plan $q\in \mathcal{F}_{G}$, the constraint $u_{j}\left(
q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) $ means that $q_{j}$ is at
least as good as $\bar{q}_{j}$ for each consumer $j$.
Since $\bar{q}\in Plan\left( \mathbf{a,b}\right) $, the compatibility
condition automatically implies that $q\in Plan\left( \mathbf{a,b}\right) $
whenever $q\in \mathcal{F}_{G}$.
\begin{lemma}
$\mathcal{F}_{G}$ is a nonempty, convex and compact subset of $\mathcal{P}%
\left( X\times X\right) .$
\end{lemma}
\begin{proof}
Clearly, $\bar{q}\in \mathcal{F}_{G},$ showing that $\mathcal{F}_{G}\neq
\emptyset .$ The set $\mathcal{F}_{G}$ is convex since it is an intersection
of two convex sets $\left\{ q\in \mathcal{P}\left( X\times X\right) \text{ }|%
\text{ }g\cdot q=G\text{ }\right\} $ and $\prod\nolimits_{j=1}^{\ell
}\left\{ q_{j}\in \mathcal{P}\left( X\times X\right) \left| u_{j}\left(
q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) \right. \right\} ,$ where
the convexity of $\left\{ q_{j}\in \mathcal{P}\left( X\times X\right) \left|
u_{j}\left( q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) \right.
\right\} $ comes from the concavity of $u_{j}$, $j=1,...,\ell .$ Since each $%
u_{j}$ is continuous, we have $\mathcal{F}_{G}$ is a closed subset of $%
\mathcal{P}\left( X\times X\right) $ and hence it is compact.
\end{proof}
Note that when $G=\bar{G}$ as constructed in Example \ref{G_bar}, we
have
\begin{equation*}
\mathcal{F}_{\bar{G}}=\left\{ q\in Plan\left( \mathbf{a,b}\right) \left|
u_{j}\left( q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) ,\text{ }%
j=1,...,\ell \right. \right\} .
\end{equation*}%
Clearly, for each $G\in Path\left( \mathbf{a,b}\right) $, we have
\begin{equation}
\bar{q}\in \mathcal{F}_{G}\subseteq \mathcal{F}_{\bar{G}}.
\label{comparison_G_bar}
\end{equation}
\begin{definition}
Let $\mathcal{E}$ be an economy as in (\ref{economy}). For each transport
path $G\in \Omega \left( \bar{q}\right) $, we define the exchange value of $%
G $ by
\begin{equation}
\mathcal{V}\left( G;\mathcal{E}\right) =\underset{q\in \mathcal{F}_{G}}{\max
}\text{ }S\left( q\right) -S\left( \bar{q}\right) , \label{main_problem}
\end{equation}%
where $S$ is given by (\ref{S_function}). Without causing confusion, we may
simply denote $\mathcal{V}\left( G;\mathcal{E}\right) $ by $\mathcal{V}%
\left( G\right) $.
\end{definition}
Since $S$ is a continuous function on a compact set, the exchange value
function $\mathcal{V}:\Omega \left( \bar{q}\right) \rightarrow \lbrack 0,\infty )$ is well
defined. Furthermore, for each $q\in \mathcal{F}_{G}$, given $u_{j}\left(
q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) $ for all $j$, we have
\begin{equation}
S\left( q\right) \geq S\left( \bar{q}\right) . \label{S_q_comparison}
\end{equation}
\begin{remark}
Our way of defining the feasibility set $\mathcal{F}_{G}$ guarantees that
the exchange value is not obtained at the cost of increasing transportation
cost. This is because the compatibility condition ensures that replacing $%
\bar{q}$ by any feasible plan $q\in \mathcal{F}_{G}$ will not change the
transportation cost $\mathbf{M}_{\alpha }\left( G\right) $ (to be defined later in (%
\ref{M_a_cost})), as the quantity on each edge $e$ of $G$ is set to be $%
w\left( e\right) $.
\end{remark}
The following proposition shows that the exchange value is always
nonnegative and bounded from above.
\begin{proposition}
\label{upperbound}For any $G\in \Omega \left( \bar{q}\right) ,$%
\begin{equation*}
0\leq \mathcal{V}\left( G\right) \leq \mathcal{V}\left( \bar{G}\right) .
\end{equation*}
\end{proposition}
\begin{proof}
This follows from the definition as well as (\ref{comparison_G_bar}).
\end{proof}
\begin{example}
Let's return to the example discussed in the introduction. More precisely,
suppose $u_{1}\left( q_{11},q_{21}\right) =q_{11}+3q_{21},$ $w_{1}=1/2,$ $%
p_{1}=\left( 1,6\right) $ and $u_{2}\left( q_{12},q_{22}\right)
=3q_{12}+q_{22},$ $w_{2}=1/2,$ $p_{2}=\left( 6,1\right) .$ By solving (\ref%
{q_bar}), i.e.
\begin{eqnarray*}
\bar{q}_{1} &\in &\arg \max \left\{ u_{1}\left( q_{11},q_{21}\right) \text{ }%
|\text{ } p_{1}\cdot q_{1}\leq w_{1}\right\} \\
&=&\arg \max \left\{ q_{11}+3q_{21} \text{ }|\text{ } q_{11}+6q_{21}\leq
1/2\right\} \\
&=&\left\{ \left( 1/2,0\right) \right\} ,
\end{eqnarray*}%
we find $\bar{q}_{1}=\left( 1/2,0\right) $. Similarly, we have $\bar{q}%
_{2}=\left( 0,1/2\right) $. This gives the initial plan
\begin{equation*}
\bar{q}=\left(
\begin{array}{cc}
1/2 & 0 \\
0 & 1/2%
\end{array}%
\right) .
\end{equation*}%
Now, solving expenditure minimization problems (\ref{ex_min}) yields
\begin{eqnarray*}
e_{1}\left( p_{1},\tilde{u}_{1}\right) &=&\min \left\{ p_{1}\cdot q_{1}\text{
}|\text{ }q_{1}\in \mathbb{R}_{+}^{2},u_{1}\left( q_{1}\right) \geq \tilde{u}%
_{1}\right\} \\
&=&\min \left\{ q_{11}+6q_{21}\text{ }|\text{ }\left( q_{11},q_{21}\right)
\in \mathbb{R}_{+}^{2},q_{11}+3q_{21}\geq \tilde{u}_{1}\right\} \\
&=&\tilde{u}_{1}.
\end{eqnarray*}%
Similarly, we have $e_{2}\left( p_{2},\tilde{u}_{2}\right) =\tilde{u}_{2}$.
From these, we get
\begin{equation}
S\left( q\right) =e_{1}\left( p_{1},u_{1}\left( q_{1}\right) \right)
+e_{2}\left( p_{2},u_{2}\left( q_{2}\right) \right) =u_{1}\left(
q_{1}\right) +u_{2}\left( q_{2}\right) \notag
\end{equation}%
for each probability measure $q\in \mathcal{P}\left( X\times X\right) $.
Now, we find the exchange value embedded in the transport systems $G_{1}$
and $G_{2}$ as given in Figure 1.
\begin{itemize}
\item $G_{1}:$ The associated feasible set is%
\begin{equation*}
\mathcal{F}_{G_{1}}=\left\{ q=\left(
\begin{array}{cc}
q_{11} & q_{12} \\
q_{21} & q_{22}%
\end{array}%
\right) \in \mathcal{P}\left( X\times X\right) \left|
\begin{array}{c}
q_{11}=1/2,\text{ }q_{21}=0,\text{ }q_{12}=0,\text{ }q_{22}=1/2, \\
q_{11}+3q_{21}\geq u_{1}\left( \bar{q}_{1}\right) =1/2, \\
\text{ }3q_{12}+q_{22}\geq u_{2}\left( \bar{q}_{2}\right) =1/2.%
\end{array}%
\right. \right\} =\left\{ \bar{q}\right\} .
\end{equation*}%
Thus, the exchange value of $G_{1}$ is
\begin{equation*}
\mathcal{V}\left( G_{1}\right) =\underset{q\in \mathcal{F}_{G_{1}}}{\max }%
S\left( q\right) -S\left( \bar{q}\right) =S\left( \bar{q}\right) -S\left(
\bar{q}\right) =0.
\end{equation*}
\item $G_{2}:$ The associated feasible set is%
\begin{eqnarray*}
\mathcal{F}_{G_{2}} &=&\left\{ q=\left(
\begin{array}{cc}
q_{11} & q_{12} \\
q_{21} & q_{22}%
\end{array}%
\right) \in \mathcal{P}\left( X\times X\right) \left|
\begin{array}{c}
q_{11}+q_{12}=1/2,q_{21}+q_{22}=1/2, \\
q_{11}+q_{21}=1/2, \\
q_{11}+3q_{21}\geq u_{1}\left( \bar{q}_{1}\right) =1/2, \\
\text{ }3q_{12}+q_{22}\geq u_{2}\left( \bar{q}_{2}\right) =1/2.%
\end{array}%
\right. \right\} \\
&=&\left\{ q=\left(
\begin{array}{cc}
q_{11} & 1/2-q_{11} \\
1/2-q_{11} & q_{11}%
\end{array}%
\right) \left|
\begin{array}{c}
q_{11}\leq 1/2 \\
\text{ }q_{11}\geq 0%
\end{array}%
\right. \right\}
\end{eqnarray*}%
Thus, the exchange value of $G_{2}$ is
\begin{eqnarray*}
\mathcal{V}\left( G_{2}\right) &=&\underset{q\in \mathcal{F}_{G_{2}}}{\max }%
S\left( q\right) -S\left( \bar{q}\right) \\
&=&\underset{q\in \mathcal{F}_{G_{2}}}{\max }\left\{ \left(
q_{11}+3q_{21}\right) +\left( 3q_{12}+q_{22}\right) \right\} -1 \\
&=&\underset{0\leq q_{11}\leq \frac{1}{2}}{\max }\left\{ \left(
q_{11}+3\left( 1/2-q_{11}\right) \right) +\left( 3\left( 1/2-q_{11}\right)
+q_{11}\right) \right\} -1 \\
&=&\underset{0\leq q_{11}\leq \frac{1}{2}}{\max }\left\{ 3-4q_{11}\right\}
-1=2.
\end{eqnarray*}
\end{itemize}
\end{example}
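The closed-form computations above admit a quick numerical check. The following sketch is ours and not part of the model: it uses the parametrization of $\mathcal{F}_{G_{2}}$ by $q_{11}$ derived above, and the fact that here $S\left( q\right) =u_{1}\left( q_{1}\right) +u_{2}\left( q_{2}\right) $.

```python
import numpy as np

# Utilities from the example: u1(q11,q21) = q11 + 3*q21, u2(q12,q22) = 3*q12 + q22.
# Since e_j(p_j, u) = u in this example, S(q) = u1(q_1) + u2(q_2).
def S(q):
    return (q[0, 0] + 3*q[1, 0]) + (3*q[0, 1] + q[1, 1])

q_bar = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

# G1: the feasible set is the singleton {q_bar}, so the exchange value is 0.
V_G1 = S(q_bar) - S(q_bar)

# G2: feasible plans are parametrized by q11 in [0, 1/2].
def plan(q11):
    return np.array([[q11, 0.5 - q11],
                     [0.5 - q11, q11]])

grid = np.linspace(0.0, 0.5, 501)
V_G2 = max(S(plan(t)) for t in grid) - S(q_bar)

print(V_G1, V_G2)  # 0.0 2.0, with the maximum attained at q11 = 0
```

The grid search confirms $\mathcal{V}\left( G_{1}\right) =0$ and $\mathcal{V}\left( G_{2}\right) =2$, in agreement with the closed-form maximization of $3-4q_{11}$.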
There are three factors affecting the size of the exchange value: transport
structures, preferences, and prices. In the rest of this section, we study
how each of these factors affects the exchange value.
\subsection{Transport Structures and Exchange Value}
For any $G\in \Omega \left( \bar{q}\right) $, define
\begin{equation*}
K\left( \bar{q},G\right) =\left\{ q\in \mathcal{P}\left( X\times X\right)
\left| q\text{ is compatible with }G\right. \right\} ,
\end{equation*}%
and
\begin{equation*}
U\left( \bar{q}\right) =\left\{ q\in \mathcal{P}\left( X\times X\right)
\left| u_{j}\left( q_{j}\right) \geq u_{j}\left( \bar{q}_{j}\right) ,\text{ }%
j=1,...,\ell .\right. \right\}
\end{equation*}%
Then,
\begin{equation*}
\mathcal{F}_{G}=K\left( \bar{q},G\right) \cap U\left( \bar{q}\right) .
\end{equation*}%
Clearly, the structure of a transport system influences the exchange value
through $K\left( \bar{q},G\right) $. For this reason, this subsection
focuses on the properties of $K\left( \bar{q},G\right) $, whose
implications for the exchange value will become evident in the following
subsections.
\begin{proposition}
$K\left( \bar{q},G\right) $ is a polygon of dimension $N\left( G\right)
+\chi \left( G\right) -\left( k+\ell \right) $, where $\chi \left( G\right) $
is the Euler Characteristic number of $G$, and $N\left( G\right) $ is the
total number of existing $g_{ij}$'s in $G$.
\end{proposition}
\begin{proof}
For each interior vertex $v$ of $G$, let $\left\{ e_{1},e_{2},\cdots
,e_{h}\right\} \subseteq E\left( G\right) $ be the set of edges with $%
e_{i}^{-}=v$. Then, each $e_{i}$ corresponds to an equation of the form (\ref%
{edge_equation}). However, due to the balance equation (\ref{balance}),
we may remove one redundant equation from these $h$ equations. As a result,
the total number of equations of the form (\ref{edge_equation}) equals the
total number of edges of $G$ minus the total number of interior vertices of $%
G$. Thus, $K\left( \bar{q},G\right) $ is defined by $k+\ell -\chi \left(
G\right) $ linear equations of the form (\ref{edge_equation})
and $k\ell -N\left( G\right) $ equations of the form (\ref%
{zero_g_ij}). This shows that $K\left( \bar{q},G\right) $ is a convex
polygon whose dimension satisfies
\begin{equation}
\dim \left( K\left( \bar{q},G\right) \right) \geq k\ell -\left( k+\ell -\chi
\left( G\right) \right) -\left( k\ell -N\left( G\right) \right) =N\left(
G\right) +\chi \left( G\right) -\left( k+\ell \right) . \label{dim1}
\end{equation}%
By the following Lemma \ref{dimension_less}, we have the reverse
inequality, which completes the proof.
\end{proof}
\begin{lemma}
\label{dimension_less}The dimension of $K\left( \bar{q},G\right) $ is no
more than $N\left( G\right) +\chi \left( G\right) -\left( k+\ell \right) $.
\end{lemma}
\begin{proof}
The set $K\left( \bar{q},G\right) $ is defined by the $k\ell $ variables $\left(
q_{ij}\right) _{k\times \ell }$ subject to equations (\ref{zero_g_ij})
and (\ref{edge_equation}). Since the number of equations (\ref{zero_g_ij}) is $%
k\ell -N\left( G\right) $, it suffices to show that
\begin{equation*}
rank\left( A\right) \geq k\ell -\left( k\ell -N\left( G\right) \right)
-\left( N\left( G\right) +\chi \left( G\right) -\left( k+\ell \right)
\right) =\left( k+\ell -\chi \left( G\right) \right) ,
\end{equation*}%
where $A$ is the coefficient matrix given by linear equations (\ref%
{edge_equation}). We prove this by induction on $k$. When $%
k=1$, the coefficient matrix $A^{\left( 1\right) }$ is of the form%
\begin{equation*}
A^{\left( 1\right) }=\left(
\begin{array}{c}
I \\
B%
\end{array}%
\right) ,
\end{equation*}%
where $I$ is the $N\left( G^{\left( 1\right) }\right) \times N\left(
G^{\left( 1\right) }\right) $ identity matrix $I_{N\left( G^{\left( 1\right)
}\right) }$, and $B$ is some matrix of $N\left( G^{\left( 1\right) }\right) $
columns. Thus, the rank of $A^{\left( 1\right) }$ is $N\left( G^{\left(
1\right) }\right) $. On the other hand, the Euler Characteristic number of $%
G^{\left( 1\right) }$ is
\begin{equation*}
\chi \left( G^{\left( 1\right) }\right) =1+\left( \ell -N\left( G^{\left(
1\right) }\right) \right) \text{,}
\end{equation*}%
which gives
\begin{equation}
rank\left( A^{\left( 1\right) }\right) =\left( 1+\ell \right) -\chi \left(
G^{\left( 1\right) }\right) . \label{rank_1}
\end{equation}%
Now, we may use induction by assuming that
\begin{equation}
rank\left( A^{\left( k\right) }\right) \geq \left( k+\ell \right) -\chi
\left( G^{\left( k\right) }\right) \label{rank_k}
\end{equation}%
for any $G^{\left( k\right) }$ from $k$ sources to $\ell $ consumers. We
want to show that
\begin{equation*}
rank\left( A^{\left( k+1\right) }\right) \geq \left( k+\ell +1\right) -\chi
\left( G^{\left( k+1\right) }\right)
\end{equation*}%
for any $G^{\left( k+1\right) }$ from $\left( k+1\right) $ sources to $\ell $
consumers.
Let
\begin{eqnarray*}
E_{1}^{\left( k+1\right) } &=&\left\{ e\in E\left( G^{\left( k+1\right)
}\right) :e\subseteq g_{ij}\text{ for some }i\in \left\{ 1,\cdots ,k\right\}
\text{ and }j\in \left\{ 1,\cdots ,\ell \right\} \right\} \\
E_{2}^{\left( k+1\right) } &=&E\left( G^{\left( k+1\right) }\right)
\setminus E_{1}^{\left( k+1\right) }.
\end{eqnarray*}%
For each $e\in E_{2}^{\left( k+1\right) }$, we know $e\subseteq g_{\left(
k+1\right) j}$ for some $j\in \left\{ 1,\cdots ,\ell \right\} $, but $%
e\notin E_{1}^{\left( k+1\right) }$. Then, for each $e\in E_{1}^{\left(
k+1\right) }$, we have
\begin{equation*}
\sum_{\substack{ 1\leq i\leq k,1\leq j\leq \ell \\ e\subseteq g_{ij}}}%
q_{ij}+\sum_{\substack{ 1\leq j\leq \ell \\ e\subseteq g_{\left( k+1\right)
j}}}q_{\left( k+1\right) j}=w\left( e\right) .
\end{equation*}%
Also, for each $e\in E_{2}^{\left( k+1\right) }$, we have
\begin{equation*}
\sum_{\substack{ 1\leq j\leq \ell \\ e\subseteq g_{\left( k+1\right) j}}}%
q_{\left( k+1\right) j}=w\left( e\right) .
\end{equation*}%
As a result, the matrix $A^{\left( k+1\right) }$ can be expressed in the
form
\begin{equation}
A^{\left( k+1\right) }=\left(
\begin{array}{cc}
A^{\left( k\right) } & B^{\left( k+1\right) } \\
0 & C^{\left( k+1\right) }%
\end{array}%
\right) . \label{A_(k+1)}
\end{equation}%
Now, we consider a new transport path
\begin{equation*}
\tilde{G}=\sum_{e\in E_{2}^{\left( k+1\right) }}w\left( e\right) \left[ e%
\right]
\end{equation*}%
from a single source (i.e., $x_{k+1}$) to a few (say $\tilde{\ell}$) targets
(which do not necessarily belong to the original consumers). The matrix $%
C^{\left( k+1\right) }$ here is the associated $A^{\left( 1\right) }$ matrix
for $\tilde{G}$, and thus has rank $\left( 1+\tilde{\ell}\right) -\chi
\left( \tilde{G}\right) =\tilde{\ell}$ as $\tilde{G}$ is contractible. Also,
we have
\begin{equation*}
\chi \left( G^{\left( k+1\right) }\right) =\chi \left( G^{\left( k\right)
}\right) +1-\tilde{\ell}.
\end{equation*}%
Therefore, by (\ref{rank_k}) and (\ref{A_(k+1)}),
\begin{eqnarray*}
rank\left( A^{\left( k+1\right) }\right) &\geq &rank\left( A^{\left(
k\right) }\right) +rank\left( C^{\left( k+1\right) }\right) \\
&\geq &\left( k+\ell \right) -\chi \left( G^{\left( k\right) }\right)
+\left( 1+\chi \left( G^{\left( k\right) }\right) -\chi \left( G^{\left(
k+1\right) }\right) \right) \\
&=&\left( k+1+\ell \right) -\chi \left( G^{\left( k+1\right) }\right) .
\end{eqnarray*}
\end{proof}
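The dimension formula can be checked numerically on a concrete small system. The sketch below is ours, not part of the text: it assumes a graph with a single interior vertex $z$ and edges $x_{1}\rightarrow z$, $x_{2}\rightarrow z$, $z\rightarrow y_{1}$, $z\rightarrow y_{2}$, through which all four routes $g_{ij}$ pass, as in the system $G_{2}$ of the earlier example.

```python
import numpy as np

# Variables ordered (q11, q21, q12, q22). Each row is one edge equation:
# the sum of the q_ij routed through that edge equals the edge weight.
A = np.array([
    [1, 0, 1, 0],  # edge x1 -> z carries q11 + q12
    [0, 1, 0, 1],  # edge x2 -> z carries q21 + q22
    [1, 1, 0, 0],  # edge z -> y1 carries q11 + q21
    [0, 0, 1, 1],  # edge z -> y2 carries q12 + q22
], dtype=float)

k, ell = 2, 2
N_G = 4          # all four g_ij exist
chi_G = 5 - 4    # Euler characteristic: 5 vertices minus 4 edges

rank_A = np.linalg.matrix_rank(A)   # one equation is redundant (balance at z)
dim_K = N_G - rank_A                # free variables among the nonzero q_ij

print(rank_A, dim_K, N_G + chi_G - (k + ell))  # 3, 1, 1
```

Here the rank is $3=k+\ell -\chi \left( G\right) $, and the resulting dimension $1=N\left( G\right) +\chi \left( G\right) -\left( k+\ell \right) $ matches the one-parameter family of feasible plans found for $G_{2}$ in the example.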
\begin{corollary}
\label{corollary_kl}Suppose $G\in \Omega \left( \bar{q}\right) $.
\begin{enumerate}
\item If $k+\ell \geq N\left( G\right) +\chi \left( G\right) $, then $%
\mathcal{F}_{G}=\left\{ \bar{q}\right\} $.
\item If $k+\ell <N\left( G\right) +\chi \left( G\right) $ and $\bar{q}$ is
an interior point of the polygon $K\left( \bar{q},G\right) $, then $\mathcal{%
F}_{G}$ is a convex set of positive dimension. In particular, $\mathcal{F}%
_{G}\neq \left\{ \bar{q}\right\} .$
\end{enumerate}
\end{corollary}
\begin{proof}
If $k+\ell \geq N\left( G\right) +\chi \left( G\right) $, the convex polygon
$K\left( \bar{q},G\right) $ becomes a dimension zero set, and thus $\mathcal{%
F}_{G}=\left\{ \bar{q}\right\} $. When $k+\ell <N\left( G\right) +\chi
\left( G\right) $, the polygon $K\left( \bar{q},G\right) $ has positive
dimension. Since each $u_{j}$ is concave, $U\left( \bar{q}\right) $ is a
convex set containing $\bar{q}$. When $\bar{q}$ is an interior point of $%
K\left( \bar{q},G\right) $, the intersection $\mathcal{F}_{G}=K\left( \bar{q}%
,G\right) \cap U\left( \bar{q}\right) $ is still a convex set of positive
dimension. Thus, $\mathcal{F}_{G}\neq \left\{ \bar{q}\right\} $.
\end{proof}
\begin{proposition}
\label{pairwise_dis}Suppose $G\in \Omega \left( \bar{q}\right) $ satisfies
the following condition: for any two pairs $\left( i_{1},i_{2}\right) $ with
$i_{1}\neq i_{2}$ and $\left( j_{1},j_{2}\right) $ with $j_{1}\neq j_{2}$,
we have
\begin{equation}
V\left( g_{i_{1}j_{2}}\right) \cap V\left( g_{i_{2}j_{1}}\right) =\emptyset
\text{,} \label{pairwise_disjoint}
\end{equation}%
where $V\left( g_{ij}\right) $ is given in (\ref{V_g}). Then, $k+\ell \geq
N\left( G\right) +\chi \left( G\right) $. Hence, by Corollary \ref%
{corollary_kl}, $\mathcal{F}_{G}$ is a singleton $\left\{ \bar{q}\right\} $.
\end{proposition}
\begin{proof}
We still use the notations that have been used in the proof of Lemma \ref%
{dimension_less}. When $k=1$, $\chi \left( G\right) =1+\ell -N\left(
G\right) $, and thus $k+\ell =N\left( G\right) +\chi \left( G\right) $. By
using induction, we assume that the result is true for any $k$ sources. We
want to show that it holds for $k+1$ sources. Suppose there are in total $d$
edges of $G^{\left( k+1\right) }$ connecting to the vertex $x_{k+1}$. Then, as
discussed earlier, we may construct a transport path $\tilde{G}$ from a
single source $x_{k+1}$ to a few targets $\left\{ v_{1},v_{2},\cdots ,v_{%
\tilde{\ell}}\right\} $ with $v_{i}\in V\left( G^{\left( k\right) }\right) $%
. Each $v_{j}$ corresponds to a unique $g_{\left( k+1\right) j^{\ast
}}$ for some $j^{\ast }\in \left\{ 1,2,\cdots ,\ell \right\} $ that passes
through the vertex $v_{j}$. Indeed, suppose both $g_{\left( k+1\right)
j_{1}} $ and $g_{\left( k+1\right) j_{2}}$ pass through $v_{j}$ with $%
j_{1}\neq j_{2}$. Since $v_{j}\in V\left( G^{\left( k\right) }\right) $,
there exists an $i^{\ast }\in \left\{ 1,2,\cdots ,k\right\} $ such that $%
x_{i^{\ast }}$ and $v_{j}$ are connected by a directed curve lying in $%
G^{\left( k\right) }$. Then, $v_{j}\in g_{i^{\ast }j_{2}}\cap g_{\left(
k+1\right) j_{1}}$, which contradicts condition (\ref{pairwise_disjoint}).
As a result,
\begin{equation*}
N\left( G^{\left( k+1\right) }\right) =N\left( G^{\left( k\right) }\right) +%
\tilde{\ell}.
\end{equation*}%
On the other hand, it is easy to see that $\chi \left( G^{\left( k+1\right)
}\right) =\chi \left( G^{\left( k\right) }\right) +1-\tilde{\ell}$. So, by
induction,
\begin{eqnarray*}
\left( k+1\right) +\ell &\geq &1+N\left( G^{\left( k\right) }\right) +\chi
\left( G^{\left( k\right) }\right) \\
&=&1+\left( N\left( G^{\left( k+1\right) }\right) -\tilde{\ell}\right)
+\left( \chi \left( G^{\left( k+1\right) }\right) +\tilde{\ell}-1\right)
=N\left( G^{\left( k+1\right) }\right) +\chi \left( G^{\left( k+1\right)
}\right) .
\end{eqnarray*}%
This shows that
\begin{equation*}
k+\ell \geq N\left( G\right) +\chi \left( G\right)
\end{equation*}%
for any $G$ satisfying condition (\ref{pairwise_disjoint}). Therefore, $%
\mathcal{F}_{G}$ is a singleton $\left\{ \bar{q}\right\} $.
\end{proof}
In Theorem \ref{colliner_general}, we will consider a converse of
Proposition \ref{pairwise_dis} under suitable conditions on the
prices.
Given two transport paths
\begin{eqnarray*}
G_{1} &=&\left\{ V\left( G_{1}\right) ,E\left( G_{1}\right) ,w_{1}:E\left(
G_{1}\right) \rightarrow \lbrack 0,+\infty )\right\} \text{ and} \\
G_{2} &=&\left\{ V\left( G_{2}\right) ,E\left( G_{2}\right) ,w_{2}:E\left(
G_{2}\right) \rightarrow \lbrack 0,+\infty )\right\} ,
\end{eqnarray*}%
we say $G_{1}$ is topologically equivalent to $G_{2}$ if there exists a
homeomorphism $h:X\rightarrow X$ such that
\begin{eqnarray*}
V\left( G_{2}\right) &=&h\left( V\left( G_{1}\right) \right) , \\
E\left( G_{2}\right) &=&\left\{ h\left( e\right) :e\in E\left( G_{1}\right)
\right\} \\
\text{ and }w_{2}\left( h\left( e\right) \right) &=&w_{1}\left( e\right)
\text{ for each }e\in E\left( G_{1}\right) \text{.}
\end{eqnarray*}%
Clearly, if $G_{1}$ is topologically equivalent to $G_{2},$ then $K\left(
\bar{q},G_{1}\right) =K\left( \bar{q},G_{2}\right) $. As a result, we know $%
\mathcal{V}$ is topologically invariant:
\begin{proposition}
\label{topology}If $G_{1}$ is topologically equivalent to $G_{2}$, then $%
\mathcal{V}\left( G_{1}\right) =\mathcal{V}\left( G_{2}\right) $.
\end{proposition}
As will be clear in the next section, the topological invariance of $%
\mathcal{V}$ is a very useful result, because it enables us to draw on many
existing results in ramified optimal transportation when studying the new
optimal transport problem introduced there.
\subsection{Preferences and Exchange Value}
In this subsection, we will study the implications of preferences, which are
represented by utility functions, on the exchange value. The following
proposition shows that there is no exchange value when all consumers derive
their utilities solely from the total amount of goods they consume.
\begin{proposition}
\label{quantity_u}If $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R} $ is of
the form $u_{j}\left( q_{j}\right) =f_{j}\left( \sum_{i=1}^{k}q_{ij}\right) $
for some $f_{j}:[0,\infty )\rightarrow \mathbb{R}$ for each $j=1,...,\ell ,$
then $\mathcal{V}\left( G\right) =0$ for any $G\in \Omega \left( \bar{q}%
\right) $.
\end{proposition}
\begin{proof}
For any $q\in \mathcal{F}_{G},$ by compatibility, we know
\begin{equation*}
\sum_{i=1}^{k}q_{ij}=\sum_{i=1}^{k}\bar{q}_{ij},\text{ }j=1,...,\ell ,
\end{equation*}%
which implies
\begin{equation*}
u_{j}\left( q_{j}\right) =f_{j}\left( \sum_{i=1}^{k}q_{ij}\right)
=f_{j}\left( \sum_{i=1}^{k}\bar{q}_{ij}\right) =u_{j}\left( \bar{q}%
_{j}\right) ,
\end{equation*}%
showing that all consumers are indifferent between any feasible plan and $\bar{q}.$
Therefore, we get
\begin{equation*}
\mathcal{V}\left( G\right) =\underset{q\in \mathcal{F}_{G}}{\max }\text{ }%
S\left( q\right) -S\left( \bar{q}\right) =0.
\end{equation*}
\end{proof}
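Proposition \ref{quantity_u} can be illustrated numerically: compatibility fixes the column sums of $q$, so any utility of the form $f_{j}\left( \sum_{i}q_{ij}\right) $ is constant on the feasible set, and hence so is $S$. The sketch below is ours; it reuses the feasible family of the system $G_{2}$ from the earlier example with a hypothetical choice $f_{j}\left( t\right) =\sqrt{t}$.

```python
import numpy as np

def u(q_col):
    # quantity-only utility: u_j(q_j) = f_j(sum_i q_ij) with f_j(t) = sqrt(t)
    return np.sqrt(q_col.sum())

# Feasible plans of the example's G2, parametrized by q11 in [0, 1/2].
def plan(q11):
    return np.array([[q11, 0.5 - q11],
                     [0.5 - q11, q11]])

# Each column sum equals 1/2 for every feasible plan, so each u_j,
# and therefore S, is constant on the feasible set.
vals = [u(plan(t)[:, 0]) + u(plan(t)[:, 1]) for t in np.linspace(0, 0.5, 101)]
print(np.allclose(vals, vals[0]))  # True, so V(G) = 0
```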
For any $G\in \Omega \left( \bar{q}\right) $, let $Q\left( G\right) $ denote
the solution set of the maximization problem (\ref{main_problem}) defining the
exchange value, i.e.,
\begin{equation}
Q\left( G\right) =\left\{ \hat{q}\in \mathcal{F}_{G}\text{ }|\text{ }%
\mathcal{V}\left( G\right) =S\left( \hat{q}\right) -S\left( \bar{q}\right)
\right\} . \label{Q_G}
\end{equation}%
We are interested in describing geometric properties of the set $Q\left(
G\right) $. In particular, if $Q\left( G\right) $ contains only one element,
then the problem (\ref{main_problem}) has a unique solution.
\begin{proposition}
\label{uniqueness}For any $G\in \Omega \left( \bar{q}\right) ,$
\begin{enumerate}
\item The solution set $Q\left( G\right) $ is a compact nonempty set.
\item If $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is homogeneous of
degree $\beta _{j}>0$ and $\left( u_{j}\left( q_{j}\right) \right) ^{\frac{1%
}{\beta _{j}}}$ is concave in $q_{j},$ $j=1,...,\ell ,$ then $Q\left(
G\right) $ is convex.
\item (Uniqueness) If $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is
homogeneous of degree $\beta _{j}>0$ and $\left( u_{j}\left( q_{j}\right)
\right) ^{\frac{1}{\beta _{j}}}$ is concave in $q_{j}$ satisfying the
condition
\begin{equation}
\left( u_{j}\left( \left( 1-\lambda _{j}\right) \tilde{q}_{j}+\lambda _{j}%
\hat{q}_{j}\right) \right) ^{\frac{1}{\beta _{j}}}>\left( 1-\lambda
_{j}\right) \left( u_{j}\left( \tilde{q}_{j}\right) \right) ^{\frac{1}{\beta
_{j}}}+\lambda _{j}\left( u_{j}\left( \hat{q}_{j}\right) \right) ^{\frac{1}{%
\beta _{j}}} \label{nearly_concave}
\end{equation}%
for each $\lambda _{j}\in \left( 0,1\right) $ and any non-collinear $\tilde{%
q}_{j},\hat{q}_{j}\in \mathbb{R}_{+}^{k}$, for each $j=1,...,\ell ,$ then $%
Q\left( G\right) $ is a singleton, and thus the problem (\ref{main_problem})
has a unique solution.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $S$ is continuous on the compact set $\mathcal{F}_{G}$, the maximum
in (\ref{main_problem}) is attained, so $Q\left( G\right) $ is a nonempty
closed subset of $\mathcal{F}_{G}$, and thus compact.
If $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is homogeneous of degree
$\beta _{j}>0$, then Lemma \ref{homogeneous} implies that
\begin{equation}
e_{j}\left( p_{j},u_{j}\left( q_{j}\right) \right) =e_{j}\left(
p_{j},1\right) \left( u_{j}\left( q_{j}\right) \right) ^{\frac{1}{\beta _{j}}%
}\text{ and }S\left( q\right) =\sum\nolimits_{j=1}^{\ell }e_{j}\left(
p_{j},1\right) \left( u_{j}\left( q_{j}\right) \right) ^{\frac{1}{\beta _{j}}%
}. \label{S_concave}
\end{equation}%
Thus, when each $\left( u_{j}\left( q_{j}\right) \right) ^{\frac{1}{\beta
_{j}}}$ is concave in $q_{j}$, $S$ is concave in $q$. Now, for any $%
q^{\ast },\tilde{q}\in Q\left( G\right) $ and $\lambda \in \left[ 0,1\right]
,$ the convexity of $\mathcal{F}_{G}$ implies $\left( 1-\lambda \right)
q^{\ast }+\lambda \tilde{q}\in $ $\mathcal{F}_{G}$ and the concavity of $S$
implies
\begin{equation}
S\left( \left( 1-\lambda \right) q^{\ast }+\lambda \tilde{q}\right) -S\left(
\bar{q}\right) \geq \left( 1-\lambda \right) \left( S\left( q^{\ast }\right)
-S\left( \bar{q}\right) \right) +\lambda \left( S\left( \tilde{q}\right)
-S\left( \bar{q}\right) \right) =\mathcal{V}\left( G\right) ,
\label{S_inequality}
\end{equation}%
showing that $\left( 1-\lambda \right) q^{\ast }+\lambda \tilde{q}\in
Q\left( G\right) .$ Therefore, $Q\left( G\right) $ is convex.
To prove the uniqueness, we note that $\left( 1-\lambda \right) q^{\ast
}+\lambda \tilde{q}\in Q\left( G\right) $ implies an equality in (\ref%
{S_inequality}), i.e.,%
\begin{equation}
\left( u_{j}\left( \left( 1-\lambda \right) q_{j}^{\ast }+\lambda \tilde{q}%
_{j}\right) \right) ^{\frac{1}{\beta _{j}}}=\left( 1-\lambda \right) \left(
u_{j}\left( q_{j}^{\ast }\right) \right) ^{\frac{1}{\beta _{j}}}+\lambda
\left( u_{j}\left( \tilde{q}_{j}\right) \right) ^{\frac{1}{\beta _{j}}}
\label{equality_u_j}
\end{equation}%
for each $\lambda \in \left( 0,1\right) $, and each $j=1,2,\cdots ,\ell $.
When $\left( u_{j}\left( q_{j}\right) \right) ^{\frac{1}{\beta _{j}}}$ is
concave in $q_{j}$ and satisfies (\ref{nearly_concave}), the equality (\ref%
{equality_u_j}) implies that $q_{j}^{\ast }$ and $\tilde{q}_{j}$ are
collinear in the sense that $q_{j}^{\ast }=t_{j}\tilde{q}_{j}$ for some $%
t_{j}\geq 0$. By (\ref{margins}),
\begin{equation*}
n_{j}=\sum_{i}q_{ij}^{\ast }=\sum_{i}t_{j}\tilde{q}_{ij}=t_{j}\sum_{i}\tilde{%
q}_{ij}=t_{j}n_{j}.
\end{equation*}%
Therefore, $t_{j}=1$ as $n_{j}>0$. This shows $q^{\ast }=\tilde{q}$, and thus
$Q\left( G\right) $ is the singleton $\left\{ \tilde{q}\right\} .$
\end{proof}
Two classes of utility functions widely used in economics satisfy the
conditions of Proposition \ref{uniqueness}. One is the Cobb--Douglas function (\cite%
{Mas-Colell})
\begin{equation*}
u:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}:u\left( q_{1},...,q_{k}\right)
=\prod\limits_{i=1}^{k}\left( q_{i}\right) ^{\tau _{i}},\text{ }\tau _{i}>0,%
\text{ }i=1,...,k.
\end{equation*}%
The other is the constant elasticity of substitution (CES) function (\cite{Mas-Colell}%
)
\begin{equation*}
u:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}:u\left( q_{1},...,q_{k}\right) =%
\left[ \sum_{i=1}^{k}\gamma _{i}\left( q_{i}\right) ^{\tau }\right] ^{\frac{%
\beta }{\tau }},\text{ }\tau \in (0,1),\text{ }\beta >0,\text{ }\gamma
_{i}>0,\text{ }i=1,...,k.
\end{equation*}
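The following numerical sketch (with hypothetical parameter values of our choosing) illustrates why the Cobb--Douglas family fits the hypotheses of Proposition \ref{uniqueness}: $u$ is homogeneous of degree $\beta =\sum_{i}\tau _{i}$, and $u^{1/\beta }$ is concave, checked here via midpoint concavity at random bundles.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = np.array([0.5, 1.0])   # hypothetical exponents tau_i > 0
beta = tau.sum()             # degree of homogeneity, here 1.5

def u(q):
    # Cobb-Douglas utility u(q) = prod_i q_i^{tau_i}
    return np.prod(q ** tau)

def root_u(q):
    return u(q) ** (1.0 / beta)

# Homogeneity: u(t q) = t^beta u(q)
q = rng.uniform(0.1, 2.0, size=2)
homog_ok = np.isclose(u(1.7 * q), 1.7**beta * u(q))

# Midpoint concavity of u^{1/beta} on random pairs of bundles
concave_ok = all(
    root_u((a + b) / 2) >= (root_u(a) + root_u(b)) / 2 - 1e-12
    for a, b in (rng.uniform(0.1, 2.0, (2, 2)) for _ in range(1000))
)

print(homog_ok, concave_ok)  # True True
```

Here $u^{1/\beta }$ reduces to a Cobb--Douglas function whose exponents sum to one, which is the standard concave case.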
\begin{proposition}
\label{V_G_positive}Suppose $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$
is homogeneous of degree $\beta _{j}>0$ and $\left( u_{j}\left( q_{j}\right)
\right) ^{\frac{1}{\beta _{j}}}$ is concave in $q_{j}$ satisfying (\ref%
{nearly_concave}) for each $j=1,...,\ell $. Then, for any $G\in \Omega \left( \bar{%
q}\right) ,$ we have $\mathcal{V}\left( G\right) >0$ if and only if $\mathcal{F}%
_{G}\neq \left\{ \bar{q}\right\} $.
\end{proposition}
\begin{proof}
Clearly, if\ $\mathcal{F}_{G}=\left\{ \bar{q}\right\} $, then $\mathcal{V}%
\left( G\right) =0$. On the other hand, suppose $\mathcal{V}\left( G\right)
=\max_{q\in \mathcal{F}_{G}}S\left( q\right) -S\left( \bar{q}\right) =0$,
then by (\ref{S_q_comparison}), we have
\begin{equation*}
S\left( q\right) =S\left( \bar{q}\right) \text{ for each }q\in \mathcal{F}%
_{G}.
\end{equation*}%
This implies $Q\left( G\right) =\mathcal{F}_{G}$. By Proposition \ref%
{uniqueness}, $\mathcal{F}_{G}$ is a singleton $\left\{ \bar{q}\right\} $.
\end{proof}
This proposition says that each transport path $G\in \Omega \left( \bar{q}%
\right) $ has a positive exchange value as long as $\mathcal{F}_{G}$
contains more than one element.
\begin{theorem}
\label{theorem 1}Suppose $u_{j}:\mathbb{R}_{+}^{k}\rightarrow \mathbb{R}$ is
homogeneous of degree $\beta _{j}>0$ and $\left( u_{j}\left( q_{j}\right)
\right) ^{\frac{1}{\beta _{j}}}$ is concave in $q_{j}$ satisfying (\ref%
{nearly_concave}) for each $j=1,...,\ell $. If $k+\ell <N\left( G\right)
+\chi \left( G\right) $ and $\bar{q}$ is an interior point of the polygon $%
K\left( \bar{q},G\right) $, then $\mathcal{V}\left( G\right) >0$.
\end{theorem}
\begin{proof}
This follows from Proposition \ref{V_G_positive} and Corollary \ref%
{corollary_kl}.
\end{proof}
\subsection{Prices and Exchange Value}
In this subsection, we show how the collinearity of prices determines
whether a positive exchange value exists.
\begin{proposition}
\label{collinear1}If the price vectors are collinear, i.e., $p_{j}=\lambda
_{j}p_{1},$ for some $\lambda _{j}>0,$ $j=1,...,\ell ,$ then $\mathcal{V}%
\left( G\right) =0$ for any $G\in \Omega \left( \bar{q}\right) $.
\end{proposition}
\begin{proof}
Suppose, to the contrary, that $\mathcal{V}\left( G\right) >0$. Then there exists a
feasible plan $q\in \mathcal{F}_{G}$ such that $u_{j}\left( q_{j}\right)
\geq u_{j}\left( \bar{q}_{j}\right) $, $j=1,...,\ell ,$ with at least one
strict inequality. Without loss of generality, we assume $u_{j^{\ast
}}\left( q_{j^{\ast }}\right) >u_{j^{\ast }}\left( \bar{q}_{j^{\ast
}}\right) .$ For any $j=1,...,\ell ,$ $u_{j}\left( q_{j}\right) \geq
u_{j}\left( \bar{q}_{j}\right) $ implies $p_{j}\cdot q_{j}\geq p_{j}\cdot
\bar{q}_{j}.$ If not, i.e., $p_{j}\cdot q_{j}<p_{j}\cdot \bar{q}_{j},$ then
by the monotonicity of $u_{j}$, we can find a $\tilde{q}_{j}\in \mathbb{R}%
_{+}^{k}$ such that $\tilde{q}_{j}>q_{j}$,
\begin{equation*}
u_{j}\left( \tilde{q}_{j}\right) >u_{j}\left( q_{j}\right) \geq u_{j}\left(
\bar{q}_{j}\right) \text{ and }p_{j}\cdot \tilde{q}_{j}<p_{j}\cdot \bar{q}%
_{j},
\end{equation*}%
contradicting the assumption that $\bar{q}_{j}$ solves the utility
maximization problem (\ref{q_bar}) of consumer $j.$ Furthermore, for
consumer $j^{\ast }$, by definition of $\bar{q}_{j^{\ast }}$, the inequality
$u_{j^{\ast }}\left( q_{j^{\ast }}\right) >u_{j^{\ast }}\left( \bar{q}_{j^{\ast
}}\right) $ implies $p_{j^{\ast }}\cdot q_{j^{\ast }}>p_{j^{\ast }}\cdot
\bar{q}_{j^{\ast }}.$ Thus, we know $p_{j}\cdot q_{j}\geq p_{j}\cdot \bar{q}%
_{j}$ for all $j$ with a strict inequality for $j=j^{\ast }.$ Since $%
p_{j}=\lambda _{j}p_{1},$ $j=1,...,\ell $, we know $p_{1}\cdot q_{j}\geq
p_{1}\cdot \bar{q}_{j}$ for all $j$ with a strict inequality for $j=j^{\ast
} $. Summing over $j$ yields
\begin{equation*}
\sum_{j=1}^{\ell }p_{1}\cdot q_{j}>\sum_{j=1}^{\ell }p_{1}\cdot \bar{q}_{j}.
\end{equation*}%
Meanwhile, the feasibility of $q$ implies $\sum_{j=1}^{\ell
}q_{j}=\sum_{j=1}^{\ell }\bar{q}_{j}.$ Multiplying both sides by $p_{1}$
leads to
\begin{equation*}
\sum_{j=1}^{\ell }p_{1}\cdot q_{j}=\sum_{j=1}^{\ell }p_{1}\cdot \bar{q}_{j},
\end{equation*}%
a contradiction.
\end{proof}
\begin{corollary}
If there is only one good ($k=1$) or one consumer ($\ell =1$), then $%
\mathcal{V}\left( G\right) =0$ for any $G\in \Omega \left( \bar{q}\right) $.
\end{corollary}
\begin{proof}
When $k=1,$ define $\lambda _{j}=\frac{p_{j}}{p_{1}}>0,$ $j=1,...,\ell .$
The result follows from Proposition \ref{collinear1}. When $\ell =1,$ for
any $G\in \Omega \left( \bar{q}\right) ,$ the feasible set is
\begin{equation*}
\mathcal{F}_{G}=\left\{ q_{1}=\left( q_{11},\cdots ,q_{k1}\right) \in
Plan\left( \mathbf{a,b}\right) \left| q_{i1}=m_{i}=\bar{q}_{i1}\text{ \ for
each }i\right. \right\} =\left\{ \bar{q}_{1}\right\} ,
\end{equation*}%
which clearly yields $\mathcal{V}\left( G\right) =0.$
\end{proof}
\begin{proposition}
\label{collinear2}Let $k=2$ and $\ell =2$. Suppose $u_{j}$ is differentiable
at $\bar{q}_{j}$ with $\nabla u_{j}\left( \bar{q}_{j}\right) >0,$ $j=1,2$, and $%
\bar{q}_{ij}>0$ for each $i,j$. If $G\in \Omega \left( \bar{q}\right) $ with
\begin{equation}
V\left( g_{12}\right) \cap V\left( g_{21}\right) \neq \emptyset ,\text{ and }%
p_{21}>p_{11},p_{12}>p_{22}, \label{price_condition}
\end{equation}%
then $\mathcal{V}\left( G\right) >0$.
\end{proposition}
\begin{proof}
Since $g_{12}$ and $g_{21}$ overlap, let $\gamma _{2}$ denote the curve
on which $g_{12}$ and $g_{21}$ overlap, with endpoints $z_{1}$ and $z_{2}$. Let $%
\gamma _{1}$,$\gamma _{3}$, $\gamma _{4}$ and $\gamma _{5}$ be the
corresponding curves from $x_{1}$ to $z_{1}$, $z_{2}$ to $y_{1}$, $x_{2}$ to
$z_{1},$ and $z_{2}$ to $y_{2}$ respectively. Then, these $\gamma _{i}$'s
are disjoint except at their endpoints. See Figure \ref{fig2}. Now, we may
express $g_{ij}$'s as
\begin{eqnarray*}
g_{11} &=&\gamma _{1}+\gamma _{2}+\gamma _{3}, \\
g_{21} &=&\gamma _{4}+\gamma _{2}+\gamma _{3}, \\
g_{12} &=&\gamma _{1}+\gamma _{2}+\gamma _{5}, \\
g_{22} &=&\gamma _{4}+\gamma _{2}+\gamma _{5},
\end{eqnarray*}%
which imply
\begin{equation}
g_{11}+g_{22}=g_{12}+g_{21}. \label{equation_g}
\end{equation}%
\begin{figure}[h!]
\centering \includegraphics[width=0.4\textwidth, height=2in]{Fig2.PNG}
\caption{A positive exchange value}
\label{fig2}
\end{figure}
Now, let
\begin{equation*}
\tilde{q}=\bar{q}+\left(
\begin{array}{cc}
-\epsilon & \epsilon \\
\epsilon & -\epsilon%
\end{array}%
\right) ,
\end{equation*}%
where $\epsilon $ is a sufficiently small positive number. Then, by (\ref%
{equation_g}),
\begin{eqnarray*}
g\cdot \tilde{q} &=&g\cdot \left( \bar{q}+\left(
\begin{array}{cc}
-\epsilon & \epsilon \\
\epsilon & -\epsilon%
\end{array}%
\right) \right) \\
&=&g\cdot \bar{q}+\epsilon \left( -g_{11}+g_{12}+g_{21}-g_{22}\right)
=g\cdot \bar{q},
\end{eqnarray*}%
which shows that $\tilde{q}$ is compatible with $G$. Now, we show $%
u_{1}\left( \tilde{q}_{1}\right) >u_{1}\left( \bar{q}_{1}\right) $. Since $%
\bar{q}_{1}=\left( \bar{q}_{11},\bar{q}_{21}\right) \in \mathbb{R}_{++}^{2}$
is derived from the utility maximization problem (\ref{q_bar}) of consumer $%
1,$ it must satisfy the first order condition at $\bar{q}_{1}$:%
\begin{equation}
\partial u_{1}\left( \bar{q}_{1}\right) /\partial q_{11}=\lambda p_{11}\text{
and }\partial u_{1}\left( \bar{q}_{1}\right) /\partial q_{21}=\lambda p_{21}
\label{FOC1}
\end{equation}%
for some $\lambda >0$. Thus, using Taylor's Theorem, we have
\begin{eqnarray*}
u_{1}\left( \tilde{q}_{1}\right) &=&u_{1}\left( \bar{q}_{1}\right) +\frac{%
\partial u_{1}\left( \bar{q}_{1}\right) }{\partial q_{11}}\left( \tilde{q}%
_{11}-\bar{q}_{11}\right) +\frac{\partial u_{1}\left( \bar{q}_{1}\right) }{%
\partial q_{21}}\left( \tilde{q}_{21}-\bar{q}_{21}\right) +o\left( \epsilon
\right) \\
&=&u_{1}\left( \bar{q}_{1}\right) +\lambda p_{11}\left( -\epsilon \right)
+\lambda p_{21}\epsilon +o\left( \epsilon \right) \text{, by (\ref{FOC1})} \\
&=&u_{1}\left( \bar{q}_{1}\right) +\lambda \epsilon \left(
p_{21}-p_{11}\right) +o\left( \epsilon \right) \\
&>&u_{1}\left( \bar{q}_{1}\right) \text{, by (\ref{price_condition}).}
\end{eqnarray*}%
Similarly, we have $u_{2}\left( \tilde{q}_{2}\right) >u_{2}\left( \bar{q}%
_{2}\right) $. This shows that $\tilde{q}\in \mathcal{F}_{G}$. By Lemma \ref%
{Properties_e}, we have $S\left( \tilde{q}\right) >S\left( \bar{q}\right) $,
and thus
\begin{equation*}
\mathcal{V}\left( G\right) =\underset{q\in \mathcal{F}_{G}}{\max }\text{ }%
S\left( q\right) -S\left( \bar{q}\right) \geq S\left( \tilde{q}\right)
-S\left( \bar{q}\right) >0.
\end{equation*}
\end{proof}
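The perturbation argument can be checked on the data of the introductory example, where $p_{1}=\left( 1,6\right) $ and $p_{2}=\left( 6,1\right) $, so $p_{21}>p_{11}$ and $p_{12}>p_{22}$. (This is a boundary case, since some $\bar{q}_{ij}=0$ rather than $\bar{q}_{ij}>0$; the perturbed plan $\tilde{q}$ is nevertheless feasible for small $\epsilon >0$.) The sketch below is ours.

```python
import numpy as np

# Utilities from the introductory example.
def u1(q1): return q1[0] + 3*q1[1]
def u2(q2): return 3*q2[0] + q2[1]

q_bar = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

eps = 0.01
q_tilde = q_bar + eps * np.array([[-1.0, 1.0],
                                  [1.0, -1.0]])

# Row and column sums are unchanged, so q_tilde remains compatible,
# and both consumers are strictly better off under the perturbation.
better_1 = u1(q_tilde[:, 0]) > u1(q_bar[:, 0])
better_2 = u2(q_tilde[:, 1]) > u2(q_bar[:, 1])
print(better_1, better_2)  # True True
```

Both utilities rise by $2\epsilon $, mirroring the first-order gain $\lambda \epsilon \left( p_{21}-p_{11}\right) $ in the proof.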
\begin{theorem}
\label{colliner_general}Suppose $u_{j}$ is differentiable at $\bar{q}$ with $%
\nabla u_{j}\left( \bar{q}_{j}\right) \in \mathbb{R}_{++}^{k},$ $%
j=1,...,\ell ,$ and $\bar{q}\in \mathbb{R}_{++}^{k\ell }.$ If there exist
some $i_{1}\neq i_{2}\in \left\{ 1,...,k\right\} $ and $j_{1}\neq j_{2}\in
\left\{ 1,...,\ell \right\} $ satisfying
\begin{equation*}
p_{i_{2}j_{1}}>p_{i_{1}j_{1}},p_{i_{1}j_{2}}>p_{i_{2}j_{2}}\text{ and }%
V\left( g_{i_{1}j_{2}}\right) \cap V\left( g_{i_{2}j_{1}}\right) \neq
\emptyset
\end{equation*}%
for $G\in \Omega \left( \bar{q}\right) $, then $\mathcal{V}\left( G\right)
>0 $.
\end{theorem}
\begin{proof}
This follows from an argument analogous to the proof of Proposition \ref%
{collinear2}, as illustrated in Figure \ref{fig3}.
\end{proof}
\begin{figure}[h!]
\centering \includegraphics[width=0.45\textwidth]{Fig3.PNG}
\caption{A ramified transport system with positive exchange value.}
\label{fig3}
\end{figure}
To conclude this section, we have seen how transport structures, preferences
and prices jointly determine the exchange value. Each of these factors may
lead to a zero exchange value, but only in rather special situations. More precisely,
when the structure of the transport system yields a singleton feasible set
$\mathcal{F}_{G}$ (Corollary \ref{corollary_kl}, Proposition \ref{pairwise_dis}),
or the utility functions are merely quantity dependent
(Proposition \ref{quantity_u}), or the price vectors are collinear across
consumers (Proposition \ref{collinear1}), the exchange value is zero.
However, under more regular situations, a ramified transport system admits a
positive exchange value. For instance, if the utility
functions satisfy the conditions in (3) of Theorem \ref{theorem 1} with a
non-singleton feasible set $\mathcal{F}_{G}$, or
the transport system has a ramified structure with some non-collinear
price vectors (Theorem \ref{colliner_general}), there exists a positive
exchange value.
\section{A New Optimal Transport Problem}
In the previous section, we have considered the exchange value $\mathcal{V}%
\left( G\right) $ for any $G\in \Omega \left( \bar{q}\right) $. A natural
question would be whether there exists a $G^{\ast }$ that maximizes $%
\mathcal{V}\left( G\right) $ among all $G\in \Omega \left( \bar{q}\right) $.
The answer to this question has already been provided in Proposition \ref%
{upperbound} as the particular transport path $\bar{G}\in \Omega \left( \bar{%
q}\right) $ is an obvious maximizer. However, despite the fact that $\bar{G}$
maximizes exchange value, it may be inefficient when accounting for
transportation cost. Nevertheless, as indicated previously, one should not
neglect the benefit of obtaining an exchange value from a transport system.
As a result, it is reasonable to consider both transportation cost and
exchange value together when designing a transport system.
Recall that in \cite{xia1} and related works, a ramified transport system is modeled by a
transport path between two probability measures $\mathbf{a}$ and $\mathbf{b}$%
. For each transport path $G\in Path\left( \mathbf{a,b}\right) $ and any $%
\alpha \in \left[ 0,1\right] $, the $\mathbf{M}_{\alpha }$\textbf{\ }cost of
$G$ is defined by
\begin{equation}
\mathbf{M}_{\alpha }\left( G\right) :=\sum_{e\in E\left( G\right) }w\left(
e\right) ^{\alpha }length\left( e\right) . \label{M_a_cost}
\end{equation}%
When $\alpha <1$, a \textquotedblleft Y-shaped\textquotedblright\ path from
two sources to one target is usually preferable to a
\textquotedblleft V-shaped\textquotedblright\ path. In general, a transport
path with a branching structure may be more cost efficient than the one with
a \textquotedblleft linear\textquotedblright\ structure. A transport path $%
G\in Path\left( \mathbf{a,b}\right) $ is called an $\alpha -$\textit{optimal
transport path} if it is an $\mathbf{M}_{\alpha }$ minimizer in $Path\left(
\mathbf{a,b}\right) $.
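The effect of $\alpha$ on the comparison between a \textquotedblleft V-shaped\textquotedblright\ and a \textquotedblleft Y-shaped\textquotedblright\ path can be checked numerically. The following sketch is our own illustration, not part of the paper: the coordinates and the branching point are arbitrary choices (the branching point is not optimized), and the cost is computed directly from formula (\ref{M_a_cost}).

```python
# Illustration (ours): M_alpha cost of two transport paths moving
# mass 1/2 from each of two sources at (-1, 0) and (1, 0) to a
# single target at (0, 2).
import math

def m_alpha(edges, alpha):
    # M_alpha(G) = sum over edges e of w(e)^alpha * length(e)
    return sum(w ** alpha * math.dist(p, q) for (p, q, w) in edges)

sources = [(-1.0, 0.0), (1.0, 0.0)]
target = (0.0, 2.0)

# "V-shaped": each source ships its mass directly to the target.
v_path = [(s, target, 0.5) for s in sources]

# "Y-shaped": the two flows merge at an (arbitrary, non-optimized)
# branching point before reaching the target.
branch = (0.0, 1.0)
y_path = [(s, branch, 0.5) for s in sources] + [(branch, target, 1.0)]

print(m_alpha(v_path, 1.0), m_alpha(y_path, 1.0))  # V cheaper at alpha = 1
print(m_alpha(v_path, 0.5), m_alpha(y_path, 0.5))  # Y cheaper at alpha = 1/2
```

At $\alpha = 1$ the cost is plain weighted length, so the direct V-shaped path wins; at $\alpha = 1/2$ the concavity of $w^{\alpha}$ rewards merging the two flows, and the Y-shaped path is cheaper even at this non-optimized branching point.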
Based on the above discussions, we propose the following minimization
problem.
\begin{problem}
Given two atomic probability measures $\mathbf{a}$ and $\mathbf{b}$ on $X$
in an economy $\mathcal{E}$ given by (\ref{economy}), find a minimizer of
\begin{equation}
H_{\alpha ,\sigma }\left( G\right) :=M_{\alpha }\left( G\right) -\sigma
\mathcal{V}\left( G\right) \label{new transport}
\end{equation}%
among all $G\in \Omega \left( \bar{q}\right) $, where $\Omega \left( \bar{q}%
\right) $ is given by (\ref{Omega}), and $\alpha \in \lbrack 0,1)$ and $%
\sigma \geq 0$ are fixed constants.
\end{problem}
When the utility functions are merely quantity dependent (Proposition \ref%
{quantity_u}) or when price vectors are collinear across consumers
(Proposition \ref{collinear1}), the exchange value of any $G\in \Omega
\left( \bar{q}\right) $ is always zero. In these cases, $H_{\alpha ,\sigma
}\left( G\right) =M_{\alpha }\left( G\right) $ for any $\sigma $. Thus, the
study of $H_{\alpha ,\sigma }$ coincides with that of $M_{\alpha }$, which
can be found in the existing literature (e.g. \cite{xia1}, \cite{book}).
However, as seen in the previous section, it is quite possible that $%
H_{\alpha ,\sigma }$ does not agree with $M_{\alpha }$ on $\Omega \left(
\bar{q}\right) $ for $\sigma >0$ in a general economy $\mathcal{E}$.
As $\mathcal{V}$ is topologically invariant (Proposition \ref{topology}),
many results that can be found in the literature about $M_{\alpha }$ still hold
for $H_{\alpha ,\sigma }$. For instance, the Melzak algorithm for finding an
$M_{\alpha }$ minimizer (\cite{melzak}, \cite{gilbert}, \cite{book}) in a
fixed topological class still applies to $H_{\alpha ,\sigma }$ because $%
\mathcal{V}\left( G\right) $ is simply a constant within each topological
class. Also, as the balance equation (\ref{balance}) still holds, one can
still calculate angles between edges at each vertex using existing formulas (%
\cite{xia1}), and then get a universal upper bound on the degree of vertices
on an optimal $H_{\alpha ,\sigma }$ path.
However, due to the existence of exchange value, one may possibly favor an
optimal $H_{\alpha ,\sigma }$ path instead of the usual optimal $M_{\alpha }$
path when designing a transport system. The topological type of the optimal $%
H_{\alpha ,\sigma }$ path may differ from that of the optimal $M_{\alpha }$
path. This observation is illustrated by the following example.
\begin{figure}[h]
\centering
\subfloat[
$G_1$]{\includegraphics[width=0.3\textwidth,
height=1.6in]{Fig1a.PNG}\label{ga}} \hspace{0.25in}
\subfloat[
$G_2$]{\includegraphics[width=0.3\textwidth, height=1.6in]{Fig1b.PNG}\label{gb}}
\hspace{0.25in}
\subfloat[
$G_3$]{\includegraphics[width=0.3\textwidth, height=1.6in]{Fig1c.PNG}\label{gc}}
\caption{Three topologically different transport systems.}
\label{fig5}
\end{figure}
\begin{example}
Let us consider the transportation from two sources to two consumers.
If we only consider minimizing $\mathbf{M}_{\alpha }$ transportation cost,
each of the three topologically different types shown in Figure \ref{fig5}
may occur. However, when $\sigma $ is sufficiently large, only $G_{2}$ in
Figure \ref{gb} may be selected under suitable conditions of $u$ and $p$.
This is because $G_{2}$ has a positive exchange value which does not exist
in either $G_{1}$ or $G_{3}$.
\end{example}
\section{Introduction}\label{sec:introduction}
Logic programming~(LP) and Abstract Argumentation Frameworks~(AFs) are two well-established formalisms for
Knowledge Representation and Reasoning~(KR)
whose close relation is well-known since the introduction of the latter: besides introducing AFs, \citen{Dung95} studied how logic programs under the \emph{stable models}~\cite{GL88} and the \emph{well-founded semantics}~\cite{van1991well} can be translated into
abstract argumentation frameworks.
Since then, this initial connection has been further studied and extended,
providing relations between other semantics and ways to translate argumentation frameworks into logic programs~\cite{NievesCO08,Caminada2009,Wu2010ALJ,Toni2011,DvorakGWW11,CaminadaSAD15}.
On the other hand, Nelson's \emph{constructive logic}~\cite{nelson1949} is a conservative extension of \emph{intuitionistic logic}, which introduces the notion of \emph{strong negation} as a means to deal with constructive falsity, in an analogous way as intuitionism deals with constructive truth.
\citeauthor{Pearce96}~\citeyear{Pearce96,Pearce06}
showed that a particular selection of models of constructive logic, called \emph{equilibrium logic}, precisely characterizes the stable models of a logic program.
This characterization was later extended to the \emph{three\nobreakdash-valued stable model semantics}~\cite{przymusinski91a} and the well-founded semantics by~\citeN{caodpeva07a}.
Versions of constructive logic without the ``explosive'' axiom \mbox{$\varphi \to (\sneg\varphi \to \psi)$} have been extensively studied in the literature~\cite{nelson1959negation,escobar1972,thomason1969semantical,nelson1984,Odintsov2005,odintsov2015inference,kamide2015proof}
and can be considered a kind of \emph{paraconsistent} logics, in the sense, that some formulas may be constructively true and false at the same time.
The notion of equilibrium has been extended to one of these logics by \citen{OdintsovP05}, who also showed that it precisely characterizes the \emph{paraconsistent stable semantics}~\cite{SakamaI95}.
In this paper, we formalize in Nelson's constructive logic
a reasoning principle, to be called \emph{non-contradictory inference} (denoted~\ref{p:contradictory}), which states that
\begin{enumerate}[ itemindent=-4pt, leftmargin=29pt, rightmargin=15pt, label=\textbf{NC} ]
\item ``\emph{no belief can be held based on contradictory evidence.}''
\label{p:contradictory}
\end{enumerate}
Interestingly, though different from the logic studied by~\citeauthor{OdintsovP05},
the logic presented here is also
a conservative extension of equilibrium logic (and, thus, also of LP under the stable models semantics) that allows us to deal with inconsistent information in LP.
A remarkable feature of this new logic is that, besides LP, it also captures several classes of AFs under the stable semantics.
It is worth mentioning that the representation of AFs in this new logic is modular and is carried out
at the \emph{object language level}.
Recall that, by object language level, we mean that an AF and its logical translation \emph{share the same language} (each argument in the AF becomes an atom in its corresponding logical theory)
and that the relations between arguments in the AF (attacks or supports) are expressed by means of logical connectives.
This contrasts with \emph{meta level approaches}, which talk about the AFs from ``above,'' using another language and relegating logic to talk about this new language.
It is important to note that, as highlighted by~\citen{Gabbay2015TheAA},
the object language oriented approaches
have the remarkable property of providing alternative intuitive meanings to the translated concepts through their interpretation in logic.
In this sense,
from the viewpoint of constructive logic, AFs can be understood as a
\emph{strengthened closed world assumption}~\cite{reiter1980logic} that we denote as~\mbox{\ref{p:cwa}}:
\begin{enumerate}[ itemindent=-4pt, leftmargin=30pt, rightmargin=15pt, label=\textbf{C\hspace{-1.5pt}W}, start=2 ]
\item ``\emph{everything for which we do not have evidence of being true or for which we have contradictory evidence, should be regarded as false}''
\label{p:cwa}
\end{enumerate}
The relation between AFs and logic has been extensively studied in the literature
and, as mentioned above, can be divided in two categories:
those that follow an object language approach~\cite{Caminada2009,Gabbay2015TheAA,gabbay2016attack}
and those that follow a meta level approach~\cite{BesnardD04,Caminada2009,Grossi2011,DvorakSW12,Ariel2013,DoutreHP14,BesnardDH14,DDvorakGLW14}.
In particular, the approach we take here shares with the work by~\citeN{Gabbay2015TheAA} the use of strong negation to capture attacks, but differs in the underlying logic: constructive logic in our case and classical logic in the case of~\citeauthor{Gabbay2015TheAA}'s work.
On the intuitive level, under the constructive logic point of view, \emph{attacks} can be understood as
\begin{enumerate}[ itemindent=-4pt, leftmargin=26pt, rightmargin=18pt, label=\textbf{AT}, start=2 ]
\item ``\emph{means to construct a proof of the falsity of the attacked argument based on the acceptability of the attacker}''
\label{p:attack}
\end{enumerate}
On the practical level, the use of constructive logic allows for a more \emph{compact} and \emph{modular translation}: each attack becomes a (rule-like) formula with the attacker -- or a conjunction of attackers in the case of set attacking arguments~\cite{Nielsen2007} -- as the antecedent and the attacked argument as the consequent.
Moreover, when attacks are combined with LP implication, we show that the latter captures the notion of \emph{support} in Evidence-Based Argumentation Frameworks (EBAFs;~\citeNP{OrenN08}):
for accepting an argument, these frameworks require not only its \emph{acceptability} in Dung's sense, but also that it be supported by some chain of supports rooted in a special kind of arguments called \emph{prima-facie} arguments.
\section{Background}
In this section we recall the needed background regarding Nelson's constructive logic, logic programming and argumentation frameworks.
\subsection{Nelson's Constructive Logic}
The concept of constructive falsity was introduced into logic by~\citen{nelson1949}; the resulting logic is often denoted
as $\Np$.
It was first axiomatized by \citen{vorob1952constructive}, and later studied by \citen{markov1953constructive}, who related intuitionistic and strong negation, and by~\citen{rasiowa1969n}, who provided an algebraic characterization.
Versions of constructive logic without the ``explosive'' axiom
\mbox{$\varphi \to (\sneg\varphi \to \psi)$}
are usually denoted as~$\Nn$
and they are based on a four valued assignment for each world corresponding to the values \emph{unknown}, \emph{(constructively) true}, \emph{(constructively) false} and \emph{inconsistent} (or \emph{overdetermined}).
The logic $\Np$ can be obtained by adding back the ``explosive'' axiom.
We describe next a Kripke semantics for a version of $\Nn$~\cite{thomason1969semantical,gurevich1977intuitionistic}
with the falsity constant~$\bot$,
which is denoted as $\Nn^\bot$ by~\citeN{odintsov2015inference}.
We follow here an approach with two forcing relations in the style of the work by~\citeN{akama1987}.
An alternative characterization using $2$-valued assignments plus an involution has been described by~\citeN{routley1974semantical}.
Syntactically, we assume a logical language with a \emph{strong negation} connective~``$\sneg$''.
That is, given some (possibly infinite) set of atoms $\at$,
a \emph{formula} $\fF$ is defined using the grammar:
\[
\fF \quad ::= \quad \bot \ \mid \ a \ \mid \ \sneg \fF \ \mid \ \fF \wedge \fF \ \mid \ \fF \vee \fF \ \mid \ \fF \to \fF
\]
with \mbox{$a \in \at$}.
We use Greek letters $\fF$ and $\fG$ and their variants to stand for propositional formulas.
\emph{Intuitionistic negation} is defined as ${\neg \fF \eqdef (\fF \to \bot)}$.
We also define the derived operators
${\fF \leftrightarrow \fG \eqdef (\fF \to \fG) \wedge (\fG \to \fF)}$
and
${\top \eqdef \sneg \bot}$.
A Kripke frame $\Fs = \tuple{W,\leq}$ is a pair
where $W$ is a
non-empty
set of worlds and $\leq$ is a partial order on $W$.
A valuation ${V : W \longrightarrow 2^{\at}}$ is a function mapping each world to a subset of atoms.
A Nelson's interpretation (\Ninterpretation) is a
3-tuple ${\sI = \tuple{\Fs,V^+,V^-}}$
where ${\Fs = \tuple{W,\leq}}$ is a Kripke frame and
where both $V^+$ and $V^-$ are valuations
satisfying, for every pair of worlds
$w,w' \in W$ with $w \leq w'$ and every atom $a \in \at$, the following preservation properties:
\begin{enumerate}[ label=\roman*) , leftmargin=20pt ]
\item $V^+(w) \subseteq V^+(w')$, and
\item $V^-(w) \subseteq V^-(w')$.
\end{enumerate}
Intuitively, $V^+$ represents our knowledge about constructive truth while
$V^-$ represents our knowledge about constructive falsity.
We say that $\sI$ is \emph{consistent} if, in addition, it satisfies:
\begin{enumerate}[ label=\roman*), start=3 , leftmargin=20pt]
\item $V^+(w) \cap V^-(w) = \varnothing$ for every world $w \in W$.
\end{enumerate}
Two forcing relations $\modelsp$ and $\modelsn$ are defined
with respect to any \Ninterpretation\ ${\sI = \tuple{\Fs,V^+,V^-}}$,
world $w \in W$ and atom $a \in \at$ as follows:
\begin{IEEEeqnarray*}{l ?C? l }
\sI\sep w \modelsp a &\text{ iff }& a \in V^+(w)
\\
\sI\sep w \modelsn a &\text{ iff }& a \in V^-(w)
\end{IEEEeqnarray*}
These two relations are extended to compounded formulas as follows:
\begin{IEEEeqnarray*}{l ,C, l }
\sI\sep w \not\modelsp \bot
\\
\sI\sep w \modelsp \varphi_1 \wedge \varphi_2
&\text{ iff }& \sI\sep w \modelsp \varphi_1 \text{ and } \sI\sep w \modelsp \varphi_2
\\
\sI\sep w \modelsp \varphi_1 \vee \varphi_2
&\text{ iff }& \sI\sep w \modelsp \varphi_1 \text{ or } \sI\sep w \modelsp \varphi_2
\\
\sI\sep w \modelsp \varphi_1 \!\to\! \varphi_2
&\text{ iff }& \forall w'\!\geq\! w \ \sI\sep w' \!\not\modelsp\! \varphi_1 \hspace{-0.5pt}\text{ or } \sI\sep w' \!\modelsp\! \varphi_2
\\
\sI\sep w \modelsp \sneg\varphi
&\text{ iff }& \sI\sep w \modelsn\varphi
\\
\sI\sep w \modelsn \bot
\\
\sI\sep w \modelsn \varphi_1 \wedge \varphi_2
&\text{ iff }& \sI\sep w \modelsn \varphi_1 \text{ or } \sI\sep w \modelsn \varphi_2
\\
\sI\sep w \modelsn \varphi_1 \vee \varphi_2
&\text{ iff }& \sI\sep w \modelsn \varphi_1 \text{ and } \sI\sep w \modelsn \varphi_2
\\
\sI\sep w \modelsn \varphi_1 \!\to\! \varphi_2
&\text{ iff }& \sI\sep w \modelsp \varphi_1 \text{ and } \sI\sep w \modelsn \varphi_2
\\
\sI\sep w \modelsn \sneg\varphi
&\text{ iff }& \sI\sep w \modelsp\varphi
\end{IEEEeqnarray*}
An \Ninterpretation\ is said to be an \emph{\Nmodel} of a formula~$\varphi$,
in symbols $\sI \modelsp \varphi$, iff $\sI \sep w \modelsp \varphi$ for every $w \in W$.
It is said to be an \emph{\Nmodel} of a theory~$\Gamma$,
in symbols ${\sI \modelsp \Gamma}$, iff it is an \Nmodel\ of all its formulas,
that is, ${\sI \modelsp \varphi}$ for every $\varphi \in \Gamma$.
A formula $\varphi$ is said to be a \emph{consequence} of a theory~$\Gamma$
iff every model of $\Gamma$ is also a model of $\varphi$,
that is $\sI \modelsp \varphi$ for every $\sI \modelsp \Gamma$.
This formalization characterizes $\Nn$ while a restriction to consistent \Ninterpretation s would characterize $\Np$.
As mentioned above, $\Nn$ is ``somehow'' paraconsistent in the sense that a formula $\varphi$ and its strongly negated counterpart~$\sneg\varphi$ may simultaneously be consequences of some theory:
for instance,
we have that $\set{a, \sneg a} \modelsp a$ and $\set{a, \sneg a} \modelsp \sneg a$.
Intuitively, these two forcing relations determine the four values mentioned above:
a formula $\varphi$ satisfying $\sI \not\modelsp \varphi$ and $\sI \not\modelsn \varphi$
is understood as \emph{unknown};
if it satisfies
$\sI \modelsp \varphi$ and $\sI \not\modelsn \varphi$,
it is understood as \emph{true};
as \emph{false} if $\sI \not\modelsp \varphi$ and $\sI \modelsn \varphi$;
and as \emph{inconsistent} if $\sI \modelsp \varphi$ and $\sI \modelsn \varphi$.
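This four-valued classification can be made concrete with a small script. The sketch below is our own encoding, not part of the paper, and works over a single reflexive world, where the quantification over $w' \geq w$ in the implication clause trivializes; formulas are nested tuples over atoms given as strings.

```python
# Sketch (ours): the two forcing relations at a single world, with
# V+ and V- given as sets of atoms.
def forces(Vp, Vn, phi, sign):
    if phi == 'bot':
        return sign == '-'              # bot: never true, always false
    if isinstance(phi, str):            # an atom
        return phi in (Vp if sign == '+' else Vn)
    op, *args = phi
    if op == 'snot':                    # strong negation swaps relations
        return forces(Vp, Vn, args[0], '-' if sign == '+' else '+')
    a, b = args
    if op == 'imp':
        if sign == '+':                 # single world: only w' = w
            return not forces(Vp, Vn, a, '+') or forces(Vp, Vn, b, '+')
        return forces(Vp, Vn, a, '+') and forces(Vp, Vn, b, '-')
    comb = (all if sign == '+' else any) if op == 'and' else \
           (any if sign == '+' else all)
    return comb(forces(Vp, Vn, x, sign) for x in (a, b))

def value(Vp, Vn, phi):
    key = (forces(Vp, Vn, phi, '+'), forces(Vp, Vn, phi, '-'))
    return {(False, False): 'unknown', (True, False): 'true',
            (False, True): 'false', (True, True): 'inconsistent'}[key]

# The theory {a, ~a} of the example above:
print(value({'a'}, {'a'}, 'a'))             # inconsistent
print(value({'a'}, {'a'}, ('snot', 'a')))   # inconsistent
print(value({'a'}, {'a'}, 'b'))             # unknown
```

In particular, the interpretation induced by the theory $\set{a, \sneg a}$ makes $a$ inconsistent while leaving the unrelated atom $b$ unknown, matching the entailment example given earlier.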
\subsection{Logic Programming, Equilibrium Logic and\\ Here-and-There Nelson's Models}
In order to accommodate logic programming conventions,
we will write ${\varphi \larrow \psi}$ interchangeably with ${\psi \to \varphi}$ when describing logic programs.
An \emph{explicit literal} is either an atom~\mbox{$a \in \at$} or an atom preceded by strong negation~$\sneg a$.
A \emph{literal} is either an explicit literal~$l$ or an explicit literal preceded by intuitionistic negation~$\neg l$.
A literal that contains intuitionistic negation is called \emph{negative}.
Otherwise, it is called \emph{positive}.
A \emph{rule} is a formula of the form \mbox{$H \larrow B$}
where $H$ is a disjunction of atoms and $B$ is a conjunction of literals.
A logic program~$\Pi$ is a set of rules.
Given some set of explicit literals $\bT$ and some formula $\varphi$,
we write ${\bT \modelsp \varphi}$ when \mbox{$\tuple{\Fs,V^+,V^-} \modelsp \varphi$} holds
for the Kripke frame~$\Fs$ with a unique world~$w$ and valuations:
\mbox{$V^+(w) = \bT \cap \at$}
and
\mbox{$V^-(w) = \setm{ a }{ \sneg a \in \bT}$}.
A set of explicit literals~$\bT$ is said to be \emph{closed} under $\Pi$ if $\bT \modelsp H \larrow B$ for every rule $H \larrow B$
in~$\Pi$.
Next, we recall the notions of reduct and answer set~\cite{gelfondL91}:
\begin{definition}[Reduct and Answer Set]\label{def:answer.set}
The \emph{reduct} of program $\Pi$ \wrt~some set of explicit literals~$\bT$ is defined as follows
\begin{enumerate}[ label=\it\roman*) , leftmargin=20pt ]
\item Remove all rules with $\neg l$ in the body s.t. $l \in \bT$,
\item Remove all negative literals from the remaining rules.
\end{enumerate}
Set~$\bT$ is a \emph{stable model} of $\Pi$ if $\bT$ is a $\subseteq$-minimal set closed under the reduct of~$\Pi$ \wrt~$\bT$.
\end{definition}
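Definition~\ref{def:answer.set} can be exercised by brute force on small programs. The following sketch is ours and is restricted to non-disjunctive rules (single-literal heads), for which the $\subseteq$-minimal closed set of the reduct is its unique least closed set, computable as a fixpoint; rules are triples of a head, a positive body and a negated body over explicit literals, with \texttt{'-a'} standing for $\sneg a$.

```python
# Hedged sketch (ours): Gelfond-Lifschitz reduct and stable models of
# a non-disjunctive program with explicit literals.
from itertools import chain, combinations

def reduct(program, T):
    # (i) drop rules with some negated literal l such that l is in T;
    # (ii) drop the negative literals from the remaining rules.
    return [(h, pos) for (h, pos, neg) in program
            if not any(l in T for l in neg)]

def least_closed(rules):
    S, changed = set(), True
    while changed:
        changed = False
        for (h, pos) in rules:
            if all(l in S for l in pos) and h not in S:
                S.add(h)
                changed = True
    return S

def stable_models(program, lits):
    candidates = chain.from_iterable(
        combinations(lits, r) for r in range(len(lits) + 1))
    return [set(c) for c in candidates
            if least_closed(reduct(program, set(c))) == set(c)]

# a <- not b.   b <- not a.   ~a <- b.
prog = [('a', [], ['b']), ('b', [], ['a']), ('-a', ['b'], [])]
print(stable_models(prog, ['a', 'b', '-a']))
# two stable models: {a} and {b, ~a}
```

The program above has exactly two stable models: one where $a$ holds, and one where $b$ and $\sneg a$ hold.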
For characterizing logic programs in constructive logic,
we are only interested in a particular kind of {\Ninterpretation s} over \mbox{\emph{Here-and-There}}~(HT) frames.
These frames are of the form \mbox{$\Fsht = \tuple{\set{h,t}, \leq }$}
where $\leq$ is a partial order satisfying \mbox{$h \leq t$}.
We refer to \Ninterpretation s\ with an HT-frame as \emph{\HTinterpretation s}.
A \emph{\HTmodel} is an \Nmodel\ which is also a \HTinterpretation.
We use the generic terms \emph{interpretation} (resp. \emph{model}) for both HT and \Ninterpretations\ (resp. models) when it is clear by the context.
At first sight, it may seem that restricting ourselves to HT frames is an oversimplification.
However, once the closed world assumption is added to intuitionistic logic, this logic can be replaced without loss of generality by any proper intermediate logic~\cite{OsorioPA05,CabalarFC0V17}.
Given any \HTinterpretation,
\mbox{$\sI = \tuple{\Fsht,V^+,V^-}$}
we define
four sets of atoms as follows:
\begin{gather*}
\begin{IEEEeqnarraybox}{ l ,C, l }
H_\sI^+ &\eqdef& V^+(h)\\
H_\sI^- &\eqdef& V^-(h)
\end{IEEEeqnarraybox}
\hspace{2cm}
\begin{IEEEeqnarraybox}{ l ,C, l }
T_\sI^+ &\eqdef& V^+(t)\\
T_\sI^- &\eqdef& V^-(t)
\end{IEEEeqnarraybox}
\end{gather*}
These sets of atoms correspond to the atoms verified at each corresponding world and valuation.
Every \HTinterpretation~$\sI$ is fully determined by these four sets.
We will omit the subscript and write, for instance, $H^+$ instead of $H^+_\sI$ when $\sI$ is clear from the context.
Furthermore,
any \HTinterpretation\ can be succinctly rewritten as a pair
\mbox{$\sI = \tuple{\bH,\bT}$}
where
\mbox{$\bH = H^+ \cup \sneg H^-$}
and
\mbox{$\bT = T^+ \cup \sneg T^-$}
are sets of literals.\punctfootnote{We denote by $\sneg S \eqdef \setm{ \sneg \varphi }{ \varphi \in S}$ the set of strongly negated formulas of a given set~$S$.
Similarly, we also define
$\neg S \eqdef \setm{ \neg \varphi }{ \varphi \in S}$.}
Note that, by the preservation properties of \Ninterpretations, we have that ${\bH \subseteq \bT}$.
We say that an \HTinterpretation~\mbox{$\sI = \tuple{\bH,\bT}$}
is \emph{total} iff $\bH = \bT$.
Given \HTinterpretations\ $\sI = \tuple{\bH,\bT}$ and $\sI' = \tuple{\bH',\bT'}$,
we write $\sI \leq \sI'$ iff $\bH \subseteq \bH'$ and $\bT = \bT'$.
As usual, we write $\sI < \sI'$ iff $\sI \leq \sI'$ and $\sI \neq \sI'$.
Next, we introduce the definition of equilibrium model~\cite{Pearce96}.
\begin{definition}[Equilibrium model]\label{def:equilibrium}
A \HTmodel~$\sI$ of a theory $\Gamma$ is said to be an \emph{equilibrium model} iff it is total and there is no other \HTmodel~$\sI'$ of $\Gamma$ s.t. $\sI' < \sI$.
\end{definition}
Interestingly, consistent equilibrium models precisely capture the answer sets of a logic program.
The following is a rephrasing of Proposition~2 by~\citeN{Pearce96} using our notation.
\begin{proposition}\label{prop:per96}
Let~$\Pi$ be a logic program.
A consistent set~$\bT$ of explicit literals is a stable model of~$\Pi$ if and only if~$\bT$ is the set of explicit literals true in some consistent equilibrium model of~$\Pi$.
\end{proposition}
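For small programs, this correspondence can be checked mechanically. The brute-force sketch below is ours; as before, it assumes non-disjunctive rules given as head/positive-body/negated-body triples, enumerates \HTinterpretation s $\tuple{\bH,\bT}$ with $\bH \subseteq \bT$, and selects the total minimal models of Definition~\ref{def:equilibrium}.

```python
# Sketch (ours): equilibrium models by enumerating HT-interpretations.
from itertools import chain, combinations

def powerset(s):
    s = sorted(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def holds(rule, world, H, T):
    # I, world |=+ (head <- pos, not neg); worlds are ordered h <= t,
    # so at h the body/head condition must be checked at both h and t.
    # Negated body literals are evaluated at t (HT-negation).
    head, pos, neg = rule
    for w in ([world, T] if world is H else [T]):
        body = all(l in w for l in pos) and all(l not in T for l in neg)
        if body and head not in w:
            return False
    return True

def equilibrium_models(program, lits):
    def ht_model(H, T):
        return all(holds(r, w, H, T) for r in program for w in (H, T))
    return [T for T in powerset(lits)
            if ht_model(T, T)
            and not any(H < T and ht_model(H, T) for H in powerset(T))]

# a <- not b.   b <- not a.
prog = [('a', [], ['b']), ('b', [], ['a'])]
print(equilibrium_models(prog, ['a', 'b']))  # [{'a'}, {'b'}]
```

As expected, the equilibrium models of this program coincide with its stable models, in line with Proposition~\ref{prop:per96}.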
More generally, it has been shown by~\citeN{OdintsovP05} that the
(possibly non-consistent) equilibrium models of a logic program capture its paraconsistent answer sets~\cite{SakamaI95}.
The following propositions characterize some interesting properties of HT and strong negation that will be useful throughout the paper\footnote{For the sake of clarity, proofs of formal results are moved to the appendix.}:
\begin{proposition}[Persistence]\label{prop:preservation}
Any \HTinterpretation~$\sI$, formula~$\varphi$ and world \mbox{$w \in \set{h,t}$} satisfy:
\begin{enumerate}
\item $\sI\sep w \modelsp \varphi$ implies $\sI\sep t \modelsp \varphi$, and
\item $\sI\sep w \modelsn \varphi$ implies $\sI\sep t \modelsn \varphi$.
\end{enumerate}
\end{proposition}
\begin{proposition}[HT-negation]{\label{prop:negation}}
Any \HTinterpretation~$\sI$, formula~$\varphi$ and world \mbox{$w \in \set{h,t}$} satisfy:
\begin{enumerate}[ label=\roman*) , leftmargin=20pt ]
\item $\sI\sep w \modelsp \neg\varphi$ iff $\sI\sep t \not\modelsp \varphi$, and
\label{item:1:prop:negation}
\item $\sI\sep w \modelsp \neg\neg\varphi$ iff $\sI\sep t \modelsp \varphi$, and
\label{item:5:prop:negation}
\item $\sI\sep w \modelsp \neg\neg\neg\varphi$
iff $\sI\sep w \modelsp \neg\varphi$, and
\label{item:7:prop:negation}
\item $\sI\sep w \modelsn \neg\varphi$
iff $\sI\sep w \modelsn \sneg\varphi$.
\label{item:2:prop:negation}
\end{enumerate}
\end{proposition}
\subsection{Abstract Argumentation Frameworks}
Since their introduction, the syntax of AFs has been extended in different ways.
One of these extensions, usually called SETAFs, consists in generalizing the notion of binary attacks to collective attacks such that a set of arguments~$B$ attacks some argument $a$~\cite{Nielsen2007}.
Another such extension, usually called Bipolar~AFs~(BAFs), consists of frameworks with a second positive relation called \emph{support}~\cite{karacapilidis2001computer,verheij2003deflog,AmgoudCL04}.
In particular,~\citen{Verheij03}
introduced the idea that, in AFs, arguments are regarded as \emph{prima-facie} justified statements, which can be considered true until proved otherwise, that is, until they are defeated.
This allows introducing a second class of \emph{ordinary arguments}, which cannot be considered true unless they are supported by the prima-facie ones.
Later,
\citen{Oren2014}
developed this idea by introducing Evidence-Based AFs (EBAFs),
an extension of SETAFs (and, thus, of AFs) which incorporates the notions of support and prima-facie arguments.
Next we introduce an equivalent definition by~\citeN{Fandinno2018foiks},
which is closer to the logic formulation we pursue here.
\begin{definition}[Evidence-Based Argumentation framework]\label{def:EBA}
An Evidence-Based Argumentation framework $\EBAFF$ is a \mbox{$4$-tuple}
where
$\A$ represents a (possibly infinite) set of arguments,
\mbox{$\R_a \subseteq 2^{\A} \times {\A}$}
is an attack relation,
\mbox{$\R_s \subseteq 2^{\A} \times \A$} is a support relation
and $\PF \subseteq \A$ is a set of distinguished \emph{prima-facie} arguments.
We say that an $\EBAF$ is \emph{finitary} iff $B$ is finite for every attack or support
\mbox{$(B,a) \in \R_a \cup \R_s$}.
\end{definition}
The notion of acceptability is extended by requiring not only defense against all attacking arguments, but also support from some prima-facie arguments.
Furthermore, the defense can be provided not only by defeating all attacking sets of arguments, but also by denying the necessary support for some of the non-prima-facie arguments of these attacks.
\begin{definition}[Defeat/Acceptability]\label{def-inh-ASAF}
Given some argument \mbox{$a \in \A$} and set of arguments $\aA \subseteq \A$, we say
\begin{enumerate}[ topsep=3pt, itemsep=0pt, parsep=1pt, start=1, leftmargin=15pt ]
\item $a$ is \emph{defeated} \wrt\ $\aA$ iff
there is some $B \subseteq \aA$ s.t. $(B,a) \in \R_a$,
\end{enumerate}
$\Defeated{\aA}$
will denote the set of arguments that are defeated \wrt~$\aA$.
\begin{enumerate}[ topsep=3pt, itemsep=0pt, parsep=1pt, start=2, leftmargin=15pt ]
\item $a$ is \emph{supported} \wrt\ $\aA$ iff either \mbox{$a \in \PF$} or there is some
\mbox{$B \subseteq \aA \setminus\set{a}$} whose elements are supported \wrt\ $\aA \setminus\set{a}$ and such that \mbox{$(B,a) \in \R_s$},
\item $a$ is \emph{supportable} \wrt~$\aA$ iff it is supported \wrt~$\A \setminus \Defeated{\aA}$,
\item $a$ is \emph{unacceptable} \wrt\ $\aA$ iff it is either defeated or not supportable,
\item $a$ is \emph{acceptable} \wrt\ $\aA$ iff it is supported and,
for every $(B,a) \in \R_a$, there is $b \in B$ such that $b$ is unacceptable \wrt\ $\aA$
\end{enumerate}
$\Supported{\aA}$ (resp. $\UnAcceptable{\aA}$ and $\Acceptable{\aA}$)
will denote the set of arguments that are supported (resp. unacceptable and acceptable) \wrt~$\aA$.
\end{definition}
Then, semantics are defined as follows:
\begin{definition}\label{def:semantics}
A set of arguments $\aA \subseteq \A$
is said to be:
\begin{enumerate}[ topsep=2pt, itemsep=0pt, parsep=1pt, start=1 , leftmargin=15pt ]
\item \emph{\ssupporting} \iiff\ $\aA \subseteq \Supported{\aA}$,
\item \emph{\cfree}
\iiff\
$\aA \!\cap\! \Defeated{\aA} \!=\! \varnothing$,
\item \emph{admissible} \iiff\ it is \cfree\ and $\aA \subseteq \Acceptable{\aA}$,
\item \emph{complete} \iiff\ it is \cfree\ and
$\aA = \Acceptable{\aA}$,
\item \emph{preferred} \iiff\ it is a $\subseteq$-maximal admissible set,
\item \emph{stable}
\iiff
\mbox{$\aA = \A \setminus \UnAcceptable{\aA}$}.
\end{enumerate}
\end{definition}
\noindent
SETAFs can be seen as the special case in which the set of supports is empty and all arguments are prima-facie.
In this sense, we write \mbox{$\SFF$}
instead of
\mbox{$\EBAF = \tuple{\A,\R_a,\varnothing,\A}$}.
Furthermore, in their turn, AFs can be seen as a special case of SETAFs where all attacks have singleton sources.
In such case, we just write $\mbox{$\AFF$}$
instead of $\SFF$,
where \mbox{$\R = \setm{ (b,a) }{ (\set{b},a) \in \R_a }$}.
For these kinds of frameworks, the respective notions of \mbox{conflict-free} (resp. admissible, complete, preferred or stable) sets coincide with those defined by~\citeN{Nielsen2007} and~\citeN{Dung95}, respectively.
To illustrate the notions of support and prima-facie arguments,
consider the well-known Tweety example:
\begin{example}\label{ex:tweety}
Suppose we have the knowledge base that includes the following statements:
\begin{enumerate}
\item birds (normally) can fly,
\item penguins are birds,
\item penguins cannot fly and
\item Tweety is a penguin.
\end{enumerate}
We can formalize this by the following graph:
\begin{center}
\begin{tikzpicture}[tikzpict]
\matrix[row sep=0.35cm,column sep=2cm,ampersand replacement=\&] {
\node (a) [arg] {$\mathbf{pT}$};\&
\node (pb) [videphantom] {};\&
\node (d)[narg] {$\mathbf{fT}$};
\\
\\
};
\node (b)[narg, below of=pb, node distance=20pt] {$\mathbf{bT}$};
\draw [double distance=2pt,->,-open triangle 45] (a) to [out=-25,in=175] (b);
\draw [double distance=2pt,->,-open triangle 45] (b) to [out=0,in=210] (d);
\draw [->,-triangle 45] (a) to [out=15,in=165] (d);
\end{tikzpicture}
\end{center}
where $pT$, $bT$ and $fT$ respectively stand for ``Tweety is a penguin'', ``Tweety is a bird''
and ``Tweety can fly.''
Double arrows represent support while simple ones represent attacks.
Furthermore, circles with solid border represent prima-facie arguments while dashed border
ones represent ordinary ones.
That is,
``Tweety is a penguin'' is considered a prima-facie argument that supports that
``Tweety is a bird'' which, in its turn, supports that
``Tweety can fly.''
The latter, being supported, is then also regarded as true unless proven otherwise.
Note that
``Tweety is a penguin''
also attacks that
``Tweety can fly'',
so the latter cannot be accepted as true.
Formally, this corresponds to the framework
\mbox{$\newef\label{ef:tweety} \hspace{-1pt}=\hspace{-1pt}
\tuple{\A\hspace{-1pt},\hspace{-1pt}\R_a\hspace{-1pt},\hspace{-1pt}\R_s\hspace{-1pt},\PF}$}
with
\mbox{$\R_a = \set{ (\set{pT},fT) }$}
and
\mbox{$\R_s = \set{ (\set{pT},bT), \, (\set{bT},fT) }$}
and
\mbox{$\PF = \set{ pT }$}
whose unique admissible, complete, preferred and stable extension is
\mbox{$\set{pT , bT}$}.
In other words, we conclude that ``Tweety cannot fly.''
Note that ``Tweety is a penguin'' provides conflicting evidence for whether it can fly or not.
In EBAFs, this is solved by giving priority to the attack relation, so ``Tweety cannot fly''
is inferred.
\end{example}
\section{Reasoning with Contradictory Evidence in Equilibrium Logic}
In this section, we formalize principles~\ref{p:contradictory} and~\ref{p:cwa} in constructive logic,
obtaining as a result a formalism which is a conservative extension of logic programming under the answer set semantics (see Theorem~\ref{thm:conservative.ht} and Corollary~\ref{cor:conservative.asp} below)
and which is capable of reasoning with contradictory evidence.
We start by defining a new implication connective that captures~\ref{p:contradictory} in terms of intuitionistic implication and strong negation:
\begin{gather}
\varphi_1 \sup \varphi_2 \ \ \eqdef \ \ (\neg\!\sneg\varphi_1 \wedge \varphi_1) \to \varphi_2
\label{eq:sup.def}
\end{gather}
Recall that intuitionistic implication \mbox{$\varphi_1 \to \varphi_2$} can be informally understood as a means to construct a proof of the truth of the consequent~$\varphi_2$ in terms of a proof of truth of the antecedent~$\varphi_1$.
In this sense,~\eqref{eq:sup.def}
can be understood as a means to construct a proof of the truth of the consequent~$\varphi_2$ in terms of a proof of the truth of the antecedent~$\varphi_1$ and the absence of a proof of its falsity, or in other words,
in terms of a \emph{consistent proof} of the antecedent~$\varphi_1$.
It is easy to see that~\eqref{eq:sup.def} is weaker than intuitionistic implication, that is, that
\begin{gather*}
\varphi_1 \to \varphi_2 \ \modelsp \ \varphi_1 \sup \varphi_2
\end{gather*}
holds for every pair of formulas $\varphi_1$ and~$\varphi_2$.
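Indeed, the entailment follows by strengthening of the antecedent; the following routine sketch, using the forcing clause for~$\to$, spells this out:

```latex
\sI\sep w \modelsp \varphi_1 \to \varphi_2
\quad\Longrightarrow\quad
\text{for every } w' \geq w: \enskip
\sI\sep w' \modelsp \neg\!\sneg\varphi_1 \wedge \varphi_1
\ \text{ implies } \
\sI\sep w' \modelsp \varphi_1
\ \text{ implies } \
\sI\sep w' \modelsp \varphi_2
```

that is, \mbox{$\sI\sep w \modelsp (\neg\!\sneg\varphi_1 \wedge \varphi_1) \to \varphi_2$}, which is precisely \mbox{$\sI\sep w \modelsp \varphi_1 \sup \varphi_2$}.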
We can use the following simple example to illustrate the difference between intuitionistic implication and~\eqref{eq:sup.def}.
\begin{example}\label{ex:impl.diff}
Let $\newtheory\label{th:imp.diff}$ be the following set of formulas:
\begin{align*}
&a
&
&b
&
\sneg&\hspace{1.5pt}b
&
a &\sup c
&
b &\sup d
\end{align*}
and let $\theory\ref{th:imp.diff}'$ be the theory obtained by replacing each occurrence of implication~$\sup$ by intuitionistic implication~$\to$.
On the one hand, both $\theory\ref{th:imp.diff}$ and $\theory\ref{th:imp.diff}'$
entail atoms $a$ and $c$.
On the other hand,
we have:
\mbox{$\theory\ref{th:imp.diff}' \modelsp d$}
but
\mbox{$\theory\ref{th:imp.diff} \not\modelsp d$}.
This is in accordance with~\ref{p:contradictory},
since the only way to obtain a proof of $d$ is in terms of~$b$, for which we have contradictory evidence.
Note also that an alternative proof of $d$ could be obtained if new consistent evidence becomes available:
for the theory
\mbox{$\newtheory\label{th:imp.diff2} = \theory\ref{th:imp.diff} \cup \set{ a \sup d}$}
we obtain
\mbox{$\theory\ref{th:imp.diff2} \modelsp d$}.
It is also worth highlighting that, in contrast with intuitionistic implication, this new connective~\eqref{eq:sup.def} is not monotonic: for
\mbox{$\newtheory\label{th:imp.diff3} = \set{ b ,\ b \sup d}$}
we have
\mbox{$\theory\ref{th:imp.diff3} \modelsp d$}
and $\theory\ref{th:imp.diff3} \cup \set{ \sneg b} \not\modelsp d$.
Obviously, it is not antimonotonic either: $\theory\ref{th:imp.diff3} \setminus \set{ b} \not\modelsp d$.
\end{example}
The following result shows that, when dealing with consistent evidence, these differences disappear and~\eqref{eq:sup.def} collapses into intuitionistic implication:
\begin{proposition}{\label{prop:imp.consitent}}
Let $\sI$ be a consistent \Ninterpretation\ and let $\varphi_1$ and~$\varphi_2$ be any pair of formulas.
Then, $\sI \modelsp \varphi_1 \sup \varphi_2$ \ iff \ $\sI \modelsp \varphi_1 \to \varphi_2$.
\end{proposition}
Let us now formalize the~\ref{p:cwa} assumption.
As usual \mbox{non-monotonicity} is obtained by considering equilibrium models~(Definition~\ref{def:equilibrium}).
However, to capture~\ref{p:cwa}, we need to restrict the consequences of these models to those that are consistent.
We do so by introducing a new \emph{\mbox{cw\nobreakdash-inference}} relation which, precisely, restricts the consequences of~$\modelsp$ to those which are consistent:
\begin{gather}
\sI\sep w \models \varphi \quad\text{iff}\quad
\sI\sep w \modelsp \neg\!\sneg \varphi \wedge \varphi
\label{eq:val.def}
\end{gather}
Furthermore, as usual, we write
\mbox{$\sI \models \varphi$}
iff
\mbox{$\sI\sep w \models \varphi$}
for all
\mbox{$w \in W$}.
We also write
\mbox{$\Gamma \models \varphi$}
iff
\mbox{$\sI \models \varphi$}
holds for every equilibrium model~$\sI$ of $\Gamma$.
For instance, in Example~\ref{ex:impl.diff}, it is easy to see that
\mbox{$\theory\ref{th:imp.diff} \modelsp b$}
and
\mbox{$\theory\ref{th:imp.diff} \modelsp \sneg b$},
but
\mbox{$\theory\ref{th:imp.diff} \not\models b$}
and
\mbox{$\theory\ref{th:imp.diff} \not\models \sneg b$}
because the unique equilibrium model of
\mbox{$\theory\ref{th:imp.diff}$}
contains contradictory evidence for~$b$.
On the other hand, as may be expected, when we deal with non-contradictory evidence, cw\nobreakdash-inference~$\models$ just collapses to the regular inference relation~$\modelsp$
(see Proposition~\ref{prop:consitent} below).
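The claims made so far about $\theory\ref{th:imp.diff}$ can be verified mechanically. The following Python sketch is ours and purely illustrative: it brute-forces the two-world forcing relations for the fragment with $\wedge$, $\to$, strong negation and HT\nobreakdash-negation, derives $\sup$ and cw\nobreakdash-inference from their definitions~\eqref{eq:sup.def} and~\eqref{eq:val.def}, and enumerates equilibrium models over a given signature. Interpretations are encoded as sets of signed literals, e.g.\ \texttt{('b', False)} for $\sneg b$; this encoding is an implementation choice.

```python
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Formulas as nested tuples; constructors for the fragment we need.
def atom(a): return ('atom', a)
def sneg(f): return ('sneg', f)          # strong negation ~
def neg(f):  return ('neg', f)           # HT-negation
def sup(f, g):                           # f ) g := (neg ~f & f) -> g
    return ('to', ('and', neg(sneg(f)), f), g)

def holds(w, T, f, pol=True):
    """Forcing |=p (pol=True) / |=n (pol=False) at world w of <H,T>."""
    op = f[0]
    if op == 'atom':
        return (f[1], pol) in w
    if op == 'sneg':
        return holds(w, T, f[1], not pol)
    if op == 'and':
        vals = [holds(w, T, g, pol) for g in f[1:]]
        return all(vals) if pol else any(vals)
    if op == 'to':
        if pol:                          # check at every world above w
            return all(not holds(v, T, f[1]) or holds(v, T, f[2])
                       for v in {w, T})
        return holds(w, T, f[1]) and holds(w, T, f[2], False)
    if op == 'neg':                      # HT-negation: no proof at any world
        if pol:
            return all(not holds(v, T, f[1]) for v in {w, T})
        return holds(w, T, f[1])
    raise ValueError(op)

def is_model(H, T, theory):
    return all(holds(w, T, f) for f in theory for w in (H, T))

def equilibrium_models(theory, atoms):
    # Total minimal models: no proper H below T is still a model.
    lits = {(a, p) for a in atoms for p in (True, False)}
    return [T for T in subsets(lits)
            if is_model(T, T, theory)
            and not any(H != T and is_model(H, T, theory)
                        for H in subsets(T))]

def cw_holds(H, T, f):                   # cw-inference: a consistent proof
    g = ('and', neg(sneg(f)), f)
    return all(holds(w, T, g) for w in (H, T))

# Example th:imp.diff: { a, b, ~b, a ) c, b ) d }
A, B, C, D = (atom(x) for x in 'abcd')
th = [A, B, sneg(B), sup(A, C), sup(B, D)]
eq = equilibrium_models(th, 'abcd')
```

Running it confirms the unique equilibrium model $\set{a, b, \sneg b, c}$, that $c$ is cw-entailed while $b$, $\sneg b$ and $d$ are not, and the non-monotonic behaviour of~$\sup$ observed for $\theory\ref{th:imp.diff3}$.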
To finalize the formalization of~\ref{p:cwa},
we also need to define
\emph{default negation}.
This is accomplished by introducing a new connective $\Not$ and adding the following two items to Nelson's forcing relations:
\begin{IEEEeqnarray*}{l ,C, l}
\sI\sep w \modelsp \Not \varphi &\text{ iff }& \sI\sep w \modelsp \neg \varphi \vee (\varphi \wedge \sneg\varphi)
\label{eq:neg.def}
\\
\sI\sep w \modelsn \Not \varphi &\text{ iff }& \sI\sep w \modelsp \varphi \text{ and } \sI\sep w \not\modelsn \varphi
\end{IEEEeqnarray*}
Then, an \emph{\mbox{extended formula}}~$\fF$ is defined using the following grammar:
\[
\fF \quad ::= \quad \bot \ \mid \ a \ \mid \ \sneg \fF \ \mid \ \Not \varphi \ \mid \ \fF \wedge \fF \ \mid \ \fF \vee \fF \ \mid \ \fF \to \fF
\]
with $a \in \at$ an atom.
The following result shows that \mbox{cw\nobreakdash-inference} and default negation are conservative extensions of the satisfaction relation~$\modelsp$ and HT\nobreakdash-negation~$\neg$
when restricted to consistent knowledge.
\begin{proposition}{\label{prop:consitent}}
Let $\sI$ be a consistent \Ninterpretation\ and $\varphi$ be any extended formula.
Then, the following conditions hold:
\begin{enumerate}[ label=\roman*), leftmargin=20pt]
\item $\sI \models \varphi$ \ iff \ \mbox{$\sI \modelsp \varphi$}
\label{item:models:prop:consitent}
\item $\sI \models \Not \varphi$ \ iff \ $\sI \models \neg \varphi$.
\label{item:neg:prop:consitent}
\end{enumerate}
\end{proposition}
Despite the relation between default negation~$\Not$ and HT\nobreakdash-negation~$\neg$ on consistent interpretations, in general, they do not coincide.
The following example illustrates the difference between these two kinds of negations:
\begin{example}\label{ex:negation}
Let $\newtheory\label{th:negation}$ be the following theory:
\begin{gather*}
a \hspace{1.25cm}
\sneg a \hspace{1.25cm}
\Not \sneg a \sup b
\end{gather*}
This theory has a unique equilibrium model $\sI = \tuple{\bT,\bT}$
with $\bT = \set{ a , \sneg a, b }$.
Note that, every model $\sJ$ of~$\theory\ref{th:negation}$
must satisfy \mbox{$\sJ \modelsp a \wedge \sneg a$}
and, thus, it must also satisfy $\sJ \models \Not \sneg a$
and $\sJ \modelsp b$ follows (Proposition~\ref{prop:cw-relation}).
Hence, $\sI$ is a $\leq$-minimal model and, thus, an equilibrium model.
On the other hand, let
$\newtheory\label{th:negation2}$ be the theory:
\begin{gather*}
a \hspace{1.25cm}
\sneg a \hspace{1.25cm}
\neg\!\sneg a \sup b
\end{gather*}
In this case, we can check that
\mbox{$\sJ = \tuple{\bH,\bT}$}
with
\mbox{$\bH = \set{ a, \sneg a}$}
is a model of~$\theory\ref{th:negation2}$
because
\mbox{$\sJ \not\models \neg\!\sneg a$}
and, thus, now $\sI$ is not an equilibrium model.
In fact, $\tuple{\bH,\bH}$ is the unique equilibrium model of~$\theory\ref{th:negation2}$.
\end{example}
The following result shows the relation between default negation, implication and \mbox{cw\nobreakdash-inference}.
\begin{proposition}{\label{prop:cw-relation}}
Let $\sI$ be any \Ninterpretation\ and $\varphi$ be any formula.
Then,
\begin{enumerate}[ label=\roman*), leftmargin=20pt]
\item $\sI \models \varphi$ and $\sI \modelsp \varphi \sup \psi$ implies \mbox{$\sI \modelsp \psi$},
\label{item:impl:prop:cw-relation}
\item $\sI \models \Not \varphi$ implies $\sI \not\models \varphi$.
\label{item:neg:1:prop:cw-relation}
\end{enumerate}
Furthermore, if $\sI$ is a total \HTinterpretation, then
\begin{enumerate}[ label=\roman*), leftmargin=20pt, start=3]
\item $\sI \models \Not \varphi$ iff $\sI \not\models \varphi$.
\label{item:neg:prop:cw-relation}
\end{enumerate}
\end{proposition}
\noindent
Condition~\ref{item:impl:prop:cw-relation} formalizes a kind of \emph{modus ponens} for $\sup$ in the sense that, if we have a consistent proof of the antecedent, then we have a (possibly inconsistent) proof of the consequent.
It is clear that this statement cannot be strengthened to provide a consistent proof of the consequent because any other formula could provide the contradictory evidence to make it inconsistent.
Note also that this relation is non-monotonic as adding new information may result in a contradictory antecedent.
Condition~\ref{item:neg:prop:cw-relation} formalizes the~\mbox{\ref{p:cwa}} assumption, that is, $\Not \varphi$ holds whenever $\varphi$ is not known to be true or we have contradictory evidence for it.
Note that, according to this, the default negation of an inconsistent formula is true and, therefore, the evaluation of default negation itself is always consistent (even if the formula is inconsistent):
that is, \mbox{$\sI\sep w \not\modelsp \Not \varphi$}
or
\mbox{$\sI\sep w \not\modelsn \Not \varphi$}
holds for any extended formula.
In contrast with implication~$\sup$, default negation~$\Not$ cannot be straightforwardly defined\footnote{It is still an open question whether it is definable in terms of Nelson's connectives or not.} in terms of Nelson's connectives.
Another alternative we have investigated was defining $\Not \varphi$ as
$\neg \varphi \vee (\varphi \wedge \sneg\varphi)$
evaluated in terms of \mbox{cw\nobreakdash-inference}.
The following result sheds light on this attempt.
\begin{proposition}{\label{prop:default.negation.alt}}
Let $\sI$ be any \Ninterpretation\ and $\varphi$ be any formula.
Then, $\sI \models \neg \varphi \vee (\varphi \wedge \sneg\varphi)$ iff
$\sI \models \neg \varphi$.
\end{proposition}
\noindent
That is, in terms of \mbox{cw\nobreakdash-inference}, $\neg \varphi \vee (\varphi \wedge \sneg\varphi)$ is equivalent to HT\nobreakdash-negation.
As illustrated by Example~\ref{ex:negation}, default negation and HT\nobreakdash-negation do not behave in the same way.
The following example illustrates that, though default negation allows one to derive new knowledge from contradictory information, it does not allow a contradiction to justify itself.
\begin{example}
Let $\newtheory\label{th:default}$ be a logic program containing the following single rule:
\begin{gather}
\Not \sneg a \sup a
\label{eq:default.rule}
\end{gather}
stating, as usual, that $a$ holds by default.
As expected, this theory has a unique equilibrium model~$\sI$ which satisfies
\mbox{$\sI \models a$}
and
\mbox{$\sI \not\models \sneg a$}.
Let now
\mbox{$\newtheory\label{th:default2} = \theory\ref{th:default} \cup \set{ \sneg a}$}.
This second theory also has
a unique equilibrium model $\sI$ which now satisfies
\mbox{$\sI \models \sneg a$}
and
\mbox{$\sI \not\models a$}.
To see that
\mbox{$\sJ = \tuple{\bT,\bT}$}
with
\mbox{$\bT = \set{ a , \sneg a }$}
is not an equilibrium model of~\theory\ref{th:default2},
let
\mbox{$\sJ' = \tuple{\bH,\bT}$}
with
\mbox{$\bH = \set{ \sneg a }$}
be an interpretation.
Since $\sJ'$ satisfies \mbox{$\sJ' < \sJ$} and it is a model of $\sneg a$,
it only remains to be shown that
$\sJ'$ is a model of~\eqref{eq:default.rule}.
For that, just note
\mbox{$\sJ \models \sneg a$}
and, thus,
\mbox{$\sJ \not\models \Not \sneg a$}
follows by Proposition~\ref{prop:cw-relation}.
This implies that
$\sJ'$ satisfies~\eqref{eq:default.rule}
and, consequently, that
$\sJ$ is not an equilibrium model.
In fact, $\tuple{\bH,\bH}$ is the unique equilibrium model of~$\theory\ref{th:default2}$.
\end{example}
\subsection{A Conservative Extension of Logic Programming}
Let us now consider the language formed with the set of logical connectives
$$\LanLP \eqdef \set{ \bot, \sneg\hspace{1.5pt}, \wedge, \vee, \sup, \Not }$$
In other words, a \emph{\mbox{$\LanLP$-formula}}~$\fF$ is defined using the following grammar:
\[
\fF \quad ::= \quad \bot \ \mid \ a \ \mid \ \sneg \fF \ \mid \ \Not \varphi \ \mid \ \fF \wedge \fF \ \mid \ \fF \vee \fF \ \mid \ \fF \sup \fF
\]
with \mbox{$a \in \at$} being an atom.
A $\LanLP$-literal is either an explicit literal~$l$ or its default negation~$\Not l$.
A $\LanLP$-rule is a formula of the form \mbox{$H \supl B$}
where $H$ is a disjunction of atoms and $B$ is a conjunction of \mbox{$\LanLP$-literals}.
\emph{\mbox{$\LanLP$-theories}} and \emph{\mbox{$\LanLP$-programs}} are respectively defined as sets of \mbox{$\LanLP$-formulas} and \mbox{$\LanLP$-rules}.
The definition of an answer set is applied straightforwardly as in Definition~\ref{def:answer.set}.
Given any $\LanLP$-theory $\Gamma$,
by $\LN{\Gamma}$ we denote the result of
\begin{enumerate}[ leftmargin=15pt ]
\item replacing every occurrence of $\sup$ by~$\to$ and
\item every occurrence of $\Not$ by $\neg$.
\end{enumerate}
Then, the following results follow directly from Propositions~\ref{prop:imp.consitent} and~\ref{prop:consitent}:
\begin{theorem}\label{thm:conservative.ht}
Let $\Gamma$ be any \emph{$\LanLP$-theory} and $\sI$ be any consistent interpretation.
Then, $\sI$ is an equilibrium model of $\Gamma$ iff $\sI$ is an equilibrium model of $\LN{\Gamma}$.
\end{theorem}
\begin{corollary}\label{cor:conservative.asp}
Let $P$ be a $\LanLP$-program and $\bT$ be any consistent set of explicit literals.
Then, $\sI = \tuple{\bT,\bT}$ is an equilibrium model of $P$ iff $\bT$ is an answer set of~$P$.
\end{corollary}
In other words, the equilibrium model semantics is a conservative extension of the answer set semantics.
The following example shows the usual representation of the Tweety scenario in this logic (an alternative representation using contradictory evidence will be discussed in the Discussion section).
\begin{examplecont}{ex:tweety}\label{ex:tweety2}
Consider again the Tweety scenario.
The following logic program~$\newprogram\label{prg:tweety}$ is a usual way of representing this scenario in LP:
\begin{IEEEeqnarray}{rlCl}
&\mathit{flyTweety} &\supl& \mathit{birdTweety} \wedge \Not \sneg\mathit{flyTweety}
\label{eq:rule.flyTweety}
\\
&\mathit{birdTweety} &\supl& \mathit{penguinTweety}
\\
\sneg\,&\mathit{flyTweety} &\supl& \mathit{penguinTweety}
\\
&\IEEEeqnarraymulticol{2}{l}{\mathit{penguinTweety}} \notag
\end{IEEEeqnarray}
where rule~\eqref{eq:rule.flyTweety} formalizes the statement~``birds normally can fly.''
This is achieved by considering $\sneg\mathit{flyTweety}$ as an exception to this rule.
It can be checked that $\program\ref{prg:tweety}$ has a unique equilibrium model~$\sI_{\ref{prg:tweety}}$, which is consistent, and which satisfies~$\sI_{\ref{prg:tweety}} \not\models \mathit{flyTweety}$
and~$\sI_{\ref{prg:tweety}} \models \Not\mathit{flyTweety}$.
In other words, Tweety cannot fly.
\end{examplecont}
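For a program as simple as~$\program\ref{prg:tweety}$, the result can be cross-checked with a standard answer-set computation: by Corollary~\ref{cor:conservative.asp}, on consistent sets of explicit literals the semantics coincides with ordinary answer sets, where explicit literals such as $\sneg\mathit{flyTweety}$ are treated as fresh atoms. The following brute-force Gelfond-Lifschitz reduct check is our illustrative sketch, with abbreviated names:

```python
from itertools import combinations

# Rules as (head, positive_body, default_negated_body); the explicit
# literal ~flyTweety is encoded as the fresh atom '-fly'.
prog = [
    ('fly',     {'bird'},    {'-fly'}),   # fly <- bird, not -fly
    ('bird',    {'penguin'}, set()),
    ('-fly',    {'penguin'}, set()),
    ('penguin', set(),       set()),
]

def closure(rules):
    """Least set of atoms closed under negation-free rules."""
    t, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if body <= t and head not in t:
                t.add(head)
                changed = True
    return t

def is_answer_set(t, prog):
    # Gelfond-Lifschitz reduct: drop rules whose negated body meets t,
    # then compare t with the least model of the remaining rules.
    reduct = [(h, pb) for h, pb, nb in prog if not (nb & t)]
    return closure(reduct) == t

atoms = {h for h, _, _ in prog} | {l for _, pb, nb in prog for l in pb | nb}
answer_sets = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), r)
               if is_answer_set(set(c), prog)]
```

The unique answer set is $\set{\mathit{penguin},\, \mathit{bird},\, \sneg\mathit{fly}}$, matching the equilibrium model of the example: Tweety cannot fly.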
\begin{examplecont}{ex:impl.diff}\label{ex:impl.diff.neg}
Consider now the theory obtained by replacing formulas $a \sup c$ and $b \sup d$
in~$\theory\ref{th:imp.diff}$ by the following two formulas:
\begin{IEEEeqnarray*}{rCl " rCl}
\Not e \wedge a &\sup& c
&
\Not e \wedge b &\sup& d
\end{IEEEeqnarray*}
Let $\newtheory\label{th:imp.diff.neg}$ be the resulting theory.
It is easy to see that
neither
$\theory\ref{th:imp.diff.neg}$
nor
$\LN{\theory\ref{th:imp.diff.neg}}$
monotonically entails~$c$ or~$d$.
This is due to the fact that the negation of $e$ is not monotonically entailed:
\mbox{$\theory\ref{th:imp.diff.neg} \not\modelsp \Not e$}
and
\mbox{$\LN{\theory\ref{th:imp.diff.neg}} \not\modelsp \neg e$}.
On the other hand, the negation of $e$ is \mbox{non-monotonically} entailed in both cases:
\mbox{$\theory\ref{th:imp.diff.neg} \models \Not e$}
and
\mbox{$\LN{\theory\ref{th:imp.diff.neg}} \models \neg e$}.
Note that
both \mbox{$\theory\ref{th:imp.diff.neg}$} and
\mbox{$\LN{\theory\ref{th:imp.diff.neg}}$}
have a unique equilibrium model,
\mbox{$\sI_{\ref{th:imp.diff.neg}} = \tuple{\bT,\bT}$}
and
\mbox{$\sI_{\ref{th:imp.diff.neg}}' = \tuple{\bT',\bT'}$}
with $\bT = \set{a,b ,\sneg b, c}$
and
\mbox{$\bT' = \set{a,b, \sneg b, c, d}$}, respectively,
and in both cases we have
\mbox{$\sI_{\ref{th:imp.diff.neg}} \models \Not e$} and
\mbox{$\sI_{\ref{th:imp.diff.neg}}' \models \neg e$}.
As a result, we get that both theories cautiously entail~$c$.
However, as happened in Example~\ref{ex:impl.diff}, only $\LN{\theory\ref{th:imp.diff.neg}}$ cautiously entails~$d$, because the unique evidence for $d$ comes from~$b$ for which we have inconsistent evidence.
This behavior is different from paraconsistent answer sets~\cite{SakamaI95,OdintsovP05}.
As pointed out by~\citeN{SakamaI95}, the truth of~$d$ is less credible than the truth of $c$, since $d$ is derived through the contradictory fact~$b$.
In order to distinguish such two facts~\citeN{SakamaI95} also define \emph{suspicious answer sets}
which do not consider $d$ as true.\footnote{
Suspicious answer sets are based on a 6-value lattice which add the values \emph{suspiciously true} and \emph{suspiciously false} to the four values of~$\Nn$.
In the unique suspicious answer set of
$\theory\ref{th:imp.diff.neg}$,
atom~$d$ gets assigned the suspiciously true value instead of the true value.
A formal comparison with suspicious answer sets is left for future work.}
This example also helps us to illustrate the strengthened closed world assumption principle~\ref{p:cwa}.
On the one hand, we have that
\mbox{$\theory\ref{th:imp.diff.neg} \models \Not e$}
holds because there is no evidence for $e$.
On the other hand,
we have that
\mbox{$\theory\ref{th:imp.diff.neg} \models \Not b$}
holds because we have contradictory evidence for $b$.
Moreover, we have that \mbox{$\theory\ref{th:imp.diff.neg} \models \Not d$} holds
because the only evidence we have for $d$ is based on the contradictory evidence for $b$.
\end{examplecont}
\section{Argumentation Frameworks in Equilibrium Logic}
In this section, we show how AFs, SETAFs and EBAFs can be translated into this logic in a modular way, using only the object language.
This translation is a formalization of the intuition of an attack stated in~\ref{p:attack}.
Theorems~\ref{thm:af.stable<->emodel}, \ref{thm:sf.stable<->emodel} and~\ref{thm:ef.stable<->emodel} show that the equilibrium models of this translation precisely characterize the stable extension of the corresponding framework.
\subsection{Dung's Argumentation Frameworks}
Now, let us formalize the notion of attack introduced in~\ref{p:attack}, by defining the following connective:
\begin{gather}
\varphi_1 \attacks \varphi_2 \ \ \eqdef \ \ \varphi_1 \sup \sneg\varphi_2
\label{eq:att.def}
\end{gather}
Here we identify the acceptability of $\varphi_1$ with having a consistent proof of it, or in other words, as having a proof of the truth of $\varphi_1$ and not having a proof of its falsity.
Then, \eqref{eq:att.def} states that the acceptability of $\varphi_1$ allows to construct a proof of the falsity of $\varphi_2$.
In this sense, we identify a proof of the falsity of~$\varphi_2$
with $\varphi_2$ being defeated.
\begin{proposition}\label{prop:att.mponens}
Given any \Ninterpretation~$\sI$ and any pair of formulas $\varphi_1,\varphi_2$, the following conditions hold:
\begin{enumerate}[ label=\roman*)]
\item $\sI \models \varphi_1$ and $\sI \modelsp \varphi_1 \att \varphi_2$
imply $\sI \modelsn \varphi_2$
\label{item:2:prop:sup+att.models}
\end{enumerate}
\end{proposition}
Using the language $\LanAF = \set{\att}$, we can translate any AF as follows:
\begin{definition}\label{def:af.translation}
Given some framework $\AFF$, we define the theory:
\begin{IEEEeqnarray}{ l ,C, l}
\LAF{\AF} &\eqdef& \A \cup \setm{ a \att b }{ (a,b) \in \R }
\label{eq:def.gamma.af}
\end{IEEEeqnarray}
In addition, we assign a corresponding
set of arguments \mbox{$\SI \eqdef \setm{\! a \in \A \!}{\! \sI \models a \!}$}
to every interpretation~$\sI$.
\end{definition}
Translation~$\LAF{\cdot}$ applies the notion of attack introduced in~\ref{p:attack} to translate an AF into a logical theory.
The strengthened closed world assumption~\ref{p:cwa} is used to retrieve the arguments~$\SI$ corresponding to each stable model~$\sI$ of the logical theory obtained from this translation.
\begin{example}\label{ex:af.line}
To illustrate this translation, let $\newaf\label{af:line}$ be the framework corresponding to the following graph:
\begin{center}
\begin{tikzpicture}[tikzpict]
\matrix[row sep=0.5cm,column sep=1.5cm,ampersand replacement=\&] {
\node (a) [arg] {$\mathbf{a}$};\&
\node (b)[arg] {$\mathbf{b}$};\&
\node (c)[arg] {$\mathbf{c}$};\\
};
\draw [->,-triangle 45] (a) to (b);
\draw [->,-triangle 45] (b) to (c);
\end{tikzpicture}
\end{center}
Then, we have that $\LAF{\af\ref{af:line}}$ is the theory containing the following two attacks:
\begin{gather*}
a \att b \hspace{2cm} b \att c
\end{gather*}
plus the facts $\set{a,b,c}$.
\end{example}
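As a sanity check independent of the translation, the stable extensions of $\af\ref{af:line}$ can also be computed directly from Dung's definition. The following brute-force sketch is ours: $E$ is stable iff it is conflict-free and attacks every argument outside~$E$.

```python
from itertools import combinations

args = {'a', 'b', 'c'}
attacks = {('a', 'b'), ('b', 'c')}   # the framework af:line

def is_stable(e):
    # Conflict-free: no attack between members of e.
    conflict_free = not any((x, y) in attacks for x in e for y in e)
    # Every outside argument is attacked by some member of e.
    defeats_rest = all(any((x, y) in attacks for x in e)
                       for y in args - e)
    return conflict_free and defeats_rest

stable = [set(c) for r in range(len(args) + 1)
          for c in combinations(sorted(args), r) if is_stable(set(c))]
```

This yields the single stable extension $\set{a, c}$, in agreement with the discussion in the continuation of this example.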
\begin{proposition}{\label{prop:af.model}}
Let \mbox{$\AF$} be some framework and
\mbox{$\sI$} be some \HTmodel\ of $\LAF{\AF}$.
Then, the following hold:
\begin{enumerate}[ label=\roman*), leftmargin=20pt]
\item if $a$ is defeated \wrt~$\SI$, then $\sI \modelsp \sneg a$
\label{item:1:prop:af.model}
\item $\SI$ is \cfree.
\end{enumerate}
If, in addition, $\sI$ is a $\leq$-minimal model, then
\begin{enumerate}[ label=\roman*), start=3, leftmargin=20pt]
\item $a$ is defeated \wrt~$\SI$ iff $\sI \modelsp \sneg a$.
\label{item:3:prop:af.model}
\end{enumerate}
\end{proposition}
\begin{examplecont}{ex:af.line}
Continuing with our running example,
let
\mbox{$\sI_{\ref{af:line}} = \tuple{\bT_{\ref{af:line}},\bT_{\ref{af:line}}}$}
and
\mbox{$\sJ_{\ref{af:line}} = \tuple{\bT_{\ref{af:line}}',\bT_{\ref{af:line}}'}$}
be two total models of $\Gamma_{\af\ref{af:line}}$
with
\mbox{$\bT_{\ref{af:line}} = \set{a,b,c,\sneg b}$}
and
\mbox{$\bT_{\ref{af:line}}' = \set{a,b,c,\sneg a,\sneg c}$}.
Then, we have that both
\mbox{$S_{\sI_{\ref{af:line}}} = \set{a,c}$}
and
\mbox{$S_{\sJ_{\ref{af:line}}} = \set{b}$} are \mbox{conflict-free}
(though only $S_{\sI_{\ref{af:line}}}$ is stable).
Furthermore, we also can see that argument~$b$ is the unique defeated argument \wrt~$S_{\sI_{\ref{af:line}}}$ and the unique atom for which $\sI_{\ref{af:line}} \modelsp \sneg b$ holds.
On the other hand, we get that argument~$c$ is the unique defeated argument \wrt~$S_{\sJ_{\ref{af:line}}}$ and also
both
\mbox{$\sJ_{\ref{af:line}} \modelsp \sneg a$}
and
\mbox{$\sJ_{\ref{af:line}} \modelsp \sneg c$}
hold.
Note that, as stated by~\ref{item:3:prop:af.model} in Proposition~\ref{prop:af.model},
this implies that only $\sI_{\ref{af:line}}$ can be an equilibrium model.
Let us show that it is indeed the case that $\sJ_{\ref{af:line}}$ is not an equilibrium model
and let us define, for that purpose, an interpretation~$\sJ_{\ref{af:line}}' = \tuple{\bH_{\ref{af:line}}',\bT_{\ref{af:line}}'}$
with \mbox{$\bH_{\ref{af:line}}' = \bT_{\ref{af:line}}' \setminus \set{ \sneg a} = \set{a,b,c,\sneg c}$}.
In other words, interpretation~$\sJ_{\ref{af:line}}'$ coincides with $\sJ_{\ref{af:line}}$ except that it removes the negated conclusion~$\sneg a$ corresponding to the non-defeated argument~$a$.
It is easy to check that
\mbox{$\sJ_{\ref{af:line}}' \models b \att c$} because \mbox{$\sneg c \in \bH_{\ref{af:line}}'$} holds.
Besides, since \mbox{$\sneg a \in \bT_{\ref{af:line}}'$}, we have that $\sJ_{\ref{af:line}}' \not\models a$ and, therefore, that $\sJ_{\ref{af:line}}' \models a \att b$.
This implies that $\sJ_{\ref{af:line}}'$ is a model of~$\Gamma_{\af\ref{af:line}}$.
Since $\sJ_{\ref{af:line}}' < \sJ_{\ref{af:line}}$, we get that $\sJ_{\ref{af:line}}$ is not an equilibrium model.
\end{examplecont}
In fact, we can generalize this correspondence between the stable extensions and the equilibrium models to any argumentation framework as stated by the following theorem:
\begin{theorem}\label{thm:af.stable<->emodel}
Given some $\AFF$, there is a one-to-one correspondence between its stable extensions and the equilibrium models of $\LAF{\AF}$ such that
\begin{enumerate}[ label=\roman*)]
\item if $\sI$ is an equilibrium model of $\LAF{\AF}$, then $\SI$ is a stable extension of~$\AF$,\label{item:1:thm:stable<->emodel}
\item if $\aA$ is a stable extension of $\AF$ and $\sI$ is a total interpretation such that $T_\sI^+ \!=\! \A$ and $T_\sI^- \!=\! \Defeated{\aA}$, then $\sI$ is an equilibrium model of $\LAF{\AF}$.
\label{item:2:thm:stable<->emodel}
\end{enumerate}
\end{theorem}
\begin{proofs}\footnote{This theorem is a particular case of Theorem~\ref{thm:sf.stable<->emodel} below. Recall that full proofs are provided in the appendix. }
First, note that condition~\ref{item:1:thm:stable<->emodel}
follows directly from~\ref{item:3:prop:af.model} in Proposition~\ref{prop:af.model}
and the facts that $(a)$ equilibrium models are $\leq$-minimal models and $(b)$
$\SI$ is a stable extension iff
$\SI$ is exactly the set of non-defeated arguments \wrt~$\SI$.
To show~\ref{item:2:thm:stable<->emodel},
it is easy to see that~$\SI$ being a stable extension implies that $\sI$ is a
model of~$\LAF{\AF}$.
Hence, to show that $\sI$ is an equilibrium model what remains is to prove that any $\sJ < \sI$ is not a model of $\LAF{\AF}$.
Any such $\sJ$ must satisfy
\mbox{$H_\sJ^+ = H_\sI^+ = \A$}
and
\mbox{$H_\sJ^- \subset H_\sI^- = T_\sI^- = \Defeated{\aA}$}.
Therefore, there is some defeated argument such that \mbox{$a \notin H_\sJ^-$}
and some defeating attack \mbox{$(b,a) \in \R_a$} such that
\mbox{$b \in \aA = H_\sI^+ \setminus T_\sI^- = H_\sJ^+ \setminus T_\sJ^-$}.
This implies that \mbox{$b \att a \in \LAF{\AF}$}
and $\sJ \models b$
which, in its turn, implies that
$a \in H_\sJ^-$.
This is a contradiction
and, consequently, $\sI$ is an equilibrium model.
\end{proofs}
Theorem~\ref{thm:af.stable<->emodel} captures the relation between the stable extensions of an~AF and its translation into a logical theory.
As mentioned above, this relation relies on the reasoning principles~\ref{p:attack} and~\ref{p:cwa}:
An~$\AFF$ is translated into a logical theory~$\LAF{\AF}$ using the notion of attack introduced in~\ref{p:attack}.
The stable extension~$\SI$ of this~AF is then retrieved from the equilibrium model~$\sI$ of $\LAF{\AF}$ using the~\ref{p:cwa} principle.
\subsection{Set Attack Argumentation Frameworks}
We may also extend the results of the previous section to SETAFs using the language $\LanSF = \set{\att, \wedge}$
and a similar translation.
\begin{definition}
Given some finitary set attack framework \mbox{$\SFF$}, we define
\begin{IEEEeqnarray}{ l ,C, l}
\Gamma_{\R_a} &\eqdef& \setBm{ \bigwedge A \att b }{ (A,b) \in \R_a}
\label{eq:def.gamma.sf}
\end{IEEEeqnarray}
and \mbox{$\LSF{\SF} \eqdef \A \cup \Gamma_{\R_a}$}.
\end{definition}
Similar to Definition~\ref{def:af.translation},
translation~$\LSF{\cdot}$ applies the notion of attack introduced in~\ref{p:attack} to translate a SETAF into a logical theory.
In this case, the set of attacking arguments becomes a conjunction in the antecedent of the attack connective.
\begin{theorem}{\label{thm:sf.stable<->emodel}}
Given some finitary $\SF$, there is a one-to-one correspondence between its stable extensions and the equilibrium models of $\LSF{\SF}$ such that
\begin{enumerate}[ label=\roman*)]
\item if $\sI$ is an equilibrium model of $\LSF{\SF}$, then $\SI$ is a stable extension of $\SF$,\label{item:1:thm:sf.stable<->emodel}
\item if $\aA$ is a stable extension of $\SF$ and $\sI$ is a total interpretation such that $T_\sI^+ \!=\! \A$ and $T_\sI^- \!=\! \Defeated{\aA}$, then $\sI$ is an equilibrium model of $\LSF{\SF}$.
\label{item:2:thm:sf.stable<->emodel}
\end{enumerate}
\end{theorem}
\begin{proofs}
The proof follows as in Theorem~\ref{thm:af.stable<->emodel} by noting that any interpretation~$\sI$ and set of arguments $B$ satisfy: $B \subseteq \SI$ iff
$\sI \models b$ for all $b \in B$ iff $\sI \models \bigwedge B$.
\end{proofs}
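The brute-force sanity check used for AFs extends naturally to SETAFs. The framework below is a hypothetical example of ours (it does not appear in the text): arguments $\set{a,b,c}$ with a single collective attack $(\set{a,b},c)$. Following the definition of stability for set attacks, $E$ is stable iff it is conflict-free and, for every argument outside~$E$, some attacking set contained in~$E$ targets it.

```python
from itertools import combinations

args = {'a', 'b', 'c'}
attacks = [({'a', 'b'}, 'c')]   # {a,b} jointly attacks c (hypothetical)

def is_stable(e):
    # Conflict-free: no attacking set inside e targets a member of e.
    conflict_free = not any(src <= e and tgt in e for src, tgt in attacks)
    # Every outside argument is defeated by some attacking set within e.
    defeats_rest = all(any(src <= e and tgt == y for src, tgt in attacks)
                       for y in args - e)
    return conflict_free and defeats_rest

stable = [set(c) for r in range(len(args) + 1)
          for c in combinations(sorted(args), r) if is_stable(set(c))]
```

Here the unique stable extension is $\set{a,b}$: neither $a$ nor $b$ alone defeats~$c$, but together they do, which is precisely what the conjunction in the antecedent of the translated attack captures.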
\subsection{Argumentation Frameworks with Evidence-Based Support}
Let us now extend the language of SETAFs with the LP~implication~\eqref{eq:sup.def}, in other words,
we consider the language possessing the following set of connectives
\mbox{$\LanEF = \set{\att,\wedge,\sup}$}, so that we can translate any EBAF
as follows:
\begin{definition}
Given any finitary evidence-based framework~\mbox{$\EBAFF$},
we define its corresponding theory as: \mbox{$\LEF{\EBAF} \,\eqdef\, \PF \cup \Gamma_{\R_a} \cup \Gamma_{\R_s}$} with
\begin{IEEEeqnarray}{ l ,C, l}
\Gamma_{\R_s} &\eqdef& \setBm{ \bigwedge A \sup b }{ (A,b) \in \R_s}
\label{eq:def.gamma.sup}
\end{IEEEeqnarray}
and $\Gamma_{\R_a}$ as stated in~\eqref{eq:def.gamma.sf}.
\end{definition}
Note that, in contrast with AFs and SETAFs, the theory corresponding to an EBAF does not contain all arguments as atoms, but only those that are \mbox{prima-facie}~$\PF$.
This reflects the fact that in EBAFs not all arguments can be accepted, but only those that are prima-facie or are supported by those prima-facie.
Supports are represented using the LP implication $\sup$ and supported arguments are captured by the positive evaluation of each interpretation~$H_\sI^+$.
The following result extends Proposition~\ref{prop:af.model} to EBAFs including the relation between supported arguments and models.
\begin{proposition}{\label{prop:ef.model}}
Let \mbox{$\EBAF$} be some framework and
\mbox{$\sI$} be some \HTmodel\ of $\LEF{\EBAF}$.
Then, the following hold:
\begin{enumerate}[ label=\roman*), leftmargin=20pt]
\item if $a$ is supported \wrt~$\SI$, then $\sI \modelsp a$,
\label{item:1:prop:ef.model}
\item if $a$ is defeated \wrt~$\SI$, then $\sI \modelsp \sneg a$,
\label{item:2:prop:ef.model}
\item $\SI$ is \cfree.
\label{item:3:prop:ef.model}
\end{enumerate}
If, in addition, $\sI$ is a $\leq$-minimal \HTmodel, then
\begin{enumerate}[ label=\roman*), start=3, leftmargin=20pt]
\item $a$ is supported \wrt~$\SI$ iff $\sI \modelsp a$,
\label{item:4:prop:ef.model}
\item $a$ is defeated \wrt~$\SI$ iff $\sI \modelsp \sneg a$,
\label{item:5:prop:ef.model}
\item $\SI$ is \ssupporting.
\label{item:6:prop:ef.model}
\end{enumerate}
\end{proposition}
\begin{examplecont}{ex:tweety}\label{ex:tweety3}
Consider now framework~$\EBAF$ representing the Tweety scenario.
\refstepcounter{programcount}\label{th:tweety}
\begin{IEEEeqnarray}{lCl+x*}
\mathit{birdTweety} &\sup& \mathit{flyTweety}
\label{eq:flyTweety}
\\
\mathit{penguinTweety} &\sup& \mathit{birdTweety}
\\
\mathit{penguinTweety} &\att& \mathit{flyTweety}
\label{eq:flyTweety.att}
\\
&&\mathit{penguinTweety} \notag&
\end{IEEEeqnarray}
\end{examplecont}
As mentioned in Example~\ref{ex:tweety},
framework~\ef\ref{ef:tweety} has a unique stable extension
$$\set{\mathit{penguinTweety},\, \mathit{birdTweety}}$$
which does not include the argument $\mathit{flyTweety}$.
In other words, Tweety cannot fly.
Interestingly, $\LEF{\ef\ref{ef:tweety}}$ also has a unique equilibrium model
\mbox{$\sI_{\ref{th:tweety}} = \tuple{\bT_{\ref{th:tweety}},\bT_{\ref{th:tweety}}}$}
where $\bT_{\ref{th:tweety}}$ stands for the set:
\begin{gather*}
\set{ \mathit{penguinTweety},\, \mathit{birdTweety},\, \mathit{flyTweety},\, \sneg\mathit{flyTweety}}
\end{gather*}
This equilibrium model
precisely satisfies the two arguments in that stable extension:
\mbox{$\sI_{\ref{th:tweety}} \models \mathit{penguinTweety}$}
and
\mbox{$\sI_{\ref{th:tweety}} \models \mathit{birdTweety}$}.
Note that we get \mbox{$\sI_{\ref{th:tweety}} \not\models \mathit{flyTweety}$}
from the fact that
\mbox{$\sI_{\ref{th:tweety}} \modelsp \sneg\mathit{flyTweety}$}.
In fact,
this correspondence holds for any EBAF
as shown by Theorem~\ref{thm:ef.stable<->emodel} below.
Though more technically complex, the proof of Theorem~\ref{thm:ef.stable<->emodel} is similar to those of Theorems~\ref{thm:af.stable<->emodel} and~\ref{thm:sf.stable<->emodel}.
In particular, it is necessary to prove the following relation between equilibrium models and supportable arguments:
\begin{proposition}{\label{prop:ef.model.supportable}}
Let \mbox{$\EBAF$} be some framework and
\mbox{$\sI$} be some equilibrium model of $\LEF{\EBAF}$.
Then, the following statement holds:
\begin{enumerate}[ label=\roman*), leftmargin=20pt]
\item $a$ is supportable \wrt~$\SI$ iff $\sI \modelsp a$.
\label{item:6:prop:ef.model.supportable}
\end{enumerate}
\end{proposition}
In contrast with the results for supported arguments stated in Proposition~\ref{prop:ef.model},
this property does not hold for arbitrary $\leq$-minimal models.
This fact can be illustrated by considering a simple $\newef\label{ef:equilibrium}$ such that
\mbox{$\LEF{\ef\ref{ef:equilibrium}} = \set{ a,\, a \sup b }$}.
Let
$\sI_{\ref{ef:equilibrium}} = \tuple{\bH_{\ref{ef:equilibrium}},\bT_{\ref{ef:equilibrium}}}$
be some interpretation
with
\mbox{$\bH_{\ref{ef:equilibrium}} = \set{ a }$}
and
\mbox{$\bT_{\ref{ef:equilibrium}} = \set{ a, \sneg a }$}.
It is easy to see that $\sI_{\ref{ef:equilibrium}}$ is a $\leq$-minimal model of
$\LEF{\ef\ref{ef:equilibrium}}$, though it is not an equilibrium model (because it is not a total interpretation).
It can also be checked that
$a$ is not defeated and, consequently, that $b$ is supportable \wrt~$\aA_{\sI_{\ref{ef:equilibrium}}} = \varnothing$.
On the other hand, the unique equilibrium model of
$\LEF{\ef\ref{ef:equilibrium}}$
is
$\sJ_{\ref{ef:equilibrium}} = \tuple{\bH'_{\ref{ef:equilibrium}},\bT'_{\ref{ef:equilibrium}}}$
with
\mbox{$\bH'_{\ref{ef:equilibrium}} = \set{ a, b }$}
and
\mbox{$\bT'_{\ref{ef:equilibrium}} = \set{ a, b }$}.
Here, both $a$ and $b$ are supportable (and supported) \wrt~$\aA_{\sJ_{\ref{ef:equilibrium}}} = \set{a,b}$.
The following result shows that, indeed, this correspondence holds for any EBAF:
\begin{theorem}{\label{thm:ef.stable<->emodel}}
Given some finitary $\EBAF$, there is a one-to-one correspondence between its stable extensions and the equilibrium models of $\LEF{\EBAF}$ such that
\begin{enumerate}[ label=\roman*)]
\item if $\sI$ is an equilibrium model of $\LEF{\EBAF}$, then $\SI$ is a stable extension of~$\EBAF$,\label{item:1:thm:ef.stable<->emodel}
\item if $\aA$ is a stable extension of $\EBAF$ and $\sI$ is a total interpretation such that $T_\sI^+ \!=\! \Supported{\aA}$ and $T_\sI^- \!=\! \Defeated{\aA}$, then $\sI$ is an equilibrium model of~$\LEF{\EBAF}$.
\label{item:2:thm:ef.stable<->emodel}
\end{enumerate}
\end{theorem}
\section{Translation of $\LanLP$-program to regular programs}
In this section, we show how \mbox{$\LanLP$-programs} can be translated into regular ASP programs.
An important practical consequence of this fact is that current state-of-the-art ASP solvers~\cite{fapfledeie08a,gekasc09c}
can be applied to \mbox{$\LanLP$-programs}.
Let us introduce such a translation as follows:
\begin{definition}\label{def:tr1}
Given a $\LanLP$-program $P$, by $\LNp{P}$ we denote the result of
\begin{enumerate}[ leftmargin=20pt ]
\item replacing every positive literal $a$ in the body of a rule by \mbox{$a \wedge \neg\!\sneg a$},
\item replacing every negative literal $\Not a$ in the body of a rule by \mbox{$\neg a \vee (a \!\wedge\! \sneg a)$},
\item replacing all occurrences of $\sup$ by $\to$.
\end{enumerate}
\end{definition}
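To make the three rewriting steps concrete, the following Python sketch applies them to a rule in an ad-hoc term representation (the representation and names are ours, not part of the paper): atoms are strings, body literals are tagged \texttt{'pos'} or \texttt{'not'}, and \texttt{'snot'} stands for strong negation~$\sneg$.

```python
# Sketch of the rewriting in Definition "tr1" (our own encoding, hypothetical).

def translate_literal(lit):
    """Rewrite one body literal as in steps 1-2 of the definition."""
    kind, a = lit
    if kind == 'pos':                       # a  ==>  a & -(~a)
        return ('and', ('atom', a), ('neg', ('snot', a)))
    if kind == 'not':                       # not a  ==>  -a | (a & ~a)
        return ('or', ('neg', ('atom', a)),
                      ('and', ('atom', a), ('snot', a)))
    raise ValueError(kind)

def translate_rule(body, head):
    """Step 3: the connective 'sup' of the original rule becomes plain '->'."""
    return ('->', [translate_literal(l) for l in body], head)

# Example: the rule  b, not c  sup  a
rule = translate_rule([('pos', 'b'), ('not', 'c')], 'a')
```

The transformation is purely syntactic and linear in the size of the program, which is why it preserves the models pointwise (Proposition~\ref{prop:to.N4}).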
\begin{proposition}\label{prop:to.N4}
Any $\LanLP$-program~$P$ and interpretation~$\sI$ satisfy:
$\sI \modelsp P$ iff $\sI \modelsp \LNp{P}$.
\end{proposition}
Proposition~\ref{prop:to.N4} shows how we can translate any $\LanLP$-program into an equivalent theory that does not use the new connectives~$\Not$ and~$\sup$.
The result of the translation in Definition~\ref{def:tr1} is almost a standard logic program, except for two points.
First, strong negation has to be understood in a paraconsistent way, so an atom can be true and false at the same time.
This can be addressed by using new auxiliary atoms to represent strongly negated atoms.\footnote{In fact, modern solvers already allow the use of explicit negation, implementing it by means of new auxiliary atoms that represent strongly negated atoms.
However, solvers also include a constraint of the form $a \wedge \sneg a \to \bot$ for every atom~$a$.
This would remove the non-consistent answer sets, something we have to avoid to obtain paraconsistent answer sets.
}
Second, step~2 introduces a disjunction in the body, which is not allowed in the standard syntax of logic programs.
This can also be addressed in polynomial time by using auxiliary atoms (similarly to~\citeNP{tseitin68a}).
The following definition addresses these two issues.
\begin{definition}\label{def:tr2}
Given a $\LanLP$-program $P$, by $\LNpp{P}$ we denote the result of applying the following transformations to $\LNp{P}$:
\begin{enumerate}[ leftmargin=20pt ]
\item replacing every explicit literal of the form $\sneg a$ by a fresh atom~$\tilde{a}$,
\item adding rules $a' \leftarrow \neg a$ and $a' \leftarrow a \wedge \tilde{a}$ for each atom $a \in \at$, with $a'$ a fresh atom, and
\item replacing each occurrence of~$\neg a \vee (a \wedge \tilde{a})$ in the body of any rule by~$a'$.
\end{enumerate}
Given a total interpretation $\sI$, we also denote by $\LNpp{\sI}$ an interpretation that, for every atom~$a \in \at$, satisfies:
\begin{enumerate}[ leftmargin=20pt ]
\item $\LNpp{\sI} \not\modelsn a$
\item $\LNpp{\sI} \modelsp a$ iff $\sI \modelsp a$
\item $\LNpp{\sI} \modelsp \tilde{a}$ iff $\sI \modelsn a$
\item $\LNpp{\sI} \modelsp a'$ iff either $\sI \not\modelsp a$ or both $\sI \modelsp a$ and $\sI \modelsn a$.
\end{enumerate}
\end{definition}
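The auxiliary rules introduced in step~2 can be generated mechanically. The following Python sketch (again using our own ad-hoc encoding, with $\tilde{a}$ written as \texttt{a\textasciitilde} and $a'$ as \texttt{a'}; neither is part of the paper) produces them for a given set of atoms:

```python
# Sketch of step 2 of Definition "tr2": for each atom a, two rules defining
# the fresh atom a', which replaces the body disjunction -a | (a & a~).
# Rules are encoded as (head, body) pairs; this encoding is hypothetical.

def auxiliary_rules(atoms):
    rules = []
    for a in atoms:
        rules.append((f"{a}'", [('not', a)]))                    # a' <- not a
        rules.append((f"{a}'", [('pos', a), ('pos', f"{a}~")]))  # a' <- a, a~
    return rules

rules = auxiliary_rules(['a', 'b'])
```

Since only two short rules and two fresh atoms are added per original atom, the whole translation $\LNpp{\cdot}$ remains polynomial in the size of the input program.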
\begin{proposition}\label{prop:tr.lp}
Any $\LanLP$-program~$P$ and total interpretation~$\sI$ satisfy that
$\sI$ is an equilibrium model of $P$ iff $\LNpp{\sI}$ is an equilibrium model of $\LNpp{P}$.
\end{proposition}
The result of Definition~\ref{def:tr2} is a standard logic program.
Proposition~\ref{prop:tr.lp} shows that we can use this translation in combination with standard ASP solvers to obtain equilibrium models of \mbox{$\LanLP$-programs} and stable extensions of all the AFs considered in this paper.
The second consequence of this translation is that
deciding whether there exists any stable extension of some \mbox{$\LanLP$-program} is in~\mbox{$\SigmaP{2}$} in general and in~\mbox{NP} if the program is normal~\cite{daeigovo01a}.
These complexity results are tight because hardness follows from Corollary~\ref{cor:conservative.asp} and the hardness results for finding answer sets for these classes of programs~\cite{daeigovo01a}.
Therefore, deciding whether there exists any stable extension of some \mbox{$\LanLP$-program} is \mbox{$\SigmaP{2}$-complete} in general and \mbox{NP-complete} for normal \mbox{$\LanLP$-programs}.
Furthermore, this result directly applies to EBAFs so that deciding whether there exists any stable extension is \mbox{NP-complete}.
\section{Discussion}
\label{sec:dis}
LP and AFs are two \mbox{well-established} KRR formalisms for dealing with nonmonotonic reasoning (NMR).
In particular, Answer Set Programming (ASP)
is an LP paradigm, based on the stable model semantics,
which has emerged as a preeminent tool for practical NMR with applications in diverse areas of AI including planning, reasoning about actions, diagnosis, abduction and beyond~\cite{baral2003knowledge,BrewkaET11}.
On the other hand, one of the major reasons for the success of AFs is their ability to handle conflicts due to inconsistent information.
Here, we have shown that both formalisms can be successfully accommodated in Nelson's constructive logic.
In fact, it is easy to see that by rewriting attacks using definition~\eqref{eq:att.def}, the translation of any AF becomes a normal $\LanLP$-program.
For instance, by rewriting the attack~\eqref{eq:flyTweety.att}, we obtain the equivalent formula:
\begin{IEEEeqnarray}{lCl+x*}
\mathit{penguinTweety} &\sup& \sneg\mathit{flyTweety}
\label{eq:flyTweety.att.rule}
\end{IEEEeqnarray}
which is a \mbox{$\LanLP$-rule}.
In this sense, we can consider
$\LSF{\ef\ref{ef:tweety}}$ in Example~\ref{ex:tweety3} as an alternative representation of the Tweety scenario in LP.
Note that both
the unique equilibrium model $\sI_{\ref{prg:tweety}}$ of program~$\program\ref{prg:tweety}$ (Example~\ref{ex:tweety2})
and the unique equilibrium model $\sI_{\ref{th:tweety}}$ of $\LSF{\ef\ref{ef:tweety}}$ satisfy:
\begin{IEEEeqnarray*}{l C l}
\begin{IEEEeqnarraybox}{l C l}
\sI_{\ref{prg:tweety}} \not\models \mathit{flyTweety}
\\
\sI_{\ref{prg:tweety}} \models \Not\mathit{flyTweety}
\end{IEEEeqnarraybox}
\hspace{1.25cm}
\begin{IEEEeqnarraybox}{l C l}
\sI_{\ref{th:tweety}} \not\models \mathit{flyTweety}
\\
\sI_{\ref{th:tweety}} \models \Not\mathit{flyTweety}
\end{IEEEeqnarraybox}
\end{IEEEeqnarray*}
In other words, in both programs we conclude that Tweety cannot fly.
However, there are a couple of differences between these two representations.
First, in contrast with $\sI_{\ref{prg:tweety}}$,
we have that $\sI_{\ref{th:tweety}}$ is not consistent:
\mbox{$\sI_{\ref{th:tweety}} \modelsp \mathit{flyTweety}$}
and
\mbox{$\sI_{\ref{th:tweety}} \modelsp \sneg\mathit{flyTweety}$}.
Second and perhaps more interestingly,
in $\LSF{\ef\ref{ef:tweety}}$, the ``normality'' of the statement ``birds can fly'' does not need to be explicitly represented.
Instead, this normality is implicitly handled by the strong closed world assumption~\ref{p:cwa}, which resolves the contradictory evidence for $\mathit{flyTweety}$ by regarding it as false.
In this sense, \mbox{$\LanLP$-programs} and AFs can be seen as two different syntaxes of the same formalism based
on the principles~\ref{p:contradictory} and~\mbox{\ref{p:cwa}} highlighted in the introduction.
In addition, another principle of this formalism is the fact that evidence must be founded or justified: this clearly shows up in normal LP and EBAFs where true literals can be computed by some recursive procedure, but also in Dung's AFs where, as we have seen, defeat can be understood as a proof of falsity.
Regarding practical aspects,
we can use \mbox{$\LanLP$-programs} as a unifying formalism to deal with both logic programs and AFs.
This directly allows us to introduce variables in AFs through the use of grounding.
Going further, full first\nobreakdash-order characterizations of AFs can be provided by applying the same principles to first\nobreakdash-order constructive logic (full first\nobreakdash-order characterization of consistent logic programs has been already provided by~\citeNP{PV04}).
Besides, constructive logic immediately provides an interpretation for other richer syntaxes like the use of disjunctive targets in Collective Argumentation~\cite{bochman2003collective} or the use of arbitrary propositional formulas to represent attacks in
Abstract Dialectical Frameworks~\cite{BrewkaW10,BrewkaSEWW13}.
\section{Conclusion and future work}
We have formalized the principles~\ref{p:contradictory} and~\ref{p:cwa} in Nelson's constructive logic and shown that this is a conservative extension of logic programs which allows us to reason with contradictory evidence.
Furthermore, this allows us to translate argumentation frameworks in a modular way, using the object language, so that attacks and supports become connectives of the logic.
As a consequence, we can combine both formalisms into a unifying one and use proof methods from the logic or answer set solvers to reason about it.
Regarding future work, an obvious open topic is to explore how other argumentation semantics can be translated
into the logic.
For instance, the relation between the complete semantics for AFs, the three\nobreakdash-valued stable model semantics for LP~\cite{przymusinski91a,wucaga09a} and partial equilibrium logic~\cite{caodpeva07a} suggests that our framework can be extended to cover other semantics such as the complete and preferred ones.
Similarly, the relation between the paracoherent semantics for AFs~\cite{americ19a} and semi\nobreakdash-equilibrium models \cite{ameifilemo16a} suggests a possible direction to capture this semantics at the object level.
It will also be interesting to explore the relation with the semi\nobreakdash-stable semantics for AFs~\cite{cacadu21a}.
The relation with other AFs extensions such as Collective Argumentation~\cite{bochman2003collective}, Abstract Dialectical Frameworks~\cite{BrewkaW10,BrewkaSEWW13} or Recursive Argumentation Frameworks~\cite{BGW05-sh,Modgil09,Gabbay2009,BaroniCGG11,CCLS16b-sh,cafafala21a} is also a direction worth exploring.
Other important open questions are studying how the principles~\ref{p:contradictory} and~\ref{p:cwa} stand in the context of paraconsistent logics~\cite{dacosta74} and paraconsistent logic programming~\cite{blasub89a},
and studying the notion of strong equivalence~\cite{LifschitzPV01,oikarinen2011characterizing} in this logic and in evidence-based frameworks.
\paragraph{Acknowledgements.}
We are thankful to Seiki Akama, Pedro Cabalar, Marcelo Coniglio, David Pearce, Newton Peron and Agust\'{i}n Valverde for their suggestions and comments on earlier versions of this work.
We also thank the anonymous reviewers of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning for their comments on a preliminary version of this work.
\paragraph{Competing interests:}
The authors declare none.
\bibliographystyle{acmtrans}
\section{Introduction}
\label{introduction}
In the ``standard'' model of long-duration, soft-spectrum gamma-ray
bursts (GRBs; e.g. \citealt{Pir05}), the prompt high-energy emission
arises in ultra-relativistic (bulk Lorentz factor $\Gamma \ga 10^2$),
highly collimated (opening half-angle of a few degrees) jets. The high
Lorentz factors are inferred from the requirement of a sufficiently low
opacity to photon-photon annihilation or to scattering by photon
annihilation-produced electron-positron pairs \citep[e.g.][]{LS01},
whereas the jet opening angle is deduced from the detection of a
panchromatic break in the light curve of the lower-energy afterglow
emission \citep[e.g.][]{Rho99,Sari99}. Recent observations by the
{\it Swift}\/ satellite have indicated that various aspects of this
model may need to be modified \citep[e.g.][]{Mes06,Pan07,Lia08}, but the
basic picture of a collimated $\Gamma \ga 10^2$ outflow is still the
accepted paradigm.
Observations of long/soft GRBs and their afterglows have revealed that
these events typically involve the release of a few times $10^{51}\;
{\rm erg}$, although the fraction of this energy that corresponds to the
$\gamma$-ray emitting outflow component may vary from source to source
\citep[e.g.][]{BKF03,Fra05}. The outflows in these GRBs have been argued
to originate either in a magnetar or in a rapidly accreting stellar-mass
black hole, formed in the collapse of a massive star. The jets could tap
into the rotational energy of the neutron star, black hole or accretion
disc through the agency of an ordered magnetic field that threads the
source
\citep[e.g.][]{U92,T94,MR97,K97,KR98,VK01,VK03a,VK03b,Bla02,DS02,VPK03,P03,M06,Lyu06b,Lev06,KB07,B08,BK08}. For
typical burst energies and durations the field amplitudes should be
$\sim 10^{14}-10^{15}\;$G. Early models have postulated that GRB
outflows are driven purely thermally via annihilation of neutrinos
emitted by the accretion disc. Although this model remains very popular,
some recent studies have indicated that the neutrino heating may not be
as efficient as previously thought \citep[e.g.][]{DPN02}. At present,
both the magnetic and the thermal mechanisms seem equally possible and
it may well be that in many cases they operate simultaneously. In
particular, neutrino heating may play an important role in the initial
acceleration of magnetized outflows \citep[e.g.][]{VK03a} and in
determining their mass load \citep[e.g.][]{Lev06, BL08}.
While short/hard GRBs
evidently have different progenitors (quite possibly merging neutron
stars or neutron star/black hole pairs) and on average involve a smaller
energy release, a lower Lorentz factor, and weaker collimation than
long/soft GRBs, they may well represent the same basic phenomenon and
arise in relativistic outflows that are driven in a similar way
\citep[e.g.][]{Nak07}.
The magnetic acceleration and collimation of GRB outflows needs to be
studied within the framework of relativistic magnetohydrodynamics
(MHD). Although general-relativistic effects may influence the
conditions near the base of the flow, most of the action takes place
sufficiently far away from the central mass that the simpler equations
of special-relativistic MHD can be employed. Since our focus in this
paper is on the global structure of GRB jets, we henceforth consider
only the special-relativistic theory. However, even in this case there
are qualitatively new effects in comparison with Newtonian MHD. These
include the fact that, when the bulk Lorentz factor becomes large, the
electric force can no longer be neglected relative to the magnetic force
and, in fact, becomes comparable to it in magnitude. Correspondingly,
one needs to retain the displacement current and the electric charge
density in Maxwell's equations. Another consequence of relativistic
motion (which also affects unmagnetized flows) is the coupling between
different spatial components of the momentum conservation equation
brought about by the appearance of the Lorentz factor (which is
calculated from the total velocity) in each of the component equations.
Furthermore, in cases where the temperature (i.e. the characteristic
velocity of internal motions) is relativistic, one needs to take into
account the enthalpy contribution to the inertia of the flow. On account
of these various factors, relativistic MHD does not naturally yield
simple generalizations of results obtained in Newtonian MHD. To simplify
the treatment, various authors have adopted the force-free
electrodynamics (also termed ``magnetodynamics'') approximation, in
which the matter inertia is neglected altogether. While this approach
has led to useful insights and interesting exact solutions, it is
inherently limited in that one cannot explicitly calculate the fluid
velocity and hence the efficiency of transforming electromagnetic energy
into kinetic form, which are key parameters of interest for
astrophysical applications.
In a pioneering work, \citeauthor{LCB92} (\citeyear{LCB92}; see also
\citealt{Con94}) derived exact semi-analytic MHD solutions of steady,
axisymmetric, ``cold'' relativistic flows patterned after the Newtonian
radially self-similar outflow solutions of \citet{BP82}. In contrast
with the Newtonian solutions, one cannot match the flow in the
relativistic case to a given power-law radial distribution of the
rotation velocity of the source (e.g. the $\propto r^{-1/2}$ rotation
law of a Keplerian accretion disc) because the relativistic equations
already contain the (constant) speed of light $c$. However, this
constraint only affects the base of the flow (and, as shown by
\citealt{VK03a}, it is possible to approximate a Keplerian disc even in
this case by judiciously parametrizing the disc height above the origin
of the coordinate system), and one can proceed to obtain the global
structure of the outflow as in the Newtonian case. \citet{LCB92}
identified as a key property of the relativistic outflow solutions the
spatially extended nature of the acceleration region, which continues
well beyond the classical fast-magnetosonic surface. These results were
further generalized to initially ``hot'' outflows by
\citet{VK03a,VK03b}, who went on to apply the relativistic self-similar
solutions to GRB outflows (see also \citealt{VK01} and \citealt{VPK03})
and to the lower-$\Gamma$ jets imaged in active galactic nuclei
\citep[AGNs;][]{VK04}. The solutions obtained in these papers confirmed
that spatially extended acceleration is a generic property of MHD
outflows that distinguishes it from purely hydrodynamic, thermally
driven winds. \citet{VK01,VK03a} noted that this property can be
understood from the fact that the magnetic acceleration is determined
from the joint solution of the Bernoulli equation (derived from the
momentum conservation equation along the poloidal magnetic field) and
the trans-field equation (which describes the force balance in the
transverse direction). The effective singular surface (the ``event
horizon'' for the propagation of fast-magnetosonic waves) is the
so-called modified fast magnetosonic surface, which can lie well beyond
the corresponding classical surface. (The classical fast-magnetosonic
surface is singular only when one solves the Bernoulli equation alone,
assuming that the shape of the field lines is given; in
Section~\ref{theory} we further elaborate on the strong connection
between acceleration and poloidal field-line shape in magnetically
driven flows.)
The semi-analytic solutions have also established the collimation
properties of MHD outflows, demonstrating that they converge
asymptotically to cylinders for flows that are Poynting flux-dominated
at the source and to cones when the enthalpy flux is initially dominant
\citep[e.g.][]{VK03a,VK03b}. These solutions are, however, limited by
the self-similarity assumption, which, besides restricting the angular
velocity distribution at their base, also requires the magnetic flux
distribution to be a power law in radius and only enables one current
regime (current-carrying or return-current, but not a global current
circuit) to be modelled by any given solution. To validate the
applicability of these results under more realistic circumstances and
to ascertain their dynamical stability, one needs to resort to numerical
simulations. However, the large spatial extent of the acceleration
region (which, according to the semi-analytic solutions, typically
covers several decades in spherical radius) has posed a strong challenge
for such calculations: in fact, early attempts to simulate such flows
were limited by numerical dissipation to maximum Lorentz factors that
were only a small fraction (less than $1\%$) of the potentially
achievable terminal value.
\citeauthor{KBVK07} (\citeyear{KBVK07}; hereafter Paper~I) have taken a
major step toward overcoming this challenge by employing a
special-relativistic, ideal-MHD numerical scheme that was specifically
designed to optimize accuracy and resolution and to minimize numerical
dissipation. A key element of their approach was the implementation of a
grid-extension method that made it possible to follow the flow up to six
decades in spatial scale while reducing the computation time by up to
three orders of magnitude. They were able to model cold flows that
converted nearly $80\%$ of the initial Poynting flux into kinetic energy
of $\Gamma_\infty \ga 10$ baryons and demonstrated that the results were
consistent with the available data on the acceleration of relativistic
jets in AGNs. They found that the numerical solutions assumed a
quasi-static configuration that was qualitatively in accord with the
self-similar AGN jet models of \citet{VK04}. The simulations were,
however, able to examine various aspects of the flow that could not be
studied within the framework of a self-similar model (including the
structure of outflows in which both the current and the return current
flow within the jet and the dependence of the collimation properties on
the shape of the jet boundary) and uncovered new features (such as the
formation of a cylindrical core around the jet axis) that were
inherently non--self-similar.
In this paper we further extend the scheme presented in Paper~I to cover
the regime of GRB outflows. In particular, we present simulations of
outflows that attain terminal Lorentz factors $\Gamma_\infty \ga 10^2$,
following them over up to eight decades in axial scale. Besides cold
jets, we also consider the case of an outflow in which the enthalpy flux
is a significant fraction of the injected energy flux. Owing to the
larger range in $\Gamma$ in comparison with the solutions presented in
Paper~I, the magnetic acceleration region can now be better isolated,
which enables us to more accurately compare its behaviour with that of
the self-similar solutions and to analyse it using the asymptotic forms
of the Bernoulli and trans-field equations. We begin by reviewing the
relativistic MHD formalism (Section~\ref{equations}) and our numerical
scheme (Section~\ref{simulations}). We present key simulation results in
Section~\ref{results} and discuss them in the context of the theory of
magnetic acceleration in Section~\ref{theory}. Section~\ref{application}
deals with applications of our results to GRBs. Our conclusions are
given in Section~\ref{conclusion}.
\section{Basic equations}
\label{equations}
Since most of the acceleration takes place far away from the source,
we assume that the space-time is flat. In an inertial frame at rest
relative to the source, the relativistic ideal-MHD equations that
describe the flow take the following form: {\it
continuity equation}
\begin{equation}
(1/c)\Pd{t}(\sqrt{-g}\rho u^t)+ \Pd{i}(\sqrt{-g}\rho u^i)=0\, ,
\label{cont1}
\end{equation}
where $\rho$ is the rest mass density of matter, $u^\nu$ is its
4-velocity, and $g$ is the determinant of the metric tensor; {\it
energy-momentum equations}
\begin{equation}
(1/c)\Pd{t}(\sqrt{-g}T^t_{\ \nu})+ \Pd{i}(\sqrt{-g}T^i_{\ \nu})=
\frac{\sqrt{-g}}{2} \Pd{\nu}(g_{\alpha\beta}) T^{\alpha\beta}\, ,
\label{en-mom1}
\end{equation}
where $T^{\kappa\nu}$ is the total stress-energy-momentum tensor;
{\it induction equation}
\begin{equation} (1/c)\Pd{t}(B^i)+e^{ijk}\Pd{j}(E_k) =0\, , \label{ind1}
\end{equation}
where $e_{ijk} = \sqrt{\gamma} \epsilon_{ijk} $ is the Levi-Civita
tensor of the absolute space ($\epsilon_{123}=1$ for right-handed
systems and $\epsilon_{123}=-1$ for left-handed ones) and $\gamma$ is
the determinant of the spatial part of the metric tensor
($\gamma_{ij}=g_{ij}$); the {\it solenoidal condition}
\begin{equation}
\Pd{i}(\sqrt{\gamma} B^i) =0\, .
\end{equation}
The total stress-energy-momentum tensor, $T^{\kappa\nu}$, is a sum of
the stress-energy momentum tensor of matter,
\begin{equation}
T_{(m)}^{\kappa\nu} = wu^\kappa u^\nu /c^2 + p g^{\kappa\nu}\, ,
\end{equation}
where $p$ is the thermodynamic pressure and $w$ is the enthalpy per
unit volume, and the stress-energy momentum tensor of the
electromagnetic field,
\begin{equation}
T_{(e)}^{\kappa\nu} = \frac{1}{4\pi}\left[F^{\kappa\alpha} F^\nu_{\
\alpha} - \frac{1}{4}(F^{\alpha\beta}F_{\alpha\beta})g^{\kappa\nu}
\right]\, ,
\end{equation}
where $F^{\nu\kappa}$ is the Maxwell tensor. The electric and magnetic
fields are defined as measured by an observer stationary relative to the
spatial grid, which gives
\begin{equation}
B^i= \frac{1}{2}e^{ijk} F_{jk}
\label{B-def}
\end{equation}
and
\begin{equation}
E_i = F_{it}\, .
\label{E-def}
\end{equation}
In the limit of ideal MHD
\begin{equation}
E_i=-e_{ijk}v^jB^k/c\, ,
\label{perf-cond}
\end{equation}
where $v^i=u^i/u^t$ is the usual 3-velocity of the plasma.
In all of our simulations we use an isentropic equation of state
\begin{equation}
p=Q\rho^s\, ,
\label{eos}
\end{equation}
where $Q=$const and $s=4/3$.
This relation enables us to exclude the
energy equation from the integrated
system. However, the momentum equation remains intact, including the
non-linear advection term. Therefore, if the conditions for shock
formation were to arise, our calculation would capture that
shock.\footnote{Since entropy is fixed, the compression of our shocks
would be the same as for continuous compression waves. This would give a higher
jump in density for the same jump in pressure than in a proper
(dissipative) shock. Fortunately, we do not need to contend with this
issue in practice as shocks do not form in our simulations.}
The enthalpy per unit volume is
\begin{equation}
w=\rho c^2 + \frac{s}{s-1} p \,.
\label{w-def}
\end{equation}
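As a quick numerical sanity check of these relations (with illustrative values for $Q$ and $\rho$, in units where $c=1$; these numbers are assumptions, not simulation input), note that for $s=4/3$ the factor $s/(s-1)$ equals $4$:

```python
# Check of the isentropic EOS p = Q * rho**s and the enthalpy per unit
# volume w = rho*c**2 + s/(s-1) * p, for s = 4/3 (illustrative values).

s, Q, c = 4.0 / 3.0, 0.1, 1.0
rho = 2.0

p = Q * rho**s                         # isentropic pressure
w = rho * c**2 + s / (s - 1.0) * p     # enthalpy per unit volume; s/(s-1) = 4

# In the cold limit Q -> 0 the enthalpy reduces to the rest-mass term:
w_cold = rho * c**2
```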
\subsection{Field-line constants}
\label{section_integrals}
The poloidal magnetic field is fully described by the azimuthal
component of the vector potential,
\begin{equation}
B^i = \frac{1}{\sqrt{\gamma}} \epsilon^{ij\phi} \pder{A_\phi}{x^j}\,
.
\end{equation}
For axisymmetric solutions $A_\phi=\Psi/2\pi$, where $\Psi(x^i)$, the
so-called magnetic flux function, is the total magnetic flux enclosed
by the circle $x^i=$const ($x^i$ being the coordinates of the
meridional plane). Stationary and axisymmetric ideal MHD flows have five
quantities that propagate unchanged along the magnetic field lines and
thus are functions of $\Psi$ alone. These are
$k$, the mass flux per unit magnetic flux;
$\Omega$, the angular velocity of magnetic field lines;
$l$, the total angular momentum flux per unit rest-mass flux;
$\mu$, the total energy flux per unit rest-mass energy flux;
and $Q$, the entropy per particle:
\begin{equation}
k = \frac{\rho u_p}{B_p}\, ,
\label{kappa}
\end{equation}
\begin{equation}
\Omega=\frac{v^{\hat{\phi}}}{r}-\frac{v_p}{r}
\frac{B^{\hat{\phi}}}{B_p} \, ,
\label{omega-def}
\end{equation}
\begin{equation}
l = -\frac{I}{2\pi k c}+r \frac{w}{\rho c^2} \Gamma v^{\hat{\phi}} \,,
\label{angm-def}
\end{equation}
\begin{equation}
\mu = \mu_h + \mu_m,
\label{kap-def}
\end{equation}
and
\begin{equation}
Q = p/\rho^s,
\end{equation}
where $u_p=\Gamma v_p$ is the magnitude of the poloidal component of
the 4-velocity, $B_p$ is the magnitude of the poloidal component of
the magnetic field, $r$ is the cylindrical radius,
\begin{equation}
I = \frac{c}{2} r B^{\hat\phi}
\label{I}
\end{equation}
is the total electric current flowing through a loop of radius $r$
around the rotation axis,
\begin{equation}
\mu_h= \frac{w}{\rho c^2}\Gamma
\label{mu_h-def}
\end{equation}
is the total hydrodynamic energy (rest mass plus thermal plus kinetic)
flux per unit rest-mass energy flux,
\begin{equation}
\mu_m=\mu_h \sigma=-\frac{\Omega I}{2\pi k c^3}
\label{mu_m-def}
\end{equation}
is the Poynting flux per unit rest-mass energy flux, and
$\sigma$ is the ratio of the Poynting flux to the hydrodynamic
(rest-mass plus thermal plus kinetic) energy flux.
For cold flows $Q=0$ and $w=\rho c^2$. (Here and in the
rest of the paper we use a hat symbol over vector indices to indicate
their components in a normalized coordinate basis.) From
equation~(\ref{kap-def}) it follows that the Lorentz factor $\Gamma$
cannot exceed $\mu$.
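The decomposition of $\mu$ can be illustrated numerically. The following sketch (with made-up flow values, not simulation output) computes $\mu_h$, $\mu_m$ and $\mu=\mu_h(1+\sigma)$, and exhibits the bound $\Gamma\le\mu$ for an initially Poynting-dominated cold flow:

```python
# Energy-flux invariants along a field line (illustrative values only):
# mu_h = (w / rho c^2) * Gamma, mu_m = sigma * mu_h, mu = mu_h + mu_m.

def energy_invariants(gamma, w_over_rhoc2, sigma):
    mu_h = w_over_rhoc2 * gamma   # hydrodynamic energy flux / rest-mass flux
    mu_m = sigma * mu_h           # Poynting flux / rest-mass flux
    return mu_h, mu_m, mu_h + mu_m

# Cold flow (w = rho c^2), injected with Gamma = 2 and sigma = 50:
mu_h, mu_m, mu = energy_invariants(gamma=2.0, w_over_rhoc2=1.0, sigma=50.0)
```

Since $\mu$ is conserved, complete conversion of Poynting flux into kinetic energy in this example would yield $\Gamma_\infty = \mu = 102$, which is the sense in which $\mu$ bounds the achievable Lorentz factor.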
\section{Numerical Simulations}
\label{simulations}
To maintain a firm control over the jet's confinement and to prevent
complications related to numerical diffusion of the denser
plasma from the jet's surroundings, we study outflows
that propagate inside a solid funnel of a prescribed
shape.\footnote{As was already noted in Paper~I, in real astrophysical
systems the shape of the boundary is determined by the spatial
distribution of the pressure or the density of the confining ambient medium.
The effective ambient pressure
distributions implied by the adopted funnel shapes are considered in
Section~\ref{pressure}.} Specifically, we consider axisymmetric
funnels
\begin{eqnarray}
\nonumber z \propto r^a\, ,
\end{eqnarray}
where $z$ and $r$ are the cylindrical coordinates of the funnel wall
and $a=2/3$, $1$, $3/2$, $2$ and $3$.
We employ elliptical coordinates $\{\xi,\eta,\phi\}$, where
\begin{equation}
\xi=rz^{-1/a}
\label{xi}
\end{equation}
and
\begin{equation}
\eta^2=\frac{r^2}{a} + z^2
\label{eta}
\end{equation}
(see Paper~I for details).
We use a Godunov-type numerical code based on the scheme described in
\citet{K99}. To reduce numerical diffusion we apply parabolic
reconstruction instead of the linear one of the original code. Our
procedure, in brief, is to calculate minmod-averaged first and second
derivatives and use the first three terms of the Taylor expansion for spatial
reconstruction.
This simple procedure results in a noticeable
improvement in the solution accuracy even though the new scheme is still
not 3rd-order accurate because of the non-uniformity of the grid.
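The reconstruction step can be sketched as follows (an illustrative,
uniform-grid Python fragment rather than the actual production code; in
particular, the way the second derivative is limited here is only
schematic):

```python
import numpy as np

def minmod(a, b):
    # zero at extrema or sign changes, else the smaller-magnitude argument
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def parabolic_reconstruct(u, dx):
    """Interface states u(x_i -/+ dx/2) from the first three terms of the
    Taylor expansion, with minmod-averaged first and second derivatives
    (uniform grid; returns values for cells 2..n-3)."""
    d1 = np.diff(u) / dx                 # one-sided first differences
    du = minmod(d1[:-1], d1[1:])         # limited slope, cells 1..n-2
    c2 = np.diff(u, 2) / dx**2           # central second differences
    d2 = minmod(c2[:-2], c2[2:])         # limited curvature, cells 2..n-3
    uc = u[2:-2]
    du = du[1:-1]
    uL = uc - 0.5 * dx * du + 0.125 * dx**2 * d2   # left interface state
    uR = uc + 0.5 * dx * du + 0.125 * dx**2 * d2   # right interface state
    return uL, uR
```

For smooth monotonic data the limiter leaves the parabola essentially
intact, while near extrema it drops the scheme towards first order, which
is what suppresses spurious oscillations.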
The grid is uniform in the $\xi$ direction (the polar angle direction
when we use spherical coordinates), where in most runs it has a total
of 60 cells. To check the convergence, some runs were repeated with a
doubled resolution. The cells are elongated in the $\eta$ direction
(the radial direction when we use spherical coordinates), reflecting
the elongation of the funnel. Very elongated cells lead to a
numerical instability, so we imposed an upper limit of 40 on the
length/width ratio.
To speed up the simulations, we implement a sectioning of the
computational grid as described in \citet{KL04}. In each section,
which is shaped as a ring, the numerical solution is advanced using a
time step based on the local Courant condition. It is twice as large
as the time step of the adjacent inner ring and twice as small as the
time step of the adjacent outer ring. This approach is particularly
effective for conical flows but less so for highly collimated, almost
cylindrical configurations.
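The resulting factor-of-two nesting of time steps can be illustrated
schematically (a hypothetical Python sketch in which the actual MHD update
of each ring is replaced by a step counter):

```python
def advance(counts, i, dt):
    """Advance ring i by one step of size dt after recursively advancing
    the adjacent inner ring (i-1) by two substeps of half that size."""
    if i > 0:
        advance(counts, i - 1, dt / 2.0)
        advance(counts, i - 1, dt / 2.0)
    counts[i] += 1          # stands in for the actual update of ring i

# three nested rings advanced over one global step of the outermost ring
counts = [0, 0, 0]
advance(counts, len(counts) - 1, 1.0)
```

With three rings, the innermost ring thus takes four steps of size 0.25
for every single step of the outermost ring, so most of the work is spent
where the Courant condition is most restrictive.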
The equations are non-dimensionalized in the following manner. The unit of length, $L$, is
such that $\eta_i=1$, where the subscript $i$ refers to the inlet
boundary. The unit of time is $T=L/c$. The unit of mass is $M=L^3
B_0^2/4\pi c^2$, where $B_0$ is the dimensional magnitude of the $\eta$
component of magnetic field at the inlet (so the dimensionless magnitude
of $B^{\hat\eta}$ at the inlet is $\sqrt{4\pi}$). In applications, $L$
is the typical length-scale of the launch region, $T$ is the light
crossing time of that region and $B_0$ is the typical strength of the
poloidal magnetic field at the origin. Notice that $L$ does not have to be
the size of the rotating object at the base of the jet and in particular it
cannot be identified with the radius of the black hole event horizon which
allows only inflows. When dimensional estimates are required we use the
expected magnitude of the light-cylinder radius, $r_{\rm lc} \equiv c/\Omega$.
The mass scale $M$ does not represent the mass of the central object
but rather the rest-mass equivalent of the magnetic energy within the
magnetosphere.
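For reference, the conversion to physical units can be written as a short
sketch (the fiducial values below, $L=10^{6}$~cm and $B_0=10^{15}$~G, are
purely illustrative and are not taken from the models of this paper):

```python
import math

C_LIGHT = 2.99792458e10      # speed of light [cm/s]

def code_units(L, B0):
    """Physical values of the code units: time T = L/c and
    mass M = L^3 B0^2 / (4 pi c^2), for L in cm and B0 in gauss."""
    T = L / C_LIGHT
    M = L**3 * B0**2 / (4.0 * math.pi * C_LIGHT**2)
    return T, M

# illustrative magnetar-like scales
T, M = code_units(1.0e6, 1.0e15)   # T ~ 3.3e-5 s, M ~ 8.9e25 g
```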
\subsection{Boundary conditions}
\subsubsection{Inlet boundary}
\label{inlet_section}
We treat the inlet boundary, $\eta_i=1$, as a surface of a perfectly
conducting rotator with either a uniform angular velocity $\Omega =
\Omega_0$ or with
\begin{equation}
\Omega = \Omega_0(1+a_2(\xi/\xi_j)^2+a_3(\xi/\xi_j)^3),
\label{omega}
\end{equation}
where the subscript $j$ refers to the jet boundary (funnel wall).
In this paper we set $a_2=0.778$ and $a_3=-1.778$. The angular
velocity profile is directly related to the distribution of the
electric current in the jet, which for $r\gg r_{\rm lc}$ is given by
\begin{equation}\label{current}
I\approx -\frac{1}{2}\Omega B_p r^2
\label{I1}
\end{equation}
(see Paper~I, or equation~\ref{I-Bp} in Section~\ref{power-law}).
In fact, the current is driven
by the electric field associated with the rotating poloidal field, and
the electric charge conservation requires the circuit to eventually close. In
the case of a constant $\Omega$ the return current flows over the jet
boundary, whereas in the case of differential rotation with
$\Omega(\xi_j)=0$ it flows mainly inside the jet (within $0.75<\xi/\xi_j <1$
for the $\Omega$ distribution given by equation~\ref{omega}).
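This behaviour of the return current is easily verified numerically (a
sketch that, for illustration only, assumes a uniform $B_p$ and takes
$r\propto\xi$ in equation~\ref{I1}):

```python
import numpy as np

a2, a3 = 0.778, -1.778            # coefficients of equation (omega)

def omega(x):
    """Angular velocity profile versus x = xi/xi_j, with Omega_0 = 1."""
    return 1.0 + a2 * x**2 + a3 * x**3

x = np.linspace(0.0, 1.0, 100001)
I = -0.5 * omega(x) * x**2        # equation (I1) with B_p = 1 and r = x

# omega vanishes at the jet boundary and |I| peaks at x ~ 0.75, so the
# return current is confined to 0.75 < xi/xi_j < 1
```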
The solid-body rotation law provides a very good
description of the behaviour of magnetic field lines that thread the horizon
of a black hole or the surface of a magnetized star. This choice is therefore
entirely appropriate for the black-hole or magnetar theory of GRB jets.
On the other hand, differential rotation is a natural choice for jets
that are launched from an accretion disc, and although the
distribution~(\ref{omega}) does not correspond to a realistic disc
model, it should nevertheless capture the qualitative aspects of such a
system.\footnote{Note in this connection that \citet{TMN08} simulated a
force-free black-hole/disc outflow in which current flowed out along
field lines that threaded the uniformly rotating hole and returned along
field lines attached to the differentially rotating disc.}
The condition of perfect conductivity allows us to fix the azimuthal
component of the electric field and the $\eta$ component of the
magnetic field:
\begin{equation}
E_\phi=0, \quad B^{\hat\eta}=B_0 \quad \text{at} \quad \eta=\eta_i\, .
\end{equation}
{}From the first of these conditions we derive
\begin{equation}
v^{\hat\xi} = \frac{v^{\hat\eta}}{B^{\hat\eta}} B^{\hat\xi}
\label{v_xi_b}
\end{equation}
and (using equation \ref{omega-def})
\begin{equation}
v^{\hat{\phi}} = r \Omega + \frac{v^{\hat\eta}}{B^{\hat\eta}}
B^{\hat{\phi}}\, .
\label{v_phi_b}
\end{equation}
The adopted uniform distribution of $B^{\hat{\eta}}$ is consistent with
transverse mechanical equilibrium at the inlet.
We have also experimented with nonuniform distributions of
the magnetic field, in particular with $B^{\hat\eta}$ decreasing with
$\xi$. The results were not significantly different as the
field distribution downstream of the inlet underwent a rapid
rearrangement that restored the transverse force balance.
To have control over the mass flux, the flow
at the inlet boundary is set to be super--slow-magnetosonic. This
means that both the density and the radial component of the velocity
can be prescribed some fixed values:
\begin{eqnarray}
\nonumber \rho=\rho_0\, , \quad v^{\hat\eta}=v_{p_{0}}\, .
\end{eqnarray}
In the simulations we use $v_{p_{0}}=0.5\, c$ or $0.7\, c$, which is a
choice of convenience. On the one hand, this value is sufficiently small
to insure that the flow at $\eta_i=1$ is sub-Alfv\'enic and hence that
the Alfv\'en and fast-magnetosonic critical surfaces are located
downstream of the inlet boundary. On the other hand, it is large enough
to promote the rapid establishment of a steady state (in which the
outflow speed remains constant along the symmetry axis).
Because of the sub-Alfv\'enic nature of the inlet flow, we cannot fix the
other components of the magnetic field and the velocity --- they are to
be found as part of the global solution. Following the standard
approach we extrapolate $B^{\hat\phi}$ and $B^{\hat\xi}$ from the
domain into the inlet boundary cells. We then compute $v^{\hat\phi}$
and $v^{\hat\xi}$ from equations~(\ref{v_xi_b}) and~(\ref{v_phi_b}).
The magnitude of the angular velocity is chosen in such a way that
the Alfv\'en surface is encountered close to the source. Specifically,
in the case of solid-body rotation the light cylinder radius, $r_{\rm
lc}$, is $\simeq 50\%$ larger than the initial jet radius.
In the differential rotation case, the
closest point of the Alfv\'en surface is located at a distance
of $\simeq 1$ initial jet radius from the inlet surface.
The inlet density varies from model to model in order to cover a
wide range of initial magnetizations. Table I gives the key parameters
of all the jet models constructed in this study. Most of the models,
denoted by the letter B, correspond to the wall shape $z\propto r^{3/2}$
and differ only by the value of the magnetization parameter: $\mu$
varies from the relatively small value of 39, which is more suitable to
AGN jets (see Paper~I), all the way up to 620. Model B2H is included to
study the effects of a high temperature at the source. The initial
effective thermal Lorentz factor in this model is
$\Gamma_{t0}=w_0/\rho_0c^2=55$.
Models A and AW have a wall of conical
shape. In model AW the half-opening angle of the cone is $90^\circ$,
which allows us to model the case of an unconfined outflow (which could be
relevant to pulsar winds).
The remaining models help to explore the effects of
differential rotation (model D), of various other paraboloidal wall shapes
($z \propto r^{2}$ in model C, $z \propto r^{3}$ in model F)
and of a wall shape whose opening angle increases with distance (model E).
\begin{table}
\caption{Parameters of simulation models.}
\begin{tabular}{|c|l|l|l|l|l|}
\hline
Model & a & rotation & $w_0/\rho_0c^2$ & $\xi_j$ or $\theta_j$ &
$\mu_{\rm max}$\\
\hline
\hline
A & 1 & uniform & 1.0 & $\theta_j=0.2$ & 560\\
\hline
AW & 1 & uniform & 1.0 & $\theta_j=\pi/2$ & 560\\
\hline
B1 & 3/2 & uniform & 1.0 & $\xi_j=2.0$ & 620\\
\hline
B2 & 3/2 & uniform & 1.0 & $\xi_j=2.0$ & 310\\
\hline
B2H & 3/2 & uniform & 55 & $\xi_j=2.0$ & 370\\
\hline
B3 & 3/2 & uniform & 1.0 & $\xi_j=2.0$ & 155 \\
\hline
B4 & 3/2 & uniform & 1.0 & $\xi_j=2.0$ & 78\\
\hline
B5 & 3/2 & uniform & 1.0 & $\xi_j=2.0$ & 39\\
\hline
C & 2 & uniform & 1.0 & $\xi_j=2.0$ & 620\\
\hline
D & 3/2 & differential & 1.0 & $\xi_j=2.0$ & 600\\
\hline
E & 2/3 & uniform & 1.0 & $\xi_j=0.1$ & 300\\
\hline
F & 3.0 & uniform & 1.0 & $\xi_j=2.0$ & 540\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=55mm]{figures/gau-k.png}
\includegraphics[width=55mm]{figures/gau-omega.png}
\includegraphics[width=55mm]{figures/gau-mu.png}
\includegraphics[width=55mm]{figures/gbu1-k.png}
\includegraphics[width=55mm]{figures/gbu1-omega.png}
\includegraphics[width=55mm]{figures/gbu1-mu.png}
\caption{Computational errors for models A (top row) and B1 (bottom row).
The plots show the flow parameters $k(\Psi)$, $\Omega(\Psi)$ and
$\mu(\Psi)$ at the inlet (solid lines) and at $\eta=1\times 10^5$
for model A and $\eta=5\times 10^7$ for model B1 (dashed lines).
}
\label{constants}
\end{figure*}
\subsubsection{Other boundaries}
The computational domain is always chosen to be long enough for the
jet to be super--fast-magnetosonic when it approaches the outlet
boundary $\eta=\eta_o$. This justifies the use of radiative boundary
conditions at this boundary (i.e. we determine the state variables of
the boundary cells via extrapolation of the domain solution).
At the polar axis, $\xi=0$, we impose symmetry boundary conditions for
the dependent variables that are expected to pass through zero there,
\begin{eqnarray}
\nonumber f(-\xi)=-f(\xi)\, .
\end{eqnarray}
These variables include $B^{\hat\xi}$, $B^{\hat\phi}$, $u^{\hat\xi}$
and $u^{\hat\phi}$. For other variables we impose a ``zero second
derivative'' condition,
\begin{eqnarray}
\nonumber \partial^2{f}/\partial{\xi^2} = 0\, ,
\end{eqnarray}
which means that we use linear interpolation to calculate the values of these
variables in the boundary cells.
We do this in order to improve the numerical representation of a
narrow core that develops in all cases as a result of the magnetic
hoop stress. Within this core the gradients in the $\xi$ direction are
very large and the usual zero-gradient condition, $f(-\xi)=f(\xi)$,
results in increased numerical diffusion in this region. We have
checked that this has a noticeable effect only on the axial region and
that the global solution does not depend on which of these two
conditions is used.
At the wall boundary, $\xi=\xi_j$, we use a reflection condition,
\begin{eqnarray}
\nonumber f(\xi_j+\Delta \xi)=-f(\xi_j-\Delta\xi) \, ,
\end{eqnarray}
for $B^{\hat\xi}$ and $u^{\hat\xi}$ and a zero-gradient condition for
all other variables.
\begin{figure*}
\includegraphics[width=70mm]{figures/rho-a.png}
\includegraphics[width=70mm]{figures/lor-a.png}
\caption{Model A. Left panel shows $\log_{10}\rho'$
(colour), where $\rho'=\Gamma\rho$ is the jet density as measured in
the frame of jet source, and the magnetic field lines. Right panel
shows the Lorentz factor (colour) and the current lines.
The light cylinder radius is $r_{\rm lc}=0.29$.
}
\label{model-a}
\end{figure*}
\begin{figure*}
\includegraphics[width=70mm]{figures/rho-d.png}
\includegraphics[width=70mm]{figures/lor-d.png}
\caption{Same as in Fig.~\ref{model-a}, but for model D.
The point of the Alfv\'en surface closest to the inlet has radius
$r_{\rm lc}=1.3$.
}
\label{model-d}
\end{figure*}
\begin{figure*}
\includegraphics[width=70mm]{figures/rho-b2.png}
\includegraphics[width=70mm]{figures/lor-b2.png}
\caption{Same as in Fig.~\ref{model-a}, but for model B2.
The light cylinder radius is $r_{\rm lc}=1.6$.
}
\label{model-b2}
\end{figure*}
\begin{figure*}
\includegraphics[width=70mm]{figures/rho-b2h.png}
\includegraphics[width=70mm]{figures/lor-b2h.png}
\caption{Same as in Fig.~\ref{model-a}, but for model B2H.
The light cylinder radius is $r_{\rm lc}=1.6$.
}
\label{model-b2h}
\end{figure*}
\subsection{Initial setup}
\label{setup}
The initial configuration corresponds to a non-rotating, purely poloidal
magnetic field with approximately constant magnetic pressure across the
funnel. The plasma density within the funnel is set to a small value so
that the outflow generated at the inlet boundary can easily sweep it
away. In order to speed this process up the $\eta$ component of velocity
inside the funnel is set equal to $0.7\, c$, whereas the $\xi$ component
is set equal to zero.
\subsection{Grid extensions}
The inner rings of the grid, where the cells, and hence the local time
steps, are smallest, are the computationally most intensive regions of the
simulation domain. If we kept computing these inner rings during the
whole run then we would not be able to advance very far from the jet
origin. Fortunately, the transonic nature of the jet flow allows us
to cease computations in the inner region once the solution there
settles to a steady state. To be more precise, we cut the funnel
along the $\xi$-coordinate surfaces into overlapping sectors with the
intention of computing only within one sector at any given time,
starting with the sector closest to the inlet boundary. Once the
solution in the ``active'' sector settles to a steady state we switch
to the subsequent sector, located further away from the inlet. During
the switch the solution in the outermost cells of the active sector is
copied into the corresponding inner boundary cells of the subsequent
sector. During the computation within the latter sector these inner
boundary cells are not updated. This procedure is justified
only when the flow in a given sector cannot communicate with the flow
in the preceding sector through hyperbolic waves, and thus we
ensure that the Mach cone of the fast-magnetosonic waves points
outward at the sector interfaces (see Paper~I).
In these simulations we used up to 7 sectors, with each
additional sector being ten times longer than the preceding one. This
technique has enabled us to reduce the computation time by more than
three orders of magnitude. Although
the grid extension can in principle be continued indefinitely, there
are other factors that limit how far along the jet one can advance in
practice. Firstly, once the paraboloidal jets become highly collimated
the required number of grid cells along the jet axis increases, and
each successive sector becomes more computationally expensive than the
previous one. Secondly, errors due to numerical diffusion gradually
accumulate in the downstream region of the flow and the solution
becomes progressively less accurate (see Fig.~\ref{constants}).
\section{Results}
\label{results}
As is generally the case in numerical simulations, our computations are
subject to numerical errors, mainly the truncation errors of our RMHD
scheme. The field-line constants described in
Section~\ref{section_integrals} can be used for a straightforward
evaluation of the absolute error. Fig.~\ref{constants} shows the
ideal-MHD constants $k,\Omega$ and $\mu$ as functions of magnetic
flux at the inlet and near the outer boundary of the computational
domain for models A and B1. Any mismatch between these curves
is indicative of computational errors. Although the plots exhibit
noticeable deviations, they remain relatively small, and we
conclude that the results are trustworthy.
\begin{figure}
\includegraphics[width=77mm]{figures/gbu1-bp.png}
\caption{Distribution of the poloidal magnetic field across the
jet of model B1, showing the development of an axial core as the
distance from the origin increases. From top to bottom, the curves
correspond to $\eta = 1$, $50$, $5\times 10^2$, $5 \times 10^3$, $5 \times
10^4$, $5\times 10^5$, $5\times 10^6$ and $5\times 10^7$, respectively.
}
\label{bp}
\end{figure}
\begin{figure*}
\includegraphics[width=77mm]{figures/gbu1-sigev.png}
\includegraphics[width=77mm]{figures/gb-sigev.png}
\caption{Distribution of $\Gamma$ and $\mu_m=\mu_h\sigma$ across the jet in
models B1 (left panel) and D (right panel). Solid lines show $\Gamma$ at
$\eta=5\times10^4,5\times10^5,5\times10^6,5\times10^7$ (increasing upward),
dashed lines show $\mu_h\sigma$ at the same locations (increasing downward),
and the dash-dotted line shows $\mu$.
}
\label{cross2}
\end{figure*}
Figs.~\ref{model-a}--\ref{model-b2h} show the general 2D structure of
the derived jet solutions for models A, D, B2 and B2H midway from the
inlet surface. We selected these particular cases since they represent the most
significant variations in the model parameters, namely the
transition (i) from conical to paraboloidal shape of the confining wall
(A and B2), (ii) from uniform to differential rotation at the base (A
and D)\footnote{Note that, when displaying results for model D, we
define the fiducial light-cylinder radius in terms of the angular
velocity $\Omega_0$ of the innermost field line.}
and (iii) from cold to initially hot flows (B2 and B2H). In
general, the structure of the simulated ultra-relativistic jets is very
similar to that of the moderately-relativistic conical jets studied in
Paper~I. All models show the development of a central core where the
source-frame mass density $\rho'=\Gamma\rho$ peaks.
The mass concentration is accompanied by a bunching-up of the poloidal
magnetic field lines near the axis, as further illustrated in
Fig.~\ref{bp}. The development of an axial core is a generic property of
axisymmetric MHD outflows from a rotating source \citep{Bog95} and was
also a feature of the jets simulated in Paper~I. The distribution
of the Lorentz factor across the jet varies, however, from case to case.
In model A $\Gamma$ has
its maximum value at the jet boundary (Fig.~\ref{model-a}). In model D
the maximum is located approximately midway between the symmetry
axis and the boundary (Fig.~\ref{model-d}). This reflects the fact that
the angular velocity of magnetic field lines, and hence the
electromagnetic energy flux (equation~\ref{mu_m-def}), vanishes at the
boundary in this model, resulting in $\mu \approx \mu_h \approx 1$ near
the wall (see Fig.~\ref{cross2}).
The Lorentz factor of the initially cold jet in model B2 at first peaks
near the axis, with its value decreasing slightly on the way to the jet
boundary. However, further downstream the maximum shifts towards the
boundary and eventually disappears. In the initially hot jet of model
B2H the Lorentz factor at first peaks right on the symmetry axis, where
the acceleration is due to the gas pressure. However, further
downstream its evolution is similar to that of model B2.
Fig.~\ref{sigma} shows the efficiency of plasma acceleration
along the magnetic surface $\Psi=0.8\Psi_{\rm max}$ (located near the
jet boundary) for models B1--B4, which differ only by the strength of
the initial magnetization. One can see that in all four cases the
kinetic energy flux, $\simeq \mu_h \rho u_p c^2 \simeq \Gamma\rho u_p c^2$,
eventually exceeds the Poynting flux, $\mu_m \rho u_p c^2$.
This magnetic surface is not exceptional
and a similar behaviour is exhibited along other flux surfaces. This is
illustrated by Figs.~\ref{cross2}~and~\ref{geom-effect}. These figures also
show that soon after reaching equipartition the plasma acceleration
slows down significantly:
this is consistent with the relation $\mu \approx \Gamma (1+\sigma)$
obtained from equations~(\ref{kap-def}), (\ref{mu_m-def})
and~(\ref{mu_h-def}), in which crossing the equipartition point
corresponds to the magnetization parameter $\sigma$ dropping below 1.
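Explicitly, for a cold flow equations~(\ref{mu_h-def})
and~(\ref{mu_m-def}) give $\mu_h\approx\Gamma$ and $\mu_m=\mu_h\sigma$,
so that equation~(\ref{kap-def}) becomes
\begin{eqnarray}
\nonumber \mu = \mu_h(1+\sigma) \approx \Gamma(1+\sigma)\, ,
\end{eqnarray}
and at the equipartition point ($\sigma=1$) the Lorentz factor has
already reached $\Gamma\approx\mu/2$, half of its maximum attainable
value.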
Fig.~\ref{sigma} further indicates that the
efficiency of magnetic acceleration is higher the lower the initial
magnetization. This is reflected in the behaviour of $\sigma$, the ratio
of the Poynting flux to the matter energy flux (see
Section~\ref{section_integrals}). The left panel of Fig.~\ref{sigma-ev}
shows that the fast initial decrease of $\sigma$ slows down at
a higher value of $\sigma$ when the initial magnetization is larger. If
this behaviour in fact extends to values of $\mu_{m0}\approx \mu$ that are
low enough for the maximum attainable speed to remain nonrelativistic then the
indicated inverse correlation is consistent with the very high
acceleration efficiency exhibited by MHD outflow solutions in the
Newtonian regime \citep[e.g.][]{V00}.
\begin{figure*}
\includegraphics[width=70mm]{figures/gbu1-sigma.png}
\includegraphics[width=70mm]{figures/gbu-sigma.png}
\includegraphics[width=70mm]{figures/gbu2-sigma.png}
\includegraphics[width=70mm]{figures/gbu3-sigma.png}
\caption{$\Gamma$ (solid line), $\mu_m=\mu_h\sigma$ (dashed line) and
$\mu$ (dash-dotted line) along the magnetic field line with
$\Psi=0.8\Psi_{\rm max}$
as a function of cylindrical radius for models
B1 (top left panel),
B2 (top right panel),
B3 (bottom left panel) and
B4 (bottom right panel).
}
\label{sigma}
\end{figure*}
\begin{figure*}
\includegraphics[width=55mm]{figures/gau-sigmam.png}
\includegraphics[width=55mm]{figures/gbu1-sigmam.png}
\includegraphics[width=55mm]{figures/gcu-sigmam.png}
\caption{$\Gamma$ (increasing functions of $r$) and
$\mu_m=\mu_h\sigma$ (decreasing functions of $r$)
along the magnetic field lines $\Psi=0.8\Psi_{\rm max}$ (solid lines),
$\Psi=0.5\Psi_{\rm max}$ (dashed lines) and $\Psi=0.2\Psi_{\rm max}$
(dash-dotted lines) in models A (left panel), B1 (middle panel) and C
(right panel).
}
\label{geom-effect}
\end{figure*}
\begin{figure*}
\includegraphics[width=77mm]{figures/gbu-sig.png}
\includegraphics[width=77mm]{figures/bpr2-b.png}
\caption{Left panel: Evolution of $\sigma$ along the magnetic field line
$\Psi=0.8\Psi_{\rm max}$ in models B1 (solid line), B2 (dashed line),
B3 (dash-dotted line), B4 (dotted line) and B5 (dash-triple-dotted line).
Right panel: Evolution of the bunching function ${\cal S}=\pi B_p r^2/\Psi$
for the same models along the same magnetic field line.
}
\label{sigma-ev}
\end{figure*}
The high efficiency of magnetic acceleration is not unique to models in
which the magnetic field lines rotate uniformly. Fig.~\ref{cross2}, in
which the results for model B1 are compared with those for model D,
shows that equally effective acceleration is achievable in the case of a
differentially rotating source.
The geometry of the bounding wall has a pronounced effect on the
acceleration efficiency, as demonstrated by Fig.~\ref{geom-effect}. A
larger value of the power-law index $a$ in the shape function $z\propto
r^a$ corresponds to a more rapidly rising function $\Gamma(r/r_{\rm
lc})$ along a given magnetic flux surface $\Psi=$const. Whereas in the
model B1 ($a=3/2$) the acceleration slows down only after the
equipartition point, in model A ($a=1$) this occurs much earlier and, as
a result, equipartition between magnetic and kinetic energy is reached
only near the jet axis. Equipartition is not reached in model C ($a=2$)
either (see Fig.~\ref{geom-effect}), but for a different reason. Due to
the higher degree of external collimation, this jet eventually becomes
very thin. This makes our simulation increasingly expensive and we are
forced to terminate it before reaching sufficiently large jet radii.
(Moreover, the computational errors are accumulated over a longer path
along the jet and would become rather high if we continued.) However,
Fig.~\ref{geom-effect} shows that in this model the Lorentz factor is a
faster growing function of cylindrical radius compared to model B1.
Finally, in model E ($a=2/3$) we consider a jet propagating in a channel
with a progressively diverging wall, which in practice may correspond to
the polar funnel of a thick accretion disc \citep[e.g.][]{PW80}. In this
case the jet eventually becomes detached from the wall and then expands
as a conical outflow (Fig.~\ref{geu1}). The acceleration rate is similar
to that of model A (see Fig.~\ref{geu2}).
The initially hot jet, model B2H, is subject to both magnetic and
thermal acceleration, so, as expected, the Lorentz factor in this case
grows faster compared to the corresponding cold jet (see the right panel
of Fig.~\ref{gbut}). But a closer inspection reveals that the
acceleration process exhibits a new mode of behaviour in this case (one
that was, however, found before in semi-analytic self-similar solutions;
see \citealt{VK03b}). It is seen that a significant fraction of thermal
energy is at first converted into Poynting flux. The middle panel of
Fig.~\ref{gbut} shows that the Poynting-to-mass flux ratio $\mu_m c^2$
grows until $r\simeq 10^2r_{\rm lc}$ and only then starts to
decline. However, this decrease is quite fast and the terminal value of
$\mu_m$ for the chosen magnetic flux surface ($\Psi=0.5\Psi_{\rm max}$)
is, in fact, lower than in the corresponding cold jet (model B2) shown
in the left panel of this figure, with a correspondingly higher
asymptotic Lorentz factor.
The distribution of the terminal bulk Lorentz factor across these two
jet models is shown in right panel of Fig.~\ref{gbut}. One can see that
on the axis the Lorentz factor of the hot jet is higher than that of the
cold jet by approximately the value of the initial thermal Lorentz factor,
$\Gamma_{t0}=55$. This is as expected given that magnetic acceleration
does not operate along the axis. However, at the wall the difference is
only half as large and in the middle of the jet it is higher than
40. These traits are evidently a consequence of the thermal-to-Poynting
energy conversion and its effect on the poloidal magnetic field
distribution, as discussed in Section~\ref{hot}.
Although the case of an unconfined wind may not be directly relevant to
GRB flows, which are inferred to undergo a fairly efficient collimation
(see Section~\ref{introduction}), it is certainly of interest to the
pulsar community. Furthermore, it is worth investigating from a purely
theoretical point of view. The acceleration details for this case (model
AW) are presented in Fig.~\ref{gpw}. The lower efficiency of magnetic
acceleration noted in the conical-wall case (model A), particularly near
the jet boundary, is even more pronounced in this instance. As can be
seen in the right panel of Fig.~\ref{gpw}, only $\simeq 5\%$ of the
Poynting flux injected at $\simeq 12^\circ$ to the equatorial direction has been
converted into kinetic energy by the time the cylindrical radius has grown to
$r=10^6r_{\rm lc}$.
Although, as shown in the left panel of Fig.~\ref{gpw}, the
efficiency is higher near the symmetry axis, the terminal Lorentz factor
there remains comparatively low because of the reduced effectiveness of
magnetic acceleration as the polar angle approaches zero.
\section{Analysis of the Results}
\label{theory}
\subsection{Efficiency of magnetic acceleration}
\label{efficiency}
The steady-state structure of a magnetized relativistic outflow can be
understood by analysing the momentum equation. After the partial
integration described in Section~\ref{section_integrals}, two more
equations remain to be considered, corresponding to the two components
of the momentum equation in the poloidal plane. Since the main part of
the acceleration occurs in the super-Alfv\'enic region of the flow, it
is sufficient to examine only this regime. We further simplify the
discussion by taking the flow to be cold. Thermal effects, when
present, in any case only affect the initial acceleration region of the
flow; we consider them in Section~\ref{hot}. We now proceed to extend
the discussion in Paper~I by taking the $\Gamma\gg 1$ limit of the constituent
equations, appropriate for the ultrarelativistic flows simulated in the
present work, which enables us to derive analytic scalings.
For cold flows $\mu_h\approx \Gamma$ (equation~\ref{mu_h-def}), and from
equation~(\ref{kap-def}) one finds that $\Gamma\approx \mu-\mu_m$.
Substituting the electric current from equation~(\ref{I1}) into
equation~(\ref{mu_m-def}), we get
\begin{equation}
\mu_m \approx \frac{\Psi \Omega^2}{4 \pi^2 k c^3} \, {\cal S} \,,
\label{S}
\end{equation}
where
\begin{equation}
{\cal S}
= \frac{\pi r^2 B_p}{\int \bmath{B}_p \! \cdot \! d \bmath{S}}
= \frac{\pi r^2 B_p}{\Psi}
= \frac{r |\vgrad{\Psi} |}{2\Psi}\,.
\label{calS}
\end{equation}
Thus, the flow Lorentz factor can be written as
\begin{equation}
\Gamma\approx \mu- \frac{\Psi \Omega^2}{4 \pi^2 k c^3} \, {\cal S} \,.
\label{moment1}
\end{equation}
All the quantities except for ${\cal S}$ on the right-hand side of this
equation are field-line constants, so an increase in
$\Gamma$ along a field line necessarily requires ${\cal S}$ to decrease.
The function ${\cal S}$
is a measure of how bunched the poloidal field lines are --- indeed,
it is equal to the ratio of $B_p$ at some cylindrical radius $r$ along the
field line to the
mean magnetic field within that radius, $\Psi/\pi r^2$.
For example, for a flow confined within a sufficiently small
angle that satisfies $B_p\propto r^\lambda$, $\Psi\propto r^{\lambda+2}$ and
\begin{eqnarray}
{\cal S} = \frac{\lambda+2}{2}\,.
\nonumber
\end{eqnarray}
For a uniform distribution of $B_p$ this yields ${\cal S}=1$, whereas
one has ${\cal S}>1$ if $B_p$ increases with $r$ and ${\cal S}<1$
if it decreases. This shows that magnetic acceleration requires
a gradual concentration of magnetic flux in the central part of
the flow. In the case of a collimating flow this can be achieved through
a faster collimation of the inner magnetic flux surfaces than of the
outer ones, and in the case of a decollimating flow a faster
decollimation of the outer flux surfaces is required.
Fig.~\ref{bp} illustrates the concentration of magnetic flux toward the axis
in one of our simulations. In this case, at large distances
the poloidal magnetic field scales roughly as $B_p \propto r^{-1.2}$,
corresponding to ${\cal S}_\infty \sim 0.4$.
This is indeed the asymptotic value of ${\cal S}$, as shown in Fig.~\ref{S-ev}.
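The connection between the power-law index of $B_p$ and the asymptotic
value of ${\cal S}$ can be checked with a few lines of numerical
integration (an illustrative sketch; the radial range and resolution are
arbitrary):

```python
import numpy as np

lam = -1.2                         # B_p ~ r**lam, as measured for model B1
r = np.linspace(1.0e-6, 1.0, 200001)
Bp = r**lam

# Psi(r) = integral_0^r B_p(r') 2 pi r' dr', via a cumulative trapezoidal sum
integrand = 2.0 * np.pi * Bp * r
Psi = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))

# bunching function S = pi r^2 B_p / Psi; analytically (lam + 2)/2 = 0.4
S_outer = np.pi * r[-1]**2 * Bp[-1] / Psi[-1]
```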
\begin{figure}
\includegraphics[width=77mm]{figures/geu.png}
\caption{Colour image shows
$\log_{10}p_{\rm tot}$ (with the total pressure given by $p_{\rm tot} =
p + B_{\rm co}^2/8\pi$, where $B_{\rm co}$ is the comoving magnetic
field) and the contours show the magnetic field lines for model E.
In this model the light cylinder radius is $r_{\rm lc}=0.29$.
}
\label{geu1}
\end{figure}
\begin{figure}
\includegraphics[width=77mm]{figures/geu-lor.png}
\caption{
Lorentz factor along the magnetic field line with
$\Psi=0.5\Psi_{\rm max}$ for model A (solid line) and model E (dashed line).
}
\label{geu2}
\end{figure}
\begin{figure*}
\includegraphics[width=55mm]{figures/gbut-sigma-b.png}
\includegraphics[width=55mm]{figures/gbut-sigma-a.png}
\includegraphics[width=59mm]{figures/gbut-lor-cr.png}
\caption{Effects of thermal acceleration. Left panel: cold jet of model B2.
Middle panel: hot jet of model B2H (with $w_0/\rho_0 c^2=55$).
The lines show $\Gamma$ (solid line), $\mu$ (dash-dotted line),
$\mu_m=\mu_h\sigma$ (dashed line) and $(w/\rho c^2-1)\Gamma$ (dotted
line) along the magnetic
field line with $\Psi=0.5\Psi_{\rm max}$ as a function of cylindrical radius.
Right panel: Lorentz factor across the jet at $\eta=4\times10^6r_{\rm lc}$
for the cold jet of model B2 (solid line) and the hot jet of model B2H
(dashed line).}
\label{gbut}
\end{figure*}
\begin{figure*}
\includegraphics[width=80mm]{figures/gpw-lor.png}
\includegraphics[width=80mm]{figures/gpw-sigma.png}
\caption{Unconfined wind solution (model AW). Left panel:
Lorentz factor (increasing function) and
$\mu_h\sigma$ (decreasing function) along five different
magnetic field lines:
$\Psi=0.8\Psi_{\rm max}$ (solid line),
$\Psi=0.5\Psi_{\rm max}$ (dashed line),
$\Psi=0.2\Psi_{\rm max}$ (dash-dotted line),
$\Psi=0.1\Psi_{\rm max}$ (dotted line),
$\Psi=0.027\Psi_{\rm max}$ (dash-triple-dotted line),
the last line originating
from the same point at the inlet as the $\Psi=0.8\Psi_{\rm max}$ line of
model A.
Right panel:
$\Gamma$ (solid line), $\mu_h\sigma$ (dashed line) and
$\mu$ (dash-dotted line) along the magnetic field line with
$\Psi=0.8\Psi_{\rm max}$
as a function of cylindrical radius.
}
\label{gpw}
\end{figure*}
Equation~(\ref{moment1}) is a consequence of the momentum equation along
the flow. It shows how $\Gamma$ increases by the action of the $(1/c)
\bmath{J}_p \! \times \! \bmath{B}_\phi$ force when the function ${\cal
S}$ decreases along the flow, thereby demonstrating the intimate
connection between the acceleration efficiency and the evolution of the
poloidal shape of the flow.
In evaluating this efficiency we can use ${\cal S}_{\rm f}$, the value of
${\cal S}$ at the fast surface, as a convenient proxy for the initial
value of ${\cal S}$. This is because, for $\mu\gg 1$, $\Gamma$ remains
$\ll \mu$ on this surface \citep[e.g.][]{K04}. In this case the two
terms on the right-hand side of equation~(\ref{moment1}) are comparable,
and we obtain
\begin{eqnarray}
{\cal S}_{\rm f}=\frac{4 \pi^2 k \mu c^3}{\Psi \Omega^2} \,.
\nonumber
\end{eqnarray}
We can legitimately use equation~(\ref{moment1}) since the fast surface
lies well outside the light cylinder and hence is in the
super-Alfv\'enic domain for most of the simulated field lines. We now
utilize this equation to write the asymptotic Lorentz factor in the form
\begin{equation}
\Gamma_\infty \approx \mu (1-{\cal S}_\infty/{\cal S}_{\rm f})\, .
\label{G_infty}
\end{equation}
In our simulations ${\cal S}_{\rm f}\approx 0.9$ (see Figs.~\ref{sigma-ev}
and~\ref{S-ev}). This value reflects the adopted uniform distribution of
$B^{\hat{\eta}}$ at the inlet.\footnote{As we already noted in
Section~\ref{inlet_section}, we have experimented with other
distributions that put more flux near the axis and observed a quick
``uniformization'' of magnetic flux in the immediate vicinity of the
inlet under the action of magnetic pressure.} Beyond the Alfv\'en
surface the azimuthal magnetic field component becomes dominant, and its
hoop stress causes the inner flux surfaces to collimate faster than the
outer ones. As a result ${\cal S}$ decreases, attaining asymptotic values
${\cal S}_\infty \approx 0.25 - 0.4$ for paraboloidal jets
(see Figs.~\ref{sigma-ev} and~\ref{S-ev}).
\begin{figure}
\includegraphics[width=77mm]{figures/bpr2.png}
\caption{Evolution of the function ${\cal S} =\pi B_p r^2/\Psi$ along
the magnetic field line with $\Psi=0.5\Psi_{\rm max}$ in models A
(solid line), B1 (dashed line), C (dash-dotted line) and D (dotted line).
}
\label{S-ev}
\end{figure}
The implied asymptotic Lorentz factors thus satisfy
\begin{eqnarray}
\Gamma_\infty/\mu \approx 0.55 - 0.72\, ,
\nonumber
\end{eqnarray}
which are indeed the values reached by our simulated flows
(see Figs.~\ref{cross2}--\ref{geom-effect}).
This result indicates that $\ga 50\%$ of the initial Poynting flux
is converted into kinetic energy of bulk motion (see also
\citealp{V04dogl}).
The significantly lower efficiency found in our simulations of flows
inside conical and diverging funnels, down to $25\%$ near the boundary
(models A and E), is most likely due to the loss of causal
connection across the flow (see Section~\ref{causality}).
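The quoted efficiency range follows directly from equation~(\ref{G_infty}). The short sketch below is purely illustrative: it simply evaluates that equation with the values of ${\cal S}_{\rm f}$ and ${\cal S}_\infty$ quoted above (it is not simulation output).

```python
# Illustrative check of equation (G_infty): Gamma_inf/mu = 1 - S_inf/S_f.
# The numbers are those quoted in the text, not new simulation output.
S_f = 0.9                        # bunching function at the fast surface
for S_inf in (0.25, 0.40):       # asymptotic range for paraboloidal jets
    eff = 1.0 - S_inf / S_f
    print(f"S_inf = {S_inf:.2f}  ->  Gamma_inf/mu = {eff:.2f}")
# prints efficiencies of about 0.72 and 0.56, i.e. >~ 50 per cent
```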
\subsection{Power-law acceleration phase}
\label{power-law}
Next we analyse the trans-field component of the momentum equation.
The asymptotic form of the trans-field
equation in the highly relativistic limit is
\begin{equation}\label{transf}
\frac{\Gamma^2 r}{{\cal R}} \approx
\frac{ {\displaystyle
\left(\frac{2I}{\Omega B_p r^2} \right)^2
r \vgrad{\ln\left|\frac{I}{\Gamma}\right|}
\!\cdot\! \frac{\vgrad{\Psi}}{|\vgrad{\Psi}|}
} }{ {\displaystyle 1+ \frac{w}{\rho c^2} \frac{4
\pi \rho u_p^2}{B_p^2} \frac{r_{\rm lc}^2}{r^2}} }
-\Gamma^2\frac{r_{\rm lc}^2}{r^2}
\frac{\vgrad{r} \!\cdot\!\vgrad{\Psi}}{|\vgrad{\Psi}|}
\end{equation}
where ${\cal R}$ is the curvature radius of poloidal field lines
(see equation~16 and related discussion in \citealp{V04}).
The three terms of this equation are
the poloidal curvature term (left-hand side), the electromagnetic term
(first on the right-hand side), which is of order 1,
and the centrifugal term (second on the right-hand side).
This important equation, with the centrifugal term omitted,
was derived by \citet{CLB91}, \citet{2001ApJ...562..494L},
and \citet{2002ApJ...573L..31O}, while
\citet{Bog95}, \citet{2000AstL...26..208B},
and \citet{2003ApJ...592..321T}
derived the same equation with the centrifugal
term included but the poloidal curvature term omitted.
Well outside the light cylinder, where
$r\Omega \gg v^{\hat{\phi}}$ and $v\simeq c$, equations~(\ref{v_xi_b})
and~(\ref{v_phi_b}) imply
\begin{equation}
r B^{\hat{\phi}}=-\frac{1}{c} \Omega B_p r^2\, .
\end{equation}
\noindent
{}From this equation and equation~(\ref{I}) one finds that
\begin{equation}\label{I-Bp}
I=-\frac{1}{2} \Omega B_p r^2\, ,
\end{equation}
\noindent
where $B_p$ is the magnitude of the poloidal magnetic field.
Substituting this result into equation~(\ref{mu_m-def}) one also finds that
\begin{equation}\label{mu_new}
\mu_m =\frac{1}{4\pi}\frac{r^2}{r^2_{\rm lc}} \frac{B_p^2\Gamma}{\rho u_p^2}.
\end{equation}
Thus, in this regime one can rewrite equation~(\ref{transf}) as
\begin{equation}\label{transf1}
\frac{\Gamma^2 r}{{\cal R}} \approx
\frac{ {\displaystyle
r \vgrad{\ln\left|\frac{I}{\Gamma}\right|}
\!\cdot\! \frac{\vgrad{\Psi}}{|\vgrad{\Psi}|} } }
{ {\displaystyle 1+ \frac{\mu_h}{\mu_m} }}
-\Gamma^2\frac{r_{\rm lc}^2}{r^2}
\frac{\vgrad{r} \!\cdot\!\vgrad{\Psi}}{|\vgrad{\Psi}|} \,.
\end{equation}
In the magnetically dominated case, where $\mu_m\gg\mu_h$, order-of-magnitude
evaluation of the last two terms in this equation gives the useful result
\begin{equation}\label{transf2}
\frac{\Gamma^2 r}{{\cal R}} \approx 1
-\Gamma^2\frac{r_{\rm lc}^2}{r^2}\, .
\end{equation}
Depending on which term in equation~(\ref{transf}) can be neglected, we
can isolate the following three cases (ordered by increasing importance):
(i) If the electromagnetic part is negligible then the shape of the flow is
determined by the centrifugal term, resulting in a hyperbolic line shape,
a characteristic of ballistic motion
(see equation~20 and related discussion in \citealp{V04};
see also Sections~\ref{alphap>2}~and~\ref{A1.3}).
None of the end-states of our simulations has this property.
(ii) If the poloidal curvature term is negligible, the electromagnetic
and centrifugal terms balance each other. This is the case very
close to the rotation axis (inside the cylindrical core) as well as for a
quasi-conical flow like our model A and for paraboloidal flows with $a >
2$ as in our model F (see Section~\ref{pressure}). In this case
equation~(\ref{transf2}) gives
\begin{equation}
\Gamma \simeq \frac{r}{r_{\rm lc}}\,.
\label{Gamma2}
\end{equation}
Using different methods, this ``linear acceleration case'' was found by
\citet{2002ApJ...566..336C}, who analysed radial force-free flows beyond
the light cylinder (and hence their analysis holds in the regime between
the Alfv\'en and the fast-magnetosonic surfaces), and by \citet{BKR98},
who perturbed a quasi-conical flow (and found that $\Gamma \approx
r/r_{\rm lc}$ applies in the sub--fast-magnetosonic regime). Our results
for models A and F agree with the scaling $\Gamma \approx r/r_{\rm lc}$;
see the top left panel of Fig.~\ref{lor-r}.
(iii) If the centrifugal term is negligible then the
shape of the flow is determined by the electromagnetic force.
This regime applies to the case of paraboloidal wall with $a \le 2$ (see
Section~\ref{pressure}).
Equation~(\ref{transf2}) implies that in this case the radius of curvature
of poloidal field lines is
\begin{equation}
{\cal R} \approx \Gamma^2 r\, .
\label{R-curv2}
\end{equation}
Now consider a field line of the shape $z\propto r^b$.
(In what follows we use the exponent $b$ to
denote the power-law index that describes the shape of a given magnetic
field line, whereas the exponent
$a$ is reserved for the power-law index that gives the shape of the funnel
wall in our numerical models. Note that the interior field lines in
these models have $b$ slightly larger than $a$, although $b
\rightarrow a$ as the wall is approached; see Fig.~\ref{par-b}.)
The curvature radius of such a line satisfies
\begin{eqnarray}
\frac{r}{\cal R}= - r \left(\frac{B_z}{B_p}\right)^3
\frac{ \partial^2 r }{ \partial z^2 }
\approx \frac{ b-1 } {b^2} \left(\frac{r}{z}\right)^2 \,,
\label{R-curv}
\end{eqnarray}
where the final form is valid when $B_p \approx B_z$. Combining this
with equation~(\ref{R-curv2}) we get
\begin{equation}
\Gamma \sim \frac{b}{\sqrt{b-1}}\frac{z}{r} \propto r^{b-1} \propto
z^{(b-1)/b}
\label{G_z_r}
\end{equation}
(see also \citealp{VK03b}), which applies when the power-law index lies
in the range $1<b\le 2$ and shows that the spatial growth of the Lorentz
factor is also a power law in this case (in either $r$ or $z$).
Assuming that the flow is not too collimated within the light cylinder,
so that $z_{\rm lc}\simeq r_{\rm lc}$ for most of the field lines
(an assumption that is well satisfied in our numerical models), we can
write the above result in the following useful forms:
\begin{equation}
\Gamma \simeq (r/r_{\rm lc})^{b-1} \quad\text{or}\quad \Gamma \simeq (R/r_{\rm
lc})^{(b-1)/b}.
\label{G-scaling}
\end{equation}
This acceleration regime operates in our $1<a\le 2$ numerical models
before the flow reaches approximate equipartition,
as can be verified by inspecting Figs.~\ref{lor-r} and~\ref{lor-R}.
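The equivalence of the two forms in equation~(\ref{G-scaling}) is easily verified numerically. The sketch below assumes a field line $z=r_{\rm lc}(r/r_{\rm lc})^b$ with $R\approx z$ far from the source (appropriate for a collimated flow); the shape exponent and sample radii are arbitrary illustrative values.

```python
# Sketch verifying the equivalence of the two forms in equation (G-scaling):
# for a field line z = r_lc*(r/r_lc)**b with z_lc ~ r_lc, at large distances
# R ~ z, so (R/r_lc)**((b-1)/b) coincides with (r/r_lc)**(b-1).
# b and the sample radii are arbitrary illustrative values.
r_lc, b = 1.0, 1.5
for r in (1.0e2, 1.0e4, 1.0e6):
    z = r_lc * (r / r_lc)**b            # field-line shape
    g_r = (r / r_lc)**(b - 1.0)         # first form of (G-scaling)
    g_R = (z / r_lc)**((b - 1.0) / b)   # second form, with R ~ z
    assert abs(g_r - g_R) < 1e-6 * g_r
```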
\begin{figure*}
\includegraphics[width=55mm]{figures/gau1-b.png}
\includegraphics[width=55mm]{figures/gbu2-b.png}
\includegraphics[width=55mm]{figures/gbut2-b.png}
\caption{The exponent $b$ of the poloidal shape function $z\propto r^b$
for models A (left panel), B2 (middle panel) and B2H (right panel) across the
jet. For model A the depicted cross sections are at
$R=10$ (solid line),
$R=10^2$ (dashed line),
$R=10^3$ (dash-dotted line),
$R=10^4$ (dotted line) and
$R=10^5$ (dash-triple-dotted line).
For models B2 and B2H the plotted cross sections are at
$\eta=5\times10^2$ (thin solid line),
$\eta=5\times10^3$ (dashed line),
$\eta=5\times10^4$ (dash-dotted line),
$\eta=5\times10^5$ (dotted line),
$\eta=5\times10^6$ (dash-triple-dotted line) and
$\eta=5\times10^7$ (thick solid line).
}
\label{par-b}
\end{figure*}
\begin{figure*}
\includegraphics[width=65mm]{figures/gau-lor.png}
\includegraphics[width=65mm]{figures/gbu1-lor.png}
\includegraphics[width=65mm]{figures/gcu-lor.png}
\includegraphics[width=65mm]{figures/gb-lor.png}
\includegraphics[width=65mm]{figures/gfu-lor.png}
\includegraphics[width=65mm]{figures/gbut-lor.png}
\caption{Lorentz factor along three different magnetic field lines of
models A (top left panel), B1 (top right panel), C (middle left panel),
D (middle right panel), F (bottom left panel), and B2H (bottom right panel)
as a function of the cylindrical radius $r$.
Solid line: $\Psi=0.8\Psi_{\rm max}$;
dashed line: $\Psi=0.5\Psi_{\rm max}$;
dash-dotted line: $\Psi=0.2\Psi_{\rm max}$.
}
\label{lor-r}
\end{figure*}
\begin{figure*}
\includegraphics[width=65mm]{figures/gau-lor1.png}
\includegraphics[width=65mm]{figures/gbu1-lor1.png}
\includegraphics[width=65mm]{figures/gcu-lor1.png}
\includegraphics[width=65mm]{figures/gb-lor1.png}
\includegraphics[width=65mm]{figures/gfu-lor1.png}
\includegraphics[width=65mm]{figures/gbut-lor1.png}
\caption{Lorentz factor along three different magnetic field lines of
models A (top left panel), B1 (top right panel), C (middle left panel),
D (middle right panel), F (bottom left panel), and B2H (bottom right panel)
as a function of the spherical radius $R$.
Solid line: $\Psi=0.8\Psi_{\rm max}$;
dashed line: $\Psi=0.5\Psi_{\rm max}$;
dash-dotted line: $\Psi=0.2\Psi_{\rm max}$.
}
\label{lor-R}
\end{figure*}
The direct dependence of the flow acceleration on the poloidal curvature
of the magnetic field lines in the regime (iii) leads to an anti-correlation
between the jet Lorentz factor and its opening angle. For a line shape
$z\propto r^b$ ($1<b\le 2$) we find
\begin{equation}
\Gamma\tan\theta_{\rm v}=1/\sqrt{b-1}\,,
\label{lor-theta-eq}
\end{equation}
where $\theta_{\rm v}\equiv \arctan(dr/dz)$ is the local half-opening
angle of the magnetic flux surface. Fig.~\ref{lor-theta-fig} shows the
variation of $\Gamma\tan\theta_{\rm v}$ along the flux surface
$\Psi=0.8\Psi_{\rm max}$ of model B1. One can see that this product is
indeed close to $1/\sqrt{b-1}$. It is, however, not exactly a constant,
for the following reasons: the curvature acceleration regime is not
really applicable at small and large spherical radii, the
electromagnetic term in equation~(\ref{transf}) is not exactly equal to
1, and the power-law index $b$ varies along the flow. The figure
nevertheless indicates that equation~(\ref{lor-theta-eq}) provides a
useful estimate of the relationship between $\Gamma$ and $\theta_{\rm v}$.
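The constancy of this product can also be checked algebraically: $\tan\theta_{\rm v}=dr/dz=r/(bz)$ for a line $z\propto r^b$, and combining this with equation~(\ref{G_z_r}) eliminates the position dependence. The sketch below verifies this with an arbitrary illustrative value of $b$.

```python
import math

# Illustrative check of equation (lor-theta-eq): for a line z ~ r**b the
# local half-opening angle satisfies tan(theta_v) = dr/dz = r/(b*z), and
# with Gamma ~ (b/sqrt(b-1)) * z/r from equation (G_z_r) the product
# Gamma*tan(theta_v) reduces to 1/sqrt(b-1), independent of position.
b = 1.7                              # assumed illustrative shape exponent
for r in (10.0, 1.0e3, 1.0e5):       # positions in units of r_lc
    z = r**b
    tan_theta_v = r / (b * z)
    gamma = (b / math.sqrt(b - 1.0)) * z / r
    assert abs(gamma * tan_theta_v - 1.0 / math.sqrt(b - 1.0)) < 1e-9
```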
\begin{figure*}
\includegraphics[width=65mm]{figures/lortheta.png}
\includegraphics[width=65mm]{figures/equip.png}
\caption{
Left panel: variation of $\Gamma\tan\theta_{\rm v}$ along
the flux surface $\Psi=0.8\Psi_{\rm max}$ of model B1.
Right panel: the diamonds show the equipartition radius (where the
Poynting and kinetic energy fluxes are equal) along $\Psi=0.8\Psi_{\rm
max}$ as a function of the magnetization parameter $\mu$ for models
B1--B5. The solid line shows the function
$\log_{10}(r/r_{\rm lc}) = 2\log_{10}(\mu/2)$.
}
\label{lor-theta-fig}
\end{figure*}
As expected from our discussion in Section~\ref{efficiency} of the close
connection between the acceleration efficiency and the evolution of the
poloidal field-line shape, the trans-field force balance equation, which
determines the variation of the flux-surface shape along the flow,
is seen to provide information on how fast the Lorentz factor increases
with distance from the source.
For all shape functions $z\propto r^b$ with $1<b\le 2$, the corresponding
power-law dependence of $\Gamma$ leads to a high ($\ga 50\%$)
magnetic-to-kinetic energy conversion efficiency over astrophysically
relevant distances. Using equation~(\ref{G-scaling}) we find that
equipartition between the Poynting and kinetic energy fluxes is attained at
a cylindrical radius
\begin{equation}
r_{\rm eq}=r_0 \left(\frac{\mu}{2\Gamma_0}\right)^{1/(b-1)}\,.
\label{eq1}
\end{equation}
After the substitutions $r_0=r_{\rm lc}$ and $\Gamma_0=1$, this equation
reads
\begin{equation}
r_{\rm eq}=r_{\rm lc} \left(\frac{\mu}{2}\right)^{1/(b-1)}\,,
\label{equip1}
\end{equation}
which, in fact, agrees very well with our results for models B
(see Fig.~\ref{lor-theta-fig}).
In terms of the spherical radius,
assuming again that $R_{\rm lc}\simeq r_{\rm lc}$, we can write this
expression as
\begin{equation}
R_{\rm eq}=r_{\rm lc} \left(\frac{\mu}{2}\right)^{b/(b-1)}\,.
\label{equip2}
\end{equation}
For $b>2$ the corresponding relations are (using equation~\ref{Gamma2})
$r_{\rm eq}=(\mu/2) r_{\rm lc} $
and $R_{\rm eq}=(\mu/2)^b r_{\rm lc} $.
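As a worked example of equations~(\ref{equip1}) and~(\ref{equip2}), the sketch below evaluates the equipartition radii for an assumed, purely illustrative value of $\mu$ (it is not tied to a particular model):

```python
# Worked example of equations (equip1) and (equip2).  The value of mu is
# an assumed illustrative one, not taken from a particular model.
mu, b = 10.0, 1.5                       # energy-flux parameter; shape z ~ r**b
r_eq = (mu / 2.0)**(1.0 / (b - 1.0))    # cylindrical equipartition radius, in r_lc
R_eq = (mu / 2.0)**(b / (b - 1.0))      # spherical equipartition radius, in r_lc
print(r_eq, R_eq)                       # -> 25.0 125.0 (light-cylinder radii)
```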
The derived scaling for the Lorentz factor can be used to find the
behaviour of other quantities. For example,
for the main part of the flow in which the Poynting flux dominates
the energy flux, one has $\sigma \Gamma \approx \mu$ and
hence, for $1<b\le 2$,
\begin{equation}
\sigma \approx \mu/\Gamma \propto r/z \propto r^{-(b-1)} \,.
\end{equation}
The predicted behaviour is indeed seen in the left panel
of Fig.~\ref{sigma-ev}.
This figure further shows that the ``self similar'' structure of the
magnetization curves extends also beyond the equipartition radius, where
they flatten out; in particular, they do not cross each other even in
that regime. Consequently, the magnetization beyond the turning point of a
curve is lower for smaller inlet values, consistent with our finding
that the efficiency $\sim 1/(1+\sigma_\infty)$ of magnetic-to-kinetic
energy conversion in cold flows decreases with increasing initial
magnetization.
The high acceleration efficiencies attained by our simulated flows
appear to be inconsistent with the conclusion of \citet{CLB98} that a
transition to a low-$\sigma$ configuration cannot occur gradually in
regions well beyond the light cylinder, where the flow has become
ultra-relativistic. Their analysis was, however, based in part on an
estimate of the change in the angle $\theta_{\rm v}$ between the poloidal
flow and the rotation axis as one moves through a length $\Delta \ell$
along the flow (see text after equation~14 in their paper): this
estimate is not generally valid since it assumes that $\Delta \ell \sim
\Delta r$, which only applies to quasi-radial flows.
If instead we use $\Delta \ell \sim \Delta z$ in equation~(14) of
\citet{CLB98} and concentrate on paraboloidal flows ($z \propto r^b$)
with $b \le 2$,
we get $\Delta \theta_{\rm v} \sim \Delta (z/r) b/ \Gamma^2 (b-1)$,
which yields the scaling $\Gamma \propto z/r$ found above. On the other
hand, the lower acceleration efficiency exhibited by our model A, in
which the flow morphology is quasi-radial (see Figs.~\ref{model-a},
\ref{geom-effect} and~\ref{S-ev}), appears to be consistent with the
\citet{CLB98} inference of logarithmic collimation and slower
acceleration. We note in this connection that, beyond the end of the
power-law acceleration phase analysed in this subsection, it is possible
to have an additional, logarithmic acceleration regime in which
potentially up to 100\% of the Poynting flux could be converted into
matter kinetic energy flux (see \citealp{V04} and references therein).
However, this acceleration is too slow to be of astrophysical interest
since it requires exponentially large distances for completion.
\subsection{Dependence on the external pressure distribution}
\label{pressure}
Although we have chosen, for numerical convenience, to prescribe the
shape of the funnels that guide our simulated flows, in reality the
boundary shape of pressure-confined flows will be determined by the
ambient pressure distribution, $p_{\rm ext}$, and we expect a one-to-one correspondence
between the shape of the boundary and the parameters of the confining
medium, enforced through the pressure-balance condition at the
boundary, $p_{\rm int}=p_{\rm ext}$. Here we analyse this issue for
the asymptotic region of a magnetically accelerated flow, where
the internal jet pressure, $p_{\rm int}$, is dominated by the contribution
due to the azimuthal component of magnetic field,
$p_{\rm int}= p+B_{\rm co}^2/8\pi \simeq (B^{\hat\phi})^2/8\pi\Gamma^2$.
Thus,
\begin{eqnarray}\nonumber
\Gamma^{-2} = \frac{8\pi p_{\rm ext}}{(B^{\hat\phi})^2} \,.
\end{eqnarray}
In the following we assume that the external pressure distribution is a
power-law
\begin{eqnarray}\nonumber
p_{\rm ext} = p_{\rm ext,lc} (z/z_{\rm lc})^{-{\alpha}} \,,
\end{eqnarray}
which is consistent with the funnel shape $z \propto r^a$ adopted in
our numerical simulations. Moreover, since $\mu_m \propto I \propto r B^{\hat\phi}$
(see equations \ref{I} and \ref{mu_m-def}) is a weak function of distance,
we may assume that at the jet
boundary $B^{\hat\phi}= B^{\hat\phi}_{\rm lc}(r/r_{\rm lc})^{-1}$.
Then we have
\begin{equation}
\Gamma^{-2}=C x^2 Z^{-{\alpha}} \,,
\label{C1}
\end{equation}
where $x\equiv r/r_{\rm lc}$ and $Z\equiv z/r_{\rm lc}$ are the dimensionless
coordinates of the jet boundary and
\begin{equation}
C=\left( \frac{8\pi p_{\rm ext} }{(B^{\hat\phi})^2} \right )_{\rm lc}
\left(\frac{z_{\rm lc}}{r_{\rm lc}}\right)^{\alpha}
= \frac{(z_{\rm lc}/r_{\rm lc})^{\alpha}}{\Gamma_{\rm lc}^2} \,.
\label{C2}
\end{equation}
It is easy to see that $C$ is a positive dimensionless constant
of the order of 1. Provided that $dr/dz\ll 1$ we can approximate the
curvature radius of the jet boundary via
\begin{equation}
{\cal R}^{-1} \approx - \frac{d^2 r}{dz^2}
= - \frac{1}{r_{\rm lc}}\frac{d^2 x}{dZ^2}
\label{R-curv1}
\end{equation}
and rewrite equation~(\ref{transf2}) as
\begin{equation}
x \frac{d^2 x}{dZ^2} +\frac{1}{\Gamma^2} - \frac{1}{x^2} \approx 0 \,.
\label{transf3}
\end{equation}
After the substitution of $\Gamma$ from equation~(\ref{C1}) this
yields an ordinary differential equation for the jet boundary
\begin{equation}
\frac{d^2 x}{dZ^2} +C \frac{x}{Z^{\alpha}} - \frac{1}{x^3} =0 \,.
\label{ODE1}
\end{equation}
The first term on the left-hand side of equation~(\ref{ODE1})
represents the effect of poloidal curvature, the second
is the electromagnetic term and the third is the centrifugal term.
Equation~(\ref{ODE1}) can be
solved in closed form in various limits, as described in
Appendix~\ref{appA}. Here we simplify the
discussion by looking for almost power-law solutions
\begin{equation}
x=K^{-1} Z^{1/a}\,,
\label{K}
\end{equation}
with $K$ a positive constant and $a$ varying very slowly.
Substituting this ansatz into
equation~(\ref{ODE1}) and ignoring all terms including derivatives of $a$,
we obtain
\begin{equation}
\frac{1}{a} \left( \frac{1}{a} -1 \right)
+C Z^{2-{\alpha}} - K^4 Z^{2-4/a} =0 \,.
\label{ODE2}
\end{equation}
We now proceed to analyse this equation for different values of the
exponent ${\alpha}$.
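Equation~(\ref{ODE1}) is also straightforward to integrate numerically. The sketch below is illustrative only: the initial data at $Z=1$ are assumed values, not taken from the simulations. For $\alpha=2$ and $C<1/4$ it recovers the asymptotic slope $1/a=(1+\sqrt{1-4C}\,)/2$ predicted by the analysis of equation~(\ref{ODE2}) below.

```python
import math

# Illustrative numerical integration of equation (ODE1),
#   d2x/dZ2 + C*x/Z**alpha - 1/x**3 = 0,
# for alpha = 2 and C < 1/4.  The initial data (x and dx/dZ at Z = 1)
# are assumed values chosen for the sketch, not simulation output.
def boundary_slope(C, alpha=2.0, Z0=1.0, x0=1.0, dx0=0.5,
                   Z1=1.0e4, n=400_000):
    """Integrate the boundary ODE with small log-spaced steps and
    return the logarithmic slope d(ln x)/d(ln Z) at the end point."""
    h = (math.log(Z1) - math.log(Z0)) / n
    Z, x, dx = Z0, x0, dx0
    for _ in range(n):
        step = Z * math.expm1(h)                    # current step in Z
        ddx = -C * x / Z**alpha + 1.0 / x**3        # from (ODE1)
        x += dx * step + 0.5 * ddx * step**2
        dx += ddx * step
        Z += step
    return dx * Z / x

C = 0.16
slope = boundary_slope(C)
predicted = (1.0 + math.sqrt(1.0 - 4.0 * C)) / 2.0  # 1/a for C < 1/4
print(slope, predicted)   # the two agree to within a few per cent
```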
\subsubsection{${\alpha} > 2$}
\label{alphap>2}
In this case the second term on the left-hand side of
equation~(\ref{ODE2}) vanishes as $Z\rightarrow \infty$ and the only
acceptable asymptotic value of $a$ is unity. Indeed, for $a>2$ the third
term diverges, for $a=2$ it is constant but negative and so is the first
term, for $a<2$ it vanishes and so must the first one, implying $a\to 1$.
Thus, asymptotically the boundary adopts a conical shape.
\begin{itemize}
\item When ${\alpha} <4$ the electromagnetic term of equation~(\ref{ODE2})
dominates over the centrifugal term, and thus $a\to 1^+$ (since
the first term must be negative in order to cancel the second). The
boundary shape is therefore {\em paraboloidal} (with conical
asymptotes). An explicit solution of equation~(\ref{ODE1}) in this limit
is given in Appendix~\ref{appA}.
\item When ${\alpha} >4$ the centrifugal term dominates over the
electromagnetic term in equation~(\ref{ODE2}) and thus $a\rightarrow
1^-$ (since the first term must be positive in order to cancel the
third). This is case (i) of our analysis of equation~(\ref{transf1}),
which corresponds to a {\em hyperboloidal} shape (with conical
asymptotes), as demonstrated in Appendix~\ref{appA} through an explicit
solution of equation~(\ref{ODE1}) in this limit.
\item
When ${\alpha} = 4$ one can obtain a solution that is conical ($a=1$) from
the start, with $K^4 = C$. This solution corresponds to our conical
model A during the acceleration phase, when $\Gamma \propto r$ (see
equation~\ref{Gamma2}). Fig.~\ref{p-ext} verifies the predicted scaling
($p_{\rm ext} \propto Z^{-4}$) and also shows that, after the growth of
$\Gamma$ saturates, a conical shape can be maintained only if the
ambient pressure scales as $z^{-2}$, which follows directly from the
scaling $p_{\rm int} \propto \Gamma^{-2} r^{-2}$ discussed at the
beginning of this subsection.
\end{itemize}
In summary, for ${\alpha} > 2$ the boundary does not simply adjust to the
ambient pressure profile but instead asymptotes to a conical shape.
This result is consistent with the expectation that in this case the
transverse expansion time of the jet becomes shorter than the
propagation time of magnetosonic waves across the flow, leading to a
loss of causal connectivity and hence to a ``free'' ballistic expansion
in a cone (\citealt{BBR84}; see also Section~\ref{causality}). This is
essentially the behaviour exhibited by our Model E (see Fig.~\ref{geu1}).
\subsubsection{${\alpha} = 2$}
\label{alphap=2}
In this case the second term on the left-hand side of
equation~(\ref{ODE2}) is a positive constant. This implies that
$1<a\le 2$. (Indeed, for $a>2$ the third term diverges and cannot be
balanced. For $a\le 1$ it vanishes, but the first term is then non-negative
and hence cannot balance the second one.)
We can distinguish between the following two cases:
\begin{itemize}
\item
$a=2$ --- the power-law solution with $K^4=C-1/4$ is exact.
This implies $C>1/4$.
\item $1< a<2$ --- the third term becomes negligible at large $Z$
and balancing of the first two terms requires $a\to2/(1+\sqrt{1-4C})$.
This implies $C\le1/4$.
\end{itemize}
In other words, for $C< 1/4$ the centrifugal term is negligible and
the resulting shape is $Z = (z_{\rm lc}/r_{\rm
lc})x^{2/(1+\sqrt{1-4C})}$, whereas for $C>1/4$ the centrifugal term is
comparable to the other two terms and the solution is $Z=\sqrt{C-1/4} \
x^2$. Fig.~\ref{p-ext} verifies that the confining pressure in our
simulated flows scales as $Z^{-2}$ irrespective of the precise
value of $a$ so long as the shape exponent lies in the range $1<a\le
2$. The figure also corroborates the prediction that the $Z^{-2}$
scaling is attained only gradually when $a<2$ (models B and D,
corresponding to $a=3/2$) but that it is present almost from the start
when $a=2$ (model C).
As shown in Appendix~\ref{appA}, the asymptotic solution for $C=1/4$ is
$x=Z^{1/2} (C_1+C_2 \ln{Z})$, where $C_1$ and $C_2 \neq 0$ are constants.
(We kept the constant $C_1$
to accommodate the possibility that the solution extends all the way
down to the light-cylinder radius, where $Z\approx 1$.) This
solution is similar to the $C<1/4$ solutions of equation~(\ref{ODE1})
in having a negligible centrifugal contribution.
Although all the funnel shapes whose power-law indices lie in the range
$1<a\le 2$ correspond to a single exponent (${\alpha} = 2$) of the
confining pressure distribution, there is nevertheless a one-to-one
match between a given pressure distribution and the resultant funnel
shape. This is because both the power-law index ${\alpha}$ {\em and} the
magnitude of the confining pressure (as expressed in relation to the
internal magnetic pressure at the light-cylinder radius by the parameter
$C$; see equation~\ref{C2}) play a role in determining the functional
form of the boundary: when $C< 1/4$ the magnitude of $C$ fixes the
exponent of the boundary paraboloid, whereas when $C>1/4$ it fixes the
normalization constant $K$.
The parameter $C$ is evaluated at the effective base of the asymptotic
region of the flow and it
conveys physical properties (e.g. $z_{\rm lc}$ and $\Gamma_{\rm lc}$; see
equation~\ref{C2}) imprinted on the outflow before it reaches the
asymptotic regime. Thus, the asymptotic shape of a jet
propagating through a power-law pressure distribution is determined both
by the exponent of that distribution and by the evolution of the outflow
before entering the asymptotic region.
\subsubsection{${\alpha} < 2$}
\label{alphap<2}
In this case the second term on the left-hand side of
equation~(\ref{ODE2}) diverges as $Z\rightarrow \infty$. To balance
this term, the third term must also diverge in this limit, which implies
that $a=4/{\alpha} > 2$ and $C=K^4$. Thus, the jet shape is paraboloidal,
$Z=C^{1/{\alpha}} x^{4/{\alpha}}$. As in the ${\alpha} = 2$ case,
both the parameters ${\alpha}$ and $C$ are needed to uniquely fix the
functional form of the jet shape. For ${\alpha}=4/3$ we have $a=3$,
the funnel shape index of our numerical model F.
Fig.~\ref{p-ext} verifies that the boundary pressure for this model
indeed scales as $Z^{-4/3}$.
We can collect the results derived in this subsection into a concise
description of the correspondence between the exponent
${\alpha}$ of the ambient pressure distribution and the exponent $a$ of
the asymptotic jet shape:
\begin{itemize}
\item ${\alpha} < 2\ \Leftrightarrow\ a=4/{\alpha} >2\, ,$
\item ${\alpha} = 2\ \Leftrightarrow\ 1<a\le2\, ,$
\item ${\alpha} > 2\ \Leftrightarrow\ a = 1\, .$
\end{itemize}
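This correspondence can be restated compactly in algorithmic form; the helper below is purely illustrative (the function name is ours, not part of any published code):

```python
# Purely illustrative restatement of the alpha <-> a correspondence
# derived above; jet_shape_exponent is our own (hypothetical) name.
def jet_shape_exponent(alpha):
    """Asymptotic shape exponent a (z ~ r**a) of a jet confined by an
    ambient pressure p_ext ~ z**(-alpha)."""
    if alpha > 2.0:
        return 1.0          # conical asymptote (loss of causal contact)
    if alpha == 2.0:
        return None         # 1 < a <= 2, fixed by the constant C (eq. C2)
    return 4.0 / alpha      # confined paraboloid with a > 2

print(jet_shape_exponent(4.0 / 3.0))   # model F: a = 3 (up to rounding)
```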
Similar results for the behaviour of the ambient pressure in a confined
jet (${\alpha} \le 2$) were found by \citet{TMN08} in the
force-free limit, which is consistent with the fact that our expressions
for the spatial profile of $\Gamma$ were obtained in effectively the same
approximation.
As we have seen, ${\alpha} = 2$ leads to the asymptotic balance between
the electromagnetic and poloidal curvature forces (regime iii)
whereas ${\alpha} < 2$ leads to the balance between the electromagnetic and
centrifugal forces (regime ii; see Section~\ref{power-law}).
These regimes are characterized by different evolution of many flow
parameters, which may have observable consequences (see also
Section~\ref{application}). For example, in regime (iii)
the product $\Gamma\tan\theta_{\rm v}$ is predicted to be a constant
${\cal{O}}(1)$ in the acceleration region
(see equation~\ref{lor-theta-eq}), whereas in regime (ii) it is
expected to decrease with distance as $Z^{-(1-2/b)}$, with $b$ being
slightly larger than $a$ due to the stronger collimation of
the flow inside the jet.
Lorentz factor in regime (ii) is given by $\Gamma \propto r$
(equation~\ref{Gamma2}) rather than by the $\Gamma \propto r^{b-1}$
scaling of regime (iii). However, in practice this may not translate
into a significant difference in how fast the jet accelerates
(for example, $\Gamma \approx z^{1/3}$ for both the
${\alpha}=4/3$ and ${\alpha}=2$, $b=3/2$ cases).
After the end of the acceleration the internal pressure scales as $r^{-2}$
(since $\Gamma=\Gamma_\infty=$ const).
If the external pressure continues to decline as $z^{-{\alpha}}$,
the pressure balance implies that
the radial coordinate $r$ increases faster than it did
during the acceleration phase.
The new flow shape is
$Z=C^{1/{\alpha}} \Gamma_\infty^{2/{\alpha}} x^{2/{\alpha}}$,
as follows from equation~(\ref{C1}). For example, in the cases
${\alpha}=2$, $1<a<2$, the flow becomes radial
and the opening angle of the jet remains constant.
The quantity $\Gamma \tan \theta_{\rm v}$ is also constant and equal to
$C^{-1/2}=a/\sqrt{a-1}$ (using the relation between $C$ and $a$,
see Section~\ref{alphap=2}).
Thus, $\Gamma \tan \theta_{\rm v}$ is $a$ times larger compared to
its value during the acceleration phase
(see equation~\ref{lor-theta-eq}).\footnote{
The change of this quantity is smooth and happens as the
function $\Gamma(Z)$ changes from a power law to a constant.
Equation~(\ref{C1}), written as
$x= C^{-1/2} Z^{{\alpha}/2} \left[\Gamma(Z) \right]^{-1}$,
gives $\Gamma \tan \theta_{\rm v} = \Gamma dx/dZ =
C^{-1/2} ({\alpha}/2 - d\ln \Gamma / d \ln Z) Z^{{\alpha}/2-1}$.
In the cases with ${\alpha}=2$, $1<a<2$
the slope $d\ln \Gamma / d \ln Z$ changes from
$1-1/a$ during the main part of the acceleration phase
(see equation~\ref{G_z_r}) to zero after it ends. As a result,
$\Gamma \tan \theta_{\rm v}$ changes from
$1/\sqrt{a-1}$ to $a/\sqrt{a-1}$.
}
\begin{figure}
\includegraphics[width=77mm]{figures/pext.png}
\caption{Evolution of total pressure along the jet boundary in
models A (solid line), B1 (dashed line), C (dash-dotted line), D
(dotted line) and F (dash-double-dotted line).
}
\label{p-ext}
\end{figure}
\subsection{Magnetic acceleration and causality}
\label{causality}
We have found that the acceleration efficiency is smaller when the wall
has a conical shape (model A) than in the cases when its shape is
paraboloidal (see Fig.~\ref{geom-effect}). In the conical-wall case the flow
attains equipartition only along field lines that are close to the
rotation axis ($\Psi \le 0.2 \Psi_{\rm max}$). In accordance with our
discussion in Section~\ref{efficiency}, the variation in the
acceleration efficiency is tied to the difference in the degree of the
collimation across the outflow, as seen in Fig.~\ref{par-b}:
Only for small values of $\Psi$ does the exponent $b$ become
significantly larger than 1, corresponding to the innermost field lines
bending toward the rotation axis, which implies that the bunching
function ${\cal S}$ decreases along this portion of the outflow. For
collimation to occur, however, there must be causal connectivity across
the outflow.
A related discussion of this issue can be found in \citet{ZBB08}.
However, the simpler flow structure assumed in that paper excludes the
possibility of magnetic acceleration. In particular, the assumption of zero
azimuthal speed implies that the current $I$ is a constant of motion
(see equation~\ref{angm-def}), which in turn means that $\mu_m$ remains
constant (see equation~\ref{mu_m-def}).
One can check whether the condition of causal connectivity is
satisfied by comparing the field-line opening angle $\theta_{\rm v}$ (defined
in Section~\ref{power-law}) with the half-angle of the Mach cone of fast
waves, $\theta_{\rm m}$. The latter can be found from the relation
\begin{eqnarray}
\sin\theta_{\rm m}=\frac{\Gamma_{\rm f} c_{\rm f}}{\Gamma v_p}\,,
\label{mach-angle}
\end{eqnarray}
where $c_{\rm f}$ and $\Gamma_{\rm f}$ are the fast speed and the
corresponding Lorentz factor, respectively.
Since $\Gamma_{\rm f} c_{\rm f} =
B_{\rm co} / \sqrt{4\pi\rho}$, where $B_{\rm co}$ is the magnetic field as measured
in the fluid frame, and $v_p\approx c$, we have
\begin{equation}
\sin\theta_{\rm m} \approx
\left ({\frac{B_{\rm co}^2}{4\pi\rho c^2}}\right ) ^{1/2} \frac{1}{\Gamma} =
\frac{{\sigma}^{1/2}}{\Gamma} \,.
\end{equation}
In the magnetically dominated regime $\sigma \approx \mu/ \Gamma$.
For highly super-magnetosonic flows $\theta_{\rm m}\ll 1$. Thus, we may write
\begin{equation}
\theta_{\rm m} \approx \sqrt{\mu / \Gamma^3}\,.
\end{equation}
In the hydrodynamic limit the fast magnetosonic speed reduces to the sound
speed and $\Gamma_{\rm f}$, $c_{\rm f}$ in equation~(\ref{mach-angle})
should be replaced by $\Gamma_{\rm s}$, $c_{\rm s}$.
For the ultra-relativistic equation of state
and $\Gamma\gg 1$ this gives $\theta_{\rm m}\simeq 1/\Gamma$, the value
used for causality analysis in \citet{ZBB08}. However, in the magnetic
case $\theta_{\rm m}$ can be much higher because the magnetosonic speed can be
much closer to the speed of light.
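As a rough numerical illustration of this difference (a sketch with assumed fiducial values $\mu=300$, $\Gamma=30$, not taken from any particular model), the magnetically dominated Mach half-angle $\theta_{\rm m} \approx \sqrt{\mu/\Gamma^3}$ exceeds the hydrodynamic value $1/\Gamma$ by a factor $\sqrt{\mu/\Gamma}$:

```python
import math

def mach_angle_mhd(mu, gamma):
    # magnetically dominated regime: theta_m ~ sqrt(mu / Gamma^3)
    return math.sqrt(mu / gamma**3)

def mach_angle_hydro(gamma):
    # ultra-relativistic hydrodynamic limit: theta_m ~ 1/Gamma
    return 1.0 / gamma

# assumed fiducial values (illustrative only)
mu, gamma = 300.0, 30.0
ratio = mach_angle_mhd(mu, gamma) / mach_angle_hydro(gamma)
print(round(ratio, 2))  # 3.16, i.e. sqrt(mu/Gamma)
```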
In the conical case we have $\Gamma \approx R/r_{\rm lc}$, and
\begin{equation}
\theta_{\rm v} / \theta_{\rm m} \approx (\theta/\sqrt{\mu})
(R/r_{\rm lc})^{3/2}
\label{theta-ratio}
\end{equation}
grows rapidly to a value $> 1$
(where it is a good approximation to replace $\theta_{\rm v}$ by $\theta$).
The left panel of Fig.~\ref{mach_angle}
shows that only the inner part of the jet has $\theta_{\rm v}/ \theta_{\rm m} <1$
and is thus in causal connection. Collimation (and thus efficient
acceleration) is possible only in this inner region. In contrast, the
outer parts of the conical jet lack causal connection with the axial
region and the flow there is essentially ballistic.
In the paraboloidal case with $b<2$ (for which $\theta_{\rm v} \approx
1/\Gamma$ and $\Gamma \approx (R/r_{\rm lc})^{(b-1)/b}$)
\begin{equation}
\theta_{\rm v} / \theta_{\rm m} \approx
\left({\Gamma}/{\mu}\right)^{1/2} \approx
(1/\mu^{1/2}) (R/r_{\rm lc})^{(b-1)/(2b)} \,,
\end{equation}
so this ratio grows much slower compared to the conical case.
Moreover, the loss of causal contact formally occurs when $\Gamma\simeq\mu$,
i.e. at the end of the acceleration phase.
This is confirmed by our simulations. As one can see in the middle and
right panels of Fig.~\ref{mach_angle}, during the
power-law acceleration phase $\theta_{\rm v} / \theta_{\rm m}$ grows slowly but
remains less than 1 almost everywhere in our numerical models. It subsequently
decreases again when the growth rate of $\Gamma$ goes down.
In contrast, in the paraboloidal case with $b>2$ (for which $\theta_{\rm v}
\approx r/bz$ and $\Gamma \approx r/r_{\rm lc}$),
\begin{eqnarray}
\theta_{\rm v} / \theta_{\rm m} & \approx & (1/b \mu^{1/2} C^{b/4})
(r/r_{\rm lc})^{(5/2)-b} \nonumber \\
&=& (1/b \mu^{1/2} C^{5/8}) (R/r_{\rm lc})^{(5/2b)-1}
\end{eqnarray}
(see Section~\ref{alphap<2}), and this ratio actually {\em decreases}
with distance for $b>5/2$! One can also argue quite generally that, even
if $\Gamma$ were to increase all the way up to $\mu$, the value of the
above ratio in that region, which can be estimated to be $\sim 1/b
\mu^{b-2} C^{b/4}$, would likely remain $< 1$ (since $b>2$, $\mu>1$
and $C$ is of the order of 1; see equation~\ref{C2}).
Thus, the necessary (but not sufficient) condition for acceleration
is satisfied in this case. This suggests that the acceleration efficiency
may be comparable to the $1\le b< 2$ cases.
The behaviour of an unconfined wind is similar to that of an outflow in
a conical funnel, which is not surprising given the fact that
the former is a limiting case of the latter.
As seen in Fig.~\ref{gpw},
the acceleration in model AW is $\ga 50 \%$ efficient only along field
lines that are close to the rotation axis ($\Psi \le 0.1 \Psi_{\rm
max}$), similarly to the situation in model~A.
\begin{figure*}
\includegraphics[width=57mm]{figures/gau1-mach.png}
\includegraphics[width=55mm]{figures/gbu3-mach.png}
\includegraphics[width=55mm]{figures/gfu-mach.png}
\caption{The ratio of flow half-angle, $\theta_{\rm v}$, to the Mach angle,
$\theta_{\rm m}$, across the jet for models A (left panel), B3 (middle panel)
and F (right panel). For model A the depicted cross-sections are at
$R=10$ (solid line),
$R=10^2$ (dashed line),
$R=10^3$ (dash-dotted line),
$R=10^4$ (dotted line) and
$R=10^5$ (dash-triple-dotted line).
For model B3 the depicted cross-sections are at
$\eta=5\times10^2$ (thin solid line),
$\eta=5\times10^3$ (dashed line),
$\eta=5\times10^4$ (dash-dotted line),
$\eta=5\times10^5$ (dotted line),
$\eta=5\times10^6$ (dash-triple-dotted line) and
$\eta=5\times10^7$ (thick solid line).
For model F the depicted cross-sections are at
$\eta=1.5\times10^3$ (thin solid line),
$\eta=5\times10^3$ (dashed line),
$\eta=1.5\times10^4$ (dash-dotted line), and
$\eta=1.5\times10^5$ (dotted line).
The curves in the right panel dive to zero when the flow becomes
sub-magnetosonic.
}
\label{mach_angle}
\end{figure*}
\subsection{Hot flows}
\label{hot}
When $w/\rho c^2$ is significantly larger than 1 at the inlet there is
an additional reservoir of energy for the flow acceleration --- the
thermal energy of particles. As the flow expands the enthalpy per unit
rest mass $w/\rho =c^2+[s/(s-1)] (p/\rho)$ (equation~\ref{w-def})
decreases until it reaches its minimum value $(=c^2)$, and beyond that
point the flow can be regarded as cold. In the pure hydrodynamic
case the thermal energy is directly transferred to the bulk
kinetic energy of the fluid. In the magnetic case there is an additional
possibility --- the thermal energy can also be transferred to the
Poynting flux. Indeed, since $\mu c^2 = (w/\rho) \Gamma + \mu_m c^2$,
it is possible to have both $\Gamma$ and $\mu_m c^2$ increasing when
$w/\rho$ decreases, and this in fact is what we observe in model B2H
(Fig.~\ref{gbut}).
\begin{figure*}
\includegraphics[width=77mm]{figures/gbut-s.png}
\includegraphics[width=77mm]{figures/gbut-v1r.png}
\caption{Effects of thermal acceleration.
Left panel: the bunching function ${\cal S}$ along the magnetic
field line with $\Psi=0.5\Psi_{\rm max}$.
Right panel: $rv^{\hat{\phi}}$ along the magnetic field line with
$\Psi=0.5\Psi_{\rm max}$. Solid lines: model B2; dashed lines: model B2H.
}
\label{gbut-extra}
\end{figure*}
We have already noted in Section~\ref{efficiency} that in
Poynting flux-dominated flows $\mu_m$ is proportional to the bunching
function ${\cal S}$ (see equation~\ref{S}). In agreement with this result,
the left panel of Fig.~\ref{gbut-extra} shows that in model B2H
${\cal S}$ exhibits the same evolution as $\mu_m$
(which is shown in the middle panel of Fig.~\ref{gbut}).
In the super-Alfv\'enic regime the trans-field force balance is
described by equation~(\ref{transf}) even for hot outflows
provided that $p$ remains $\ll B_{\rm co}^2/8\pi$.
Therefore we still have ${\cal R} \sim \Gamma^2 r$
and hence $\Gamma \propto r^{b-1}$ along magnetic field lines of
paraboloidal jets with exponents in the range $1<b\le 2$. Combining the
mass conservation relation~(\ref{kappa}) and equation~(\ref{calS}) we obtain
\begin{eqnarray}
\Gamma \rho =\frac{k \Psi \cal S}{\pi r^2} \propto r^{-2}\,,
\nonumber
\end{eqnarray}
where we took account of the fact that ${\cal S}$ is a weak function of
$r$. This enables us to write the variation of the thermodynamic
parameters as
\begin{eqnarray}
\rho \propto r^{-b-1}, \qquad p\propto r^{-s(b+1)}\,.
\nonumber
\end{eqnarray}
In the limit $w\gg \rho c^2$ equation~(\ref{w-def}) gives
$w \propto p \propto r^{-s(b+1)}$, and therefore $\mu_h = (w/\rho c^2)
\Gamma$ scales as
\begin{equation}
\mu_h \propto r^\delta, \qquad \delta= b(2-s) - s\,.
\label{mu_h}
\end{equation}
For model B2H with $b \approx 3/2$ and $s=4/3$
this yields $\mu_h \propto r^{-1/3}$. Hence $\mu_h$ is expected
to decrease and $\mu_m=\mu-\mu_h$ to increase along the field lines,
in agreement with what is observed in the simulation.
Similar behaviour has been found in the self-similar solutions of
\cite{VK03b}, but only in cases where the flow is super-Alfv\'enic
from the start (see also \citealp{VPK03}).
In their trans-Alfv\'enic, hot-flow solutions \citep{VK03a}, $\mu_m$
remained constant throughout the thermal acceleration phase. This could
be understood from the fact that these solutions corresponded to
$b\approx 2$ and therefore to $\delta \approx 0$ in
equation~(\ref{mu_h}), resulting in constant $\mu_h$ and $\mu_m$ in the
thermal acceleration region.\footnote{As was shown analytically in the
magnetodynamic self-similar solutions of \cite{NMcKF07}, the field-line
shape is $z\propto r^{2/(2-F)}$, where $F$ is a constant parameter entering the
self-similarity expression of the magnetic flux function, $\Psi = r^F
{\cal F}(r/z)$. The MHD self-similar solutions follow the same
scaling in their force-free regime. The trans-Alfv\'enic solutions
presented in \citet{VK03a} were characterized by $F\approx 1$, which
implies $b \approx 2$. Note in this connection that the $F=1$
magnetodynamic solution is exactly the paraboloidal force-free solution
presented by \citet{Bland76}.
\label{footnote_bland}}
In contrast, the super-Alfv\'enic solutions presented in \citet{VK03b}
corresponded to $b \approx 3/2$ and hence to $\delta \approx -1/3$ (the
same values as in our models B and D), and therefore they exhibited the
same behaviour in the thermal acceleration zone as our simulated flows.
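The exponent $\delta = b(2-s)-s$ of equation~(\ref{mu_h}) can be evaluated exactly for the two cases just discussed; a minimal arithmetic sketch:

```python
from fractions import Fraction

def delta(b, s):
    # power-law index of mu_h: delta = b*(2 - s) - s (equation mu_h)
    return b * (2 - s) - s

# super-Alfvenic solutions / models B and D: b = 3/2, s = 4/3
print(delta(Fraction(3, 2), Fraction(4, 3)))  # -1/3: mu_h decreases outward

# trans-Alfvenic solutions: b ~ 2, s = 4/3
print(delta(Fraction(2), Fraction(4, 3)))     # 0: mu_h (and mu_m) stay constant
```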
The increase of the Poynting-to-mass flux ratio $\mu_m$ in the thermal
acceleration regime leads to a rather unusual behaviour of the azimuthal
velocity. The right panel of Fig.~\ref{gbut-extra} shows the variation
of $r v^{\hat{\phi}}$ along the same magnetic surface in models B2 and
B2H. For the cold jet it always grows with cylindrical radius and hence
with the distance from the jet origin. This reflects the fact that the
plasma is being spun up by the rotating magnetic field:
in this case $|B_p / B^{\hat\phi}| \gg |v_p/v^{\hat\phi} |$ in
equation~(\ref{omega-def}) and $v^{\hat\phi} \approx r \Omega \propto r$.
However, in the hot jet $r v^{\hat{\phi}}$ (and therefore also
$v^{\hat{\phi}}$) initially decreases with increasing $r$ and even
attains negative values, indicating counter-rotation of the plasma.
Eventually the cold-jet behaviour is restored, with the switch
taking place at the turning point of $\mu_m$. The decrease in $r
v^{\hat\phi}$ when $\mu_m$ increases along a field line follows from the relation
\begin{equation}\label{rvphi}
\frac{r \Omega v^{\hat\phi}}{c^2}=
1- \frac{1-{l \Omega}/{\mu c^2}} {1 - {\mu_m}/{\mu}} \,,
\end{equation}
obtained by combining equations~(\ref{angm-def})
and~(\ref{kap-def}).\footnote{The inequality ${l \Omega}/{\mu c^2} < 1$
always holds in trans-Alfv\'enic flows, since $({l \Omega}/{\mu
c^2})^{1/2}$ equals the value of $r/r_{\rm lc}$ at the Alfv\'en surface
(e.g. \citealp{VK03a}), and the Alfv\'en surface is located closer to
the source than the light cylinder (with the two surfaces almost
coinciding for highly magnetized flows).}
Physically, the increase in $\mu_m$ implies that the magnetic
contribution to the total angular momentum per unit rest mass goes
up (see equations~\ref{mu_m-def} and~\ref{angm-def}), which, by the
conservation of $l$ along a field line (and taking account of energy
conservation) implies that the specific material angular momentum $r
v^{\hat\phi}$ must decline.
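The sign change of the azimuthal velocity follows directly from equation~(\ref{rvphi}); a small numerical sketch (with arbitrarily chosen, hypothetical values of the two flux ratios):

```python
def r_omega_vphi(l_ratio, mu_m_ratio):
    # equation (rvphi): r Omega v^phi / c^2 = 1 - (1 - l Omega/(mu c^2)) / (1 - mu_m/mu)
    return 1.0 - (1.0 - l_ratio) / (1.0 - mu_m_ratio)

# hypothetical values: fix l Omega/(mu c^2) = 0.5 and let mu_m/mu grow
print(r_omega_vphi(0.5, 0.4))  # positive: ordinary magnetic spin-up
print(r_omega_vphi(0.5, 0.6))  # negative: counter-rotation once mu_m/mu > l Omega/(mu c^2)
```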
The efficiency of the acceleration in model B2H is higher than in the
cold models, as can be seen in Fig.~\ref{gbut}.
This is connected to the behaviour of the function ${\cal S}$.
The increase of ${\cal S}$ during the
thermal acceleration phase results in a higher ${\cal S}_{\rm f}$, the value
of the function ${\cal S}$ at the fast magnetosonic surface. In addition,
the asymptotic value ${\cal S}_\infty$ is smaller than in cold models
(see Fig.~\ref{gbut-extra}). Both effects result in a higher
value of $\Gamma_\infty / \mu$ (see equation~\ref{G_infty}).
\subsection{Comparison with semi-analytic solutions}
As discussed in Section~\ref{introduction}, it is possible to find exact
solutions of the relativistic MHD equations by assuming radial
self-similarity \citep{LCB92,Con94,VK03a,VK03b,VK04}. Due to the
mathematical complexity of the equations, these are the only possible
exact semi-analytic solutions describing cold or polytropic flows
\citep{VK03a}. Similarly to their non-relativistic counterparts (the
Blandford-Payne--type models), they successfully capture the physics of
magnetically driven jets and yield the general characteristics of the
flow acceleration and collimation.\footnote{The self-similar solutions
of \cite{VK03a} have a line-shape $z\propto r^2$ (see footnote
\ref{footnote_bland}) and thus most closely resemble our model C.} In
particular, the results of \cite{VK03a,VK03b} for ultra-relativistic GRB
jets follow the general scaling relationships derived here.
In fact, the scaling $\Gamma \propto r^{b-1}$, corresponding to a
streamline shape $z\propto r^b$ (for $1<b\le 2$; the regime (iii) of
equation~\ref{transf2}), was first presented in
\citet{VK03b}. Note in this connection that both the $\Gamma \simeq
r/r_{\rm lc}$ (equation~\ref{Gamma2}) and the $\Gamma \simeq
z/r$ (equation~\ref{G_z_r}) scalings exhibited by our solutions could be
captured through the basic radial-self-similarity ansatz $\Gamma=\Gamma(\theta)$
because both $r/r_{\rm lc}$ and $z/r_{\rm lc}$ are functions of the
polar angle $\theta$ in the self-similar solutions.
The semi-analytic solutions exhibit as high an acceleration efficiency ($\ga
50\%$) as the simulated $b\le 2$ solutions, and, correspondingly, have a
similar value for the asymptotic shape function (${\cal S}_\infty \sim 1/2$;
\citealp{V04dogl}). Self-collimation also acts in a similar way in both
types of solution, with the inner field lines at any given height $z$
being better aligned with the rotation axis than the poloidal field at
larger values of~$r$.
Despite their qualitative similarity in regard to the acceleration and
collimation processes, the semi-analytic and numerical solutions do of
course differ in their details, reflecting the fact that in the
self-similar model the angular velocity at the base necessarily scales
as $1/r$ and that only one current-flow regime is allowed. In
particular, the spatial distributions of the integrals of motion are not
the same in these two cases. For example, the energy integral, which is
constant in the self-similar model, is roughly proportional to the
magnetic flux function in the simulated uniform-rotation jets, and the
adiabat $Q$, which is given as a power of the magnetic flux function in
the self-similar model, is a global constant in the simulations. We also
note that, while the far-asymptotic (beyond the acceleration region)
flow shape in the self-similar models is either cylindrical or conical,
only the innermost field lines become cylindrical in the simulated jets,
whereas further out the streamlines remain paraboloidal. However, this
is evidently related to the imposed boundary shape, and we can expect
that, if the flow were followed to still larger distances, even more of the
interior field lines would tend to cylinders \citep[see][]{CLB91} or
(in the case of an initially ``hot'' flow) to cones.
The high acceleration efficiency inferred from the self-similar and
numerical solutions for non-radial, relativistic MHD outflows
was also deduced by \cite{BN06} on the basis of a perturbative analysis around
a parabolic ($z \propto r^2$) flow. These authors found that the Lorentz factor
increases with distance from the origin as $\Gamma \propto z^{1/2}$,
in agreement with our general result for paraboloidal jets of this type,
$\Gamma \sim z/r$.
\section{Application to GRB Jets}
\label{application}
The observational study of GRBs has not yet reached the stage where the
basic parameters of the flows producing prompt $\gamma$-ray emission and
afterglows have become well established. There is no general consensus
yet on the angular structure, degree of collimation, distance from the
central source or composition of GRB jets. These parameters may vary
significantly from burst to burst. The anisotropy of $\gamma$-ray emission
due to relativistic beaming further complicates the problem as the same
burst could have a very different appearance when observed from
different viewing angles. In this section we test our theory against
the current, not yet very stringent, observational constraints and
provide a guide for future observations.
The maximum terminal Lorentz factors in our numerical models of
parabolic jets, $\sim 100 - 300$, are close to those inferred for
long/soft GRB jets and also high enough to ensure that we have captured
the properties of magnetic acceleration in the ultra-relativistic
regime. Although real GRB jets may be even faster \citep[e.g.][]{LS01},
the analytic results verified by our numerical study can be applied to
such jets with a high degree of confidence.
To make detailed comparisons between our theory and the observations
we need to determine the characteristic light-cylinder radius
at the source of the jets. In the case of a millisecond magnetar
\begin{eqnarray}
r_{\rm lc}=\frac{cT}{2\pi} \simeq 5\times10^6
\left(\frac{T}{1\,\mbox{ms}}\right)\,\mbox{cm}\,,
\nonumber
\end{eqnarray}
and for a maximally rotating black hole
\begin{eqnarray}
r_{\rm lc}=4r_g \approx 6\times10^5
\left(\frac{M}{M_\odot}\right)\,\mbox{cm}\,.
\nonumber
\end{eqnarray}
Thus, $L=10^6\,\mbox{cm}$ is a suitable reference length-scale for this application.
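Both fiducial numbers follow directly from the quoted definitions; a quick check in CGS units (constants rounded):

```python
import math

C = 2.998e10      # speed of light, cm/s
G = 6.674e-8      # gravitational constant, cgs
M_SUN = 1.989e33  # solar mass, g

# millisecond magnetar: r_lc = c T / (2 pi), with T = 1 ms
r_lc_magnetar = C * 1.0e-3 / (2.0 * math.pi)

# maximally rotating black hole: r_lc = 4 r_g, with r_g = G M / c^2
r_lc_bh = 4.0 * G * M_SUN / C**2

print(f"{r_lc_magnetar:.1e} cm")  # ~4.8e6 cm, consistent with 5e6 cm
print(f"{r_lc_bh:.1e} cm")        # ~5.9e5 cm, consistent with 6e5 cm
```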
Given the extended nature of magnetic acceleration,
the first question that one has to address is whether the Lorentz
factors deduced from observations can be reached in our model
on the inferred scale of the $\gamma$-ray emission region. According to
equations~(\ref{Gamma2}) and~(\ref{G-scaling}),
\begin{eqnarray}
R\simeq10^{12}\left(\frac{\Gamma}{100}\right)^3\mbox{cm}
\nonumber
\end{eqnarray}
for paraboloidal jets with $b=3/2$ and $b=3$, and
\begin{eqnarray}
R\simeq10^{10}\left(\frac{\Gamma}{100}\right)^2\mbox{cm}
\nonumber
\end{eqnarray}
for paraboloidal jets with $b=2$. These estimates are lower than the distance
to the $\gamma$-ray production region inferred from the burst variability in
the internal-shocks model of GRBs,
\begin{eqnarray}
R_{\gamma} \sim \Gamma^2 c \delta t = 3\times 10^{13}
\left(\frac{\Gamma}{100}\right)^2
\left(\frac{\delta t}{0.1\; {\rm s}}\right)\; {\rm cm}\, ,
\nonumber
\end{eqnarray}
where $\delta t$ is the internal variability time-scale \citep[e.g.][]{Pir05}.
In fact, recent {\it Swift}\/ observations indicate even
larger distances ($\sim 10^{15}-10^{16}\; {\rm cm}$;
e.g. \citealt{Lyu06a,Kum07}). The theory thus appears to be
consistent with the observations in this respect.
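These order-of-magnitude estimates can be reproduced from the scalings quoted above with the reference value $r_{\rm lc} = 10^6\,$cm; a hedged numerical sketch:

```python
R_LC = 1.0e6  # reference light-cylinder radius, cm

def r_accel(gamma, b):
    # distance needed to reach Lorentz factor gamma:
    # Gamma ~ (R/r_lc)^{(b-1)/b} for 1 < b <= 2, Gamma ~ (R/r_lc)^{1/b} for b > 2
    exponent = b / (b - 1.0) if b <= 2.0 else b
    return R_LC * gamma**exponent

def r_gamma_internal_shocks(gamma, dt=0.1):
    # variability radius of the internal-shocks model: R_gamma ~ Gamma^2 c dt
    return gamma**2 * 3.0e10 * dt

print(f"{r_accel(100, 1.5):.0e}")             # 1e+12 cm (b = 3/2; b = 3 gives the same)
print(f"{r_accel(100, 2.0):.0e}")             # 1e+10 cm
print(f"{r_gamma_internal_shocks(100):.0e}")  # 3e+13 cm: acceleration ends well inside
```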
We emphasize that the above results have been derived in the context
of ideal and axisymmetric MHD. In reality, various instabilities, and
in particular non-axisymmetric, current-driven ones occurring near the
jet axis, may result in magnetic reconnection and dissipation. It is
interesting to note in this connection that the dissipation of
Poynting flux would naturally generate a negative magnetic pressure
gradient (associated with the azimuthal field component) along the flow
and that this process was argued to be capable, on its own, of
accelerating the flow to a high Lorentz factor
\citep[e.g.][]{DS02,Dre02}. In this respect our ideal-MHD simulations
may be yielding only lower limits on the terminal Lorentz factor in
the modelled jets.
A related issue is whether there is an adequate confining
medium, as required for the establishment of the ``power-law''
acceleration regime described by equation~(\ref{G-scaling}). If the
confinement of a long/soft GRB jet is provided only by the envelope
of the progenitor massive star, as proposed by \citet{TMN08}, the
acceleration would need to take place on a scale smaller than the
stellar radius, $\sim 10^{11}-10^{12}\;$cm.
Downstream of the stellar surface the jet is expected to enter the regime
of ``free'' (ballistic) expansion, as in our model E, which is
characterized by a less efficient magnetic acceleration.\footnote{
It has been suggested that matter-dominated GRB jets could
remain confined by the expanding cocoon of relativistically hot
shocked jet material after they break out through the stellar surface
\citep[e.g.][]{RCR02} and could continue to accelerate during that phase
\citep[e.g.][]{LB05}. In contrast, Poynting-dominated jets do not inflate
large cocoons but instead create the so-called ``nose cones''
\citep[e.g.][]{Kom99}. In fact, given the low compression ratio of a fast
shock in a magnetically dominated plasma, a jet termination shock is
unlikely to form before the jet emerges from the star --- instead, the
jet would have the form of a super-Alfv\'enic but sub--fast-magnetosonic
outflow, as has been observed in recent computer simulations
\citep[e.g.][]{KB07,BK08}.} But even this rather restrictive constraint
on the size of the acceleration region, and hence on $\Gamma_\infty$,
is in principle consistent with the theory.
An alternative possibility is that the GRB outflow is confined by a wind
launched from the surface of a disc that surrounds the central object
\citep[e.g.][]{LE00}. This mechanism is a prime candidate for the
confinement of short/hard GRB outflows, which evidently do not originate
inside a star. In this case the collimation might be attained smoothly, with
the disc-driven and central object-driven components constituting parts of a
coherent outflow configuration \citep[e.g.][]{TMN08}. However, the outflow
may also involve shocks formed at the interface of these two
components \citep[e.g.][]{BL07}. If the GRB jet and disc outflow
commence at the same time, the spatial extent of the confining medium in
this picture can be estimated as
\begin{eqnarray}
R_{\rm wind} \approx 3 \times 10^9 \left (\frac{v_{\rm wind}}{0.1\,
c}\right ) \left ( \frac{\Delta t}{1\, {\rm s}}\right )\; {\rm cm}\,,
\nonumber
\end{eqnarray}
where $v_{\rm wind}$ is the mean wind speed over this distance and
$\Delta t$ is the GRB duration (normalized here to a fiducial value
appropriate for a short/hard burst). This should be compared with the
above theoretical relationships between $R$ and $\Gamma$, which for
$\Gamma = 30$ (a fiducial value for the lower limit on $\Gamma_\infty$
in short/hard GRBs; e.g. \citealt{Nak07}) yields $R \approx 3 \times
10^{10}\; {\rm cm}$ for $b=3/2$ or $b=3$ and $R \approx 9 \times 10^{8}\; {\rm
cm}$ for $b=2$. This comparison indicates that, over the time $\Delta
t$, a moderately relativistic disc outflow could form a sheath around
the jet acceleration region. Given that the size of a disc that forms
during a binary (NS-NS or NS-BH) merger that gives rise to a short/hard
GRB event is not expected to exceed a few times $10^6\; {\rm cm}$
(i.e. significantly less than the expected cylindrical radius of
the jet in the main acceleration region), meaningful confinement would
be attained only if the wind had sufficiently large inertia, which would
require the wind-to-jet total energy ratio to be $\gg 1$
\citep[cf.][]{LE00}.
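The comparison made in this paragraph is simple to reproduce (a sketch using the fiducial values quoted in the text):

```python
C_LIGHT = 3.0e10  # cm/s
R_LC = 1.0e6      # reference light-cylinder radius, cm

def r_wind(v_frac=0.1, dt=1.0):
    # extent of the disc-wind sheath: R_wind ~ v_wind * Delta_t
    return v_frac * C_LIGHT * dt

def r_accel(gamma, b):
    # acceleration length, using the same R(Gamma) scalings as in the text
    exponent = b / (b - 1.0) if b <= 2.0 else b
    return R_LC * gamma**exponent

# Gamma = 30: fiducial lower limit for short/hard GRBs
print(f"{r_wind():.0e}")          # 3e+09 cm
print(f"{r_accel(30, 1.5):.0e}")  # ~3e+10 cm (b = 3/2 or b = 3)
print(f"{r_accel(30, 2.0):.0e}")  # 9e+08 cm
```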
If the initial magnetizations of short/hard and
long/soft GRB outflows are comparable, this scenario provides a
plausible explanation of the finding (from the best available current
data) that short-GRB jets are on average less relativistic than their
long-duration counterparts. A concomitant prediction, which could be
tested when more afterglow data for short/hard GRBs become available, is
that short/hard GRB outflows should also be less well collimated, on
average, than long/soft ones.
The internal-shocks model envisions the prompt GRB emission to be
powered by the collision of successively ejected relativistic ``shells''
\citep[e.g.][]{Pir05}. This scenario requires the jet to be kinetic
energy-dominated on the scale of
the emission region; otherwise, the flow deceleration
and dissipation at fast shocks are too weak
(or else, if the flow is inhomogeneous, the energy
requirements are strongly increased).
The numerical solutions presented in this paper have demonstrated the
possibility of efficient conversion of Poynting flux into bulk
kinetic energy, with $\ga 50\%$ efficiency attained by the end of
the power-law--like acceleration regime. However, the distance
$R_\gamma$ of the prompt
emission region from the central source imposes a constraint on the initial
magnetization of GRB jets in this model. Using
equation~(\ref{equip1}), we obtain
\begin{equation}
\mu
\approx 2 \Gamma_\infty < \left\{
\begin{array}{lcc}
2(r_\gamma/r_{\rm lc})^{b-1} & \mbox{if} & b\le 2 \\
2(r_\gamma/r_{\rm lc}) & \mbox{if} & b\ge2 \\
\end{array}\right. .
\nonumber
\end{equation}
For paraboloidal jets with
$b=3/2$ or $b=3$ this gives (setting $R_{\rm lc} \approx r_{\rm lc}$)
\begin{eqnarray}
\mu < 430 \left(\frac{R_\gamma}{10^{13}\mbox{cm}}\right)^{1/3}\,,
\nonumber
\end{eqnarray}
whereas for $b=2$ we obtain
\begin{eqnarray}
\mu <6\times10^3\left(\frac{R_\gamma}{10^{13}\mbox{cm}}\right)^{1/2}\,.
\nonumber
\end{eqnarray}
By approximating $\Gamma_\infty \dot M_j c^2 \approx {\cal{E}}/\Delta
t$, where ${\cal{E}}$ is the outflow kinetic energy as inferred from
afterglow observations and $\Delta t$ is the burst duration, we
estimate the mass outflow rate in the jet to be
\begin{eqnarray}
\dot{M}_j \approx 5.6\times 10^{-8} \;
\left(\frac{\cal{E}}{10^{51}\mbox{erg}}\right)
\left(\frac{\Delta t}{10\mbox{s}}\right)^{-1}
\left(\frac{\Gamma}{10^3}\right)^{-1} M_\odot\;\mbox{s}^{-1}\,,
\nonumber
\end{eqnarray}
where we normalized by values appropriate to long/soft bursts. This is
very much lower than the expected mass accretion rate onto the central
black hole in the collapsar model ($\sim 0.05 - 1\;
M_\odot\,\mbox{s}^{-1}$; e.g. \citealt{PWF99}) and constitutes the
so-called ``baryon loading problem'' in GRB source models. Such a
comparatively low mass outflow rate might be produced if the
GRB-emitting outflow originates on magnetic field lines that thread the
horizon of a spinning black hole and tap its rotational energy via the
Blandford-Znajek mechanism \citep[e.g.][]{LE93}; in this case the flow
would initially be baryon-free and would require a baryon-injection
mechanism as it propagates outward. Alternatively, jets launched from
an accretion disc may experience such a low mass loading if they are
initially thermally driven along magnetic field lines inclined at a
small ($\la 15^\circ$) angle to the rotation axis
\citep{BL08}.\footnote{It was also proposed that the problem could be
alleviated in a magnetically driven disc outflow that is initially
neutron rich and hot
if the neutrons decouple from the protons well
before the latter attain their terminal Lorentz factor
(see \citealt{VPK03} and \citealt{FPA00}).
There are indications from studies
of discs around non-rotating black holes that this might not work in
practice because outflows may be required to be comparatively massive to
remain neutron rich \citep[e.g.][]{Lev06,BL08}, but this conclusion
still needs to be verified in the case of discs around rapidly rotating
black holes. }
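The mass-flux estimate $\dot M_j \approx {\cal E}/(\Delta t\, \Gamma_\infty c^2)$ quoted above is easy to verify (a sketch with the fiducial long/soft-burst values):

```python
C_LIGHT = 3.0e10  # cm/s
M_SUN = 2.0e33    # solar mass, g (rounded)

def mdot_jet(energy=1.0e51, dt=10.0, gamma=1.0e3):
    # Mdot_j ~ E / (Delta_t * Gamma * c^2), in solar masses per second
    return energy / (dt * gamma * C_LIGHT**2) / M_SUN

print(f"{mdot_jet():.1e}")  # 5.6e-08 M_sun/s, versus ~0.05-1 M_sun/s accreted
```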
The internal-shocks model of GRBs has been questioned on account of
the relatively high emission efficiency that it requires, and these
challenges have become significantly stronger following observations
made by {\it Swift}\/ \citep[e.g.][]{GKP06,Kum07}. Various suggestions
have been made (and continue to be made) in the literature for
reconciling this scenario with the observations \citep[e.g.][]{KZ07} or
else for modifying or replacing it. Perhaps the main alternative
picture proposed to date is based on the assumption that
the prompt high-energy emission is produced directly from
the dissipation of magnetic energy without requiring it to be
converted into kinetic energy first \citep[e.g.][]{Kum07}, which
circumvents the efficiency problem that has troubled the
internal-shocks model. Although magnetic dissipation could in principle
occur also in the context of the MHD model \citep[e.g.][]{DS02}, perhaps
the most extreme realization of this idea occurs within the framework of
the magnetodynamics scenario, in which GRB outflows are regarded as
remaining Poynting flux-dominated (and sub--fast-magnetosonic)
in the $\gamma$-ray emission region \citep[e.g.][]{Bla02,Lyu06b}.
In this scenario, neither the internal nor the reverse shocks
of the standard model would develop, which could be the basis for an
observational test.\footnote{Note in this connection that, in some of
the proposed interpretations of the {\it Swift}\/ data
\citep[e.g.][]{UB07,GDM07}, the entire afterglow emission is
attributed to a reverse shock that is driven into the ejecta.}
As we discussed in Section~\ref{power-law}, a key prediction of the
magnetic acceleration model is the approximate inverse proportionality
between the Lorentz factor along a poloidal magnetic surface and
$\tan\theta_{\rm v}$ for that surface for paraboloidal jets with $1<b \le 2$
(see equation~\ref{lor-theta-eq}). For a small opening angle and $b$ not
very close to 1 this result
can be approximated as $\Gamma \theta_{\rm v} \approx 1$. This implies that
GRB outflows with $b\le 2$ that attain $\Gamma \sim 100$,
the approximate inferred lower limit for long/soft
GRBs, must have $\theta_{\rm v} \sim 0.6^\circ$, essentially independent of
the details of the acceleration process.
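Converting the relation $\Gamma \theta_{\rm v} \approx 1$ to degrees is a trivial but useful check:

```python
import math

def theta_v_deg(gamma):
    # Gamma * theta_v ~ 1 for paraboloidal jets with 1 < b <= 2
    return math.degrees(1.0 / gamma)

print(round(theta_v_deg(100), 2))  # 0.57 deg, i.e. ~0.6 degrees for Gamma ~ 100
```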
When $b>2$, $\Gamma\theta_{\rm v} \approx b^{-1} (R/r_{\rm lc})^{-(1-2/b)}$
{\em decreases} with $R$ in the magnetic acceleration region, implying
an even smaller value of $\theta_{\rm v}$ at the end of this zone. The
relation $\Gamma\theta_{\rm v} \sim 1$ may be useful for differentiating
between magnetic and fireball models of GRB flows. Indeed, this
property is generic to the magnetic acceleration mechanism, whereas for
the thermal acceleration the terminal bulk Lorentz factor is essentially
given by the thermal Lorentz factor at the base of the flow and is
fairly independent of the flow collimation, which means that the product
$\Gamma \theta_{\rm v}$ can in principle become $\gg 1$. Interestingly, one
of the proposals made for interpreting the apparent GRB ``tails''
observed by {\it Swift}\/ invokes a GRB-emitting outflow component whose
opening half-angle must be $< 1^\circ$ \citep{Pan07}. While the currently
available data are not sufficient for favouring this interpretation over
other suggested explanations of the ``tails,'' it is noteworthy that the
requirement arrived at by \citet{Pan07} on strictly phenomenological
grounds is consistent with a distinguishing property of the magnetic
acceleration model. It is also noteworthy that there is already at least
one source (GRB 070401) in which such a small opening half-angle has
been inferred directly from a measurement of an early break in the X-ray
afterglow light curve \citep{Kamb08}.
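As a quick arithmetic check of the numbers quoted above (an illustrative sketch, not part of the original analysis), $\Gamma\theta_{\rm v}\approx 1$ with $\Gamma=100$ indeed gives an opening half-angle of roughly $0.6^\circ$, and conversely $\theta_{\rm v}=1^\circ$ corresponds to $\Gamma\ga 1/\theta_{\rm v}\approx 60$:

```python
import math

# Gamma * theta_v ~ 1 with Gamma = 100 (approximate lower limit for long/soft GRBs):
gamma = 100.0
theta_v_deg = math.degrees(1.0 / gamma)
print(f"theta_v ~ {theta_v_deg:.2f} deg")  # ~0.57 deg, i.e. theta_v ~ 0.6 deg

# Conversely, theta_v = 1 deg requires Gamma >~ 1/theta_v (in radians):
gamma_min = 1.0 / math.radians(1.0)
print(f"Gamma >~ {gamma_min:.0f}")  # ~57, i.e. roughly 60
```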
Such small asymptotic opening angles, and even
$\Gamma\theta_{\rm v}\la 1$, could in principle also be
attained in purely hydrodynamical jet models, although this would
require a very high efficiency of collimation and acceleration within
the stellar interior. Specifically, the jets would need to emerge from
the star with $\theta_{\rm v}<1^\circ$ and $\Gamma\ga
1/\theta_{\rm v} \simeq 60$, which, in view of recent analytic and numerical studies
\citep[e.g.][]{LB05,MLB07}, is unlikely to be achieved in practice.
The original fireball model for GRB jets envisions a uniform conical
outflow that becomes accelerated to Lorentz factors $\Gamma \gg
1/\theta_{\rm v}$ and predicts that during the afterglow phase the Lorentz
factor of the forward shock driven by the jet into the ambient medium
will decrease to values $< 1/\theta_{\rm v}$. The observational consequence
of this transition is a panchromatic break in the afterglow light curve
(referred to as the ``jet break'') occurring when $\theta_{\rm v} \Gamma$ becomes
$\sim 1$ \citep[e.g.][]{Rho99,Sari99}. In view of the results presented
in this paper, the predictions of the MHD model for GRB outflows that are
efficiently accelerated --- and therefore necessarily confined (by
either thermal, magnetic or ram pressure) during the acceleration phase
--- are radically different.
Specifically, the MHD model predicts that the afterglow light curve would
exhibit either a very early jet break (in cases where
$\Gamma\theta_{\rm v}\approx 1$ at the end of the acceleration phase, as
expected in jets with $b \le 2$) or no jet break at all (if
$\Gamma\theta_{\rm v} < 1$ at the end of the magnetic acceleration region,
as expected in jets with $b>2$).\footnote{If the currently low detection
rate of jet breaks in the early afterglow light curves of GRB sources
were to prove to be more than just the result of observational
difficulties, this could be an indication, when interpreted in the
context of the magnetic acceleration model, that these jets are
characterized by effective shape-function exponents $b>2$.} This prediction
is seemingly at odds with the inference from a number of pre-{\it Swift}\/
GRB sources of breaks of this type occurring on a time-scale of days
(see e.g. \citealt{LZ05} for a compilation). The paucity of ``textbook''
jet breaks in {\it Swift}\/ GRB sources \citep[e.g.][]{Lia08}, which has
even cast doubts on the interpretation of the alleged pre-{\it Swift}\/
jet breaks, points to one way out of this dilemma: it may be that indeed
there are no bona fide jet breaks at later times. We recall, however,
that the jet-break interpretation lies at the basis of the
identification of GRB outflows as collimated jets, which has
significantly reduced the otherwise prohibitive energy requirements in
some sources. Alternatively, it could be that the difficulties in
finding late-time jet breaks in {\it Swift}\/ sources are to a large
extent observational \citep[e.g.][]{Zhang07}, in which case other
explanations for late-break candidates must be sought.
One natural possibility is that the outflow possesses more than one
kinematic component. In its simplest incarnation, this is the ``two
component'' model, which envisions the prompt emission to originate in
an ultra-relativistic, highly collimated jet and the afterglow emission
to be dominated by a less relativistic, wider outflow component. The
suggestion in \citet{Pan07} and in \citet{Kamb08} that the $\gamma$-ray
emitting jet is very narrow was made in the context of this model, and a
similar picture was used by \citet{GKP06} to explain other aspects of
the early GRB X-ray emission measured by {\it Swift} \citep[see
also][]{Zhang07}. In fact, a two-component outflow configuration had
already been proposed in the pre-{\it Swift}\/ era to account for
certain observations \citep[e.g.][]{Ber03} and as a means of alleviating
the efficiency requirements on the internal-shocks model
\citep{PKG05}. The separation into two components could arise either
from an interaction of the outflow with the envelope of a massive
progenitor star or represent an intrinsic property of the central engine
(see \citealt{PKG05} for a summary of some specific proposals). In the
context of the magnetically driven outflow model, there are at least two
possibilities for an intrinsic origin. First, a neutron-rich, hot outflow
may split into two components when the neutrons and protons decouple
before the protons have attained their terminal Lorentz factor
\citep{VPK03}. Second, a baryon-poor ultra-relativistic outflow launched
from the black hole can be surrounded by a magnetically driven,
relativistic outflow from the accretion disc itself
\citep[see][]{GKP06}.\footnote{In the latter scenario, the disc wind
could provide a ready source for seeding the central funnel with baryons
\citep[e.g.][]{LE03} and could also help collimate the interior outflow
\citep{LE00}.}
We stress that, in reality, the outflow may be more complex than in the
schematic ``two component'' picture sketched above. For example,
inhomogeneities in the accretion flow may result in several distinct
outflow components emerging from the disc, associated, perhaps, with
isolated magnetic flux tubes that thread the disc at different
locations. Phenomenologically, this situation might resemble the
``patchy shell'' scenario considered by \citet{KP00}.
\begin{figure*}
\includegraphics[width=60mm]{figures/lor-theta.png}
\includegraphics[width=60mm]{figures/keflux1a.png}
\includegraphics[width=60mm]{figures/keflux.png}
\includegraphics[width=60mm]{figures/kefluxa.png}
\caption{Angular distributions of the Lorentz factor (top left panel),
the kinetic power per unit solid angle in the local direction of
the flow ($\epsilon$, top right panel), the kinetic power per annulus of
unit angular size ($\epsilon\theta$, bottom left panel)
and $\epsilon\theta^2$ (bottom right panel) in the asymptotic regime,
plotted as functions of polar angle.
The variable $\epsilon$ is given in units of
$c B_0^2 L^2 / 4 \pi$, and when it is multiplied by $\theta$ or $\theta^2$
the polar angle is measured in radians. Note, however, that the polar
angle along the horizontal axis is given in degrees.
The solid lines show model B2, the dashed lines
model B2H, and the dash-dotted lines model D.
}
\label{grb-profiles}
\end{figure*}
The distribution of the terminal Lorentz factor and of the kinetic power
across the jet directly affects the evolution of the light
curve of the GRB afterglow \citep[e.g.][]{Gra05} as well as the
statistical properties of a GRB sample \citep[e.g.][]{NGG04} and the
detectability of ``orphan'' afterglows (afterglows detected without an
associated GRB; e.g. \citealt{NP03}). One could in turn attempt to use
such observations to probe the jet structure and to test the
underlying acceleration and collimation models.
With this in mind, we present in Fig.~\ref{grb-profiles} illustrative
asymptotic distributions of the Lorentz factor and of the kinetic power
from our simulations.
We consider first the $\Gamma_\infty$ distribution. The top left panel
of Fig.~\ref{grb-profiles} shows that in all of the cases the Lorentz
factor decreases toward the axis --- this is a generic feature of the
axisymmetric, ideal-MHD acceleration mechanism as the azimuthal
magnetic field and hence the Poynting flux vanish along the symmetry
axis. This feature may not, however, be as pronounced when
non-axisymmetric instabilities and resistive dissipation of magnetic
energy (which are not incorporated into our study) are taken into
account. In fact, we find that even in our solutions $\Gamma\not=1$ at
$\theta=0$ because of numerical dissipation. In the case of an
initially hot outflow, $\Gamma(\theta=0)>1$ is also due to
thermal acceleration. In initially cold outflows that have a uniform
rotation and mass density distribution
at the base, $\Gamma$ peaks at the jet
boundary. It is seen, however, that if the flow is initially hot the
anisotropy of the Lorentz factor distribution within the jet is
reduced. Uniform rotation is a robust prediction of models with a
magnetar or a magnetized black hole as a central rotator.
The assumption of
a uniform mass-flux distribution at the jet base is more of an approximation:
for example, when the central source is a black hole the degree of baryon
loading is likely to be higher near the jet boundary due to various
boundary interactions with the jet surroundings \citep[e.g.][]{LE03}.
Such a mass distribution would lead to lower terminal Lorentz factors near the
boundary compared to those found in our simulations.
If the inner regions of an accretion
disc contribute to the magnetic driving of the GRB-emitting outflow
component then a model with differential rotation, with $\Omega$
decreasing away from the centre, is more suitable. As seen from the
figure, in this case the terminal Lorentz factor peaks at intermediate
angles.
In practice it may, however, be difficult to distinguish this case from
that of uniform rotation with nonuniform mass loading.
Turning now to the distribution of energy flux across the jets in the
asymptotic regime, we recall that the observational consequences of this
energy are strongly influenced by relativistic beaming --- whenever a
fraction of this energy is dissipated and converted into radiation, this
radiation will be beamed in the direction of motion of the corresponding
fluid element, given by $\theta_{\rm v}$. Most phenomenological
models of GRBs have assumed that the jet is conical
and has radial streamlines. Thus, the streamline angle, $\theta_{\rm v}$,
is equal to $\theta$, the polar angle of
the fluid element. In our model the streamlines are curved and
asymptotically their shape is close to that of the boundary (with the
exception of the cylindrical core). Hence we have
$\theta_{\rm v}\simeq\theta/a$. Consider a surface element normal to the
$\eta$ coordinate lines (streamlines)
$d\Sigma_\eta=\sqrt{g_{\phi \phi} g_{\xi \xi}} d\phi d\xi$,
where $g_{\phi \phi}$ and $g_{\xi \xi}$ are components of the metric tensor. Since in
the asymptotic regime $\theta,\theta_{\rm v}\ll 1$, we can write $d\xi=a
z^{1-1/a}d\theta_{\rm v}$, $g_{\phi \phi}=r^2$ and $g_{\xi \xi}=z^{2/a}$
(see Appendix A of Paper I),
and hence
\begin{eqnarray}\nonumber
d\Sigma_\eta=a^2z^2 d\omega,
\end{eqnarray}
where $d\omega=\theta_{\rm v} d\theta_{\rm v} d\phi$ is the solid angle defined by the
tangents to the streamlines passing through the surface element. The
power per unit solid angle is then given by
\begin{eqnarray}\nonumber
d{\cal L}/d\omega \equiv \epsilon = S^{\eta} a^2 R^2,
\end{eqnarray}
where $S^{\eta}$ is the component of the energy flux density in the $\eta$
direction.
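The intermediate algebra behind the expression $d\Sigma_\eta = a^2 z^2 d\omega$ can be spelled out as follows (a sketch, using the metric components quoted above together with the small-angle relations $r\simeq z\theta$ and $\theta_{\rm v}\simeq\theta/a$):

```latex
\begin{eqnarray}\nonumber
d\Sigma_\eta=\sqrt{g_{\phi \phi} g_{\xi \xi}}\, d\phi\, d\xi
= r\, z^{1/a}\cdot a z^{1-1/a}\, d\theta_{\rm v}\, d\phi
= a r z\, d\theta_{\rm v}\, d\phi
= a^2 z^2 \theta_{\rm v}\, d\theta_{\rm v}\, d\phi ,
\end{eqnarray}
```

where the last equality uses $r\simeq z\theta = a z\theta_{\rm v}$.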
The top right panel of Fig.~\ref{grb-profiles} shows the distribution of
kinetic power per unit solid angle, $\epsilon$
(the total power has a very similar distribution).
One can see that in all models it peaks at, or very close to, $\theta=0$.
The reason for this behaviour, which seemingly
conflicts with the Lorentz factor distribution shown in the top left
panel of the figure, is that the density distribution across the jet
is highly nonuniform, with the mass density strongly peaking near the symmetry
axis on account of the enhanced collimation of the flow in that region
(see Figs.~\ref{model-a}--\ref{model-b2h}). The bottom left panel of
Fig.~\ref{grb-profiles} shows the distribution of $\epsilon\theta$: this
quantity tells us how the jet power is distributed between annuli of
equal size in $\theta$.
One can see that in model D (differential rotation) more power comes
from the intermediate annuli, in model B2 (uniform rotation at
the base) from the outer annuli, and that
a significant core component emerges in model B2H (initially hot jet).
Note, however, that the distributions of the Lorentz factor and the power
depend on the choices of the density, magnetic flux and angular velocity
distributions at the inlet boundary, so different profiles may be possible.
The derived distributions of $\Gamma(\theta)$ and $\epsilon(\theta)$ are
markedly different from those commonly adopted in phenomenological
GRB jet models, which either take
them to be uniform within the jet half-opening angle
$\theta_j$ or else assume that the flow has a universal structure, with
$\epsilon$ being a Gaussian or a power-law in $\theta$ (in particular,
$\epsilon \propto \theta^{-2}$;
\citealt{RLR02} ---
compare with the bottom right panel of Fig.~\ref{grb-profiles}) outside
a uniform-core region.\footnote{
In the force-free electromagnetic model for GRBs it is envisioned that the
current flows along the axis of rotation and returns through the
equatorial plane; this yields an energy distribution $\propto
\theta^{-2}$ in the associated electromagnetic shell
\citep[e.g.][]{Bla02,Lyu06b}. A universal structured outflow with
$\epsilon \propto \theta^{-2}$ could
potentially also be produced when a relativistic GRB jet with possibly a
different initial energy distribution breaks out through the surface of
a massive progenitor star \citep{LB05}.} The structure exhibited by our
model jets is also different from that of a ``hollow cone,'' where the
flow occupies the region $\theta\in[\theta_j-\Delta\theta,\theta_j]$
\citep[e.g.][]{EL04,LB05}. Although the distribution of Lorentz factors
is reminiscent of such a cone, the distribution of kinetic power
actually peaks near the symmetry axis. Moreover, in contrast with the
phenomenological hollow-cone models considered in the literature, in
which $\Delta \theta \ll \theta_j$, our solutions yield configurations
with $\Delta \theta \sim \theta_j$. The detailed observational
implications of these structures remain to be explored.
\section{Conclusion}
\label{conclusion}
In this paper we extend our previous numerical study of magnetically
accelerated relativistic jets (Paper I) from the case of terminal Lorentz
factors $\Gamma_\infty \sim 10$, appropriate to AGN jets,
to $\Gamma_\infty \ga 10^2$,
appropriate to GRB jets. The larger values of $\Gamma_\infty$
reached in the present study enable us to compare results of our simulations,
carried out using the equations of special-relativistic ideal MHD, with the
asymptotic analytic formulae that we obtain from the constituent
equations in the limit $\Gamma \gg 1$. Our analysis of the results also
benefits from a comparison with semi-analytic solutions that were derived
under the assumption of radial self-similarity. We can summarize our
conclusions regarding the magnetic acceleration of ultra-relativistic
outflows as follows.
\begin{enumerate}
\item Our simulations verify that the MHD
acceleration mechanism remains robust even when the terminal Lorentz
factors reach the ultra-relativistic regime
($\Gamma_\infty \ga 10^2$). The simulated flows rapidly settle into
quasi-steady and seemingly stable configurations. A complete model would
need to incorporate non-axisymmetric effects, which we have not
considered.
\item A key property of magnetically driven relativistic flows in the
ideal-MHD regime is the spatially extended nature of their
acceleration. This property, which was first revealed by the
self-similar solutions and subsequently confirmed in the moderately
relativistic regime by the simulations reported in Paper I, is also a
distinguishing characteristic of jets accelerated to ultra-relativistic
speeds. For initially Poynting flux-dominated jets whose magnetic flux
surfaces can be approximated by paraboloids of the form $z\propto r^b$
(with $b\ge 1$), the Lorentz factor during the main magnetic
acceleration phase increases as $\Gamma \simeq (b/\sqrt{b-1}) z/r$ when
$1<b\le 2$ and as $\Gamma \simeq r/r_{\rm lc}$ when $b=1$ or $b>2$. After the
(increasing) kinetic energy flux becomes comparable to the (decreasing)
Poynting flux the growth of $\Gamma$ saturates, and thereafter it
increases at a much slower rate. (We have not been able to reach this
phase in models with $b>2$ due to the limitations of our numerical method.)
\item The conversion efficiency $\Gamma_\infty/\mu$ of
total injected energy to kinetic energy at the end of the
power-law acceleration phase lies in the range $55-75\%$ for the
initially cold simulated paraboloidal flows whose effective exponents
lie in the range $1<b \le 2$; the efficiency is smaller
for larger initial magnetization (or, equivalently, for higher
values of $\Gamma_\infty$).
A higher efficiency is attained in jets with $b<2$ that are initially
relativistically hot than in the corresponding initially-cold outflows:
in this case a measurable fraction
($>50\%$ in the example that we show) of the thermal energy flux is at
first converted into Poynting flux, thereby reducing the initial
thermal acceleration of the flow and enhancing the subsequent magnetic
acceleration.
\item In our simulations the flow is confined by a rigid wall whose
shape is described by $z\propto r^a$, with $a$ ranging from 2/3 to
3. We have conducted a
detailed analytic investigation of the relationship between a confining
pressure distribution of the form $p_{\rm ext}\propto z^{-{\alpha}}$ and
the shape of the jet boundary in the asymptotic regime of the magnetic
acceleration zone. We found that there is a one-to-one correspondence
between the functional forms of the pressure distribution and of the
boundary shape. Except for one special case (for which $a$ remains close
to 2), the jet becomes an exact paraboloid of the form given above,
with $a=4/{\alpha}>2$ for ${\alpha} < 2$ and $1<a\le 2$ for ${\alpha} = 2$. When
${\alpha} > 2$ the jet cannot maintain pressure equilibrium with the
ambient medium and asymptotes to a conical shape. This situation is
reproduced in our simulations by unconfined flows as well as by flows
with $a \le 1$. In this case the outer regions of the jet become causally
disconnected (the local opening half-angle of the field lines becomes
larger than the local half-angle of the Mach cone of fast-magnetosonic
waves), and only the innermost regions continue to collimate and
accelerate.
\item We find that for all current-carrying jets (irrespective of whether
the return current flows inside or outside the jet) the innermost
field lines are more strongly collimated than the exterior ones,
indicating ``self collimation'' by the magnetic hoop stress (see also
Paper I). This redistribution of the poloidal field lines within the
jet is directly responsible for the high acceleration efficiency of the
flow.
\end{enumerate}
We have applied our results to GRB sources, taking into account the
constraints imposed by the detected prompt and afterglow emission on the
properties of the ultrarelativistic jets that evidently give rise to the
GRB phenomenon. Our main conclusions are:
\begin{enumerate}
\item
Initially Poynting flux-dominated outflows can be magnetically
accelerated to a Lorentz factor exceeding the minimum ($\Gamma\sim
10^2$) inferred in long/soft GRBs within a distance of $\sim
10^{11}-10^{12}\; {\rm cm}$ from a rapidly rotating stellar-mass black
hole or a millisecond magnetar. Thus, most of the acceleration of
long/soft GRB jets can be achieved inside a typical progenitor star
in the collapsar model, whose envelope provides a natural confining
environment for the jets. Lack of confinement outside of the star may
result in a radial outflow characterized by loss of causal
connectivity across the jet and inefficient acceleration.
An alternative confinement mechanism that is of particular relevance to
short/hard GRBs, which likely form through a
merger of compact stars rather than in the collapse of a massive star,
is a disc wind.
The MHD acceleration mechanism implies that the minimum bulk Lorentz
factor inferred in short/hard GRBs ($\Gamma\sim 30$) could be attained
within the distance that such a wind covers over the burst duration if
the disc outflow (which might also be driven magnetically) has at least
a moderately relativistic speed ($\sim 0.1-1\, c$).
\item
The MHD acceleration model entails a high ($\ga 50\%$) asymptotic
conversion efficiency of injected magnetic and thermal energy into
bulk kinetic energy for effectively confined flows.
If the initial magnetization is of the same
order as that of the inferred Lorentz factor of a GRB jet,
$\sigma_0\sim 10^2-10^3$, the energy conversion can be attained on a
spatial scale that is smaller than the indicated size of the prompt
emission region. The model is then compatible with the internal-shocks
scenario for GRBs. For a much higher initial magnetization the jet
remains Poynting flux-dominated on these scales and the prompt
emission has to be attributed to direct magnetic energy dissipation,
as in the magnetodynamics scenario. A full treatment of the dynamics
of such jets in the context of MHD would require taking account of
the acceleration induced by the field-dissipation process and the use of
a non-ideal, relativistic-MHD code.
\item
We have found that the MHD jet model places a strong
constraint on the product of the Lorentz factor and
the half-opening angle of the streamline in the asymptotic regime of the
main acceleration region: $\Gamma\theta_{\rm v}\simeq 1$
along paraboloidal streamlines $z \propto r^b$ when $b\le 2$ (but $b$ not
too close to 1), and $\Gamma\theta_{\rm v}\propto z^{-(1-2/b)}$ (and thus
attaining even smaller values at the end of the main acceleration phase)
when $b>2$.
This feature is unique to the ideal MHD
mechanism and could potentially serve to distinguish it from alternative
models, notably the classical fireball scenario (in which
$\Gamma\theta_{\rm v}$ is envisioned to be $\gg 1$ at the end of the
acceleration region). In particular, this property implies that, if
long/soft GRB jets with $\Gamma \ga 100$ are magnetically accelerated,
they must be collimated to $\theta_{\rm v} \la 1^\circ$.
This result is consistent with one of the interpretations of the
prompt emission ``tails'' discovered by {\it Swift},
although this is not the only possible explanation of a very small
collimation angle. This relationship also indicates that
the $\gamma$-ray emitting outflow component might exhibit a
panchromatic jet break (corresponding to $\Gamma\theta_{\rm v}$
decreasing from a value $>1$ through $\simeq 1$ to a value $<1$)
soon after it enters the afterglow phase, although in principle no such
break need occur (corresponding to cases where this product is $<1$ at
the end of the acceleration zone). A later jet break could
potentially be seen if the outflow has more than one kinematic component.
\item
The magnetic acceleration model also makes specific predictions about
the angular distributions of the terminal Lorentz factor and of the
kinetic and total energy per unit solid angle across the jet, which can
be probed by a variety of observations. These distributions depend on
the magnetization profile and the thermal energy content of the jet at
the inlet boundary, which could in principle be constrained by the
observations. A general characteristic of this model is that
$\Gamma_\infty(\theta)$ {\it decreases}\/ with decreasing polar angle
$\theta$ near the symmetry axis.
\end{enumerate}
Although our analytic scalings have been derived in the limit where the
jet is in the force-free regime, we emphasize that key parameters of
interest for astrophysical applications --- including the jet velocity
and the magnetic-to-kinetic energy conversion efficiency --- could have
only been obtained within the magneto{\em hydro}dynamics formalism that we
adopted and not in the magnetodynamics (or force-free electrodynamics)
approximation adopted in other recent semi-analytic and numerical
investigations. Another point worth emphasizing is that the acceleration
mechanism investigated in this paper is identical to that considered in
Paper I. Our results are consistent with the view that the main
difference between ``superluminal'' AGN jets and GRB jets is that the
latter outflows have a higher initial magnetization (and possibly also a
higher initial enthalpy), which leads to their correspondingly higher
terminal Lorentz factors. If this picture is correct, one could use
observations of AGN and GRB sources to deduce complementary aspects of
the same basic phenomenon. For example, one could take advantage of the
fact that the acceleration region in AGN jets is potentially resolvable
by radio interferometry to probe the details of the acceleration
process; one could then consider the implications for GRB jets, which are
not directly accessible to such observations.
\section*{Acknowledgments}
This research was funded by PPARC under the rolling grant
``Theoretical Astrophysics in Leeds'' (SSK and MVB).
NV acknowledges partial support by the Special Account
for Research Grants of the National and Kapodistrian University of Athens.
AK was partially supported by a NASA Theoretical Astrophysics Program grant.
We thank Vasily Beskin for many helpful comments on the magnetic acceleration
mechanism and Jonathan Granot for useful discussions of GRB issues.
\section{Introduction}\label{sec:intro}
One development in knot theory is
to define and study the structure of the topological space $\mathcal{S}$
composed of isotopy classes of knots.
Here, a knot is an embedding of an oriented circle in the 3-sphere $S^3$ which can be
represented as a finite number of line segments, and a link is a disjoint union of knots.
We identify knots which are isotopic to one another.
In \cite{hu}, the {\it Gordian Complex} $\mathcal{G}$ of knots was defined as follows:
The vertex set of $\mathcal{G}$ consists of isotopy classes of knots, and
a set of $n+1$ vertices $K_{0}, \dots, K_{n}$ spans an $n$-simplex
if and only if any pair of knots in it can be changed into each other by a single crossing change.
Since then, many studies have been done by replacing
the crossing change with other local operations on knots and on virtual knots as in \cites{lyz,gmv,hkys,ik,no}.
In the following, we consider the operation
of Murasugi sum along Seifert surfaces of links. An oriented, embedded surface $F$ without a closed component is
called a
{\it Seifert surface} for an oriented link $L$ if its boundary $\partial F$ coincides with the oriented link $L$.
An $m$-Murasugi sum is an operation to glue two Seifert surfaces $F_1$ and $F_2$ along an $m$-gon with $m$ even
(for the precise definition, see Definition \ref{dfn:msum}).
We may regard a fixed knot $K$ as an operation on the space of knots as follows.
A knot $K_1$ is changed to a knot $K_2$ via $K$ if a Seifert surface for $K_2$ is
obtained from a Seifert surface $F_1$ for $K_1$
by Murasugi-summing a Seifert surface $F$ for $K$ to $F_1$.
Thus for each fixed $K$, the space of knots $\mathcal{S}$ has the structure of a directed graph,
where an edge is an arrow or a double-headed arrow.
\begin{defn}\label{dfn:m-graph}
For a knot $K$,
the {\it Murasugi sum graph} of knots $M\kern-1ptSG(K)$ is
a directed graph such that
(1) the vertex set consists of all isotopy classes of knots,
(2) two vertices $K_1$ and $K_2$ are connected by an edge
with arrowhead on $K_2$ if
there exist Seifert surfaces $F, F_1, F_2$ for $K,K_1, K_2$ such
that $F_2$ is a Murasugi sum of $F$ and $F_1$.
The {\it restricted Murasugi sum graph} $M\kern-1ptSG(K, n)$ is
considered by only allowing Murasugi sums along $m$-gons
with $m\leq n$.
\end{defn}
For $n=2$, a $2$-Murasugi sum is the connected sum operation.
Hence for any knot $K$,
$M\kern-1ptSG(K, 2)$ has an obvious structure where the
edges are arrows from each knot $K'$ to the connected sum of
$K$ and $K'$. For $n=4$, a $4$-Murasugi sum is called a {\it plumbing}.
It was shown in \cites{mura1, gabai, stall} that nice geometric properties of knots and surfaces
(such as fiberedness and genus-minimality)
were preserved under Murasugi sums and decompositions
of minimal genus Seifert surfaces. On the other hand, Thompson \cite{thomp} gave examples where
the trefoil is obtained as a plumbing of two unknots, and the unknot is obtained as a plumbing of two figure-eight knots. Thus,
expectations of generalizing these preservation results to Murasugi sums of
non-minimal-genus Seifert surfaces were refuted.
In this paper, we generalize Thompson's examples to show that given any three knots, we can produce one of them as a Murasugi sum of the other two.
\begin{thmA1}
For any three knots $K_1, K_2, K_3$, there exist
Seifert surfaces $F_1, F_2,$ $F_3$ for them such that
$F_3$ is a Murasugi sum of $F_1$ and $F_2$.
\end{thmA1}
Therefore we have the following:
\begin{cor2}
For a knot $K$, any set of knots
$\{K_1, K_2, \dots, K_n\}$ in
$M\kern-1ptSG(K)$
composes a complete graph where all the edges are bi-directed.
\end{cor2}
We refine the result of Theorem A1 by giving an algorithm to find a closed braid for $K_3$ which naturally splits into closed braids for $K_1$ and $K_2$. See Figures \ref{fig:ex1},\,\ref{fig:ex2} following Example \ref{ex:knots-braid}.
\begin{thmA2}
For any three knots $K_1, K_2, K_3$, there are braids $b_1, b_2, b_3$ such that
$K_i$ is the closure of $b_i$ $(i=1,2,3)$, satisfying
the following:
If the braid $b_3$ is expressed as a braid word $W_3$
with generators\\
$\sigma_1, \sigma_2, \dots, \sigma_k, \dots, \sigma_n$,
then $W_1$ (resp. $W_2$) is obtained from
$W_3$ by deleting the generators $\sigma_1, \dots, \sigma_k$ (resp. $\sigma_{k+1}, \dots, \sigma_n$).
\end{thmA2}
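To illustrate the word-splitting in Theorem A2, here is a minimal sketch (not code from the paper) that represents a braid word as a list of signed integers, with $\pm i$ standing for $\sigma_i^{\pm 1}$, and performs the two deletions; the particular word and split index are hypothetical:

```python
def split_braid_word(W3, k):
    """Split a braid word W3 as in Theorem A2:
    W1 deletes sigma_1, ..., sigma_k (keeps generators with index > k);
    W2 deletes sigma_{k+1}, ..., sigma_n (keeps generators with index <= k)."""
    W1 = [s for s in W3 if abs(s) > k]
    W2 = [s for s in W3 if abs(s) <= k]
    return W1, W2

# Hypothetical braid word in sigma_1, ..., sigma_5, split at k = 2:
W1, W2 = split_braid_word([1, 2, -3, 4, 2, -1, 5], 2)
print(W1)  # [-3, 4, 5]
print(W2)  # [1, 2, 2, -1]
```

In practice one would reindex $W_1$ (shifting each index down by $k$) to view it as a word in a braid group on fewer strands; the sketch keeps the original indices as in the statement of the theorem.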
To further study the structure of $M\kern-1ptSG(K, n)$ where
the size of Murasugi sums is limited, we give, in Section
\ref{sec:ngon}, lower and upper bounds on the minimal $m$-gon required to form $K_3$ as a Murasugi sum of $K_1$ and $K_2$. Our bounds are in terms of $d_{cb}(K,K')$ and $d_{bt}(K,K')$, which are, respectively, the minimal number of coherent band surgeries (resp. band-twists) required to transform $K$ into $K'$. For the precise definition of these operations, see Definitions \ref{dfn:bandtwist} and \ref{dfn:bandsurg}.
\begin{thmB}
If $K_3$ is an $m$-Murasugi sum of $K_1$ and $K_2$ such that $m$ is minimal, then
\begin{center}
$d_{cb}(K_1\# K_2, K_3)+2\leq m\leq 2(d_{bt}(K_1,K_3)+d_{bt}(K_2,O)+1), $
\end{center}
where the roles of $K_1$ and $K_2$ can be switched to improve the upper bound.
\end{thmB}
Another motivation for the study of manipulation of Seifert surfaces
is to provide tools for physically constructing knots as in
DNA, polymers, and proteins.
Manipulating Seifert surfaces seems handier than manipulating the knot itself,
while dissolving the surface yields a knot on the boundary. So in the Appendix (Section \ref{sec:append}), we
prove the following two results regarding particularly nice Murasugi decompositions of Seifert surfaces of links in terms of positive Hopf bands, planar surfaces, and punctured Heegaard surfaces (i.e., the boundary of a standardly embedded handlebody).
\begin{thmC1}\label{thm:heggaard-positive}
Any link $L$ has a Seifert surface $F$ which is
a Murasugi sum of
a once-punctured Heegaard surface
and
a boundary connected sum of positive Hopf bands.
\end{thmC1}
\begin{thmC2}\label{thm:three-planar}
For any link $L$, there exists a Seifert surface $F$ for
$L$ which consists of three planar surfaces
$P, P_1, P_2$, where $P_1$ (resp. $P_2$) is Murasugi summed
to $P$ on the positive (resp. negative) side.
\end{thmC2}
\section{Any knot is a Murasugi sum of any two knots}\label{sec:anyknotsum}
The original construction of the Murasugi sum was first introduced by Murasugi in \cite{mura1} and was later coined as the Murasugi sum by Gabai in \cite{gabai}. For simplicity, we define the Murasugi sum in terms of Murasugi decomposition. See Figure \ref{fig:6sum}.
\begin{defn}\label{dfn:msum}
Let $F$ be a Seifert surface in $S^3$, let $\Sig$ be a 2-sphere such that $S^3\setminus \Sig$ is a union of two open 3-balls $B_1,B_2$ and such that $\Sig \cap F$ is an $m$-gon $\Om$ with $m$ even. Denote $F\cap \overline{B_1}=F_1$ and $F\cap \overline{B_2}=F_2$. Then we say that $F$ {\it decomposes} into $F_1$ and $F_2$ along $\Om$.
\end{defn}
\incf{50}{msumdef.eps}{A local picture of a 6-Murasugi sum}{fig:6sum}
If $F$ decomposes into $F_1$ and $F_2$ along an $m$-gon, then $F$ is said to be an $m$-Murasugi sum of $F_1$ and $F_2$, which we denote by $F_1 \star_m F_2$.
Given two knots $K_1,K_2$, we write $K_1\star_m K_2$ to denote the boundary of $F_1 \star_m F_2$ for some Seifert surfaces $F_1,F_2$ for $K_1,K_2$. Note that the summing disk $\Om$ can initially appear stretched and twisted, but we can isotope the given surfaces to see $\Om$ as flat. We can also isotope the surfaces so that $\Sig$ corresponds to the plane $z=0$ in $\bbr^3$, and in this situation, if $F_1$ lies above (resp. below) $z=0$ and $F_2$ lies below (resp. above) $z=0$, then we say that we Murasugi sum $F_1$ onto the positive (resp. negative) side of $F_2$.
There are several operations one can perform on knot diagrams to obtain a new knot. One such operation is a crossing change, and more generally, an antiparallel full-twisting.
\begin{defn}\label{dfn:bandtwist}
An {\it antiparallel full-twisting} on an oriented link is a local move where we
select a pair of
locally antiparallel strings
\raisebox{-5pt}{\includegraphics[height=13pt]{./figures/bandbefore.eps}} and apply some number of full twists
\raisebox{-5pt}{\includegraphics[height=13pt]{./figures/bandafter.eps}}.
\end{defn}
For convenience, we sometimes refer to antiparallel full-twisting as band twisting. We can realize this twisting operation along an arc $\alpha$,
where $\alpha$ is a short, unknotted arc that connects two antiparallel strings of a link $L$ and is contained within
a small ball $B$ such that $L\cap B$ is a trivial $2$-string tangle.
In this setting, we can span a Seifert surface $F$ for $L$ such that
$F \cap B$ is a rectangle $b$ containing the arc $\alpha$.
Then the twisting operation is realized by applying some full-twists to $b$.
Consider the two Seifert surfaces of the unknot in Figure \ref{fig:twounknots}. The following proposition states that a crossing change within a knot can be realized by either plumbing or deplumbing these surfaces. More generally, by increasing the number of full twists in $R_+$ or $R_-$, we can realize any antiparallel full-twisting by either plumbing or deplumbing Seifert surfaces for the unknot.
\incf{32}{unknots.eps}{Two Seifert surfaces for the unknot}{fig:twounknots}
\begin{prop}\label{prop:cc-by-plumbing}
Let $K_1$, $K_2$ be knots such that
$K_2$ is obtained by changing a positive crossing in $K_1$.
Then there exist Seifert surfaces $F_1$, $F_1'$ for $K_1$,
$F_2$, $F_2'$ for $K_2$, and $R_{+},R_{-}$ for the unknot
satisfying the following:
\begin{enumerate}
\item $F_1'$ is a plumbing of $F_2$ and $R_{+}$.
\item $F_2'$ is a plumbing of $F_1$ and $R_{-}$.
\end{enumerate}
\end{prop}
\begin{proof}
We illustrate both statements in Figure \ref{fig:crosschange}.
\incf{90}{crossingchange.eps}{Changing a positive crossing}{fig:crosschange}
\end{proof}
Furthermore, we can perform any number of crossing changes simultaneously via a single Murasugi sum with a Seifert surface for the unknot,
which allows us to prove the following lemma.
\begin{lem}\label{lemm:sum-of-unknots}
Any knot $K$ has a Seifert surface $F$
which is a Murasugi sum of
two Seifert surfaces $F_1, F_2$ for the unknot.
\end{lem}
\begin{proof}
For any diagram $D$ of $K$, we can choose a
subset ${\mathcal C}$ of crossings such that
we obtain the unknot by simultaneously changing
the crossings in ${\mathcal C}$.
To see this, start at some point in $D$ and walk along
the knot. Then we can specify ${\mathcal C}$ to consist of
those crossings which we enter first along an under-path and later along an over-path.
Near each crossing
\raisebox{-5pt}{\includegraphics[height=13pt]{./figures/cross.eps}}
in ${\mathcal C}$,
apply a Reidemeister II move to introduce an
antiparallel clasp
\raisebox{-5pt}{\includegraphics[height=13pt]{./figures/triplecross.eps}}, where undoing the clasp results in changing that crossing as in \raisebox{-5pt}{\includegraphics[height=13pt]{./figures/unclasped.eps}}.
Thus we obtain a new diagram $D'$ for $K$, and we obtain the unknot by simultaneously undoing
all the clasps.
Put a 3-ball $B$ in the complement of $K$
and isotope $K$ so that all the clasps are in $B$,
and span a Seifert surface $F$ for $K$
as in the left part of Figure \ref{fig:ooK}.
Then $F$ decomposes into two Seifert surfaces $F_1$
and $F_2$, where $\partial F_2$ is the unknot and
$\partial F_1$ is the result of undoing all the clasps in $K$,
and hence is the unknot.
\end{proof}
\incf{98}{unknotdecomp.eps}{Producing $\partial F=K$ as a Murasugi sum of unknots}{fig:ooK}
Conversely, any knot is obtained from an unknot
by simultaneously removing antiparallel clasps.
Therefore, in the proof above, we may regard
$F$ and $F_2$ as surfaces for the unknot and $F_1$ as a surface for $K$. This
gives the following.
\begin{cor}\label{cor:unknoting-by-summing-unknot}
Any knot $K$ has a Seifert surface $F_1$ which
becomes a Seifert surface for the unknot by Murasugi summing $F_1$
with some Seifert surface for the unknot.
\end{cor}
\begin{proof}[Proof of Theorem A1.]
By Corollary \ref{cor:unknoting-by-summing-unknot}, there is a Seifert surface $F_1$ (resp. $F_2$) for $K_1$ (resp. $K_2$) that Murasugi sums with a Seifert surface $F_1'$ (resp. $F_2'$) of the unknot $O$ to yield a Seifert surface for $O$. By Lemma \ref{lemm:sum-of-unknots}, there exist two Seifert surfaces $F_3,F_3'$ for $O$ that Murasugi sum to a Seifert surface for $K_3$. The boundary connected sum of $F_1,F_2',F_3$ (resp. $F_1',F_2,F_3'$) is a Seifert surface for $K_1$ (resp. $K_2$), and we Murasugi sum these surfaces as in Figure \ref{fig:JKL} along the shaded $n$-gon.
\end{proof}
\incf{94}{maintheorem2.eps}{$K_3$ as a Murasugi sum of $K_1$ and $K_2$}{fig:JKL}
\begin{proof}[Proof of Theorem A2.]
By Theorem A1, there are Seifert surfaces $F_1,F_2,F_3$ for $K_1,K_2,K_3$
such that $F_3$ is a Murasugi sum of $F_1$ and $F_2$ and the summing disk $\Omega$ is flat.
Apply a trivial twist at each band attached
to $\Omega$ as depicted in Figure \ref{fig:algo}.
Then we have a diagram $D'$ such that
the summing disk is spanned by a Seifert circle $C$.
Note that the canonical Seifert surface
$F_3'$ is a Murasugi sum
of $F_1'$ and $F_2'$ along the summing disk
$\Omega$, where
$ \partial F_1' =\partial F_1, \partial F_2' =\partial F_2, \partial F_3' =\partial F_3$.
Apply Yamada's braiding algorithm \cite{yam} to $D'$,
independently inside and outside $C$.
Then we have the desired braids.
\end{proof}
\incf{90}{braidform.eps}{Making a canonical surface for the braid decomposition}{fig:algo}
We use the notation $F\xrightarrow{\partial}K$ to mean that $\partial F=K$, and $F_1\stackrel{\partial}{=}F_2$ to mean that $\partial F_1=\partial F_2$.
\begin{exmp}\label{ex:knots-braid}
In Figure \ref{fig:ex1}, we illustrate $7_5$ as a Murasugi sum of two unknots, and in Figure \ref{fig:ex2}, we illustrate the unknot as a Murasugi sum of $5_2$ and $7_5$. In these figures, we also express the Murasugi sums in terms of braid decompositions, as described in Theorem A2. Note that we simplified some procedures in the proofs of Theorems A1 and A2.
In particular, we already have a Seifert circle corresponding to the summing disk without the twisting of Figure \ref{fig:algo},
we eliminated some Seifert circles with two bands, and
we used ``long" bands to save us from depicting many generators.
\end{exmp}
\incf{84}{7_5total.eps}{$7_5$ as a Murasugi sum of two unknots, and its braid decomposition}{fig:ex1}
\incf{91}{7_55_2total.eps}{The unknot as a Murasugi sum of $5_2$ and $7_5$, and its braid decomposition}{fig:ex2}
\section{Constraining the minimal $m$-gon}\label{sec:ngon}
In Theorem A1, it was shown that given three knots $K_1,K_2,K_3$,
we can form $K_3$ as a Murasugi sum of $K_1$ and $K_2$ by using a sufficiently large $m$-gon.
We begin this section with Definition \ref{dfn:m-distance} in order to give lower and upper bounds for the minimal such $m$-gon, culminating in Theorem B from the Introduction. In particular, if we restrict the size of our $m$-gons, Theorem B obstructs forming a knot as a Murasugi sum of two given knots. Previously, the only restriction on Murasugi sums was for plumbings of fiber surfaces, such as in \cite{melmor}, where Melvin and Morton showed that the Conway polynomials of fibered knots of genus 2 take a restricted form when the fiber surface is a plumbing of Hopf bands.
\begin{defn}
\label{dfn:m-distance}
Given three oriented knots $K_1,K_2,K_3$, define the
\emph{minimal size of Murasugi summation} for $K_3=K_1\star K_2$ to be
\begin{center}
$d_M(K_1,K_2;K_3)=\min\{m\;|\; F_3=F_1\star_m F_2, \; \partial F_i=K_i\text{ for }i=1,2,3\},$
\end{center}
where the minimum is taken over $F_1,F_2$.
\end{defn}
Recall that $m$ is even. From the definition, $d_M(K_1,K_2;K_3)=2$ if and only if $K_3=K_1\# K_2$.
Also, if $K_3 \neq K_1\# K_2$ can be realized as a plumbing of $K_1$ and $K_2$, then $d_M(K_1,K_2;K_3)=4$.
Now we recall the notion of a coherent band surgery. See Figure \ref{fig:bandsurgex}.
\incf{85}{bandsumex.eps}{A band surgery along an unlink yielding $3_1\# \overline{3_1}$}{fig:bandsurgex}
\begin{defn}\label{dfn:bandsurg}
Given a knot or link $L$ and an embedded band $b:I\times I\rightarrow S^3$ with $L\cap b(I\times I)=b(I\times \partial I)$, we obtain a new link $L'=(L\setminus b(I\times \partial I))\cup b(\partial I\times I)$, and we say that $L'$ is obtained from $L$ by a \emph{band surgery}. For oriented $L,L'$, a \emph{coherent} band surgery is a band surgery that respects the orientations of both links, that is, $L\setminus b(I\times \partial I)=L'\setminus b(\partial I\times I)$ as oriented spaces.
\end{defn}
Given two oriented links $L,L'$, we denote by $d_{cb}(L,L')$ the minimal number of coherent band surgeries required to produce $L'$ from $L$. This number is known as the coherent band-Gordian distance, and in the case of knots it is equal to twice the $SH(3)$-Gordian distance \cite{kan}. In \cite{kanmor}, $d_{cb}$ is calculated for most pairs of knots up to seven crossings. For knots $K,K'$, note that $d_{cb}(K,K')$ is necessarily even because a coherent band surgery changes the number of components by one.
We have the following lower bounds for $d_M$ in terms of $d_{cb}$ and the signature $\sig$.
\begin{thm}\label{thm:lowerbounds}
For knots $K_1,K_2,K_3$, we have
\begin{center}
$ d_{cb}(K_1\# K_2,K_3)+2\leq d_M(K_1,K_2;K_3).$
\end{center}
Consequently,
\begin{center}
$ d_{cb}(K_1\sqcup K_2,K_3)+1\leq d_M(K_1,K_2;K_3),$
\end{center}
where $K_1 \sqcup K_2$ is a split link, and
\begin{center}
$|\sig(K_1)+\sig(K_2)-\sig(K_3)|+2\leq d_M(K_1,K_2;K_3).$
\end{center}
\end{thm}
\begin{proof}
Suppose $K_3$ is an $m$-Murasugi sum of $K_1$ and $K_2$, where $m=2n$ is minimal. Then there is a sequence of $2(n-1)=m-2$ coherent band surgeries between $K_3$ and $K_1\#K_2$, where each band lies within a Seifert surface for $K_3$, so that
\begin{center}
$d_{cb}(K_1\#K_2,K_3)+2\leq m.$
\end{center}
See Figure \ref{fig:obstruct}. With one more band surgery, we have $m-2+1$ coherent band surgeries between $K_3$ and the split link $K_1 \sqcup K_2$, so that
\begin{center}
$d_{cb}(K_1\sqcup K_2,K_3)+1\leq m.$
\end{center}
If an oriented link $L'$ is obtained from $L$ by a coherent band surgery, an estimate was given by Murasugi \cite{mura2} on the difference of the signatures as $|\sig(L)-\sig(L')|\leq 1$. Since $\sig(K_1\# K_2)=\sig(K_1)+\sig(K_2)$, we obtain the third inequality.
\end{proof}
\incf{65}{bandcuts.eps}{Performing $(6-2)$ band surgeries to recover $K_1\#K_2$ from $K$}{fig:obstruct}
\begin{exmp}
By Theorem \ref{thm:lowerbounds}, the signature obstructs forming $9_1$ as a plumbing of two copies of $3_1$. However, Figure \ref{fig:9_1=3_1+3_1} depicts how $9_1$ desums into two copies of $3_1$ along the shaded 6-gon, which is obtained by first merging two 4-gons into an 8-gon which is then reduced to the 6-gon. This process is explained in the proof of Lemma \ref{lemm:mergesum}.
\end{exmp}
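The signature computation behind this obstruction can be checked directly. The following sketch uses the standard values $\sig(3_1)=-2$ and $\sig(9_1)=-8$ (via $\sig(T(2,2k+1))=-2k$ for torus knots), which are not stated in the text:

```python
# Signatures of the torus knots 3_1 = T(2,3) and 9_1 = T(2,9),
# using the standard convention sigma(T(2, 2k+1)) = -2k.
sigma = {"3_1": -2, "9_1": -8}

# Signature lower bound of Theorem 3.5:
# |sigma(K1) + sigma(K2) - sigma(K3)| + 2 <= d_M(K1, K2; K3)
lower_bound = abs(sigma["3_1"] + sigma["3_1"] - sigma["9_1"]) + 2
# The bound is 6, which exceeds 4, so 9_1 is not a 4-Murasugi sum
# (i.e., a plumbing) of two trefoils.
assert lower_bound == 6
```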
\incf{93}{9_13_1.eps}{$9_1$ as a 6-Murasugi sum of $3_1$ and $3_1$}{fig:9_1=3_1+3_1}
\begin{rmk}
For pairs of knots where $d_{cb}$ has not been determined, we may apply lower bounds for $d_{cb}$ from \cite{kan} in terms of the smooth four-ball genus $g_4$, hence in terms of $\sig$ by \cite{mura2}, and in terms of the Nakanishi index $e$. Indeed, for knots $K_1,K_2$ we have
\begin{center}
$|\sig(K_1)-\sig(K_2)| \leq 2g_4(K_1\# - K_2) \leq d_{cb}(K_1, K_2),$
\end{center}
where $-K_2$ is the reverse mirror of $K_2$, and
\begin{center}
$|e(K_1)-e(K_2)| \leq d_{cb}(K_1,K_2).$
\end{center}
\end{rmk}
We now move on to some upper bounds of $d_M$. Using Lemma \ref{lemm:band-twists-at-once} below, we can easily modify the proof of Theorem A1 to form $K_3$ as a $4(u(K_1)+u(K_2)+u(K_3))$-Murasugi sum of $K_1$ and $K_2$, where $u(K)$ is the unknotting number of $K$. This gives the upper bound
\begin{center}$d_M(K_1,K_2;K_3)\leq 4(u(K_1)+u(K_2)+u(K_3)).$\end{center}
In what follows, we improve this upper bound into Theorem \ref{thm:improve}. Recall that the Gordian distance $d_G$ between two knots $K,K'$ is the
minimal number of crossing change operations required to transform a
diagram for $K$ into a diagram for $K'$, where the minimum is taken over all diagrams for $K$.
Since $K$ can be transformed into $K'$ by a sequence of crossing changes that passes through the unknot,
we have $d_G(K,K')\leq u(K)+u(K')$. More generally, we consider the band-twist distance $d_{bt}$.
\begin{defn}\label{dfn:bt-distance}
Define the {\it band-twist distance} between two oriented links $L,L'$,
denoted by $d_{bt}(L,L'),$ as the minimal $n$ such that there exists a
sequence of links $L=L_0,L_1,L_2,\dots, L_n=L'$,
where $L_{i+1}$ is obtained from $L_{i}$ by an antiparallel full-twisting (a band-twist) as in Definition \ref{dfn:bandtwist}.
\end{defn}
Since any crossing change may be realized as a band-twist, we have $d_{bt}(K,K')\leq d_G(K,K').$ Just as with crossing changes, we may perform any number of band-twist operations simultaneously.
\begin{lem}\label{lemm:band-twists-at-once}
For two links $L$ and $L'$ with $d_{bt}(L,L') =n$,
there exists a Seifert surface $F'$ for $L'$
with a set $A$ of $n$ mutually disjoint properly embedded arcs such that
a Seifert surface $F$ for $L$ is obtained by applying a band-twist operation
along each arc in $A$.
\end{lem}
\begin{proof}
Let $L=L_0,L_1,L_2,\dots, L_n=L'$ be a sequence of
links related by the band-twisting operation.
After obtaining $L_1$ from $L_0$ by twisting along an
arc $\alpha_0$, instead of erasing $\alpha_0$, isotope it so that
it is disjoint from the arc $\alpha_1$ used to obtain $L_2$.
Repeating this, we obtain a set of arcs $\alpha_0, \alpha_1, \dots, \alpha_{n-1}$
attached to $L_n=L'$.
By an isotopy, we can arrange the arcs to be short and contained in a ball $B$ with
$L'\cap B$ being a trivial $2n$-string tangle.
Splice $L'$ along the arcs and push the resulting link $L''$ slightly off $B$.
Span a Seifert surface for $L''$ disjoint from $B$.
Then we obtain the desired Seifert surface for $L'$ by attaching bands to $L''$
that pass through $B$.
\end{proof}
If we wish to perform several Murasugi sums of several surfaces with a single surface, we can often combine these sums into a single sum, as indicated by the following lemma. One implication is that we may perform any number of band-twist operations simultaneously via a single Murasugi sum.
\begin{lem}\label{lemm:mergesum}
Suppose that a Seifert surface $F'$ is
obtained by Murasugi summing
$S_1, S_2, \dots, S_n$
on the same side of a connected Seifert surface $F$ along mutually disjoint
summing disks $\Om_1, \Om_2, \dots, \Om_n$.
Let $\Om_i$ be an $e_i$-gon $(i= 1, 2, \dots, n)$.
Then $F'$ is a Murasugi sum of $F$ with
a boundary connected sum of $S_1, S_2, \dots, S_n$
along an $m$-gon, where $m=\sum_{i=1}^n e_i$.
Moreover,
if $F$ is a Seifert surface for a knot,
then we can merge the sums into an
$m'$-gonal sum with
$m'=m-2(n-1)$.
\end{lem}
\begin{proof}
We may assume that each summand $S_i$ is contained in a thin blister neighborhood of
the summing disk $\Om_i$.
Denote the edges in $\partial \Om_i$ as $a_{i,1},b_{i,1},a_{i,2}, b_{i,2}, \dots$ for $i=1,2,\dots,n$,
where the $a_{i,\cdot}$'s are sub-arcs of $\partial F$ and the $b_{i,\cdot}$'s are properly embedded arcs in
$F$.
Since $F$ is connected, we may find an embedded arc $\gamma$ in $F$ whose endpoints are
the midpoints of $b_{1,j}$ and $b_{2,k}$ for some $j$ and $k$ as in Figure \ref{fig:mergedisks}.
We merge the two (dark shaded) summing disks $\Omega_1$ and $\Omega_2$ into an $(e_1+e_2)$-gon
$\Omega'$ whose boundary consists of
$(\left(\partial \Omega_1 \cup \partial \Omega_2)\setminus (b_{1,j}\cup b_{2,k})\right) \cup (\gamma_1 \cup \gamma_2)$,
where $\gamma_1$ and $\gamma_2$ are properly embedded in $F$ and
$b_{1,j} \cup \gamma_1 \cup b_{2,k} \cup \gamma_2$ is a rectangle $R=\gamma \times I$ such that
$R \cap {\rm int}\! \bigcup_{i=1}^n \Omega_i=\emptyset$.
We see that the boundary connected sum of $S_1$ and $S_2$ is contained in a thin blister neighborhood of the new summing disk $\Omega'$.
By repeating this merging operation, we eventually combine all the summing disks into one and
obtain the desired Murasugi sum.
For the last part of the assertion, the assumption that $\partial F$ is connected ensures the
existence of two arcs $a_{1,p}, a_{2,q}$ for some $p, q$ such that one segment of $\partial F$ between
$a_{1,p}$ and $a_{2,q}$ does not pass through other summing disks.
Then we can apply the previously mentioned merging of $\Omega_1$ and $\Omega_2$ so that
$\gamma_1$ or $\gamma_2$,
say $\gamma_1$, can be isotoped to a sub-arc of
$\partial F$ in $F \setminus \bigcup_{i=1}^n \Om_i$.
Then the three consecutive edges
$a_{1,p}, \gamma_1, a_{2,q}$ in $\Omega'$
can be regarded as one edge by merging the bands
of $S_1$ and $S_2$ attached to $a_{1,p}$ and $a_{2,q}$ as in Figure \ref{fig:reducedisks}.
Then the new summing disk $\Omega''$
is an $(e_1+e_2-2)$-gon.
\end{proof}
\incf{70}{diskmerge2.eps}{Merging disks $\Om_1,\Om_2$ in $F$ along $\g$}{fig:mergedisks}
\incf{99}{diskreduce.eps}{Merging two 6-gons into a ($6+6-2$)-gon}{fig:reducedisks}
Combining Lemma \ref{lemm:band-twists-at-once} and Lemma \ref{lemm:mergesum}, we arrive at the following improvement on our upper bound.
\begin{thm}\label{thm:improve}
Let $K_1,K_2,K_3$ be knots. Then
\begin{center}$d_{M}(K_1,K_2;K_3)\leq 2(d_{bt}(K_1,K_3)+d_{bt}(K_2,O)+1),$\end{center}
where the roles of $K_1$ and $K_2$ may be switched to improve the upper bound.
\end{thm}
\begin{proof}
Suppose that $d_{bt}(K_1,K_3)=p$ and $d_{bt}(K_2,O)=q$. As guaranteed by Lemma \ref{lemm:band-twists-at-once}, there is a Seifert surface $F_1$ for $K_1$ (resp. $F_2$ for $K_2$) with a collection $\A$ of arcs $\alpha_1,\ldots, \alpha_p$ (resp. $\B$ of arcs $\beta_1,\ldots,\beta_q$) such that performing band-twist operations along the arcs yields a Seifert surface $F_3$ for $K_3$ (resp. $F_0$ for $O$). Prepare Seifert surfaces $A_1,\ldots, A_p$ and $B_1,\ldots, B_q$ such that plumbing $A_j$ along $\alpha_j$ (resp. $B_k$ along $\beta_k$) results in applying the band-twist operation along $\alpha_j$ (resp. $\beta_k$). More precisely, each of the surfaces is a plumbing of a trivial annulus and an unknotted annulus with various numbers of full-twists (recall Figure \ref{fig:twounknots}).
Construct $F_1'$ from $F_1$ such that $\partial F_1'=K_3$ by plumbing $A_1,\ldots, A_p$ along $\A$ on the positive side of $F_1$. Also, construct $F_0'$ from $F_0$ such that $\partial F_0'=O$ by plumbing $B_1,\ldots, B_q$ along $\B$ on the negative side of $F_0$.
We merge the plumbed surfaces $A_1,\ldots A_p$ so that $F_1'$ is a Murasugi sum of $F_1$ and $A$, where $A$ is a boundary connected sum of $A_1,\ldots, A_p.$ Note that $\partial A$ is an unknot $O_1$. By Lemma \ref{lemm:mergesum}, we may regard the Murasugi sum as along a $(4p-2(p-1))$-gon and hence a $(2p+2)$-gon. Similarly, $F_0'$ is regarded as a $(2q+2)$-Murasugi sum of $F_0$ and a Seifert surface $B$ for an unknot $O_2$. Denote by $F$ the 2-Murasugi sum (i.e., a boundary connected sum) of $F_1'$ and $F_0'$, where $F_0'$ is summed on the positive side of $F_1'$.
Then $F$ is a $(2p+2+2q+2)$-Murasugi sum of two summands, where one summand is the boundary connected sum of $F_1$ and $B$, and the other summand is the boundary connected sum of $F_0$ and $A$. The boundary of the first (resp. second) summand is $K_1\#O_2$ (resp. $K_2\#O_1$). By Lemma \ref{lemm:mergesum}, we can reduce the summing $(2p+2q+4)$-gon into a $(2p+2q+2)$-gon. Therefore, we have expressed $K_3$ as a $2(p+q+1)$-Murasugi sum of $K_1$ and $K_2$.
\end{proof}
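The gon-counting in this proof can be sanity-checked against the merge rule $m'=\sum_{i=1}^n e_i-2(n-1)$ of Lemma \ref{lemm:mergesum}. Here is a small sketch; the values of $p$ and $q$ are arbitrary illustrative choices:

```python
def merged_gon(sizes):
    # Lemma: merging n summing disks of sizes e_1, ..., e_n over a knot
    # boundary yields a single (sum(e_i) - 2(n-1))-gon.
    return sum(sizes) - 2 * (len(sizes) - 1)

p, q = 3, 2  # stand-ins for d_bt(K_1, K_3) and d_bt(K_2, O)

# p plumbings along 4-gons merge into a single (2p + 2)-gon:
assert merged_gon([4] * p) == 2 * p + 2

# Merging the (2p+2)-gon and (2q+2)-gon summands as in the proof
# gives the final 2(p + q + 1)-gon:
assert merged_gon([2 * p + 2, 2 * q + 2]) == 2 * (p + q + 1)
```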
By combining Theorems \ref{thm:lowerbounds} and \ref{thm:improve}, we arrive at Theorem B. As an application of Theorem B, we determine $d_M(3_1,3_1;K)$ for all knots $K$ with up to five crossings. Theorem B shows that
\begin{center}$d_M(3_1,3_1;3_1)=4, \qquad d_M(3_1,3_1;O)=d_M(3_1,3_1;4_1)=6,$\end{center}
while it only gives the bounds
\begin{center}$4\leq d_M(3_1,3_1;5_1),\; d_M(3_1,3_1;5_2) \leq 6.$\end{center}
We can show that $d_M(3_1,3_1;5_1)=d_M(3_1,3_1;5_2)=4$ in the following way.
For $a_1,a_2,\ldots, a_n\in 2\bbz$, denote by $S[a_1,a_2,\ldots,a_n]$ a linear plumbing of $n$ unknotted annuli, where the $i^{th}$ annulus has $a_i$ half-twists. Note that all 2-bridge links have such a linear plumbing as a Seifert surface \cite{hatthu}, which is of minimal genus if and only if $a_1a_2\cdots a_n\neq 0$. Using the notation of Example \ref{ex:knots-braid}, we have the following:
\begin{enumerate}
\item $S[a_1,a_2,\ldots,a_n]\star_4 S[b_1,b_2,\ldots,b_m] =S[a_1,\ldots,a_n,b_1,\ldots,b_m]$,
\item $S[a_1,a_2,\ldots,a_n]\stackrel{\partial}{=}S[a_1,a_2,\ldots,a_n,a_{n+1},0]$ for any $a_{n+1}$,
\item $S[a_1,a_2,\ldots,a_i,0,a_{i+2},\ldots, a_n]\stackrel{\partial}{=}S[a_1,a_2,\ldots,a_i+a_{i+2},\ldots, a_n]$.
\end{enumerate}
Using the above, we have:
\begin{exmp}\label{thom3141}
\begin{enumerate}
\item $3_1\star_4 3_1 \xleftarrow{\partial} S[2,2]\star_4 S[2,2]=S[2,2,2,2]\xrightarrow{\partial}5_1$
\item $3_1\star_4 3_1 \xleftarrow{\partial} S[2,2,-2,0]\star_4 S[2,2]\stackrel{\partial}{=}S[2,2,0,2]\stackrel{\partial}{=}S[2,4]\xrightarrow {\partial}5_2.$
\end{enumerate}
\end{exmp}
Using this notation, we summarize what Thompson showed in \cite{thomp} as follows:
\begin{enumerate}
\item $O\star_4 O\xleftarrow{\partial}S[2,0]\star_4 S[0,2]=S[2,0,0,2]\stackrel{\partial}{=}S[2+0,2]=S[2,2]\xrightarrow{\partial}3_1$
\item $4_1 \star_4 4_1 \xleftarrow{\partial} S[2,-2,2,0]\star_4 S[-2,2]\stackrel{\partial}{=} S[2,-2,0,2]\stackrel{\partial}{=}S[2,0]\xrightarrow{\partial}O.$
\end{enumerate}
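Rule (3) above is a purely mechanical rewriting of twist sequences, so the reductions used in Example \ref{thom3141} and in Thompson's examples can be verified by a short sketch (the function below is our own illustration, not part of the text; trailing zeros such as in $S[2,0]$ are left alone, since they fall under rule (2) rather than rule (3)):

```python
def reduce_twists(s):
    """Repeatedly apply rule (3): merge across an interior zero,
    S[..., a, 0, b, ...] -> S[..., a + b, ...]."""
    s = list(s)
    i = 1
    while i < len(s) - 1:
        if s[i] == 0:
            s = s[:i - 1] + [s[i - 1] + s[i + 1]] + s[i + 2:]
            i = 1  # restart the scan after each rewrite
        else:
            i += 1
    return s

# Example (2): S[2,2,-2,0] *_4 S[2,2] = S[2,2,-2,0,2,2] reduces to S[2,4] (boundary 5_2).
assert reduce_twists([2, 2, -2, 0, 2, 2]) == [2, 4]
# Thompson: S[2,0,0,2] reduces to S[2,2] (boundary 3_1).
assert reduce_twists([2, 0, 0, 2]) == [2, 2]
# Thompson: S[2,-2,2,0,-2,2] reduces to S[2,0] (boundary the unknot O).
assert reduce_twists([2, -2, 2, 0, -2, 2]) == [2, 0]
```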
\section{Concluding remarks and further discussion}
When Thompson \cite{thomp} ruled out the possibility of generalizing nice properties
preserved under Murasugi sums and decompositions, she also initiated the study of
constructing knots by Murasugi sums with her examples of $3_1 = O \star_4 O$ and $O=4_1 \star_4 4_1$.
Around that time, it was shown in \cite{haywad} that any link is constructed by successively plumbing
trivial (i.e., unknotted and untwisted) annuli, where the gluing square could be quite complicated.
Then in \cite{fhk}, it was shown that any link has a Seifert surface which is obtained from a disk $D$
by successively plumbing trivial annuli to $D$, where all of the gluing squares are in $D$,
thus giving a new standard form of links called a {\it flat basket plumbing presentation}.
In this paper, we have shown that given a knot $K$ and any two knots $K_1$, $K_2$,
we can form $K$ as a Murasugi sum of $K_1$ and $K_2$, and that this situation can be illustrated in
a closed braid form.
So anything can happen when we allow Murasugi sums of arbitrary complexity, but as we have also shown,
Murasugi sums are better behaved when their size is restricted.
For further study, we may also put other restrictions on the Murasugi sums, for example by
restricting the genera of the Seifert surfaces involved in the Murasugi sums.
So we can ask the following decomposition question about the result of the Murasugi sum:
\begin{question}
For a nontrivial knot $K$ with genus $g(K)$, what is the minimal genus
of a Seifert surface $F$ for $K$ which is a Murasugi sum of two unknots?
\end{question}
In this situation, if $K$ has an alternating diagram which can be unknotted with $u(K)$ crossing changes, then Seifert's algorithm yields a minimal genus Seifert surface, so our constructions give that $1\leq g(F)-g(K)\leq u(K)$. Note that this inequality is also true for knots with $u(K)=1$, because by \cite{koba}, there exists a minimal genus Seifert surface for $K$ on which the
unknotting crossing change can be done by twisting a band. Similarly, we can ask the following composition question about the summands of the Murasugi sum:
\begin{question}
For two nontrivial knots $K_1,K_2$, what are the minimal genera of Seifert surfaces $F_1,F_2$ of $K_1, K_2$ that Murasugi sum to the unknot?
\end{question}
Answering these questions in general seems to be more involved than what we treat in this paper, where we typically form some Seifert surfaces, manipulate them, then dissolve them to obtain their knot boundaries. In particular, one would need to be more explicit with how the original Seifert surfaces are formed. Studying these questions via band surgery and band twisting (or perhaps other moves) should yield insight into a collection of subgraphs of $M\kern-1pt SG(K)$ similar to $M\kern-1pt SG(K,n)$.
\iffalse
We may rephrase our results in terms of certain combinatorial and algebraic structures. In particular, we define a complex $\M$ of knots to be a directed, weighted graph, where the vertices consist of all isotopy classes of knots in $S^3$, and a directed edge going from $K$ to $K'$ exists if and only if there exists a knot $K''$ such that there exists a Murasugi sum of $K$ with $K''$ that yields $K'$. The weight of an edge from $K$ to $K'$ is given by $\displaystyle\min_{K''} d_M(K,K'';K')$.
In this context, Theorem A1 says that every pair of vertices has a pair of edges in either direction connecting them. However, the weights on such a pair of edges are not necessarily equal, as one can easily observe by considering the pair of edges connecting the unknot to any knot. By fixing $K''$ and $m$, we may consider the subcomplex spanned by edges with weight $m=d_M(K,K'';K')$ and the vertices these edges connect.
In \cite{gigo}, it was shown that two fibered knots in $S^3$ are related by a sequence of plumbings and deplumbings of $3_1$ and $4_1$. This result may be rephrased by saying that the Grothendieck group of fibered knots in $S^3$ is the free abelian group of rank two generated by $3_1$ and $4_1$. We now describe a related sequence of Grothendieck groups of knots in $S^3$. For $l\in \bbz_{\geq 0}$, consider the free abelian group $G_l$ generated by all isotopy classes of knots in $S^3$. We impose the relation $K=K_1+K_2$ if $K$ is a Murasugi sum of $K_1$ and $K_2$ along Seifert surfaces $F_1$ and $F_2$ satisfying $g(F_i)-g(K_i)\leq l$ for $i=1,2$. The identity element for each $G_l$ is clearly $O$. Because $G_l$ is obtained from $G_{l-1}$ by imposing additional relations, we have the descending filtration $G_0 \supset G_1 \supset G_2 \supset \cdots.$
In this context, Theorem A1 says that $\bigcap_{l\in \bbz_{\geq 0}} G_l=O$. Meanwhile, we can see $G_0$ is infinitely generated in the following way. Consider the monoid homomorphism from the free abelian group generated by all isotopy classes of knots in $S^3$ to the (multiplicative) monoid of natural numbers $\bbn^\times$ which gives the rank of the top group of knot Floer homology. The composition of this monoid homomorphism with the Grothendieck group homomorphism, which takes $\bbn^\times$ to $\bbq_{>0}^\times$, is an additive function due to the multiplicativity of the top group under Murasugi sum \cite{ni}. Then by the universal property of the Grothendieck group, we have a unique map from $G_0$ to $\bbq_{>0}^\times$. Since nontrivial twist knots have top group with rank equal to the number of twists, the monoid homomorphism from the free abelian group generated by all isotopy classes of knots in $S^3$ to $\bbn^\times$ is surjective. Thus, the map from $G_0$ to $\bbq_{>0}^\times$ is surjective, hence $G_0$ is infinitely generated. To get a better sense of the structure of $\{G_k\}$, one could determine if there exists $k$ such that $G_k$ is finitely generated.
\fi
\section{Appendix}\label{sec:append}
This section is not directly concerned with the main theorems,
but we give proofs for Theorems C1 and C2 here to record some other ways of constructing knots and links by Murasugi sums.
\begin{proof}[Proof of Theorem C1.]
Let $\widetilde{F}$ be a canonical surface on
a closed braid for $L$.
For each set of bands connecting the same pair of
disks, select one to keep, and apply the modification shown in
Figure \ref{fig:pHopf} to the other bands. This produces a Seifert surface $F$.
Then at each site of modification, we can deplumb
a positive Hopf band. Note that
the Hopf bands are plumbed on the same side, and
hence we can merge them into a boundary connected sum of
positive Hopf bands.
Deplumbing all Hopf bands yields a surface $F'$
which is composed of
\begin{enumerate}
\item the Seifert disks of $F$,
\item one band between each pair of adjacent disks, and
\item one tube at each site of the bands which are cut by the deplumbing.
\end{enumerate}
Sliding the tubes to the top of the disks, we obtain
the desired once-punctured Heegaard surface.
\end{proof}
\incf{90}{C1.eps}{Deplumbing a positive Hopf band, and then sliding tubes to the top disk}{fig:pHopf}
\begin{proof}[Proof of Theorem C2.]
Take a diagram $D$ of $L$, apply a Reidemeister II move at each crossing of $D$
to triple the crossing number
as in Figure \ref{fig:depflata}, and span
a canonical surface $F$.
Then we can deplumb a flat annulus at the site
of each original crossing.
We see that all annuli are plumbed to $F$ on the
positive side, which has lighter shading. Merge all plumbings so that
$F$ is a Murasugi sum of a Seifert surface $F'$ and a surface $P_1$, where
$P_1$ is a boundary connected sum of flat annuli, and hence a planar surface.
Now, $F'$ is obtained from Seifert disks of the original
diagram $D$ by applying a tube on the negative side of the site of each crossing.
Therefore, $F'$ is the boundary of the
neighborhood (i.e., a standard handlebody)
of the Seifert graph $G$ of $D$ with
punctures. In other words,
place a punctured sphere
at each vertex of $G$ and apply a tube along each
edge of $G$.
Hence $F'$ is a standard Heegaard surface with punctures, which is isotopic to
a linear plumbing of untwisted, unknotted annuli with punctures.
We can regard $F'$ as a planar surface $P$
with
a planar surface $P_2$
Murasugi-summed on the negative side.
\end{proof}
\incf{90}{C2.eps}{Applying a Reidemeister II move, and then deplumbing a flat annulus}{fig:depflata}
\bibliographystyle{amsxport}
\section{Adversarial Sample Crafting}
\label{sec:adv-sample-crafting}
This section describes machine learning techniques used in this paper, along
with methods used to \emph{craft} adversarial samples against classifiers
learned using these techniques. Building on
previous work~\cite{szegedy2013intriguing,goodfellow2014explaining,papernot2015limitations}
describing how adversaries can efficiently select perturbations leading deep
neural networks to misclassify their inputs, we introduce new crafting
algorithms for adversaries targeting Support Vector Machines (SVMs) and
Decision Trees (DTs).
\subsection{Deep Neural Networks}
Deep Neural Networks (DNNs) learn hierarchical representations of high
dimensional inputs used to solve ML tasks~\cite{Goodfellow-et-al-2016-Book},
including classification. Each representation is modeled by a layer of
neurons---elementary parameterized computing units---behaving like a
multi-dimensional function. The input of each layer $f_i$ is the output of the
previous layer $f_{i-1}$ multiplied by a set of weights, which are part of the
layer's parameters $\theta_i$. Thus, a DNN $f$ can be viewed as a composition of
parameterized functions $$f:\vec{x}\mapsto
f_n(\theta_n,...f_2(\theta_2,f_1(\theta_1,\vec{x}))...)$$ whose parameters
$\theta=\{\theta_i\}_i$ are learned during training. For instance, in the case
of classification, the network is given a large collection of known input-label
pairs $(\vec{x},y)$ and adjusts its parameters $\theta$ to reduce the label
prediction error $f(\vec{x})-y$ on these inputs. At test time, the model
extrapolates from its training data to make predictions $f(\vec{x})$ on unseen
inputs.
\vspace*{-0.05in}
To craft adversarial samples misclassified by DNNs, an adversary with knowledge
of the model $f$ and its parameters $\theta$ can use the \emph{fast gradient
sign method} introduced in~\cite{goodfellow2014explaining} or the
Jacobian-based iterative approach proposed in~\cite{papernot2015limitations}.
We only provide here a brief description of the fast gradient sign method,
which is the one we use in this work. To find an adversarial sample $\vec{x^*}$
approximately solving the optimization problem stated in
Equation~\ref{eq:adv-sample-crafting-misclassification}, Goodfellow et
al.~\cite{goodfellow2014explaining} proposed to compute the following
perturbation:
\begin{equation}
\label{eq:fgsm}
\delta_{\vec{x}} = \varepsilon\sgn (\nabla_{\vec{x}} c(f, \vec{x}, y))
\end{equation}
where $f$ is the targeted DNN, $c$ its associated cost, and $y$ the correct
label of input $\vec{x}$. In other words, perturbations are evaluated as the
sign of the model's cost function gradient with respect to inputs. An
adversarial sample $\vec{x^*}=\vec{x}+\delta_{\vec{x}}$ is successfully crafted
when misclassified by model $f$---it satisfies $f(\vec{x^*})\neq
f(\vec{x})$---while its perturbation $\delta_{\vec{x}}$ remains
indistinguishable to humans. The \emph{input variation} $\varepsilon$ sets the
perturbation magnitude: higher input variations yield samples more likely to be
misclassified by the DNN model but introduce more perturbation, which can be
easier to detect.
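The fast gradient sign perturbation can be sketched in a few lines of Python; the quadratic cost and its gradient below are hypothetical stand-ins for a trained model's cost function:

```python
import numpy as np

def fgsm(x, grad_cost, eps):
    """Fast gradient sign method: perturb x by eps times the sign of the
    cost gradient with respect to the input."""
    return x + eps * np.sign(grad_cost(x))

# Hypothetical stand-in for a model's cost gradient: for the quadratic
# cost c(x) = 0.5 * ||x - t||^2, the gradient is simply x - t.
t = np.array([0.5, 0.5, 0.5])
grad = lambda x: x - t

x = np.array([0.2, 0.5, 0.9])
x_adv = fgsm(x, grad, eps=0.25)
# each component moves by at most eps (np.sign(0) is 0, so components
# with zero gradient are left unchanged)
```

Note that the max-norm of the perturbation is bounded by the input variation $\varepsilon$ by construction.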
\subsection{Multi-class Logistic Regression}
Multi-class logistic regression is the generalization of logistic regression to
classification problems with $N>2$ classes~\cite{murphy2012machine}. Logistic
regression seeks to find the hypothesis best matching the data among the class
of hypotheses obtained by composing a sigmoid function with the class of
linear functions. A multi-class logistic regression model $f$ can be written
as:
\begin{equation}
\label{eq:logistic-regression}
f:\vec{x}\mapsto \left[\frac{e^{\vec{w_j} \cdot \vec{x}}}{\sum_{l=1}^{N}e^{\vec{w_l} \cdot\vec{x}}} \right]_{j\in 1..N}
\end{equation}
where $\theta=\{\vec{w_1}, ..., \vec{w_N}\}$ is the set of parameters learned during training, e.g., by gradient descent or Newton's method.
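The prediction rule of Equation~\ref{eq:logistic-regression} amounts to a softmax over the linear class scores; a minimal numerical sketch, with a hypothetical 3-class weight matrix, is:

```python
import numpy as np

def logistic_predict(W, x):
    """Softmax over linear scores: row j of W is the weight vector w_j
    of class j; returns the vector of class probabilities."""
    scores = W @ x
    e = np.exp(scores - scores.max())   # shift for numerical stability
    return e / e.sum()

# Hypothetical 3-class model on 2-dimensional inputs:
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
p = logistic_predict(W, np.array([2.0, 0.5]))
pred = int(np.argmax(p))   # class 0: its score w_0 . x = 2.0 is largest
```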
\vspace*{-0.05in}
Adversaries can also craft adversarial samples misclassified by multi-class
logistic regression models using the fast gradient sign
method~\cite{goodfellow2014explaining}. In the case of logistic regression, the
method finds the most damaging perturbation $\delta_{\vec{x}}$ (according to
the max norm) exactly by evaluating Equation~\ref{eq:fgsm}, unlike the case of deep
neural networks, where it only finds an approximation.
\subsection{Nearest Neighbors}
The k nearest neighbor (kNN) algorithm is a lazy-learning non-parametric
classifier~\cite{murphy2012machine}: it does not require a training phase.
Predictions are made on unseen inputs by considering the $k$ points in the
training set that are closest according to some distance. The estimated class
of the input is the one most frequently observed among these $k$ points. When
$k$ is set to $1$, as is the case in this paper, the classifier is:
\begin{equation}
\label{eq:knn}
f:\vec{x}\mapsto Y\left[\arg\min_{\vec{z}\in X} \|\vec{z}-\vec{x}\|_2^2\right]
\end{equation}
which outputs one row of $Y$, the matrix of indicator vectors encoding labels for the training data $X$.
\vspace*{-0.05in}
Although the kNN algorithm is non-parametric, it is still vulnerable to
adversarial samples as pointed out
in~\cite{papernot2016practical,WardeFarley16}. In this paper, we used the fast
gradient sign method to craft adversarial samples misclassified by nearest
neighbors. To make the model differentiable, we use a \emph{smoothed}
variant of the nearest neighbor classifier, which replaces the argmin
operation in Equation~\ref{eq:knn} by a soft-min, as follows:
\begin{equation}
\label{eq:knn-smooth}
f:\vec{x}\mapsto \frac{ \left[e^{-\|\vec{z}-\vec{x}\|_2^2}\right]_{\vec{z}\in X} }{\sum_{\vec{z}\in X} e^{-\|\vec{z}-\vec{x}\|_2^2}} \cdot Y
\end{equation}
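This soft-min classifier can be sketched as follows; the two-point training set below is hypothetical:

```python
import numpy as np

def smooth_nn(X, Y, x):
    """Smoothed 1-nearest-neighbor: a soft-min over squared distances to
    the training points, mixing their one-hot label vectors in Y."""
    d2 = ((X - x) ** 2).sum(axis=1)   # squared distances ||z - x||^2
    w = np.exp(-(d2 - d2.min()))      # the shift cancels after normalizing
    w /= w.sum()
    return w @ Y                      # soft class scores, sum to 1

# Hypothetical training set: two points, two classes, one-hot labels.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
scores = smooth_nn(X, Y, np.array([0.1, 0.0]))
# the input lies nearer the first point, so class 0 gets the larger score
```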
\subsection{Multi-class Support Vector Machines}
One possible implementation of a multiclass linear Support Vector Machine
classifier $f$ is the \emph{one-vs-the-rest} scheme. For each class $k$ of the
machine learning task, a binary Support Vector Machine classifier $f_k$ is
trained with samples of class $k$ labeled as positive and samples from other
classes labeled as negative~\cite{bishop2006pattern}. To classify a sample,
each binary linear SVM classifier $f_k$ makes a prediction and the overall
multiclass classifier $f$ outputs the class assigned the strongest confidence.
Each of these underlying linear SVMs is a model $f_k$ classifying unseen
samples $\vec{x}$ using the following:
\begin{equation}
\label{eq:sub-svm-binary}
f_k:\vec{x} \mapsto \sgn(\vec{w}[k]\cdot \vec{x} + b_k)
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig/svm-adv-sample.pdf}
\caption{SVM Adversarial Samples: to move a sample $\vec{x}$ away from its legitimate class in a binary SVM classifier $f_k$, we perturb it by $\varepsilon$ along $-\vec{w}[k]$, the direction orthogonal to the decision hyperplane.}
\label{fig:svm-adv-sample}
\end{figure}
We now introduce an algorithm to find adversarial samples misclassified by a multi-class linear SVM $f$. To the best of our knowledge, this method is more computationally efficient than previous approaches~\cite{biggio2013evasion}: it does not require any optimization. To craft adversarial samples, we perturb a given input in a direction orthogonal to the decision boundary hyperplane. More precisely, we perturb legitimate samples correctly classified by model $f$ in the direction orthogonal to the weight vector $\vec{w}[k]$ of the binary SVM classifier $f_k$ for the correct class $k$ output by the multiclass model $f$. The intuition, illustrated in Figure~\ref{fig:svm-adv-sample} with a binary SVM classifier, can be formalized as follows: for a sample $\vec{x}$ belonging to class $k$, an adversarial sample misclassified by the multiclass SVM model $f$ can be computed by evaluating:
\begin{equation}
\label{eq:svm-adv-sample}
\vec{x^*} = \vec{x} - \varepsilon \cdot \frac{\vec{w}[k]}{\|\vec{w}[k]\|}
\end{equation}
where $\|\cdot \|$ is the Euclidean norm, $\vec{w}[k]$ the weight vector of binary SVM $k$, and $\varepsilon$ the \emph{input variation} parameter, which controls the amount of distortion introduced, as in the fast gradient sign method.
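A sketch of Equation~\ref{eq:svm-adv-sample} on a hypothetical binary linear SVM:

```python
import numpy as np

def svm_adversarial(x, w_k, eps):
    """Perturb x against the (normalized) weight vector of the binary
    SVM f_k for its correct class."""
    return x - eps * w_k / np.linalg.norm(w_k)

# Hypothetical binary SVM f_k(x) = sgn(w . x + b), with ||w|| = 1:
w, b = np.array([0.6, 0.8]), -1.0
x = np.array([1.0, 1.0])          # legitimate side: w . x + b = 0.4 > 0
x_adv = svm_adversarial(x, w, eps=1.0)
# since ||w|| = 1, the score drops by exactly eps, flipping the sign:
# w . x_adv + b = 0.4 - 1.0 = -0.6 < 0
```

In general the score drops by $\varepsilon \|\vec{w}[k]\|$, so $\varepsilon$ directly trades off distortion against misclassification confidence.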
\subsection{Decision Trees}
Decision trees are defined by recursively partitioning the input
domain~\cite{murphy2012machine}. Partitioning is performed by selecting a
feature and a corresponding condition threshold that best minimize some cost
function over the training data. Each node is an if-else statement with a
threshold condition on one of the sample's features. A sample is
classified by traversing the decision tree from its root to one of its leaves
according to the conditions specified in intermediate tree nodes. The leaf
reached indicates the class assigned.
\vspace*{-0.05in}
Adversaries can also craft adversarial inputs misclassified by decision trees.
To the best of our knowledge, this is the first adversarial sample crafting
algorithm proposed for decision trees. The intuition exploits the underlying
tree structure of the classifier model. To find an adversarial sample, given a
sample and a tree, we simply search for leaves with different classes in the
neighborhood of the leaf corresponding to the decision tree's original
prediction for the sample. We then find the path from the original leaf to the
adversarial leaf and modify the sample according to the conditions on this
path so as to force the decision tree to misclassify the sample in the
adversarial class specified by the newly identified leaf.
\begin{figure}[t]
\centering
\includegraphics[width=0.62\columnwidth]{fig/decision-tree-adv-sample.pdf}
\caption{Decision Tree Adversarial Samples: leaves indicate output classes (here the problem has 3 output classes) whereas intermediate nodes with letters indicate binary conditions (if condition do else do). To misclassify the sample from class $3$ denoted by the green leaf, the adversary modifies it such that conditions $g$ and $i$ evaluate accordingly for the sample to be classified in class $2$ denoted by the red leaf.}
\label{fig:decision-tree-adv-sample}
\end{figure}
This intuition, depicted in Figure~\ref{fig:decision-tree-adv-sample}, is
formalized by Algorithm~\ref{alg:decision-tree-adv-sample}. The algorithm takes
a decision tree $T$, a sample $\vec{x}$, the
$\small{\texttt{legitimate\_class}}$ for sample $\vec{x}$, and outputs an
adversarial sample $\vec{x^*}$ misclassified by decision tree $T$. The
algorithm does not explicitly minimize the amount of perturbation introduced to
craft adversarial samples, but as shown in Section~\ref{sec:transferability},
we found in practice that perturbations found involve a minuscule proportion of
features.
\begin{algorithm}[t]
\caption{Crafting Decision Tree Adversarial Samples}
\label{alg:decision-tree-adv-sample}
\begin{algorithmic}[1]
\Require $T$, $\vec{x}$, $\small{\texttt{legitimate\_class}}$
\State $\vec{x^*} \leftarrow \vec{x}$
\State $\small{\texttt{legit\_leaf}} \leftarrow$ find leaf in $T$ corresponding to $\vec{x}$
\State $\small{\texttt{ancestor}} \leftarrow \small{\texttt{legit\_leaf}}$
\State $\small{\texttt{components}} \leftarrow \small{\texttt{[]}}$
\While{$\small{\texttt{predict}}(T,\vec{x^*}) == \small{\texttt{legitimate\_class}}$}
\If{$\small{\texttt{ancestor}} == \small{\texttt{ancestor.parent.left}}$}
\State $\small{\texttt{advers\_leaf}} \leftarrow$ find leaf under $\small{\texttt{ancestor.right}}$
\Else \Comment{$\small{\texttt{ancestor}} == \small{\texttt{ancestor.parent.right}}$}
\State $\small{\texttt{advers\_leaf}} \leftarrow$ find leaf under $\small{\texttt{ancestor.left}}$
\EndIf
\State $\small{\texttt{components}} \leftarrow $ nodes from $\small{\texttt{legit\_leaf}} $ to $\small{\texttt{advers\_leaf}}$
\State $\small{\texttt{ancestor}} \leftarrow \small{\texttt{ancestor.parent}}$
\EndWhile
\For{$i\in \small{\texttt{components}}$}
\State perturb $\vec{x^*}[i]$ to change node's condition output
\EndFor
\State \Return $\vec{x^*}$
\end{algorithmic}
\end{algorithm}
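A simplified Python sketch of this search on a hypothetical toy tree follows; unlike Algorithm~\ref{alg:decision-tree-adv-sample}, it pushes one ancestor condition past its threshold at a time rather than tracking the full component list:

```python
# Internal nodes are (feature, threshold, left, right); a sample goes left
# when x[feature] <= threshold; leaves are class labels.
def predict(node, x):
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

def dt_adversarial(root, x, step=0.01):
    """Collect the internal nodes on x's path, then, starting from the
    deepest one, push the corresponding feature just past its threshold
    until the tree's prediction changes."""
    legit = predict(root, x)
    path, node = [], root
    while isinstance(node, tuple):
        path.append(node)
        f, t, left, right = node
        node = left if x[f] <= t else right
    x_adv = list(x)
    for f, t, _, _ in reversed(path):            # deepest ancestor first
        x_adv[f] = t + step if x[f] <= t else t  # cross the threshold
        if predict(root, x_adv) != legit:
            return x_adv
    return x_adv

tree = (0, 0.5, (1, 0.5, "A", "B"), "C")
x = [0.3, 0.3]                   # classified as "A"
x_adv = dt_adversarial(tree, x)  # flipping the deepest condition gives "B"
```

As in the full algorithm, only the features appearing on the modified path are perturbed, which keeps the distortion small.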
\section{Acknowledgments}
Research was sponsored by the Army Research Laboratory and was accomplished
under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security
CRA). The views and conclusions contained in this document are those of the
authors and should not be interpreted as representing the official policies,
either expressed or implied, of the Army Research Laboratory or the U.S.
Government. The U.S. Government is authorized to reproduce and distribute
reprints for Government purposes notwithstanding any copyright notation here
on.
\end{document}
\section{Conclusions}
Our work first exposed the strong phenomenon of adversarial sample
transferability across the machine learning space. Not only do we
find that adversarial samples are misclassified across models trained
using the same machine learning technique, but also across models
trained by different techniques. We then improved the accuracy and reduced the computational complexity of
an existing algorithm for learning substitute models of machine
learning classifiers. We showed that DNNs and LR could both effectively
be used to learn a substitute model for many classifiers trained with
a deep neural network, logistic regression, support vector machine,
decision tree, and nearest neighbors. In a final experiment, we
demonstrated how all of these findings could be used to target online
classifiers trained and hosted by Amazon and Google, without any
knowledge of the model design or parameters, but instead simply by making
label queries for $800$ inputs. The attack successfully forces these
classifiers to misclassify $96.19\%$ and $88.94\%$ of their inputs.
\vspace*{-0.05in}
These findings call for some validation of inputs used by machine
learning algorithms. This remains an open problem.
Future work should continue to improve the learning of substitutes to
maximize their accuracy and the transferability of adversarial samples
crafted to targeted models. Furthermore, poisoning
attacks at training time remain largely to be investigated, leaving room
for contributions to the field.
\section{Discussion and Related Work}
Upon completion of their training on collections of known input-label pairs
$(\vec{x},y)$, classifiers $f$ make label predictions $f(\vec{x})$ on unseen
inputs $\vec{x}$~\cite{murphy2012machine}.
Models
extrapolate from knowledge extracted by processing input-label pairs during
training to make label predictions. Several factors, including (1)
imperfections in the training algorithms, (2) the linearity of many underlying
components used to build machine learning models, and (3) the limited amount of
training points not always representative of the entire plausible input domain,
leave numerous machine learning models exposed to adversarial manipulations of
their inputs despite excellent performance on
legitimate---expected---inputs.
Our work builds on a practical method for attacks against black-box deep
learning classifiers~\cite{papernot2016practical}. Learning substitute models
approximating the decision boundaries of targeted classifiers removes the
need, present in previous
attacks~\cite{szegedy2013intriguing,goodfellow2014explaining,papernot2015limitations},
for knowledge of the target architecture and parameters. We generalized this method and showed that it can target any machine
learning classifier. We also reduced its computational cost by (1)
introducing substitute models trained using logistic regression instead of deep
learning and (2) decreasing the number of queries made with reservoir sampling. Learning substitutes is an instance of knowledge transfer, a set of techniques to transfer the generalization knowledge learned by a model into another model~\cite{bucilua2006model, chen2015net2net}.
\vspace*{-0.05in}
This paper demonstrates that adversaries can reliably target classifiers whose characteristics are unknown, deployed
remotely, e.g., by machine learning as a service platforms.
The existence of such a threat vector calls for the design of defensive
mechanisms~\cite{mpc16}. Unfortunately, we found that defenses proposed in the
literature---such as training with adversarial
samples~\cite{goodfellow2014explaining}---were ineffective, or we were unable
to deploy them because of our lack of access to the machine learning model
targeted---for instance distillation~\cite{papernot2015distillation}. This
failure is most likely due to the shallowness of models like logistic
regression, which support the services offered by Amazon and Google, although
we are unable to confirm that statement in Google's case using available
documentation.
\vspace*{-0.05in}
This work is part of a series of security evaluations of machine
learning algorithms~\cite{barreno2006can,biggio2014security}. Unlike us, previous work in this field assumed knowledge of the model
architecture and parameters~\cite{biggio2011support,huang2011adversarial}. Our threat model considered
adversaries interested in misclassification at test time, once the model has
been deployed. Other largely unexplored threat models exist. For
instance, poisoning the training data used to learn models has only been considered in the context of binary
SVMs whose training data is known~\cite{biggio2012poisoning} or anomaly detection systems whose underlying model is known~\cite{kloft2010online}.
\section{Transferability of Adversarial Samples in Machine Learning}
\label{sec:transferability-section}
In this section, our working hypothesis is that intra-technique and cross-technique
adversarial sample transferability are strong phenomena across the machine
learning space. Thus, we empirically study these two phenomena across a range of machine learning techniques: deep
neural networks (DNNs), logistic regression (LR), support vector machines
(SVM), decision trees (DT), nearest neighbors (kNN), and ensembles (Ens.).
All models are found vulnerable to intra-technique adversarial sample
transferability---misclassification of samples by different models trained using
the same machine learning technique; the phenomenon is stronger for
differentiable models like DNNs and LR than for non-differentiable models
like SVMs, DTs, and kNNs. Then, we observe that DNNs and kNNs exhibit resilience
to cross-technique transferability---misclassification of adversarial samples by
models trained with distinct machine learning techniques. We find that all other models, including LR, SVMs, DTs, and an ensemble of
models collectively making predictions, are considerably more vulnerable to
cross-technique transferability.
\subsection{Experimental Setup}
\label{sec:transferability-experimental-setup}
We describe here the dataset and machine learning models used in this section
to study both types of transferability.
\textbf{Dataset -} We use the seminal MNIST dataset of handwritten
digits~\cite{lecun1998mnist}. This dataset has been well-studied in both the
machine learning and security communities. We chose it because its
dimensionality is suitable to the range of machine learning techniques included
in our study, which all perform at least reasonably well on this dataset. The
task associated with the dataset is classification of images in one of the $10$
classes corresponding to each possible digit ranging from $0$ to $9$. The
dataset includes $50,000$ training samples, $10,000$ validation samples, and
$10,000$ test samples. Each $28$x$28$ gray-scale pixel image is encoded as a
vector of intensities whose real values range from $0$ (black) to $1$ (white).
\textbf{Machine learning models -} We selected five machine learning
techniques: DNNs, LR, SVMs, DTs, and kNNs. All of these machine learning
techniques, as well as the algorithms used to craft adversarial samples, are
presented in Section~\ref{sec:adv-sample-crafting} of this paper. As outlined
in Table~\ref{tbl:machine-learning-techniques}, DNNs were chosen for their
state-of-the-art performance, LR for its simplicity, SVMs for their potential
robustness stemming from the margin constraints when choosing decision
boundaries at training, DTs for their non-differentiability, and kNNs for being
lazy-learning\footnote{No model is learned during training. Predictions
are made by finding $k$ points closest to the sample in the training data, and
extrapolating its class from the class of these $k$ points.} models. To train
DNN, LR, and kNN models, we use Theano~\cite{bergstra2010theano} and
Lasagne~\cite{lasagne}. The DNN is made up of a hierarchy of 2 convolutional
layers of $32$ $3$x$3$ kernels, 2 convolutional layers of $64$ $3$x$3$ kernels,
2 rectified linear layers of $100$ units, and a softmax layer of $10$ units. It
is trained for 10 epochs with learning, momentum, and dropout rates of
$10^{-2}$, $0.9$, and $0.5$, respectively, decayed by $0.5$ after 5 epochs. The
LR is performed using a softmax regression on the inputs. It is trained for
15 epochs at a learning rate of $10^{-2}$ with a momentum rate of $0.9$, both
decayed by $0.5$ after $10$ epochs. The linear SVM and DT are trained with
scikit-learn.
\subsection{Intra-technique Transferability}
\label{sec:intra-technique-transferability}
We show that differentiable models like DNNs and LR are more vulnerable to
intra-technique transferability than non-differentiable models like SVMs, DTs, and
kNNs. We measure \emph{intra-technique transferability} between models $i$ and $j$,
both learned using the same machine learning technique, as the proportion of
adversarial samples produced to be misclassified by model $i$ that are
misclassified by model $j$.
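This measurement can be sketched directly; the threshold model $j$ and the sample labels below are hypothetical:

```python
import numpy as np

def transfer_rate(model_j, adv_X, true_y):
    """Proportion of adversarial samples (crafted against some model i)
    that model j also misclassifies."""
    preds = np.array([model_j(x) for x in adv_X])
    return float(np.mean(preds != np.array(true_y)))

# Hypothetical model j: a trivial threshold on the first feature.
model_j = lambda x: int(x[0] > 0.5)
adv_X = np.array([[0.9, 0.0], [0.2, 0.0], [0.7, 0.0], [0.1, 0.0]])
true_y = [0, 0, 0, 1]       # true labels of the original samples
rate = transfer_rate(model_j, adv_X, true_y)   # 3 of 4 misclassified
```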
To train different models using the same machine learning technique, we split
the training set in disjoint subsets A,B,C,D,E of $10,000$ samples each, in
order of increasing indices. For each of the machine learning techniques (DNN,
LR, SVM, DT, kNN), we thus learn five different models referred to as
A,B,C,D,E. Model accuracies, i.e. the proportion of labels correctly predicted
by the model for the testing data, are reported in
Figure~\ref{tbl:accuracies-intramodel}. For each of the 25 models, we apply the
suitable adversarial sample algorithm described in
Section~\ref{sec:adv-sample-crafting} and craft $10,000$ samples from
the test set, which was unused during training. For adversarial sample
algorithms with parameters, we fine-tune them to achieve a quasi-complete
misclassification of the $10,000$ adversarial samples by the model on which
they are crafted. Upon empirically exploring the input variation parameter
space, we set it to $\varepsilon=0.3$ for the fast gradient sign method
algorithm, and $\varepsilon=1.5$ for the SVM algorithm.
\begin{figure}[t!]
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/accuracies}
\caption{Model Accuracies}
\label{tbl:accuracies-intramodel}
\vspace{4ex}
\end{subfigure}%
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/dnn}
\caption{DNN models}
\label{fig:intramodel-transferability:b}
\vspace{4ex}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/lr}
\caption{LR models}
\label{fig:intramodel-transferability:c}
\vspace{4ex}
\end{subfigure}%
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/svm}
\caption{SVM models}
\label{fig:intramodel-transferability:d}
\vspace{4ex}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/dt}
\caption{DT models}
\label{fig:intramodel-transferability:e}
\end{subfigure}%
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{fig/intramodel/knn}
\caption{kNN models}
\label{fig:intramodel-transferability:f}
\end{subfigure}
\caption{Intra-technique transferability for 5 ML techniques. Figure~\ref{tbl:accuracies-intramodel} reports the accuracy rates of the 25 models used, computed on the MNIST test set. Figures~\ref{fig:intramodel-transferability:b}-\ref{fig:intramodel-transferability:f} are such that cell $(i,j)$ reports the intra-technique transferability between models $i$ and $j$, i.e. the percentage of adversarial samples produced using model $i$ misclassified by model $j$.}
\label{fig:intramodel-transferability}
\end{figure}
Figures~\ref{fig:intramodel-transferability:b}-\ref{fig:intramodel-transferability:f}
report intra-technique transferability rates for each of the five machine learning
techniques. Rates $(i,i)$ on the diagonals indicate the proportion of
adversarial samples misclassified precisely by the same model $i$ on which they
were crafted. Off-diagonal rates $(i,j)$ indicate the proportion of adversarial
samples misclassified by a model $j$ different from the model $i$ on which they
were crafted. We first observe that all models are vulnerable to intra-technique
transferability in a non-negligible manner. LR models are most vulnerable as
adversarial samples transfer across models at rates larger than $94\%$. DNN
models display similarly important transferability, with rates of at least
$49\%$. On the SVM, DT, and kNN matrices, the diagonals stand out more,
indicating that these techniques are to some extent more robust to the
phenomenon. In the case of SVMs, this could be explained by the explicit
constraint during training on the choice of hyperplane decision boundaries that
maximize the margins (i.e. support vectors). The robustness of both DTs and
kNNs could simply stem from their non-differentiability.
\subsection{Cross-technique Transferability}
\label{sec:transferability}
We define \emph{cross-technique transferability} between models $i$ and $j$,
trained using different machine learning techniques, as the proportion of
adversarial samples produced to be misclassified by model $i$ that are also
misclassified by model $j$. Hence, this is a more complex phenomenon than
intra-technique transferability because it involves models learned using possibly
very different techniques like DNNs and DTs. Yet, cross-technique transferability
is surprisingly a strong phenomenon to which techniques like LR, SVM, DT, and
ensembles are vulnerable, making it easy for adversaries to craft adversarial
samples misclassified by models trained using diverse machine learning
techniques.
We study the cross-technique transferability phenomenon across models trained using
the five machine learning techniques already used in
Section~\ref{sec:intra-technique-transferability} and described in
Section~\ref{sec:transferability-experimental-setup}
and~\ref{sec:adv-sample-crafting}. To these, we add a 6th model: an ensemble
$f(\vec{x})$. The ensemble $f$ is implemented using a collection of 5 experts,
which are the 5 previously described models: the DNN denoted $f_1$, LR denoted
$f_2$, SVM denoted $f_3$, DT denoted $f_4$, and kNN denoted $f_5$. Each expert
makes a decision and the ensemble outputs the most frequent
choice (or the class with the lowest index if they all disagree):
\begin{equation}
f(\vec{x}) = \arg\max_{i\in 0..N-1} \sum_{j\in 1..5} f_{j,i}(\vec{x})
\end{equation}
where $f_{j,i}(\vec{x})=\mathbf{1}_{f_j(\vec{x})=i}$ indicates whether classifier $f_j$ assigned class $i$ to input $\vec{x}$. Note that in this section, we only train one model per machine learning technique on the full MNIST training set of $50,000$ samples, unlike in Section~\ref{sec:intra-technique-transferability}.
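This voting rule can be sketched as follows; the five constant experts below are hypothetical stand-ins for the trained models:

```python
import numpy as np

def ensemble_predict(experts, x, n_classes):
    """Majority vote over the experts' predicted classes; np.argmax
    breaks ties in favor of the lowest class index."""
    votes = np.zeros(n_classes)
    for f in experts:
        votes[f(x)] += 1
    return int(np.argmax(votes))

# Hypothetical experts standing in for the DNN, LR, SVM, DT, and kNN:
experts = [lambda x: 2, lambda x: 2, lambda x: 7, lambda x: 1, lambda x: 2]
pred = ensemble_predict(experts, None, n_classes=10)  # class 2 wins, 3 votes
```

The tie-breaking behavior of `np.argmax` (first maximum wins) matches the convention of returning the lowest class index when all experts disagree.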
In this experiment, we are interested in transferability across machine
learning techniques. As such, to ensure our results are comparable, we
fine-tune the parameterizable crafting algorithms to produce adversarial
samples with similar perturbation magnitudes. To compare magnitudes across
perturbation styles, we use the L1 norm: the sum of each perturbation
component's absolute value. Perturbation added to craft adversarial samples
using the DNN, LR, and SVM have an average L1 norm $\|\delta_{\vec{x}}\|_1$ of
$11.5\%$. To achieve this, we use an input variation parameter of
$\varepsilon=0.25$ with the fast gradient sign method on the DNN, LR, and kNN.
To craft adversarial samples on the SVM, we use an input variation parameter of
$\varepsilon=5$ with the crafting method introduced in
Section~\ref{sec:adv-sample-crafting}. Unfortunately, the attack on DT cannot
be parameterized to match the L1 norm of the DNN, LR, kNN, and SVM attacks. Hence,
the perturbations selected have a much lower average L1 norm of
$1.05\%$.
We build a cross-technique transferability matrix where each cell $(i,j)$ holds the
percentage of adversarial samples produced for classifier $i$ that are
misclassified by classifier $j$. In other words, rows indicate the machine
learning technique that trained the model against which adversarial samples
were crafted. The row that would correspond to the ensemble is not included
because there is no crafting algorithm designed to produce adversarial samples
specifically for an ensemble, although we address this limitation in
Section~\ref{sec:learning-approximators} using insight gained in this
experiment. Columns indicate the underlying technique of the classifier making
predictions on adversarial samples. This matrix, plotted in
Figure~\ref{tbl:transferability-matrix}, shows that cross-technique transferability
is a strong but heterogeneous phenomenon. The most vulnerable model is the
decision tree (DT) with misclassification rates ranging from $47.20\%$ to
$89.29\%$ while the most resilient is the deep neural network (DNN) with
misclassification rates between $0.82\%$ and $38.27\%$. Interestingly, the
ensemble is not resilient to cross-technique transferability of adversarial samples
with rates reaching $44.14\%$ for samples crafted using the LR model. This is
most likely due to the vulnerability of each underlying expert to adversarial
samples.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig/crossmodel}
\caption{Cross-technique transferability matrix: cell $(i,j)$ is the percentage of adversarial samples crafted to mislead a classifier learned using machine learning technique $i$ that are misclassified by a classifier trained with technique $j$.}
\label{tbl:transferability-matrix}
\end{figure}
We showed that all machine learning techniques we studied are vulnerable to two types of adversarial sample
transferability. This most surprisingly results in adversarial samples being
misclassified across multiple models learned with different machine learning techniques. This \emph{cross-technique transferability} greatly reduces the minimum knowledge that adversaries must
possess of a machine learning classifier in order to force it to misclassify inputs that they crafted. We leverage this
observation, along with findings from
Section~\ref{sec:learning-approximators}, to justify design choices in the
attack described in Section~\ref{sec:ml-oracle-attack}.
\section{Introduction}
Many classes of machine learning algorithms have been shown to be vulnerable to {\it adversarial samples}~\cite{szegedy2013intriguing,goodfellow2014explaining,papernot2015limitations}; adversaries subtly alter legitimate inputs (called input perturbations) to induce the trained model to produce erroneous outputs. Adversarial samples can be used to, for example, subvert fraud detection, bypass content filters or malware detection, or to mislead autonomous navigation systems~\cite{papernot2016practical}. These attacks on input integrity exploit imperfections and approximations made by learning algorithms during training to control the outputs of machine learning models (see Figure~\ref{fig:adv-sample-intro}).
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{fig/adv_sample_intro}
\caption{An adversarial sample (bottom row) is produced by slightly altering a legitimate sample (top row) in a way that forces the model to make a wrong prediction whereas a human would still correctly classify the sample~\cite{papernot2015limitations}.}
\label{fig:adv-sample-intro}
\end{figure}
\emph{Adversarial sample transferability}\footnote{Note that this is distinct from \emph{knowledge transfer}, which refers to techniques designed to transfer the generalization knowledge learned by a model $f$ during training---and encoded in its parameters---to another model $f'$~\cite{hinton2015distilling}.} is the property that some adversarial samples produced to mislead a specific model $f$ can mislead other models $f'$---even if their architectures greatly differ~\cite{szegedy2013intriguing,goodfellow2014explaining,papernot2016practical}. A practical impact of this property is that it leads to \emph{oracle}-based black box attacks. In one such attack, Papernot et al. trained a local deep neural network (DNN)
using crafted inputs and output labels generated by the target ``victim'' DNN~\cite{papernot2015limitations}. Thereafter, the local network was used to generate adversarial samples that were highly effective on the original victim DNN. The key here was that the adversary had very limited information---they knew nothing about the architecture or parameters, only that the victim was a DNN---and had only oracle access that allowed them to obtain outputs for chosen inputs.
In this paper, we develop and validate a generalized algorithm for black box attacks that exploits adversarial sample transferability on broad classes of machine learning. In investigating these attacks, we explore transferability within and between different classes of machine learning classifier algorithms: deep neural networks (DNNs), logistic regression (LR), support vector machines (SVM), decision trees (DT), nearest neighbors (kNN), and ensembles (Ens.). In doing so, we demonstrate that black-box attacks are generally applicable to machine learning and can effectively target classifiers not built using deep neural networks. The generalization is two-fold: we show that (1) the substitute model can be trained with techniques other than deep learning, and (2) transferability-based black box attacks are not restricted to deep learning targets and are in fact successful with targeted models of many machine learning types. Our contributions are summarized as follows:
\vspace{-1em}
\begin{itemize}
\item We introduce adversarial sample crafting techniques for support vector machines as well as decision trees---both of which are non-differentiable machine learning models.
\item We study adversarial sample transferability across the machine learning space and find that samples largely transfer well across models trained with the same machine learning technique, and across models trained with different techniques or ensembles taking collective decisions. For example, a support vector machine and decision tree respectively misclassify $91.43\%$ and $87.42\%$ of adversarial samples crafted for a logistic regression model.
Previous work on adversarial example transferability has primarily studied the case where
at least one of the models involved in the transfer is a neural network \cite{szegedy2013intriguing,goodfellow2014explaining,WardeFarley16},
while we aim to more generally characterize the transferability between a diverse set of models chosen
to capture most of the space of popular machine learning algorithms.
\item We generalize the learning of substitute models from deep learning to logistic regression and support vector machines. Furthermore, we show that it is possible to learn substitutes matching labels produced by many machine learning models (DNN, LR, SVM, kNN) at rates above $80\%$. We improve the accuracy and computational cost of a previously proposed substitute learning technique by introducing a new hyper-parameter and the use of reservoir sampling.
\item We conduct black-box attacks against classifiers hosted by Amazon and Google. We show that despite our lack of knowledge of the
classifier internals, we can force them to respectively misclassify 96.19\% and 88.94\% of their inputs using a logistic regression substitute model trained by making only $800$ queries to the target.
\end{itemize}
\section{Black-Box Attacks of Remote\\ Machine Learning Classifiers}
\label{sec:ml-oracle-attack}
Intra-technique and cross-technique transferability of adversarial samples, together with
the learning of substitutes for classifier oracles, enable a range of attacks
targeting remote machine learning based systems whose internals are unknown to adversaries. To
illustrate the feasibility of \emph{black-box attacks} on such remote systems,
we target two machine learning classifiers trained and hosted
by Amazon and Google, respectively. We find it is possible to craft samples misclassified by
these commercial oracles at respective rates of $96.19\%$ and
$88.94\%$ after making $800$ queries to learn substitute models approximating them.
\subsection{The Oracle Attack Method}
This section's adversarial threat model is identical to the one used when
learning substitutes in Section~\ref{sec:learning-approximators}: adversaries
have an \emph{oracle} access to the remote classifier. Its type, parameters, or
training set are all unknown to the adversary. The attack method leverages
Sections~\ref{sec:transferability-section}
and~\ref{sec:learning-approximators} of this paper, and is a generalization of
the approach introduced in~\cite{papernot2016practical}.
The adversary first locally trains a substitute model to
approximate the remotely hosted classifier, using queries to the oracle as described in
Section~\ref{sec:learning-approximators}. We consider the use of deep learning
and logistic regression to learn substitutes for classifiers. We apply
the two refinements introduced in this paper: a periodic step size
and reservoir sampling. Since substitute models are locally trained, the
adversary has full knowledge of their model parameters. Thus, one of
the adversarial sample crafting algorithms introduced in
Section~\ref{sec:adv-sample-crafting} corresponding to the machine learning
technique used to learn the substitute is employed to craft adversarial samples
misclassified by the substitute model. The adversary then leverages either intra-technique
or cross-technique transferability of adversarial samples---depending on the techniques with which the substitute and oracle were learned: the inputs misleading
the locally trained substitute model are very likely to also deceive the targeted remotely
hosted oracle.
Previous work conducted such an attack using a substitute and targeted
classifier both trained using deep learning, demonstrating that the attack was
realistic using the MetaMind API providing Deep Learning as a
Service~\cite{papernot2016practical}. We generalize these results by performing
the attack on Machine Learning as a Service platforms that employ techniques
that are unknown to us: Amazon Web Services and Google Cloud Prediction. Both
platforms automate the process of learning classifiers using a labeled dataset
uploaded by the user. Unlike MetaMind, neither of these platforms claim to
exclusively use deep learning to build classifiers. When analyzing our
results, we found that Amazon uses logistic regression (cf. below) but to the
best of our knowledge Google has never disclosed the technique they use to
train classifiers, ensuring that our experiment is properly blind.
\subsection{Amazon Web Services Oracle}
Amazon offers a machine learning service, \emph{Amazon Machine
Learning},\footnote{\url{https://aws.amazon.com/machine-learning}} as part of
their Amazon Web Services platform. We used this service to train and host a ML
classifier oracle. First, we uploaded a CSV encoded version of the MNIST
training set to an S3 bucket on Amazon Web Services. We truncated the pixel
values in the CSV file to $8$ decimal places. We then started the ML model
training process on the Machine Learning service: we loaded the CSV training
data from our S3 bucket, selected the multi-class model type, provided the
target column in the CSV file, and kept the default configuration settings.
Note that Amazon offers limited customization options: the settings allow one
to customize the recipe (data transformations), specify a maximum model size
and number of training epochs, disable training data shuffle, and change the
regularization type between L1 and L2 or simply disable regularization. The
training process takes a few minutes and outputs a classifier model achieving a
$92.17\%$ accuracy on the MNIST test set. We have no way to improve that
performance beyond the limited customizing options as the intent of the service
is to automate model training. Finally, we activate real-time predictions to be
able to query the model for labels from our local machine.
We then use the Python API provided with the Amazon Machine Learning service to
submit prediction queries to our trained oracle model and retrieve the output
label. Although confidence values are available for predictions, we only
consider the label to ensure our threat model for adversarial capabilities
remains realistic. We incorporate this oracle in our experimental setup and
train two substitute models to approximate the labels produced by this oracle,
a DNN and LR, as SVM substitutes were dismissed by the conclusions of Section~\ref{sec:learning-approximators}. We train two variants of the DNN and LR substitutes. The first variant is trained with the vanilla dataset augmentation and the second variant with the enhanced dataset augmentation introduced in this paper, which uses both a periodic step size and reservoir sampling.
Learning is initialized
with a substitute training set of $100$ samples
from the MNIST test set. For all substitutes, we measure attack success as the proportion of the $10,000$ adversarial
samples---produced from the MNIST test set using the fast gradient sign method with parameter
$\varepsilon=0.3$ (cf. Section~\ref{sec:adv-sample-crafting})---that are misclassified by the Amazon oracle.
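The fast gradient sign method used here perturbs each input feature by $\varepsilon$ in the direction of the sign of the loss gradient. A minimal NumPy sketch (the gradient argument is a placeholder for whatever the locally trained substitute provides):

```python
import numpy as np

def fgsm(x, grad_loss_wrt_x, epsilon=0.3):
    """Fast gradient sign method: perturb x by epsilon in the
    direction that increases the loss. `grad_loss_wrt_x` is the
    loss gradient at x, computed on the substitute model."""
    x_adv = x + epsilon * np.sign(grad_loss_wrt_x)
    # Keep pixel values in the valid [0, 1] range for MNIST inputs.
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: a fake "gradient" with mixed signs.
x = np.array([0.2, 0.9, 0.5])
g = np.array([1.3, -0.7, 0.0])
print(fgsm(x, g))  # each component moved by +/- 0.3 (or unchanged)
```

Because the perturbation only uses the sign of the gradient, a single backward pass through the substitute suffices to craft each sample.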
\begin {table}[t]
\centering
\begin{tabular}{|c||c|c|}
\hline
Substitute type & DNN & LR \\ \hline \hline
$\rho=3$ (800 queries) & 87.44\% & 96.19\% \\ \hline
$\rho=6$ (6,400 queries) & 96.78\% & 96.43\% \\ \hline\hline
$\rho=6$ (PSS + RS) (2,000 queries) & 95.68\% & 95.83\% \\ \hline
\end{tabular}
\caption{Misclassification rates of the Amazon oracle on adversarial samples ($\varepsilon=0.3$) produced with DNN and LR substitutes after $\rho\in\{3,6\}$ augmentation iterations. Substitutes are trained without and with refinements from Section~\ref{sec:learning-approximators}: periodic step size (PSS) and reservoir sampling (RS).}
\label{tbl:aws-misclassification}
\end{table}
Misclassification rates of the Amazon Machine Learning oracle on adversarial
samples crafted using both the DNN and LR substitutes after $\rho\in\{3,6\}$ dataset augmentation iterations
are reported in Table~\ref{tbl:aws-misclassification}. Results are given for models learned without and with the two refinements---periodic step size (PSS) and reservoir sampling (RS)---introduced in Section~\ref{sec:learning-approximators}. With a misclassification
rate of $96.19\%$ for an adversarial perturbation $\varepsilon=0.3$ using a LR substitute
trained with $800$ queries ($\rho=3$) to the oracle, the model trained by Amazon is easily
misled. To understand why, we carefully read the online documentation and eventually found
one page indicating that the type of model trained by the Amazon Machine
Learning service is an ``industry-standard'' multinomial logistic
regression.\footnote{\url{http://docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html}}
As seen in Section~\ref{sec:transferability-section}, LR is extremely vulnerable to
intra-technique transferability and, to a lesser extent, to cross-technique
transferability. In fact, as pointed out by Goodfellow et
al.~\cite{goodfellow2014explaining}, shallow models like logistic regression
are unable to cope with adversarial samples and learn a classifier resistant to
them. This explains why (1) the attack is very successful and (2) the LR
substitute performs better than the DNN substitute.
Additionally, Table~\ref{tbl:aws-misclassification} shows how the use of a periodic step size (PSS) together with
reservoir sampling (RS) allows us to reduce the number
of queries made to the Amazon oracle while learning
a DNN substitute producing adversarial samples with
higher transferability to the targeted classifier.
Indeed, we reduce by a factor of more than $3$ the
number of queries made from $6,400$ to $2,000$, while only degrading the misclassification rate from $96.78\%$ to $95.68\%$---still larger than
the rate of $87.44\%$ achieved after $800$ queries by the substitute learned without PSS and RS. For the LR substitutes,
we do not see any positive impact from the use of PSS
and RS, which is most likely due to the fast convergence
of LR substitute learning, as observed in Section~\ref{sec:learning-approximators}.
\subsection{Google Cloud Prediction Oracle}
To test whether this poor performance is limited to the Amazon Web Services
platform, we now target the Google Cloud Prediction API service\footnote{\url{https://cloud.google.com/prediction/}}. The procedure
to train a classifier on Google's platform is similar to Amazon's. We first
upload to Google's Cloud Storage service the CSV encoded file of the MNIST
training data identical to the one used to train the oracle on Amazon Machine
Learning. We then activate the Prediction API on Google's Cloud Platform and
train a model using the API's method named
\texttt{prediction.trainedmodels.insert}. The only properties we are able to
specify are the expected multi-class nature of our classifier model and
the column in the CSV indicating target labels. We then evaluate the resulting
model using the API method \texttt{prediction.trainedmodels.predict} and an uploaded CSV file of the MNIST test set. The API reports
an accuracy of $92\%$ on this test set for the model trained.
\vspace*{-0.05in}
We now use the Google Cloud Python API to connect our experimental setup to the Prediction API, thus allowing our algorithms to
make queries to the Google classifier oracle. As we did for Amazon, we train two
substitute models (DNN and LR) using an initial substitute training set of 100
samples from the MNIST test set. For each substitute type, we train two model variants: the first one without periodic step size (PSS) or reservoir sampling (RS), the second one with both PSS and RS. Table~\ref{tbl:google-misclassification}
reports the rate of adversarial samples produced by each of the four resulting substitutes
and misclassified by the Google Prediction API oracle.
\begin {table}[t]
\centering
\begin{tabular}{|c||c|c|}
\hline
Substitute type & DNN & LR \\ \hline \hline
$\rho=3$ (800 queries) & 84.50\% & 88.94\% \\ \hline
$\rho=6$ (6,400 queries) & 97.17\% & 92.05\% \\ \hline\hline
$\rho=6$ (PSS + RS) (2,000 queries) & 91.57\% & 97.72\% \\ \hline
\end{tabular}
\caption{Misclassification rates of the Google oracle on adversarial samples ($\varepsilon=0.3$) produced with DNN and LR substitutes after $\rho\in\{3,6\}$ augmentation iterations. Substitutes are trained without and with refinements from Section~\ref{sec:learning-approximators}: periodic step size (PSS) and reservoir sampling (RS).}
\label{tbl:google-misclassification}
\end{table}
\vspace*{-0.05in}
The model trained using Google's machine learning service is a little more
robust to adversarial samples than the one trained using Amazon's service, but
is still vulnerable to a large proportion of samples: $88.94\%$ of adversarial
samples produced with a perturbation $\varepsilon=0.3$ using a LR substitute
trained with $800$ queries to the oracle are misclassified. This confirms the
above demonstration of the feasibility of black-box attacks against the classifier hosted by Amazon. Furthermore, if we use PSS and RS, the
misclassification rate is $91.57\%$ for the DNN substitute and $97.72\%$ for the LR substitute, which again
demonstrates that combining PSS and RS increases misclassification compared to the original method for $\rho=3$, and reduces by a factor of $3$ the number of queries ($2,000$) compared to the original method for $\rho=6$.
\vspace*{-0.05in}
\textbf{A brief discussion of defenses -} In an effort to evaluate possible defenses against such attacks, we now add these adversarial samples to the MNIST training dataset and train a
new instance of the classifier oracle with the same procedure. The new oracle
has an accuracy of $91.25\%$ on the MNIST test set. Adversarial samples crafted
by training a new DNN substitute, even without PSS and RS, are still misclassified at a rate of $94.2\%$ after
$\rho=3$ iterations and $100\%$ after $\rho=6$. This defense is thus not
effective to protect the oracle from adversaries manipulating inputs. This is
most likely due to the fact that the Google Prediction API uses shallow
techniques to train its machine learning models, but we have no means to verify this. One could
also try to deploy other defense mechanisms like defensive
distillation~\cite{papernot2015distillation}. Unfortunately, as we do not have
any control on the training procedure used by Google Cloud, we cannot do so. To the
best of our knowledge, Google has not disclosed the machine learning technique
they use to train models served by their Google Cloud Prediction API service.
As such, we cannot make any further recommendations on how to better secure
models trained using this service.
\section{Approach Overview}
\label{sec:transferability-theory}
In this section, we describe our approach, which is structured around the
evaluation of two hypotheses relevant to the design of black-box attacks
against machine learning classifiers.
Let us precisely define adversarial sample transferability. Consider an
adversary interested in producing an \emph{adversarial sample} $\vec{x^*}$
misclassified in any class different from the class assigned by model $f$ to
legitimate input $\vec{x}$. This can be done by solving\footnote{Finding a
closed form solution to this problem is not always possible, as some machine
learning models $f$ preclude the optimization problem from being linear or
convex. Nevertheless, several approaches have been proposed to find
approximate solutions to
Equation~\ref{eq:adv-sample-crafting-misclassification}. They yield adversarial
samples effectively misleading non-linear and non-convex models like neural
networks~\cite{szegedy2013intriguing,goodfellow2014explaining,papernot2015limitations}.
In addition, we introduce new techniques to craft adversarial samples against
support vector machines and decision trees in
Section~\ref{sec:adv-sample-crafting}.} the following optimization
problem~\cite{szegedy2013intriguing}:
\begin{equation}
\label{eq:adv-sample-crafting-misclassification}
\vec{x^*}=\vec{x}+\delta_{\vec{x}} \texttt{ where } \delta_{\vec{x}} = \arg\min_{\vec{z}} f(\vec{x}+\vec{z}) \neq f(\vec{x})
\end{equation}
Samples $\vec{x^*}$ solving
Equation~\ref{eq:adv-sample-crafting-misclassification} are specifically
computed to mislead model $f$. However, as stated previously, such adversarial
samples are in practice also frequently misclassified by models $f'$ different
from $f$. To facilitate our discussion, we formalize this \emph{adversarial
sample transferability} notion as:
\begin{equation}
\label{eq:inter-transferability-rate}
\Omega_X(f,f') = \left| \left\{ f'(\vec{x}) \neq f'\left(\vec{x}+\delta_{\vec{x}}\right) : \vec{x}\in X\right\} \right|
\end{equation}
where set $X$ is representative of the expected input distribution
for the task solved by models $f$ and $f'$. We partition
adversarial sample
transferability into two variants characterizing the pair of
models $(f,f')$. The first, \emph{intra-technique transferability}, is
defined across models trained with the same machine learning technique but
different parameter initializations or datasets (e.g., $f$ and $f'$
are both neural networks or both decision trees). The second,
\emph{cross-technique transferability}, considers models trained using two
techniques (e.g., $f$ is a neural network and $f'$ a decision tree).
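Measured over a finite test set, the transferability rate reduces to a simple count; a hypothetical sketch, where \texttt{f\_prime} stands for any trained classifier returning a label:

```python
def transferability(f_prime, X, X_adv):
    """Omega_X(f, f'): number of inputs whose adversarial version,
    crafted against some model f, also changes f_prime's prediction.
    X_adv[i] is X[i] plus the perturbation delta_x computed on f."""
    return sum(1 for x, x_adv in zip(X, X_adv)
               if f_prime(x) != f_prime(x_adv))

# Toy illustration with a threshold "classifier" on scalars.
f_prime = lambda x: int(x > 0.5)
X = [0.2, 0.4, 0.9]
X_adv = [0.6, 0.45, 0.95]        # only the first perturbation transfers
print(transferability(f_prime, X, X_adv))  # 1
```

Dividing the count by $|X|$ yields the transferability rates reported in percentages throughout the paper.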
\noindent\textbf{Hypothesis 1:} \emph{Both intra-technique and cross-technique
adversarial sample transferabilities are consistently strong phenomena across
the space of machine learning techniques}.
In this first hypothesis, we explore how well both variants of transferability
hold across classes of machine learning algorithms. The motivation behind this
investigation is that adversarial sample transferability constitutes a threat
vector against machine learning classifiers in adversarial settings. To
identify the most vulnerable classes of models, we need to generate an accurate
comparison of the attack surface of each class in constrained experimental
settings.
To validate this hypothesis, we perform a large-scale study in
Section~\ref{sec:transferability-section}. Each of the
study's two folds investigates one of the adversarial sample transferability variants:
intra-technique and cross-technique. For completeness, we consider a collection of
models representatively spanning the machine learning space, as demonstrated by
Table~\ref{tbl:machine-learning-techniques}. Models are trained on MNIST
data~\cite{lecun1998mnist} to solve the hand-written digit recognition task. In
the first fold of the study, we measure intra-technique adversarial sample
transferability rates $\Omega_X(f,f')$, for each machine learning technique,
across models trained on different subsets of the data. In the second fold of
the study, we measure cross-technique adversarial sample transferability rates
$\Omega_X(f,f')$ across models corresponding to all possible pairs of machine
learning techniques.
\begin {table}[t]
\centering
\begin{tabular}{|c||c|c|c|}
\hline
ML & Differentiable & Linear & Lazy \\
Technique & Model & Model & Prediction\\ \hline \hline
DNN & Yes & No & No \\ \hline
LR & Yes & Log-linear & No \\ \hline
SVM & No & No & No \\ \hline
DT & No & No & No \\ \hline
kNN & No & No & Yes \\ \hline
Ens & No & No & No \\ \hline
\end{tabular}
\caption{Machine Learning Techniques studied in Section~\ref{sec:transferability-section}}
\label{tbl:machine-learning-techniques}
\end{table}
\noindent\textbf{Hypothesis 2:} \emph{Black-box attacks are possible in
practical settings against any unknown machine learning classifier.}
Our motivation is to demonstrate that deployment of machine learning in
settings where there are incentives for adversaries to have models misbehave
must take into account the practical threat vector of adversarial samples.
Indeed, if black-box attacks are realistic in practical settings, machine
learning algorithm inputs must be validated as being part of the expected
distribution of inputs. As is the case for SQL injections, the existence of
adversarial samples calls for input validation in production systems using
machine learning.
The verification of this second hypothesis is two-fold as well. In
Section~\ref{sec:learning-approximators}, we show how to transfer the
generalization knowledge of any machine learning classifiers into a substitute
model by querying the classifier for labels on carefully selected inputs. In
Section~\ref{sec:ml-oracle-attack}, we perform black-box attacks against
commercial machine learning classifiers hosted by Amazon and Google. As we
validate the hypothesis throughout Sections~\ref{sec:learning-approximators}
and~\ref{sec:ml-oracle-attack}, we operate under the specific threat model of
an oracle, described in~\cite{papernot2016practical}, which characterizes
realistic adversarial settings. Instead of having full knowledge of the model's
architecture $f$ and its parameters $\theta$, as was the case for the first
hypothesis validation in Section~\ref{sec:transferability-section}, we now
assume the adversary's only capability is to observe the label predicted by
the model $f$ on inputs of its choice.
\section{Learning Classifier Substitutes by Knowledge Transfer}
\label{sec:learning-approximators}
In the previous section, we identified machine learning techniques (e.g., DNNs and LR) yielding
models adequate for crafting samples misclassified across models
trained with different techniques, i.e., adversarial samples
with strong cross-technique transferability. Thus, in order to craft adversarial
samples misclassified by a classifier whose underlying model is unknown,
adversaries can instead use a \emph{substitute} model if it solves the same classification problem and its parameters are known. Therefore, efficiently learning substitutes is
key to designing \emph{black-box attacks} where adversaries target remote
classifiers whose model, parameters, and training data are unknown to them. This is precisely the attack
scenario evaluated against commercial machine learning platforms in
Section~\ref{sec:ml-oracle-attack}, while we focus in this section on the
prerequisite learning of substitutes for machine learning classifiers.
We enhance an algorithm introduced in~\cite{papernot2016practical}
to learn a substitute model for a given
classifier simply by querying it for labels on carefully chosen inputs. More precisely, we introduce two refinements to the algorithm: one improves its accuracy and the second reduces its computational complexity.
We generalize the learning of
substitutes to oracles using a range of machine learning techniques: DNNs, LR, SVMs, DTs, and kNNs.
Furthermore, we show that both DNNs and LR can be used as substitute models for all machine learning techniques studied, with the exception of decision trees.
\subsection{Dataset Augmentation for Substitutes}
\label{sec:jacobian-based-dataset-augmentation}
The targeted
classifier is designated as an \emph{oracle} because adversaries have the
minimal capability of querying it for predictions on inputs of
their choice. The oracle returns the \emph{label} (\textbf{not} the probabilities) assigned to the
sample.
No other knowledge of the
classifier (e.g., model type, parameters, training data) is available. To
circumvent this, we build on a technique introduced
in~\cite{papernot2016practical}, which leverages a dataset augmentation
technique to train the substitute model.
\textbf{Jacobian-based dataset augmentation -} We use this augmentation
technique introduced in~\cite{papernot2016practical} to learn DNN and LR substitutes for oracles. First, one collects
an initial substitute training set of limited size (representative of the task
solved by the oracle) and labels it by querying the oracle. Using this labeled
data, we train a first substitute model $f$ likely to perform poorly as a
source of adversarial samples due to the small number of samples used for
training. To select additional training points, we use the following heuristic:
\begin{equation}
\label{eq:jacobian-based-heuristic}
S_{\rho+1} = \left\{ \vec{x}+\lambda_\rho \cdot \sgn\left(J_f[\tilde{O}(\vec{x})]\right) : \vec{x}\in S_\rho \right\} \cup S_\rho
\end{equation}
where $S_\rho$ and $S_{\rho+1}$ are the previous and new training sets,
$\lambda_\rho$ a parameter fine-tuning the augmentation step size, $J_f$ the
Jacobian matrix of substitute $f$, and $\tilde{O}(\vec{x})$ the oracle's label
for sample $\vec{x}$. We train a new instance $f$ of the substitute with the
augmented training set $S_{\rho+1}$, which we can label simply by querying
oracle $\tilde{O}$. By alternately augmenting the training set and training a
new instance of the substitute model for multiple iterations $\rho$, Papernot
et al. showed that substitute DNNs can approximate other
DNNs~\cite{papernot2016practical}.
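The alternation between training the substitute and augmenting its training set can be sketched as follows; the \texttt{oracle}, \texttt{train}, and \texttt{jacobian} callables are stand-ins for the adversary's setup, not APIs from the paper:

```python
import numpy as np

def learn_substitute(oracle, S0, train, jacobian, lam, iterations):
    """Jacobian-based substitute learning (vanilla variant).
    oracle(x) -> label, the adversary's only access to the target;
    train(inputs, labels) -> a substitute model f (a callable here);
    jacobian(f, x) -> gradient of f's output component for the
    oracle's label at x."""
    S = list(S0)
    f = train(S, [oracle(x) for x in S])
    for _ in range(iterations):
        # Push each point along the sign of the Jacobian (the
        # augmentation rule above); the training set doubles.
        S = S + [x + lam * np.sign(jacobian(f, x)) for x in S]
        f = train(S, [oracle(x) for x in S])
    return f
```

Each iteration requires one oracle query per new training point, which is what the reservoir-sampling refinement later bounds.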
\textbf{Periodic Step Size -} When introducing the technique, Papernot et al. used a fixed step size parameter
$\lambda_\rho$ throughout the substitute learning iterations $\rho$. In this section, we
show that by having a step size periodically alternating between positive and negative
values, one can improve the quality of the oracle approximation made by the substitute, which we measure in terms
of the number of labels matched with the
original classifier oracle. More precisely, we introduce an iteration period $\tau$
after which the step size is multiplied by $-1$. Thus, the step size
$\lambda_\rho$ is defined as:
\begin{equation}
\label{fig:periodical-step-size}
\lambda_\rho = \lambda \cdot (-1)^{\left\lfloor \frac{\rho}{\tau} \right\rfloor}
\end{equation}
where $\tau$ is set to be the number of epochs after which the Jacobian-based
dataset augmentation does not lead to any substantial improvement in the
substitute. A grid search can also be performed to find an optimal value for
the period $\tau$. We also experimented with a decreasing grid step amplitude
$\lambda$, but did not find that it yielded substantial improvements.
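In code, the periodic step size is a one-liner; with $\tau=3$ the sign flips every three iterations:

```python
def step_size(lam, rho, tau):
    """Periodic step size: lambda_rho = lambda * (-1)^floor(rho / tau),
    flipping sign every tau augmentation iterations."""
    return lam * (-1) ** (rho // tau)

# With tau = 3, iterations 0..8 give the sign pattern + + + - - - + + +
print([step_size(0.1, rho, 3) for rho in range(9)])
```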
\textbf{Reservoir Sampling -} We also introduce the use of \emph{reservoir sampling}~\cite{vitter1985random}
as a means to reduce the number of queries made to the oracle. This is useful
when learning substitutes in realistic environments where the number of label queries an adversary can make without
exceeding a quota or being detected by a defender is
constrained. Reservoir
sampling is a class of algorithms that randomly select $\kappa$ samples from a list of samples. The
total number of samples in the list can be both very large and unknown. In our
case, we use reservoir sampling to select a limited number of new inputs
$\kappa$ when performing a Jacobian-based dataset augmentation. This prevents
the exponential growth of queries made to the oracle at each augmentation
iteration. At iterations $\rho>\sigma$ (the first $\sigma$ iterations are
performed normally), when considering the previous set $S_{\rho-1}$ of
substitute training inputs, we select $\kappa$ inputs from $S_{\rho-1}$ to be
augmented in $S_\rho$. These $\kappa$ inputs are selected using reservoir
sampling, as described in Algorithm~\ref{alg:reservoir-sampling}. This
technique ensures that each input in $S_{\rho-1}$ has an equal probability
$\frac{\kappa}{\left| S_{\rho-1} \right|}$ of being augmented in $S_\rho$. The number
of queries made to the oracle is reduced from $n\cdot2^\rho$ for the vanilla
Jacobian-based augmentation to $n\cdot2^\sigma+\kappa\cdot(\rho-\sigma)$ for
the Jacobian-based augmentation with reservoir sampling. Our experiments show
that the reduced number of training points in the reservoir sampling variant
does not significantly degrade the quality of the substitute.
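The two query-count expressions above compare as follows, using for illustration the experimental values from later in this section ($n=100$ initial samples, $\sigma=3$, $\kappa=400$):

```python
def queries_vanilla(n, rho):
    """Oracle labels needed after rho vanilla augmentation
    iterations: the training set doubles each time."""
    return n * 2 ** rho

def queries_reservoir(n, rho, sigma, kappa):
    """With reservoir sampling after the first sigma iterations,
    only kappa new points are labeled per later iteration."""
    return n * 2 ** sigma + kappa * (rho - sigma)

# n = 100 initial samples, 10 iterations, sigma = 3, kappa = 400:
print(queries_vanilla(100, 10))            # 102400
print(queries_reservoir(100, 10, 3, 400))  # 3600
```

The exponential term is thus replaced by a linear one after iteration $\sigma$.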
\begin{algorithm}[t]
\caption{Jacobian-based augmentation with Reservoir Sampling: sets are considered as arrays for ease of notation.}
\label{alg:reservoir-sampling}
\begin{algorithmic}[1]
\Require $S_{\rho-1}$, $\kappa$, $J_f$, $\lambda_\rho$
\State $N \leftarrow \left| S_{\rho-1} \right|$
\State Initialize $S_{\rho}$ as array of $N+\kappa$ items
\State $S_{\rho}[0:N-1] \leftarrow S_{\rho-1}$
\For{$i\in 0..\kappa-1$}
\State $S_{\rho}[N+i] \leftarrow S_{\rho-1}[i]+\lambda_\rho \cdot \sgn(J_f[\tilde{O}(S_{\rho-1}[i])]) $
\EndFor
\For{$i\in \kappa .. N-1$}
\State $r\leftarrow$ random integer between $0$ and $i$
\If{$r < \kappa$}
\State $S_{\rho}[N+r] \leftarrow S_{\rho-1}[i]+\lambda_\rho \cdot \sgn(J_f[\tilde{O}(S_{\rho-1}[i])]) $
\EndIf
\EndFor
\State \Return $S_{\rho}$
\end{algorithmic}
\end {algorithm}
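Algorithm~\ref{alg:reservoir-sampling} translates directly into Python; here \texttt{jac\_sign\_step} stands in for the term $\lambda_\rho \cdot \sgn(J_f[\tilde{O}(\vec{x})])$, which in practice requires the substitute's Jacobian and one oracle query per point:

```python
import random
import numpy as np

def augment_with_reservoir(S_prev, kappa, jac_sign_step, seed=None):
    """Jacobian-based augmentation with reservoir sampling.
    S_prev: previous substitute training inputs (list of arrays);
    kappa: number of inputs selected for augmentation;
    jac_sign_step(x): stand-in returning the signed Jacobian step at x."""
    rng = random.Random(seed)
    N = len(S_prev)
    S_new = list(S_prev) + [None] * kappa
    # Fill the reservoir with the first kappa augmented candidates...
    for i in range(kappa):
        S_new[N + i] = S_prev[i] + jac_sign_step(S_prev[i])
    # ...then let every later input replace a reservoir slot with
    # probability kappa / (i + 1), as in classic reservoir sampling.
    for i in range(kappa, N):
        r = rng.randint(0, i)
        if r < kappa:
            S_new[N + r] = S_prev[i] + jac_sign_step(S_prev[i])
    return S_new
```

Each of the $N$ previous inputs is selected for augmentation with equal probability, while only $\kappa$ new points per iteration need to be labeled by the oracle.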
\subsection{Deep Neural Network Substitutes}
\label{sec:dnn-approximators}
In~\cite{papernot2016practical}, the oracle classifier approximated was always
a DNN. However, the authors concluded with preliminary results suggesting
applicability to a nearest neighbors classifier. We here show that in fact the
technique is generalizable and applicable to many machine learning techniques
by evaluating its performance on 5 types of ML classifiers: a DNN, LR, SVM, DT,
and kNN. This spectrum is representative of machine learning (cf.
Section~\ref{sec:transferability-experimental-setup}). Our experiments suggest
that one can accurately \emph{transfer} the knowledge from \emph{many} machine learning
classifiers to a DNN and obtain a DNN mimicking the decision boundaries of the
original classifier.
Using the Jacobian-based augmentation technique, we train 5 different
substitute DNNs to match the labels produced by 5 different oracles, one for
each of the ML techniques mentioned.
These classifiers serving as oracles are all trained on the $50,000$ sample
MNIST training set using the models described previously in
Section~\ref{sec:transferability-experimental-setup}. To approximate them, we
use the first $100$ samples from the MNIST test set (unseen during training) as
the initial substitute training set and follow three variants of the procedure
detailed in Section~\ref{sec:jacobian-based-dataset-augmentation} with
$\lambda=0.1$: (1) vanilla Jacobian-based augmentation, (2) with $\tau=3$
periodic step size, (3) with both $\tau=3$ periodic step size and reservoir
sampling with parameters $\sigma=3$ and $\kappa=400$. The substitute
architecture is identical to the DNN architecture from
Section~\ref{sec:transferability-experimental-setup}. We allow experiments to
train substitutes for 10 augmentation iterations, i.e. $\rho\leq 9$.
Figure~\ref{fig:learning-approximators-dnn} plots at each iteration $\rho$ the
share of samples on which the substitute DNNs agree with predictions made by
the classifier oracle they are approximating. This proportion is estimated by
comparing the labels assigned to the MNIST test set by the substitutes and
oracles before each iteration $\rho$ of the Jacobian-based dataset
augmentation. The substitutes used in this figure were all trained with both
a periodic step size and reservoir sampling, as described previously.
Generally speaking, all substitutes are able to successfully
approximate the corresponding oracle: after $10$ augmentation iterations
($\rho=9$), the labels assigned match for about $77\%$ to $83\%$ of the MNIST
test set, except for the DT oracle, which is only matched for $48\%$ of the
samples. This difference could be explained by the non-differentiability of
decision trees. In contrast, substitute DNNs are able to approximate the
nearest neighbors oracle even though it uses lazy classification: no model is
learned at training time and predictions are made by finding the closest
training sample(s).
The first three rows of Table~\ref{tbl:refinements} quantify the impact of
the two refinements introduced above on the proportion of test set labels
produced by the oracle that were matched by DNN substitutes.
The first refinement, the periodic step size, allows
substitutes to approximate their target oracle more accurately. For instance, at
$\rho=9$ iterations, the substitute DNN trained with a periodic step size
for the DNN oracle matches $89.28\%$ of the labels, whereas the vanilla substitute
DNN only matched $78.01\%$. Similarly, the substitute DNN trained with a periodic step size
for the SVM oracle matches $83.79\%$ of the labels, whereas the vanilla substitute
only matched $79.68\%$.
The second refinement, reservoir sampling, allows us to train substitutes for
more augmentation iterations without making too many queries to the oracle. For
instance, $10$ iterations with reservoir sampling (using $\sigma=3$ and $\kappa
=400$) make $100\cdot 2^3 + 400(10-3)=3,600$ queries to the oracle instead of
$102,400$ queries with the vanilla technique. The reduced number of queries
lowers the substitute quality compared to the periodic step size substitutes,
but it remains superior to that of the vanilla substitutes. For instance,
when approximating a DNN oracle, the vanilla substitute matched $78.01\%$ of
the labels, the periodic step size one $89.28\%$, and the periodic step size
with reservoir sampling one $82.90\%$.
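The query-count accounting above can be checked in a few lines; this sketch assumes the set doubles for the first $\sigma$ iterations and that $\kappa$ reservoir-sampled points are labeled per iteration thereafter:

```python
def query_count(n0, rho, sigma=None, kappa=None):
    # Total oracle queries after `rho` augmentation iterations, starting
    # from n0 seed samples. Vanilla doubling labels n0 * 2**rho points.
    if sigma is None:
        return n0 * 2 ** rho
    # With reservoir sampling: double for the first `sigma` iterations,
    # then label only `kappa` new points per remaining iteration.
    return n0 * 2 ** sigma + kappa * (rho - sigma)

print(query_count(100, 10))                      # 102400
print(query_count(100, 10, sigma=3, kappa=400))  # 3600
```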
\begin{figure}[t!]
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{fig/learning_dnn_approximator_v3}
\caption{DNN substitutes}
\label{fig:learning-approximators-dnn}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{fig/learning_lr_approximator_v3}
\caption{LR substitutes}
\label{fig:learning-approximators-lr}
\end{subfigure}
\caption{Label predictions matched between the DNN and LR substitutes and their target classifier oracles on test data.}
\end{figure}
\begin{table}[t!]
\centering
\begin{tabular}{|l||c|c|c|c|c|}
\hline
Substitute & DNN & LR & SVM & DT & kNN \\ \hline \hline
DNN & 78.01 & 82.17 & 79.68 & 62.75 & 81.83 \\ \hline
DNN+PSS & 89.28 & 89.16 & 83.79 & 61.10 & 85.67 \\ \hline
DNN+PSS+RS & 82.90 & 83.33 & 77.22 & 48.62 & 82.46 \\ \hline\hline
LR & 64.93 & 72.00 & 71.56 & 38.44 & 70.74 \\ \hline
LR+PSS & 69.20 & 84.01 & 82.19 & 34.14 & 71.02 \\ \hline
LR+PSS+RS & 67.85 & 78.94 & 79.20 & 41.93 & 70.92 \\ \hline
\hline
\end{tabular}
\caption{Impact of our refinements, Periodic Step Size (PSS) and Reservoir Sampling (RS), on the percentage of label predictions matched between the substitutes and their target classifiers on test data after $\rho=9$ substitute iterations. }
\label{tbl:refinements}
\end{table}
\subsection{Logistic Regression Substitutes}
\label{sec:lr-approximators}
Having generalized substitute learning with a demonstration of the
capacity of DNNs to approximate any machine learning model, we now
consider replacing the substitute itself with another machine learning technique.
Experiments in Section~\ref{sec:transferability} led us to conclude that
cross-technique transferability is not specific to adversarial
samples crafted on DNNs, but instead applies to many
learning techniques. Looking at
Figure~\ref{tbl:transferability-matrix} again, a natural candidate is logistic
regression, as it displays cross-technique transferability rates superior to
those of DNNs, except when targeting DNNs themselves.
The Jacobian-based dataset augmentation's implementation for DNNs is easily
adapted to multi-class logistic regression. Indeed, multi-class logistic
regression is analogous to the softmax layer frequently used by deep neural
networks to produce class probability vectors. We can compute the
$(i,j)$ component of the Jacobian of a multi-class LR model as:
\begin{equation}
\label{lr-jacobian}
J_f(\vec{x})[i,j] = \frac{e^{\vec{w_j}\cdot\vec{x}}\left(\vec{w_j}[i]\sum_{l=1}^N e^{\vec{w_l}\cdot\vec{x}} - \sum_{l=1}^N \vec{w_l}[i]\, e^{\vec{w_l}\cdot\vec{x}}\right)}{\left(\sum_{l=1}^N e^{\vec{w_l}\cdot \vec{x}}\right)^2}
\end{equation}
where notations are the ones used in Equation~\ref{eq:logistic-regression}.
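This expression can be sanity-checked numerically. The sketch below (our own reconstruction, not part of the original experiments) compares the analytic softmax Jacobian, written in the equivalent form $J_f(\vec{x})[i,j] = f_j(\vec{x})\,(\vec{w_j}[i] - \sum_l f_l(\vec{x})\,\vec{w_l}[i])$, against central finite differences:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def lr_jacobian(x, W):
    # J[i, j] = d f_j / d x_i for f(x) = softmax(W x), i.e.
    # f_j(x) * ( w_j[i] - sum_l f_l(x) w_l[i] ).
    f = softmax(W @ x)
    return (W - f @ W).T * f[None, :]

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 6))   # 4 classes, 6 input dimensions
x = rng.normal(size=6)
J = lr_jacobian(x, W)

# Central finite-difference check of every entry.
eps = 1e-5
for i in range(6):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    num = (softmax(W @ xp) - softmax(W @ xm)) / (2 * eps)
    assert np.allclose(J[i], num, atol=1e-6)
print("analytic Jacobian matches finite differences")
```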
Hence, we repeat the experiment from Section~\ref{sec:dnn-approximators} but we
now train multi-class logistic regression substitute models (instead of the DNN
substitutes) to match the labels produced by the classifier oracles. Everything
else is unchanged in the experimental setup. As illustrated in
Figure~\ref{fig:learning-approximators-lr}, the change of model type for the
substitute generally degrades the approximation quality: the
proportion of labels matched is reduced. The performance of LR substitutes is
nevertheless competitive with that of DNN substitutes for LR and SVM oracles.
Here again, the substitutes perform poorly on the decision tree oracle, with
match rates barely above $40\%$.
The last three rows of Table~\ref{tbl:refinements} quantify the impact of
the two refinements introduced above on the proportion of test set labels
produced by the oracle that were matched by LR substitutes.
The first refinement, the periodic step size, allows LR
substitutes to approximate their target oracle more accurately, as was also the
case for DNN substitutes. For instance, at
$\rho=9$ iterations, the LR substitute trained with a periodic step size
for the LR oracle matches $84.01\%$ of the labels, whereas the vanilla LR
substitute only matched $72.00\%$. Similarly, the LR substitute trained with a
periodic step size for the SVM oracle matches $82.19\%$ of the labels, whereas
the vanilla substitute only matched $71.56\%$.
The second refinement, reservoir sampling, allows us to reduce the number of
queries with a limited impact on the substitute quality: fewer labels are
matched than with the periodic step size substitutes, but more than with the
vanilla substitutes.
For instance,
when approximating a SVM oracle, the vanilla substitute matched $71.56\%$ of the labels, the periodic step size one $82.19\%$, and the periodic step size with reservoir sampling one $79.20\%$.
The benefit of vanilla LR substitutes compared
to DNN substitutes is that they achieve their asymptotic match rate faster,
after only $\rho=4$ augmentation iterations, corresponding to $1,600$ oracle
queries. Furthermore, LR models are much lighter in terms of computational
cost. These two factors could justify the use of LR (instead of DNN) substitutes
in some contexts. The reservoir sampling technique gives good performances,
especially on LR and SVM oracles.
\subsection{Support Vector Machines Substitutes}
Having observed that deep learning and logistic regression were both relevant
when approximating classifier oracles, we now
turn to SVMs for substitute learning. This is motivated by the strong
cross-technique transferability of adversarial samples crafted
using SVMs observed in Section~\ref{sec:transferability-section}, making SVMs
good candidates for substitutes in a black-box attack.
\textbf{SVM-based dataset augmentation -} To train SVMs to approximate
oracles in a manner analogous to the Jacobian-based dataset augmentation, we introduce a new augmentation
technique. We
replace the heuristic in Equation~\ref{eq:jacobian-based-heuristic}
by the following, which is adapted to the specificities of SVMs:
\begin{equation}
\label{eq:svm-dataset-augmentation}
S_{\rho+1} = \left\{ \vec{x} - \lambda \cdot \frac{\vec{w}[\tilde{O}(\vec{x})]}{\left\| \vec{w}[\tilde{O}(\vec{x})] \right\|} : \vec{x}\in S_\rho \right\} \cup S_\rho
\end{equation}
where $\vec{w}[k]$ is the weight indicating the hyperplane direction of
subclassifier $k$ used to implement a multi-class SVM with the one-vs-the-rest
scheme as detailed in Equation~\ref{eq:sub-svm-binary}. This heuristic selects
new points in the direction orthogonal to the hyperplane acting as the decision
boundary for the binary SVM subclassifier $k$ corresponding to the input's
label. This is precisely the direction used in Equation~\ref{eq:svm-adv-sample}
to find adversarial samples but parameter $\lambda$ is here generally set to
lower values so as to find samples \emph{near} the decision boundary instead of on
the other side of the decision boundary.
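A minimal sketch of this heuristic follows; it is illustrative only, and the one-vs-rest hyperplane weights and oracle labels below are random stand-ins for the actual SVM and black-box oracle:

```python
import numpy as np

def svm_augment(S, W, labels, lam=0.05):
    # Step each point a small distance lam along the unit normal of the
    # one-vs-rest hyperplane selected by the oracle's label, i.e. toward
    # the decision boundary of the matching binary subclassifier.
    w = W[labels]                                       # (n, d) hyperplane normals
    step = w / np.linalg.norm(w, axis=1, keepdims=True)
    return np.vstack([S, S - lam * step])

rng = np.random.default_rng(0)
S = rng.normal(size=(50, 4))            # current substitute training set
W = rng.normal(size=(3, 4))             # 3 one-vs-rest subclassifier weights
labels = rng.integers(0, 3, size=50)    # stand-in oracle labels
S_aug = svm_augment(S, W, labels)
```

Each new point sits at distance exactly $\lambda$ from its parent, on the oracle-label side of the corresponding hyperplane direction.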
\textbf{Experimental Validation -} We
repeat the experiments from Sections~\ref{sec:dnn-approximators}
and~\ref{sec:lr-approximators} but we now train 18 different SVM
models to match labels produced by the classifiers---instead of training DNN or
LR substitutes. Unfortunately, our results suggest that
SVMs are unable to perform knowledge transfer from oracles that are not SVMs
themselves using the dataset augmentation technique introduced in
Equation~\ref{eq:svm-dataset-augmentation}, even with the refinements introduced
previously, the periodic step size and reservoir sampling. Indeed, the SVM
substitute matches $79.80\%$ of the SVM oracle labels, but only $11.98\%$ and
$11.97\%$ of the DNN and LR oracle labels. These numbers are not improved by the
use of a periodic step size and/or reservoir sampling. This could be due to the
specificity of SVM training and the decision boundaries SVMs learn. Future work
should investigate alternative augmentation techniques to confirm our findings.
In this section, we evaluated the capacity of DNN, LR, and SVM substitutes
to approximate a classifier oracle by querying it for labels on inputs
selected using a heuristic relying on the substitute's Jacobian. We observed that predictions
made by DNN and LR substitutes more accurately matched the
targeted oracles than SVM substitute predictions. We
emphasize that all experiments only required knowledge of $100$
samples from the MNIST test set. In other words, learning substitutes does not
require knowledge of the targeted classifier's type,
parameters, or training data, and can thus be performed under realistic adversarial threat
models.
\section{INTRODUCTION}
Prediction of event times, also known as {\it survival analysis} in the clinical context, is one of the most extensively studied topics in the statistical literature, largely due to its significance in a wide range of clinical and population health applications.
It provides a fundamental set of tools to statistically analyze the future behavior of a system, or an individual.
In the classical setup, the primary goal of time-to-event modeling is to either characterize the distribution of the occurrence of an event of interest on a population level \citep{kaplan1958nonparametric,kalbfleisch2011statistical}, or more specifically, to estimate a risk score on a subject level \citep{cox1972regression}.
In recent years, there has been a surge of interest in the prediction of individualized event time distributions \citep{yu2011learning}.
A characteristic feature in the study of time-to-event distributions is the presence of censored instances, which refer to an event that is not reported during the follow-up period of a subject.
This can happen, for instance, when a subject drops out during the study (right censoring), or when the study terminates before the event happens (administrative censoring).
Unlike many conventional predictive models, where incomplete observations are usually safely ignored, censored observations contain crucial information that should be adequately considered.
To efficiently leverage the censored observations, together with the complete observations, a classical treatment is to work with the notion of a {\it hazard function}, formally defined as the instantaneous event risk at time $t$, which can be computed by contrasting the event population to the population at risk at a specific time.
Estimates can be derived, for instance by optimizing the {\it partial likelihood} defined by the relative hazards in the case of the Cox Proportional Hazard model (CoxPH) \citep{cox1972regression}.
Alternatively, other work follows the standard {\it Maximal Likelihood Estimation} (MLE) framework, where the individual event distribution is a deformed version of some baseline distribution.
For example, in the {\it Accelerated Failure Time} model (AFT) \citep{kalbfleisch2011statistical}, covariate effects are assumed to rescale the {\it temporal index} of event-time distributions, {\it i.e.}, they either accelerate or delay event progression.
For censored events, their likelihoods are given as the cumulative density after the censoring time \citep{aalen2008survival}.
While vastly popular among practitioners, these models have been criticized for a number of reasons, in particular for the assumptions they make, that consequently render them unfit for many modern applications \citep{wang2019machine}.
For instance, most survival models, including CoxPH and the proportional odds model \citep{murphy1997maximum}, work under the premise of fixed covariate effects, overlooking individual uncertainty.
However, it has been widely recognized that individual heterogeneity and other sources of variation are common and often time-dependent \citep{aalen1994effects}.
In real-world scenarios, these random factors are typically costly to measure, if not impossible to observe.
Unfortunately, many models are known to be sensitive to violations of this fixed effect assumption, raising serious concerns when deployed in actual practice \citep{hougaard1995frailty}.
Alternatively, machine learning techniques have been leveraged to overcome the limitations of standard statistical survival modeling schemes, especially in terms of the model flexibility needed to address the complexity of data. For example, survival trees employ special node-splitting strategies to stratify the population and derive covariate-based survival curves \citep{bou2011review}, support vector machines \citep{khan2008support} and neural networks \citep{faraggi1995neural} have been used to build more expressive predictors, and LASSO-type variants \citep{zhang2007adaptive} simultaneously perform variable selection to boost statistical efficiency.
Bayesian statistics has also been explored in the context of model selection \citep{lisboa2003bayesian}, averaging \citep{raftery1995bayesian} and imposing prior beliefs \citep{fard2016bayesian}.
Recent advances in modern machine learning bring extra traction to the concept of data-driven survival models, an important step toward precision medicine.
Prominent examples include direct deep learning extensions of CoxPH \citep{katzman2016deep, li2019deep}, accelerated failure time \citep{chapfuwa2018adversarial} and Bayesian exponential family models \citep{ranganath2016deep}.
Other efforts include the use of Gaussian Process to capture complex interactions between covariates in relation to event times \citep{fernandez2016gaussian} and competing risks \citep{alaa2017deep}.
It has been argued that direct modeling of the event distribution might be beneficial \citep{yu2011learning}, and more recently, adversarial distribution matching has also been considered for survival applications \citep{chapfuwa2018adversarial} with promising results reported.
In this work we present a principled approach to address the challenges of nonparametric modeling of time-to-event distributions in the presence of censored instances.
Our approach, named {\it Variational Survival Inference} (VSI), builds upon recent developments in black-box variational inference \citep{ranganath2014black}.
It directly targets the estimation of individualized event-time distributions, rather than a risk score that correlates with event ordering.
By explicitly accounting for latent variables in its formulation, VSI better accommodates for individual uncertainty.
The proposed VSI is a highly scalable and flexible framework without strong assumptions, featuring easy implementation, stable learning, and importantly, it does not rely on {\it ad-hoc} regularizers.
Our key contributions include:
($i$) a variational formulation of nonparametric time-to-event distribution modeling conditioned on explanatory variables;
($ii$) a cost-effective treatment of censored observations;
($iii$) a thorough discussion on how our modeling choices impact VSI performance, and
($iv$) an empirical validation confirming that the proposed VSI compares favorably to its counterparts on an extensive set of tasks, covering representative synthetic and real-world datasets.
\section{BACKGROUND}
A dataset for survival analysis is typically composed of a collection of triplets $D=\{ Y_i=(t_i, \delta_i,X_i)\}_{i=1}^N$, where $i$ indexes the subjects involved in the study.
For each triplet, $X_i \in \mathbb{R}^p$ denotes the set of explanatory variables, $t_i$ is the observation time and $\delta_i$ is the event indicator.
To simplify our discussion, we only consider the standard survival setup.
This means $\delta_i$ is binary with $\delta_i=1$ indicating the event of interest happened at $t_i$, otherwise $\delta_i=0$ corresponds to a censoring event, {\it i.e.}, no event occurs until $t_i$ and the subject is unobserved thereafter.
This distinction creates a natural partition of the dataset $D=D_{c} \bigcup D_{e}$, with $D_c= \{Y_i:\delta_i=0\}$ and $D_{e}= \{Y_i:\delta_i=1\}$ representing the censored and event groups, respectively.
\subsection{Statistical survival analysis}
In survival analysis, one is interested in characterizing the survival function $S(t)$, defined as the probability that any given subject survives until time $t$.
The basic descriptors involved in the discussion of survival analysis are: the cumulative survival density $F(t) = 1-S(t)$, the survival density $f(t) = \partial_t F(t)$, the hazard function $h(t) = \lim_{\Delta t\rightarrow 0} \frac{P(t\leq T < t+\Delta t|T\geq t)}{\Delta t}$ and the cumulative hazard function $H(t) = \int_{0}^t h(s) \text{d} s$.
The following expressions are fundamental to survival analysis \citep{aalen2008survival}:
$S(t) = \exp(-H(t))$ and $f(t) = h(t)S(t)$.
Further, we use $S(t|x)$, $f(t|x)$, $F(t|x)$, $h(t|x)$, $H(t|x)$ to denote their individualized (subject-level) counterparts given explanatory variables $x$.
All survival models leverage these definitions to derive population-level estimators or subject-level predictive functions, {\it e.g.}, of risk, $S(t|x)$, or event time, $f(t|x)$.
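These identities are easy to verify on a concrete case. For a constant hazard $h(t)=\lambda$ (the exponential model), a few lines confirm $S(t)=\exp(-H(t))$ and $f(t)=h(t)S(t)$:

```python
import math

lam, t = 0.7, 2.3          # constant hazard h(t) = lam, i.e. the exponential model
H = lam * t                # cumulative hazard H(t) = integral of h over [0, t]
S = math.exp(-H)           # S(t) = exp(-H(t))
f = lam * S                # f(t) = h(t) S(t)

# Cross-check: f should be the derivative of F(t) = 1 - S(t).
F = lambda s: 1 - math.exp(-lam * s)
eps = 1e-6
assert abs((F(t + eps) - F(t)) / eps - f) < 1e-5
```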
\subsection{Variational inference}
For a latent variable model $p_{\theta}(x,z)=p_{\theta}(x|z)p(z)$, we consider $x\in \mathbb{R}^p$ as an observation, {\it i.e.}, data, and $z\in \mathbb{R}^m$ as a latent variable.
The marginal likelihood, given by $p_{\theta}(x) = \int p_{\theta}(x,z) \text{d} z$, typically does not enjoy a closed form expression.
To avoid direct numerical estimation of $p_{\theta}(x)$, Variational Inference (VI) optimizes a variational bound to the marginal log-likelihood.
The most popular choice is known as the Evidence Lower Bound (ELBO) \citep{wainwright2008graphical}, given by
\begin{equation}\label{eq:elbo}
\text{ELBO}(x)
\triangleq \mathbb{E}_{Z\sim q_{\phi}(z|x)}
\left[ \log \frac{p_{\theta}(x,Z)}{q_{\phi}(Z|x)} \right] \leq \log p_{\theta}(x),
\end{equation}
where $q_\phi(z|x)$ is an approximation to the true (unknown) posterior $p_{\theta}(z|x)$, and the inequality is a direct result of Jensen's inequality.
The variational gap between the ELBO and true $\log$-likelihood is the KL-divergence between posteriors, {\it i.e.}, $\text{KL}(q_{\phi}(z|x)\parallel p_{\theta}(z|x)) = \mathbb{E}_{q_{\phi}(z|x)}[\log q_{\phi}(z|x) - \log p_{\theta}(z|x)]$, which implies the ELBO tightens as $q_{\phi}(z|x)$ approaches the true posterior $p_{\theta}(z|x)$.
For estimation, we seek parameters $\theta$ and $\phi$ that maximize the ELBO.
At test time, $q_{\phi}(z|x)$ is used for subsequent inference tasks on new data.
Given a set of observations $\{ x_i \}_{i=1}^N$ sampled from data distribution $x\sim p_{d}$, maximizing the expected ELBO is also equivalent to minimizing the KL-divergence $\text{KL}(p_d \parallel p_{\theta})$ between the empirical and model distributions.
When $p_{\theta}(x|z)$ and $q_{\phi}(z|x)$ are specified as neural networks, the resulting architecture is more commonly known as the Variational Auto-Encoder (VAE) \citep{kingma2013auto} in the context of computational vision and natural language processing.
\section{VARIATIONAL SURVIVAL INFERENCE}
\label{sec:VSI}
Below we detail the construction of the Variational Survival Inference (VSI) model, which produces predictions of the time-to-event distribution $p_{\theta}(t|x)$ given attributes $x$, with individual uncertainty accounted for in the form of a latent variable $z$ whose distribution is estimated under the VI framework.
Unlike classical survival models, we do not need to specify a parametric form for the baseline distribution, {\it e.g.}, the base hazard $h_0(t)$ in CoxPH \citep{cox1972regression} or the base density $p_0(t)$ in AFT \citep{kalbfleisch2011statistical}.
Instead, we leverage the power of deep neural networks to amortize the learning of the event time and survival distributions, allowing arbitrary (high-order) interactions between the predictors and survival time to be captured.
This overcomes the limitations caused by the restrictive assumptions made in the classical statistical survival analysis frameworks, thus allowing flexible inference of time-to-event distributions.
\subsection{Variational bound of observed events}
We start the discussion with the simplest scenario, that for which there are no censoring events.
Our goal is to maximize the expected $\log$-likelihood $1/N \sum_i \log p_{\theta}(t_i | X_i)$.
To model the conditional likelihood, we consider a latent variable model of the form $p_{\theta}(t,z|x)$.
The unconditional formulation of the ELBO in \eqref{eq:elbo} can be readily generalized to event times conditioned on covariates as
\begin{equation}\label{eq:elbo_joint}
\text{ELBO}(t|x) = \mathbb{E}_{Z\sim q_{\phi}(z|x,t)}\left[ \log \frac{p_{\theta}(t,Z|x)}{q_{\phi}(Z|x,t)} \right],
\end{equation}
where $q_{\phi}(z|x,t)$ denotes the conditional posterior approximation to the true (unknown) $p_{\theta}(z|x,t)$.
In particular, we assume a model distribution with the following decomposition
\begin{equation}\label{eq:cond}
p_{\theta}(t,z|x) = p_{\theta}(t|z,x) p_{\theta}(z|x) = p_{\theta}(t|z) p_{\theta}(z|x),
\end{equation}
which posits that $z$ is a sufficient statistic of $x$ w.r.t. survival time $t$.
Another key assumption is that, unlike in the standard variational inference model, we use a learnable inhomogeneous prior $p_{\theta}(z|x)$ for the latent $z$ in place of the standard fixed homogeneous prior $p(z)$.
Such a covariate-dependent prior formulation allows the model to account for individual variation, thus further helping to close the variational gap \citep{tomczak2017vae}.
Replacing \eqref{eq:cond} into the ELBO expression in \eqref{eq:elbo_joint} results in the usual likelihood and KL decomposition pair
\begin{align}
\begin{aligned}
\text{ELBO}(t|x) & = \mathbb{E}_{Z\sim q_{\phi}(z|x,t)}\left[ \log p_{\theta}(t|Z) \right] \\[10pt]
& - \text{KL}(q_{\phi}(z|x,t) \parallel p_{\theta}(z|x)),
\end{aligned}
\end{align}
from which we can see that maximizing the ELBO is equivalent to estimating the parameters of a probabilistic time-to-event model $p_{\theta}(t|z)p_{\theta}(z|x)$ by maximum likelihood, such that the inhomogeneous prior $p_{\theta}(z|x)$ matches as well as possible a conditional posterior that explicitly accounts for event times, $q_{\phi}(z|x,t)$.
At test time, only $p_{\theta}(z|x)$ will be used to make predictions provided that $t$ is not available during inference.
More specifically, $p_{\theta}(t|z)$, $p_{\theta}(z|x)$ and $q_{\phi}(z|x,t)$ are defined as neural networks
\begin{align}
\begin{aligned}
p_{\theta}(t|z) & = {\rm Softmax}(g(z;\theta)), \\
p_{\theta}(z|x) & = {\mathcal N}( \mu_p(x;\theta), \Sigma_p(x;\theta)), \\
q_{\phi}(z|x,t) & = {\mathcal N}( \mu_q(x,t;\phi), \Sigma_q(x,t;\phi) ) ,
\end{aligned}
\end{align}
where $p_{\theta}(t|z)$ is represented on a discretized time line (see below for details), $g(z;\theta)$, $\mu_p(x;\theta)$, $\Sigma_p(x;\theta)$ and $\mu_q(x,t;\phi)$, $\Sigma_q(x,t;\phi)$ are deep neural nets parameterized by model parameters $\theta$ and variational parameters $\phi$, and ${\mathcal N} (\mu, \Sigma)$ denotes the multivariate Gaussian with mean $\mu$ and (diagonal) covariance $\Sigma$.
For standard tabular data, we use Multi Layer Perceptrons (MLPs) to specify these functions.
\subsection{Variational bound of censored events}
Addressing censoring in the formulation is more challenging, as this type of {\em partial observation} is not subsumed by the conventional VI framework.
To address this difficulty, we recall that in likelihood-based survival analysis, the likelihood function for censored observations is given by $\log S_{\theta}(t|x)$, where $S_{\theta}(t|x)$ is the survival function and $t$ is the censoring time.
For censored observations $Y$ with $\delta=0$, we do not have the exact event time $t$.
This means that we only have partial information of the events, in that the event should happen only after the censoring time $t$.
To derive a tractable objective for censored observations, we first expand $\mathcal{L}_c(x, t) = \log S_{\theta}(t|x)$ based on its definition and an application of Fubini's theorem \citep{resnick2003probability} and Jensen's inequality, {\it i.e.},
\begin{align*}
\mathcal{L}_c(x, t) & = \text{log }S_{\theta}(t|x)
= \text{log } \int_{t}^{\infty}p_{\theta}(t|x) dt \\
& \geq \mathbb{E}_{q_{\phi}(z|t,x)}\left[\text{log }{\frac{p_{\theta}(z|x)}{q_{\phi}(z|t,x)}} + \text{log }S_{\theta}(t|z)\right] \\
& = \mathbb{E}_{q_{\phi}(z|t,x)}[\text{log }S_{\theta}(t|z)] \\
& \hspace{15mm} - {\rm KL}(q_{\phi}(z|t,x)||p_{\theta}(z|x)) \\
&\triangleq\text{ELBO}_c(t|x)
\end{align*}
where the censored log-likelihood bound $\text{ELBO}_c(t|x)$ is only evaluated on $D_c$, {\it i.e.}, the subset of censored observations.
See Supplementary Materials for the full derivation of $\text{ELBO}_c(t|x)$.
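For intuition, a single-sample Monte Carlo term of $\text{ELBO}_c$ under the discretized-time model can be sketched as follows; the event probabilities, censoring bin, and KL value below are hypothetical placeholders rather than outputs of the actual networks:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 20                                      # number of discretized time bins
logits = rng.normal(size=M)
p = np.exp(logits) / np.exp(logits).sum()   # p_theta(t | z) for one draw of z
t_bin = 7                                   # censoring time falls in bin 7
log_S = np.log(p[t_bin + 1:].sum())         # log S(t | z): mass after censoring
kl = 0.03                                   # KL(q || p_theta(z|x)), placeholder value
elbo_c = log_S - kl                         # single-z estimate of ELBO_c
```

By construction this term lower-bounds the censored log-likelihood, since the KL term is non-negative.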
\subsection{Implementing VSI}
In the current instantiation of the model, we discretize time into $M$ bins spanning the time horizon of the (training) data.
This means that (at inference) $t$ is only known up to the time bin it falls into.
We note this is not a restrictive assumption, as much survival data is only recorded up to a certain temporal accuracy.
That said, the generalization to continuous observations is fairly straightforward.
For datasets that do have a natural discretization, we leave the choice to the user.
In this study, we partition the temporal index based on the percentiles of observed event time, while also allowing for an artificial $(M+1)$-th bin to account for event times beyond the full observation window, {\it i.e.}, events happening after the end-of-study as observed in the training cohort.
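The percentile-based partition can be sketched as follows; the exponential event times are synthetic, and the extra $(M+1)$-th out-of-window bin would simply add one more edge at the end of the training observation window:

```python
import numpy as np

def percentile_bins(event_times, M):
    # Interior bin edges at equal-probability percentiles of the observed
    # event times, giving M roughly equally populated bins.
    return np.percentile(event_times, np.linspace(0, 100, M + 1)[1:-1])

rng = np.random.default_rng(2)
t = rng.exponential(size=1000)        # synthetic observed event times
edges = percentile_bins(t, M=10)      # 9 interior edges for M = 10 bins
bins = np.digitize(t, edges)          # bin index in {0, ..., 9}
counts = np.bincount(bins, minlength=10)
print(counts)                         # roughly 100 events per bin
```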
Since both $p_{\theta}(z|x)$ and $q_{\phi}(z|x,t)$ are assumed to be Gaussian,
the following closed-form expression can be used in the computation of the KL terms above
\begin{equation}
\begin{array}{c}
\hspace{-3em}{\rm KL}(q_{\phi}(z|x,t) \parallel p_{\theta}(z|x)) = \frac{1}{2} \left\{ \text{tr}\left( \Sigma_p^{-1} \Sigma_q \right) + \right.\\
[5pt]
\hspace{6em} \left.\left( \mu_p - \mu_q \right)^T \Sigma_p^{-1} \left( \mu_p - \mu_q \right) - m + \log \frac{\det(\Sigma_p)}{ \det(\Sigma_q)}\right\}.
\end{array}
\end{equation}
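With diagonal covariances, the closed form reduces to a sum over coordinates. A short check of this implementation against two known values (a sketch, independent of the VSI networks):

```python
import numpy as np

def gauss_kl(mu_q, var_q, mu_p, var_p):
    # KL(q || p) for diagonal Gaussians, matching the closed form:
    # 1/2 [ tr(Sp^-1 Sq) + (mp-mq)^T Sp^-1 (mp-mq) - m + log det(Sp)/det(Sq) ]
    return 0.5 * np.sum(var_q / var_p + (mu_p - mu_q) ** 2 / var_p
                        - 1.0 + np.log(var_p / var_q))

# KL of a distribution with itself is zero.
assert abs(gauss_kl(np.ones(3), 2 * np.ones(3),
                    np.ones(3), 2 * np.ones(3))) < 1e-12
# 1-D check: KL(N(0,1) || N(1,1)) = 1/2.
assert abs(gauss_kl(np.zeros(1), np.ones(1),
                    np.ones(1), np.ones(1)) - 0.5) < 1e-12
```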
Following \citet{ranganath2014black}, we use diagonal covariance matrices and apply the reparameterization trick to facilitate stable differentiable learning.
In order to compute the term $S_{\theta}(t|x)$, we use the discretized time scheme previously described, and sum up all predicted probabilities subsequent to bin $t$.
Note that this can be readily generalized to continuous time models.
So long as the cumulative distribution of $p_{\theta}(t|z)$ enjoys a closed form expression, a numerical integration scheme is not necessary to implement VSI.
\subsection{Importance-Weighted estimator for likelihood evaluation}
For evaluation purposes, we need to be able to compute the model's log-likelihood for an observation $Y = (x_i,t_i,\delta_i)$, {\it i.e.},
\begin{equation}
\mathcal{L}_{\rm VSI}(x_i,t_i;\theta) = \delta_i\log p_\theta(t_i|x_i) \\
+ (1-\delta_i)\log S_\theta(t_i|x_i).
\end{equation}
In this study, we use the importance-weighted (IW) estimator \citep{burda2015importance}, which provides a tighter bound on the $\log$-likelihood. While more sophisticated alternatives might provide sharper estimates \citep{neal2001annealed}, we deem the IW estimator sufficient for the scope of this study. Additionally, while the tighter bound can be repurposed for training, it does not necessarily result in improved performance \citep{rainforth2018tighter}, which we find to be the case in this study.
To obtain a more accurate value of the likelihood, we use the approximate posterior as our proposal, and use the following finite sample estimate
\begin{equation*}
\begin{split}
\hat{p}_\theta(t_i|x_i) &= \int \frac{p_{\theta}(t_i|z)p_{\theta}(z|x_i)}{q_{\phi}(z|t_i,x_i)}q_{\phi}(z|t_i,x_i) dz \\
& \approx \frac{1}{L}\sum_{l=1}^L \frac{p_{\theta}(t_i|z_l)p_{\theta}(z_l|x_i)}{q_{\phi}(z_l|t_i,x_i)},
\end{split}
\end{equation*}
where $L$ is the number of samples.
The log-likelihood for the corresponding conditional survival function is
\begin{equation*}
\begin{split}
\hat{S}_\theta(t_i|x_i) &= \int_{t>t_i}\int \frac{p_{\theta}(t|z)p_{\theta}(z|x_i)}{q_{\phi}(z|t_i,x_i)}q_{\phi}(z|t_i,x_i) dz dt\\
& \approx \frac{1}{L}\sum_{l=1}^L \frac{ \int_{t>t_i} p_{\theta}(t,z_l|x_i)dt}{q_{\phi}(z_l|t_i,x_i)}
\end{split}
\end{equation*}
Note that by nature of Jensen's inequality, the resulting estimate will be an under-estimate of the true $\log$-likelihood. As $L$ goes to infinity, the approximate lower bound converges to the true $\log$-likelihood.
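The behavior of the IW estimator can be illustrated on a toy Gaussian model where the marginal is available in closed form; all densities below are stand-ins for the VSI networks, chosen only so the estimator can be checked against the exact answer:

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

rng = np.random.default_rng(3)
t = 1.2
# Toy model: p(z|x) = N(0,1), p(t|z) = N(z,1), so p(t|x) = N(0,2) exactly.
L = 200_000
z = rng.normal(t / 2, np.sqrt(0.7), size=L)   # draws from a proposal q(z|t,x)
w = (normal_pdf(t, z, 1.0) * normal_pdf(z, 0.0, 1.0)
     / normal_pdf(z, t / 2, 0.7))             # importance weights p(t|z)p(z|x)/q
est = w.mean()                                # IW estimate of p(t|x)
exact = normal_pdf(t, 0.0, 2.0)
print(est, exact)                             # estimate approaches the exact marginal
```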
\subsection{Making Predictions}
\label{sec:wtaverage}
{\bf Predictive time-to-event distribution}
During inference, given a new data point with $x_*$, according to the generative model $p_{\theta}(t|x_*) = \int p_{\theta}(t,z|x_*)dz = \int p_{\theta}(t|z)p_{\theta}(z|x_*)dz$, where the integration is conducted numerically by Monte Carlo sampling.
{\bf Point estimation of time-to-event}
To better exploit the learned approximate posterior $q_{\phi}(z|x,t)$, we generalize the importance sampling idea and summarize the time-to-event with a weighted average, rather than with a summary statistic such as the median or mean.
Specifically, consider multiple samples of $t_*^{(l)} \sim p_{\theta}(t|x_*)$, then calculate a weighted average as
\begin{equation}\label{eq:wt}
\begin{array}{c}
t_* = \frac{\sum_{l=1}^L w_*^{(l)} t_*^{(l)}}{\sum_{l=1}^L w_*^{(l)}},\, w_*^{(l)} = \frac{p_{\theta}(z_l|x_*)}{q_{\phi}(z_l|t_*^{(l)},x_*)}, \\
[10pt]
t_*^{(l)} \sim p_{\theta}(t|x_*), \,\,\, z_l \sim q_{\phi}(z|t_*^{(l)},x_*).
\end{array}
\end{equation}
In the Supplementary Materials we show that \eqref{eq:wt} gives better model performance for point-estimate-based evaluation metrics, Concordance Index in particular, compared to other popular summary statistics such as the median of $t_* \sim p_{\theta}(t|x_*)$ computed from $L$ empirical samples.
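A minimal NumPy sketch of the weighted-average summary in \eqref{eq:wt}, on a hypothetical Gaussian toy model. For simplicity the proposal here ignores $t$ (a valid, if crude, choice of $q$); the weights retain the prescribed form $p_{\theta}(z_l|x_*)/q_{\phi}(z_l|t_*^{(l)},x_*)$, and the weighted average should recover the predictive mean (here $0$):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
L = 4000

# Hypothetical toy model: z ~ p(z|x_*) = N(0,1), t | z ~ N(z,1),
# so the predictive mean E[t | x_*] is 0.
z_gen = rng.normal(0.0, 1.0, L)
t_l = rng.normal(z_gen, 1.0)            # t_*^{(l)} ~ p(t | x_*)

# Hypothetical overdispersed proposal q(z|t,x_*) = N(0, 1.5^2); the
# resulting importance weights are bounded by 1.5.
z_l = rng.normal(0.0, 1.5, L)           # z_l ~ q(z | t_*^{(l)}, x_*)
w = normal_pdf(z_l, 0.0, 1.0) / normal_pdf(z_l, 0.0, 1.5)

t_star = np.sum(w * t_l) / np.sum(w)    # weighted-average point estimate
```

In practice the learned $q_{\phi}(z|t,x)$ plays the role of the proposal, concentrating samples where the posterior has mass.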
\section{DISSECTING VSI}
\label{sec:baselines}
In the experiments, we show the effectiveness of the proposed VSI model in recovering underlying time-to-event distributions.
To provide additional insight into the differentiating components of the VSI model, we consider two baseline models that partially adopt a VSI design, as detailed below.
\vspace{3pt}
{\bf VSI without a $q_\phi$ arm (VSI-NoQ)}
In VSI, we use the variational lower bound to maximize the likelihood in survival studies by implicitly forcing the unknown intractable model posterior $p_\theta(z|x)$ to be close to the tractable posterior approximation $q_{\phi}(z|x, t)$.
Via the KL divergence minimization, such matching allows the model to better account for the interactions between covariates $x$ and event times $t$ captured by $q_{\phi}(z|x,t)$, better informing the construction of the latent representation $z$ by isolating the individual uncertainty encoded by $p_{\theta}(z|x)$.
If we exclude the event time $t$ from $q_{\phi}$ and condition on $x$ alone, {\it i.e.}, with the approximate posterior given by $q_{\phi}(z|x)$, through the same stochastic latent representation $z$, then naturally the optimal solution is to equate $q_{\phi}(z|x)$ with the prior $p_{\theta}(z|x)$\footnote{Based on a KL-vanishing argument.}. This essentially eliminates $q_{\phi}$ from our formulation, and we therefore call this variant VSI-NoQ.
More specifically, without a $q_{\phi}$ arm the model described in Section \ref{sec:VSI} essentially becomes a feed-forward model with a special stochastic hidden layer $z$.
In this case, the model likelihood is given by $p_{\theta}(t|x) = \int p_{\theta}(t,z|x)dz = \int p_{\theta}(t|z)p_{\theta}(z|x)dz$, where $p_{\theta}(t|z)$ and $p_{\theta}(z|x)$ are defined as in \eqref{eq:cond}.
Note that the only difference with VSI is the lack of the KL divergence term to match $p_{\theta}(z|x)$ to $q_{\phi}(z|x, t)$.
This baseline model (VSI-NoQ) is considered in order to dissect the impact of excluding the complex interactions between covariates and event times when constructing the individualized priors.
\vspace{3pt}
{\bf Deterministic feed-forward model (MLP)} To understand the importance of the stochastic latent representation $z$, we consider a straightforward baseline which directly predicts the event time distribution from the input $x$, {\it i.e.}, $p_{\theta}(\cdot|x) = \rm{Softmax}(g_{\theta}(x))$, which is essentially a standard multinomial regression with censored observations. In our study, we use an MLP to implement $g_{\theta}(x)$; as such, hereafter we refer to this model as MLP. Additionally, we also considered standard randomization schemes, such as dropout \citep{srivastava2014dropout}, in the construction of a stochastic neural net. Such a strategy also incorporates randomness, but differs fundamentally from the modeled uncertainty exploited by our VSI scheme. In the experiments, we report the best results from the MLP with or without dropout.
These baseline approaches use feed-forward deep networks to learn $p_\theta(t|x)$ without invoking variational inference.
In the experiments we will show that variational inference is crucial to the accurate learning of time-to-event distributions, resulting in better performance relative to these baselines, especially when the proportion of censored observations is high.
\section{RELATED WORK}
\vspace{3pt}
{\bf Machine learning and survival analysis} Early attempts at combining machine learning techniques with statistical survival analysis, such as the Faraggi-Simon network (FS-network) \citep{faraggi1995neural}, often failed to demonstrate a clear advantage over classical baselines \citep{schwarzer2000misuses}. Recent progress in machine learning allows researchers to overcome the difficulties suffered by prior studies. For example, \citet{katzman2018deepsurv} showed that weight decay, batch normalization and dropout significantly improve the performance of the FS-network. \citet{li2019deep} analyzed survival curves based on clinical images using deep convolutional neural nets (CNNs). In addition to deep nets, \citet{fernandez2016gaussian} showed that Gaussian Processes can be used to effectively capture the non-linear variations in CoxPH models, and \citet{alaa2017deep} further proposed a variant that handles competing risks. Similar to these works, our VSI also draws power from recent advances in machine learning to define a flexible learner.
\vspace{3pt}
{\bf Bayesian survival analysis} Bayesian treatment of survival models has a long history. \citet{raftery1996accounting} first considered modeling uncertainties for survival data, and \citet{zupan1999machine} reported probabilistic analysis under a Bayesian setup. More recently, \citet{fard2016bayesian} exploited the Bayesian framework to extrapolate priors, and \citet{zhang2018nonparametric} described a Bayesian treatment of competing risks. Closest to VSI is the {\it deep exponential family} (DEF) survival model \citep{ranganath2016deep}, where the authors introduced a Bayesian latent variable model for both the predictors $x$ and the survival time $t$. Unlike our VSI, DEF still imposes strong parametric assumptions on the survival distribution, and it is not clear how the censored observations are handled in DEF's actual implementation. Another key difference between DEF and VSI is the factorization of the joint likelihood: the VSI encoder only seeks to capture the latent components that are predictive of the survival time distribution, while the DEF encoder also needs to summarize the information required to reconstruct the covariates $x$. We argue that our VSI factorization of the joint probability is more sensible for survival time modeling, because modeling $x$ not only adds model complexity but also introduces nuisance to the prediction of survival time $t$. For datasets with large covariate dimensions and noisy observations, the DEF features can be dominated by the ones predictive of $x$ rather than $t$, compromising the main goal of modeling the survival distribution.
\vspace{3pt}
{\bf Individual uncertainties and randomization} The seminal work of \citet{aalen1994effects} first identified the importance of accounting for individual uncertainties, the main culprit behind the failure of classical survival models, which can be remedied by explicitly modeling the random effects \citep{hougaard1995frailty}. Alternatively, \citet{ishwaran2008random} presented {\it Random Survival Trees} (RST) to predict cumulative hazards using a tree ensemble, demonstrating the effectiveness of a randomization scheme for statistical survival models. Our approach differs from the above schemes by systematically accounting for individual uncertainty using the randomness of latent variables.
\vspace{3pt}
{\bf Direct modeling of survival distribution} The pioneering work of \citet{yu2011learning} advocated the prediction of individual survival distributions, learned via a generalized logistic regression scheme. This idea was further generalized in the works of \citet{luck2017deep} and \citet{fotso2018deep}. Recently, \citet{chapfuwa2018adversarial} explored the use of a deep {\it Generative Adversarial Network} (GAN) to capture the individual survival distribution, which is closest to our goal. Compared to the proposed VSI, the adversarial learning of survival distributions is largely unstable, and its success crucially relies on the use of {\it ad-hoc} regularizers.
\section{EXPERIMENTS}
To validate the effectiveness of the proposed VSI, we benchmarked its performance against the following representative examples from both statistical and machine learning survival analysis schemes: AFT-Weibull, CoxPH, LASSO-based CoxNet \citep{simon2011regularization}, Random Survival Forest (RSF) \citep{ishwaran2008random} and deep learning based DeepSurv \citep{katzman2018deepsurv}.
To fully appreciate the gains from using a variational setup, we further compared the results with the baselines discussed in Section \ref{sec:baselines}, namely, the feed-forward model (MLP) and VSI model without the backward encoding arm $q_{\phi}(z|t,x)$ (VSI-NoQ).
For data preparation, we randomly partition the data into three non-overlapping sets for training (60\%), validation (20\%) and evaluation (20\%) purposes. All models are trained on the training set, and we tune the model hyper-parameters with respect to the out-of-sample performance on the validation set. The results reported in the paper are based on the evaluation set, using the best-performing hyper-parameters determined on the validation set. We apply the ADAM optimizer with a learning rate of $5\times 10^{-4}$ during training, with mini-batches of size $100$. An early stopping criterion of no improvement on the validation set is enforced.
To ensure fair comparisons, all deep-learning based solutions are matched for the number of parameters, with similar model architectures and hyper-parameter settings.
TensorFlow code to replicate our experiments can be found at \url{https://github.com/ZidiXiu/VSI/}.
The details of the VSI model setup are relegated to the Supplementary Materials (SM).
\subsection{Evaluation Metrics}
To objectively evaluate these competing survival models, we report a comprehensive set of distribution-based and point-estimate based scores to assess model performance, as detailed below.
\vspace{3pt}
{\bf Concordance Index} (C-Index) is commonly used to evaluate the consistency between the model predicted risk scores and observed event rankings \citep{harrell1982evaluating}. Formally, it is defined as
\begin{equation*}
\text{C-Index} = \frac{1}{|\mathcal{E}|} \sum_{(i,j)\in\mathcal{E}} \mathbb{1}_{f(x_i)>f(x_j)},
\end{equation*}
where $\mathcal{E}=\{(i,j) : t_i < t_j,\, \delta_i=1 \}$ is the set of all valid ordered pairs (event $i$ before event $j$) and $f(x)$ is a scalar prediction made by the model. Higher is better.
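For concreteness, the estimator can be implemented directly from the definition; the quadratic-time loop below is a sketch (production code would typically vectorize this or use a survival-analysis library):

```python
def c_index(times, events, scores):
    """Harrell's C-Index: the fraction of valid ordered pairs
    (t_i < t_j with event i observed, delta_i = 1) on which the
    risk score ranks subject i above subject j."""
    num, den = 0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:          # pairs are anchored on observed events only
            continue
        for j in range(n):
            if times[i] < times[j]:
                den += 1
                num += scores[i] > scores[j]
    return num / den

# Perfect risk ordering (higher score = earlier event) gives C-Index 1.
print(c_index([1, 2, 3, 4], [1, 1, 1, 1], [4.0, 3.0, 2.0, 1.0]))  # -> 1.0
```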
\vspace{3pt}
{\bf Time-dependent Concordance Index} is a distributional generalization of the scalar-risk-score-based C-Index \citep{antolini2005time}, computed from the predicted survival distribution. Formally it is given by
\begin{equation*}
\mathcal{C}^{\text{td}} = P(\hat{F}(t_i|x_i)>\hat{F}(t_i|x_j)|t_i<t_j),
\end{equation*}
where $\hat{F}$ denotes the model predicted cumulative distribution function.
We report the results using the following empirical estimator
\begin{equation*}
\hat{\mathcal{C}}^{\text{td}} = \frac{1}{|\mathcal{E}|}\sum_{(i,j)\in \mathcal{E}}\mathbb{1}_{\hat{F}(t_i|x_i)>\hat{F}(t_i|x_j)}.
\end{equation*}
\vspace{3pt}
{\bf Kolmogorov-Smirnov (KS) distance} For synthetic datasets, we also report the KS distance \citep{massey1951kolmogorov} between the predicted distribution and the ground truth. KS computes the maximal discrepancy between two cumulative densities, {\it i.e.},
\begin{equation*}\text{KS} = \text{sup}_t |F_1(t)-F_2(t)|,
\end{equation*}
and a lower KS indicates a better match between the two distributions.
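As an illustration, the KS distance between two predicted CDFs can be approximated on a dense grid; the exponential pair below is a hypothetical example with a known analytic answer (the gap $e^{-t}-e^{-2t}$ is maximized at $t=\ln 2$, where it equals $0.25$):

```python
import numpy as np

def ks_distance(F1, F2, grid):
    """Sup-norm distance between two CDFs, approximated on a grid."""
    return np.max(np.abs(F1(grid) - F2(grid)))

grid = np.linspace(0.0, 5.0, 100_001)
ks = ks_distance(lambda t: 1.0 - np.exp(-t),        # Exp(1) CDF
                 lambda t: 1.0 - np.exp(-2.0 * t),  # Exp(2) CDF
                 grid)
# analytic supremum: attained at t = ln 2, value 1/4
```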
\vspace{3pt}
{\bf Test log-likelihood} We also report the average $\log$-likelihood on the held-out test set. A higher score indicates the model is better aligned with the ground-truth distribution in the sense of KL-divergence. Additionally, we also evaluate the spread of the empirical likelihood across models. In the case of a tie in expected $\log$-likelihood, models with more concentrated $\log$-likelihoods are considered better under the maximum entropy principle \citep{chen2018variational} ({\it i.e.}, when observed instances receive more uniform/similar likelihoods, better generalization is implied).
\vspace{3pt}
{\bf Coverage Rate}
To quantify the proportion of observed times covered by the predicted personalized time-to-event distributions, we calculate the coverage rate for different percentile ranges. For subjects with observed events, the coverage rate is defined as the proportion of observations falling in the percentile range $[l, u]$ of the predicted distributions, where $l$ and $u$ respectively denote the lower and upper quantiles of the range, {\it i.e.},
\begin{equation*}
\text{Cover Rate}_{\text{events}}(l,u)=\frac{1}{n_e}\sum_{y_i \in \mathcal{D}_{e}}\mathrm{I}(l<t_i<u)
\end{equation*}
In our experiments, we report coverage rates of events at percentile range $[l,u] \in \{ [0.05, 0.95]$, $[0.1, 0.9]$, $[0.15, 0.85]$, $[0.2, 0.8]$, $[0.25, 0.75]$, $[0.3, 0.7]$, $[0.35, 0.65]$, $[0.4, 0.6]$, $[0.45, 0.55]\}$ of the predicted personalized distributions.
For censored subjects, we calculate the proportion of censoring times occurring before the percentiles of the predicted range, since the true time-to-event for a censored subject occurs after the censoring time,
\begin{equation*}
\text{Cover Rate}_{\text{censor}}(l)=\frac{1}{n_c}\sum_{y_i \in \mathcal{D}_{c}}\mathrm{I}(t_i\le l)
\end{equation*}
We evaluated the coverage rate for censoring at $l \in\{ 0.1, 0.2, \cdots, 0.9 \}$ percentiles.
For all coverage rates, a higher score implies better performance. Coverage rates for events and censoring should be considered together to evaluate model performance.
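The two coverage rates are straightforward empirical proportions; a sketch with hypothetical subject-specific predicted quantiles:

```python
import numpy as np

def event_coverage(t_obs, lower, upper):
    """Fraction of observed event times inside the subject-specific
    predicted quantile interval (l_i, u_i)."""
    t_obs, lower, upper = map(np.asarray, (t_obs, lower, upper))
    return float(np.mean((lower < t_obs) & (t_obs < upper)))

def censor_coverage(t_cens, perc_l):
    """Fraction of censoring times occurring at or before the predicted
    l-th percentile value (the true event happens after censoring)."""
    t_cens, perc_l = map(np.asarray, (t_cens, perc_l))
    return float(np.mean(t_cens <= perc_l))

# Hypothetical predicted quantiles for four subjects with observed events:
cov_e = event_coverage([1.0, 2.0, 3.0, 4.0],
                       [0.5, 1.5, 2.5, 4.5],   # lower quantile values
                       [2.0, 3.0, 4.0, 6.0])   # upper quantile values
print(cov_e)  # -> 0.75 (the last event time falls outside its interval)
```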
\subsection{Synthetic datasets}
Following \citet{bender2005generating}, we simulate realistic survival data based on the German Uranium Miners Cohort Study in accordance with the Cox-Gompertz model
$$T = \frac{1}{\alpha}\text{log}\left[1-\frac{\alpha\text{log}(U)}{\lambda \text{exp}(\beta_{\text{age}}\times \text{AGE} + \beta_{\text{radon}}\times \text{RADON})}\right],$$
with $U \sim \text{Unif}[0,1]$.
This model simulates the cancer mortality associated with radon exposure and age. Model parameters are derived from real data: $\alpha=0.2138$, $\lambda=7\times 10^{-8}$, $\beta_{\text{age}} =0.15$ and $\beta_{\text{radon}}=0.001$. Covariates are generated according to
\begin{equation*}
{\rm AGE}\sim {\mathcal N}(24.3, (8.4)^2), \, {\rm RADON}\sim{\mathcal N}(266.8, (507.8)^2),
\end{equation*}
where $\mathcal{N}(\mu,\sigma^2)$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$. We simulate uniform censoring within a fixed time horizon $c$, {\it i.e.}, we let $C_i \sim \text{Unif}(0,c)$, then $\delta_i=\mathbb{1}(T_i < C_i)$ and $T_i=C_i$ if $C_i < T_i$. By setting different upper bounds $c$ for censoring, we achieve different observed event rates: $100\%$ ($c=\infty$), $50\%$ ($c=100$) and $30\%$ ($c=70$).
For each simulation we randomly draw $N=50k$ iid samples.
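The generative process above can be reproduced in a few lines of NumPy; the snippet below is a sketch of the Cox-Gompertz inverse-CDF simulation with uniform censoring (the random seed, and hence the exact realized event rate, is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000
alpha, lam = 0.2138, 7e-8
b_age, b_radon = 0.15, 0.001

age = rng.normal(24.3, 8.4, N)
radon = rng.normal(266.8, 507.8, N)
U = rng.uniform(size=N)

# Cox-Gompertz event times via the inverse-CDF formula in the text
risk = lam * np.exp(b_age * age + b_radon * radon)
T = np.log(1.0 - alpha * np.log(U) / risk) / alpha

# uniform censoring with horizon c = 70 (targets roughly 30% observed events)
c = 70.0
C = rng.uniform(0.0, c, N)
delta = (T < C).astype(int)        # 1 if the event is observed
t_obs = np.minimum(T, C)           # recorded time
```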
\begin{figure}[t!]
\centering
\begin{tabular}{c@{}c}
\subfloat[$\delta=1$ (observed)]{\includegraphics[width=0.5\linewidth]{plots/simu_event_er30.pdf}} & \subfloat[$\delta=0$ (censored)]{\includegraphics[width=0.5\linewidth]{plots/simu_censor_er30.pdf}}\\
\end{tabular}
\caption{Two simulated time-to-event distributions with 30\% event rate, showing that VSI successfully predicts the underlying distributions from covariates (left: events; right: censoring).}
\label{fig:simusubj}
\end{figure}
\vspace{3pt}
{\bf Prediction of subject-level distribution} In practice, for each subject we only observe one $t$ from its underlying distribution.
Our goal is to accurately predict the underlying distribution from the covariates $x$ alone (since $t$ and $\delta$ are not observed at test time), by learning from the observed instances.
Figure~\ref{fig:simusubj} compares the VSI prediction with the ground truth for two random subjects, showing that VSI accurately recovers the individual survival distribution for both observed (Figure~\ref{fig:simusubj}(a)) and censored cases (Figure~\ref{fig:simusubj}(b)).
\begin{table}[ht]
\centering
\caption{KS statistic for simulation study.}
\begin{tabular}{lrrr}
\toprule
Event Rate & 100\% & 50\% & 30\% \\ \midrule
CoxPH (Oracle) & 0.027 &0.032 & 0.027\\
[5pt]
AFT-Weibull & 0.057 & 0.058 & 0.068\\
MLP & 0.047 & 0.063 & 0.064 \\
[5pt]
VSI-NoQ & 0.049 & 0.068 & 0.066 \\
VSI & \textbf{0.044} & \textbf{0.052} & \textbf{0.059} \\
\bottomrule
\end{tabular}
\label{Tab:simulationKS}
\vspace{-1em}
\end{table}
To systematically evaluate the consistency between the predicted and the true distributions, we compare average KS distance from models trained with various event rates in Table~\ref{Tab:simulationKS}.
Since the underlying generative process is based on the CoxPH model, we consider the results from CoxPH as the oracle reference, as there is no model mis-specification.
At 100\% event rate ({\it i.e.}, complete observation), apart from the oracle CoxPH, all models perform similarly. The VSI variants give slightly better results compared with MLP and AFT-Weibull.
As the proportion of observed events decreases, VSI remains the best performing model, closely followed by the parametric AFT-Weibull.
Note that neither MLP nor VSI-NoQ matches the performance of VSI, which suggests that the full VSI design better accommodates censored observations.
\begin{table*}[t!]
\centering
\caption{Model performance summary for the simulation study based on $C_{td}$, C-Index and average test log-likelihood. Confidence intervals for C-Index are provided in the SM. For NA entries, the corresponding evaluation metric cannot be applied.}
\begin{tabularx}{\textwidth}{c *{9}{Y}}
\toprule
Models
& \multicolumn{3}{c}{$C_{td}$}
& \multicolumn{3}{c}{C-Index Raw}
& \multicolumn{3}{c}{log-likelihood}\\
\cmidrule(lr){2-4} \cmidrule(l){5-7} \cmidrule(l){8-10}
& 100\% & 50\% & 30\% & 100\% & 50\% & 30\% & 100\% & 50\% & 30\% \\
\midrule
CoxPH & \textbf{0.757} & 0.755 & 0.761 & 0.773 & 0.781 & 0.793 & NA & NA & NA \\
Coxnet & NA & NA & NA & 0.776 & 0.784 & 0.760 & NA & NA & NA \\
AFT-Weibull & 0.742 & 0.750 & 0.768 & 0.773 & 0.781 & 0.793 & -4.43 & -2.29 & -1.47 \\
RSF & 0.631 & 0.638 & 0.608 & 0.701 & 0.718 & 0.712 & -14.12 & -8.02 & -5.35 \\
DeepSurv & NA & NA & NA & 0.772 & 0.781 & 0.793 & NA & NA & NA \\
MLP & 0.744 & 0.751 & 0.770 & 0.772 & 0.781 & 0.793 & \textbf{-4.15} & \textbf{-2.22} & -1.41 \\
[5pt]
VSI-NoQ & 0.748 & 0.749 & 0.763 & 0.772 & 0.781 & 0.793 & -4.16 & \textbf{-2.22} & -1.41 \\
VSI & 0.748 & \textbf{0.756} & \textbf{0.772} & 0.773 & 0.781 & 0.793 & \textbf{-4.15} & \textbf{-2.22} & \textbf{-1.40} \\ \bottomrule
\end{tabularx}
\vspace{-1em}
\label{tab:simubigtable}
\end{table*}
\vspace{3pt}
{\bf Average log-likelihood and C-Index} To validate the effectiveness of VSI, we also provide a comprehensive summary of model performance against other popular or state-of-the-art alternatives in Table~\ref{tab:simubigtable}, under various simulation setups with different evaluation metrics.
VSI consistently outperforms its counterparts in terms of the average log-likelihood and time-dependent C-Index. Together with the observation that VSI also yields better KS distance (see Table~\ref{Tab:simulationKS}), converging evidence suggests our VSI better predicts the individual survival distributions relative to other competing solutions.
We also compare the raw C-Index and the corresponding confidence intervals using the weighted average of the model predicted survival time (defined in Sec~\ref{sec:wtaverage}) as the risk score, and we do not find significant differences between the alternative methods, as shown in Table~\ref{tab:simubigtable} and the Supplementary Materials. Thus VSI delivers comparable performance relative to models that are compatible with the data generating mechanism. The raw C-Index counts concordant pairs without considering the time horizon, so the distinctions among well-performing models are not significant.
To provide a more informative summary, we plot the test log-likelihood distributions for selected models in Figure~\ref{fig:simulikeli}. We can see that the VSI log-likelihood estimates are tighter and higher for both observed and censored observations, especially when the event rate is low. For the $(0.10, 0.90)$ percentile ranges for the simulation studies, please refer to the SM.
\begin{figure}[t!]
\centering
\begin{tabular}{c@{}c}
\subfloat[]{\includegraphics[width=0.5\linewidth]{plots/simu_er50_e_loglikeli_box.pdf}}&
\subfloat[]{\includegraphics[width=0.5\linewidth]{plots/simu_er50_c_loglikeli_box.pdf}}\\
\end{tabular}
\caption{Test log-likelihood distributions for the 50\% event rate simulation dataset (left: events; right: censoring).}
\label{fig:simulikeli}
\end{figure}
\vspace{3pt}
{\bf Coverage Plots} In Figure~\ref{fig:simucov30}, VSI achieves relatively high coverage for both event (Figure~\ref{fig:simucov30}(a)) and censored observations (Figure~\ref{fig:simucov30}(b)), compared to the oracle method CoxPH in this synthetic example. Note that while RSF performs better for the observed events, its performance on censored cases falls well below the other solutions.
We refer the readers to our Supplementary Materials for additional simulations and analyses based on toy models.
\begin{figure}[t!]
\centering
\begin{tabular}{c@{}c}
\subfloat[]{\includegraphics[width=0.5\linewidth]{plots/simu_er50_e_coverage.pdf}}&
\subfloat[]{\includegraphics[width=0.5\linewidth]{plots/simu_er50_c_coverage.pdf}}\\
\end{tabular}
\caption{Test coverage rate for the 50\% event rate simulation dataset. (left: events, right: censoring)}
\label{fig:simucov30}
\end{figure}
\subsection{Real-World datasets}
Moving beyond toy simulations, we further compare VSI to competing solutions on the following three real-world datasets:
$i$) \texttt{FLCHAIN} \citep{dispenzieri2012use}: a public dataset used to determine whether elevation in the free light chain assay provides prognostic information for general-population survival,
$ii$) \texttt{SUPPORT} \citep{knaus1995support}: a public dataset from a prospective cohort study to estimate survival of seriously ill hospitalized adults over a 180-day period, and
$iii$) \texttt{SEER} \citep{ries2007seer}: a public dataset aimed at studying cancer survival among adults, containing information from 1988 to 2001, provided by the U.S. Surveillance, Epidemiology, and End Results (SEER) Program.
In these experiments, we use the 10-year follow-up breast cancer subcohort of the \texttt{SEER} dataset.
We follow the data pre-processing steps outlined in \citet{chapfuwa2018adversarial}.
To handle the missing values in data, we adopt the common practice of median imputation for continuous variables and mode imputation for discrete variables.
\begin{table}[h]
\centering
\caption{Summary Statistics for Real Datasets.}
\begin{tabular}{lrrr}
\toprule
& \textsc{FLCHAIN} & \textsc{SUPPORT} & \textsc{SEER} \\ \midrule
$N$ & 7,894 & 9,105 & 68,082 \\
Event rate($\%$) & 27.5 & 68.1 & 51.0 \\
$p$(cat) & 26(21) & 59(31) & 789(771) \\
NaN($\%$) & 2.1 & 12.6 & 23.4 \\
Max event $t$ & $4998_{\text{days}}$ & $1944_{\text{days}}$ & $120_{\text{months}}$ \\
Loss of Info($\%$) & 10.45 & 1.57 & 0.0 \\ \bottomrule
\end{tabular}
\label{tab:realsummary}
\end{table}
Summary statistics of the datasets are shown in Table~\ref{tab:realsummary}, where $N$ is the total number of observations, $p$ denotes the total number of variables after one-hot encoding, NaN($\%$) is the proportion of missingness in the covariates, and Loss of Info is the proportion of censored observations occurring after the maximum event time $t$.
\begin{table*}[t!]
\centering
\caption{Summary for Real Datasets based on C-Index and average log-likelihood. Confidence intervals for C-Index are provided in the SM. NA implies the corresponding evaluation metric cannot be evaluated.}
\begin{tabularx}{\textwidth}{c *{9}{Y}}
\toprule
Models
& \multicolumn{3}{c}{$C_{td}$}
& \multicolumn{3}{c}{C-Index Raw}
& \multicolumn{3}{c}{log-likelihood}\\
\cmidrule(lr){2-4} \cmidrule(l){5-7} \cmidrule(l){8-10}
& \textsc{FLCHAIN} & \textsc{SUPPORT} & \textsc{SEER} & \textsc{FLCHAIN} & \textsc{SUPPORT} & \textsc{SEER} & \textsc{FLCHAIN} & \textsc{SUPPORT} & \textsc{SEER} \\
\midrule
Coxnet & NA & NA & NA & 0.790 & 0.797 & 0.819 & NA & NA & NA \\
AFT-Weibull & 0.777 & 0.752 & NA & 0.792 & 0.797 & NA & -3.09 & -4.39 & NA \\
RSF & NA & NA & NA & 0.771 & 0.751 & 0.796 & NA & NA & NA \\
DeepSurv & NA & NA & NA & 0.785 & 0.678 & NA & NA & NA & NA \\
MLP & 0.775 & 0.768 & 0.821 & 0.751 & 0.811 & 0.811 & -1.91 & -2.86 & -2.50 \\
[5pt]
VSI-NoQ & 0.745 & 0.772 & 0.820 & 0.745 & 0.824 & 0.809 & -2.45 & -2.79 & -2.50 \\
VSI & \textbf{0.787} & \textbf{0.775} & \textbf{0.824} & \textbf{0.792} & \textbf{0.827} & \textbf{0.826} & \textbf{-1.85} & \textbf{-2.74} & \textbf{-2.49} \\ \bottomrule
\end{tabularx}
\label{tab:realbigtable}
\end{table*}
\begin{table}[ht]
\caption{Quantile ranges for $\log$-likelihood in Real Datasets. Note AFT did not converge to reasonable solutions for SEER. }
\begin{tabularx}{\columnwidth}{c *{6}{Y}}
\hline
Models & Observed & & & Censored & & \\
& \textsc{flchain} & \textsc{support} & \textsc{seer} & \textsc{flchain} & \textsc{support} & \textsc{seer} \\ \hline
AFT & 2.491 & 4.706 & NA & {\bf 0.468} & 1.850 & NA \\
MLP & 2.970 & 4.273 & 1.780 & 0.518 & 1.540 & 0.623 \\
VSI-NoQ & 7.34 & 4.744 & 1.801 & 0.559 & 1.634 & 0.529 \\
VSI & {\bf 2.213} & {\bf 4.143} & {\bf 1.718} & 0.537 & {\bf 1.354} & {\bf 0.508} \\ \hline
\end{tabularx}
\label{tab:realqrangelikeli}
\end{table}
In Table~\ref{tab:realbigtable} we compare the C-Indices and average log-likelihoods. The advantage of VSI is more evident on the more challenging real datasets, especially in cases of low observed event rates. For example, on the \textsc{SUPPORT} dataset the VSI confidence interval for the raw C-Index is (0.809, 0.846), while that of the standard CoxNet is only (0.763, 0.805) and that of AFT is (0.782, 0.813), {\it i.e.}, the overlaps with that of VSI are very small. Similar results were observed for the other datasets and baseline solutions. VSI shows remarkable robustness against data incompleteness in a real-world scenario, achieving the best results according to all three metrics. For VSI, the raw C-Index is computed from the weighted average of the VSI predicted distribution; please refer to the SM for more details.
\begin{figure}[ht]
\centering
\begin{tabular}{c@{}c}
\subfloat[]{\includegraphics[width=0.50\linewidth]{plots/support_e_loglikeli_violin.pdf}} &
\subfloat[]{\includegraphics[width=0.50\linewidth]{plots/support_c_loglikeli_violin.pdf}}
\end{tabular}
\caption{$\log$-likelihood distributions for the \texttt{SUPPORT} dataset (left: events; right: censoring).}
\label{fig:reallikeli}
\end{figure}
In Figure \ref{fig:reallikeli}, the distribution of the $\log$-likelihood under VSI is more concentrated, in addition to having a higher mean. To quantitatively evaluate the concentration, we report the difference between the $10\%$ and $90\%$ quantiles of the $\log$-likelihood in Table~\ref{tab:realqrangelikeli}. The quantile ranges of VSI are considerably smaller than those of alternative solutions under most experimental settings. This verifies that VSI enjoys better model robustness compared to other popular alternatives, especially in the case of high censoring rates.
\begin{figure}[ht]
\centering
\begin{tabular}{c@{}c}
\subfloat[]{\includegraphics[width=0.50\linewidth]{plots/support_e_coverage.pdf}} &
\subfloat[]{\includegraphics[width=0.50\linewidth]{plots/support_c_coverage.pdf}}
\end{tabular}
\caption{Coverage rate for \texttt{SUPPORT} Dataset, (left: events, right: censoring)}
\label{fig:realcovsupport}
\vspace{-1em}
\end{figure}
Together with the coverage plots in Figure \ref{fig:realcovsupport}, VSI has relatively high coverage for both events and censored cases, which indicates better performance in capturing the true event times in challenging real-world datasets. The consistency of these results has been verified through repeated runs on the three datasets. For more detailed results please refer to the SM.
\section{CONCLUSIONS}
We presented an approach for learning time-to-event distributions conditioned on covariates in a nonparametric fashion by leveraging a principled variational inference formulation.
The proposed approach, VSI, extends the variational inference framework to survival data with censored observations.
Based on synthetic and diverse real-world datasets, we demonstrated the ability of VSI to recover the underlying unobserved time-to-event distribution, as well as to provide point estimates of time-to-event that yield excellent performance metrics, consistently outperforming feed-forward deep learning models and traditional statistical models.
As future work, we plan to extend our VSI framework to longitudinal studies, where we can employ a recurrent neural net (RNN) to account for the temporal dependencies. For datasets with observations made at irregular intervals, for instance, the Neural-ODE model \citep{chen2018neural} can be applied. Our work can be also adapted to make dynamic predictions of event times to serve the needs of modern clinical practices.
\subsubsection*{Acknowledgements}
The authors would like to thank the anonymous reviewers for their insightful comments. This research was supported in part by NIH/NIBIB R01-EB025020.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
When a spin polarized current enters a ferromagnetic layer, there is generally a transfer of spin angular momentum between the conduction electrons and the magnetization of the ferromagnet. This was first proposed by Slonczewski\cite{slon} and Berger\cite{Ber} in 1996 as a novel mechanism for switching the magnetization of a ferromagnet by a spin polarized current. It was experimentally realized in spin-valve trilayers in 2000\cite{kat}. Since then, spin transfer torque has been investigated in various magnetic nanostructures\cite{Zha,Baek}. In a spin valve, when electric current passes through a fixed magnetic layer, it becomes spin polarized along the direction of the magnetic moment of the fixed magnetic layer. After passing through a nonmagnetic metal layer, the current enters the free magnetic layer and polarizes along the magnetization direction of the free magnetic layer. When the magnetic moments of the two magnetic layers are not parallel or anti-parallel, the free magnetic layer can absorb the spin polarized current\cite{Stil}. Due to this absorption, some angular momentum can be transferred to the free layer. Thus, a torque arises on the magnetic moment of the free layer which can cause the switching of the free layer's magnetization. This torque is generally described as a non-equilibrium spin transfer torque since a voltage bias is needed to drive it. The spin transfer torque can also arise in an equilibrium situation, in the absence of a voltage bias, as in a Josephson junction.\par
In ferromagnetic Josephson junctions\cite{buz,mey}, the Josephson super-current induces an equilibrium spin transfer torque due to the misaligned magnetic moments of the ferromagnetic layers\cite{Wain}, which is proportional to the sine of the difference in the magnetization directions of the two ferromagnets. If $\mathcal{F}$ is the free energy of the superconductor-ferromagnet-normal metal-ferromagnet-superconductor ($SF_{1}NF_{2}S$) junction and $\theta$ is the angle between the magnetic moments of the ferromagnets, then the equilibrium spin transfer torque is defined as\cite{Wain} $\tau^{eq}=\frac{\partial \mathcal{F}}{\partial \theta}$,
\begin{figure}[h]
\vskip -0.16 in
\centering{\includegraphics[width=.9\linewidth]{Figspin1.pdf}}
\caption{\small\sl Conventional mechanism of the equilibrium spin transfer torque in a superconductor-ferromagnet-normal metal-ferromagnet-superconductor junction. (a) Magnetic moments of the ferromagnets are misaligned ($\theta_{1} \neq \theta_{2}$).
The equilibrium spin transfer torque $\tau^{eq} \propto \sin (\theta_{1}-\theta_{2})$ and points perpendicular to the plane spanned by the two magnetic moments of the ferromagnets. (b) Magnetic moments of the ferromagnets are aligned ($\theta_1 = \theta_2 $); $\tau^{eq}=0$: the equilibrium spin transfer torque vanishes.}
\end{figure}
with the Josephson super-current\cite{deG}, $I=\frac{2e}{\hbar}\frac{\partial \mathcal{F}}{\partial \varphi}$, where $\varphi$ is the phase difference between the two superconductors. Since the mixed partial derivatives of $\mathcal{F}$ with respect to $\theta$ and $\varphi$ are equal, $\frac{\partial I}{\partial \theta}=\frac{2e}{\hbar}\frac{\partial \tau^{eq}}{\partial \varphi}$,
which relates the Josephson current to the equilibrium spin transfer torque. The Josephson super-current, as in the setup shown in Fig.~1, depends on the sine of the phase difference across the superconductors $(\varphi_{L}-\varphi_{R})$ and flows from left to right or vice-versa. The equilibrium spin transfer torque points perpendicular to the plane spanned by the two magnetic moments of the ferromagnetic layers\cite{Wain}, and its magnitude is sinusoidal in the difference between the magnetization directions of the two ferromagnets. The sign and magnitude of the equilibrium spin transfer torque can be controlled by the phase difference between the two superconductors\cite{Wain}.\par
The equilibrium spin torque seen previously in SFFS junctions\cite{Wain}, SFFFS junctions\cite{Halter}, or even SFSFS junctions\cite{halter} is due to the misalignment of the ferromagnets. The origin of this equilibrium spin transfer torque is \textquotedblleft classical\textquotedblright{} and can be easily understood via a \textquotedblleft classical mechanism\textquotedblright, see Fig.~1(a). But this conventional view of the origin of spin transfer torque is not always applicable. Quantum origins of spin torque, as opposed to the \textquotedblleft classical\textquotedblright{} spin transfer torque, have been speculated on recently in Refs.~\cite{spin1,spin2}. In this paper, we give an example where the mechanism underlying the equilibrium spin torque is quantum in nature and due to spin flip scattering. Classically, when the magnetic moment of an electron is parallel or anti-parallel to the magnetic field, there is no torque exerted on the electron. Similarly, when the two magnetic moments of the ferromagnetic layers in a Josephson junction are parallel or anti-parallel, the equilibrium spin transfer torque vanishes, see Fig.~1(b). In Ref.~\cite{Wain}, the equilibrium spin transfer torque follows this same behavior. Our main motivation in this paper is to show that if we replace the normal metal of Ref.~\cite{Wain} by a magnetic impurity between the two ferromagnetic layers, a new effect appears: a finite equilibrium spin torque exists even when the magnetic moments of the ferromagnets are aligned parallel or anti-parallel. We show that a magnetic impurity can engender a torque in such a junction. We call this \textquotedblleft equilibrium quantum spin torque\textquotedblright. Thus, this new mechanism of spin flip scattering can lead to a finite equilibrium spin torque which has no classical analog.\par
We are interested in spin transfer torque because of its manifold applications, such as switching the magnetization of ferromagnets with a sufficiently large current without any external magnetic field. This switching provides a mechanism to create fast magnetic random access memories\cite{Myer}. Further, spin transfer torque can also be used for the excitation of spin waves\cite{tsoi}. The equilibrium spin transfer torque first shown in Ref.~\cite{Wain} with an $s$-wave superconductor has been extended to $d$-wave superconductors in Ref.~\cite{lin}.\par
The paper is organized as follows: in the next section on Theory, we first present our model and discuss the theoretical background of our study by writing down the Hamiltonian, wave-functions and boundary conditions needed to calculate the charge Josephson current and the equilibrium quantum spin torque. In section III, we analyze our results for the equilibrium quantum spin torque (subsection III.~A) and discuss the physical picture of the torque (subsection III.~B). In section IV, we give an experimental realization and a brief conclusion to our study. The explicit expression for the equilibrium quantum spin torque is provided in the Appendix.
\section{Theory}
In this section we write down the Hamiltonian, wave-functions and boundary conditions of our system, as depicted in Fig.~2, and calculate the Andreev bound states.
\subsection{Hamiltonian}
Our system consists of two ferromagnets ($F_{1}$ and $F_{2}$) with a {magnetic impurity}, sandwiched between two conventional $s$-wave singlet superconductors. The superconductors are isotropic and our model is shown in Fig.~2, with a {magnetic impurity} at $x=0$, two $s$-wave superconductors on either side at $x<-a/2$ and $x>a/2$, and two ferromagnetic layers in the regions $-a/2<x<0$ and $0<x<a/2$. In general, the magnetization vectors of the two ferromagnetic layers are misaligned by an angle $\theta$. However, in our calculation we focus on the limit $\theta\rightarrow0$, i.e., the magnetization vectors are aligned in parallel. We take the superconducting pair potential of the form $\Delta=\Delta_{0}[e^{i\varphi_{L}}\Theta(-x-a/2)+e^{i\varphi_{R}}\Theta(x-a/2)]$, where $\Delta_{0}$ is the gap parameter, and $\varphi_{L}$ and $\varphi_{R}$ are the superconducting phases of the left and right superconductor respectively. The temperature dependence of $\Delta_{0}$ is given by $\Delta_{0}\rightarrow \Delta_{0}\tanh(1.74\sqrt{(T_{c}/T-1)})$, where $T_{c}$ is the superconducting critical temperature\cite{annu}.
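As a quick numerical check of the interpolation formula above (an illustrative sketch of our own, not part of the paper's calculation), the gap saturates at $\Delta_{0}$ well below $T_{c}$ and closes at $T=T_{c}$:

```python
import numpy as np

def gap(T, Tc=1.0, Delta0=1.0):
    """BCS-like interpolation Delta(T) = Delta0 * tanh(1.74 * sqrt(Tc/T - 1)),
    with Delta(T) = 0 above Tc. Units of Tc and Delta0 are arbitrary here."""
    T = np.asarray(T, dtype=float)
    arg = np.maximum(Tc / T - 1.0, 0.0)        # avoid sqrt of negative above Tc
    return np.where(T < Tc, Delta0 * np.tanh(1.74 * np.sqrt(arg)), 0.0)

print(gap(0.01))   # ~ Delta0: gap fully developed deep below Tc
print(gap(1.0))    # 0: gap closes at T = Tc
```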
\begin{figure}[h]\vskip -0.16in
\centering{\includegraphics[width=1.1\linewidth]{Figspin2.pdf}}
\caption{\small\sl Josephson junction composed of two ferromagnets and a {magnetic impurity} with spin $S$ and magnetic moment $m'$ at $x=0$, sandwiched between two $s$-wave superconductors. In our work $\theta_{1}=\theta$ and $\theta_{2}=0$. When the ferromagnets are aligned, i.e., $\theta\rightarrow0$, the equilibrium spin transfer torque vanishes (see Fig.~1(b)); however, in our setup a new quantum mechanism of spin flip scattering gives rise to a non-zero torque, which we denote as equilibrium quantum spin torque (EQST). In this paper, we mainly concentrate on the limit $\theta\rightarrow0$.}
\end{figure}
The Bogoliubov-de Gennes equation of our system is given below\cite{Liu,cheng}:
\begin{eqnarray}
\begin{pmatrix}
H_{0}\hat{I} & i\Delta \hat{\sigma}_{y}\\
-i\Delta^{*} \hat{\sigma}_{y} & -H_{0}^{*}\hat{I}
\end{pmatrix} \psi(x)& =& E \psi(x),
\label{eqq}
\end{eqnarray}
where $H_{0}=p^2/2m^{\star}+V[\delta(x+a/2)+\delta(x-a/2)]-J_{0}\delta(x)\vec s.\vec S-\vec{h}.\hat{\sigma}[\Theta(x+a/2)+\Theta(a/2-x)]-E_{F}$. In the Hamiltonian $H_{0}$, the first term describes the kinetic energy of an electron with mass $m^{\star}$; the second term depicts the interfaces, with $V$ the strength of the $\delta$-like potential at the interfaces between ferromagnet and superconductor; the third term describes the {magnetic impurity}, with $J_{0}$ the strength of the exchange interaction between the electron with spin $\vec{s}$ and the {magnetic impurity} with spin $\vec{S}$\cite{joseph,AJP}; the fourth term describes the ferromagnets, with $\vec{h}$ the magnetization vector of the ferromagnetic layers and $\Theta$ the Heaviside step function. Further, $\psi(x)$ is a four-component spinor, $E_{F}$ is the Fermi energy, $\hat{\sigma}$ is the vector of Pauli spin matrices and $\hat{I}$ is the $2\times2$ identity matrix. In general, the magnetization vector ($\vec{h}$) of the left ferromagnet ($F_{1}$) is assumed to be at an angle $\theta$ with the $z$ axis in the $y$-$z$ plane, while that of the right ferromagnet ($F_{2}$) is fixed along the $z$ axis. Thus, $\vec{h}.\hat{\sigma}=h\sin \theta\hat{\sigma}_{y}+h\cos \theta\hat{\sigma}_{z}$\cite{Halter}. However, in our study we concentrate only on the case $\theta\rightarrow 0$, i.e., the ferromagnets are aligned. In the subsequent analysis we use the dimensionless versions of $J_{0}$ and $V$, given as $J=\frac{m^{*}J_{0}}{\hbar^2k_{F}}$ and $Z=\frac{m^{*}V}{\hbar^2k_{F}}$\cite{BTK}.
\subsection{Wave-functions and boundary conditions in the ferromagnetic Josephson junction in the presence of a {magnetic impurity}}
The system we consider consists of two ferromagnets with a {magnetic impurity} sandwiched between two conventional $s$-wave singlet superconductors.
Our model, shown in Fig.~2, depicts a {magnetic impurity} at $x=0$ and two superconductors at $x<-a/2$ and $x>a/2$. There are two ferromagnetic regions, $-a/2<x<0$ and $0<x<a/2$.\par
\subsubsection{Wave-functions}
Let us consider a spin-up electron incident at the $x=-a/2$ interface from the left superconductor.
Solving the Bogoliubov-de Gennes equation for the superconductors (see Eq.~\ref{eqq}) gives the wavefunctions in the left and right superconductors.
The wave function in the left superconductor (for $x<-\frac{a}{2}$) is\cite{joseph,LINDER}-
\begin{eqnarray}
\label{eq1}
\psi_{S_{L}}(x)=\begin{pmatrix}u\\
0\\
0\\
v
\end{pmatrix}e^{ik_{+}x}\phi_{m'}^{S}+r_{ee}^{\uparrow\uparrow}\begin{pmatrix}
u\\
0\\
0\\
v
\end{pmatrix}e^{-ik_{+}x}\phi_{m'}^{S}+r_{ee}^{\uparrow\downarrow}\begin{pmatrix}
0\\
u\\
-v\\
0
\end{pmatrix}e^{-ik_{+}x}\phi_{m'+1}^{S}+r_{eh}^{\uparrow\uparrow}\begin{pmatrix}
0\\
-v\\
u\\
0
\end{pmatrix}e^{ik_{-}x}\phi_{m'+1}^{S}+r_{eh}^{\uparrow\downarrow}\begin{pmatrix}
v\\
0\\
0\\
u
\end{pmatrix}e^{ik_{-}x}\phi_{m'}^{S},\nonumber\\
\end{eqnarray}
The amplitudes $r_{ee}^{\uparrow\uparrow},r_{ee}^{\uparrow\downarrow},r_{eh}^{\uparrow\uparrow},r_{eh}^{\uparrow\downarrow}$ denote normal reflection without spin flip, normal reflection with spin flip, Andreev reflection with spin flip and Andreev reflection without spin flip, respectively. The corresponding wave function in the right superconductor (for $x>\frac{a}{2}$) is given by-
\begin{eqnarray}
\psi_{S_{R}}(x)=t_{ee}^{\uparrow\uparrow}\begin{pmatrix}
ue^{i\varphi}\\
0\\
0\\
v
\end{pmatrix}e^{ik_{+}x}\phi_{m'}^{S}+t_{ee}^{\uparrow\downarrow}\begin{pmatrix}
0\\
ue^{i\varphi}\\
-v\\
0
\end{pmatrix}e^{ik_{+}x}\phi_{m'+1}^{S}+t_{eh}^{\uparrow\uparrow}\begin{pmatrix}
0\\
-ve^{i\varphi}\\
u\\
0
\end{pmatrix}e^{-ik_{-}x}\phi_{m'+1}^{S}+t_{eh}^{\uparrow\downarrow}\begin{pmatrix}
ve^{i\varphi}\\
0\\
0\\
u
\end{pmatrix}e^{-ik_{-}x}\phi_{m'}^{S},\nonumber\\
\end{eqnarray}
where $t_{ee}^{\uparrow\uparrow},t_{ee}^{\uparrow\downarrow},t_{eh}^{\uparrow\uparrow},t_{eh}^{\uparrow\downarrow}$ represent the transmission amplitudes corresponding to the reflection processes described above, and $\varphi=\varphi_{R}-\varphi_{L}$ represents the phase difference between the right and left superconductors. $\phi_{m'}^{S}$ is the eigenspinor of the {magnetic impurity}, with its $S^{z}$ operator acting as $S^{z}\phi_{m'}^{S} = m'\phi_{m'}^{S}$, where $m'$ is the spin magnetic moment of the {magnetic impurity}. $u$ and $v$ are the BCS coherence factors, defined as $u^{2}={\frac{1}{2}(1+\frac{\sqrt{E^2-\Delta_{0}^{2}}}{E})}$, $v^{2}={\frac{1}{2}(1-\frac{\sqrt{E^2-\Delta_{0}^{2}}}{E})}$.
$k_{\pm}=\sqrt{\frac{2m^{\star}}{\hbar^2}(E_{F}\pm \sqrt{E^2-\Delta_{0}^2})}$ is the wave-vector for electron-like ($k_{+}$) and hole-like ($k_{-}$) quasi-particles in the left and right superconducting wave-functions, $\psi_{S_L}$ and $\psi_{S_R}$.\\
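The coherence factors and quasi-particle wave-vectors above can be checked numerically (an illustrative sketch with parameter values of our own choosing, not the paper's code): for $E>\Delta_{0}$ one has $u^{2}+v^{2}=1$, and in the regime $E_{F}\gg\Delta_{0},E$ the wave-vectors $k_{\pm}$ collapse onto $k_{F}$, which is the content of the Andreev approximation used below.

```python
import numpy as np

# Illustrative parameters (our choice): units where 2m*/hbar^2 = 1, so k = sqrt(energy);
# E_F >> Delta0 puts us in the Andreev-approximation regime.
Delta0, EF = 1.0, 1e4
E = 1.5 * Delta0                     # quasi-particle energy above the gap

# BCS coherence factors
root = np.sqrt(E**2 - Delta0**2) / E
u2, v2 = 0.5 * (1 + root), 0.5 * (1 - root)

# electron-like (k_plus) and hole-like (k_minus) wave-vectors
k_plus = np.sqrt(EF + np.sqrt(E**2 - Delta0**2))
k_minus = np.sqrt(EF - np.sqrt(E**2 - Delta0**2))
kF = np.sqrt(EF)

print(u2 + v2)                       # ~ 1.0
print((k_plus - kF) / kF)            # tiny: k_+ ~ k_F when E_F >> E
```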
Similarly, solving the Bogoliubov-de Gennes equation for the ferromagnets, we get the wave-functions in the ferromagnets.
The wave-function in the left ferromagnet ($F_{1}$) is given by-
\begin{eqnarray}
\psi_{F_{1}}(x)=(ee^{iq_{\uparrow}^{+}(x+a/2)}+fe^{-iq_{\uparrow}^{+}x})\begin{pmatrix}
\cos \frac{\theta}{2}\\
i\sin\frac{\theta}{2}\\
0\\
0
\end{pmatrix}\phi_{m'}^{S}+(e^{\prime} e^{iq_{\downarrow}^{+}(x+a/2)}+f^{\prime} e^{-iq_{\downarrow}^{+}x})\begin{pmatrix}
i\sin \frac{\theta}{2}\\
\cos\frac{\theta}{2}\\
0\\
0
\end{pmatrix}\phi_{m'+1}^{S}\nonumber\\+(e_{0}e^{-iq_{\uparrow}^{-}(x+a/2)}+f_{0}e^{iq_{\uparrow}^{-}x})\begin{pmatrix}
0\\
0\\
\cos\frac{\theta}{2}\\
-i\sin\frac{\theta}{2}
\end{pmatrix}\phi_{m'+1}^{S}+(e_{0}^{\prime} e^{-iq_{\downarrow}^{-}(x+a/2)}+f_{0}^{\prime} e^{iq_{\downarrow}^{-}x})\begin{pmatrix}
0\\
0\\
-i\sin\frac{\theta}{2}\\
\cos\frac{\theta}{2}
\end{pmatrix}\phi_{m'}^{S},\mbox{for $-\frac{a}{2}<x<0.$}
\end{eqnarray}
Similarly, the wave-function in the right ferromagnet ($F_{2}$) is given by-
\begin{eqnarray}
\psi_{F_{2}}(x)=(a_{0}e^{iq_{\uparrow}^{+}x}+be^{-iq_{\uparrow}^{+}(x-a/2)})\begin{pmatrix}
1\\
0\\
0\\
0
\end{pmatrix}\phi_{m'}^{S}+(a^{\prime} e^{iq_{\downarrow}^{+}x}+b^{\prime} e^{-iq_{\downarrow}^{+}(x-a/2)})\begin{pmatrix}
0\\
1\\
0\\
0
\end{pmatrix}\phi_{m'+1}^{S}\nonumber\\+(ce^{-iq_{\uparrow}^{-}x}+de^{iq_{\uparrow}^{-}(x-a/2)})\begin{pmatrix}
0\\
0\\
1\\
0
\end{pmatrix}\phi_{m'+1}^{S}+(c^{\prime} e^{-iq_{\downarrow}^{-}x}+d^{\prime} e^{iq_{\downarrow}^{-}(x-a/2)})\begin{pmatrix}
0\\
0\\
0\\
1
\end{pmatrix}\phi_{m'}^{S},\mbox{for $0<x<\frac{a}{2}.$}
\end{eqnarray}
$q_{\sigma}^{\pm}=\sqrt{\frac{2m^{\star}}{\hbar^2}(E_{F}+\rho_{\sigma}h\pm E)}$ is the wave-vector for electrons ($q_{\sigma}^{+}$) and holes ($q_{\sigma}^{-}$) in the ferromagnetic layers, wherein $\rho_{\sigma}=+1(-1)$ corresponds to $\sigma=\uparrow(\downarrow)$. In our work we have used the Andreev approximation $k_{+}=k_{-}=\sqrt{\frac{2m^{\star}E_{F}}{\hbar^2}}=k_{F}$ and $q_{\uparrow,\downarrow}=k_{F}\sqrt{1\pm\frac{h}{E_{F}}}$, where $k_{F}$ is the Fermi wave-vector, with $E_{F}\gg\Delta_{0}, E$.\par
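In the Andreev approximation, the exchange field splits the ferromagnetic wave-vectors about $k_{F}$. A brief numerical illustration (our own sketch, in arbitrary units, with $h/E_{F}=0.5$ as used later in the figures):

```python
import numpy as np

kF = 1.0                               # Fermi wave-vector (arbitrary units)
h_over_EF = 0.5                        # exchange field h/E_F, value used in Figs. 3-7

q_up = kF * np.sqrt(1 + h_over_EF)     # spin-up (majority) electrons
q_down = kF * np.sqrt(1 - h_over_EF)   # spin-down (minority) electrons

# The mismatch q_up - q_down sets the extra spin-dependent phase accumulated
# across each ferromagnetic layer, which enters the bound-state spectrum.
print(q_up, q_down)                    # q_up > kF > q_down for 0 < h < E_F
```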
\subsubsection{Boundary conditions}
The boundary conditions are as follows\cite{joseph,LINDER}: at $x=-a/2$, $\psi_{S_{L}}(x)=\psi_{F_{1}}(x)$ (continuity of wave-functions) and $\frac{d\psi_{F_{1}}}{dx}-\frac{d\psi_{S_{L}}}{dx}=\frac{2m^{\star}V}{\hbar^2}\psi_{F_{1}}$ (discontinuity in the first derivative); at $x=0$ (see Fig.~2), $\psi_{F_{1}}(x)=\psi_{F_{2}}(x)$ and $\frac{d\psi_{F_{2}}}{dx}-\frac{d\psi_{F_{1}}}{dx}=-\frac{2m^{\star}J_{0}\vec s.\vec S}{\hbar^2} \psi_{F_{1}}$, with $\vec s.\vec S=s^{z}S^{z}+\frac{1}{2}(s^{-}S^{+}+s^{+}S^{-})$ being the exchange coupling due to the {magnetic impurity} in the Hamiltonian. $\vec s$ represents the spin operator acting on electron/hole states $\phi_{m}^{s}$, while $\vec S$ represents the spin operator acting on {magnetic impurity} states $\phi_{m'}^{S}$. $\phi_{m}^{s}$ and $\phi_{m'}^{S}$ are the eigenstates of the electron/hole and the magnetic impurity, with $m$ and $m'$ the spin magnetic moments of the electron/hole and the {magnetic impurity} respectively. $s$ is the spin of the electron, while $S$ is the spin of the {magnetic impurity}, with $s^{\pm}=s^{x}\pm i s^{y}, s^{z}=\frac{\hbar}{2}\begin{pmatrix}\sigma_{z} & 0 \\
0 & \sigma_{z} \end{pmatrix}, s^{x}=\frac{\hbar}{2}\begin{pmatrix} \sigma_{x} & 0\\
0 & \sigma_{x} \end{pmatrix}, s^{y}=\frac{\hbar}{2}\begin{pmatrix} \sigma_{y} & 0\\
0 & \sigma_{y} \end{pmatrix} $, where $\sigma_{z}=\begin{pmatrix}1 & 0 \\
0 & -1 \end{pmatrix}, \sigma_{x}=\begin{pmatrix}0 & 1 \\
1 & 0 \end{pmatrix}, \sigma_{y}=\begin{pmatrix}0 & -i \\
i & 0 \end{pmatrix}$. The action of the spin-raising and spin-lowering operators on the {magnetic impurity} is discussed below.
For an incident spin-up electron, $\phi_{m}^{s}=(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}$, with $s=1/2$, $m=1/2$, where $T$ denotes the transpose of the matrix.
Now, when the spin flip term of our Hamiltonian acts on the spin-up electron $(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}$ and the {magnetic impurity} state $\phi_{m'}^{S}$, we have-
\begin{equation}
\vec{s}.\vec{S} (1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}=s^{z}S^{z}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}+\frac{1}{2}s^{-}S^{+}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}+\frac{1}{2}s^{+}S^{-}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}
\label{eqq1}
\end{equation}
Now, $s^{+}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}=0$, since $s^{+}$ is the spin-raising operator and there is no higher spin state than up for a spin-$1/2$ electron, so the third term in Eq.~\ref{eqq1} vanishes, while $s^{-}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}=(0\hspace{2pt}1\hspace{2pt}0\hspace{2pt}0)^{T}$: the spin-lowering operator gives the spin-down state $(0\hspace{2pt}1\hspace{2pt}0\hspace{2pt}0)^{T}$ of the electron. Further, for a spin-up electron, $s^{z} (1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}= \frac{1}{2} (1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}$ (with $\hbar=1$), and for the {magnetic impurity}, $S^{z} \phi_{m'}^{S}=m' \phi_{m'}^{S}$. Further, the spin-raising and spin-lowering operators acting on the {magnetic impurity} give: $S^{+}\phi_{m'}^{S}=f_{2}\phi_{m'+1}^S=\sqrt{(S-m')(S+m'+1)}\phi_{m'+1}^S$ and $S^{-}\phi_{m'+1}^{S}=\sqrt{(S-m')(S+m'+1)}\phi_{m'}^S$.
\begin{equation}
\mbox{ Thus, }\vec{s}.\vec{S}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}= \frac{1}{2} m'(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}+\frac{1}{2}\sqrt{(S-m')(S+m'+1)}(0\hspace{2pt}1\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'+1}^{S}
\label{eqq2}
\end{equation}
From Eqs.~\ref{eqq1}, \ref{eqq2} we thus have-
\begin{equation}
\vec{s}.\vec{S}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}=\frac{1}{2}m'(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}+\frac{1}{2}f_{2}(0\hspace{2pt}1\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'+1}^{S}{} \mbox{ (for both no flip and spin flip process) }\nonumber
\end{equation}
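The algebra above can be verified with explicit matrices (an illustrative sketch of our own, not part of the paper): building the spin-$S$ operators of the impurity and the spin-$1/2$ operators of the electron, the state $\vec{s}.\vec{S}\,(1\hspace{2pt}0)^{T}\otimes\phi_{m'}^{S}$ indeed has no-flip amplitude $m'/2$ and mutual-flip amplitude $f_{2}/2$; for $S=5/2$, $m'=-1/2$ one finds $f_{2}=3$, as quoted in the caption of Fig.~5.

```python
import numpy as np

def impurity_ops(S):
    """Spin-S operators in the basis |m> = |S>, |S-1>, ..., |-S> (hbar = 1)."""
    dim = int(round(2 * S)) + 1
    m = np.array([S - k for k in range(dim)])
    Sz = np.diag(m)
    Sp = np.zeros((dim, dim))
    for k in range(1, dim):            # S+ |m> = sqrt((S-m)(S+m+1)) |m+1>
        Sp[k - 1, k] = np.sqrt((S - m[k]) * (S + m[k] + 1))
    return Sz, Sp, Sp.T

# electron spin-1/2 operators
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = sp.T

S, mp = 5 / 2, -1 / 2                  # impurity spin and magnetic moment (Fig. 5 values)
Sz, Sp, Sm = impurity_ops(S)
# s.S = sz Sz + (s- S+ + s+ S-)/2 on the electron (x) impurity product space
sdotS = np.kron(sz, Sz) + 0.5 * (np.kron(sm, Sp) + np.kron(sp, Sm))

dim = Sz.shape[0]
idx = int(round(S - mp))               # basis index of |m'>
up, dn = np.eye(2)[0], np.eye(2)[1]
phi_mp, phi_mp1 = np.eye(dim)[idx], np.eye(dim)[idx - 1]

out = sdotS @ np.kron(up, phi_mp)
c_noflip = np.kron(up, phi_mp) @ out       # expect m'/2
c_flip = np.kron(dn, phi_mp1) @ out        # expect f2/2
f2 = np.sqrt((S - mp) * (S + mp + 1))
print(c_noflip, c_flip, f2 / 2)            # -> -0.25 1.5 1.5
```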
In the quantum spin flip scattering process, when the {spin polarized super-current (its state is denoted $|s.c\rangle$), in our case described by a macroscopic wave-function $\sim|\Psi_{S_{K}}|e^{i\varphi_{K}}\approx \begin{pmatrix}u\\
0\\
0\\
v
\end{pmatrix}e^{i\varphi_{K}}$ (where $K$ can be $L$ or $R$)}, interacts with the {magnetic impurity}, the {magnetic impurity} can flip its spin with a finite probability, but the flip is not certain. In addition to the spin flip process, there is also a process without any flip. Thus, {the spin polarized super-current-{magnetic impurity} state} after the exchange interaction is a superposition of the mutual spin-flip and no-flip states, given by the joint entangled wave-function of the {spin polarized super-current (s.c)} and the {magnetic impurity} as-
\begin{equation}
|s.c\rangle\otimes|\phi_{m'}^{S}\rangle={\frac{m'}{2}}|{\mbox{No flip}}\rangle+{\frac{f_{2}}{2}}|{\mbox{Mutual-flip}}\rangle\nonumber
\end{equation}
On the other hand, when there is no possibility of spin-flip scattering, i.e., when $S=m'$, the spin flip probability $f_{2}$=$\sqrt{(S-m')(S+m'+1)}$=$0$ and $\vec{s}.\vec{S}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}=s^{z}S^{z}(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}= \frac{1}{2} m'(1\hspace{2pt}0\hspace{2pt}0\hspace{2pt}0)^{T}\phi_{m'}^{S}.$ Thus, both before and after the interaction, the spin polarized super-current state and the magnetic impurity state are neither entangled nor in any superposition.
{Thus, the Hamiltonian is $H_{0}=p^2/2m^{\star}+V[\delta(x+a/2)+\delta(x-a/2)]-J_{0}\delta(x)s^{z}S^{z}-\vec{h}.\hat{\sigma}[\Theta(x+a/2)+\Theta(a/2-x)]-E_{F}$ for the no flip process only, while it is $H_{0}=p^2/2m^{\star}+V[\delta(x+a/2)+\delta(x-a/2)]-J_{0}\delta(x)\vec s.\vec S-\vec{h}.\hat{\sigma}[\Theta(x+a/2)+\Theta(a/2-x)]-E_{F}$ when a mutual spin flip takes place with finite probability.}
Finally, at $x=a/2$, the boundary conditions are- $\psi_{F_{2}}(x)=\psi_{S_{R}}(x)$, $\frac{d\psi_{S_{R}}}{dx}-\frac{d\psi_{F_{2}}}{dx}=\frac{2m^{\star}V}{\hbar^2}\psi_{F_{2}}$.\par
The aforesaid method of addressing the spin-flip scattering process is not unique to our work; {many other papers have used the same model of spin flip scattering in different contexts}: the first paper which introduced this model, Ref.~\cite{AJP}; the model of quantum spin flip scattering in graphene, Ref.~\cite{Maruri}; the modeling of quantum spin flip scattering in a Josephson junction, Ref.~\cite{joseph};
and finally the modeling of the occurrence of {Yu-Shiba-Rusinov (YSR)} bound states at the interface of a normal metal-superconductor junction, Ref.~\cite{ysr-pal}.
\subsection{Andreev Bound states}
Following the procedure enunciated in Ref.~\cite{annu} to calculate the bound state contribution to the Josephson supercurrent, we neglect the contribution from the incoming quasiparticle, i.e., the first term $\begin{pmatrix}u & 0 & 0 & v\end{pmatrix}^{T}e^{ik_{+}x}\phi_{m'}^{S}$ of Eq.~\ref{eq1}, and insert the wave functions into the boundary conditions. We obtain a homogeneous system of linear equations for the scattering amplitudes, $Qx=0$, where $x$ is an $8\times1$ column matrix given by $x=[r_{ee}^{\uparrow\uparrow},r_{ee}^{\uparrow\downarrow},r_{eh}^{\uparrow\uparrow},r_{eh}^{\uparrow\downarrow},t_{ee}^{\uparrow\uparrow},t_{ee}^{\uparrow\downarrow},t_{eh}^{\uparrow\uparrow},t_{eh}^{\uparrow\downarrow}]$ and $Q$ is an $8\times 8$ matrix obtained by expressing the scattering amplitudes in the two ferromagnetic layers through the scattering amplitudes in the left and right superconductors. For a nontrivial solution of this system, $\det Q=0$. We thus get the Andreev bound state energy spectrum $E_{i}$, $i=\{1,...,8\}$\cite{Been}. This is the usual procedure for calculating the bound state spectra in Josephson junctions, see Refs.~\cite{annu}, \cite{LINDER}.
{We find that $E_{i}(i=1,...,8)=\pm\varepsilon_{p}(p=1,...,4)$.}
\subsection{Josephson charge current}
On solving the boundary conditions we obtain 8 Andreev bound states, $E_{i}(i=1,...,8)=\pm\varepsilon_{p}(p=1,...,4)$. From the Andreev bound state energies\cite{Been} we get the free energy of our system, which is given by\cite{annu}:
{
\begin{eqnarray}
{}&&\mathcal{F}=-\frac{1}{\beta}\frac{1}{2\pi}\int_{0}^{2\pi}\ln\Big[\prod_{i}(1+e^{-\beta E_{i}})\Big]d(k_{F}a)\nonumber\\
&&=-\frac{2}{\beta}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{p=1}^{4}\ln\Big[2\cosh\Big(\frac{\beta \varepsilon_{p}}{2}\Big)\Big]d(k_{F}a)
\label{ff}
\end{eqnarray}
}
{We consider only the short junction limit, i.e., $a\ll\xi$, where $\xi$ is the superconducting coherence length and $a$ is the width of the intervening ferromagnetic layers between the superconductors, such that the total Josephson current is determined only by the bound state contribution; the continuum contribution is negligible and is therefore neglected. See Refs.~[10,16], where similarly the continuum contribution to the total Josephson current is neglected in the short junction limit.}
The charge Josephson current at finite temperature is the derivative of the free energy $\mathcal{F}$ of our system with respect to the phase difference $\varphi$ between the left and right superconductors\cite{deG,LINDER},
\begin{equation}
I_{c}=\frac{2e}{\hbar}\frac{\partial \mathcal{F}}{\partial \varphi}=-\frac{2e}{\hbar}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{p=1}^{4}\tanh\Big(\frac{\beta \varepsilon_{p}}{2}\Big)\frac{\partial \varepsilon_{p}}{\partial \varphi}d(k_{F}a)
\label{ff1}
\end{equation}
Here $e$ is the electronic charge and $k_{F}a$ is the phase accumulated in the ferromagnetic layers.\par
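The structure of Eq.~\ref{ff1} can be illustrated numerically (a sketch with a stand-in spectrum, not the eight bound states of our junction): for a single transparent short-junction channel with the textbook Andreev level $\varepsilon(\varphi)=\Delta_{0}\cos(\varphi/2)$, the same per-level formula yields $I=(e\Delta_{0}/\hbar)\sin(\varphi/2)\tanh(\beta\varepsilon/2)$, and a numerical derivative of $\varepsilon$ reproduces this analytic form.

```python
import numpy as np

Delta0, beta = 1.0, 100.0           # gap and inverse temperature (units e = hbar = 1)

def eps(phi):
    """Stand-in Andreev level for one transparent short-junction channel."""
    return Delta0 * np.abs(np.cos(phi / 2))

def current(phi, dphi=1e-6):
    """Per-level current I = -(2e/hbar) * tanh(beta*eps/2) * d(eps)/d(phi)."""
    deps = (eps(phi + dphi) - eps(phi - dphi)) / (2 * dphi)
    return -2.0 * np.tanh(beta * eps(phi) / 2) * deps

phi = np.pi / 2
analytic = Delta0 * np.sin(phi / 2) * np.tanh(beta * eps(phi) / 2)
print(current(phi), analytic)       # numerical derivative matches the analytic form
```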
\subsection{Equilibrium spin torque}
From the free energy of our system (Eq.~(\ref{ff})) we calculate the equilibrium spin torque\cite{Wain} by taking the derivative of the free energy with respect to the misorientation angle \textquoteleft$\theta$\textquoteright{} (the angle between the magnetic moments of the two ferromagnets)-
\begin{equation}
\tau^{eq}=\frac{\partial\mathcal{F}}{\partial \theta}=-\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{p=1}^{4}\tanh\Big(\frac{\beta \varepsilon_{p}}{2}\Big)\frac{\partial \varepsilon_{p}}{\partial \theta}d(k_{F}a)
\label{ff2}
\end{equation}
Eqs.~\ref{ff1}, \ref{ff2} are the main working formulas of our paper. The equilibrium spin torque is also referred to as the equilibrium spin current in some papers\cite{linder,halter}. In our calculation, as previously mentioned, we focus on the case where the magnetization in the two ferromagnets is aligned, i.e., $\theta\rightarrow0$. In this limit we surprisingly see a finite equilibrium spin torque due to spin flip scattering, upending the classical picture in which the spin torque arises from nonaligned magnetization. For the transparent regime ($Z=0$) we find-
\begin{equation}
\tau^{eq}\mid_{\theta\rightarrow 0}=\frac{\Delta_{0}^2}{2\pi}\int_{0}^{2\pi}\Big(\tanh\Big(\frac{\beta M_{1}}{2}\Big){M_{1}^{\prime}}+\tanh\Big(\frac{\beta M_{2}}{2}\Big){M_{2}^{\prime}}+\tanh\Big(\frac{\beta M_{3}}{2}\Big){M_{3}^{\prime}}+\tanh\Big(\frac{\beta M_{4}}{2}\Big){M_{4}^{\prime}}\Big)d(k_{F}a)
\end{equation}
{where $M_{1}$, $M_{2}$, $M_{3}$, $M_{4}$, $M_{1}^{\prime}$, $M_{2}^{\prime}$, $M_{3}^{\prime}$ and $M_{4}^{\prime}$ are lengthy expressions that depend on the exchange interaction ($J$), the magnetization of the ferromagnets, the spin ($S$) and magnetic moment ($m'$) of the {magnetic impurity}, the phase ($k_{F}a$) accumulated in the ferromagnetic region and the spin flip probability of the {magnetic impurity} ($f_{2}$). Their explicit forms are given in the Appendix, where we also show that for the no flip case the EQST ($\tau^{eq}\mid_{\theta\rightarrow 0}$) vanishes.} {In the next section we will see from the figures that the EQST is zero in the limits $J\rightarrow0$ and $Z\rightarrow\infty$.}
\section{Results}
\subsection{Analysing EQST}
In Figs.~3-7 we analyze, via various plots, this unique quantum spin torque arising from spin flip scattering alone. In Fig.~3 we plot both the Josephson charge super-current and the equilibrium quantum spin torque (EQST) as functions of the phase difference $\varphi$ for different interface transparencies $Z$.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Figspin3.pdf}
\caption{\small \sl Josephson charge current and equilibrium quantum spin torque (EQST) as a function of phase difference ($\varphi$) for different values of the interface barrier strength ($Z$). Parameters are $\Delta_{0}=1$~meV, $I_{0}=e\Delta_{0}/\hbar$, $T/T_{c}=0.01$, $J=0.5$, $h/E_{F}=0.5$, $\theta\rightarrow0$, $S=5/2$, $m'=-1/2$. {Both the Josephson charge current and the EQST are inhibited by increasing $Z$, while the EQST is zero for $\varphi=0$ and $\varphi=2\pi$.}}
\end{figure}
We consider the magnetic moments of the ferromagnetic layers to be parallel ($\theta\rightarrow0$) and deal with the spin flip case, i.e., $f_{2}\neq 0$ (see Appendix); in this case $S\neq m'$ for the {magnetic impurity}, so there is a finite probability for the {magnetic impurity} to flip its own spin while interacting with an electron/hole. We see that both the Josephson charge current and the EQST are inhibited by increasing the interface barrier strength ($Z$). Further, similar to the charge Josephson current, the EQST vanishes at $\varphi=0$ and $\varphi=2\pi$. Usually the spin transfer torque opposes the Josephson current (see Ref.~\cite{Wain}); however, the equilibrium quantum spin torque (EQST) as shown here can flow in the same direction as the Josephson current, see Fig.~3(a), $-0.7<\varphi<0.7$. This behavior is also seen in Ref.~\cite{spin2} for the quantum spin transfer torque in a different context.\par
In Fig.~4(a) we plot the EQST as a function of the phase difference ($\varphi$) for different values of the exchange interaction $J$, again for $\theta\rightarrow0$. We see that with a change of the exchange interaction $J$ there is a sign change of the EQST. The change in sign of $\tau^{eq}$ via \textquoteleft$J$\textquoteright{} implies that the EQST seen in our system can be tuned via \textquoteleft$J$\textquoteright, and the sign of $\tau^{eq}$ can be controlled by the phase difference, as shown in Figs.~3(b), 4(a) \& 4(b). In Fig.~4(b) we plot the EQST as a function of the phase difference ($\varphi$) for different values of the magnitude of the magnetization ($h$) of the ferromagnets. {We see that the EQST increases with increasing \textquoteleft$h$\textquoteright.}
\begin{figure}
\includegraphics[width=0.9\linewidth]{Figspin4.pdf}
\caption{\small \sl EQST as a function of phase difference ($\varphi$) for (a) different values of the exchange interaction ($J$) of the {magnetic impurity} and (b) different values of the magnetization ($h$) of the ferromagnets. Parameters are $\Delta_{0}=1$~meV, $I_{0}=e\Delta_{0}/\hbar$, $T/T_{c}=0.01$, $Z=0$, $J=1$ (for (b)), $h/E_{F}=0.5$, $\theta\rightarrow0$, $S=5/2$, $m'=-1/2$. {In (a) the EQST changes sign with change of the exchange interaction $J$ and the phase difference $\varphi$. In (b) the EQST increases with increasing magnetization $h$ of the ferromagnets.}}
\end{figure}
In Fig.~5 we study the EQST from low to high spin states {and for different values of the spin flip probability} of the {magnetic impurity}, again at $\theta\rightarrow0$ for a transparent junction, i.e., $Z=0$. In Fig.~5(a), $J=1$, and we see that the EQST monotonically decreases with increasing \textquoteleft$S$\textquoteright{} for the particular value $m'=-\frac{1}{2}$, implying that high spin states inhibit the EQST. In Fig.~5(b) we plot the EQST for a particular spin $S =5/2$ and for all possible values of the spin flip probability of the spin flipper. We see that the EQST is enhanced for $f_{2}>S$, but suppressed for $f_{2}<S$.\par
In Fig.~6(a) we plot the EQST for the spin flip ($S=3/2, m'=-1/2, f_{2}\neq0$) case, the no flip ($S=3/2, m'=3/2, f_{2}=0$) case, and a superconductor-ferromagnet-ferromagnet-superconductor ($S$-$F_{1}$-$F_{2}$-$S$) junction without a {magnetic impurity} ($J=0$), all in the same figure, as a function of the mis-orientation angle ($\theta$) between the ferromagnets. We see that, in contrast to the $S$-$F_{1}$-$F_{2}$-$S$ junction ($J=0$ case) and the no flip case, the EQST is finite at $\theta\rightarrow0$ and $\theta=\pi$ when the magnetic impurity flips its spin. Thus, the reason for a finite EQST at $\theta\rightarrow0$ is the finite probability of flipping. This can be explained as follows: after passing through the first ferromagnetic layer, the {super-current} becomes polarized in the direction of magnetization of the first ferromagnetic layer. When the {spin polarized super-current} interacts with the {magnetic impurity} through the exchange interaction, there is a finite probability for a mutual spin flip.
The equation below depicts the interaction process:
\begin{equation}
|s.c\rangle\otimes|\phi_{m'}^{S}\rangle={\frac{m'}{2}}|{\mbox{No flip}}\rangle+{\frac{f_{2}}{2}}|{\mbox{Mutual-flip}}\rangle
\end{equation}
{where $|s.c\rangle$ is the state of the spin polarized supercurrent}; see the paragraphs above and below Eq.~\ref{eqq2} on how this equation comes about. Due to this spin flip scattering, the direction of {the spin of the supercurrent} will be in a superposition too, and thus will differ from the direction of the magnetization vector of the ferromagnetic layer. Thus, when the {supercurrent} enters the second ferromagnetic layer, the magnetization vector of the second ferromagnetic layer will exert a torque on the spin flipped component of the {supercurrent wave function} in order to rotate {the supercurrent's spin} along the direction of magnetization, while leaving the non spin flipped component as it is. From conservation of spin angular momentum, the {supercurrent} will also exert an equal and opposite torque on the magnetic moment of the second ferromagnetic layer, leading to a finite EQST even at $\theta\rightarrow 0$. However, in the absence of a {magnetic impurity} ($J=0$ case) and for the no flip case, the {spin polarized supercurrent state} does not flip its spin. Thus, in the absence of a magnetic impurity or in the case of no-flip scattering, the {spin polarized supercurrent's spin} and the magnetization vectors of the ferromagnetic layers will be in the same direction. Therefore, the EQST vanishes for $J=0$ and for the no-flip process, but it is finite for the spin-flip process.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Figspin5.pdf}
\caption{\small \sl (a) Equilibrium quantum spin torque (EQST) vs spin ($S$) of spin flipper. (b) EQST vs spin flip probability ($f_{2}$) of spin flipper for $S=5/2$ and $m'=5/2 (f_{2}=0), m'=3/2$ and $m'=-5/2 (f_{2}=2.236), m'=1/2$ and $m'=-3/2 (f_{2}=2.8284)$ and $m'= -1/2 (f_{2}=3)$. Parameters are $\Delta_{0}=1 meV$, $T/T_{c}=0.01$, $\varphi=\pi/2$, $J=1$, $m'=-1/2$ (for (a)), $Z=0$, $\theta\rightarrow0$, $h/E_{F}=0.5$. {EQST decreases with increase of spin $S$ of magnetic impurity.}}
\end{figure}
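The spin-flip probabilities quoted in the caption all follow a single closed form. As a consistency check, the short script below reproduces every $(S,m')$ value listed there; note that the closed form $f_{2}=\sqrt{(S-m')(S+m'+1)}$ used here is our own inference from the quoted numbers, not something stated explicitly at this point in the text.

```python
from math import sqrt, isclose

def f2(S, mp):
    # Hypothesized spin-flip amplitude of the magnetic impurity for spin S
    # and magnetic moment m'; inferred from the quoted values, not taken
    # verbatim from the text.
    return sqrt((S - mp) * (S + mp + 1))

S = 5 / 2
# (m', f2) pairs exactly as quoted in the figure caption for S = 5/2.
quoted = {5/2: 0.0, 3/2: 2.236, -5/2: 2.236,
          1/2: 2.8284, -3/2: 2.8284, -1/2: 3.0}
for mp, val in quoted.items():
    assert isclose(f2(S, mp), val, abs_tol=5e-4)
```

In particular, the spin-flip case used throughout, $S=3/2$, $m'=-1/2$, gives $f_{2}=2$, while the no-flip case $m'=S$ gives $f_{2}=0$.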
This finite $\tau^{eq}$ can also serve as a check on whether $SFFS$ junctions are clean or contaminated with magnetic adatoms. In Fig.~6(b) we plot the EQST as a function of the exchange interaction $J$, from antiferromagnetic coupling ($J<0$) to ferromagnetic coupling ($J>0$), at phase difference $\varphi=\pi/2$. For $\theta\rightarrow 0$, the ferromagnets play no role in flipping the electron's/hole's spin\cite{ping}, and spin flip is only due to the spin flipper. We see that for ferromagnetic coupling there is no sign change of the EQST with change in $J$. However, for antiferromagnetic coupling ($J < 0$) there is a sign change in $\tau^{eq}$ as $J$ changes from $J=0$ to $J=-2$, implying tunability of the sign of the EQST via the exchange interaction of the spin flipper.
\begin{figure}
\includegraphics[width=0.9\linewidth]{Figspin6.pdf}
\caption{\small \sl (a) EQST as a function of misorientation angle ($\theta$) for $\varphi=\pi/2$. (b) EQST as a function of exchange interaction ($J$) of spin flipper for $\varphi=\pi/2$ and $\theta\rightarrow 0$. Parameters are $\Delta_{0}=1 meV$,
$I_{0}=e\Delta_{0}/\hbar$, $T/T_{c}=0.01$, $Z=0$, $h/E_{F}=0.5$, spin flip case: $S=3/2, m'=-1/2$, no flip case: $S=3/2, m'=3/2$ and for (a) $J=1$. { In (a) EQST is zero for $J=0$ and no flip case ($f_{2}=0$), but finite for spin flip case ($f_{2}\neq0$). In (b) EQST changes sign with change in $J$ for antiferromagnetic coupling ($J<0$) and is also asymmetric with respect to $J$.}}
\end{figure}
Finally, in Fig.~7 we plot the EQST as a function of the interface barrier strength ($Z$). We see that there is no sign change of the EQST with increase of the interface barrier strength $Z$. Further, the EQST is almost zero in the tunneling regime.\par{ The theoretically predicted numerical value of the equilibrium spin transfer torque (ESTT) is $\sim10^{-2}$ meV in Ref.~\onlinecite{Wain}. On the other hand, in our work, for the parameter values $Z=0$, $J=0.5$, $\varphi=\pi/2$, $S=5/2$ and $m'=-1/2$, the numerical value of the equilibrium quantum spin torque (EQST) is $0.04$ meV. Thus, the value of the EQST in our work is of the same order of magnitude as the value of the ESTT predicted in Ref.~\onlinecite{Wain}.}\par
{ The equilibrium spin current/torque in superconductor-ferromagnet-superconductor junctions with inhomogeneous magnetization was studied in Ref.~\onlinecite{alid}. The authors pointed out that there are discontinuous jumps in the equilibrium spin current or torque whenever the junction undergoes a $0-\pi$ transition. They find numerically that the spin current or torque is symmetric with respect to the phase difference between the two superconductors. They also show that, for certain values of the thickness of the ferromagnetic layer, a pure spin current can flow through the junction without any charge current. Similarly, in our work the quantum spin torque is finite even when the charge current vanishes. However, in contrast to their work, this finite quantum spin torque is antisymmetric with respect to the phase difference between the two superconductors.}
\begin{figure}
\centering{\includegraphics[width=.7\linewidth]{Figspin7.pdf}}
\caption{\small \sl EQST as a function of interface barrier strength ($Z$). Parameters are $\Delta_{0}=1 meV$, $I_{0}=e\Delta_{0}/\hbar$. {EQST decreases with increase of $Z$ and in the tunneling regime ($Z\rightarrow$large) vanishes.}}
\end{figure}
\subsection{Physical picture: How does EQST arise?}
To understand the physical basis of the equilibrium quantum spin torque, we go back to Fig.~2. When the Josephson supercurrent enters the first ferromagnetic layer, {it} becomes spin polarized in the direction of magnetization of the first ferromagnetic layer. This spin polarized {supercurrent} then interacts with the {magnetic impurity} through the exchange coupling, and there is a finite probability for a mutual spin flip. One should note that this is a probability, not a certainty, since the interaction of {spin polarized supercurrents} is quantum in nature. Thus, while before the interaction the {supercurrent wave function and the {magnetic impurity} wave function} are completely independent, after the interaction both are entangled, in the superposed state $\frac{m'}{2} |No-flip\rangle+\frac{f_{2}}{2}|Mutual-flip\rangle$; see the paragraph below Eq.~\ref{eqq2}.
This finite probability of spin flip scattering implies that the direction of the {supercurrent's} spin polarization is now also in a superposition of being polarized along the magnetization of the ferromagnetic layers or not. Since the magnetization vectors of both ferromagnetic layers point in the same direction, the magnetization direction of the second ferromagnetic layer will now differ from the {supercurrent's spin polarization state}, which {is} in a superposition. Thus, when this {supercurrent} enters the second ferromagnetic layer, the magnetic moment of the second ferromagnetic layer will exert a torque on that part of the {supercurrent wave function} which is not aligned with the ferromagnet's magnetization, in order to rotate {its} spin state along the direction of magnetization, while leaving the non-spin-flipped component of the {supercurrent's wave function} as it is. From conservation of spin angular momentum, the spin-flipped component of the {supercurrent's wave function} will also exert an equal and opposite torque on the magnetic moment of the second ferromagnetic layer. In this way, a torque arises even though the ferromagnets are aligned. However, for the no-flip process the wave function is not in a superposition; there is only a single no-flip component. The {spin polarized supercurrent} does not flip its spin when interacting with the {magnetic impurity}. Thus, for no-flip scattering, the spin direction of the {spin polarized supercurrent} and the magnetization direction of the ferromagnets remain the same. Hence the equilibrium quantum spin torque vanishes for the no-flip process, but for the spin-flip process it is finite.
\section{Experimental realization and Conclusions}
The experimental detection of the novel phenomena pointed out in this work should not be difficult. Superconductor-Ferromagnet-Ferromagnet-Superconductor (S-F-F-S) junctions have been fabricated experimentally for quite some time now\cite{Col}. Doping a magnetic adatom or magnetic impurity into an S-F-F-S junction with identical magnetization of the ferromagnets would experimentally implement our setup, as shown in Fig.~2.
In conclusion, we have presented an exhaustive study of the nature of the equilibrium spin torque in the presence of a {magnetic impurity} in our hybrid system. We focus on the situation when the magnetic moments of the ferromagnetic layers are parallel. We identify spin flip scattering as critical in inducing a torque in such a configuration. Further, we see that one can control the sign of this spin-flip-scattering-induced equilibrium quantum spin torque via the exchange interaction of the spin flipper as well as via the phase difference across the two superconductors. Tuning the sign of the equilibrium quantum spin torque leads to control over the direction of magnetization of the ferromagnets. This has important implications for various spintronic devices, since by changing the direction of magnetization one can create faster magnetic random access memories\cite{Myer}.
{\section{Appendix: Analytical expression for equilibrium quantum spin torque}}
{From the Andreev bound state energies, using Eq.~\ref{ff2}, we can calculate the equilibrium spin torque ($\tau^{eq}$). For the transparent regime ($Z=0$) we find:
\begin{equation}
\tau^{eq}\mid_{\theta\rightarrow 0}=\frac{\Delta_{0}^2}{2\pi}\int_{0}^{2\pi}\Big(\tanh\Big(\frac{\beta M_{1}}{2}\Big){M_{1}^{\prime}}+\tanh\Big(\frac{\beta M_{2}}{2}\Big){M_{2}^{\prime}}+\tanh\Big(\frac{\beta M_{3}}{2}\Big){M_{3}^{\prime}}+\tanh\Big(\frac{\beta M_{4}}{2}\Big){M_{4}^{\prime}}\Big)d(k_{F}a)
\label{eqn}
\end{equation}
\begin{eqnarray}
\mbox{ where } M_{1(2)}=&&\Delta_{0}\sqrt{D-\frac{1}{2}\sqrt{A+B}\pm\frac{1}{2}\sqrt{2A-B-\frac{2C}{\sqrt{A+B}}}},\nonumber\\
M_{1(2)}^{\prime}=&&-\frac{1}{2M_{1(2)}}\Big(-D^{\prime}-\frac{A^{\prime}+B^{\prime}}{4\sqrt{A+B}}\pm\frac{2A^{\prime}-B^{\prime}+\frac{C(A^{\prime}+B^{\prime})}{(A+B)^{3/2}}-\frac{2C^{\prime}}{(A+B)}}{4\sqrt{2A-B-\frac{2C}{\sqrt{A+B}}}}\Big),\nonumber\\
M_{3(4)}=&&\Delta_{0}\sqrt{D+\frac{1}{2}\sqrt{A+B}\pm\frac{1}{2}\sqrt{2A-B+\frac{2C}{\sqrt{A+B}}}}\nonumber\\
\mbox{ and }M_{3(4)}^{\prime}=&&-\frac{1}{2M_{3(4)}}\Big(-D^{\prime}+\frac{A^{\prime}+B^{\prime}}{4\sqrt{A+B}}\pm\frac{2A^{\prime}-B^{\prime}-\frac{C(A^{\prime}+B^{\prime})}{(A+B)^{3/2}}+\frac{2C^{\prime}}{(A+B)}}{4\sqrt{2A-B+\frac{2C}{\sqrt{A+B}}}}\Big),\nonumber
\end{eqnarray}
The explicit form of $A$, $B$, $C$, $D$, $A^{\prime}$, $B^{\prime}$, $C^{\prime}$, $D^{\prime}$ in Eq.~\ref{eqn} is
\begin{align}
\begin{split}
A={}&4L_{1}^2-\frac{2}{3}L_{2},\\
B={}&\frac{2^{1/3}X_{1}}{3(X_{2}+\sqrt{X_{2}^2-4X_{1}^3})}+\frac{(X_{2}+\sqrt{X_{2}^2-4X_{1}^3})^{1/3}}{2^{1/3}3},\\
C={}&8L_{1}^{3}-2L_{1}L_{2}+L_{3},\\
D={}&L_{1},\\
A^{\prime}={}&-8L_{1}K_{1}-\frac{2}{3}K_{2},\\
B^{\prime}={}&\frac{2^{1/3}X_{1}^{\prime}}{3(X_{2}+\sqrt{X_{2}^2-4X_{1}^3})^{1/3}}-\frac{2^{1/3}X_{1}Y}{9(X_{2}+\sqrt{X_{2}^2-4X_{1}^3})^{4/3}},\\
C^{\prime}={}&-192L_{1}^{2}K_{1}+16L_{2}K_{1}-16L_{1}K_{2}+8K_{3},\\
D^{\prime}={}&K_{1},\\
\mbox{ where }\\
X_{1}={}&L_{2}^{2}-12L_{1}L_{3}-12L_{4},\\
X_{2}={}&2L_{2}^{3}-36L_{1}L_{2}L_{3}-432L_{1}^{2}L_{4}+27L_{3}^{2}+72L_{2}L_{4},\\
X_{1}^{\prime}={}&2L_{2}K_{2}+12L_{3}K_{1}-12L_{1}K_{3},\\
Y={}&Y^{\prime}+\frac{X_{2}Y^{\prime}-6X_{1}^{2}X_{1}^{\prime}}{\sqrt{X_{2}^2-4X_{1}^3}},\\\nonumber
\end{split}
\end{align}
\begin{align}
\begin{split}
Y^{\prime}={}&6L_{2}^2K_{2}-36L_{1}L_{3}K_{2}+36L_{2}L_{3}K_{1}+864L_{1}L_{4}K_{1}+54L_{3}K_{3}+72L_{4}K_{2},\\
L_{1}={}&P_{1}(S,m',f_{2},h,J,k_{F}a)+P_{2}(S,m',f_{2},h,J,k_{F}a)\cos(\varphi),\\
L_{2}={}&P_{3}(S,m',f_{2},h,J,k_{F}a)\cos(2\varphi)+P_{4}(S,m',f_{2},h,J,k_{F}a)\\
{}&\cos(\varphi)+P_{5}(S,m',f_{2},h,J,k_{F}a),\\
L_{3}={}&P_{6}(S,m',f_{2},h,J,k_{F}a)+P_{7}(S,m',f_{2},h,J,k_{F}a)\cos(\varphi)\\
{}&+P_{8}(S,m',f_{2},h,J,k_{F}a)\cos(2\varphi)+P_{9}(S,m',f_{2},h,J,k_{F}a)\\
{}&\cos(\varphi)\cos(2\varphi),\\
L_{4}={}&P_{10}(S,m',f_{2},h,J,k_{F}a)+P_{11}(S,m',f_{2},h,J,k_{F}a)\cos(\varphi)\\
{}&+P_{12}(S,m',f_{2},h,J,k_{F}a)\cos(2\varphi)+P_{13}(S,m',f_{2},h,J,k_{F}a)\\
{}&\cos(\varphi)\cos(2\varphi)+P_{14}(S,m',f_{2},h,J,k_{F}a)\cos(4\varphi),\\
K_{1}={}&f_{2}(P_{15}(S,m',f_{2},h,J,k_{F}a)\sin(\varphi)+P_{16}(S,m',f_{2},h,J,k_{F}a)),\\
K_{2}={}&f_{2}(P_{17}(S,m',f_{2},h,J,k_{F}a)+P_{18}(S,m',f_{2},h,J,k_{F}a)\cos(\varphi)\\
{}& +P_{19}(S,m',f_{2},h,J,k_{F}a)\sin(\varphi)+P_{20}(S,m',f_{2},h,J,k_{F}a)\sin(2\varphi)),\\
K_{3}={}&f_{2}(P_{21}(S,m',f_{2},h,J,k_{F}a)+P_{22}(S,m',f_{2},h,J,k_{F}a)\cos(\varphi)\\
{}&+P_{23}(S,m',f_{2},h,J,k_{F}a)\sin(\varphi)+P_{24}(S,m',f_{2},h,J,k_{F}a)\\
{}&\sin(2\varphi)+P_{25}(S,m',f_{2},h,J,k_{F}a)\sin(\varphi)\cos(2\varphi)).\nonumber
\end{split}
\end{align}
Here, $P_{i}$ ($i=1,2,...,25$) are functions of all the parameters: the exchange interaction ($J$), the magnetization of the ferromagnets ($h$), the spin ($S$) and magnetic moment ($m'$) of the {magnetic impurity}, the phase ($k_{F}a$) accumulated in the ferromagnetic region, and the spin flip probability of the {magnetic impurity} ($f_{2}$). Since these are large expressions we do not write them explicitly here. For the no-flip case the spin flip probability of the {magnetic impurity} is $f_{2}=0$. Thus, from the above expressions, $K_{1}$, $K_{2}$, $K_{3}$ and also $A^{\prime}$, $B^{\prime}$, $C^{\prime}$ and $D^{\prime}$ vanish. Therefore, from Eq.~\ref{eqn}, $M_{1(2)}^{\prime}=0$ and $M_{3(4)}^{\prime}=0$, implying that for the no-flip case the equilibrium quantum spin torque vanishes ($\tau^{eq}\mid_{\theta\rightarrow0}=0$).}\\
\section{Ethics statement.}
This work did not involve any collection of human or animal data.
\section{Data accessibility statement.}
This work does not have any experimental data.
\section{Competing interests statement}
We have no competing interests.
\section{Authors' contributions}
C.B. conceived the proposal, S.P. did the calculations on the advice of C.B., C.B. and S.P. analyzed the results and wrote the paper. Both authors reviewed the manuscript.
\section{Funding}
This work was supported by the grant ``Non-local correlations in nanoscale systems: Role of decoherence, interactions, disorder and pairing symmetry'' from SCIENCE \& ENGINEERING RESEARCH BOARD, New Delhi, Government of India, Grant No. EMR/2015/001836,
Principal Investigator: Dr. Colin Benjamin, National Institute of Science Education and Research, Bhubaneswar, India.
Topological insulators (TI) have attracted broad interest in recent years\cite{RE-1,RE-2}. The unique property of a TI is that the bulk state has an energy gap while the surface state is gapless. Topological properties have also been proposed for three dimensional (3D) semimetals\cite{Weyl-1,Weyl-2,RE-3,Weyl-response,DSM-rev}. Up to now, three kinds of topological semimetal have been discovered, i.e., the 3D Dirac semimetal (DSM)\cite{DSM-rev,DSM-1,Na3Bi,Cd3As2,BaAgBi,Cava}, the Weyl semimetal (WSM)\cite{Weyl-1,Weyl-2,HgCr2Se4,balets} and the node-line semimetal (NLS)\cite{NSL-1,NSL-2,NSL-3,NSL-4}. The 3D DSM has a four-fold degeneracy point formed by the crossing of two doubly degenerate linear bands. Combining crystal symmetry and time reversal symmetry, the 3D DSM can be robust against external perturbations. Based on band structure calculations, several materials have been proposed to be 3D DSMs\cite{DSM-rev,DSM-1,Na3Bi,Cd3As2,BaAgBi,Cava} and some of them have already been confirmed by experiments\cite{DSMExp-1,DSMExp-3,DSMExp-4,DSMExp-5}. If one breaks time reversal symmetry\cite{Weyl-1,HgCr2Se4,balets,axion} or inversion symmetry\cite{Weyl-inver-1,Weyl-inver-2,TaAs,TaAs-hasan}, the doubly degenerate bands will split; consequently the 3D DSM evolves into a WSM. Very recently, the predictions of a WSM in the TaAs family\cite{TaAs,TaAs-hasan} have been confirmed experimentally\cite{TaAs-exp-1,TaAs-exp-2,TaAs-exp-3,TaAs-exp-4}.
Unlike the DSM and WSM, whose band crossing points are located at separate $k$ points in the Brillouin zone (BZ), for the NLS the crossing points around the Fermi level form a closed loop. Several compounds have been proposed as NLS candidates, including MTC\cite{NSL-2}, Bernal graphite\cite{NSL-mat-2}, hyperhoneycomb lattices\cite{NSL-mat-3} and the antiperovskites Cu$_{3}$PdN\cite{NSL-3,NSL-4} and Cu$_{3}$NZn\cite{NSL-4}. When SOC is neglected, for a system with band inversion, time reversal symmetry together with inversion symmetry or mirror symmetry will guarantee a node line in the 3D BZ\cite{NSL-2,NSL-3,NSL-4,TaAs,NSL-mat-4,NSL-mat-5}. As with TIs and WSMs, the NLS also has a characteristic surface state, namely, a \emph{drumhead}-like state\cite{NSL-1,NSL-2,NSL-3,NSL-4}. Such a 2D flat-band surface state may provide a route to achieve high temperature superconductivity\cite{NSL-app-1,NSL-app-2}.
In this article, based on first-principles calculations and effective model
analysis, we propose that CaTe in CsCl-type structure is a NLS with drumhead
like surface flat bands when SOC is ignored. As shown in Fig. 1(b), around
the $M$ point, there are three node-line rings, which are perpendicular to
each other. When SOC is included, these three node-line rings evolve into
two Dirac points along the $M-R$ line. The Dirac points are robust and
protected by the $C_{4}$ rotational symmetry. If the $C_{4}$ symmetry is
broken, the system becomes a strong topological insulator with $Z_{2}$
indices (1;000).
\begin{figure}[tbh]
\center\includegraphics[scale=0.4]{Fig1-crystal-BZ.eps}
\caption{(color online). (a) Crystal structure of CaTe in CsCl type phase. (b)The 3D BZ and
projected (001) two dimensional (2D) BZ of CaTe. The three dashed circles sketch the
three line nodes around the $M$ point. The blue circle is
parallel to the $k_{z}=0$ plane, the red circle is parallel to the $k_{x}=0$ plane and
the green circle is parallel to the $k_{y}=0$ plane.}
\label{Fig1-cry}
\end{figure}
\section{Crystal structure and method}
As one member of the alkaline-earth chalcogenides, CaTe has attracted
tremendous interest because of its technological applications ranging from
catalysis to luminescence\cite{Crystal-1,Crystal-2,Crystal-3,Crystal-4,Crystal-5}. CaTe undergoes a phase
transition from the NaCl-type structure at ambient conditions to the CsCl-type
structure at a hydrostatic pressure of about 33 GPa\cite{Crystal-1,Crystal-2}.
The structure of CaTe in the CsCl-type phase is shown in Fig.~1(a). The space group of
this phase is $Pm\overline{3}m$ (NO. 221). The electronic band structure
calculations have been carried out using the full potential linearized
augmented plane wave method as implemented in WIEN2K package \cite{WIEN2K}.
To obtain accurate band inversion strength and band order, the modified
Becke-Johnson exchange potential together with local-density approximation
for the correlation potential (MBJLDA) has been applied \cite{mBJ}. The
plane-wave cutoff parameter R$_{MT}$K$_{max}$ is set to 7 and a $16\times
16\times 16$ mesh was used for the BZ integral. The SOC interaction is
included by using the second-order variational procedure.
\section{Electronic structure}
Firstly, we calculate the band structure of CaTe; the result without SOC is shown in Fig.~2(a). By checking the wave functions, we find that the valence bands and conduction bands are mainly contributed by the 5$p_{z}$ (blue) state of Te and the 3$d_{z^{2}}$ (red) state of Ca, respectively, as shown in Fig.~2(a). The band inversion happens at the $M$ point, where the energy of the Te-5$p_{z}$ state is higher than that of the Ca-3$d_{z^{2}}$ state by about 0.75 eV. Interestingly, this kind of band inversion is not caused by the SOC, which is different from most topological materials\cite{RE-1,RE-2}. We calculate the electronic structure of CaTe under tensile strain to check the origin of the band inversion at the $M$ point. The energy difference between the Te-5$p_{z}$ state and the Ca-3$d_{z^{2}}$ state decreases with increasing tensile strain. We find that when $a\geq 1.05a_{0}$ the band inversion at the $M$ point disappears. With time reversal symmetry and inversion symmetry, the band inversion results in node lines, as proved in Ref.~\cite{NSL-2}.
\begin{figure}[tbh]
\center\includegraphics[scale=0.4]{Fig2-band-struct.eps}
\caption{(color online). (a) Electronic structure of CaTe without SOC. The weight of the Te-5$p_{z}$ (Ca-3$d_{z^{2}}$) state is proportional to the width of the blue (red)
curves. (b) Electronic structure of CaTe with SOC. Along $\Gamma -M$, $M-X$
and $M-M^{\prime }$, a small gap is opened. The Dirac point at the $M-R$
line is protected by $C_{4}$ rotational symmetry. The inset is the detailed
structure around the M point. (see main text for detailed description).}
\label{Fig2-band}
\end{figure}
The effective Hamiltonian of the node line around the $M$ point can be established by using the $\overset{\rightharpoonup }{k}\cdot \overset{\rightharpoonup }{p}$ method. Considering the crystal symmetry at the $M$ point and time reversal symmetry, the effective Hamiltonian can be written in the following form:
\begin{equation*}
H(\overset{\rightharpoonup }{k})=g_{0}(\overset{\rightharpoonup }{k})\sigma _{0}+g_{x}(\overset{\rightharpoonup }{k})\sigma _{x}+g_{z}(\overset{\rightharpoonup }{k})\sigma _{z},
\end{equation*}
where $\sigma _{x}$ and $\sigma _{z}$ are Pauli matrices, $\sigma _{0}$ is the unit matrix, $g_{0}(\overset{\rightharpoonup }{k})=M_{0}-B_{0}(k_{x}^{2}+k_{y}^{2})-C_{0}k_{z}^{2}$, $g_{x}(\overset{\rightharpoonup }{k})=Ak_{x}k_{y}k_{z}$, and $g_{z}(\overset{\rightharpoonup }{k})=M_{z}-B_{z}(k_{x}^{2}+k_{y}^{2})-C_{z}k_{z}^{2}$. This system has both time reversal symmetry and inversion symmetry; thus, the component of $\sigma _{y}$ is zero\cite{NSL-2}. The eigenvalues of this $2\times 2$ effective Hamiltonian are $E(\overset{\rightharpoonup }{k})=g_{0}(\overset{\rightharpoonup }{k})\pm \sqrt{g_{x}^{2}(\overset{\rightharpoonup }{k})+g_{z}^{2}(\overset{\rightharpoonup }{k})}$. When $g_{x}(\overset{\rightharpoonup }{k})=0$ and $g_{z}(\overset{\rightharpoonup }{k})=0$, a nodal line emerges. It can easily be checked that the equation $g_{z}(\overset{\rightharpoonup }{k})=0$ has a solution only when $M_{z}B_{z}>0$ and $M_{z}C_{z}>0$, which are also the conditions for band inversion. On the other hand, $g_{x}(\overset{\rightharpoonup }{k})=0$ confines the node lines to three mutually perpendicular planes (namely, the $k_{x}=0$ plane, the $k_{y}=0$ plane and the $k_{z}=0$ plane), as illustrated in Fig.~1(b). Since $g_{0}(\overset{\rightharpoonup }{k})$ does not vanish, the electron-hole symmetry is broken; consequently, the node lines have a finite energy dispersion.
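To make the geometry of the rings explicit (this is just the two conditions above worked out plane by plane): in the $k_{z}=0$ plane, $g_{x}=Ak_{x}k_{y}k_{z}$ vanishes identically, and
\begin{equation*}
g_{z}=0\;\Rightarrow\;k_{x}^{2}+k_{y}^{2}=\frac{M_{z}}{B_{z}},
\end{equation*}
i.e.\ a ring of radius $\sqrt{M_{z}/B_{z}}$, which is real precisely when $M_{z}B_{z}>0$. In the $k_{x}=0$ ($k_{y}=0$) plane the condition instead reads $B_{z}k_{y(x)}^{2}+C_{z}k_{z}^{2}=M_{z}$, which additionally requires $M_{z}C_{z}>0$, in agreement with the band-inversion conditions stated above.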
When SOC is considered, the three node lines evolve into two Dirac points along the $M-R$ line, as shown in Fig.~2(b). At the $M$ point, the two states near the Fermi level belong to the irreducible representations $\Gamma _{7}^{-}$ and $\Gamma _{6}^{+}$, respectively. Along the $M-X$ line, the two bands have the same irreducible representation $\Gamma _{5}$, as shown in Fig.~2(b); thus they can hybridize with each other and open a small gap (about 50 meV). Along both the $\Gamma -M$ line and the $M-M^{\prime }$ line, the two bands around the Fermi level also belong to the same irreducible representation, so there is no band crossing along these lines. Since the band splitting is determined by the SOC, one can recover the NLS by doping with lighter atoms such as Se or S.
Along the $M-R$ line, which preserves the $C_{4}$ rotation symmetry, the two states with $\Gamma _{7}^{-}$ and $\Gamma _{6}^{+}$ symmetry at the $M$ point evolve into $\Gamma _{7}$ and $\Gamma _{6}$; thus the hybridization between these two bands is forbidden, and there is a Dirac point, as shown in Fig.~2(b). When the $C_{4}$ rotational symmetry is broken, e.g.\ by strain, the band crossing point disappears and this 3D DSM becomes a strong TI with $Z_{2}$ topological indices (1;000).
To understand the band inversion at the $M$ point and the topological property of this system, we derive a low-energy effective Hamiltonian at the $M$ point based on the projection-operator method\cite{BaAgBi}. The $M$ point has $D_{4h}$ symmetry as well as time reversal symmetry. As discussed above, at the $M$ point, the $\Gamma _{7}^{-}$ symmetry state has angular momentum $j_{z}=\pm 3/2$ and the $\Gamma _{6}^{+}$ symmetry state has angular momentum $j_{z}=\pm 1/2$. Therefore, using the basis ($|j_{z}=-\frac{1}{2}\rangle _{d},|j_{z}=+\frac{1}{2}\rangle _{d},|j_{z}=-\frac{3}{2}\rangle _{p},|j_{z}=+\frac{3}{2}\rangle _{p}$), the effective Hamiltonian around the $M$ point can be written as (see the APPENDIX for details):
\begin{widetext}
\begin{equation*}
H_{eff}=\left(
\begin{matrix}
M_{1}(\overset{\rightharpoonup }{k}) & 0 & Ak_{+}+B(\overset{\rightharpoonup }{k}) & D(\overset{\rightharpoonup }{k}) \\
0 & M_{1}(\overset{\rightharpoonup }{k}) & D^{\ast }(\overset{\rightharpoonup }{k}) & -Ak_{-}-B^{\ast }(\overset{\rightharpoonup }{k})
\\
Ak_{-}+B^{\ast }(\overset{\rightharpoonup }{k}) & D(\overset{\rightharpoonup }{k}) & M_{2}(\overset{\rightharpoonup }{k}) & 0 \\
D^{\ast }(\overset{\rightharpoonup }{k}) & -Ak_{+}-B(\overset{\rightharpoonup }{k}) & 0 & M_{2}(\overset{\rightharpoonup }{k})
\end{matrix}
\right)
\end{equation*}
\end{widetext}
where $M_{1}(\overset{\rightharpoonup }{k})=M_{10}+M_{11}(k_{x}^{2}+k_{y}^{2})+M_{12}k_{z}^{2}$, $M_{2}(\overset{\rightharpoonup }{k})=M_{20}+M_{21}(k_{x}^{2}+k_{y}^{2})+M_{22}k_{z}^{2}$, $B(\overset{\rightharpoonup }{k})=B_{1}k_{+}k_{z}^{2}+B_{2}(k_{x}^{3}+ik_{y}^{3})+iB_{3}k_{-}k_{x}k_{y}$, $D(\overset{\rightharpoonup }{k})=D_{1}(k_{x}^{2}-k_{y}^{2})k_{z}+iD_{2}k_{x}k_{y}k_{z}$, and $k_{\pm }=k_{x}\pm ik_{y}$. Along the $k_{z}$ axis (where $k_{x}=0$, $k_{y}=0$) the effective Hamiltonian is diagonal, and the eigenvalues are $E(\overset{\rightharpoonup }{k})=M_{1}(\overset{\rightharpoonup }{k})$ and $E(\overset{\rightharpoonup }{k})=M_{2}(\overset{\rightharpoonup }{k})$. As mentioned above, the Dirac points lie on the $M-R$ line, so it is interesting to discuss the effective model along this line. Since there is band inversion between $\left\vert j_{z}=\pm \frac{1}{2}\right\rangle _{d}$ and $\left\vert j_{z}=\pm \frac{3}{2}\right\rangle _{p}$ at the $M$ point, it is easy to obtain that $M_{10}<M_{20}$, $M_{22}<0<M_{12}$, and the Dirac points are located at $\overset{\rightharpoonup }{k_{c}}=(\frac{\pi }{a},\frac{\pi }{a},k_{zc}=\pm \sqrt{\frac{M_{20}-M_{10}}{M_{12}-M_{22}}})$. Neglecting higher-order terms, $E(\overset{\rightharpoonup }{k_{c}}+\overset{\rightharpoonup }{\delta k})$ can be expressed as $(M_{12}+M_{22})k_{zc}\delta k_{z}\pm \sqrt{(M_{12}-M_{22})^{2}k_{zc}^{2}\delta k_{z}^{2}+A^{2}(\delta k_{x}^{2}+\delta k_{y}^{2})}$, where $\delta k_{x,y,z}$ are small displacements from $\overset{\rightharpoonup }{k_{c}}$. In the vicinity of $\overset{\rightharpoonup }{k_{c}}$ the band dispersion is linear; thus our effective Hamiltonian describes nothing but 3D massless Dirac fermions.
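The statements above can be illustrated numerically. The sketch below uses arbitrary, assumed coefficients that merely obey the band-inversion conditions derived in the text ($M_{10}<M_{20}$ and $M_{22}<0<M_{12}$; none of these values are fitted to CaTe) and checks that the two branches touch at $k_{zc}$ and split linearly away from it:

```python
from math import sqrt, isclose

# Illustrative (assumed) coefficients obeying M10 < M20 and M22 < 0 < M12.
M10, M20, M12, M22, A = -0.375, 0.375, 1.2, -1.0, 1.0

# Dirac-point position on the M-R line, k_zc = sqrt((M20-M10)/(M12-M22)).
kzc = sqrt((M20 - M10) / (M12 - M22))

def E(dkx, dky, dkz, s):
    # Leading-order dispersion around k_c for branch s = +1 / -1.
    return (M12 + M22) * kzc * dkz + s * sqrt(
        (M12 - M22) ** 2 * kzc ** 2 * dkz ** 2
        + A ** 2 * (dkx ** 2 + dky ** 2))

# The two branches touch exactly at the Dirac point ...
assert E(0, 0, 0, +1) == E(0, 0, 0, -1) == 0.0
# ... and the splitting grows linearly with the in-plane displacement.
gap1 = E(1e-4, 0, 0, +1) - E(1e-4, 0, 0, -1)
gap2 = E(2e-4, 0, 0, +1) - E(2e-4, 0, 0, -1)
assert isclose(gap2, 2 * gap1)
```

The in-plane splitting is $2A\,|\delta k_{\parallel}|$, i.e.\ the velocity in the plane is set by $A$ alone, while along $k_{z}$ it is set by $(M_{12}-M_{22})k_{zc}$.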
\begin{figure}[tbph]
\centering\includegraphics[scale=0.4]{Fig3-surface-state.eps}
\caption{(color online). The surface states of the (001) surface of CaTe. (a) Surface
states calculated without SOC. The flat surface state around the Fermi level is
denoted by the red color. The inset shows the detailed band structure around the
$\overline{M}$ point. (b) Surface states calculated with SOC. The red
dots are the projected bulk Dirac nodes. The red lines inside the bulk gap
at the $\overline{X}$ point are surface states and the blue circle denotes the
surface Dirac cones. The inset shows the detailed band structure around the
$\overline{X}$ point.}
\label{Fig3-surface-state}
\end{figure}
The band inversion at the $M$ point and the Dirac nodes in CaTe suggest the existence of topologically nontrivial surface states. To study the surface states in CaTe we use a 200-unit-cell-thick (001) slab with the top (bottom) surface terminated by Ca (Te) atoms. The surface states are then calculated using the tight-binding method. The hopping parameters are determined from maximally localized Wannier functions (MLWFs)\cite{MLWFs}, which are projected from the Bloch states derived from first-principles calculations. Figs.~3(a) and 3(b) show the surface states of the CaTe (001) surface without and with SOC, respectively. When SOC is ignored, the system is a NLS and possesses a nearly flat surface band around the Fermi energy. As shown in Fig.~3(a), our numerical results show that the nearly flat ``drumhead'' surface state appears in the interiors of the projected nodal-line rings on the (001) surface around the $\overline{M}$ point. Since the slab we used has two surfaces, there are two surface states, shown as the red lines in Fig.~3(a). The particle-hole symmetry is broken by the nonzero term $g_{0}(\overset{\rightharpoonup }{k})$; thus these two surface bands are not perfectly flat, having a bandwidth of about 70 meV. This type of 2D flat band has been proposed as a novel route to achieve high temperature superconductivity\cite{NSL-app-1,NSL-app-2}.
When the SOC is included, the three node lines are gapped out and become a pair of Dirac points along the $M-R$ line; thus the NLS becomes a 3D DSM. There is a bulk Dirac node projected onto the $\overline{M}$ point (the red dot), as shown in Fig.~3(b). Along $\overline{M}-\overline{X}$ there is also a projected bulk Dirac node, which is located near the $\overline{X}$ point and denoted by a red dot. Fig.~3(b) clearly shows the gapped bulk states along the $\overline{\Gamma }-\overline{X}$ direction and the existence of surface Dirac cones (in the blue circle) due to the topologically nontrivial $Z_{2}$ indices, as in the cases of Na$_{3}$Bi\cite{Na3Bi} and Cd$_{3}$As$_{2}$\cite{Cd3As2}.
\section{Conclusion}
In summary, based on first-principles calculations and effective model
analysis, we suggest that CaTe in CsCl-type structure is a NLS when SOC is
ignored. There are three node-line rings which are perpendicular to each
other around the $M$ point. With band inversion at $M$ point, this NLS is
protected by the time reversal symmetry and inversion symmetry. When the SOC
is included, the three node-line rings become a pair of Dirac points. These
Dirac nodes are robust, protected by the $C_{4}$ crystal symmetry, and the
system becomes a DSM.
\section{Acknowledgement}
The work was supported by the National Key Project for Basic Research of
China (Grant No. 2014CB921104), NSFC under Grants No. 11525417 and 11374147.
The project is also funded by Priority Academic Program Development of
Jiangsu Higher Education Institutions. S.S. was supported by NSF DMR (Grant
No. 1411336). Y.D. is supported by the program B for Outstanding PhD
candidate of Nanjing University.
\section{APPENDIX}
The conduction and valence bands of CaTe at the $M$ point are mainly contributed by four states: $|j_{z}=-\frac{1}{2}\rangle _{d}$, $|j_{z}=+\frac{1}{2}\rangle _{d}$, $|j_{z}=-\frac{3}{2}\rangle _{p}$ and $|j_{z}=+\frac{3}{2}\rangle _{p}$; we thus use these states as the basis to build the effective model Hamiltonian at the $M$ point of the BZ. As a $4\times 4$ Hermitian matrix, the effective Hamiltonian can be expanded as $H=\underset{i=1}{\overset{16}{\sum }}f_{i}(\overset{\rightharpoonup }{k})\Gamma _{i}$, where $f_{i}(\overset{\rightharpoonup }{k})$ are functions of the momentum \textbf{k}. $\Gamma _{i}$ are Dirac matrices, which can be written as the direct product of $\sigma _{i}$ and $\tau _{j}$ ($\sigma _{i=1,2,3,4}$, $\tau _{j=1,2,3,4}$ are the unit matrix $\sigma _{0}$ and the three Pauli matrices $\sigma _{x}$, $\sigma _{y}$ and $\sigma _{z}$).
Under the operations of the crystal symmetry and time reversal symmetry, the Hamiltonian should be invariant. This requires that the function $f_{i}(\overset{\rightharpoonup }{k})$ and the associated $\Gamma _{i}$ matrices belong to the same irreducible representation. Thus the key problem is to determine the irreducible representations for $f_{i}(\overset{\rightharpoonup }{k})$ and the $\Gamma _{i}$ matrices, which can be done by the projection-operator method.
\begin{table*}[tbh]
\centering
\begin{tabular}{ccc}
\hline
$\Gamma $ matrices & representation & T \\ \hline
$\Gamma _{1}$,$\Gamma _{13}$ & $R_{1}$ & + \\
$\Gamma _{4},\Gamma _{16}$ & $R_{2}$ & - \\
$\{\Gamma _{2},\Gamma _{3}\},\{\Gamma _{14},\Gamma _{15}\}$ & $R_{5}$ & - \\
$\Gamma _{7}$ & $R_{10}$ & - \\
$\Gamma _{11}$ & $R_{10}$ & + \\
$\Gamma _{6}$ & $R_{11}$ & - \\
$\Gamma _{10}$ & $R_{11}$ & + \\
$\{\Gamma _{8},\Gamma _{9}\}$ & $R_{12}$ & - \\
$\{\Gamma _{5},\Gamma _{12}\}$ & $R_{12}$ & + \\ \hline
$d(k)$ & representation & T \\ \hline
$C$, $k_{z}^{2}$, $k_{x}^{2}+k_{y}^{2}$ & $R_{1}$ & + \\
$k_{x}^{2}-k_{y}^{2}$ & $R_{3}$ & + \\
$k_{x}k_{y}$ & $R_{4}$ & + \\
$\{k_{x}k_{z}$, $k_{y}k_{z}\}$ & $R_{5}$ & + \\
$k_{z},k_{z}^{3},(k_{x}^{2}+k_{y}^{2})k_{z}$ & $R_{9}$ & - \\
$k_{x}k_{y}k_{z}$ & $R_{10}$ & - \\
$\{k_{x}^{2}-k_{y}^{2}\}k_{z}$ & $R_{11}$ & - \\
$(k_{x},k_{y}),(k_{x}^{3},k_{y}^{3}),(k_{x}^{2}k_{y},k_{y}^{2}k_{x}),(k_{x}k_{z}^{2},k_{y}k_{z}^{2})$ & $R_{12}$ & - \\ \hline
\end{tabular}
\caption{The character table of the Dirac $\Gamma$ matrices and the
polynomials of the momentum $k$ for CaTe at the $M$ point.}
\end{table*}
Because the SOC is included, we use the double space group. Using the
projection-operator method, we present the irreducible representations of the
Dirac $\Gamma $ matrices and of the polynomials of
$\overset{\rightharpoonup }{k}$, together with their transformation under
time reversal, in Table I. With Table I, the effective model Hamiltonian of
CaTe at the $M$ point can be easily expressed as
$H=f_{1}(\overset{\rightharpoonup }{k})\Gamma _{1}
+f_{13}(\overset{\rightharpoonup }{k})\Gamma _{13}
+f_{8}(\overset{\rightharpoonup }{k})\Gamma _{8}
-f_{9}(\overset{\rightharpoonup }{k})\Gamma _{9}
+D_{1}(k_{x}^{2}-k_{y}^{2})k_{z}\Gamma _{6}
-D_{2}k_{x}k_{y}k_{z}\Gamma _{7}$,
where
$f_{1}(\overset{\rightharpoonup }{k})=C_{1}+m_{1}(k_{x}^{2}+k_{y}^{2})+n_{1}k_{z}^{2}$,
$f_{13}(\overset{\rightharpoonup }{k})=C_{13}+m_{13}(k_{x}^{2}+k_{y}^{2})+n_{13}k_{z}^{2}$,
$f_{8}(\overset{\rightharpoonup }{k})=Ak_{x}+B_{1}k_{x}^{3}+B_{2}k_{x}k_{z}^{2}+B_{3}k_{y}^{2}k_{x}$,
$f_{9}(\overset{\rightharpoonup }{k})=Ak_{y}+B_{1}k_{y}^{3}+B_{2}k_{y}k_{z}^{2}+B_{3}k_{x}^{2}k_{y}$;
$\Gamma _{1}=\sigma _{0}\otimes \tau _{0}$, $\Gamma _{13}=\sigma _{3}\otimes \tau _{0}$,
$\Gamma _{8}=\sigma _{1}\otimes \tau _{3}$, $\Gamma _{9}=\sigma _{2}\otimes \tau _{0}$,
$\Gamma _{6}=\sigma _{1}\otimes \tau _{1}$, $\Gamma _{7}=\sigma _{1}\otimes \tau _{2}$.
Comparing with the effective Hamiltonian, we have $M_{10}(M_{20})=C_{1}\pm C_{13}$,
$M_{11}(M_{21})=m_{1}\pm m_{13}$, $M_{12}(M_{22})=n_{1}\pm n_{13}$.
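A minimal numerical sketch (not from the paper; it assumes the standard convention that $\sigma_0$ is the identity and $\sigma_{1,2,3}$ are the Pauli matrices $\sigma_{x,y,z}$, for both the $\sigma$ and $\tau$ sets) builds the quoted $\Gamma$ matrices as Kronecker products and checks their algebra:

```python
# Sketch: build the Gamma matrices quoted above as Kronecker products of
# Pauli matrices and verify their algebra numerically.
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
     np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

def gamma(i, j):
    """Gamma = sigma_i (x) tau_j, with tau the second Pauli set."""
    return np.kron(s[i], s[j])

G1  = gamma(0, 0)   # Gamma_1  = sigma_0 (x) tau_0
G13 = gamma(3, 0)   # Gamma_13 = sigma_3 (x) tau_0
G8  = gamma(1, 3)   # Gamma_8  = sigma_1 (x) tau_3
G9  = gamma(2, 0)   # Gamma_9  = sigma_2 (x) tau_0
G6  = gamma(1, 1)   # Gamma_6  = sigma_1 (x) tau_1
G7  = gamma(1, 2)   # Gamma_7  = sigma_1 (x) tau_2

for G in (G1, G13, G8, G9, G6, G7):
    assert np.allclose(G, G.conj().T)        # all Hermitian
# the k-linear pair {Gamma_8, Gamma_9} anticommutes,
assert np.allclose(G8 @ G9 + G9 @ G8, 0)
# and Gamma_6, Gamma_7 anticommute with both members of the pair:
for A in (G6, G7):
    for B in (G8, G9):
        assert np.allclose(A @ B + B @ A, 0)
print("all Gamma-matrix checks passed")
```

The anticommutation of the $k$-linear terms with the cubic $\Gamma_6$, $\Gamma_7$ terms is what allows the latter to open gaps away from the band-touching point.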
\section{Introduction}
\label{sec1} %
In paper \cite{Parkes} E.J.~Parkes presented a
categorization of solutions of the equation dubbed the extended
reduced Ostrovsky equation (exROE). The equation studied has the
form
\begin{gather}
\frac{\partial}{\partial x}\left({\cal D}^2u + \frac 12pu^2 +
\beta u \right) + q{\cal D}u = 0, \qquad \mbox{where} \quad {\cal
D} = \frac{\partial}{\partial t} + u\frac{\partial}{\partial x}
\label{eq1}
\end{gather}
with $p$, $q$, and $\beta$ being constant coef\/f\/icients. This
equation was derived from the Hirota--Satsuma-type shallow water
wave equation considered in \cite{Morrison-Parkes} (for details
see~\cite{Parkes}).
For stationary solutions, i.e.\ solutions in the form of
travelling waves depending only on one variable $\chi = x - Vt -
x_0$, this equation reduces to the simple third-order ODE:
\begin{gather}
\frac{d}{d\chi}\left[w\frac{d}{d\chi}\left(w\frac{dw}{d\chi}\right)
+ \frac 12pw^2 + (pV + \beta)w \right] + qw\frac{dw}{d\chi} = 0,
\label{eq2}
\end{gather}
where $V$ stands for the wave speed and $w = u - V$. (Note that in
many contemporary papers, including \cite{Parkes}, authors call such
solutions simply ``travelling-wave solutions''. Such terminology seems
inappropriate, as nonstationary propagating waves are also travelling
waves. The term ``stationary waves'', widely used earlier, seems more
adequate for the waves considered here.) In
paper \cite{Parkes}, equation~\eqref{eq2} was reduced by means of a
series of transformations of dependent and independent variables
to an auxiliary equation whose solutions were actually categorized
subject to some restrictions on the equation coef\/f\/icients, viz.:
\begin{gather*}
p + q \ne 0, \qquad qV -\beta \ne 0
\end{gather*}
(one more restriction on the constant of integration for that
auxiliary equation, $B = 0$, was used in \cite{Parkes}). Under
these restrictions, solutions to equation~\eqref{eq2} were found in
analytical form and corresponding wave prof\/iles were illustrated
graphically. Among solutions obtained there are both periodic and
solitary type solutions including multivalued loop periodic waves
and loop-solitons.
Similar loop solutions to exROE and some other equations were
earlier obtained by Ji-Bin Li~\cite{Li} who came to the conclusion
that loop solutions actually are compound solutions which consist
of three dif\/ferent independent branches. These branches may be
used in various combinations representing several types of
stationary propagating singular waves (waves with inf\/inite
gra\-dients). This conclusion completely coincides with the
conclusion of paper \cite{Stepanyants} where a complete
classif\/ication of stationary solutions of ROE was presented. ROE,
derived by L.A.~Ostrov\-sky~\cite{Ostrovsky} in 1978 as a model for
the description of long waves in a rotating ocean (see~\cite{Stepanyants} and references therein), can be treated as a
particular case of exROE with $p = q$ and $\beta = 0$ (see~\cite{Parkes}).
\looseness=1
Below an analysis of stationary solutions to equation~\eqref{eq2} is
presented by the direct method avoiding any redundant
transformations of variables. The method used is based on the
phase plane concept and analogy of the equation studied with the
Newtonian equation for the point particle in a potential f\/ield.
Such an approach seems more vivid and free of the aforementioned
restrictions. This work can be considered also as complementary to
paper~\cite{Parkes} as the analysis presented may be helpful in
the understanding of basic properties of stationary solutions of
equation~\eqref{eq2}.
\section[Mechanical analogy, potential function and phase-plane method]{Mechanical analogy, potential function\\ and phase-plane method}
\label{sec2}
Equation \eqref{eq2} can be integrated once resulting in
\begin{gather}
w\frac{d}{d\chi}\left(w\frac{dw}{d\chi}\right) + \frac 12(p +
q)w^2 + (pV + \beta)w = C_1, \label{eq4}
\end{gather}
where $C_1$ is a constant of integration. By multiplying this
equation by $dw/d\chi$ and integrating once again, the equation
can be reduced to the form of energy conservation for a point
particle of unit mass moving in the potential f\/ield $P(w)$:
\begin{gather}
\frac 12 \left(\frac{dw}{d\chi}\right)^2 + P(w) = E, \label{eq5}
\end{gather}
where the ef\/fective ``potential energy'' as a function of
``displacement'' $w$ is
\begin{gather}
P(w) = \frac{p + q}{6}w - \frac{C_1}{w} - \frac{C_2}{w^2},
\label{eq6}
\end{gather}
and $C_2$ is another constant of integration. The constant $E =
-(pV + \beta)/2$ plays a role of the total energy of the particle,
i.e.\ the sum of the ``kinetic energy'', $K =
(1/2)(dw/d\chi)^2$, and the ``potential energy'', $P(w)$. As
follows from equation~\eqref{eq5}, real solutions can exist only for $E
\ge P(w)$. Various cases of the potential function \eqref{eq6} are
considered below and corresponding bounded solutions are
constructed. Unbounded solutions are not considered in this paper
as they are less interesting from the physical point of view;
nevertheless, their qualitative behavior becomes clear from the
general view of corresponding phase portraits.
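The reduction to the energy form can be confirmed symbolically. The following SymPy sketch (not part of the paper) substitutes $w'' = -P'(w)$ and $(w')^2 = 2\left[E - P(w)\right]$, with $P(w)$ from \eqref{eq6} and $E = -(pV+\beta)/2$, back into the left-hand side of \eqref{eq4} and recovers the constant $C_1$:

```python
# Sketch: algebraic check that (5)-(6) is a first integral of (4).
import sympy as sp

w, p, q, V, beta, C1, C2 = sp.symbols('w p q V beta C1 C2')
P = (p + q)/6*w - C1/w - C2/w**2           # potential (6)
E = -(p*V + beta)/2                        # "total energy"

wp2 = 2*(E - P)                            # (w')^2 from (5)
wpp = -sp.diff(P, w)                       # w'' from d/dchi of (5)

# w d/dchi (w w') = w (w'^2 + w w''); plug into the LHS of (4)
lhs4 = w*(wp2 + w*wpp) + sp.Rational(1, 2)*(p + q)*w**2 + (p*V + beta)*w
assert sp.simplify(lhs4 - C1) == 0
print("first integral (4) recovered from (5)-(6)")
```

The cancellation works for arbitrary $p$, $q$, $V$, $\beta$, $C_1$, $C_2$, so it covers both the particular case $p+q=0$ and the general case below.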
\section[Particular case: $p + q = 0$]{Particular case: $\boldsymbol{p + q = 0}$}
\label{sec3} %
Consider f\/irst a particular case when the coef\/f\/icients in
equation~\eqref{eq1} are such that $p + q = 0$. Note, this is one of
the cases which were omitted from consideration in paper~\cite{Parkes}. The potential function~\eqref{eq6} simplif\/ies
in this case; nevertheless, a variety of subcases can still be
distinguished depending on the constants $C_1$
and $C_2$. All these subcases are studied in detail below.
{\bf 3a.} If $C_2 = 0$, the potential function represents
a set of antisymmetric hyperbolas located either in the f\/irst and
third quadrants or in the second and fourth quadrants as shown in
Fig.~\ref{f01}a. The corresponding phase plane $(w, w')$, where
$w' = dw/d\chi$, is shown in Fig.~\ref{f01}b for $C_1 = 1$ (for
$C_1 = -1$ the phase plane is mirror symmetrical with respect to
the vertical axis). For other values of $C_1$ phase portraits are
qualitatively similar to that shown in Fig.~\ref{f01} for $C_1 =
1$.
\begin{figure}[t]
\centerline{\includegraphics[width=150mm]{Stepanyants-Fig01}}
\caption{a) Potential function for the case $p + q = 0$, $C_2 = 0$
and two values of $C_1$: $C_1 = 1$ (solid lines), and $C_1 = -1$
(dashed lines). Two dots illustrate possible motions of point
particles in the potential f\/ield. b) Phase plane corresponding to
the potential function with $C_1 = 1$ and dif\/ferent values of $E$.
Line 1: $E = -1$; line 2: $E = -0.5$; lines 3: $E = 0.5$; lines 4:
$E = 1$.}
\label{f01}%
\end{figure}
Analysis of the phase portrait shows that there are no bounded
solutions for any positive $E$; corresponding trajectories both in
the left half and right half of the phase plane go to inf\/inity on~$w$ (see, e.g., lines~3 and~4 in Fig.~\ref{f01}b).
Meanwhile, solutions bounded on $w$ do exist for negative values
of $E$ (i.e.\ for $V > - \beta/p$), but they possess inf\/inite
derivatives when $w = 0$. Consider, for instance, motion of an
af\/f\/ix along the line~2 in Fig.~\ref{f01}b ($C_1 = 1$) from $w' =
\infty$ towards the axis $w$ where $w' = 0$. The qualitative
character of the motion becomes clear if we interpret it in terms
of ``particle coordinate'' $w$ and ``particle velocity''~$w'$
treating~$\xi$ as the time. The motion originates at some ``time''
$\xi_0$ with inf\/inite derivative and zero ``particle coordinate''
$w = 0$. Then, the ``particle coordinate'' $w$ increases to some
maximum value $w_{\max} = -C_1/E$ ($E < 0$) as the ``particle
velocity'' is positive. Eventually it comes to the rest having
zero derivative $w' = 0$ and $w = w_{\max}$. Another independent
branch of solution for the same value of $E$ corresponds to the
af\/f\/ix motion along the line 1 from the previously described rest
point at axis $w$ towards $w' = -\infty$ and $w = 0$.
All bounded analytical solutions for this case can be presented in
the universal implicit form:
\begin{gather}
\xi(y) - \xi_0 = \pm\left[\arctan{\left(\sqrt{\frac{y}{1 -
y}}\right) - \sqrt{y(1 - y)}}\right], \label{eq7}
\end{gather}
where $y = -Ew/C_1$, $\xi = -\sqrt{2}(-E)^{3/2}\chi/C_1$ and
$\xi_0$ is an arbitrary constant of integration. This solution
consists of two independent branches which correspond to signs
plus or minus in front of the square brackets in equation~\eqref{eq7}.
Each branch is def\/ined only on a compact support of axis $\xi$:
either on $-\pi/2 \le \xi - \xi_0 \le 0$ or on $0 \le \xi - \xi_0
\le \pi/2$ (see lines~1 and~$1'$ in Fig.~\ref{f02}). With the
appropriate choice of constants $\xi_0$ one can create a variety
of dif\/ferent solutions, e.g., the $V$-shape wave (see lines~2
and~$2'$), or a smooth-crest compacton, i.e.\ a compound
solitary wave def\/ined only for $|\xi - \xi_0| \le \pi/2$ (see
lines 3 and $3'$). Using a translational invariance of solutions
and their independency of each other, one can create periodic or
even chaotic sequences of compactons randomly located on axis
$\xi$.
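In the normalized variables just introduced, the energy equation \eqref{eq5} takes the form $(dy/d\xi)^2 = (1-y)/y$, so each branch of \eqref{eq7} must satisfy $d\xi/dy = \pm\sqrt{y/(1-y)}$. A short SymPy sketch (not from the paper) checks the ``$+$'' branch; the ``$-$'' branch is its mirror:

```python
# Sketch: verify that the "+" branch of the implicit solution (7)
# satisfies dxi/dy = sqrt(y/(1 - y)) on 0 < y < 1.
import sympy as sp

y = sp.symbols('y')
xi = sp.atan(sp.sqrt(y/(1 - y))) - sp.sqrt(y*(1 - y))   # "+" branch of (7)
dxi = sp.diff(xi, y)
for yv in (0.1, 0.35, 0.8):
    assert abs(float(dxi.subs(y, yv)) - (yv/(1 - yv))**0.5) < 1e-9
print("each branch of (7) satisfies the reduced energy equation")
```

The derivative vanishes as $y \to 0$ in terms of $d\xi/dy$, i.e.\ $dy/d\xi \to \infty$, which is exactly the inf\/inite slope at $w = 0$ discussed above.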
\begin{figure}[t]
\centerline{\includegraphics[height=55mm]{Stepanyants-Fig02}}
\caption{Various particular solutions described by
equation~\eqref{eq7}.}
\label{f02}%
\end{figure}
The maximum of the function $y(\xi)$, $y_{\max} = 1$, corresponds
in terms of $w$ to $w_{\max} = -C_1/E$. Using the relationship
between $w$ and the original variable $u$ (see above), as well as
the def\/inition of the constant $E$, one can deduce the
relationship between the wave extreme value (wave maximum) and its
speed:
\begin{gather}
u_{\max} = V - \frac{C_1}{E} = V + \frac{2C_1}{pV + \beta}.
\label{eq8}
\end{gather}
Taking into account that we consider the case of $C_1 = 1$, and that
negative values of $E$ are possible only when $V > -\beta/p$, the
plot of $u_{\max}(V)$ is as presented in Fig.~\ref{f03}.
\begin{figure}[t]
\centerline{\includegraphics[height=60mm]{Stepanyants-Fig03}}
\caption{Maximum of the compacton solution~\eqref{eq7} against
speed in the original variables, equation~\eqref{eq8}. Dashed vertical
line corresponds to the limiting value of $V = -\beta/p$. The plot
is generated for $C_1 = p = \beta = 1$.}
\label{f03}%
\end{figure}
As follows from equation~\eqref{eq8}, a wave is entirely negative
($u_{\max} < 0$), when
\[
V < \frac{1}{2p}\left(\sqrt{\beta^2 -
8pC_1} - \beta\right),
\]
provided that $p < \beta^2/(8C_1)$. At
greater values of $V$, the wave prof\/ile contains both positive and
negative pieces, and for certain value of $V$ the total wave
``mass'' $I = \int u(\chi)d\chi$ vanishes (the integral here is
taken over the entire domain where function $u(\chi)$ is def\/ined).
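The sign condition can be checked numerically. In the sketch below (not from the paper) the values $C_1 = \beta = 1$, $p = 0.1$ are illustrative choices made so that the existence condition $p < \beta^2/(8C_1)$ holds; the sign change of $u_{\max}$ then occurs exactly at the stated threshold speed:

```python
# Sketch: u_max(V) = V + 2*C1/(p*V + beta) from (8) changes sign at
# V* = (sqrt(beta^2 - 8*p*C1) - beta)/(2*p), provided p < beta^2/(8*C1).
import numpy as np

C1, beta, p = 1.0, 1.0, 0.1          # illustrative; p < beta**2/(8*C1)
Vstar = (np.sqrt(beta**2 - 8*p*C1) - beta)/(2*p)
u_max = lambda V: V + 2*C1/(p*V + beta)

eps = 1e-6
assert abs(u_max(Vstar)) < 1e-9                    # zero at the threshold
assert u_max(Vstar - eps) < 0 < u_max(Vstar + eps) # sign change across it
print("wave polarity changes at the predicted speed")
```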
{\bf 3b.} A similar analysis can be carried out for the
case when $C_1 = 0$, $C_2 \ne 0$. The potential function in this
case represents a set of symmetric quadratic hyperbolas located
either in the f\/irst and second quadrants or in the third and
fourth quadrants as shown in Fig.~\ref{f04}a for $C_2 = \pm 1$.
The corresponding phase plane is shown in Fig.~\ref{f04}b for $C_2
= 1$ only (there are no bounded solutions for $C_2 = -1$,
therefore this case is not considered here). For other positive
values of $C_2$ phase portraits are qualitatively similar to that
shown in Fig.~\ref{f04}b.
\looseness=1
Analysis of the phase portrait shows that there are no bounded
solutions for $C_2 = -1$, as well as for $C_2 = 1$ and any
positive $E$ (see, e.g., lines~3 and~4 in Fig.~\ref{f04}b);
they exist however for $C_2 = 1$ and negative values of $E$, but
possess inf\/inite derivatives at some values of $\chi$. In
normalized variables $y = (-E/C_2)^{1/2}w$, $\xi =
-E(2/C_2)^{1/2}\chi$ all possible solutions can be presented in
terms of independently chosen function branches describing a unit
circle in one of the four quad\-rants,~i.e.
\begin{gather}
(\xi - \xi_0)^2 + y^2 = 1, \label{eq9}
\end{gather}
where $\xi_0$ is an arbitrary constant of integration.
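In the normalized variables of this subsection, equation \eqref{eq5} reads $(dy/d\xi)^2 = (1-y^2)/y^2$, and each branch of the circle \eqref{eq9} satisfies it identically. A small NumPy sketch (not from the paper; $\xi_0 = 0$ taken for brevity):

```python
# Sketch: the upper branch of the circle (9) solves the normalized
# energy equation (dy/dxi)^2 = (1 - y^2)/y^2.
import numpy as np

xi = np.linspace(-0.9, 0.9, 7)
y = np.sqrt(1 - xi**2)           # upper branch of (9), xi_0 = 0
dy = -xi/np.sqrt(1 - xi**2)      # exact derivative of that branch
assert np.allclose(dy**2, (1 - y**2)/y**2)
print("branch of (9) satisfies the energy equation")
```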
\begin{figure}[t]
\centerline{\includegraphics[width=150mm]{Stepanyants-Fig04}}
\caption{a)~Potential function for the case $p + q = 0$, $C_1 = 0$
and two values of $C_2$: $C_2 = 1$ (solid lines), and $C_2 = -1$
(dashed lines). Two dots illustrate possible motions of point
particles in the potential f\/ield. b)~Phase plane corresponding to
the potential function with $C_2 = 1$ and dif\/ferent values of $E$.
Line~1: $E = -1$; line~2: $E = -0.5$; lines~3: $E = 0.5$; lines~4:
$E = 1$. All lines are symmetrical with respect to axis $w'$ and
are labelled only in the left half of the phase plane.}
\label{f04}
\end{figure}
Playing with the constant $\xi_0$ one can create again a variety
of compacton-type solutions including multi-valued solutions. Some
examples of solitary compacton solutions are shown in
Fig.~\ref{f05}a; they include $N$-shaped waves, multi-valued
circle-shaped waves and semicircle positive-polarity pulses (due
to symmetry, the polarity of the f\/irst and last waves can be
inverted). In addition to those, various periodic and even chaotic
compound waves can be easily constructed; one of the possible
examples of a periodic solution is shown in Fig.~\ref{f05}b. Each
positive or negative half-period of any wave consists of two
independent branches originating at $y = 0$ and ending at $y = \pm
1$. The same is true for the pulse-type solutions shown in
Fig.~\ref{f05}a; they consist of independent symmetrical branches
as shown, for example, for the semicircle pulse in Fig.~\ref{f05}a
where they are labelled by symbols 1 and 2.
\begin{figure}[t]
\centerline{\includegraphics[height=75mm]{Stepanyants-Fig05}}
\caption{a) Some examples of pulse-type waves described by
equation~\eqref{eq9}: $N$-shaped wave; circle wave and semicircle
compacton. b)~One of the examples of a periodic wave with inf\/inite
derivatives at $y = 0$, $\xi = 2n + 1$, where $n$ is an
integer.}
\label{f05}%
\end{figure}
The maximum of the function $y(\xi)$, $y_{\max} = 1$, corresponds
in terms of $w$ to the wave maxi\-mum, $w_{\max} = (-C_2/E)^{1/2}$.
Using a relationship between $w$ and the original variable $u$
(see above), as well as def\/inition of the constant~$E$, one can
deduce the relationship between the wave maximum and its speed:
\begin{gather}
u_{\max} = V + \sqrt{\frac{-C_2}{E}} = V + \sqrt{\frac{2C_2}{pV +
\beta}}. \label{eq10}
\end{gather}
The plot of $u_{\max}(V)$ is presented in Fig.~\ref{f06} for $V >
-\beta/p$ in accordance with the chosen constant $C_2 = 1$ and $E
< 0$.
As follows from equation~\eqref{eq10}, the wave maximum (minimum) cannot be
less than a certain value, $U_{\max}$ ($-U_{\min}$), which occurs
at some speed $V_1$, where
\begin{gather*}
U_{\max} = \frac{1}{p}\left[\left(\frac{C_2p^2}{2}\right)^{1/3} -
\beta\right] + 2\left(\frac{C_2}{2p}\right)^{1/3}, \qquad V_1 =
\frac{1}{p}\left[\left(\frac{C_2p^2}{2}\right)^{1/3} -
\beta\right].
\end{gather*}
For all possible values of wave maximum $u_{\max} > U_{\max}$, two
values of wave speed are pos\-sib\-le, i.e.\ two waves of the
very same ``amplitude'' can propagate with dif\/ferent speeds. This
is illustrated by horizontal dashed line in Fig.~\ref{f06} drawn
for $u_{\max} = 2.5$. The same is true for waves of negative
polarity.
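A quick numerical cross-check (not part of the paper): with the same parameter choice as in Fig.~\ref{f06} ($C_2 = p = \beta = 1$), the minimum of $u_{\max}(V)$ from \eqref{eq10} should sit at the closed-form values of $V_1$ and $U_{\max}$ quoted above.

```python
# Sketch: locate the minimum of u_max(V) from (10) on a grid and compare
# with the closed-form U_max and V_1.
import numpy as np

C2 = p = beta = 1.0
u_max = lambda V: V + np.sqrt(2*C2/(p*V + beta))

V = np.linspace(-beta/p + 1e-4, 4.0, 200_001)   # only V > -beta/p allowed
i = np.argmin(u_max(V))
V1_exact = ((C2*p**2/2)**(1/3) - beta)/p
Umax_exact = V1_exact + 2*(C2/(2*p))**(1/3)
assert abs(V[i] - V1_exact) < 1e-3
assert abs(u_max(V[i]) - Umax_exact) < 1e-6
print("minimum of u_max(V) agrees with U_max and V_1")
```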
\begin{figure}[t]
\centerline{\includegraphics[height=55mm]{Stepanyants-Fig06}}
\caption{Dependence of the wave maximum on speed in original
variables, equation~\eqref{eq10}, as follows from solution~\eqref{eq9}.
Dashed vertical line corresponds to $V = -\beta/p$. The plot is
generated for $C_2 = p = \beta = 1$.}
\label{f06}%
\end{figure}
{\bf 3c.} Consider now the case when both $C_1$ and $C_2$
are nonzero but $p + q$ is still zero. There are in general four
possible combinations of signs of the parameters $C_1$ and $C_2$:
\begin{gather*}
{\rm i)} \ C_1 > 0 , \ C_2 > 0; \quad {\rm ii)} \ C_1 < 0 , \ C_2 > 0; \quad
{\rm iii)} \ C_1 > 0 , \ C_2 < 0 ; \quad {\rm iv)} \ C_1 < 0 , \ C_2 < 0.
\end{gather*}
The shape of the potential function $P(w)$ and corresponding
solutions are dif\/ferent for all these cases. However, among them
there are only two qualitatively dif\/ferent and independent cases,
whereas the two others can be obtained from those two cases using
simple symmetry reasons. This statement is illustrated by
Fig.~\ref{f07}, where the potential relief is shown for all four
aforementioned cases i)--iv).
\begin{figure}[t]
\centerline{\includegraphics[width=150mm]{Stepanyants-Fig07}}
\caption{Potential relief for the four dif\/ferent cases, i)--iv),
of various signs of constants $C_1$ and $C_2$. The plot was
generated for $C_1 = \pm 1$, $C_2 = \pm 0.1$.}
\label{f07}%
\end{figure}
As one can see from Fig.~\ref{f07}, cases i) and ii), as well as
iii) and iv), are mirror symmetrical counterparts of each other
with respect to the vertical axis. This implies that solutions for
the cases i) and ii), and correspondingly, iii) and iv), are
related by the simple sign interchange operation, i.e.\
$w_{\text{i)}} = -w_{\text{ii)}}$, $w_{\text{iii)}} = -w_{\text{iv)}}$. Therefore, below only two
qualitatively dif\/ferent cases are considered in detail, namely the
cases~i) and~iii).
Case i) is characterized by an inf\/inite potential well at the
origin, $w = 0$. This singularity in the potential function
corresponds to the existence of a singular straight line $w = 0$
on the phase plane (see Fig.~\ref{f08}). On both sides from this
singular line there are qualitatively similar trajectories which
correspond to bounded solutions having inf\/inite derivatives at the
edges. The quantitative dif\/ference between the ``left-hand side
solutions'' and ``right-hand side solutions'', apart from their
dif\/ferent polarity, is that the former (of negative polarity,
$w \le 0$) exist for $E \le P_{\max}$, whereas the latter (of
positive polarity, $w \ge 0$) exist for $E \le 0$. The potential
function has a maximum $P_{\max} = C_1^2/(4C_2)$ at $w =
-2C_2/C_1$. There are no bounded solutions for $E > P_{\max}$.
\begin{figure}[t]
\centerline{\includegraphics[height=85mm]{Stepanyants-Fig08}}
\caption{Phase portrait of equations~\eqref{eq4}, \eqref{eq5} for the
case i) (only those trajectories are shown which
correspond to particle motion within the potential well in
Fig.~\ref{f07}a). Line~1: $E = 2.5$; line~2: $E = 1$; lines~3: $E
= -1$; line 4: $E = -2$; line 5: $E = -5$.}
\label{f08}%
\end{figure}
Consider f\/irst bounded solutions which correspond to trajectories
shown in the left half-plane, $w \le 0$, in Fig.~\ref{f08}. For a
positive value of the parameter $E$ in the range $0 \le E \le
P_{\max}$, the analytical solution can be presented in the form{\samepage
\begin{gather}
\xi(y) = \pm 2\sqrt{Q}\!\left[\!\!\sqrt{(y + 2Q)^2\! - 4Q(Q - 1)} -
2Q\ln{\frac{y + 2Q + \sqrt{(y + 2Q)^2\! - 4Q(Q - 1)}}{2\sqrt{Q(Q -
1)}}}\!\right]\!,\!\! \label{eq12}
\end{gather}
where $\xi = \chi\sqrt{2C_2}(C_1/C_2)^2$, $y = w(C_1/C_2)$,
$Q = C_1^2/(4C_2E)$.}
The range of variability on $\xi$ is:
\[
|\xi| \le 4Q\left\{1 -
\sqrt{Q}\ln{\left[\big(\sqrt{Q} + 1\big)/\sqrt{Q -
1}\right]}\right\},
\]
whereas $y$ varies in the range
\[
-2\left[Q -
\sqrt{Q(Q - 1)}\right] \le y \le 0.
\]
The relationship between the wave minimum and its speed is:
\begin{gather}
u_{\min} = V + \frac{C_1}{pV + \beta} + \sqrt{\frac{2C_2}{pV +
\beta}\left[\frac{C_1^2}{2C_2(pV + \beta)} + 1\right]},
\label{eq13}
\end{gather}
where $pV + \beta < 0$ as $E > 0$.
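A short calculation from \eqref{eq5} suggests that, in the rescaled variables of this subsection, energy conservation requires $d\xi/dy = \pm 2\sqrt{Q}\,y/\sqrt{(y+2Q)^2 - 4Q(Q-1)}$; the ``$+$'' branch of \eqref{eq12} can be checked against this directly. A SymPy sketch (not from the paper):

```python
# Sketch: differentiate the "+" branch of (12) and compare with the
# derivative required by energy conservation.
import sympy as sp

y, Q = sp.symbols('y Q')
R = sp.sqrt((y + 2*Q)**2 - 4*Q*(Q - 1))
xi = 2*sp.sqrt(Q)*(R - 2*Q*sp.log((y + 2*Q + R)/(2*sp.sqrt(Q*(Q - 1)))))
dxi = sp.diff(xi, y)
for yv, Qv in [(-1.0, 2.0), (-0.5, 1.5), (2.0, 3.0)]:
    Rv = ((yv + 2*Qv)**2 - 4*Qv*(Qv - 1))**0.5
    assert abs(float(dxi.subs({y: yv, Q: Qv})) - 2*Qv**0.5*yv/Rv) < 1e-9
print("(12) is compatible with energy conservation")
```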
If $E < 0$, then the solution is
\begin{gather}
\xi(y) = \pm 2\sqrt{-Q}\Bigg[\sqrt{4Q(Q - 1) - (y + 2Q)^2}\nonumber\\
\phantom{\xi(y) =}{} +
2Q\arctan{\left(\frac{y + 2Q}{\sqrt{4Q(Q - 1) - (y +
2Q)^2}}\right) + \pi Q}\Bigg]. \label{eq14}
\end{gather}
The range of variability on $\xi$ is: $|\xi| \le -4Q\left[1 +
\sqrt{-Q}\left(\arctan{\sqrt{-Q} - \pi/2}\right)\right]$, whereas
$y$ varies in the range $-2\left[Q + \sqrt{Q(Q - 1)}\right] \le y
\le 0$. The relationship between the wave minimum and its speed is
also given by equation~\eqref{eq13}, but with $pV + \beta > 0$.
Two special cases of solution \eqref{eq12} can be mentioned. When
$Q = 1$ ($E = P_{\max}$), solution~\eqref{eq12} with the
appropriate choice of the integration constant reduces to
\begin{gather}
\xi(y) = \pm 4\left[\frac{y}{2} - \ln{\left(1 +
\frac{y}{2}\right)}\right]. \label{eq15}
\end{gather}
This solution is unbounded on $\xi$, i.e.\ it is def\/ined in
the range: $|\xi| \le \infty$. However, the solution is bounded on
$y$: $-2 \le y \le 0$. The relationship between the wave minimum
and its speed is simple as both of them are constant values in
this special case:
\begin{gather}
V = -\frac{1}{p}\left(\beta + \frac{C_1^2}{2C_2}\right), \qquad
u_{\min} = V - 2\frac{C_2}{C_1} = -\frac 1p\left(\beta +
\frac{C_1^2}{2C_2} + 2p\frac{C_2}{C_1}\right). \label{eq16}
\end{gather}
Another special case corresponds to $Q = \infty$ ($E = 0$); in
this case equation~\eqref{eq12} after appropriate choice of integration
constant reduces to:
\begin{gather}
\xi(y) = \pm\frac 23\sqrt{y +1}(y - 2). \label{eq17}
\end{gather}
The range of variability on $\xi$ is: $|\xi| \le 4/3$, whereas $y$
varies in the range: $-1 \le y \le 0$. The relationship between
the wave minimum and its speed is also very simple as both of them
are again constants but dif\/ferent from those given by
equation~\eqref{eq16}; in this case they are:
\begin{gather*}
V = -\frac{\beta}{p}, \qquad u_{\min} = V - \frac{C_2}{C_1} =
-\left(\frac{\beta}{p} + \frac{C_2}{C_1}\right).
\end{gather*}
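In this $E = 0$ (i.e.\ $Q \to \infty$) limit the energy relation appears to reduce to $d\xi/dy = \pm y/\sqrt{1+y}$, and the compacton \eqref{eq17} can be checked against it symbolically. A SymPy sketch (not from the paper):

```python
# Sketch: the "+" branch of (17) satisfies dxi/dy = y/sqrt(1 + y).
import sympy as sp

y = sp.symbols('y')
xi = sp.Rational(2, 3)*sp.sqrt(y + 1)*(y - 2)   # "+" branch of (17)
assert sp.simplify(sp.diff(xi, y) - y/sp.sqrt(y + 1)) == 0
print("(17) satisfies the E = 0 limit of the energy equation")
```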
Bounded solutions corresponding to the trajectories shown in the
right half-plane in Fig.~\ref{f08} with $w \ge 0$, exist only for
negative $E$; they are given by equation~\eqref{eq14}, but with the
dif\/ferent range of variability of $y$: $0 \le y \le 2\left[-Q +
\sqrt{Q(Q - 1)}\right]$. The relationship between the wave maximum
and its speed is given again by equation~\eqref{eq13} where $u_{\max}$
should be substituted instead of $u_{\min}$ and $pV + \beta > 0$ as
$E < 0$ for these solutions.
Solutions \eqref{eq12}, \eqref{eq14}, \eqref{eq15} and
\eqref{eq17} are shown in Fig.~\ref{f09}. All these solutions are
of the compacton type; they consist of two independent branches
which can be matched dif\/ferently or unmatched at all. Lines 2 and
$2'$ represent an example when two branches are matched so that
they form a semi-oval; lines~3 and~$3'$ represent another example
when two branches are matched so that they form an inverted
``seagull''. On the basis of these ``elementary'' solutions,
various complex compound solutions can be constructed including
periodic or chaotic stationary waves.
The dashed line 1 in the f\/igure corresponds to $E = 0$ ($Q = \infty$).
Another branch of the solution with the same value of $E = 0$
represents a solution of positive polarity which is unbounded both
on $\xi$ and $y$. For positive values of $E$, solutions of
negative polarity become wider and of greater ``amplitude'' (see
line~2). When $E$ further increases and approaches $P_{\max}$, the
solution becomes inf\/initely wide, but its minimum goes to $-2$. In
the limiting case $E = P_{\max}$ ($Q = 1$) two independent branches
of the solution can be matched dif\/ferently as shown by
dashed-dotted lines 3 and $3'$ in Fig.~\ref{f09}. The solution
vanishes in this case when $\xi = 0$ and goes to $-2$ when $\xi
\to \pm\infty$; this situation is described by equation~\eqref{eq15}.
\begin{figure}[t]
\centerline{\includegraphics[height=72mm]{Stepanyants-Fig09}}
\caption{Various solutions described by equations~\eqref{eq12},
\eqref{eq14}, \eqref{eq15} and \eqref{eq17}. Compactons of
negative polarity: line 1: $Q = \infty$; lines 2 and $2'$: $Q =
2$; lines 3 and $3'$: $Q = 1$: line 4: $Q = -0.1$. Compactons of
positive polarity: line 5: $Q = -0.1$; line 6: $Q = -0.25$.}
\label{f09}%
\end{figure}
For the negative $E$ there are two families of solutions: negative
one, corresponding to the left-hand side trajectories in
Fig.~\ref{f08}, and positive one, corresponding to the right-hand
side trajectories. When $E$, being negative, increases in absolute
value ($Q$ varies from $-\infty$ to $0_-$), solutions depart from
the line~1 in Fig.~\ref{f09} and gradually squeeze to the origin
(see line~4 for instance). For the same values of negative $E$,
positive solutions originated at inf\/inity also gradually shrink
and collapse in the origin (lines 6 and 5 demonstrate this
tendency).
Consider now the case iii) shown in Fig.~\ref{f07}b. The potential
function in this case has only one well of a f\/inite depth so that
$P_{\min} = C_1^2/(4C_2)$ at $w = -2C_2/C_1$, where $C_2$ is
negative now. There are no bounded solutions for negative $w$; they
exist however for positive $w$ and $E$ varying in the range
$P_{\min} \le E < 0$. The f\/inite value of the potential minimum
corresponds to the equilibrium point of the centre type in the
phase plane. There is also a family of closed trajectories for the
above indicated range of $E$ variation (see Fig.~\ref{f10}); these
trajectories correspond to periodic solutions.
\begin{figure}[t]
\centerline{\includegraphics[height=75mm]{Stepanyants-Fig10}}
\caption{Phase portrait of equations~\eqref{eq4}, \eqref{eq5} for the
case {\bf 3c} iii) (only those trajectories are shown
which correspond to particle motion within the potential well in
Fig.~\ref{f07}b). The dot at the center of closed lines indicates
an equilibrium point corresponding to the potential minimum ($E =
-2.5$ for the chosen set of parameters: $C_1 = 1$, $C_2 = -0.1$);
line 1: $E = -2$; line 2: $E = -1.5$; line 3: $E = -1$; line 4: $E
= -0.5$.}
\label{f10}%
\end{figure}
As usual, closed trajectories near the center ($E \gtrsim P_{\min}$)
correspond to quasi-sinusoidal solutions, whereas the other closed
trajectories ($E > P_{\min}$) correspond to non-sinusoidal periodic
waves with smooth crests and sharp narrow troughs. The larger
the value of $E$, the longer the wave period. The period tends
to inf\/inity when $E \to 0_-$. The analytical form of this family
of solutions is described by the following equation:
\begin{gather}
\xi(y) = \pm 2\sqrt{Q}\Bigg[\sqrt{4Q(Q - 1) - (y + 2Q)^2}\nonumber\\
\phantom{\xi(y) =}{} +
2Q\arctan{\left(\frac{y + 2Q}{\sqrt{4Q(Q - 1) - (y +
2Q)^2}}\right) - \pi Q}\Bigg], \label{eq19}
\end{gather}
where $\xi = \chi\sqrt{-2C_2}(C_1/C_2)^2$, $y = w(C_1/C_2)$,
$Q = C_1^2/(4C_2E)$. Solution \eqref{eq19} is shown in
Fig.~\ref{f11} for dif\/ferent values of $Q$ (note that the solution
is negative in terms of $y$ because $C_2 < 0$). As follows from
equation~\eqref{eq19}, $y$ varies in the range:
\[
-2\left[Q + \sqrt{Q(Q
- 1)}\right] \le y \le -2\left[Q - \sqrt{Q(Q - 1)}\right],
\]
whereas the dependence of wave period $\Lambda$ on $Q$ is:
$\Lambda(Q) = 8\pi Q\sqrt{Q}$. The wave period varies from $8\pi$
to inf\/inity when $Q$ increases from unity to inf\/inity.
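The period formula can be cross-checked by quadrature (a sketch, not from the paper). The period is twice the $\xi$-distance between the turning points $y_\pm = -2[Q \mp \sqrt{Q(Q-1)}]$; the substitution $y = -2Q + a\sin\theta$, $a = 2\sqrt{Q(Q-1)}$, removes the inverse-square-root singularity at the turning points, leaving $|d\xi/dy|\,dy = 2\sqrt{Q}\,(2Q - a\sin\theta)\,d\theta$:

```python
# Sketch: numerical check of Lambda(Q) = 8*pi*Q*sqrt(Q) for solution (19).
import numpy as np

def period(Q, n=100_001):
    a = 2.0*np.sqrt(Q*(Q - 1.0))
    theta = np.linspace(-np.pi/2, np.pi/2, n)
    f = 2.0*np.sqrt(Q)*(2.0*Q - a*np.sin(theta))  # |dxi/dy| dy in theta
    h = theta[1] - theta[0]
    return 2.0*h*(f.sum() - 0.5*(f[0] + f[-1]))   # trapezoidal rule

for Q in (1.5, 2.0, 2.5):
    assert abs(period(Q) - 8*np.pi*Q*np.sqrt(Q)) < 1e-6
print("period of (19) agrees with Lambda(Q) = 8*pi*Q*sqrt(Q)")
```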
\begin{figure}[t]
\centerline{\includegraphics[height=60mm]{Stepanyants-Fig11}}
\caption{Various solutions described by equation~\eqref{eq19}. Line~1
(quasi-sinusoidal wave): $Q = 1.01$; line~2: $Q = 1.5$; line~3: $Q
= 2$; line~4: $Q = 2.5$. Dashed lines shows the equilibrium state
$y = -2$.}
\label{f11}%
\end{figure}
From the extreme values of $y$ (see above indicated range of its
variability) one can deduce the dependences of wave maximum and
minimum on speed in the original variables. The corresponding
formulae are:
\begin{gather}
u_{\max, \min}(V) = V + \frac{C_1}{pV + \beta} \mp
2C_2\sqrt{\frac{1}{2C_2(pV + \beta)}\left[\frac{C_1^2}{2C_2(pV +
\beta)} + 1\right]}, \label{eq20}
\end{gather}
where the upper sign in front of the root corresponds to the wave
maximum and lower sign -- to the wave minimum. These dependences
are plotted in Fig.~\ref{f12} for $V > -\beta/p$ in accordance
with the chosen values of constants $C_1 = 1$, $C_2 = -0.1$ and $E
< 0$. The asymptote $V = -\beta/p$ is shown in the f\/igure by the
vertical dashed line. As follows from equation~\eqref{eq20}, the wave
maximum cannot be less than a certain value, $U_{\max}$, which
occurs at some speed $V_1$ shown in Fig.~\ref{f12}.
\begin{figure}[t]
\centerline{\includegraphics[height=65mm]{Stepanyants-Fig12}}
\caption{Dependences of wave maximum (solid line) and minimum
(dashed-dotted line) on speed in the original variables,
equation~\eqref{eq20}, as follows from the solution \eqref{eq19}.
Dashed vertical line corresponds to $V = -\beta/p$. The plot is
generated for $C_1 = 1$, $C_2 = -0.1$ and $p = \beta = 1$.}
\label{f12}%
\end{figure}
For all possible values of wave maximum $u_{\max} > U_{\max}$, two
values of wave speed are possible, i.e.\ two periodic waves
of the same maximum (but not minimum!) can propagate with
dif\/ferent speeds. This is illustrated by the horizontal dashed
line shown in Fig.~\ref{f12} and drawn for \mbox{$u_{\max} = 2.5$}. In
original variables quasi-sinusoidal waves exist when the speed is
close to its limiting value $V_{\max} = -\frac
1p\left(\frac{C_1^2}{2C_2} + \beta\right)$; there are no waves
with greater speed. When $V \le V_{\max}$, the wave minimum and
maximum are close to each other. Then, when the speed decreases,
the gap between wave maximum and minimum gradually increases and
goes to inf\/inity when the speed approaches its minimum value
$V_{\min} = -\beta/p$.
\section[General case: $p + q \ne 0$]{General case: $\boldsymbol{p + q \ne 0}$}
\label{sec4} %
Consider now a more general case when the coef\/f\/icients in
equation~\eqref{eq1} are such that $p + q \ne 0$. The basic equation
\eqref{eq5} can be presented in the new variables $\eta = (p +
q)\chi/6$ and $v = (p + q)w/6$ with the same constant of
integration $E = -(pV + \beta)/2$, but with new ef\/fective
potential function
\begin{gather}
P(v) = v - \frac{C_1}{v} - \frac{C_2}{v^2}. \label{eq22}
\end{gather}
The potential function is monotonic when $C_1 = C_2 = 0$, and
there are no bounded solutions in this case. Bounded solutions may
exist if at least one of these constants is nonzero. Below we
present possible forms of the potential function and corresponding
phase portraits of bounded solutions for various relationships
between constants $C_1$ and $C_2$. Qualitatively all these cases
are similar to those which have been described already in the
previous section, therefore we omit the detailed analysis and do
not present analytical solutions as they can be obtained
straightforwardly and expressed in terms of elliptic functions.
{\bf 4a}. If $C_2 = 0$, the potential function represents
a set of antisymmetric hyperbolas located either in the f\/irst and
third quadrants when $C_1 = -1$, or in the second and fourth
quadrants when $C_1 = 1$; this is shown in Fig.~\ref{f13}.
\begin{figure}[t]
\centerline{\includegraphics[height=60mm]{Stepanyants-Fig13}}
\caption{Potential function for the case $p + q \ne 0$, $C_2 = 0$
and two values of $C_1$: $C_1 = 1$ (solid line), and $C_1 = -1$
(dashed line). Dots $a$, $b$ and $c$ illustrate possible motion of
a point particle in the potential f\/ield.}
\label{f13}%
\end{figure}
For the case of $C_1 = 1$ only bounded solutions of a compacton
type are possible for positive $v$. Such solutions correspond to
the motion of particle $c$ shown in the f\/igure down to the potential
well. This family of pulse-type solutions exists both for negative
and positive $E$; all of them are bounded from the top with the
maximum values depending on $E$, have zero minimum values and
inf\/inite derivatives when $v = 0$. The corresponding phase plane is
presented in Fig.~\ref{f14}a.
For the case of $C_1 = -1$ there are two possibilities: i) there
is a family of compacton-type solutions with $v \le 0$; they
correspond to the motion of the particle $b$ down to the potential
well (particle motion to the left from the top of the ``hill''
corresponds to unbounded solutions). Possible values of particle
energy $E$ vary for such motions from minus inf\/inity to $P_{\max} =
-2\sqrt{-C_1}$, where $P_{\max}$ is the local maximum of the lower
branch of the potential function (see Fig.~\ref{f13}). The phase
portrait of such motions is shown in the left half of the phase
plane in Fig.~\ref{f14}b.
ii) Another possibility appears for the particle motion within the
potential well shown in the f\/irst quadrant of Fig.~\ref{f13} (see
the particle $a$). Within this well all phase trajectories are
closed and corresponding solutions are bounded and periodical;
they can be expressed in terms of elliptic functions. The phase
portrait of such motions is shown in the right half of the phase
plane in Fig.~\ref{f14}b.
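As a quick numerical cross-check of the local maximum $P_{\max}=-2\sqrt{-C_1}$ of the lower branch quoted above (an editorial sketch, not part of the original analysis), one can scan the potential $P(v)=v-C_1/v$ on the $v<0$ branch directly:

```python
import numpy as np

# Potential of equation (22) in case 4a: P(v) = v - C1/v (C2 = 0), C1 = -1.
C1 = -1.0
v = np.linspace(-10.0, -0.01, 200000)      # lower (v < 0) branch, cf. Fig. 13
P = v - C1 / v
P_max_numeric = P.max()                    # grid maximum of the lower branch
P_max_formula = -2.0 * np.sqrt(-C1)        # value quoted in the text
print(P_max_numeric, P_max_formula)        # both close to -2
```

The maximum is attained at $v=-1$, where $P(-1)=-2$, in agreement with the formula.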
{\bf 4b}. If $C_1 = 0$, but $C_2 \ne 0$, the potential
function also represents a set of antisymmetric hyperbolas located
either in the third quadrant and right half-plane in
Fig.~\ref{f15} when $C_2 = 1$, or in the f\/irst quadrant and left
half-plane of that f\/igure when $C_2 = -1$.
\begin{figure}[t]
\centerline{\includegraphics[width=150mm]{Stepanyants-Fig14}}
\caption{a) Phase plane corresponding to the potential function
with $C_1 = 1$ and various values of~$E$. Line~1: $E = -2$; lines~2: $E = -1$; lines~3: $E = 0$; lines 4: $E = 1$; lines 5: $E = 2$.
All trajectories in the left half-plane correspond to unbounded
solutions. b) Phase plane corresponding to the potential function
with $C_1 = -1$ and various values of $E$. Line 1: $E = -4$; lines
2: $E = -3$; lines~3: $E = -2$; lines~4: $E = -1$; lines 5: $E =
2.1$; lines 6: $E = 2.5$; lines 7: $E = 3$.}
\label{f14}%
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[height=60mm]{Stepanyants-Fig15}}
\caption{Potential function for the case $p + q \ne 0$, $C_1 = 0$
and two values of $C_2$: $C_2 = 1$ (solid lines), and $C_2 = -1$
(dashed lines). Dots $a$, $b$, $c$ and $d$ illustrate possible
motion of a point particle in the potential f\/ield.}
\label{f15}%
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=150mm]{Stepanyants-Fig16}}
\caption{Phase plane corresponding to the potential function
\eqref{eq22} with $C_1 = 0$. a) $C_2 = 1$ and various values of
$E$. Lines 1: $E = -3$; lines 2: $E = -2$; lines 3: $E = -1.89$;
lines 4: $E = -1$; lines 5: $E = 0$; lines~6: $E = 1$; lines 7: $E
= 2$. b) $C_2 = -1$ and various values of~$E$. Line 1: $E = -1$;
line 2: $E = 0$; line 3: $E = 1$; lines 4: $E = 2$; lines 5: $E =
2.5$; lines 6: $E = 3$. All trajectories in the left half-plane
correspond to unbounded solutions.}
\label{f16}%
\end{figure}
For the case of $C_2 = 1$ there are two possibilities: i) there is
a family of compacton-type solutions with $v \le 0$; they
correspond to the motion of the particle $b$ down to the potential
well (particle motion to the left from the top of the ``hill''
corresponds to unbounded solutions). Possible values of particle
energy $E$ vary for such motions from minus inf\/inity to $P_{\max} =
3(-C_2/4)^{1/3}$, where $P_{\max}$ is the local maximum of the left
branch of the potential function (see Fig.~\ref{f15}). The phase
portrait of such motions is shown in the left half-plane in
Fig.~\ref{f16}a.
ii) Another family of compacton-type solutions exists with $v \ge
0$; they correspond to the motion of the particle $c$ down to the
potential well. Possible values of particle energy $E$ for such
motions vary from minus to plus inf\/inity. The corresponding phase
plane is presented in the right half-plane in Fig.~\ref{f16}a.
For the case of $C_2 = -1$ bounded solutions are smooth periodic
waves which correspond to the particle oscillations in the
potential well shown in the f\/irst quadrant in Fig.~\ref{f15}.
Energy is positive for such motion and varies from $P_{\min} =
3(-C_2/4)^{1/3}$, where $P_{\min}$ is the local minimum of the
right branch of the potential function (see Fig.~\ref{f15}) to
inf\/inity. The analytical solution for such waves can also be expressed
in terms of cumbersome elliptic functions. The corresponding phase
plane is presented in Fig.~\ref{f16}b.
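The extremum value $3(-C_2/4)^{1/3}$ quoted in case 4b for both signs of $C_2$ can be cross-checked numerically (an editorial sketch): with $C_1=0$, the condition $P'(v)=1+2C_2/v^3=0$ gives the single extremum at $v=-(2C_2)^{1/3}$.

```python
import numpy as np

# Potential of equation (22) with C1 = 0: P(v) = v - C2/v**2.
def P(v, C2):
    return v - C2 / v**2

extrema = {}
for C2 in (1.0, -1.0):
    v_ext = -np.cbrt(2.0 * C2)          # root of P'(v) = 1 + 2*C2/v**3 = 0
    extrema[C2] = P(v_ext, C2)
print(extrema[1.0], 3.0 * np.cbrt(-1.0 / 4.0))   # P_max for C2 = 1, about -1.8899
print(extrema[-1.0], 3.0 * np.cbrt(1.0 / 4.0))   # P_min for C2 = -1, about 1.8899
```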
{\bf 4c}. Consider now the case when both $C_1 \ne 0$ and
$C_2 \ne 0$. The shape of the potential function is more complex
in this case in general and depends on the relationship between
the constants $C_1$ and $C_2$. The number and values of the
potential extrema are determined by the number of real roots of
the equation $P'(v) = 0$, where the prime denotes the derivative with
respect to $v$. This condition yields (see equation~\eqref{eq22}):
\begin{gather*}
v^3 + C_1v + 2C_2 = 0.
\end{gather*}
For real constants $C_1$ and $C_2$ this equation always has at
least one real root. There is a single real root when $C_1 \ge
C_1^{\rm cr} \equiv -3C_2^{2/3}$; its value is given by the expression
\begin{gather*}
v = \left[\sqrt{(C_1/3)^3 + C_2^2} - C_2\right]^{1/3} -
(C_1/3)\left[\sqrt{(C_1/3)^3 + C_2^2} - C_2\right]^{-1/3}.
\end{gather*}
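Both the closed-form root and the critical value $C_1^{\rm cr}$ can be verified against a direct numerical root count (an editorial sketch, not part of the original analysis):

```python
import numpy as np

# The extrema of P(v) solve v**3 + C1*v + 2*C2 = 0; check the closed-form
# real root above and the critical value C1_cr = -3*C2**(2/3).
def single_real_root(C1, C2):
    A = np.sqrt((C1 / 3.0)**3 + C2**2) - C2        # real for C1 >= C1_cr
    return np.cbrt(A) - (C1 / 3.0) / np.cbrt(A)

def real_roots(C1, C2):
    r = np.roots([1.0, 0.0, C1, 2.0 * C2])
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

C2 = 1.0                                    # C1_cr = -3
print(real_roots(-1.0, C2))                 # supercritical: one real root
print(single_real_root(-1.0, C2))           # matches it, about -1.521
print(real_roots(-5.0, C2))                 # subcritical: three real roots
```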
\begin{figure}[t]
\centerline{\includegraphics[width=145mm]{Stepanyants-Fig17}}
\caption{Potential function for the case $p + q \ne 0$. a)
Supercritical case: $C_1 = -1$ and two values of $C_2$: $C_2 = 1$
(solid lines), and $C_2 = -1$ (dashed lines); b) marginal case:
$C_1 = -3$ and the same two values of $C_2$; c) subcritical case:
$C_1 = -5$ and the same two values of $C_2$ (in the last case the
horizontal and vertical scales are doubled).}
\label{f17}%
\end{figure}
For the case $C_1 > C_1^{\rm cr}$, possible qualitative conf\/igurations
of the potential function are shown in Fig.~\ref{f17}a for the
particular choices of constants: $C_1 = -1$ and $C_2 = \pm 1$.
There is only one local minimum at the right branch of the
potential function for $C_2 = -1$ and a local maximum at the left
branch of the potential function for $C_2 = 1$. Almost the same
conf\/iguration of the potential function occurs for the marginal
case $C_1 = C_1^{\rm cr}$, as shown in Fig.~\ref{f17}b, however one
more local extremum appears -- on the left branch when $C_2 = -1$
and on the right branch when $C_2 = 1$. In the case $C_1 <
C_1^{\rm cr}$ the potential function is shown in Fig.~\ref{f17}c;
there are three local extrema of the potential function for any
value of $C_2 = \pm 1$.
The potential conf\/iguration in the supercritical case $C_1 >
C_1^{\rm cr}$ qualitatively is similar to the case shown in
Fig.~\ref{f15}, therefore the corresponding phase portraits are
similar to those shown in Fig.~\ref{f16}. In the marginal case,
$C_1 = C_1^{\rm cr}$, the potential conf\/iguration is also similar to
those two cases mentioned above, however there are some
peculiarities in the phase planes ref\/lecting the appearance of
embryos of new equilibrium points. Corresponding phase portraits
are shown in Fig.~\ref{f18}. The embryos appear in the vicinity of
$E = -3$ in Fig.~\ref{f18}a and in the vicinity of $E = 3$ in
Fig.~\ref{f18}b.
\begin{figure}[t]
\centerline{\includegraphics[width=145mm]{Stepanyants-Fig18}}
\caption{Phase plane corresponding to the marginal case, $C_1 =
C_1^{\rm cr}$. a) $C_1 = -3$, $C_2 = -1$. Line 1: $E = -3.05$; line 2:
$E = -3$; line 3: $E = -2.9$; line 4: $E = -2$; line 5: $E = 3.8$;
line 6: $E = 4$; line 7: $E = 4.5$. All trajectories in the left
half-plane correspond to unbounded solutions. b) $C_1 = -3$, $C_2
= 1$. Line 1: $E = -5$; lines 2: $E = -4$; lines 3: $E = -3.75$;
lines 4: $E = -3.5$; line 5: $E = 2.75$; line 6: $E = 3$; line 7:
$E = 3.25$; line 8: $E = 3.5$.}
\label{f18}%
\end{figure}
In the subcritical case $C_1 < C_1^{\rm cr}$ the situation is
dif\/ferent from the previous ones and should be considered
separately. In the case of $C_2 = -1$, there are two potential
wells, one of a f\/inite depth on the left branch of function $P(v)$
and another inf\/initely deep and wide well but bounded from the
bottom on the right branch of function $P(v)$ (see
Fig.~\ref{f17}c).
For the f\/irst potential well there is a family of closed
trajectories in the phase plane corresponding to periodic
solutions with the parameter $E$ varying between the local minimum
and maximum of the potential function; these solutions are
described by elliptic functions. All closed trajectories are
bounded by the loop of separatrix designated by symbol 3 in
Fig.~\ref{f19}a. Trajectories inside the separatrix loop close to the
center correspond to quasi-sinusoidal waves, and the loop of the
separatrix corresponds to the solitary wave (soliton) which can be
treated as the limiting case of periodic waves. The soliton shape
is described by the following implicit formula:
\begin{gather}
\eta = \pm\sqrt{2}\left[\frac{v_1}{2\sqrt{v_2 - v_1}}
\ln{\left(\frac{\sqrt{v_2 - v_1} + \sqrt{v_2 - v}}{\sqrt{v_2 -
v_1} - \sqrt{v_2 - v}}\right)} + \sqrt{v_2 - v}\right], \qquad v_1
\le v \le v_2, \label{eq25}
\end{gather}
where $v_{1,2} = -\big(C_1 \mp \sqrt{C_1^2 - 3EC_2}\,\big)/E$,
$(v_1 < v_2)$ and $E = P_{\max}(C_1,C_2)$, where $P_{\max}(C_1,C_2)$
is the value of the potential local maximum shown in the left
half-plane of Fig.~\ref{f17}c. Solution \eqref{eq25} is shown in
Fig.~\ref{f20}a.
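The endpoint behaviour of the implicit profile \eqref{eq25} can be checked numerically for the parameter set of Fig.~\ref{f19}a, $C_1=-5$, $C_2=-1$, $E=P_{\max}=-4.25$ (an editorial sketch): $\eta$ vanishes at the crest $v_2$ and diverges logarithmically as $v\to v_1$, the soliton pedestal.

```python
import numpy as np

# Soliton profile of equation (25) for C1 = -5, C2 = -1, E = P_max = -4.25.
C1, C2, E = -5.0, -1.0, -4.25
disc = np.sqrt(C1**2 - 3.0 * E * C2)            # = 3.5 for these values
v1, v2 = -(C1 - disc) / E, -(C1 + disc) / E     # v1 = -2, v2 about -0.3529

def eta(v):
    d, s = np.sqrt(v2 - v1), np.sqrt(v2 - v)
    return np.sqrt(2.0) * (v1 / (2.0 * d) * np.log((d + s) / (d - s)) + s)

print(v1, v2)
print(eta(v2))                # 0 at the crest
print(abs(eta(v1 + 1e-12)))   # large: logarithmic divergence at the pedestal
```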
\begin{figure}[t]
\centerline{\includegraphics[width=145mm]{Stepanyants-Fig19}}
\caption{Phase plane corresponding to the subcritical case $C_1 <
C_1^{\rm cr}$. a) $C_1 = -5$, $C_2 = -1$. Line~1: $E = -6$; lines 2:
$E = -5$; line 3: $E = -4.25$; lines 4: $E = -4$; line 5: $E =
4.7$; line 6: $E = 5$; line~7: $E = 6$; line 8: $E = 8$. All
trajectories in the left half-plane outside of the closed loop of
separatrix correspond to unbounded solutions. b) $C_1 = -5$, $C_2
= 1$. Line 1: $E = -7$; line 2: $E = -5$; lines 3: $E = -4.657$;
lines 4: $E = -4$; lines 5: $E = 4.3$; line 6: $E = 5$; lines 7:
$E = 6.656$; lines 8: $E = 7$.}
\label{f19}%
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=145mm]{Stepanyants-Fig20}}
\caption{Soliton solutions on pedestals as described by
equations~\eqref{eq25}.}
\label{f20}%
\end{figure}
In the original variables, the function $u$ describing the soliton varies in
the range
\begin{gather*}
V + \frac{6v_1}{p + q} \le u \le V + \frac{6v_2}{p + q};
\end{gather*}
thus, the soliton amplitude amounts to
\begin{gather}
A = 6\frac{v_2 - v_1}{p + q} = \frac{12}{p + q}\frac{\sqrt{C_1^2 -
3C_2P_{\max}(C_1,C_2)}}{P_{\max}(C_1,C_2)}; \label{eq27}
\end{gather}
The soliton velocity is
\begin{gather}
V = -\frac 1p\left[\beta + 2P_{\max}(C_1,C_2)\right]. \label{eq28}
\end{gather}
Equations \eqref{eq27} and \eqref{eq28} allow one to obtain a
direct relationship between the soliton's velocity and amplitude:
\begin{gather*}
A = -\frac{24}{p + q}\frac{\sqrt{C_1^2 + \frac 32 C_2(pV +
\beta)}}{pV + \beta}.
\end{gather*}
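The mutual consistency of equations \eqref{eq27}, \eqref{eq28} and the amplitude-velocity relation above can be verified numerically (an editorial sketch; the values $p=q=1$, $\beta=0$ are hypothetical, chosen only to make the check concrete, and $C_1=-5$, $C_2=-1$ match Fig.~\ref{f17}c):

```python
import numpy as np

# Consistency of eqs. (27), (28) and the amplitude-velocity relation.
C1, C2, p, q, beta = -5.0, -1.0, 1.0, 1.0, 0.0
v_max = -2.0                                   # root of v**3 + C1*v + 2*C2 = 0
P_max = v_max - C1 / v_max - C2 / v_max**2     # = -4.25, matches Fig. 19a

A_27 = 12.0 / (p + q) * np.sqrt(C1**2 - 3.0 * C2 * P_max) / P_max        # eq. (27)
V_28 = -(beta + 2.0 * P_max) / p                                         # eq. (28)
A_AV = -24.0 / (p + q) * np.sqrt(C1**2 + 1.5 * C2 * (p * V_28 + beta)) / (p * V_28 + beta)
print(P_max)         # -4.25
print(A_27, A_AV)    # the two expressions give the same value
```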
For the second potential well located in the right half-plane of
Fig.~\ref{f17}c, there is another family of closed trajectories in
the phase plane corresponding to periodic solutions with the
parame\-ter~$E$ varying between the local minimum of the potential
function and inf\/inity; these trajectories are shown in the right
half-plane of Fig.~\ref{f19}a.
In the case of $C_2 = 1$, there is a shallow potential well on the
right branch of function~$P(v)$ and one inf\/initely deep well at
the origin where the potential function is singular. For the
shallow well there is a family of closed trajectories in the phase
plane corresponding to periodic solutions with the parameter $E$
varying between the local minimum and maximum of the potential
function. All such trajectories are also bounded by the loop of
separatrix designated by symbol~7 in Fig.~\ref{f19}b. The loop of
separatrix corresponds to the solitary wave whose shape is
described by the same implicit formula \eqref{eq25}, but with
dif\/ferent values of constants $C_1$, $C_2$, $E$ and
$P_{\max}(C_1,C_2)$, where $P_{\max}(C_1,C_2)$ is the value of the
potential local maximum shown in the right half-plane of
Fig.~\ref{f17}c. This solution is shown in Fig.~\ref{f20}b. All
above relationships between soliton amplitude and velocity, as
well as between soliton amplitude or velocity and constants $C_1$
and $C_2$ remain the same as above.
For the inf\/initely deep well at the origin there are two families
of compactons with nonpositive and nonnegative values; the phase
plane for them is similar to that shown in Fig.~\ref{f08} and
solutions are similar to those shown in Fig.~\ref{f09}. The entire
phase portrait of the system in the case of $C_2 = 1$ is shown in
Fig.~\ref{f19}b. Phase trajectories corresponding to positive
compactons are not shown in detail in that f\/igure because they are
too close to each other and are in the narrow gap between the axis
$v'$ and the two external unclosed branches of the separatrix~7 (only
one such trajectory, line~5, is shown in Fig.~\ref{f19}b; all
other trajectories are similar).
\section{Conclusion}
\label{sec5} %
As was shown in the paper, the extended reduced Ostrovsky equation
\eqref{eq1} possesses periodic and solitary type solutions in general.
There is a variety of solitary-wave solutions including compactons
with inf\/inite derivatives at the edges, smooth solitons, and
periodic waves. All compactons, however, are actually compound-type
solutions, i.e., they consist of two or more
non-smooth branches. Among periodic waves depending on the
equation parameters, there are also both smooth solutions and
compound-type solutions which consist of periodic sequences of
non-smooth branches (see, e.g., Fig.~\ref{f05}b). Moreover,
using compacton solutions as the elementary blocks, one can
construct very complex compound solutions including stochastic
stationary waves.
The approach used in this paper, based on the qualitative theory of
dynamical systems, is free from the limitations of paper
\cite{Parkes} and allows us to present a complete classif\/ication
of all possible solutions of stationary exROE. In particular,
solutions were obtained and analyzed in detail for the case $p +
q = 0$ that was out of consideration in paper~\cite{Parkes}.
Another ``prohi\-bi\-ted'' combination of parameters, $qV - \beta \ne
0$, that was also out of consideration in paper \cite{Parkes},
does not even appear in our study. The approach exploited in the
present paper is based on a~vivid mechanical analogy between a~particle moving in a special potential f\/ield and the stationary
exROE under consideration. This approach allows one to observe
qualitatively an entire family of all possible solutions even
without constructing particular exact solutions. A similar
approach has been exploited recently in application to the reduced
Ostrovsky equation~\cite{Stepanyants,Li} and exROE~\cite{Li},
although in the last case, the complete solution classif\/ication
was not considered.
\pdfbookmark[1]{References}{ref}
\section{Introduction}
In the past few decades, many exotic $X$, $Y$, $Z$ particles have been observed by the Belle, BaBar, BESIII and LHCb collaborations \cite{PDG}; the intriguing fact is that many of their masses lie near hadron-hadron thresholds, which sheds light on possible hadronic molecule interpretations \cite{Guo1}. In 2015, the LHCb collaboration observed two hidden-charm pentaquarks in the $\Lambda_b^0\rightarrow J/\psi pK^-$ decay process \cite{RAaij1}, namely, $P_c(4380)$ and $P_c(4450)$. In 2019, the observation was updated and $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ were reported by the LHCb collaboration \cite{RAaij2}; it was found that $P_c(4450)$ is actually the overlapping peak of $P_c(4440)$ and $P_c(4457)$. In the present work, we focus on the observation reported by the LHCb collaboration in 2020 of the hidden-charm strange pentaquark $P_{cs}(4459)$ in the $J/\psi\Lambda$ mass spectrum from the amplitude analysis of the $\Xi^-_b\rightarrow J/\psi \Lambda K^-$ decay \cite{RAaij3}; the state mass and width are $4458.8\pm2.9^{+4.7}_{-1.1}\,\rm{MeV}$ and $17.3\pm6.5^{+8.0}_{-5.7}\,\rm{MeV}$, respectively.
Due to their exotic hadronic structures, the $P_c$ states have attracted a lot of interest in the strong-interaction area \cite{Guo1,Esposito,Lebed,YRliu}. A typical interpretation is that the $P_c$ states are S-wave hidden-charm meson--baryon molecules with definite isospin $I$, spin $J$ and parity $P$ \cite{Manuel,MingZhu,FuLai,MengLin,JunHe}; inspired by this interpretation, many theoretical groups interpret the newly discovered $P_{cs}(4459)$ in a similar way. For example, in Ref. \cite{FYang}, the authors assume the $P_{cs}(4459)$ to be a $\bar{D}^*\Xi_c$ molecular state and study its strong decays considering both $J^P=(\frac{3}{2})^-$ and $(\frac{1}{2})^-$. In the framework of QCD sum rules \cite{HXChenN}, the study supports the assignment of $P_{cs}(4459)$ as the $\bar{D}^*\Xi_c$ hadronic molecular state with either $J^P=(\frac{1}{2})^-$ or $(\frac{3}{2})^-$. Applying the quasipotential Bethe-Salpeter equation approach \cite{JTZhuN}, the $P_{cs}(4459)$ is interpreted as a $\bar{D}^*\Xi_c$ molecule with $J^P=(\frac{3}{2})^-$. Under the one-boson-exchange model, the authors conclude that this exotic state is not a pure molecular state \cite{RChenN}. As for other arguments about the properties of $P_{cs}(4459)$, one can consult Refs. \cite{CWXiao,KAzizi,Uozdem,YHuang,PPShi}.
Since the $J^P$ of $P_{cs}(4459)$ has not yet been determined experimentally, as the above introduction indicates, the nature of this exotic state is still under debate. In Ref. \cite{WZG-penta-mole-CPC}, our group applied the color-singlet-color-singlet type pentaquark currents to study the $P_c$ and $P_{cs}$ states in a systematic way via the QCD sum rules, and assigned the $P_{cs}(4459)$ a $J^P$ of either $(\frac{1}{2})^-$ or $(\frac{3}{2})^-$; in that paper, the color-singlet-color-singlet type pentaquark currents of the isospin eigenstates were proposed. In Ref. \cite{wangxiuwuN}, the isospins were unambiguously distinguished in the study of hadron-hadron molecules in the framework of QCD sum rules for the first time, and the $P_c(4312)$, $P_c(4380)$, $P_c(4440)$ and $P_c(4457)$ were assigned in detail as the $\bar{D}^{(*)}\Sigma_c^{(*)}$ molecules with low isospin $I=\frac{1}{2}$. Intrigued by our previous works, we are interested in investigating the present topic: what happens if we differentiate the isospins for the pentaquark states with strangeness? Can we clearly determine the nature of $P_{cs}(4459)$? What are the predicted properties of the other possible pentaquark states with strangeness?
Among the popular theoretical methods, the QCD sum rules approach \cite{SVZ1,Reinders} is a powerful tool to study hadronic interactions. It has achieved many successful descriptions, for example of the tetraquark states \cite{WZGXZ1,ZSLXZ1,CHXXZ1,QCFXZ1,WZGXZ11}, tetraquark molecular states \cite{WZGXZ3,WZGXZ4}, pentaquark states \cite{WZGNNN1,WZGXZ5}, pentaquark molecular states \cite{WZG-penta-mole-CPC,CHXXZ2,CHXXZ3,WZGXZ7,WZGXZ9,WZGXZ13}, dibaryon and baryonium states \cite{KodamaXZ1,ChenXZ4,WZGXZ10,WanXZ1,wangxiuwu3} and so on. However, the isospins of the states are seldom differentiated except in our previous calculation \cite{wangxiuwuN}, which shows that the mass of the high-isospin state lies a few dozen $\rm{MeV}$ above that of the low-isospin one. As is known, a deviation of a few dozen $\rm{MeV}$ is enough to confuse the assignment of a state; thus, we argue that the differentiation of the isospin may be one of the key preconditions for an accurate assignment.
The article is organized as follows: in Sect.2, the QCD sum rules for the pentaquark states are derived; the numerical results and discussions are given in Sect.3; Sect.4 is reserved for our conclusions.
\section{QCD sum rules for the pentaquark states}
In the isospin space, the $u$ and $d$ quarks have the isospin eigenvalues $\frac{1}{2}$ and $-\frac{1}{2}$, respectively, thus the $\bar{D}^0$, $\bar{D}^{*0}$, $\bar{D}^-$, $\bar{D}^{*-}$, $\Xi_c^{\prime 0}$, $\Xi_c^{*0}$, $\Xi_c^{\prime +}$ and $\Xi_c^{*+}$ correspond to the isospin eigenstates $|\frac{1}{2},\frac{1}{2}\rangle$, $|\frac{1}{2},\frac{1}{2}\rangle$, $|\frac{1}{2},-\frac{1}{2}\rangle$, $|\frac{1}{2},-\frac{1}{2}\rangle$, $|\frac{1}{2},-\frac{1}{2}\rangle$, $|\frac{1}{2},-\frac{1}{2}\rangle$, $|\frac{1}{2},\frac{1}{2}\rangle$ and $|\frac{1}{2},\frac{1}{2}\rangle$, respectively. We can apply the following color-singlet currents to interpolate the above mesons and baryons,
\begin{eqnarray}
J^{\bar{D}^0}(x)&=&\bar{c}(x)i\gamma_5u(x)\, ,\nonumber \\
J^{\bar{D}^-}(x)&=&\bar{c}(x)i\gamma_5d(x)\, ,\nonumber \\
J^{\bar{D}^{*0}}_{\mu}(x)&=&\bar{c}(x)\gamma_{\mu} u(x)\, , \nonumber\\
J^{\bar{D}^{*-}}_{\mu}(x)&=&\bar{c}(x)\gamma_{\mu} d(x)\, , \nonumber\\
J^{\Xi_c^{\prime 0}}(x)&=&\varepsilon^{ijk}d^{iT}(x)C\gamma_{\mu}s^j(x)\gamma^{\mu}\gamma_5c^k(x)\, ,\nonumber\\
J^{\Xi_c^{*0}}_{\mu}(x)&=&\varepsilon^{ijk}d^{iT}(x)C\gamma_{\mu}s^j(x)c^k(x)\, , \nonumber\\
J^{\Xi_c^{\prime +}}(x)&=&\varepsilon^{ijk}u^{iT}(x)C\gamma_{\mu}s^j(x)\gamma^{\mu}\gamma_5c^k(x)\, , \nonumber\\
J^{\Xi_c^{*+}}_{\mu}(x)&=&\varepsilon^{ijk}u^{iT}(x)C\gamma_{\mu}s^j(x)c^k(x)\, ,
\end{eqnarray}
where the superscripts $i, j, k$ are color indices and $C$ represents the charge conjugation matrix. Based on the above currents of the mesons and baryons, we construct the color-singlet-color-singlet type five-quark currents to study the $\bar{D}\Xi_c^{\prime}$, $\bar{D}\Xi_c^{*}$, $\bar{D}^*\Xi_c^{\prime}$ and $\bar{D}^*\Xi_c^{*}$ pentaquark states,
\begin{eqnarray}
J_{0}^{\bar{D}\Xi_c^{\prime}}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^0}(x)J^{\Xi_c^{\prime 0}}(x)-\frac{1}{\sqrt{2}}J^{\bar{D}^-}(x)J^{\Xi_c^{\prime +}}(x) \, , \nonumber\\
J_{1}^{\bar{D}\Xi_c^{\prime}}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^0}(x)J^{\Xi_c^{\prime0}}(x)+\frac{1}{\sqrt{2}}J^{\bar{D}^-}(x)J^{\Xi_c^{\prime+}}(x)\, ,\nonumber\\
J_{0;\mu}^{\bar{D}\Xi_c^*}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^0}(x)J^{\Xi_c^{*0}}_{\mu}(x)-\frac{1}{\sqrt{2}}J^{\bar{D}^-}(x)J^{\Xi_c^{*+}}_{\mu}(x)\, ,\nonumber\\
J_{1;\mu}^{\bar{D}\Xi_c^*}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^0}(x)J^{\Xi_c^{*0}}_{\mu}(x)+\frac{1}{\sqrt{2}}J^{\bar{D}^-}(x)J^{\Xi_c^{*+}}_{\mu}(x)\, ,\nonumber\\
J_{0;\mu}^{\bar{D}^{*}\Xi_c^{\prime}}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^{*0}}_{\mu}(x)J^{\Xi_c^{\prime0}}(x)-\frac{1}{\sqrt{2}}J^{\bar{D}^{*-}}_{\mu}(x)J^{\Xi_c^{\prime+}}(x)\, ,\nonumber\\
J_{1;\mu}^{\bar{D}^{*}\Xi_c^{\prime}}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^{*0}}_{\mu}(x)J^{\Xi_c^{\prime0}}(x)+\frac{1}{\sqrt{2}}J^{\bar{D}^{*-}}_{\mu}(x)J^{\Xi_c^{\prime+}}(x)\, ,\nonumber\\
J_{0;\mu\nu}^{\bar{D}^{*}\Xi_c^*}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^{*0}}_{\mu}(x)J^{\Xi_c^{*0}}_{\nu}(x)-\frac{1}{\sqrt{2}}J^{\bar{D}^{*-}}_{\mu}(x)J^{\Xi_c^{*+}}_{\nu}(x)+(\mu\leftrightarrow\nu)\, ,\nonumber\\
J_{1;\mu\nu}^{\bar{D}^{*}\Xi_c^*}(x)&=&\frac{1}{\sqrt{2}}J^{\bar{D}^{*0}}_{\mu}(x)J^{\Xi_c^{*0}}_{\nu}(x)+\frac{1}{\sqrt{2}}J^{\bar{D}^{*-}}_{\mu}(x)J^{\Xi_c^{*+}}_{\nu}(x)+(\mu\leftrightarrow\nu)\, ,
\end{eqnarray}
where the subscripts $0$ and $1$ denote the isospins $I=0$ and $1$, respectively \cite{WZG-penta-mole-CPC}, and these currents are isospin eigenstates, either $|0,0\rangle$ or $|1,0\rangle$. Considering the parity operator $\widehat{P}$, since $\widehat{P}\psi\widehat{P}^{-1}=\gamma^0\psi$ and $\widehat{P}\bar{\psi}\widehat{P}^{-1}=\bar{\psi}\gamma^0$, where $\psi$ is a Dirac spinor, one can show that the above eight currents have negative parity. The two-point correlation functions are then written as,
\begin{eqnarray}
\Pi(p)&=&i\int d^4x e^{ip\cdot x}\langle 0 |T\left\{ J (x) \bar{J}(0) \right\}| 0\rangle \, ,\nonumber\\
\Pi_{\mu\nu}(p)&=&i\int d^4x e^{ip\cdot x}\langle 0 |T\left\{ J_{\mu} (x) \bar{J}_{\nu}(0) \right\}| 0\rangle \, ,\nonumber\\
\Pi_{\mu\nu\alpha\beta}(p)&=&i\int d^4x e^{ip\cdot x}\langle 0 |T\left\{ J_{\mu\nu} (x) \bar{J}_{\alpha\beta}(0) \right\}| 0\rangle \, ,
\end{eqnarray}
where the currents
\begin{eqnarray}
J(x)&=&J_{0}^{\bar{D}\Xi_c^{\prime}}(x)\, , \,\,\,J_{1}^{\bar{D}\Xi_c^{\prime}}(x)\, ,\nonumber\\
J_{\mu} (x)&=&J_{0;\mu}^{\bar{D}\Xi_c^*}(x)\, , \,\,\, J_{1;\mu}^{\bar{D}\Xi_c^*}(x)\, ,\,\,\, J_{0;\mu}^{\bar{D}^{*}\Xi_c^{\prime}}(x)\,, \,\,\,J_{1;\mu}^{\bar{D}^{*}\Xi_c^{\prime}}(x)\, , \nonumber\\
J_{\mu\nu} (x)&=&J_{0;\mu\nu}^{\bar{D}^{*}\Xi_c^*}(x)\, , \, \,\, J_{1;\mu\nu}^{\bar{D}^{*}\Xi_c^*}(x)\, .
\end{eqnarray}
Note that the currents $J(x)$, $J_\mu(x)$ and $J_{\mu\nu}(x)$ can potentially couple to pentaquark states of both negative and positive parity. At the hadron side, we isolate the contribution of the ground state and write the correlation functions as,
\begin{eqnarray}
\Pi(p) & = & {(\lambda^{-}_{\frac{1}{2}})}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} } + {(\lambda^{+}_{\frac{1}{2}})}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} } +\cdots \, ,\nonumber\\
&=&\Pi_{\frac{1}{2}}^1(p^2)\!\not\!{p}+\Pi_{\frac{1}{2}}^0(p^2)\, ,
\end{eqnarray}
\begin{eqnarray}
\Pi_{\mu\nu}(p) & = & {(\lambda^{-}_{\frac{3}{2}})}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} } \left(- g_{\mu\nu}\right)+ {(\lambda^{+}_{\frac{3}{2}})}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} } \left(- g_{\mu\nu}\right) +\cdots \, ,\nonumber\\
&=&-\Pi_{\frac{3}{2}}^1(p^2)\!\not\!{p}\,g_{\mu\nu}-\Pi_{\frac{3}{2}}^0(p^2)\,g_{\mu\nu}+\cdots\, ,
\end{eqnarray}
\begin{eqnarray}
\Pi_{\mu\nu\alpha\beta}(p) & = & {(\lambda^{-}_{\frac{5}{2}})}^2 {\!\not\!{p}+ M_{-} \over M_{-}^{2}-p^{2} } \left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}
\right)+ {(\lambda^{+}_{\frac{5}{2}})}^2 {\!\not\!{p}- M_{+} \over M_{+}^{2}-p^{2} } \left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right) +\cdots \, , \nonumber\\
&=&\Pi_{\frac{5}{2}}^1(p^2)\!\not\!{p}\left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right)+\Pi_{\frac{5}{2}}^0(p^2)\,\left( g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right)+ \cdots \, ,
\end{eqnarray}
where the subscripts $\frac{1}{2}$, $\frac{3}{2}$ and $\frac{5}{2}$ are the spins of the pentaquark states, the superscripts (subscripts) $\pm$ of $\lambda$ ($M$) denote positive or negative parity, respectively, and the $\lambda$'s with subscripts and superscripts are the pole residues, which are used for the calculation of decays in the framework of QCD sum rules. The isospin indices are not displayed in the above expressions.
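The isospin content of the current combinations in equation (2) can be checked numerically (an editorial sketch): with $\bar{D}^0$, $\Xi_c^{\prime+}$ carrying $I_3=+\frac{1}{2}$ and $\bar{D}^-$, $\Xi_c^{\prime0}$ carrying $I_3=-\frac{1}{2}$, the relative minus sign yields an eigenstate of the total isospin with $I=0$, and the plus sign one with $I=1$.

```python
import numpy as np

# Total isospin operator for two isospin-1/2 constituents, T = T_meson + T_baryon.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
eye = np.eye(2)
T = [np.kron(s, eye) + np.kron(eye, s) for s in (sx, sy, sz)]
T2 = sum(t @ t for t in T)                                   # total T^2

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])          # I3 = +1/2, -1/2
state_0 = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)   # like J_0 (minus sign)
state_1 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)   # like J_1 (plus sign)
print((state_0 @ T2 @ state_0).real)    # 0.0 -> I(I+1) = 0, i.e. I = 0
print((state_1 @ T2 @ state_1).real)    # 2.0 -> I(I+1) = 2, i.e. I = 1
```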
The correlation functions are contracted at the quark level via the Wick theorem and are thus expressed in terms of full quark propagators. After the operator product expansion, complicated structures of the correlation functions are derived. Following the method applied in our previous studies \cite{WZG-penta-mole-CPC,WZGNNN1}, for the correlation functions $\Pi(p)$ we pick out the structures $\!\not\!{p}$ and $1$; thus, we apply the currents $J(x)$ to investigate the pentaquark states with $J^P={(\frac{1}{2})}^\mp$. We select the structures $\!\not\!{p}g_{\mu\nu}$ and $g_{\mu\nu}$ for the correlation functions $\Pi_{\mu\nu}(p)$; then the axial-vector currents $J_\mu(x)$ are used to couple to the states with $J^P={(\frac{3}{2})}^\mp$. We choose the structures $\!\not\!{p}\left(g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}\right)$ and $g_{\mu\alpha}g_{\nu\beta}+g_{\mu\beta}g_{\nu\alpha}$ from the correlation functions $\Pi_{\mu\nu\alpha\beta}(p)$; in this way, the tensor currents $J_{\mu\nu}$ are applied to study the states with $J^P={(\frac{5}{2})}^\mp$.
We carefully analyze the contributions of all the related terms of the vacuum condensates after the operator product expansion. The highest dimension of the vacuum condensates is determined by the leading-order Feynman diagrams; the corresponding condensates are $\langle\frac{\alpha_s}{\pi}GG\rangle\langle\overline{q}q\rangle^3$ and $\langle\overline{q}g_s\sigma Gq\rangle^2\langle\overline{q}q\rangle$ with dimension $13$. Vacuum condensates proportional to the strong fine-structure constant, $\mathcal{O}(\alpha_s^k)$ with $k\leq1$, are selected for the calculation \cite{wangxiuwu}. Thus, in this work, there are solid reasons for us to choose the terms $\langle\bar{q}q\rangle$, $\langle\frac{\alpha_s}{\pi}GG\rangle$, $\langle\overline{q}g_s\sigma Gq\rangle$, $\langle\overline{q}q\rangle^2$, $\langle\frac{\alpha_s}{\pi}GG\rangle\langle\bar{q}q\rangle$, $\langle\overline{q}g_s\sigma Gq\rangle\langle\overline{q}q\rangle$, $\langle\bar{q}q\rangle^3$, $\langle\overline{q}g_s\sigma Gq\rangle^2$, $\langle\frac{\alpha_s}{\pi}GG\rangle\langle\overline{q}q\rangle^2$, $\langle\overline{q}g_s\sigma Gq\rangle\langle\bar{q}q\rangle^2$, $\langle\overline{q}q\rangle^4$, $\langle\overline{q}g_s\sigma Gq\rangle^2\langle\overline{q}q\rangle$ and $\langle\frac{\alpha_s}{\pi}GG\rangle\langle\overline{q}q\rangle^3$, where $q=u,\,d$ and $s$. Considering that the masses of the light quarks $u$ and $d$ are too small to make an appreciable difference, we set them to zero and keep the terms of the vacuum condensates proportional to $m_s$, the mass of the $s$ quark; we discard the terms proportional to $m_s^k$ for $k\geq 2$, whose contributions are negligible.
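The dimension bookkeeping behind this truncation can be checked with a few lines (an editorial sketch; the labels are shorthand for $\langle\bar{q}q\rangle$, $\langle\frac{\alpha_s}{\pi}GG\rangle$ and $\langle\bar{q}g_s\sigma Gq\rangle$, with mass dimensions 3, 4 and 5):

```python
# Mass dimensions of the basic condensates and of the retained products:
# none of the terms listed in the text exceeds dimension 13.
DIM = {"qq": 3, "GG": 4, "qGq": 5}
terms = [
    ["qq"], ["GG"], ["qGq"],
    ["qq"] * 2, ["GG", "qq"], ["qGq", "qq"],
    ["qq"] * 3, ["qGq"] * 2, ["GG", "qq", "qq"], ["qGq", "qq", "qq"],
    ["qq"] * 4, ["qGq", "qGq", "qq"], ["GG", "qq", "qq", "qq"],
]
dims = [sum(DIM[f] for f in t) for t in terms]
print(max(dims))    # 13, reached by <GG><qq>^3 and <qGq>^2<qq>
```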
After the operator product expansion and the selection of the vacuum condensates, we solve the integrals in coordinate and momentum space, and then perform the Borel transform of the correlation functions with respect to $P^2=-p^2$,
\begin{eqnarray}
\widehat{B}_{T^2}{(P^2)}\Pi(p)=\int_{\Delta^2}^{\infty}ds\rho_{QCD}(s)\rm{exp}(-\frac{\it{s}}{\it{T}^{\rm{2}}})\, ,
\end{eqnarray}
\noindent where $\rho_{QCD}(s)$ is the QCD spectral density, $\Delta^2=4m_c^2$ in the present study, $T^2$ is the Borel parameter and $\widehat{B}_{T^2}{(P^2)}$ is the Borel operator which is defined as,
\begin{eqnarray}
\widehat{B}_{T^2}(P^2)=\lim_{-p^2,n\rightarrow\infty\atop -p^2/n\rightarrow T^2}\frac{(-p^2)^{(n+1)}}{n!}(\frac{d}{dp^2})^n\, .
\end{eqnarray}
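The limit definition above can be illustrated on the simplest single-pole term (an editorial sketch): for $1/(P^2+m^2)$ with $P^2=-p^2$, the $n$-th derivative is known in closed form, so the limit reduces to $\left(P^2/(P^2+m^2)\right)^{n+1}$ at $P^2=nT^2$, which tends to the familiar Borel-transformed pole $\exp(-m^2/T^2)$.

```python
import math

# Borel transform of 1/(P2 + m2): the finite-n expression converges to exp(-m2/T2).
m2, T2 = 1.0, 1.0
for n in (10**3, 10**5, 10**7):
    P2 = n * T2
    print(n, (P2 / (P2 + m2))**(n + 1))   # approaches exp(-1) = 0.3678794...
print(math.exp(-m2 / T2))
```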
Note that, due to the structure of the correlation functions of the pentaquark currents, the QCD spectral densities $\rho_{QCD}(s)$ contain two parts at the quark-gluon level, denoted as $\rho^1_{QCD}(s)$ from $\Pi^1(p^2)$ and $\rho^0_{QCD}(s)$ from $\Pi^0(p^2)$, respectively. In the present work, the parities of the constructed currents are all negative; thus, it is natural for us to choose negative parity for the potential pentaquark states. We take the quark-hadron duality below the continuum thresholds $s_0$ and obtain the QCD sum rules for the pentaquark states,
\begin{eqnarray}\label{QCDN}
2M_{-}{(\lambda^{-})}^2\exp\left( -M_{-}^2\tau\right)
&=& \int_{4m_c^2}^{s_0}ds \left[\sqrt{s}\rho^1_{QCD}(s)+\rho^0_{QCD}(s)\right]\exp\left( -\tau s\right)\, ,
\end{eqnarray}
\begin{eqnarray}\label{QCDSR-M}
M^2_{-} &=& \frac{-\frac{d}{d \tau}\int_{4m_c^2}^{s_0}ds \,\left[\sqrt{s}\,\rho^1_{QCD}(s)+\,\rho^0_{QCD}(s)\right]\exp\left(- \tau s\right)}{\int_{4m_c^2}^{s_0}ds \left[\sqrt{s}\,\rho_{QCD}^1(s)+\,\rho^0_{QCD}(s)\right]\exp\left( -\tau s\right)}\, ,
\end{eqnarray}
where $\tau=\frac{1}{T^2}$. For simplicity, the detailed expressions of the complicated spectral densities $\rho^1_{QCD}(s)$ and $\rho^0_{QCD}(s)$ are not shown here; they can be obtained from us via email.
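As an editorial toy illustration of how equation~\eqref{QCDSR-M} extracts the mass: if the spectral weight $\sqrt{s}\,\rho^1_{QCD}+\rho^0_{QCD}$ is dominated by one narrow state at $s=M^2$, the ratio of the two Borel-weighted moments returns $M^2$. The numbers below (a peak at a hypothetical $M=4.46\,\rm{GeV}$, $\tau=0.25\,\rm{GeV}^{-2}$) are chosen only for the demonstration.

```python
import numpy as np

# Toy single-resonance spectral weight: a narrow Gaussian peak at s = M**2.
M_true = 4.46                                   # hypothetical mass in GeV
s = np.linspace(17.0, 23.0, 200001)             # s in GeV^2, covering the peak
weight = np.exp(-(s - M_true**2)**2 / (2.0 * 0.01**2))

tau = 0.25                                      # 1/T^2 in GeV^-2
w = weight * np.exp(-tau * s)
M2_extracted = np.sum(s * w) / np.sum(w)        # = -d/dtau log(integral)
print(np.sqrt(M2_extracted))                    # about 4.46
```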
\section{Numerical results and discussions}
We apply the standard values of the vacuum condensates $\langle\overline{q}q\rangle=-(0.24\pm0.01\;{\rm GeV})^3$, $\langle\bar{s}s\rangle=(0.8\pm0.1)\langle\bar{q}q\rangle$, $\langle\overline{q}g_s\sigma Gq\rangle=m_0^2\langle\overline{q}q\rangle$, $\langle\overline{s}g_s\sigma Gs\rangle=m_0^2\langle\overline{s}s\rangle$, $m_0^2=(0.8\pm0.1)\;{\rm GeV}^2$, $\langle\frac{\alpha_s}{\pi}GG\rangle=(0.012\pm0.004)\;{\rm GeV}^4$ at the energy scale $\mu=1\;{\rm GeV}$ \cite{SVZ1,Reinders,ColangeloReview}, and choose the $\overline{\rm MS}$ mass $m_c(m_c)=(1.275\pm0.025)\;{\rm GeV}$ and $m_s(\mu=2\;{\rm GeV})=(0.095\pm0.005)\;{\rm GeV}$ from the Particle Data Group \cite{PDG}. We consider the energy-scale dependence of these parameters,
\begin{eqnarray}
\notag \langle\overline{q}q\rangle(\mu)&&=\;\;\langle\overline{q}q\rangle(1{\rm GeV})\left[\frac{\alpha_s(1{\rm GeV})}{\alpha_s(\mu)}\right]^{\frac{12}{33-2n_f}}\, ,\\
\notag \langle\overline{s}s\rangle(\mu)&&=\;\;\langle\overline{s}s\rangle(1{\rm GeV})\left[\frac{\alpha_s(1{\rm GeV})}{\alpha_s(\mu)}\right]^{\frac{12}{33-2n_f}}\, ,\\
\notag \langle\overline{q}g_s\sigma Gq\rangle(\mu)&& =\;\;\langle\overline{q}g_s\sigma Gq\rangle(1{\rm GeV})\left[\frac{\alpha_s(1{\rm GeV})}{\alpha_s(\mu)}\right]^{\frac{2}{33-2n_f}}\, ,\\
\notag \langle\overline{s}g_s\sigma Gs\rangle(\mu)&& =\;\;\langle\overline{s}g_s\sigma Gs\rangle(1{\rm GeV})\left[\frac{\alpha_s(1{\rm GeV})}{\alpha_s(\mu)}\right]^{\frac{2}{33-2n_f}}\, ,\\
\notag m_c(\mu)&&=\;\;m_c(m_c)\left[\frac{\alpha_s(\mu)}{\alpha_s(m_c)}\right]^{\frac{12}{33-2n_f}}\, ,\\
\notag m_s(\mu)&&=\;\;m_s(2{\rm GeV})\left[\frac{\alpha_s(\mu)}{\alpha_s(2{\rm GeV})}\right]^{\frac{12}{33-2n_f}}\, ,\\
\notag \alpha_s(\mu)&&=\;\;\frac{1}{b_0t}\left[1-\frac{b_1}{b_0^2}\frac{\log t}{t}+\frac{b_1^2\left(\log^2 t-\log t-1\right)+b_0b_2}{b_0^4t^2}\right]\, ,
\end{eqnarray}
where $t=\log\frac{\mu^2}{\Lambda_{QCD}^2}$, $b_0=\frac{33-2n_f}{12\pi}$, $b_1=\frac{153-19n_f}{24\pi^2}$, $b_2=\frac{2857-\frac{5033}{9}n_f+\frac{325}{27}n_f^2}{128\pi^3}$
and $\Lambda_{QCD}=213$ MeV, $296$ MeV, $339$ MeV for the flavors $n_f=5,\,4,\,3$, respectively \cite{PDG,Narison}. Since the pentaquark states with strangeness in the present study involve the $u$, $d$, $s$ and $c$ quarks, we set the flavor number $n_f=4$. As for the energy scale, we take account of the light-flavor $SU(3)$ mass-breaking effect and apply the modified energy scale formula \cite{WZG-penta-mole-CPC,WZGNNN2},
\begin{eqnarray}
\mu=\sqrt{M_{X/Y/Z/P}^2-4\mathbb{M}_c^2}-k\mathbb{M}_s\, ,
\end{eqnarray}
where $\mathbb{M}_c$ represents the effective charm quark mass, for which we choose the updated value $\mathbb{M}_c=1.85\pm0.01\;\rm{GeV}$ \cite{WZG-penta-mole-CPC}; the $s$-quark number is $k=1$ in the present study, and $\mathbb{M}_s=0.2\;\rm{GeV}$ \cite{WZGNNN2}.
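The running formulas above can be evaluated directly. The Python sketch below implements the three-loop $\alpha_s(\mu)$ expression and the modified energy scale formula with the quoted values $\Lambda_{QCD}=296\;{\rm MeV}$ for $n_f=4$, $\mathbb{M}_c=1.85\;{\rm GeV}$, $\mathbb{M}_s=0.2\;{\rm GeV}$; it is a numerical illustration only, and the function names are ours.

```python
import math

def alpha_s(mu, Lam=0.296, nf=4):
    """Three-loop running coupling, as written in the text (mu, Lam in GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    b1 = (153 - 19 * nf) / (24 * math.pi ** 2)
    b2 = (2857 - 5033 / 9 * nf + 325 / 27 * nf ** 2) / (128 * math.pi ** 3)
    t = math.log(mu ** 2 / Lam ** 2)
    L = math.log(t)
    return (1 / (b0 * t)) * (1 - b1 / b0 ** 2 * L / t
                             + (b1 ** 2 * (L ** 2 - L - 1) + b0 * b2) / (b0 ** 4 * t ** 2))

def energy_scale(M, Mc=1.85, Ms=0.2, k=1):
    """Modified energy-scale formula: mu = sqrt(M^2 - 4 Mc^2) - k Ms (GeV)."""
    return math.sqrt(M ** 2 - 4 * Mc ** 2) - k * Ms

mu = energy_scale(4.46)                  # a state near 4.46 GeV gives mu ~ 2.3 GeV
assert 2.2 < mu < 2.4
assert 0 < alpha_s(mu) < alpha_s(1.0)    # the coupling decreases with the scale
```

Running the condensates and quark masses from $1\;{\rm GeV}$ (or $2\;{\rm GeV}$ for $m_s$) to this $\mu$ then amounts to multiplying by the ratios of $\alpha_s$ raised to the anomalous-dimension exponents listed above.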
The pole dominance and the convergence of the operator product expansion are the two basic criteria of the QCD sum rules. To test whether the calculations satisfy these two criteria, we define the pole contribution (PC) and the contribution $D(n)$ of the vacuum condensates of dimension $n$,
\begin{eqnarray}
{\rm PC}&=&\frac{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD}^1(s)+\rho_{QCD}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}
{\int_{4m_c^2}^{\infty}ds\left[\sqrt{s}\rho_{QCD}^1(s)+\rho_{QCD}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}\, ,
\end{eqnarray}
\begin{eqnarray}
D(n)&=&\frac{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD;n}^1(s)+\rho_{QCD;n}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}
{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD}^1(s)+\rho_{QCD}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}\, ,
\end{eqnarray}
where $\rho_{QCD;n}^1(s)$ and $\rho_{QCD;n}^0(s)$ are the spectral densities of the $n$-dimensional vacuum condensates picked out from $\rho_{QCD}^1(s)$ and $\rho_{QCD}^0(s)$, respectively. In this paper, we also examine in detail the contributions of the vacuum condensate terms proportional to the $s$-quark mass $m_s$; to this end, we define $D(m_s)$ and $D(m_s,n)$ as,
\begin{eqnarray}
D(m_s)&=&\frac{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD;m_s}^1(s)+\rho_{QCD;m_s}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}
{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD}^1(s)+\rho_{QCD}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}\, ,
\end{eqnarray}
\begin{eqnarray}
D(m_s,n)&=&\frac{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD;n,m_s}^1(s)+\rho_{QCD;n,m_s}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}
{\int_{4m_c^2}^{s_0}ds\left[\sqrt{s}\rho_{QCD}^1(s)+\rho_{QCD}^0(s)\right]\exp\left(-\frac{s}{T^2}\right)}\, ,
\end{eqnarray}
where $\rho_{QCD;m_s}^1$ and $\rho_{QCD;m_s}^0$ refer to the spectral densities proportional to $m_s$ picked out from $\rho_{QCD}^1$ and $\rho_{QCD}^0$, while $\rho_{QCD;n,m_s}^1$ and $\rho_{QCD;n,m_s}^0$ are the spectral densities with $n$-dimensional vacuum condensates selected from $\rho_{QCD;m_s}^1$ and $\rho_{QCD;m_s}^0$, respectively.
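The behaviour of the pole contribution can be explored numerically with a toy spectral density. In the sketch below, $\rho(s)\propto s^2$ stands in for the true $\rho_{QCD}$ (which it is not), and the infinite upper limit is truncated; it shows that ${\rm PC}$ lies between $0$ and $1$ and grows as the Borel parameter $T^2$ decreases, which is how the window with ${\rm PC}\approx(40-60)\%$ is tuned.

```python
import math

def pole_contribution(rho, s_min, s0, T2, s_max=400.0, n=20000):
    """PC = int_{s_min}^{s0} rho(s) e^{-s/T^2} ds / int_{s_min}^{infty} (same),
    with the infinite upper limit truncated at s_max (midpoint rule)."""
    def integral(a, b):
        ds = (b - a) / n
        return sum(rho(a + (i + 0.5) * ds) * math.exp(-(a + (i + 0.5) * ds) / T2)
                   for i in range(n)) * ds
    return integral(s_min, s0) / integral(s_min, s_max)

rho = lambda s: s ** 2        # toy QCD-like growth; NOT the actual rho_QCD here
s_min = 4 * 1.275 ** 2        # 4 m_c^2 in GeV^2
pc = pole_contribution(rho, s_min, s0=25.0, T2=3.2)
assert 0.0 < pc < 1.0
assert pole_contribution(rho, s_min, s0=25.0, T2=2.5) > pc   # smaller T^2 raises PC
```

The opposite pull (convergence of the operator product expansion favours larger $T^2$) is what fixes the Borel window from both sides.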
To determine the Borel platforms, we must select suitable energy scales, continuum threshold parameters and Borel parameters. For the masses of the pentaquark states $\bar{D}\Xi^{\prime}_c$, $\bar{D}\Xi^{*}_c$, $\bar{D}^*\Xi^{\prime}_c$ and $\bar{D}^*\Xi^{*}_c$ with high and low isospins, we find that the masses with high isospin $I=1$ lie a few dozen $\rm{MeV}$ above the low-isospin ones if we adopt the same input parameters, including the threshold parameter $s_0$, the Borel parameter $T^2$, the energy scale $\mu$ and so on. With identical parameters, although both members of a pair obey the convergence of the operator product expansion, they can never uniformly satisfy the other basic requirements of the QCD sum rules, namely the pole dominance criterion and the energy scale formula. We therefore adjust the parameters slightly and determine them via trial and error; the detailed steps can be found in Ref.\:\cite{wangxiuwu}.
Via trial and error, the Borel platforms are determined and shown in Fig.\:\ref{fig4}, and the numerical results extracted from them are displayed in Table \ref{table}, where one can also find the thresholds of the corresponding meson-baryon pairs. The present study belongs to a systematic research program on color singlet-singlet type pentaquark states in isospin eigenstates; the results for the $P_c$ states from our previous study \cite{wangxiuwuN} are also collected in Table \ref{table} to give a complete physical picture of the S-wave meson-baryon molecules in isospin eigenstates, with and without strangeness. The dimensional contributions at the centers of the Borel platforms are shown in Fig.\:\ref{fig1}; moreover, we draw the dimensional contributions of the vacuum condensates proportional to $m_s$ in Fig.\:\ref{fig2}. Fig.\:\ref{fig3} is used to analyze the uncertainties of the masses and pole residues for the chosen examples, the $\bar{D}\Xi^{*}_c$ pentaquark states with high and low isospins.
From Table \ref{table}, the pole contributions of all the considered states are around $(40-60)\%$, which means the pole dominance criterion is well satisfied. In Fig.\:\ref{fig1}, one can find that the high-dimensional vacuum condensates play a tiny role; the data show that $|D(12)|$ and $|D(13)|$ are less than $0.7\%$ for these eight states, thus the convergence of the operator product expansion also holds well. The most important contributions come from the leading order, $\langle \bar{q}q\rangle$ and $\langle \bar{q}q\rangle^2$, where $q=u,\,d\,\,\rm{and}\,\,s$. The contribution of the gluon condensate $\langle\frac{\alpha_s}{\pi}GG\rangle$ is small: for all the states, $|D(4)|<4\%$. We calculate in detail the contributions of the terms proportional to $m_s$ picked out from the spectral densities, and find that $D(m_s)$ is around $5\%$ for all eight states; thus it is accurate enough to consider the vacuum condensates proportional to $m_s^k$ only up to $k=1$. Interestingly, for the vacuum condensate terms proportional to $m_s$, $|D(m_s,\,4)|$, $|D(m_s,\,7)|$, $|D(m_s,\,9)|$, $|D(m_s,\,10)|$, $|D(m_s,\,11)|$, $|D(m_s,\,12)|$ and $|D(m_s,\,13)|$ are less than $0.5\%$, and the contributions of the high-dimensional terms are even smaller; it is therefore safe to simplify the operator product expansion by neglecting the $s$-quark mass in those terms, which saves much trivial calculation. For the terms proportional to $m_s$, the contributions worth calculating come from the leading order, $\langle \bar{q}q\rangle$, $\langle\overline{q}g_s\sigma Gq\rangle$, $\langle \bar{q}q\rangle^2$ and $\langle \bar{q}q\rangle\langle\overline{q}g_s\sigma Gq\rangle$, where $q=u,\,d$ and $s$.
The numerical results of the masses and pole residues are shown in the Table \ref{table} (also see Fig. \ref{fig4}), and their uncertainties are calculated via the standard error analysis formula,
\begin{eqnarray}
\notag\delta_f=\sqrt{\sum_i\left(\frac{\partial f}{\partial x_i}\right)^2(x_i-\bar x_i)^2}\approx \sqrt{\sum_i\left[f(\bar x_i\pm \Delta x_i)-f(\bar x_i)\right]^2}\, ,
\end{eqnarray}
where $f$ denotes the mass or the pole residue, the $\bar x_i$ represent the central values of the input parameters $s_0$, $\langle\bar{q}q\rangle$, $m_c(m_c)$, $m_0$, $\mathbb{M}_c$, $m_s$, $\langle\frac{\alpha_s}{\pi}GG\rangle$ and $\langle\bar{s}s\rangle$, and the $\Delta x_i$ are the corresponding uncertainties, respectively. Since the uncertainties are of great importance for the judgement of the states, it is meaningful to figure out the contribution of the uncertainty due to each individual input parameter. We define the relative uncertainty due to an individual parameter $x_i$ as,
\begin{eqnarray}
\notag\delta^{\prime}(x_i^{\pm})=\frac{|f(\bar x_i\pm\Delta x_i)-f(\bar x_i)|}{\delta_f^{\pm}}\, ,
\end{eqnarray}
where the $+$ and $-$ refer to the upper and lower bounds of the uncertainties, and $\delta_f^{\pm}$ denote the corresponding upper and lower bounds of the uncertainties of the mass and pole residue, respectively. For convenience of discussion, the relative uncertainties due to the individual parameters are normalized as,
\begin{eqnarray}
\notag\delta(x_i^{\pm})=\frac{\delta^{\prime}(x_i^{\pm})}{\sum\limits_{i}\delta^{\prime}(x_i^{\pm})}\, .
\end{eqnarray}
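A possible numerical realisation of this error budget, simplified to symmetric shifts combined in quadrature and with a toy observable standing in for the extracted mass, is sketched below; the function names and the toy $f$ are our assumptions, not the actual sum-rule output.

```python
import math

def error_budget(f, central, deltas):
    """Given f(params dict), central values and one-sigma shifts, return the
    combined uncertainty (quadrature) and each parameter's normalised share."""
    shifts = {k: abs(f({**central, k: central[k] + deltas[k]}) - f(central))
              for k in central}
    delta_f = math.sqrt(sum(v ** 2 for v in shifts.values()))
    rel = {k: v / delta_f for k, v in shifts.items()}   # delta'(x_i)
    total = sum(rel.values())
    return delta_f, {k: v / total for k, v in rel.items()}  # delta(x_i)

# Toy observable: a crude stand-in for the extracted mass, NOT the real sum rule.
f = lambda p: math.sqrt(p["s0"]) - 0.1 * p["mc"]
central = {"s0": 25.0, "mc": 1.275}
deltas = {"s0": 1.0, "mc": 0.025}
df, share = error_budget(f, central, deltas)
assert abs(sum(share.values()) - 1.0) < 1e-12   # shares are normalised
assert share["s0"] > share["mc"]                # s0 dominates, as in the text
```

The normalisation guarantees that the $\delta(x_i^{\pm})$ sum to unity, so each value can be read directly as a fractional share of the error budget.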
We show the numerical results of $\delta(x_i^{\pm})$ for the $\bar{D}\Xi^*_c$ pentaquark states with isospin $I=0$ and $I=1$ in Fig.\:\ref{fig3}. One can find that the uncertainties mainly come from the threshold parameter $s_0$, which contributes around $50\%$ of the net relative uncertainties, while $\langle\bar{q}q\rangle$, $m_c(m_c)$, $m_0$ and $\mathbb{M}_c$ account for around $(10-20)\%$ each. As expected, the upper and lower bounds of the uncertainties do not differ much. The uncertainties of $\langle\frac{\alpha_s}{\pi}GG\rangle$ and $m_s$ play a tiny role for both the masses and the pole residues, which is reasonable in view of the analyses of the dimensional contributions. Interestingly, a similar conclusion holds for the other six states.
The numerical results for the masses and their uncertainties shown in Table \ref{table} are extracted from the centers of the corresponding Borel platforms. The data reveal that the masses of the states with isospin $I=0$ lie slightly below the related thresholds of their meson and baryon constituents, while the states with isospin $I=1$ lie a few dozen $\rm{MeV}$ above the corresponding thresholds. Considering that the upper and lower bounds of the mass uncertainties are around $70\,\rm{MeV}$, and comparing the state masses with the meson-baryon thresholds shown in Table \ref{table}, we can only conclude that the states with $I=0$ are possible molecular states and the states with $I=1$ are most likely resonance states. In particular, for the exotic pentaquark state $P_{cs}(4459)$: since the masses of the $\bar{D}^*\Xi_c^{\prime}$ states lie more than $0.1\,\rm{GeV}$ above it, and those of the $\bar{D}^*\Xi_c^{*}$ states around $0.2\,\rm{GeV}$ above it, $P_{cs}(4459)$ is unlikely to be a $\bar{D}^*\Xi_c^{\prime}$ or $\bar{D}^*\Xi_c^{*}$ molecular state. The mass of the $\bar{D}\Xi^{\prime}_c$ state with isospin $I=1$ is $4.45_{-0.08}^{+0.07}\,\rm{GeV}$, which is near $4459\,\rm{MeV}$; however, this state lies slightly above the meson-baryon threshold, so it is reasonable to assign it as a resonance state, and, more significantly, the observed decay mode \cite{RAaij3} indicates that the isospin of the exotic state $P_{cs}(4459)$ is zero. The mass of the $\bar{D}\Xi_c^{*}$ state with $I=1$ is $4.53_{-0.07}^{+0.07}\,\rm{GeV}$, about $70\,\rm{MeV}$ above $4459\,\rm{MeV}$; even the lower bound of its uncertainty lies slightly above $P_{cs}(4459)$, and moreover its isospin is $I=1$.
The mass of the $\bar{D}\Xi_c^{*}$ state with low isospin is $4.46\,\rm{GeV}$, in good agreement with the experimental result for $P_{cs}(4459)$; thus it is natural to assign $P_{cs}(4459)$ as the molecular state of $\bar{D}\Xi_c^{*}$ with isospin $I=0$. We then determine the $J^P$ of $P_{cs}(4459)$ to be $(\frac{3}{2})^-$, with a binding energy of about $50\,\rm{MeV}$.
The pole residues of the considered states are of the order of magnitude of $10^{-3}\,\rm{GeV}^6$; the detailed numbers are listed in Table \ref{table}, and the relative uncertainty of each pole residue is around $12\%$. The pole residues can be used to calculate the decays of these pentaquark states via QCD sum rules based on three-point correlation functions, which is a challenging subject for our future work. In the present paper, we do not show the $\lambda-T^2$ curves; they are available upon request via Email.
\section{Conclusions}
In the present work, we construct color singlet-singlet type five-quark currents with strangeness to study the pentaquark states in the isospin eigenstates $I=0$ and $I=1$ via the QCD sum rules. We make a detailed technical analysis of the contributions of the vacuum condensates proportional to $m_s$. The data show that it is accurate enough to neglect the terms of order $m_s^k$ with $k\geq 2$; moreover, only several low-dimensional terms proportional to $m_s$ are worth calculating, a conclusion that helps to avoid trivial work in future studies. We analyze the uncertainties of both the masses and the pole residues, and find that the uncertainties mainly come from the threshold parameter $s_0$, which accounts for around $50\%$ for the considered states, while the parameters $\langle\bar{q}q\rangle$, $m_c(m_c)$, $m_0$ and $\mathbb{M}_c$ contribute around $(10-20)\%$ each and the others play a tiny role. This may serve as a reference for making the calculations even more accurate in future work. We obtain the masses of the $\bar{D}\Xi^{\prime}_c$, $\bar{D}\Xi_c^{*}$, $\bar{D}^{*}\Xi_c^{\prime}$ and $\bar{D}^{*}\Xi_c^{*}$ pentaquarks with isospins $I=0$ and $I=1$. The results show that the masses with low isospin lie a few dozen $\rm{MeV}$ below the high-isospin ones and slightly below the corresponding thresholds of the meson and baryon pairs; thus we assign these low-isospin states as possible molecular states. The masses with high isospin all lie above the corresponding thresholds of their meson and baryon constituents, and these states are assigned as possible resonance states.
The mass of the $\bar{D}\Xi_c^{*}$ state with isospin quantum number $I=0$ coincides well with that of the exotic state $P_{cs}(4459)$; after analyzing the results, we consider it natural and reasonable to assign $P_{cs}(4459)$ as the $\bar{D}\Xi_c^{*}$ molecular state with $IJ^P=0(\frac{3}{2})^-$, with a binding energy of around $50\,\rm{MeV}$. The numerical results and conclusions of this paper may serve as a reference for the experimental search for pentaquark states with strangeness beyond $P_{cs}(4459)$; they can be confronted with future experimental observations through the $J/\psi \Lambda$ invariant mass spectrum. If the other strange pentaquark states predicted in the present work are observed, this will in turn solidify the assignment of $P_{cs}(4459)$ and shed light on low-energy QCD dynamics.
\section*{Acknowledgements}
This work is supported by National Natural Science Foundation, Grant Number 12175068 and Youth Foundation of NCEPU, Grant Number 93209703.
\section{Introduction}
\setlength{\parindent}{2em}
Let $\mathbb{F}_q$ be a finite field of odd characteristic and $n\geq 1$ an integer. Let $\mathbb{F}_q^{(n)}=\{(a_1, \ldots,a_n) : a_i\in \mathbb{F}_q\ {\rm for}\ i=1,\ldots,n\}$ be the $n$-dimensional vector space over $\mathbb{F}_q$. For an $m$-dimensional vector subspace $P$ of $\mathbb{F}_q^{(n)}$, any $m\times n$ matrix whose rows, $\alpha_1,\ldots, \alpha_m$, form a basis of the space $P$ is called a \emph{matrix representation} of the space $P$ and is also denoted by $P$. To simplify notation, sometimes we write $[\alpha_1,\ldots,\alpha_m]$ instead of the matrix $P$. Let $e_1=(1,0,\ldots,0), e_2=(0,1,\ldots,0), \ldots, e_n=(0,0,\ldots,1)$. Then $e_i$, $i=1,2,\ldots,n$, form a basis of $\mathbb{F}_q^{(n)}$. The set of all non-zero elements of $\mathbb{F}_q$, denoted by $\mathbb{F}_q^{*}$, forms a cyclic group under field multiplication, called the \emph{multiplicative group} of $\mathbb{F}_q$. The set of all square elements of $\mathbb{F}_q^*$, denoted by $\mathbb{F}_q^{*2}$, is a subgroup of $\mathbb{F}_q^*$. Let $z$ be a fixed non-square element in $\mathbb{F}_q^*$ and
\begin{displaymath}
S_{2\nu+\delta,\Delta}=
\left( \begin{array}{ccc}
0 & I^{(\nu)} & \\
I^{(\nu)}& 0 & \\
& & \Delta
\end{array} \right),
\end{displaymath}
be four $(2\nu+\delta)\times (2\nu+\delta)$ nonsingular symmetric matrices, where~
\begin{displaymath}\Delta= \left\{ \begin{array}{ll}
\emptyset, & \textrm{if $\delta=0,$}\\
(1)\ {\rm or}\ (z), & \textrm{if $\delta=1,$}\\
{\rm diag}(1,-z), & \textrm{if $\delta=2.$}
\end{array} \right.
\end{displaymath}
To simplify notations, we omit $\emptyset$ when $\delta=0$ and write $S$ for $S_{2\nu+\delta,\Delta}$. A $(2\nu+\delta)\times (2\nu+\delta)$ nonsingular matrix $T$ over $\mathbb{F}_q$ is called \emph{orthogonal} with respect to $S$ if $TS\, ^{t}\!T=S$. The set of all $(2\nu+\delta)\times (2\nu+\delta)$ orthogonal matrices forms a group under matrix multiplication, called the \emph{orthogonal group} of degree $2\nu+\delta$ with respect to $S$ over $\mathbb{F}_q$ and denoted by $O_{2\nu+\delta}\big(\mathbb{F}_q\big)$. The factor group $O_{2\nu+\delta}\big(\mathbb{F}_q\big) \big/ \big\{I, -I\big\}$ is called the \emph{projective orthogonal group} of degree $2\nu+\delta$ over $\mathbb{F}_q$ and denoted by $PO_{2\nu+\delta}\big(\mathbb{F}_q\big)$.
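Over a small field, the defining relation $TS\,^{t}\!T=S$ can be checked directly. The Python sketch below verifies, for $q=5$, $\nu=1$, $\delta=0$, that the hyperbolic matrices ${\rm diag}(a,a^{-1})$ are orthogonal with respect to $S$, while a generic unipotent matrix is not; it is an illustration of the definition, not part of any proof.

```python
import numpy as np

q = 5                                   # an odd prime field F_5
S = np.array([[0, 1], [1, 0]])          # S_{2} with nu = 1, delta = 0

def is_orthogonal(T):
    """T is orthogonal w.r.t. S iff T S T^t = S over F_q."""
    return np.array_equal((T @ S @ T.T) % q, S % q)

# diag(a, a^{-1}) preserves S for every a in F_q^*.
for a in range(1, q):
    a_inv = pow(a, q - 2, q)            # inverse via Fermat's little theorem
    assert is_orthogonal(np.array([[a, 0], [0, a_inv]]))

# A generic unipotent matrix fails the test.
assert not is_orthogonal(np.array([[1, 1], [0, 1]]))
```

These hyperbolic matrices already show that $O_{2}\big(\mathbb{F}_q\big)$ contains a subgroup isomorphic to $\mathbb{F}_q^*$.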
There is an action of $O_{2\nu+\delta}(\mathbb{F}_q)$ on $\mathbb{F}_q^{(2\nu+\delta)}$ defined as follows:
\begin{eqnarray*}
\mathbb{F}_q^{(2\nu+\delta)}\times O_{2\nu+\delta}(\mathbb{F}_q) &\longrightarrow& \mathbb{F}_q^{(2\nu+\delta)}\\
((x_1,\ldots,x_{2\nu+\delta}),T) &\longrightarrow& (x_1,\ldots,x_{2\nu+\delta})T.
\end{eqnarray*}
The vector space $\mathbb{F}_q^{(2\nu+\delta)}$ together with this group action is said to be the $(2\nu+\delta)$-dimensional \emph{orthogonal space} over $\mathbb{F}_q.$ The map: $\mathbb{F}_q^{(2\nu+\delta)}\times \mathbb{F}_q^{(2\nu+\delta)}\to \mathbb{F}_q$ given by $(\alpha,\beta)\mapsto\alpha S\,{^t\!\beta}$ is said to be an \emph{orthogonal inner product} of $\mathbb{F}_q^{(2\nu+\delta)}$ with respect to $S$. For any subspace $P$ of $\mathbb{F}_q^{(2\nu+\delta)}$, the set $\{\alpha \in \mathbb{F}_q^{(2\nu+\delta)}:\alpha S\, ^{t}\!\beta = 0$ for all $\beta \in P\}$, denoted by $P^\perp$, is said to be the \emph{dual subspace} of $P$ with respect to $S$.
\medskip
In general, determining all the automorphisms of a graph is an important but difficult problem both in graph theory and in algebraic theory. In 2006, Tang and Wan \cite{ztzw} introduced the concept of the symplectic graph over a finite field. They proved that the symplectic graph is strongly regular and determined its automorphism group. In \cite{zgzw,zwkz,zwkz2}, the authors introduced the concepts of the orthogonal graph and the unitary graph over a finite field and determined their automorphism groups, respectively. In 2014, Wong et al.\:\cite{dwxmjz} introduced the concept of the zero-divisor graph based on rank one upper triangular matrices and also determined its automorphism group. Motivated by previous studies, we introduce a new graph on non-trivial orthogonal spaces over a finite field for studying the interplay between properties of orthogonal subspaces and the structure of graphs.
\medskip
The \emph{orthogonal inner product graph} with respect to $S$ over $\mathbb{F}_q$, denoted by $Oi\big(2\nu+\delta,q\big)$, is the graph defined on all non-trivial orthogonal subspaces of $\mathbb{F}_q^{(2\nu+\delta)}$ with an edge between vertices $A$ and $B,$ denoted by $A$---$B,$ if and only if $AS\,^{t}\!B=0$. The set of all vertices and all edges of $Oi\big(2\nu+\delta,q\big)$ are denoted by $V\big(Oi\big(2\nu+\delta,q\big)\big)$ and $E\big(Oi\big(2\nu+\delta,q\big)\big),$ respectively$.$ Of course, if $A$---$B$ is in $E\big(Oi\big(2\nu+\delta,q\big)\big)$, then $B$ is a subspace of $A^\perp$ and $A$ is a subspace of $B^\perp$.
\medskip
In section 2, we show that $Oi\big(2\nu+\delta,q\big)$ is connected with diameter $4$ when $2\nu+\delta\geq3$ and also show that $Oi\big(2\nu_1+\delta_1,q_1\big)\cong Oi\big(2\nu_2+\delta_2,q_2\big)$ if and only if $\nu_1=\nu_2,$ $\delta_1=\delta_2$ and $q_1=q_2$. In section 3, we obtain two necessary and sufficient conditions: (1) two vertices of $V\big(Oi\big(2\nu+\delta,q\big)\big)$ are in the same orbit under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ if and only if they are the same type subspace; (2) two edges $g$ and $f$ of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ are in the same orbit under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ if and only if the following conditions hold: (\romannumeral1) one vertex of $g$ and one vertex of $f$ are the same type subspace, (\romannumeral2) the other vertex of $g$ and the other vertex of $f$ are the same type subspace, and (\romannumeral3) the sum of the two vertices of $g$ and the sum of the two vertices of $f$ are also the same type subspace. Furthermore, we show that ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)= PO_{2\nu+\delta}\big(\mathbb{F}_q\big)\cdot E_{2\nu+\delta}$, where $E_{2\nu+\delta}$ is a subgroup of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
\section{\boldmath Some properties of orthogonal inner product graphs}
In \cite{jabusrm}, a graph $G$ is \emph{connected} if there is a path between any two vertices of $G$; otherwise $G$ is disconnected. The \emph{distance} between vertices $a$ and $b$ in $G$ is the number of edges in a shortest path between $a$ and $b$ and denoted by $d(a,b)$. The \emph{diameter} of G is the greatest distance between any two vertices of $G$ and denoted by ${\rm diam}(G)$. The \emph{degree} of $x\in G$ is defined to be the number of edges of the form $x$---$y$ in $G$ and denoted by $\deg(x)$.
\begin{Theorem} Let $\mathbb{F}_q$ be a finite field. Then $Oi\big(2\nu+\delta,q\big)$ is a connected graph if and only if $2\nu+\delta\geq3$. Moreover, if $Oi\big(2\nu+\delta,q\big)$ is a connected graph, then diam$\big(Oi\big(2\nu+\delta,q\big)\big)=4$.
\end{Theorem}
\begin{proof} In order to prove the theorem, we need only consider the three cases $\delta=0,1$ and $2$. Without loss of generality, we treat only the case $\delta=0$; the other cases are proved similarly.
Let $\delta=0$ and $\nu=1$. Clearly, the orthogonal subspaces $[e_1]$ and $[e_2]$ of $\mathbb{F}_q^{(2)}$ are distinct. Since $[e_1]=[e_1]^{\bot}$, $[e_2]=[e_2]^{\bot}$ and $[e_1,e_2]^{\bot}=0$, we have $d([e_1],[e_2])=\infty$, and so $Oi\big(2,q\big)$ is not connected.
Let $\delta=0$, $\nu\geq2$ and $A,B\in V\big(Oi\big(2\nu,q\big)\big)$. If $A$ and $B$ are adjacent, then $A$---$B$ is a path of length 1. So in the remainder of the proof we assume that $A$ and $B$ are not adjacent. Then there exist $[\alpha],[\beta]\in V\big(Oi\big(2\nu,q\big)\big)$ such that $[\alpha]\subseteq A^\perp$ and $[\beta]\subseteq B^\perp$. If $[\alpha]$ and $[\beta]$ are adjacent, then $A$---$[\alpha]$---$[\beta]$---$B$ is a path of length 3. If $[\alpha]$ and $[\beta]$ are not adjacent, then $\dim([\alpha,\beta]^{\perp})\geq2\nu-2\geq2$ and there exists $[\gamma]\subseteq [\alpha,\beta]^{\perp}$ such that $A$---$[\alpha]$---$[\gamma]$---$[\beta]$---$B$ is a path of length 4. Therefore $Oi\big(2\nu,q\big)$ is connected and diam$\big(Oi\big(2\nu,q\big)\big)\leq4$. Let $A=[e_1,\ldots,e_{2\nu-1}]$ and $B=[e_1,\ldots,e_{\nu-1},e_{\nu+1},\ldots,e_{2\nu}]$, so that $A^{\bot}=[e_\nu]$ and $B^{\bot}=[e_{2\nu}]$. Then $A$---$[e_\nu]$---$[e_1]$---$[e_{2\nu}]$---$B$ is a shortest path of length 4 between $A$ and $B$, and so diam$\big(Oi\big(2\nu,q\big)\big)=4$.
\end{proof}
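The disconnectedness of $Oi\big(2,q\big)$ used at the start of the proof can be checked by brute force for $q=3$: the vertex $[e_1]$ coincides with its own dual and therefore has no neighbour. The Python sketch below enumerates the $1$-dimensional subspaces (which are all the non-trivial subspaces of $\mathbb{F}_3^{(2)}$) by projective representatives whose first nonzero coordinate is $1$.

```python
import numpy as np
from itertools import product

q = 3
S = np.array([[0, 1], [1, 0]])          # S_{2} with nu = 1, delta = 0

# Projective representatives of the 1-dimensional subspaces of F_3^(2).
lines = [v for v in product(range(q), repeat=2)
         if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
assert len(lines) == (q ** 2 - 1) // (q - 1)      # 4 vertices

adj = {u: [v for v in lines if v != u
           and int(np.dot(np.dot(u, S), v)) % q == 0] for u in lines}

# [e_1] = (1,0) satisfies [e_1] = [e_1]^perp, so it has no neighbour:
# the graph Oi(2, 3) is disconnected.
assert adj[(1, 0)] == []
```

The same enumeration strategy extends to $\mathbb{F}_q^{(2\nu+\delta)}$ for small $q$ and would allow a direct check of the diameter claim as well.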
The set of all vertices of dimension 1 in $Oi\big(2\nu+\delta,q\big)$ can induce a subgraph of $Oi\big(2\nu+\delta,q\big)$, denoted by $Oi\big(2\nu+\delta,1,q\big)$, with an edge between vertices $A$ and $B$ if and only if $AS\,^{t}\!B=0.$ Of course, if $X\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$, then $X$---$X^{\perp}$ is in $E\big(Oi\big(2\nu+\delta,q\big)\big)$ with $\deg(X^{\perp})=1$. Next, we introduce some notation and terminology from \cite{zwG} that will be used in this paper$.$ We will denote by $\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ the set of subspaces of type $(m,2s+\gamma,s,\Gamma)$ of $\mathbb{F}_q^{(2\nu+\delta)}$ with respect to $S$ and write $N(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)= |\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)|.$ Let $P$ be a fixed subspace of type $(m,2s+\gamma,s,\Gamma)$ in $\mathbb{F}_q^{(2\nu+\delta)}.$ We will denote by $\mathcal{M}_P(m_1,2s_1+\gamma_1,s_1,\Gamma_1;m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ the set of subspaces of type $(m_1,2s_1+\gamma_1,s_1,\Gamma_1)$ contained in $P$ and write $N(m_1,2s_1+\gamma_1,s_1,\Gamma_1;m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)= |\mathcal{M}_P(m_1,2s_1+\gamma_1,s_1,\Gamma_1;m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)|.$ Then
$$V\big(Oi\big(2\nu+\delta,1,q\big)\big)= \mathcal{M}(1,0,0;2\nu+\delta,\Delta)\bigcup\mathcal{M}(1,1,0,1;2\nu+\delta,\Delta) \bigcup\mathcal{M}(1,1,0,z;2\nu+\delta,\Delta)$$
and $|V\big(Oi\big(2\nu+\delta,1,q\big)\big)|=\frac{q^{2\nu+\delta}-1}{q-1}$ by \cite[Theorem 6.26]{zwG}.
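The vertex count $(q^{2\nu+\delta}-1)/(q-1)$, together with the split of $V\big(Oi\big(2\nu+\delta,1,q\big)\big)$ into isotropic and non-isotropic lines, can be verified by enumeration in a small case. The Python sketch below ($q=3$, $2\nu+\delta=3$, $\Delta=(1)$, chosen by us for illustration) counts $13$ lines, of which $q+1=4$ are isotropic, i.e. of type $(1,0,0)$.

```python
from itertools import product

q = 3
S = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # S_{3,Delta} with nu = 1, delta = 1, Delta = (1)

def form(u, v):
    """The orthogonal inner product u S v^t over F_q."""
    return sum(u[i] * S[i][j] * v[j] for i in range(3) for j in range(3)) % q

# Projective representatives: first nonzero coordinate normalised to 1.
lines = [v for v in product(range(q), repeat=3)
         if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]

assert len(lines) == (q ** 3 - 1) // (q - 1)         # = 13 vertices
isotropic = [v for v in lines if form(v, v) == 0]
assert len(isotropic) == q + 1                       # the type-(1,0,0) lines
```

The remaining $13-4=9$ lines split further into the types $(1,1,0,1)$ and $(1,1,0,z)$ according to the square class of $vS\,^{t}\!v$.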
\vskip 0.3cm
\begin{Theorem} Let $\mathbb{F}_{q_1}$ and $\mathbb{F}_{q_2}$ be finite fields. Then $Oi\big(2\nu_1+\delta_1,q_1\big)\cong Oi\big(2\nu_2+\delta_2,q_{2}\big)$ if and only if $\nu_1=\nu_2$, $\delta_1=\delta_2$ and $q_1=q_2$.
\end{Theorem}
\begin{proof} First suppose that $\nu_1=\nu_2$, $\delta_1=\delta_2$ and $q_1=q_2$. Then $Oi\big(2\nu_1+\delta_1,q_1\big)\cong Oi\big(2\nu_2+\delta_2,q_{2}\big)$, completing the proof in one direction.
For the other direction, suppose that $Oi\big(2\nu_1+\delta_1,q_1\big)\cong Oi\big(2\nu_2+ \delta_2,q_{2}\big)$. Comparing the numbers of $1$-dimensional vertices gives $(q_1^{2\nu_1+ \delta_1}-1)/(q_1-1)= (q_2^{2\nu_2+\delta_2}-1)/(q_2-1)$. Take $M_i$ to be a maximum complete subgraph (loops not considered) of $Oi\big(2\nu_i+\delta_i,1,q_i\big)$ such that $XS_{2\nu_i+\delta_i} \, ^{t}\!Y=0$ for all distinct $X,Y\in M_i$, and let $N_i=\{ Z\in M_i:ZS_{2\nu_i+\delta_i} \, ^{t}\!Z\neq0\}$ for $i=1,2$. Then $\nu_1+\delta_1=|M_1|=|M_2|= \nu_2+\delta_2$ and $\delta_1=|N_1|=|N_2|=\delta_2$. Thus $\nu_1=\nu_2$, $\delta_1=\delta_2$ and $q_1=q_2$.
\end{proof}
\begin{Proposition} \label{2.3} Let $Oi\big(2\nu+\delta,q\big)$ be the orthogonal inner product graph over $\mathbb{F}_{q}$ and $\sigma\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$. Then $\sigma\big(Oi\big(2\nu+\delta,1,q\big)\big)=Oi\big(2\nu+\delta,1,q\big)$.
\end{Proposition}
\begin{proof} Let $X\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$ and $\sigma\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$. Then there exists $A\in V\big(Oi\big(2\nu+\delta,q\big)\big)$ with $ XS_{2\nu+\delta}\, ^{t}\!A=0$ and deg$(A)=2\nu+\delta-1$. Since $\sigma\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$, it follows that $\sigma(X)S_{2\nu+\delta}\, ^{t}\!(\sigma(A))=0$ and deg$(\sigma(A))=2\nu+\delta-1$. So $\sigma(X)\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$. Since $\sigma$ is injective and $V\big(Oi\big(2\nu+\delta,1,q\big)\big)$ is finite, we have $\sigma\big(Oi\big(2\nu+\delta,1,q\big)\big)=Oi\big(2\nu+\delta,1,q\big)$.
\end{proof}
It follows that the complement graph of the orthogonal graph in \cite{zgzw} is a proper subgraph of $Oi\big(2\nu+\delta,1,q\big)$. In fact, the vertices set of the orthogonal graph in \cite{zgzw} contains all $1$-dimensional totally isotropic orthogonal subspaces of $\mathbb{F}_q^{(2\nu+\delta)}$. But $Oi\big(2\nu+\delta,1,q\big)$ contains all $1$-dimensional orthogonal subspaces of $\mathbb{F}_q^{(2\nu+\delta)}$.
\begin{Proposition} \label{2.4} Let $T\in O_{2\nu+\delta}\big(\mathbb{F}_q\big)$ and
$$\sigma_T:V\big(Oi\big(2\nu+\delta,q\big)\big)\longrightarrow V\big(Oi\big(2\nu+\delta,q\big)\big),A\longmapsto AT,$$
for $\delta=0,1$ or $2.$ Then
\noindent${\rm(1)}$ $\sigma_T\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big),$
\noindent ${\rm(2)}$ For any $T_1,T_2\in O_{2\nu+\delta}\big(\mathbb{F}_q\big)$, $\sigma_{T_1}= \sigma_{T_2}$ if and only if $T_1=\pm T_2$.
\end{Proposition}
\begin{proof} (1) Let $T\in O_{2\nu+\delta}\big(\mathbb{F}_q\big)$. Then $T$ is nonsingular and $\sigma_T$ is a bijection. For any $A,B\in V\big(Oi\big(2\nu+\delta, q\big)\big)$, we know that $AS\, ^{t}\!B=ATS\, ^{t}\!(AT)$, then $A\text{---}B$ if and only if $\sigma_T(A)\text{---}\sigma_T(B)$. Thus we conclude that $\sigma_T\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
(2) First of all, if $T_1=\pm T_2\in O_{2\nu+\delta}\big(\mathbb{F}_q\big)$, then it follows that $\sigma_{T_1}= \sigma_{T_2}$. Conversely, suppose that $\sigma_{T_1}=\sigma_{T_2}$. Then we need only consider them acting on $Oi\big(2\nu+\delta,q\big)$. According to Proposition \ref{2.3}, we deduce that $\sigma_{T_i}\big(Oi\big(2\nu+\delta,1,q\big)\big)=Oi\big(2\nu+\delta,1,q\big)$ for $i=1,2$. The following proof is similar to that of \cite[Proposition 3.1]{zgzw}.
\end{proof}
Note that every orthogonal matrix of $O_{2\nu+\delta}(\mathbb{F}_q)$ can induce an automorphism of $Oi\big(2\nu+\delta,q\big)$ and two different orthogonal matrices $T_1$ and $T_2$ induce the same automorphism of $Oi\big(2\nu+\delta,q\big)$ if and only if $T_1=\pm T_2$. Thus $PO_{2\nu+\delta}(\mathbb{F}_q)$ can be regarded as a subgroup of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
\begin{Proposition} \label{2.5} Let $Oi\big(2\nu+\delta,1,q\big)$ be an induced subgraph of $Oi\big(2\nu+\delta,q\big)$ and $\sigma\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$. Then $X$ and $\sigma(X)$ are the same type orthogonal subspace for $X\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$.
\end{Proposition}
\begin{proof} In order to prove the proposition, we need only consider the three cases $\delta=0,1$ and $2$. Without loss of generality, we treat only the case $\delta=0$; the other cases are proved similarly.
Let $X\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$. Then the type of $X$ is one of $(1,0,0)$, $(1,1,0,1)$ and $(1,1,0,z)$. If the type of $X$ is $(1,0,0)$, then it is easy to check that the type of $\sigma(X)$ is also $(1,0,0)$. By \cite[Theorem 6.4]{zgzw} and Proposition \ref{2.4}, we know that the sets $\mathcal{M}(1,0,0;2\nu+\delta,\Delta)$, $\mathcal{M}(1,1,0,1;2\nu+\delta,\Delta)$ and $\mathcal{M}(1,1,0,z;2\nu+\delta,\Delta)$ are three orbits of $Oi\big(2\nu+\delta,q\big)$ under the action of $PO_{2\nu+\delta,\Delta}(\mathbb{F}_q)$.
If the type of $X$ is $(1,1,0,1)$, then by \cite[Corollary 6.6]{zgzw} we know that the type of $X^{\perp}$ is
\begin{displaymath}
\left\{ \begin{array}{ll}
(2\nu-1,2\nu-1,\nu-1,1), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
(2\nu-1,2\nu-1,\nu-1,z), & \text{if}~-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
If the type of $X$ is $(1,1,0,z)$, then by \cite[Corollary 6.6]{zgzw} we know that the type of $X^{\perp}$ is
\begin{displaymath}
\left\{ \begin{array}{ll}
(2\nu-1,2\nu-1,\nu-1,z), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
(2\nu-1,2\nu-1,\nu-1,1), & \text{if}~-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
Let $$\mathcal{X}=\{Y\in V\big(Oi\big(2\nu,q\big)\big):YS_{2\nu}{^t\!X}=0 \text{ and the type of } Y \text{ is } (1,1,0,1)\}.$$
If the type of $X$ is $(1,1,0,1)$, then by \cite[Theorem 6.33]{zgzw}, we have
\begin{displaymath}
|\mathcal{X}|=\left\{ \begin{array}{ll}
N(1,1,0,1;2\nu-1,2\nu-1,\nu-1,1;2\nu), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
N(1,1,0,1;2\nu-1,2\nu-1,\nu-1,z;2\nu), & \text{if}~-1\notin\mathbb{F}_q^{*2},
\end{array} \right.
\end{displaymath}
\begin{displaymath}
=\left\{ \begin{array}{ll}
N(1,1,0,1;2\nu-1,1), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
N(1,1,0,1;2\nu-1,z), & \text{if}~-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
If the type of $X$ is $(1,1,0,z)$, then by \cite[Theorem 6.33]{zgzw}, we have
\begin{displaymath}
|\mathcal{X}|=\left\{ \begin{array}{ll}
N(1,1,0,1;2\nu-1,2\nu-1,\nu-1,z;2\nu), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
N(1,1,0,1;2\nu-1,2\nu-1,\nu-1,1;2\nu), & \text{if}~-1\notin\mathbb{F}_q^{*2},
\end{array} \right.
\end{displaymath}
\begin{displaymath}
=\left\{ \begin{array}{ll}
N(1,1,0,1;2\nu-1,z), & \text{if}~-1\in\mathbb{F}_q^{*2},\\
N(1,1,0,1;2\nu-1,1), & \text{if}~-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
If the type of $X_1$ is $(1,1,0,1)$ and the type of $X_2$ is $(1,1,0,z)$, then by \cite[Theorem 6.26]{zgzw} the corresponding sets satisfy $|\mathcal{X}_1|\neq|\mathcal{X}_2|$. Thus, by Proposition \ref{2.3}, we know that $X$ and $\sigma(X)$ are orthogonal subspaces of the same type for every $X\in V\big(Oi\big(2\nu+\delta,1,q\big)\big)$.
\end{proof}
\section{\boldmath Automorphism groups of orthogonal inner graphs}
In order to determine the automorphism groups ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ of the orthogonal inner graphs $Oi\big(2\nu+\delta,q\big)$, we need only consider the three cases $\delta=0,1$ and $2$.
Firstly, we will discuss the case of $\delta=0$. Let $S=S_{2\nu}$. In what follows, we write $f_i$ for $e_{\nu+i},$ $1\leq i\leq \nu$.
Then
$$e_i{S_{2\nu}}{^t\!f_i}=1,e_i{S_{2\nu}}{^t\!e_j}=f_i{S_{2\nu}}{^t\!f_j}=0\ {\rm for}\ 1\leq i,j\leq\nu$$
and
$$e_i{S_{2\nu}}{^t\!f_j}=0\ {\rm for}\ i\neq j,\ 1\leq i,j\leq\nu.$$
Let $\varphi_{2\nu}$ be the natural action of ${\rm Aut}(\mathbb{F}_q)$ on the group $\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times \cdots\times\mathbb{F}_q^*$ defined by
\begin{eqnarray*}
\varphi_{2\nu}:\big(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\big)\times{\rm Aut}(\mathbb{F}_q) &\longrightarrow& \mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times \cdots\times\mathbb{F}_q^*\\
((k_1,k_2,\ldots,k_{\nu}),\pi) &\longrightarrow& (\pi(k_1),\pi(k_2),\ldots,\pi(k_{\nu}))
\end{eqnarray*}
for $\pi\in{\rm Aut}(\mathbb{F}_q)$, $k_1\in\mathbb{F}_q^{*2}$ and $k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$.
The \emph{semidirect product} of $\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times \cdots\times\mathbb{F}_q^*$ by ${\rm Aut}(\mathbb{F}_q)$ with respect to $\varphi_{2\nu}$, denoted by $(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times \cdots\times\mathbb{F}_q^*)\rtimes_{\varphi_{2\nu}} {\rm Aut}(\mathbb{F}_q)$, is the group consisting of all elements of the form $(k_1,k_2,\ldots, k_{\nu},\pi)$, with the multiplication defined by
$$(k_1,k_2,\ldots,k_{\nu},\pi)(k_1^{'},k_2^{'},\ldots,k_{\nu}^{'},\pi^{'})=(k_1\pi(k_1^{'}), k_2\pi(k_2^{'}),\ldots, k_{\nu}\pi(k_{\nu}^{'}),\pi\pi^{'}).$$
Define maps:
\begin{eqnarray*}
\sigma_{\pi}:V\big(Oi\big(2\nu+\delta,q\big)\big) & \longrightarrow & V\big(Oi\big(2\nu+\delta,q\big)\big)\\
\big(a_{ij}\big)_{m\times(2\nu+\delta)}&\longrightarrow&\big(\pi(a_{ij})\big)_{m\times (2\nu+\delta)}
\end{eqnarray*}
and
\begin{eqnarray*}
\sigma_{(k_1,k_2,\ldots,k_\nu,\pi)}:V\big(Oi\big(2\nu,q\big)\big) &\longrightarrow& V\big(Oi\big(2\nu,q\big)\big)\\
A &\longrightarrow& \sigma_\pi(A)\cdot{\rm diag}(k_1,k_2,\ldots,k_{\nu}, k_1^{-1},k_2^{-1},\ldots,k_{\nu}^{-1}),
\end{eqnarray*}
where $\pi\in{\rm Aut}(\mathbb{F}_q)$, $k_1\in\mathbb{F}_q^{*2}$ and $k_2,\ldots,k_{\nu}\in \mathbb{F}_q^*$.
It is easy to check that $\sigma_{\pi}\in {\rm Aut}\big(Oi\big(2\nu,q\big)\big)$. Moreover, since
${\rm diag}(k_1,k_2,\ldots,k_{\nu},k_1^{-1},k_2^{-1}\ldots,k_{\nu}^{-1}) S_{2\nu}\,{^t({\rm diag}(k_1,k_2,\ldots,k_{\nu}, k_1^{-1},k_2^{-1},\ldots,k_{\nu}^{-1}))}=S_{2\nu},$
we have ${\rm diag}(k_1,k_2,\ldots,k_{\nu},k_1^{-1},\ldots,k_{\nu}^{-1})\in O_{2\nu}(\mathbb{F}_q).$ So $\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}\in {\rm Aut}\big(Oi\big(2\nu,q\big)\big).$
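Since the construction hinges on this diagonal matrix lying in $O_{2\nu}(\mathbb{F}_q)$, the identity is easy to confirm numerically. The following sketch is our own illustration (not part of the proof); it checks ${\rm diag}(k_1,\ldots,k_\nu,k_1^{-1},\ldots,k_\nu^{-1})\,S_{2\nu}\,{}^t{\rm diag}(\cdots)=S_{2\nu}$ over the prime field $\mathbb{F}_7$ with $\nu=3$:

```python
# Verify D = diag(k1,...,k_nu, k1^{-1},...,k_nu^{-1}) satisfies D S ^tD = S
# over F_q, with S = S_{2nu} the hyperbolic Gram matrix. Here q = 7, nu = 3.
q, nu = 7, 3
n = 2 * nu

# S_{2nu}: e_i S ^t f_i = 1, all other products of basis vectors vanish
S = [[1 if abs(i - j) == nu else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

ks = [2, 3, 5]                        # k_1 = 2 is a square mod 7; k_2, k_3 arbitrary units
inv = [pow(k, q - 2, q) for k in ks]  # inverses via Fermat's little theorem
diag = ks + inv
D = [[diag[i] if i == j else 0 for j in range(n)] for i in range(n)]

assert mat_mul(mat_mul(D, S), transpose(D)) == S  # D is orthogonal w.r.t. S
```

The check works because $(DSD^{t})_{ij}=d_id_jS_{ij}$, and the only nonzero entries pair $k_i$ with $k_i^{-1}$.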
Before considering the automorphism groups of $Oi\big(2\nu,q\big)$, let us first record a simple fact.
\begin{Example} \label{e1} Let $\mathbb{F}_q$ be a field. Then $$V\big(Oi\big(2,q\big)\big)=\{[(1,x)]:x\in \mathbb{F}_q\}\bigcup \{[(0,1)]\}$$
and
$$E\big(Oi\big(2,q\big)\big)=\{[(1,x)]\text{---}[(1,y)]: x+y=0,x,y\in \mathbb{F}_q\}\bigcup\{[(0,1)] \text{---}[(0,1)]\}.$$
Clearly, $Oi\big(2,q\big)$ is a disconnected graph.
\end{Example}
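A brute-force enumeration makes the disconnectedness in Example \ref{e1} concrete. The sketch below is our own illustration; it takes adjacency to be perpendicularity, $[\alpha]\text{---}[\beta]$ if and only if $\alpha S_2\,{}^t\beta=0$ (the convention used throughout the proofs), and counts connected components for $q=7$:

```python
# Enumerate Oi(2, q): vertices are the projective points [(1, x)] and [(0, 1)];
# adjacency is perpendicularity alpha S ^t beta = 0 with S = S_2 = [[0,1],[1,0]].
q = 7
vertices = [(1, x) for x in range(q)] + [(0, 1)]

def perp(u, v):
    # u S ^t v over F_q with S the 2x2 hyperbolic matrix
    return (u[0] * v[1] + u[1] * v[0]) % q == 0

# Naive union-find on vertex indices to count connected components
parent = list(range(len(vertices)))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i
for i, u in enumerate(vertices):
    for j, v in enumerate(vertices):
        if i < j and perp(u, v):
            parent[find(i)] = find(j)

components = len({find(i) for i in range(len(vertices))})
# two isotropic vertices [(1,0)], [(0,1)] (each adjacent only to itself)
# plus (q-1)/2 pairs {[(1,x)], [(1,-x)]}: the graph is disconnected
assert components == 2 + (q - 1) // 2
```

For $q=7$ this gives five components: the two isotropic loop vertices and three edges.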
\begin{Theorem}\label{ot1}
Let $\nu\geq2$ and $E_{2\nu}$ be the subgroup of ${\rm Aut}\big(Oi\big(2\nu,q\big)\big)$ defined as follows:
$$E_{2\nu}=\big\{ \sigma\in {\rm Aut}\big(Oi\big(2\nu,q\big)\big): \sigma\big([e_{i}]\big)=[e_{i}]~and~\sigma\big([f_{i}]\big)=[f_{i}] ~\text{for}~1\leq i\leq\nu\big\}.$$
Then ${\rm Aut}\big(Oi\big(2\nu,q\big)\big)=PO_{2\nu}\big(\mathbb{F}_q\big)\cdot E_{2\nu}$. Moreover, let
\begin{eqnarray*}
h:\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*)\rtimes_{\varphi_{2\nu}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu}\right. &\longrightarrow& E_{2\nu}\\
(k_1,k_2,\ldots,k_\nu,\pi)K_{2\nu} &\longrightarrow& \sigma_{(k_1,k_2,\ldots,k_\nu,\pi)},
\end{eqnarray*}
where $1_{\mathbb{F}_q}$ is the identity element of ${\rm Aut}(\mathbb{F}_q)$ and
\begin{displaymath}
K_{2\nu}=\left\{ \begin{array}{ll}
\{(c,c,\ldots,c,1_{\mathbb{F}_q}):c=\pm1\}, &\ {\rm if}\ -1\in\mathbb{F}_q^{*2},\\
\{(1,1,\ldots,1,1_{\mathbb{F}_q}) \}, &\ {\rm if}\ -1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
Then $h$ is a group isomorphism from ~$\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*)\rtimes_{\varphi_{2\nu}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu}\right.$ to $E_{2\nu}$.
\end{Theorem}
\begin{proof} Let $\tau\in {\rm Aut}\big(Oi\big(2\nu,q\big)\big)$. By Proposition \ref{2.3}, we can suppose that $\tau\big([e_{i}]\big)=[\alpha_{i}]$ and $\tau\big([f_{i}]\big) =[\beta_{i}]$ for $1\leq i\leq\nu$. Then
$$\alpha_i{S_{2\nu}}{^t\!\!\beta}_i\neq0,\alpha_iS_{2\nu}{^t\!\!\alpha}_j= \beta_iS_{2\nu}{^t\!\!\beta}_j=0\ {\rm for}\ 1\leq i,j\leq\nu$$
and
$$\alpha_iS_{2\nu}{^t\!\!\beta}_j=0\ {\rm for}\ i\neq j,\,1\leq i,j\leq\nu.$$
Let $A=[e_1,\ldots,e_{\nu},f_1,\ldots,f_{\nu}]$ and $A^{'}=[\alpha_{1}, \ldots, \alpha_{\nu},\beta_{1}, \ldots, \beta_{\nu}]$. Since $A^{'} S\, ^{t}\!A^{'}$ is a $2\nu \times 2\nu$ nonsingular symmetric matrix, we can choose $\alpha_i,\beta_i, 1\leq i\leq\nu,$ such that $\alpha_i S\, ^{t}\!\beta_i=1$. Then $AS\, ^{t}\!A=A^{'}S\, ^{t}\!A^{'}$. By \cite[Lemma 6.8]{zwG}, there exists $T\in O_{2\nu}(\mathbb{F}_q)$ such that $A=A^{'}T$. Let $\tau_1= \sigma_T\tau$. Then $\tau_1([e_i]) =[e_i]$ and $\tau_1([f_i]) =[f_i]$ for $1\leq i\leq\nu$. So $\tau_1\in E_{2\nu}$ and $\tau=\sigma_T^{-1}\tau_1\in PO_{2\nu}(\mathbb{F}_q) \cdot E_{2\nu}$. Thus ${\rm Aut}\big(Oi\big(2\nu,q\big)\big)= PO_{2\nu}(\mathbb{F}_q)\cdot E_{2\nu}$.
\vskip 0.15cm
Let $k_1\in\mathbb{F}_q^{*2},~k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$. Then it is easy to check that
$$\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}([e_i])=[e_i]\ {\rm and}\ \sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}([f_i])=[f_i]$$
for $1\leq i\leq\nu,$ and so $\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}\in E_{2\nu}$. Let
\begin{eqnarray*}
h^{'}:(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*)\rtimes_{\varphi_{2\nu}} {\rm Aut}(\mathbb{F}_q) & \longrightarrow & E_{2\nu}\\
(k_1,k_2,\ldots,k_\nu,\pi) & \longrightarrow & \sigma_{(k_1,\ldots,k_\nu,\pi)}.
\end{eqnarray*}
Then by the definition of $\sigma_{(k_1,k_2,\ldots,k_\nu,\pi)}$ and Proposition \ref{2.4}, it is easily seen that $h^{'}$ is a group homomorphism and the kernel of $h^{'}$ is $K_{2\nu}$. Hence in order to show that $h$ is a group isomorphism from $\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*)\rtimes_{\varphi_{2\nu}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu}\right.$ to $E_{2\nu}$, it suffices to show that for any $\sigma\in E_{2\nu}$, there exist $k_1\in\mathbb{F}_q^{*2},$ $k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_\nu,\pi)}.$
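That $h^{'}$ respects the semidirect-product multiplication comes down to the identity $\sigma_{(k_1,\ldots,k_\nu,\pi)}\circ\sigma_{(k_1^{'},\ldots,k_\nu^{'},\pi^{'})}= \sigma_{(k_1\pi(k_1^{'}),\ldots,k_\nu\pi(k_\nu^{'}),\pi\pi^{'})}$. The following computational sketch of ours checks this for $\nu=2$ over $\mathbb{F}_9=\mathbb{F}_3(i)$, where ${\rm Aut}(\mathbb{F}_9)=\{\mathrm{id},\mathrm{Frobenius}\}$ is nontrivial; the squareness constraint on $k_1$ plays no role in the identity, so it is checked over all units:

```python
# F_9 = F_3[i] with i^2 = -1; elements are pairs (a, b) representing a + b*i.
from itertools import product

def f9_mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def frob(x):            # the Frobenius x -> x^3: (a + bi)^3 = a - bi
    return (x[0], (-x[1]) % 3)

def f9_inv(x):          # brute-force inverse in F_9^*
    for y in product(range(3), repeat=2):
        if f9_mul(x, y) == (1, 0):
            return y

IDENT = lambda x: x
def sigma(k1, k2, pi, v):
    # sigma_{(k1,k2,pi)}: apply pi entrywise, then multiply by diag(k1,k2,k1^{-1},k2^{-1})
    d = (k1, k2, f9_inv(k1), f9_inv(k2))
    return tuple(f9_mul(pi(x), di) for x, di in zip(v, d))

units = [x for x in product(range(3), repeat=2) if x != (0, 0)]
v = ((1, 0), (0, 1), (2, 1), (1, 2))      # a sample row vector in F_9^4
for k1, k2, k1p, k2p in product(units, repeat=4):
    for pi, pip in product([IDENT, frob], repeat=2):
        lhs = sigma(k1, k2, pi, sigma(k1p, k2p, pip, v))
        rhs = sigma(f9_mul(k1, pi(k1p)), f9_mul(k2, pi(k2p)),
                    lambda x: pi(pip(x)), v)
        assert lhs == rhs   # h' respects the semidirect-product multiplication
```

The identity holds entrywise because $\pi$ is multiplicative, so $\pi(d_i^{'})d_i$ reproduces the diagonal of the product element.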
\vskip 0.15cm
Let $\sigma\in E_{2\nu}$. We will prove that there exist $k_1\in\mathbb{F}_q^{*2},$ $k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that
$$\sigma=\sigma_{(k_1,k_2,\ldots,k_\nu,\pi)}.$$
Let $[(a_1,\ldots,a_{2\nu})]\in V\big(Oi\big(2\nu,q\big)\big)$ and suppose that $\sigma([(a_1,\ldots,a_{2\nu})])=[(b_1,\ldots,b_{2\nu})]$. Then
\begin{center}
$a_i=0$ if and only if $[(a_1,\ldots,a_{2\nu})]$---$[f_i]$
\end{center}
and
\begin{center}
$a_{\nu+i}=0$ if and only if $[(a_1,\ldots,a_{2\nu})]$---$[e_i]$
\end{center}
for $1\leq i\leq\nu$. Since $\sigma([e_i])=[e_i]$ and $\sigma([f_i]) =[f_i]$, $1\leq i\leq\nu$, we have
$$a_k=0\text{ if and only if }b_k=0 \text{ for }1\leq k\leq2\nu.$$
Moreover, $[(a_1,\ldots,a_{2\nu})]\in V\big(Oi\big(2\nu,q\big)\big)$ can be written uniquely as $[(0,\ldots,0,1,a_{i+1}^{'},\ldots,a_{2\nu}^{'})]$ if $a_1=\cdots=a_{i-1}=0$ and $a_i\neq0$. Therefore $\sigma([(0,\ldots,0,1,a_{i+1}^{'},\ldots,a_{2\nu}^{'})])$ can be written uniquely as $[(0,\ldots,0,1,b_{i+1},\ldots,b_{2\nu})].$
\vskip 0.1cm
Let $2\leq k\leq\nu$. For $[e_1+ae_k],[e_1+bf_k]\in V\big(Oi\big(2\nu,q\big)\big)$, there exist $a^{'},b^{'}\in\mathbb{F}_q$ such that $\sigma\big([e_1+ae_k]\big)=[e_1+ a^{'}e_k]$ and $\sigma\big([e_1+ bf_k]\big)=[e_1+b^{'}f_k]$. Thus we can define permutations $\pi_i$ of $\mathbb{F}_q$ with $\pi_i(0)=0$ such that
$$\sigma\big([e_1+ a_ie_i]\big)=[e_1+\pi_i(a_i)e_i]\,\ {\rm and}\,\ \sigma\big([e_1+ a_{\nu+i}f_i]\big)=[e_1+\pi_{\nu+i}(a_{\nu+i})f_i],$$
where $2\leq i\leq\nu.$
\vskip 0.15cm
In the following proof, we need only consider two cases: $\nu=2$ and $\nu>2$.
(1) $\nu=2$. Clearly, $V\big(Oi\big(4,q\big)\big)$ consists of four types of vertices: $\{[(1,a_2,a_3,a_4)]: a_2,a_3,a_4\in \mathbb{F}_q\}, \{[(0,1,a_3,a_4)]: a_3,a_4\in \mathbb{F}_q\}, \{[(0,0,1,a_4)]: a_4\in \mathbb{F}_q\}$ and $\{[(0,0,0,1)]\}$. If $\sigma([(x_1,x_2,x_3,x_4)])=[(y_1,y_2,y_3,y_4)]$ and $x_i=0$, then we get $y_i=0$ for $1\leq i\leq4$. Thus without loss of generality we can assume that $a_i\neq0$ for $1\leq i\leq4$. Now we first determine $\sigma([(0,0,1,a_4)])$, $\sigma([(0,1,a_3,a_4)])$ and $\sigma([(1,a_2,a_3,a_4)])$ for $a_2,a_3,a_4\in \mathbb{F}_q$.
Suppose that $\sigma([(0,0,1,a_4)])=[(0,0,1,a_4^{'})]$. Since $[(0,0,1,a_4)] \text{---}[(1,-a_4^{-1},0,0)]$, we have $[(0,0,1,a_4^{'})] \text{---}\allowbreak[(1,\pi_2(-a_4^{-1}),0,0)]$, i.e., $a_4^{'}= -\pi_2(-a_4^{-1})^{-1}$.
Thus $\sigma([(0,0,1,a_4)])=[(0,0,1,-\pi_2(-a_4^{-1})^{-1})]$.
Suppose that $\sigma([(0,1,a_3,a_4)])=[(0,1,a_3^{'},a_4^{'})]$. Since $[(0,1,a_3,a_4)]\text{---}[(1,0,0,-a_3)]$, it follows that $[(0,1,a_3^{'}, a_4^{'})]\text{---}[(1,0,0,\pi_4(-a_3))]$, i.e., $a_3^{'}=-\pi_4(-a_3)$. Since $[(0,1,a_3,a_4)]\text{---}[(1, -a_3a_4^{-1},0,\linebreak[1]0)]$, we see that $[(0,1,-\pi_4(-a_3), a_4^{'})]\text{---}[(1,\pi_2(-a_3a_4^{-1}), 0,0)]$, i.e., $a_4^{'}=\pi_2(-a_3a_4^{-1})^{-1}\pi_4(-a_3)$. Thus $\sigma([(0,1,a_3,a_4)])= [(0,1,-\pi_4(-a_3),\pi_2(-a_3a_4^{-1})^{-1}\pi_4(-a_3))].$
Similarly, we can prove that
$$\sigma([(1,a_2,\linebreak[1] a_3,a_4)])= [(1,\pi_2(a_2),-\pi_2(-a_3a_4^{-1})\pi_4(a_4), \pi_4(a_4))].$$
Next we will prove that $\pi$ is an automorphism of $\mathbb{F}_q$, where $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4$. Let $a,b\in \mathbb{F}_q$.
Since $[(1,a,0,a)]\text{---}[(1,1,0,-1)]$ and $[(1,a,0,a)] \text{---}[(1,-1,0,1)]$, we have $[(1,\pi_2(a), 0,\pi_4(a))] \text{---}\linebreak[1] [(1,\pi_2(1),0,\pi_4(-1))]$ and $[(1,\pi_2(a),0,\pi_4(a))] \text{---}[(1,\pi_2(-1), 0,\pi_4(1))]$, i.e., $\pi_2(a)\pi_4(-1)+ \pi_2(1)\pi_4(a)=0$ and $\pi_2(a)\pi_4(1)+\pi_2(-1)\pi_4(a)=0$. Let $a=1$, then $-\pi_2(1)=\pi_2(-1)$ and $-\pi_4(1)=\pi_4(-1)$. Thus we deduce that $\pi_2(1)^{-1}\pi_2= \pi_4(1)^{-1}\pi_4$. Set $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4$.
Since $[(1,-a,0,a)]\text{---}[(1,1,0,1)]$, we have $[(1,\pi_2(-a),0,\pi_4(a))] \text{---}[(1,\pi_2(1),0, \linebreak[1]\pi_4(1))]$, i.e., $\pi_2(-a)\pi_4(1)+\pi_2(1)\pi_4(a)=0$. It follows that $\pi_2(-a)=-\pi_2(a)$ since $\pi_2(1)^{-1}\pi_2(-a)= -\pi_4(1)^{-1}\pi_4(a)=-\pi_2(1)^{-1}\pi_2(a)$. Similarly, we have $\pi_4(-a)=-\pi_4(a)$.
Since $[(1,ab,0,a)]\text{---}[(1,b,0,-1)]$, we have $[(1,\pi_2(ab), 0,\pi_4(a))]\text{---} [(1,\pi_2(b),0,-\pi_4(1))]$, i.e., $-\pi_2(ab)\pi_4(1)+\pi_4(a)\pi_2(b)=0$. Then
$\pi_2(ab)=\pi_4(1)^{-1}\pi_4(a)\pi_2(b)=\pi_2(1)^{-1}\pi_2(a)\pi_2(b).$ Thus
$$\pi(ab)=\pi_2(1)^{-1}\pi_2(ab)=\pi_2(1)^{-1}\pi_2(1)^{-1}\pi_2(a)\pi_2(b)= \pi(a)\pi(b).$$
Let $ab\neq0$. Since $[(1,a+b,0,1)]\text{---}[(0,1,ab^{-1}, -b^{-1})]$, we have $[(1,\pi_2(a+b),0, \pi_4(1))] \text{---} [(0,1,\linebreak[1]\pi_4(ab^{-1}), -\pi_4(ab^{-1})\pi_2(a)^{-1})]$, i.e., $\pi_4(ab^{-1})- \pi_2(a+b)\pi_4(ab^{-1})\pi_2(a)^{-1}+ \pi_4(1)=0$. Then
\begin{eqnarray*}
\pi_2(a+b)&=&\pi_2(a)+ \pi_2(a)\pi_4(1)\pi_4(ab^{-1})^{-1}= \pi_2(a)+ \pi_2(1)\pi(a)\pi(ab^{-1})^{-1}\\
&=&\pi_2(a)+\pi_2(1)\pi(a)\pi(a^{-1}b)=\pi_2(a)+\pi_2(b).
\end{eqnarray*}
Thus
$ \pi(a+b)=\pi_2(1)^{-1}\pi_2(a+b)=\pi_2(1)^{-1}(\pi_2(a)+\pi_2(b))= \pi(a)+\pi(b).$
Since $\pi_2$ is a permutation of $\mathbb{F}_q$, it follows that $\pi$ is injective. $\pi$ is surjective since $\mathbb{F}_q$ is finite. Thus $\pi$ is an automorphism of $\mathbb{F}_q$. Furthermore, we conclude that
$$\sigma([(a_1,a_2,a_3,a_4)])= [(\pi(a_1),\pi_2(1)\pi(a_2),\pi_2(1)\pi_4(1)\pi(a_3), \pi_4(1)\pi(a_4))].$$
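The multiplicativity and additivity just derived pin $\pi$ down to a field automorphism; for a prime field this forces $\pi=\mathrm{id}$, which is where the factor $[\mathbb{F}_q:\mathbb{F}_p]$ in the automorphism counts comes from. A brute-force check of ours for $p=5$:

```python
# Count bijections pi of F_5 with pi(a+b) = pi(a)+pi(b) and pi(ab) = pi(a)pi(b):
# the only field automorphism of a prime field is the identity.
from itertools import permutations

p = 5
autos = []
for perm in permutations(range(p)):
    if all(perm[(a + b) % p] == (perm[a] + perm[b]) % p and
           perm[(a * b) % p] == (perm[a] * perm[b]) % p
           for a in range(p) for b in range(p)):
        autos.append(perm)

assert autos == [tuple(range(p))]   # Aut(F_p) is trivial
```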
Lastly, we will prove that there exist $k_1\in \mathbb{F}_q^{*2},k_2\in \mathbb{F}_q^*$ and $\pi\in {\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\pi)}$.
By Proposition \ref{2.5}, we know that $[(1,0,1,0)]$ and $\sigma([(1,0,1,0)])$ are orthogonal subspaces of the same type. Then we have $\pi_2(1)\pi_4(1)\in \mathbb{F}_q^{*2}$. Choose $k_1$ with $k_1^2=\pi_2(1)^{-1}\pi_4(1)^{-1}$, and let $k_2=k_1\pi_2(1)$ and $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4$. Then for $\big(a_{ij}\big)_{m\times4}\in V\big(Oi\big(4,q\big)\big)$, we have
\begin{displaymath}
\sigma\big(\big(a_{ij}\big)_{m\times4}\big)=\left(
\begin{array}{cccc}
k_1\pi(a_{11})&k_2\pi(a_{12}) &k_1^{-1}\pi(a_{13}) & k_2^{-1}\pi(a_{14})\\
\vdots&\vdots &\vdots&\vdots\\
k_1\pi(a_{m1})&k_2\pi(a_{m2}) &k_1^{-1}\pi(a_{m3}) & k_2^{-1}\pi(a_{m4})
\end{array}
\right).
\end{displaymath}
So $\sigma\big(\big(a_{ij}\big)_{m\times4}\big)= \sigma_{(k_1,k_2,\pi)}\big(\big(a_{ij}\big)_{m\times4}\big)$ and therefore $\sigma=\sigma_{(k_1,k_2,\pi)}.$
\vskip 1.5mm
(2) $\nu>2$. Similar to the case of $\nu=2$, we can prove that there exist $k_1\in\mathbb{F}_q^{*2},k_2,\ldots,k_{\nu}\in \mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}.$
\end{proof}
\begin{Corollary}\label{3.3} Let $\nu\geq 1$ and $\mathbb{F}_q$ be a field of characteristic $p$. Then
\begin{displaymath}
\big|{\rm Aut}\big(Oi\big(2\nu,q\big)\big)\big|=\left\{ \begin{array}{ll}
2^{\frac{q+1}{2}}\cdot(\frac{q-1}{2})!, & {\rm\ if}\ \nu=1;\\
\frac{1}{2}q^{\nu(\nu-1)} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu-1}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p],&\ {\rm if}\ \nu\geq2 \ {\rm and\ }-1\in\mathbb{F}_q^{*2};\\
q^{\nu(\nu-1)} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu-1}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p],&\ {\rm if}\ \nu\geq2\ {\rm and\ }-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
\end{Corollary}
\begin{proof} (1) Let $\nu=1$. Vertices $[e_1], [e_2]$ in $Oi\big(2,q\big)$ are totally isotropic subspaces. According to Example \ref{e1}, it is easy to calculate that $\big|{\rm Aut}\big(Oi\big(2,q\big)\big)\big|=2^{\frac{q+1}{2}}\cdot(\frac{q-1}{2})!$.
(2) Let $\nu\geq2$ and $-1\in\mathbb{F}_q^{*2}$. Then
$\big|{\rm Aut}\big(Oi\big(2\nu,q\big)\big)\big|= \frac{|PO_{2\nu}(\mathbb{F}_q)|\cdot|E_{2\nu}|}{|PO_{2\nu}(\mathbb{F}_q)\cap E_{2\nu}|}.$ According to \cite[Theorem 6.21]{zwG}, we see that $|PO_{2\nu}(\mathbb{F}_q)|= \frac{1}{2}|O_{2\nu}(\mathbb{F}_q)|= \frac{1}{2}q^{\nu(\nu-1)} \prod_{i=1}^{\nu}(q^i-1) \prod_{i=0}^{\nu-1}(q^i+1).$ Clearly, we have $|E_{2\nu}|=\frac{1}{4}(q-1)^{\nu}\cdot[\mathbb{F}_q:\mathbb{F}_p]$, where $|{\rm Aut}(\mathbb{F}_q)|=[\mathbb{F}_q:\mathbb{F}_p]$. Let $\sigma_P\in PO_{2\nu}(\mathbb{F}_q)\cap E_{2\nu}$. Then $\sigma_P=\sigma_{(k_1, \ldots, k_{\nu},\pi)}$ for some $k_1\in \mathbb{F}_q^{*2},\,k_2,\ldots, k_{\nu}\in \mathbb{F}_q^*$ and $\pi\in {\rm Aut}(\mathbb{F}_q)$. So $|PO_{2\nu}(\mathbb{F}_q)\cap E_{2\nu}|=\frac{1}{4}(q-1)^{\nu}$ and thus
$$~~\big|{\rm Aut}\big(Oi\big(2\nu,q\big)\big)\big|= \frac{1}{2}q^{\nu(\nu-1)} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu-1}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p].$$
(3) Let $\nu\geq2$ and $-1\notin\mathbb{F}_q^{*2}$. Similar to the case of $(2)$, we can prove that $\big|{\rm Aut}\big(Oi\big(2\nu,q\big)\big)\big|= q^{\nu(\nu-1)} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu-1}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p].$
\end{proof}
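For $\nu=1$ the closed form can also be confirmed by exhaustive search over all vertex permutations. The sketch below is our own check, with adjacency again taken as perpendicularity under $S_2$; it verifies the count for $q=5$ and $q=7$:

```python
# Brute-force |Aut(Oi(2,q))| and compare with 2^{(q+1)/2} * ((q-1)/2)!.
from itertools import permutations
from math import factorial

def aut_count(q):
    # vertex i < q stands for [(1, i)], vertex q stands for [(0, 1)];
    # [u] -- [v] iff u S_2 ^t v = 0, which reads i + j = 0 in F_q,
    # with [(0, 1)] adjacent only to itself.
    n = q + 1
    def adj(i, j):
        if i == q or j == q:
            return i == j
        return (i + j) % q == 0
    count = 0
    for s in permutations(range(n)):
        if all(adj(s[i], s[j]) == adj(i, j)
               for i in range(n) for j in range(i, n)):
            count += 1
    return count

for q in (5, 7):
    assert aut_count(q) == 2 ** ((q + 1) // 2) * factorial((q - 1) // 2)
```

The counts ($16$ for $q=5$, $96$ for $q=7$) match: the two loop vertices may be interchanged, and the $(q-1)/2$ edges may be permuted and flipped independently.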
Secondly, we will discuss the case of $\delta=1$. Since the groups $O_{2\nu+1,1}(\mathbb{F}_q)$ and $O_{2\nu+1,z}(\mathbb{F}_q)$ are isomorphic, we need only consider the case of $O_{2\nu+1,1}(\mathbb{F}_q)$; the other case is proved similarly.
Let $S=S_{2\nu+1,1}$. In what follows, we write $\varepsilon$ for $e_{2\nu+1}$. Then
$$e_i{S_{2\nu+1,1}}{^t\!f_i}=\varepsilon S_{2\nu+1,1}{^t\!\varepsilon}=1,e_i{S_{2\nu+1,1}}{^t\!e_j}=f_i{S_{2\nu+1,1}}{^t\!f_j}=0\ {\rm for}\ 1\leq i,j\leq\nu$$
and
$$e_i{S_{2\nu+1,1}}{^t\!f_j}=e_i{S_{2\nu+1,1}}{^t\!\varepsilon}= f_i{S_{2\nu+1,1}}{^t\!\varepsilon}=0\ {\rm for}\ i\neq j,1\leq i,j\leq\nu.$$
Similar to the case of $\delta=0$, let $\varphi_{2\nu+1}$ be the natural action of ${\rm Aut}(\mathbb{F}_q)$ on the group $\mathbb{F}_q^{*2}\times \mathbb{F}_q^*\times \cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*$ defined by $\varphi_{2\nu+1}(\pi)((k_1,k_2,\ldots, k_{\nu},\delta_1))= (\pi(k_1),\pi(k_2), \ldots, \pi(k_{\nu}),\pi(\delta_1))$ and $\big(\mathbb{F}_q^{*2}\times \mathbb{F}_q^*\times \cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*\big) \rtimes_{\varphi_{2\nu+1}} {\rm Aut}\big(\mathbb{F}_q\big)$ be a semidirect product group corresponding to $\varphi_{2\nu+1}$. Moreover, we define maps $\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\pi)}$ from $V\big(Oi\big(2\nu+1,q\big)\big)$ to itself by
$$\sigma_{(k_1,k_2,\ldots,k_\nu, \delta_1,\pi)}(A)= \sigma_\pi(A)\cdot {\rm diag}(k_1,k_2,\ldots,k_\nu,k_1^{-1},k_2^{-1},\ldots,k_{\nu}^{-1},\delta_1),$$
where $k_1\in \mathbb{F}_q^{*2}$, $k_2,\ldots, k_{\nu}\in \mathbb{F}_q^*$, $\delta_1\in \mathbb{F}_3^*$ and $\pi\in {\rm Aut}(\mathbb{F}_q)$. It is easy to prove that $\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\pi)}\in {\rm Aut}\big(Oi\big(2\nu+1,q\big)\big)$.
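The constraint $\delta_1\in\mathbb{F}_3^*$ is exactly what is needed for this diagonal matrix to preserve $S_{2\nu+1,1}$. A small numerical sketch of ours over $\mathbb{F}_7$ (with $\nu=2$) makes this explicit:

```python
# For S = S_{2nu+1,1} = [[0, I, 0], [I, 0, 0], [0, 0, 1]] over F_q, check that
# diag(k1,...,k_nu, k1^{-1},...,k_nu^{-1}, d) preserves S exactly when d^2 = 1.
q, nu = 7, 2
n = 2 * nu + 1

S = [[0] * n for _ in range(n)]
for i in range(nu):
    S[i][nu + i] = S[nu + i][i] = 1
S[2 * nu][2 * nu] = 1          # epsilon S ^t epsilon = 1

def preserves(diag):
    # (D S ^t D)[i][j] = diag[i] * diag[j] * S[i][j]
    return all((diag[i] * diag[j] * S[i][j]) % q == S[i][j]
               for i in range(n) for j in range(n))

ks = [3, 5]
diag_base = ks + [pow(k, q - 2, q) for k in ks]
good = [d for d in range(1, q) if preserves(diag_base + [d])]
assert good == [1, q - 1]      # d^2 = 1, i.e. d = +/-1 (F_3^* in the text)
```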
\begin{Theorem}\label{ot2}
Let $\nu\geq2$ and $E_{2\nu+1}$ be the subgroup of ${\rm Aut}\big(Oi\big(2\nu+1,q\big)\big)$ defined as follows:
$$E_{2\nu+1}=\big\{\sigma\in {\rm Aut}\big(Oi\big(2\nu+1,q\big)\big): \sigma\big([e_{i}]\big)=[e_{i}], \sigma\big([f_{i}]\big)=[f_{i}]\ {\rm and}\ \sigma\big([\varepsilon]\big)=[\varepsilon]\ {\rm for}\ 1\leq i\leq\nu\big\}.$$
Then ${\rm Aut}\big(Oi\big(2\nu+1,q\big)\big)=PO_{2\nu+1,1}(\mathbb{F}_q)\cdot E_{2\nu+1}$. Moreover, let
\begin{eqnarray*}
h:\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+1}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu+1}\right. &\longrightarrow& E_{2\nu+1}\\
(k_1,k_2,\ldots,k_\nu,\delta_1,\pi)K_{2\nu+1} &\longrightarrow& \sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\pi)},
\end{eqnarray*}
where $1_{\mathbb{F}_q}$ is the identity element of ${\rm Aut}(\mathbb{F}_q)$, and
\begin{displaymath}
K_{2\nu+1}=\left\{ \begin{array}{ll}
\{(c,c,\ldots,c,c,1_{\mathbb{F}_q}):c=\pm1\}, &\ {\rm if}\ -1\in\mathbb{F}_q^{*2},\\
\{(1,1,\ldots,1,1,1_{\mathbb{F}_q}) \}, &\ {\rm if}\ -1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
Then $h$ is an isomorphism of groups from $\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+1}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu+1}\right.$ to $E_{2\nu+1}$.
\end{Theorem}
\begin{proof} Similar to Theorem \ref{ot1}, we can prove that $${\rm Aut}\big(Oi\big(2\nu+1, q\big)\big)= PO_{2\nu+1,1}(\mathbb{F}_q)\cdot E_{2\nu+1}.$$
Define a map:
\begin{eqnarray*}
h^{'}:(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+1}} {\rm Aut}(\mathbb{F}_q) & \longrightarrow & E_{2\nu+1}\\
(k_1,k_2,\ldots,k_\nu,\delta_1,\pi) & \longrightarrow & \sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\pi)}.
\end{eqnarray*}
By the definition of $\sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\pi)}$, it is easy to prove that $h^{'}$ is a group homomorphism and the kernel of $h^{'}$ is $K_{2\nu+1}$. Hence in order to show that $h$ is a group isomorphism,
it suffices to show that for any $\sigma\in E_{2\nu+1}$, there exist $k_1\in\mathbb{F}_q^{*2},~k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$, $\delta_1\in \mathbb{F}_3^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that ~$$\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\pi)}.$$
\vskip 0.15cm
Let $\sigma\in E_{2\nu+1}$. We will prove that there exist $k_1\in\mathbb{F}_q^{*2}$, $k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$, $\delta_1\in \mathbb{F}_3^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\pi)}$.
Let $[(a_1,a_2,\ldots,a_{2\nu+1})]\in V\big(Oi\big(2\nu+1,q\big)\big)$ and suppose that $\sigma([(a_1,a_2,\ldots,a_{2\nu+1})]) =[(b_1,b_2,\ldots,b_{2\nu+1})]$. Similar to the proof of the case $\delta=0$, we can conclude that
$$a_i=0\ {\rm if\ and\ only\ if}\ b_i=0,\ 1\leq i\leq2\nu+1,$$
and define permutations $\pi_j$ of $\mathbb{F}_q$ with $\pi_j(0)=0$ such that
$$\sigma\big([e_1+a_ke_k]\big)=[k_1e_1+\pi_k(a_k)e_k],\qquad \sigma\big([e_1+a_{\nu+k}f_{k}]\big)= [k_1e_1+\pi_{\nu+k}(a_{\nu+k})f_{k}]$$
and
$$\sigma\big([e_1+a_{2\nu+1}\varepsilon]\big)= [k_1e_1+\pi_{2\nu+1}(a_{2\nu+1})\varepsilon],$$
where $j=2,\ldots,\nu,\nu+2,\ldots,2\nu+1$, $k=2,\ldots,\nu$ and $k_1^2=\pi_2(1)^{-1}\pi_4(1)^{-1}$.
\vskip 0.15cm
In the following proof, we need only consider two cases: $\nu=2$ and $\nu>2$.
(1) $\nu=2$. By Theorem \ref{ot1}, we know that $\sigma$ carries the vertex $[(a_1,a_2,a_3,a_4,a_{5})]$ of $Oi\big(5,q\big)$ into the vertex
$$[(k_1\pi(a_1),k_2\pi(a_2), k_1^{-1}\pi(a_{3}),k_2^{-1}\pi(a_{4}), x_{5})],$$
where $k_1^2=\pi_2(1)^{-1}\pi_4(1)^{-1}$, $k_2=k_1\pi_2(1)$, $\pi=\pi_2(1)^{-1}\pi_2= \pi_4(1)^{-1}\pi_4$ and $x_{5}$ is to be determined.
In order to determine the value of $x_{5},$ we first determine the images of $[e_i+a_{5}\varepsilon]$ and $[f_i+a_{5}\varepsilon]$ for $i=1,2$ under $\sigma$. Suppose that $\sigma([e_i+a_5\varepsilon])= [k_ie_i+a_{5}^{'}\varepsilon]$ and $\sigma([f_i+a_5\varepsilon])= [k_i^{-1}f_i+a_{5}^{'}\varepsilon]$ for $i=1,2$.
Suppose that $\sigma([(0,0,1,-a_{5},1)])= [(0,0,k_1^{-1},-k_2^{-1}\pi(a_{5}),x)]$ and $\sigma([(0,0,1,-a_{5},-1)])= [(0,0,\allowbreak k_1^{-1},-k_2^{-1}\pi(a_{5}),y)]$.
Since $[(0,0,1,-a_{5}, 1)]$---$[(1,0,0,0,-1)]$, we have $[(0,0,k_1^{-1},-k_2^{-1}\pi(a_{5}),x)]$ --- $[(k_1,0,0,0,\pi_5(-1))],$ i.e., $x=-\pi_{5}(-1)^{-1}$. Since $[(0,1,0,0, a_{5})]$ --- $[(0,0,1,-a_5,1)] $, we have $[(0, k_2, 0,0,a_{5}^{'})]$ --- $[(0,0,k_1^{-1},-k_2^{-1}\pi(a_{5}), -\pi_{5}(-1)^{-1})]$, i.e., $a_{5}^{'}= -\pi_{5}(-1)\pi(a_{5})$. Since $[(0,0,1,-a_{5},-1)]$ --- $[(1,0,0,0,1)]$, we have $[(0,0,k_1^{-1},-k_2^{-1}\pi(a_{5}),y)]$ --- $[(k_1,0,0,0,\pi_5(1))]$, i.e., $y=-\pi_{5}(1)^{-1}$. Since $[(0,1,0,0,- a_{5})]$ --- $[(0,0,1,-a_{5},-1)]$, we have $[(0,k_2,0,0,\pi_{5}(-1)\pi(a_{5}))]$ --- $[(0,0,k_1^{-1},-k_2^{-1}\pi(a_{5}),-\pi_{5}(1)^{-1})]$, i.e., $\pi_{5}(1)=-\pi_{5}(-1)$. Thus $\sigma([(0,1,0,0,a_{5})])= [(0,k_2,0,0,\pi_{5}(1)\pi(a_{5}))].$
Similarly, we can prove that $\sigma([(0,0,0,1,a_{5})])= [(0,0,0,k_2^{-1},\pi_{5}(1)\pi(a_{5}))]$, $\sigma([(1,0,0,0,a_{5})])= [(k_1,0,0,0,\pi_{5}(1)\pi(a_{5}))]$ and $\sigma([(0,0,1,0,a_{5})])= [(0,0,k_1^{-1},0,\pi_{5}(1)\pi(a_{5}))]$.
Since $[(1,0,0,0,1)]$ --- $[(0,0,1,0,-1)]$, it follows that $[(k_1,0,0,0,\pi_{5}(1))]$ --- $[(0,0,k_1^{-1},0,-\pi_{5}(1))]$, i.e., $\pi_{5}(1)^2=1$. Hence we have $\pi_{5}(1)\in \mathbb{F}_3^*$. It is easy to check that
$$\sigma([(a_1,a_2,a_3,a_4,a_{5})])=[(k_1\pi(a_1),k_2\pi(a_2), k_1^{-1}\pi(a_{3}),k_2^{-1}\pi(a_{4}), \pi_5(1)\pi(a_{5}))].$$
Lastly, let $k_1^2=\pi_2(1)^{-1}\pi_{4}(1)^{-1}$, $k_2=k_1\pi_{2}(1)$, $\delta_1=\pi_5(1)$ and $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4=\pi_5(1)^{-1}\pi_5$. Then for $A=\big(a_{ij}\big)_{m\times5}\in V\big(Oi\big(5,q\big)\big)$,
\begin{displaymath}
\sigma\big(A\big)=\left(
\begin{array}{ccccc}
k_1\pi(a_{11}) & k_2\pi(a_{12}) &k_1^{-1}\pi(a_{13}) &k_2^{-1}\pi(a_{14}) & \delta_1\pi(a_{15})\\
\vdots&\vdots & \vdots& \vdots& \vdots\\
k_1\pi(a_{m1}) & k_2\pi(a_{m2}) &k_1^{-1}\pi(a_{m3}) & k_2^{-1}\pi(a_{m4})&\delta_1\pi(a_{m5})
\end{array}
\right)=\sigma_{(k_1,k_2,\delta_1,\pi)}\big(A\big).
\end{displaymath}
So $\sigma=\sigma_{(k_1,k_2,\delta_1,\pi)}.$
\vskip 1.5mm
(2) $\nu>2$. Similar to the case of $\nu=2$, we can prove that there exist $k_1\in\mathbb{F}_q^{*2},k_2,\ldots,k_{\nu}\in \mathbb{F}_q^*, \delta_1\in\mathbb{F}_3^{*}$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\pi)}.$
\end{proof}
\begin{Corollary} Let $\nu\geq2$ and $\mathbb{F}_q$ be a field of characteristic $p$. Then
\begin{displaymath}
\big|{\rm Aut}\big(Oi\big(2\nu+1,q\big)\big)\big|=\left\{ \begin{array}{ll}
\frac{1}{2}q^{\nu^2} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p], &\ {\rm if}~-1\in\mathbb{F}_q^{*2},\\
q^{\nu^2} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p], &\ {\rm if}~-1\notin\mathbb{F}_q^{*2}.
\end{array} \right.
\end{displaymath}
\end{Corollary}
\begin{proof}
The proof is similar to that of Corollary \ref{3.3}.
\end{proof}
Finally, we will discuss the case of $\delta=2$. Let $S=S_{2\nu+2,\Delta}$ and $\Delta={\rm diag}(1,-z)$ with $-1\in\mathbb{F}_q^{*2}$. In what follows, we write $\kappa$ for $e_{2\nu+2}$. Then
\begin{center}
$e_i{S_{2\nu+2}}{^t\!f_i}=\varepsilon S_{2\nu+2}{^t\!\varepsilon}=1, \kappa S_{2\nu+2}{^t\!\kappa}=-z,e_i{S_{2\nu+2}}{^t\!e_j}=f_i{S_{2\nu+2}}{^t\!f_j}=0$ for $1\leq i,j\leq\nu$
\end{center}
and
\begin{center}
$e_i{S_{2\nu+2}}{^t\!f_j}=e_i{S_{2\nu+2}}{^t\!\varepsilon}= f_i{S_{2\nu+2}}{^t\!\varepsilon}=e_i{S_{2\nu+2}}{^t\!\kappa}= f_i{S_{2\nu+2}}{^t\!\kappa}=0$ for $i\neq j,1\leq i,j\leq\nu.$
\end{center}
Similar to the case of $\delta=0$, let $\varphi_{2\nu+2}$ be the natural action of ${\rm Aut}(\mathbb{F}_q)$ on the group $\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times\mathbb{F}_q^* \times\mathbb{F}_3^*\times\mathbb{F}_3^*$ defined by $\varphi_{2\nu+2}(\pi)((k_1,k_2,\ldots, k_{\nu},\delta_1,\delta_2))= (\pi(k_1),\pi(k_2),\ldots,\pi(k_{\nu}),\pi(\delta_1),\pi(\delta_2))$ and $\big(\mathbb{F}_q^{*2}\times \mathbb{F}_q^*\times \cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*\times \mathbb{F}_3^*\big) \rtimes_{\varphi_{2\nu+2}} {\rm Aut}\big(\mathbb{F}_q\big)$ be a semidirect product group corresponding to $\varphi_{2\nu+2}$. Moreover, we define maps $\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\delta_2,\pi)}$ from $V\big(Oi\big(2\nu+2,q\big)\big)$ to itself by
$$\sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi)}(A)= \sigma_\pi(A)\cdot {\rm diag}(k_1,k_2,\ldots,k_\nu,k_1^{-1},k_2^{-1},\ldots,k_{\nu}^{-1},\delta_1,\delta_2),$$
where $k_1\in \mathbb{F}_q^{*2},k_2,\ldots,k_{\nu}\in \mathbb{F}_q^*,\delta_1,\delta_2\in \mathbb{F}_3^*$ and $\pi\in {\rm Aut}(\mathbb{F}_q)$. It is easy to prove that $\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\delta_2,\pi)}\in {\rm Aut}\big(Oi\big(2\nu+2,q\big)\big)$.
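As in the case $\delta=1$, the condition $\delta_1,\delta_2\in\mathbb{F}_3^*$ is what the Gram matrix $S_{2\nu+2,\Delta}$ forces on the last two diagonal entries. A numerical sketch of ours with $q=7$, $\nu=2$ and the non-square $z=3$:

```python
# delta = 2: S = S_{2nu+2,Delta} with Delta = diag(1, -z), z a non-square.
# diag(k1,...,k_nu, k1^{-1},...,k_nu^{-1}, d1, d2) preserves S iff d1^2 = d2^2 = 1.
q, nu, z = 7, 2, 3             # 3 is a non-square mod 7
n = 2 * nu + 2

S = [[0] * n for _ in range(n)]
for i in range(nu):
    S[i][nu + i] = S[nu + i][i] = 1
S[n - 2][n - 2] = 1            # epsilon S ^t epsilon = 1
S[n - 1][n - 1] = (-z) % q     # kappa S ^t kappa = -z

def preserves(diag):
    return all((diag[i] * diag[j] * S[i][j]) % q == S[i][j]
               for i in range(n) for j in range(n))

ks = [2, 4]
base = ks + [pow(k, q - 2, q) for k in ks]
good = [(d1, d2) for d1 in range(1, q) for d2 in range(1, q)
        if preserves(base + [d1, d2])]
assert sorted(good) == [(1, 1), (1, q - 1), (q - 1, 1), (q - 1, q - 1)]
```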
\begin{Theorem} \label{ot3}
Let $\nu\geq2$ and $E_{2\nu+2}$ be the subgroup of ${\rm Aut}\big(Oi\big(2\nu+2,q\big)\big)$ defined as follows:
$$E_{2\nu+2}=\big\{\sigma\in {\rm Aut}\big(Oi\big(2\nu+2,q\big)\big): \sigma([e_{i}])=[e_{i}], \sigma([f_{i}])=[f_{i}],\sigma([\varepsilon])=[\varepsilon] \ {\rm and}\ \sigma([\kappa])=[\kappa]\ {\rm for}\ 1\leq i\leq\nu\big\}.$$
Then ${\rm Aut}\big(Oi\big(2\nu+2,q\big)\big)=PO_{2\nu+2}\big(\mathbb{F}_q\big)\cdot E_{2\nu+2}$. Moreover, let
\begin{eqnarray*}
h:\left.{\big((\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*\times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+2}}{\rm Aut}(\mathbb{F}_q)\big)}\middle/ K_{2\nu+2}\right. &\longrightarrow& E_{2\nu+2}\\
(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi)K_{2\nu+2} &\longrightarrow& \sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi)},
\end{eqnarray*}
where $1_{\mathbb{F}_q}$ is an identity element of ${\rm Aut}(\mathbb{F}_q)$, and
$K_{2\nu+2}=\{(c,c,\ldots,c,c,c,1_{\mathbb{F}_q}) \in(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^* \times \mathbb{F}_3^* \times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+2}}{\rm Aut}(\mathbb{F}_q):c=\pm1\}$.
Then $h$ is an isomorphism of groups.
\end{Theorem}
\begin{proof} Similar to Theorem \ref{ot1}, we can show that ${\rm Aut}\big(Oi\big(2\nu+2, q\big)\big)=PO_{2\nu+2}\big(\mathbb{F}_q\big)\cdot E_{2\nu+2}$.
Define a map:
\begin{eqnarray*}
h^{'}:(\mathbb{F}_q^{*2}\times\mathbb{F}_q^*\times\cdots\times \mathbb{F}_q^*\times \mathbb{F}_3^*\times \mathbb{F}_3^*)\rtimes_{\varphi_{2\nu+2}} {\rm Aut}(\mathbb{F}_q) & \longrightarrow & E_{2\nu+2}\\
(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi) & \longrightarrow & \sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi)}.
\end{eqnarray*}
By the definition of $\sigma_{(k_1,k_2,\ldots,k_\nu,\delta_1,\delta_2,\pi)}$, it is easy to prove that $h^{'}$ is a group homomorphism and the kernel of $h^{'}$ is $K_{2\nu+2}$. Hence in order to show that $h$ is a group isomorphism,
it suffices to show that for any $\sigma\in E_{2\nu+2}$, there exist $k_1\in\mathbb{F}_q^{*2},k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$, $\delta_1,\delta_2\in \mathbb{F}_3^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that ~$$\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\delta_2,\pi)}.$$
Let $\sigma\in E_{2\nu+2}$. We will prove that there exist $k_1\in\mathbb{F}_q^{*2}$, $k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$, $\delta_1,\delta_2\in \mathbb{F}_3^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\delta_2,\pi)}$.
Let $[(a_1,a_2,\ldots,a_{2\nu+1},a_{2\nu+2})]\in V\big(Oi\big(2\nu+2,q\big)\big)$ and suppose that $\sigma([(a_1,a_2,\ldots,a_{2\nu+1},a_{2\nu+2})]) =[(b_1,b_2,\ldots,b_{2\nu+1},b_{2\nu+2})]$. Similar to the proof of the case $\delta=0$ and $\delta=1$, we can conclude that
$$a_i=0\ {\rm if\ and\ only\ if}\ b_i=0\ {\rm for}\ 1\leq i\leq2\nu+2,$$
and define permutations $\pi_j$ of $\mathbb{F}_q$ with $\pi_j(0)=0$ such that
$\sigma\big([e_1+a_ke_k]\big)=[k_1e_1+\pi_k(a_k)e_k],$ $\sigma\big([e_1+a_{\nu+k}f_{k}]\big)= [k_1e_1+\pi_{\nu+k}(a_{\nu+k})f_{k}],$
$\sigma\big([e_1+a_{2\nu+1}\varepsilon]\big)= [k_1e_1+\pi_{2\nu+1}(a_{2\nu+1})\varepsilon]$
and $\sigma\big([e_1+a_{2\nu+2}\kappa]\big)= [k_1e_1+\pi_{2\nu+2}(a_{2\nu+2})\kappa],$
where $k=2,\ldots,\nu$ and $k_1^{2}=\pi_2(1)^{-1}\pi_4(1)^{-1}$.
\vskip 0.15cm
In the following proof, we need only consider two cases: $\nu=2$ and $\nu>2$.
(1) $\nu=2$. By Theorem \ref{ot2}, we know that $\sigma$ carries the vertex $[(a_1,a_2,a_3,a_4,a_5,a_6)]$ of $Oi\big(6,q\big)$ into the vertex
$$[(k_1\pi(a_1),k_2\pi(a_2), k_1^{-1}\pi(a_3),k_2^{-1}\pi(a_{4}),\delta_1\pi(a_5), x_6)],$$
where $k_1^{-1}k_2=\pi_2(1)$, $k_1^2=\pi_2(1)^{-1}\pi_4(1)^{-1}$, $\delta_1=\pi_5(1)$, $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4= \pi_5(1)^{-1}\pi_5$ and $x_6$ is to be determined.
In order to determine the value of $x_6,$ we first determine the images of ~$[e_i+a_6\kappa]$, $[f_i+a_6\kappa]$ and $[\varepsilon+a_6\kappa]$, $i=1,2$ under $\sigma$. Suppose that $\sigma([e_i+a_6\kappa])= [k_ie_i+a_6^{'}\kappa]$, $\sigma([f_i+a_6\kappa])= [k_i^{-1}f_i+a_6^{'}\kappa]$, $i=1,2$, and $\sigma([\varepsilon+a_6\kappa])= [\delta_1\varepsilon+a_6^{'}\kappa]$.
Suppose that $\sigma([e_3+a_6e_4+z^{-1}\kappa])= [k_1^{-1}e_3+k_2^{-1}\pi(a_6)e_4+x\kappa]$.
Since $[e_3+a_6e_4+z^{-1}\kappa]$---$[e_{1}+\kappa]$, we have $[k_1^{-1}e_3+k_2^{-1}\pi(a_6)e_4+x\kappa]$---$[k_1e_1+\pi_6(1)\kappa]$, i.e., $x=z^{-1}\pi_6(1)^{-1}$. Since $[e_2+a_6\kappa]$ --- $[e_3+ a_6e_4+
z^{-1}\kappa]$, we have $[k_2e_2+a_6^{'}\kappa]$ --- $[k_1^{-1}e_3+k_2^{-1}\pi(a_6)e_4 +z^{-1}\pi_6(1)^{-1}\kappa]$, i.e., $a_6^{'}= \pi_6(1)\pi(a_6)$. Thus $\sigma([e_2+a_6\kappa])=[k_2e_2+\pi_6(1)\pi(a_6)\kappa]$.
Similarly, we can prove that $\sigma([e_4+a_{6}\kappa])= [k_2^{-1}e_4+\pi_6(1)\pi(a_{6})\kappa],$ $\sigma([e_{1}+ a_{6}\kappa])=[k_1e_{1}+ \pi_6(1)\pi(a_6)\kappa],$ $\sigma([e_3+ a_{6}\kappa])= [k_1^{-1}e_3+\pi_6(1)\pi(a_{6})\kappa]$ and $\sigma([\varepsilon+a_{6}\kappa])= [\delta_1\varepsilon+\pi_6(1)\pi(a_{6})\kappa].$
We know that $[e_2+\kappa]$---$[e_4+z^{-1}\kappa]$, and then $[k_2e_2+\pi_6(1)\kappa]$---$[k_2^{-1}e_4+\pi(z^{-1})\pi_6(1)\kappa]$. Thus there exists $\delta\in \mathbb{F}_3^*$ such that $\pi_6(1)=\delta\sqrt{\pi(z)z^{-1}}.$ Moreover, it is easy to check that
$$\sigma([(a_1,a_2,a_3,a_4,a_5,a_6)])=
[(k_1\pi(a_1),k_2\pi(a_2), k_1^{-1}\pi(a_3),k_2^{-1}\pi(a_{4}),\delta_1\pi(a_5), \pi_6(1)\pi(a_6))].$$
Lastly, let $k_1^{-1}k_2=\pi_2(1)$, $k_1^2=\pi_2(1)^{-1}\pi_4(1)^{-1}$, $\delta_1=\pi_5(1)$, $\delta_2=\delta$ and $\pi=\pi_2(1)^{-1}\pi_2=\pi_4(1)^{-1}\pi_4= \pi_5(1)^{-1}\pi_5=\pi_6(1)^{-1}\pi_6$. Then for $A=\big(a_{ij}\big)_{m\times6}\in V\big(Oi\big(6,q\big)\big)$,
\begin{displaymath}
\sigma\big(A\big)=\left(
\begin{array}{cccccc}
k_1\pi(a_{11}) & k_2\pi(a_{12}) &k_1^{-1}\pi(a_{13}) &k_2^{-1}\pi(a_{14}) & \delta_1\pi(a_{15})& \delta_2\sqrt{\pi(z)z^{-1}}\pi(a_{16})\\
\vdots&\vdots & \vdots& \vdots& \vdots & \vdots\\
k_1\pi(a_{m1}) & k_2\pi(a_{m2}) &k_1^{-1}\pi(a_{m3}) & k_2^{-1}\pi(a_{m4})&\delta_1\pi(a_{m5})& \delta_2\sqrt{\pi(z)z^{-1}}\pi(a_{m6})
\end{array}
\right)
\end{displaymath}
\begin{displaymath}
=\,\sigma_{(k_1,k_2,\delta_1,\delta_2,\pi)}\big(A\big).
\end{displaymath}
So $\sigma=\sigma_{(k_1,k_2,\delta_1,\delta_2,\pi)}.$
\vskip 1.5mm
(2) $\nu>2$. Similar to the case of $\nu=2$, we can prove that there exist $k_1\in\mathbb{F}_q^{*2}$, $k_2,\ldots,k_{\nu}\in \mathbb{F}_q^*$, $\delta_1,\delta_2\in\mathbb{F}_3^{*}$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\delta_1,\delta_2,\pi)}.$
\end{proof}
\begin{Corollary} Let $\nu\geq2$ and let $\mathbb{F}_q$ be a field of characteristic $p$. Then
\begin{displaymath}
\big|{\rm Aut}\big(Oi\big(2\nu+2,q\big)\big)\big|=\frac{1}{2}q^{\nu(\nu+1)} \mathop{\prod}\limits_{i=1}^{\nu}(q^i-1) \mathop{\prod}\limits_{i=1}^{\nu+1}(q^i+1)[\mathbb{F}_q:\mathbb{F}_p].
\end{displaymath}
\end{Corollary}
\begin{proof}
The proof is similar to that of Corollary \ref{3.3}.
\end{proof}
\section{The action of automorphism groups of orthogonal inner product graphs of odd characteristic}
\begin{Theorem}\label{4.1} Let $Oi\big(2\nu+\delta,q\big)$ be the orthogonal inner product graph over $\mathbb{F}_q$ and ~$\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)\neq\varnothing$ for $1\leq m<2\nu+\delta$. Then $\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ is exactly one orbit of $V\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
\end{Theorem}
\begin{proof}
By \cite[Lemma 6.4]{zwG} and Proposition \ref{2.4}, we know that for any $A,B\in\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$, there exists $T\in\mathcal{F}_{2\nu+\delta,\Delta}(q)$ such that $\sigma_T(A)=B.$
Thus ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ is transitive on $\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$.
In order to prove that $\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ is exactly one orbit of $V\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$, we only need to show that for any $\sigma\in{\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$, we have
$\sigma(\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta))= \mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta).$ In the following proof, we will consider three cases: $\delta=0,1$ and $2$.
Let $\delta=0$, $\sigma\in{\rm Aut}\big(Oi\big(2\nu,q\big)\big)$ and $A\in\sigma(\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu,\Delta))$.
By Theorem \ref{ot1}, we know that there exist $k_1\in\mathbb{F}_q^{*2},~k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}.$
By standard facts about finite fields, we know that for a non-square element $z\in\mathbb{F}_q$, there exists $a\in\mathbb{F}_q^{*2}$ such that $z=a\pi(z)$. If $AS\,{^t\!A}$ is cogredient to $M(m_1,2s_1+\gamma_1,s_1,\Gamma_1)$, then it is easy to check that $\sigma(A)S\,{^t\!(\sigma(A))}$ is cogredient to $M(m_1,2s_1+\gamma_1,s_1,\Gamma_1).$
So $A$ and $\sigma(A)$ are orthogonal subspaces of the same type. Thus we have
$\sigma(\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu,\Delta))= \mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu,\Delta).$
Similar to the case of $\delta=0$, we can prove that
$\sigma(\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta))= \mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ when $\delta=1$ and $\delta=2$.
Thus $\mathcal{M}(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ is exactly one orbit of $V\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
\end{proof}
For $A\in V\big(Oi\big(2\nu+\delta,q\big)\big)$, we define
$t(A)=(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ if the type of $A$ is $(m,2s+\gamma,s,\Gamma;2\nu+\delta,\Delta)$ and also define
\begin{displaymath}
\phi(A)= \left\{ \begin{array}{ll}
0, & \text{if }|A|\text{ is a non-square element},\\
1, & \text{if }|A|\text{ is a square element}.
\end{array} \right.
\end{displaymath}
For $a\in\mathbb{F}_q^*$, we set $a^0=1$ and $a^1=a$.
\begin{Proposition}\label{4.2} Let $Oi\big(2\nu+\delta,q\big)$ be the orthogonal inner product graph over $\mathbb{F}_q$ and $X_1\text{---}X_2,\linebreak[4]Y_1\text{---}Y_2\in E\big(Oi\big(2\nu+\delta,q\big)\big).$ Then $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of $O_{2\nu+\delta,\Delta}\big(\mathbb{F}_q\big)$ if and only if one of the following is true$:$ $(1)$ $t(X_1)=t(Y_1),$ $t(X_2)=t(Y_2),$ $t(X_1+X_2)=t(Y_1+Y_2);$ $(2)$ $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1),$ $t(X_1+X_2)=t(Y_1+Y_2).$
\end{Proposition}
\begin{proof} ($\Rightarrow$) Suppose that $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of $O_{2\nu+\delta,\Delta}(\mathbb{F}_q)$. Then there exists $T\in O_{2\nu+\delta,\Delta}(\mathbb{F}_q)$ such that one of the following is true: $(1)$ $X_1T=Y_1,$ $X_2T=Y_2$; $(2)$ $X_1T=Y_2,$ $X_2T=Y_1.$ Without loss of generality, we can assume that $X_1T=Y_1$ and $X_2T=Y_2$. Then $t(X_1)=t(Y_1)$ and $t(X_2)=t(Y_2)$ by Proposition \ref{2.4} and Theorem \ref{4.1}. It is sufficient to prove $t(X_1+X_2)=t(Y_1+Y_2)$. Let $t(X_1)=t(Y_1)=(m_1,2s_1+\gamma_1,s_1,\Gamma_1;2\nu+\delta,\Delta)$ and $t(X_2)=t(Y_2)=(m_2,2s_2+\gamma_2,s_2,\Gamma_2;2\nu+\delta,\Delta)$. Then we have
\begin{displaymath}
X_1T=\begin{bmatrix} X_{11} \\ \vdots \\ X_{1m_1}\end{bmatrix}T=
\begin{bmatrix} Y_{11} \\ \vdots \\ Y_{1m_1}\end{bmatrix}=Y_1
{\rm\ and\ }X_2T=\begin{bmatrix} X_{21} \\ \vdots \\ X_{2m_2}\end{bmatrix}T=
\begin{bmatrix} Y_{21} \\ \vdots \\ Y_{2m_2}\end{bmatrix}=Y_2.
\end{displaymath}
It is easy to check that $(X_1+X_2)T=Y_1+Y_2$. By \cite[Lemma 6.4]{zwG}, we have $t(X_1+X_2)=t(Y_1+Y_2)$.
\vskip 0.13cm
When $X_1T=Y_2$ and $X_2T=Y_1,$ we conclude similarly that $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1)$ and $t(X_1+X_2)=t(Y_1+Y_2).$
\vskip 0.15cm
($\Leftarrow$) Suppose that one of the following is true: $(1)$ $t(X_1)=t(Y_1),$ $t(X_2)=t(Y_2),$ $t(X_1+X_2)=t(Y_1+Y_2)$; $(2)$ $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1),$ $t(X_1+X_2)=t(Y_1+Y_2)$. Without loss of generality we can assume that $t(X_1)=t(Y_1)=(m_1,r_1),$ $t(X_2)=t(Y_2)=(m_2,r_2)$ and $t(X_1+X_2)=t(Y_1+Y_2)=(m,r).$ Then there exist matrix representations
\begin{displaymath}
\begin{bmatrix} \alpha_{1} \\ \vdots \\ \alpha_{m-m_2} \\ \gamma_1 \\ \vdots \\ \gamma_{m_1+m_2-m}\end{bmatrix},~ \begin{bmatrix}\beta_{1} \\ \vdots \\ \beta_{m-m_1} \\ \gamma_1 \\ \vdots \\ \gamma_{m_1+m_2-m}\end{bmatrix}~\text{and}~ \begin{bmatrix} \gamma_1 \\ \vdots \\ \gamma_{m_1+m_2-m}\end{bmatrix}
\end{displaymath}
of $X_1,$ $X_2$ and $X_1\cap X_2$ respectively such that
\begin{displaymath}
X_1+X_2=\begin{bmatrix} \alpha_{1} \\ \vdots \\ \alpha_{m-m_2} \\ \beta_{1}\\ \vdots \\ \beta_{m-m_1} \\ \gamma_1 \\ \vdots \\ \gamma_{m_1+m_2-m}\end{bmatrix},
\end{displaymath}
and
$$\alpha_iS\,{^t\!\beta_j}=0,\: \alpha_iS\,{^t\!\gamma_k}=0,\: \beta_jS\,{^t\!\gamma_k}=0$$
where $1\leq i\leq m-m_2$, $1\leq j\leq m-m_1$ and $1\leq k\leq m_1+m_2-m$. It is easy to verify that $t(X_1\cap X_2)$ is determined by $X_1$ and $X_2$. We can choose a suitable basis
$$\alpha_{1},\ldots,\alpha_{m-m_2},~\beta_{1},\ldots,\beta_{m-m_1},~ \gamma_1,\ldots,\linebreak[4]\gamma_{m_1+m_2-m}$$
of $X_1+X_2$
such that $(X_1+X_2)S\,{^t(X_1+X_2)}=$
$${\rm diag}(x_1,\ldots,x_{r_1-1},z^{\phi(X_1)},0,\ldots,0,x_1,\ldots,x_{r_2-1},z^{\phi(X_2)}, 0,\ldots,0,x_1,\ldots,x_{r_3-1},z^{\phi(X_3)},0,\ldots,0),$$
where $x_h=1$, $1\leq h<\max\{r_1,\: r_2,\: r_3\}$.
Similarly, we can prove that there exists a matrix representation
\begin{displaymath}
Y_1+Y_2=\begin{bmatrix}\alpha_{1}^{'} \\ \vdots \\ \alpha_{m-m_2}^{'} \\ \beta_{1}^{'}\\ \vdots \\ \beta_{m-m_1}^{'} \\ \gamma_1^{'} \\ \vdots \\ \gamma_{m_1+m_2-m}^{'}\end{bmatrix},
\end{displaymath}
of $Y_1+Y_2$ such that
\begin{displaymath}
Y_1=\begin{bmatrix}\alpha_{1}^{'} \\ \vdots \\ \alpha_{m-m_2}^{'} \\ \gamma_1^{'} \\ \vdots \\ \gamma_{m_1+m_2-m}^{'}\end{bmatrix}, ~~ Y_2=\begin{bmatrix}\beta_{1}^{'}\\ \vdots \\ \beta_{m-m_1}^{'} \\ \gamma_1^{'} \\ \vdots \\ \gamma_{m_1+m_2-m}^{'}\end{bmatrix},
\end{displaymath}
and
$(Y_1+Y_2)S\,{^t(Y_1+Y_2)}=$
$${\rm diag}(y_1,\ldots,y_{r_1-1},z^{\phi(Y_1)},0,\ldots,0,y_1,\ldots,y_{r_2-1},z^{\phi(Y_2)}, 0,\ldots,0,y_1,\ldots,y_{r_3-1},z^{\phi(Y_3)},0,\ldots,0),$$
where $y_h=1$, $1\leq h< \max\{r_1,\: r_2,\: r_3\}$ and $\phi(X_i)=\phi(Y_i)$, $i=1,2,3$.
By \cite[Lemma 6.8]{zwG}, there exists $T\in O_{2\nu+\delta,\Delta}\big(\mathbb{F}_q\big)$ such that $(X_1+X_2)T=Y_1+Y_2$, $X_1T=Y_1$ and $X_2T=Y_2$. So $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of $O_{2\nu+\delta,\Delta}\big(\mathbb{F}_q\big)$.
\vskip 0.13cm
When $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1)$ and $t(X_1+X_2)=t(Y_1+Y_2),$ we conclude similarly that $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of $O_{2\nu+\delta,\Delta}\big(\mathbb{F}_q\big)$.
\end{proof}
\vskip 0.3cm
\begin{Theorem} \label{4.3} Let $Oi\big(2\nu+\delta,q\big)$ be the orthogonal inner product graph over $\mathbb{F}_q$ and $X_1\text{---}X_2,Y_1\text{---}Y_2\in E\big(Oi\big(2\nu+\delta,q\big)\big).$ Then $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ if and only if one of the following is true$:$ $(1)$ $t(X_1)=t(Y_1),$ $t(X_2)=t(Y_2),$ $t(X_1+X_2)=t(Y_1+Y_2);$ $(2)$ $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1),$ $t(X_1+X_2)=t(Y_1+Y_2).$
\end{Theorem}
\begin{proof} ($\Rightarrow$) Suppose that $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big).$ Then there exists $\sigma\in{\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ such that one of the following is true: $(1)$ $\sigma(X_1)=Y_1,$ $\sigma(X_2)=Y_2$; $(2)$ $\sigma(X_1)=Y_2,$ $\sigma(X_2)=Y_1.$ Without loss of generality, we can assume that $\sigma(X_1)=Y_1$ and $\sigma(X_2)=Y_2$. Then
$t(X_1)=t(Y_1)$ and $t(X_2)=t(Y_2)$ by Theorem \ref{4.1}. It is sufficient to prove $t(X_1+X_2)=t(Y_1+Y_2)$. To do so, we need only consider three cases: $\delta=0,1$ and 2. We first consider the case $\delta=0$. By Theorem \ref{ot1}, we know that there exist $k_1\in\mathbb{F}_q^{*2},k_2,\ldots,k_{\nu}\in\mathbb{F}_q^*$ and $\pi\in{\rm Aut}(\mathbb{F}_q)$ such that $\sigma=\sigma_{(k_1,k_2,\ldots,k_{\nu},\pi)}.$
By Proposition \ref{4.2}, we know that $X_1+X_2$ and $\sigma_{\pi^{-1}}\sigma(X_1+X_2)$ are orthogonal subspaces of the same type. It is easy to check that $M(m,2s+\gamma,s,\Gamma)$ is cogredient to $\sigma_{\pi}(M(m,2s+\gamma,s,\Gamma))$. So $\sigma_{\pi^{-1}}\sigma(X_1+X_2)$ and $\sigma(X_1+X_2)$ are orthogonal subspaces of the same type. Thus
$X_1+X_2$ and $\sigma(X_1+X_2)=Y_1+Y_2$ are orthogonal subspaces of the same type.
Similar to the case of $\delta=0$, we can prove that
$X_1+X_2$ and $Y_1+Y_2$ are orthogonal subspaces of the same type when $\delta=1$ and $\delta=2$.
($\Leftarrow$) Suppose that one of the following is true: $(1)$ $t(X_1)=t(Y_1),$ $t(X_2)=t(Y_2),$ $t(X_1+X_2)=t(Y_1+Y_2)$; $(2)$ $t(X_1)=t(Y_2),$ $t(X_2)=t(Y_1),$ $t(X_1+X_2)=t(Y_1+Y_2).$
By Proposition~\ref{2.4} and Theorem~\ref{4.1}, it is easy to check that there exist $T\in O_{2\nu+\delta,\Delta}(\mathbb{F}_q)$ and $\sigma_T\in {\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$ such that
$$\sigma_T(X_1)=X_1T=Y_1~~{\rm and}~~\sigma_T(X_2)=X_2T=Y_2$$
or
$$\sigma_T(X_1)=X_1T=Y_2~~{\rm and}~~\sigma_T(X_2)=X_2T=Y_1.$$
Thus $X_1$---$X_2$ and $Y_1$---$Y_2$ are in the same orbit of $E\big(Oi\big(2\nu+\delta,q\big)\big)$ under the action of ${\rm Aut}\big(Oi\big(2\nu+\delta,q\big)\big)$.
\end{proof}
In our previous paper \cite{General_strong_bound}, we studied the diameter of word metrics given by finitely many conjugacy classes on arithmetic Chevalley groups $G(\Phi,R)$ for $R$ either a semi-local ring or a ring of S-algebraic integers. The main result was the following theorem:
\begin{mTheorem}\cite[Theorem~3.1]{General_strong_bound}\label{general_thm}
Let $\Phi$ be an irreducible root system of rank at least $2$ and let $R$ be a commutative ring with $1$. Assume additionally that $G(\Phi,R)$ is boundedly generated by root elements and, if $\Phi=C_2$ or $G_2$, that $(R:2R)<\infty.$
Then there is a constant $C(\Phi,R)\in\mathbb{N}$ such that for all finite, normally generating subsets $S$ of $G(\Phi,R)$, one has
\begin{equation*}
\|G(\Phi,R)\|_S\leq C(\Phi,R)|S|.
\end{equation*}
\end{mTheorem}
For a group $G$ and a natural number $k\in\mathbb{N},$ the worst possible diameter of all conjugation word norms $\|\cdot\|_S$ for $S$ a normally generating set with $|S|=k$ is denoted by $\Delta_k(G).$ See Section~\ref{definition} for a more precise definition. Using this terminology, Theorem~\ref{general_thm} shows that $\Delta_k(G(\Phi,R))$
has an upper bound proportional to $k$. But Theorem~\ref{general_thm} was proven abstractly by way of a model-theoretic compactness argument and consequently gives no information on the minimal possible choice for $C(\Phi,R).$
In this paper, we remedy this issue in two ways: First, we give an explicit upper bound on $\Delta_k({\rm Sp}_{2n}(R))$ for $R$ semi-local or a ring of S-algebraic integers and $n\geq 3$. In the case of semi-local rings, this upper bound is linear in the rank of $\Phi$, and in the case of algebraic integers, it is quadratic in the rank of $\Phi.$ Second, we show that if such an $R$ has at least $k$ maximal ideals, then $\Delta_k({\rm Sp}_{2n}(R))\geq 2nk$ holds. Combining these upper and lower bounds settles the asymptotics of $\Delta_k({\rm Sp}_{2n}(R))$ for semi-local rings in $k,n$ and the number of maximal ideals:
\begin{mTheorem}\label{strong_bound_explicit_semi_local}
Let $R$ be a principal ideal domain with precisely $q\in\mathbb{N}$ many maximal ideals and let $n\geq 3$ and $k\in\mathbb{N}$ be given.
Then
\begin{enumerate}
\item{$\Delta_k({\rm Sp}_{2n}(R))\leq 576(3n-2)\min\{q,5nk\}$,}
\item{$\Delta_k({\rm Sp}_{2n}(R))\geq 2nk$ for $k\leq q$ and}
\item{$\Delta_k({\rm Sp}_{2n}(R))\geq 2nq$ for $k\geq q+1.$}
\end{enumerate}
\end{mTheorem}
This theorem generalizes, in a certain sense, classical results by Liebeck and Lawther \cite{lawther1998diameter} stating that the conjugacy diameters of finite groups of Lie type are proportional to the rank of the underlying root system. On the other hand, combining the aforementioned upper and lower bounds on $\Delta_k({\rm Sp}_{2n}(R))$ for $R$ a ring of S-algebraic integers of class number $1$ gives a restriction on the possible asymptotics of $\Delta_k({\rm Sp}_{2n}(R))$:
\begin{mTheorem}\label{strong_bound_explicit_alg_integer}
Let $R$ be a ring of S-algebraic integers with class number one and let $n\geq 3$ and $k\in\mathbb{N}$ be given. Further set
\begin{equation*}
\Delta(R):=
\begin{cases}
135, & \text{if }R\text{ is a quadratic imaginary ring of integers or }\mathbb{Z},\\
12, & \text{if }R\text{ is neither of the above.}
\end{cases}
\end{equation*}
Then
\begin{enumerate}
\item{$\Delta_k({\rm Sp}_{2n}(R))\leq 960(12n+\Delta(R))nk$ and}
\item{$2nk\leq\Delta_k({\rm Sp}_{2n}(R)).$}
\end{enumerate}
\end{mTheorem}
While the calculations done in this paper to provide upper bounds on $\Delta_k({\rm Sp}_{2n}(R))$ are similar to those appearing in the paper \cite{KLM} by Kedra, Libman and Martin for ${\rm SL}_n(R),$ there are some modifications necessary due to the presence of two different root lengths in the root system $C_n.$ As a consequence, rather than using one Hessenberg form to simplify matrices as in \cite{KLM}, we need two distinct Hessenberg forms to accommodate these two root lengths. As seen in \cite{General_strong_bound} these two root lengths are a major problem for $C_2,$ where we pay a price of introducing additional powers of $2$ for passing back and forth between short and long roots. However, for $C_n$ for $n\geq 3,$ the presence of a root subsystem of $C_n$ isomorphic to $A_{n-1}$ and spanned by simple roots enables us to avoid this particular problem. While we still pay a price to pass between different root lengths, the situation is overall much nicer than for $C_2$.
Providing lower bounds on $\Delta_k({\rm Sp}_{2n}(R))$ on the other hand is done by way of considering the conjugacy diameter of long root elements in the groups ${\rm Sp}_{2n}(K)$ for various fields $K.$\\
The paper is divided into five sections: In the first section, we introduce some notation and definitions used in the rest of the paper. In the second section, we address how to write root elements in normal subgroups of ${\rm Sp}_{2n}(R)$ for $n\geq 3$ generated by a single conjugacy class as certain bounded products of said conjugates. In the third section, we talk about how stable range conditions can be used to show that ${\rm Sp}_{2n}(R)$ for $R$ semi-local or a ring of S-algebraic integers is generated by a number of conjugates of root elements proportional to $n$ and show the first part of Theorem~\ref{strong_bound_explicit_semi_local}. In the fourth section, we use bounded generation results for S-arithmetic Chevalley groups to prove the first part of Theorem~\ref{strong_bound_explicit_alg_integer}. In the last section, we describe normally generating sets $S$ of ${\rm Sp}_{2n}(R)$ whose word norms $\|\cdot\|_S$ have large diameters to finish the proofs of Theorems~\ref{strong_bound_explicit_semi_local} and \ref{strong_bound_explicit_alg_integer}.
\section*{Acknowledgments}
I want to thank Benjamin Martin for his continued support and advice.
\section{Definitions}\label{definition}
Let $G$ be a group and $S$ a finite subset of $G$, such that the conjugacy classes $C_G(S)$ of $S$ in $G$ generate $G.$ In this paper $\|\cdot\|_S:G\to\mathbb{N}_0$ denotes the word norm given by $C_G(S)$ in $G$. We further set
\begin{equation*}
B_S(k):=\{A\in G|\|A\|_S\leq k\}
\end{equation*}
for $k\in\mathbb{N}$, and define $\|G\|_S={\rm diam}(\|\cdot\|_S)$ as the minimal $N\in\mathbb{N}$ such that $B_S(N)=G$, or as $+\infty$ if there is no such $N$.
Further define for $k\in\mathbb{N}$ the invariant
\begin{equation*}
\Delta_k(G):=\sup\{{\rm diam}(\|\cdot\|_S)|\ S\subset G\text{ with }\card{S}\leq k,\dl S\dr=G\}\in\mathbb{N}_0\cup\{\infty\}
\end{equation*}
with $\Delta_k(G)$ defined as $-\infty$, if there is no normally generating set $S\subset G$ with $\card{S}\leq k.$ The group $G$ is called \textit{strongly bounded}, if $\Delta_k(G)$ is finite for all $k\in\mathbb{N}$. Also note $\Delta_k(G)\leq\Delta_{k+1}(G)$ for all $k\in\mathbb{N}$.
For $\Phi$ an irreducible root system and $R$ a commutative ring with $1,$ we will omit defining the simply-connected split Chevalley-De-Mazure group $G(\Phi,R)$ and the corresponding root elements $\varepsilon_{\alpha}(x)$ in this paper. We instead refer the interested reader to \cite[Section~2.1,2.2]{General_strong_bound} for a brief account of such definitions and to \cite{MR3616493} for further details regarding root elements. The \textit{elementary subgroup} $E(\Phi,R)$ (or $E(R)$ if $\Phi$ is clear from the context) is defined as the subgroup of $G(\Phi,R)$ generated by the elements $\varepsilon_{\phi}(x)$ for $\phi\in\Phi$ and $x\in R.$ It should be understood for the purposes of this paper that a system of simple roots in the root system $\Phi$ is chosen and fixed throughout. In particular, it is always clear which roots in $\Phi$ are positive and simple.
Further for a non-trivial ideal $I\subset R$, we denote the group homomorphism $G(\Phi,R)\to G(\Phi,R/I)$ induced by the quotient map $\pi_I:R\to R/I$ by $\pi_I$ as well. This group homomorphism is commonly called the \textit{reduction homomorphism induced by $I$.}
The subgroup $U^+(\Phi,R)$, called \textit{the subgroup of upper unipotent elements of $G(\Phi,R)$}, is the subgroup of $G(\Phi,R)$ generated by the root elements $\varepsilon_{\phi}(x)$ for $x\in R$ and $\phi\in\Phi$ a positive root. Similarly, one can define $U^-(\Phi,R)$, \textit{the subgroup of lower unipotent elements} of $G(\Phi,R)$ by root elements for negative roots.
Further, we define the following two word norms:
\begin{mydef}\label{root_elements_word_norms}
Let $R$ be a commutative ring with $1$ and $\Phi$ an irreducible root system such that $G(\Phi,R)$ is generated by root elements. Then define the two sets
\begin{align*}
{\rm EL}:=\{\varepsilon_{\phi}(t)|\ t\in R,\phi\in\Phi\}\text{ and }
{\rm EL}_Q:=\{A\varepsilon_{\phi}(t)A^{-1}|\ t\in R,\phi\in\Phi,A\in G(\Phi,R)\}.
\end{align*}
Then
\begin{enumerate}
\item{define the word norm $\|\cdot\|_{{\rm EL}}:G(\Phi,R)\to\mathbb{N}_0$ as $\|1\|_{{\rm EL}}:=0$ and as
\begin{equation*}
\|X\|_{{\rm EL}}:=\min\{n\in\mathbb{N}|\exists A_1,\dots,A_n\in {\rm EL}: X=A_1\cdots A_n\}
\end{equation*}
for $X\neq 1.$}
\item{
define the word norm $\|\cdot\|_{{\rm EL}_Q}:G(\Phi,R)\to\mathbb{N}_0$ as $\|1\|_{{\rm EL}_Q}:=0$ and as
\begin{equation*}
\|X\|_{{\rm EL}_Q}:=\min\{n\in\mathbb{N}|\exists A_1,\dots,A_n\in {\rm EL}_Q: X=A_1\cdots A_n\}
\end{equation*}
for $X\neq 1.$}
\end{enumerate}
\end{mydef}
\begin{remark}
The group $G(\Phi,R)$ is \textit{boundedly generated by root elements}, if there is a natural number $N:=N(\Phi,R)\in\mathbb{N}$ such that
$\|A\|_{{\rm EL}}\leq N$ holds for all $A\in G(\Phi,R).$
\end{remark}
The group elements $\varepsilon_{\phi}(t)$ are \textit{additive in $t\in R$}, that is, $\varepsilon_{\phi}(t+s)=\varepsilon_{\phi}(t)\varepsilon_{\phi}(s)$ holds for all $t,s\in R$. Further, a couple of commutator formulas, expressed in the next lemma, hold. We will use the additivity and the commutator formulas implicitly throughout the paper, usually without reference.
\begin{Lemma}\cite[Proposition~33.2-33.5]{MR0396773}
\label{commutator_relations}
Let $R$ be a commutative ring with $1$ and let $\Phi$ be an irreducible root system of rank at least $2.$ Let $\alpha,\beta\in\Phi$ be roots with $\alpha+\beta\neq 0$ and let $a,b\in R$ be given.
\begin{enumerate}
\item{If $\alpha+\beta\notin\Phi$, then $(\varepsilon_{\alpha}(a),\varepsilon_{\beta}(b))=1.$}
\item{If $\alpha,\beta$ are positive, simple roots in a root subsystem of $\Phi$ isomorphic to $A_2$, then\\
$(\varepsilon_{\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{\alpha+\beta}(\pm ab).$}
\item{If $\alpha,\beta$ are positive, simple roots in a root subsystem of $\Phi$ isomorphic to $C_2$ with $\alpha$ short and $\beta$ long, then
\begin{align*}
&(\varepsilon_{\alpha+\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{2\alpha+\beta}(\pm 2ab)\text{ and}\\
&(\varepsilon_{\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{\alpha+\beta}(\pm ab)\varepsilon_{2\alpha+\beta}(\pm a^2b).
\end{align*}
}
\end{enumerate}
\end{Lemma}
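The $A_2$ relation in part (2) can be checked directly by multiplying elementary matrices. The following minimal sketch (an illustration only, not part of the paper's setup) realises $\varepsilon_{\alpha}(a)$ and $\varepsilon_{\beta}(b)$ as the $3\times 3$ elementary matrices $I+aE_{12}$ and $I+bE_{23}$ and verifies that the commutator $xyx^{-1}y^{-1}$ lands in the root subgroup of $\alpha+\beta$:

```python
# Sanity check of the A_2 commutator relation
# (eps_beta(b), eps_alpha(a)) = eps_{alpha+beta}(+-ab)
# using 3x3 elementary matrices over the integers.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def elem(i, j, t, n=3):
    # I_n + t*E_{ij} (0-based indices)
    M = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    M[i][j] = t
    return M

a, b = 5, 7
x = elem(1, 2, b)       # eps_beta(b)  = I + b*E_{23}
y = elem(0, 1, a)       # eps_alpha(a) = I + a*E_{12}
x_inv = elem(1, 2, -b)  # root elements invert by negating the parameter
y_inv = elem(0, 1, -a)

comm = mat_mul(mat_mul(x, y), mat_mul(x_inv, y_inv))
# The commutator is a root element for alpha+beta, here with parameter -ab:
assert comm == elem(0, 2, -a * b)
```

The sign of the parameter depends on the chosen commutator convention and structure constants, which is why the lemma only asserts $\pm ab$.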
Before continuing, we will define the Weyl group elements in $G(\Phi,R)$:
\begin{mydef}
Let $R$ be a commutative ring with $1$ and let $\Phi$ be a root system. Define for $t\in R^*$ and $\phi\in\Phi$ the elements:
\begin{equation*}
w_{\phi}(t):=\varepsilon_{\phi}(t)\varepsilon_{-\phi}(-t^{-1})\varepsilon_{\phi}(t)
\end{equation*}
We will often write $w_{\phi}:=w_{\phi}(1).$
\end{mydef}
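In the smallest instance, these elements can be computed explicitly: taking $\varepsilon_{\phi}(t)$ and $\varepsilon_{-\phi}(t)$ to be the upper and lower unipotent matrices of ${\rm SL}_2$ (a rank-one illustration, outside the rank $\geq 2$ setting of this paper), $w_{\phi}(t)$ evaluates to the anti-diagonal monomial matrix with entries $t$ and $-t^{-1}$:

```python
# w_phi(t) = eps_phi(t) * eps_{-phi}(-t^{-1}) * eps_phi(t) in SL_2,
# computed with exact rational arithmetic.
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eps_plus(t):   # upper unipotent root element
    return [[Fraction(1), Fraction(t)], [Fraction(0), Fraction(1)]]

def eps_minus(t):  # lower unipotent root element
    return [[Fraction(1), Fraction(0)], [Fraction(t), Fraction(1)]]

def w(t):
    t = Fraction(t)
    return mat_mul(mat_mul(eps_plus(t), eps_minus(-1 / t)), eps_plus(t))

# w_phi(t) is the anti-diagonal matrix [[0, t], [-1/t, 0]]:
assert w(Fraction(3, 2)) == [[0, Fraction(3, 2)], [Fraction(-2, 3), 0]]
# and w_phi := w_phi(1) is the standard Weyl group representative:
assert w(1) == [[0, 1], [-1, 0]]
```

This matrix normalises the diagonal torus and induces the reflection of the Weyl group, which is the behaviour the next lemma exploits in higher rank.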
Using these Weyl group elements, we obtain:
\begin{Lemma}\cite[Chapter~3, p.~23, Lemma~20(b)]{MR3616493}
\label{Weyl_group_conjugation_invariance1}
Let $R$ be a commutative ring with $1$ and $\Phi$ an irreducible root system with $\Pi$ its system of simple roots.
Let $\phi\in\Phi,\alpha\in\Pi$ and $x\in R,t\in R^*$ be given. Then $\varepsilon_{\phi}(x)^{w_{\alpha}}=\varepsilon_{w_{\alpha}(\phi)}(\pm x)$ holds and so for each $S\subset G(\Phi,R)$, one has
\begin{equation*}
\|\varepsilon_{\phi}(x)\|_S=\|\varepsilon_{w_{\alpha}(\phi)}(x)\|_S.
\end{equation*}
Here the element $w_{\alpha}(\phi)$ is defined by the action of $W(\Phi)$ on $\Phi$.
\end{Lemma}
In particular, Lemma~\ref{Weyl_group_conjugation_invariance1} implies for $\Phi$ an irreducible root system, $\phi\in\Phi, k\in\mathbb{N}$ and
$S\subset G(\Phi,R)$, that the set $\{x\in R|\ \varepsilon_{\phi}(x)\in B_S(k)\}$ only depends on the length of the root $\phi$ and not on the particular $\phi$ in question. Thus the following definition makes sense:
\begin{mydef}
Let $R$ be a commutative ring with $1$ and $\Phi$ an irreducible root system and let $S\subset G(\Phi,R)$ be given. Then for $k\in\mathbb{N}_0$
define the subset $\varepsilon_s(S,k)$ of $R$ as $\{x\in R|\ \varepsilon_{\phi}(x)\in B_S(k)\}$ for any short root $\phi\in\Phi.$
\end{mydef}
Next, note:
\begin{mydef}
Let $R$ be a commutative ring with $1$, $I$ an ideal in $R$, $\Phi$ an irreducible root system and $S$ a subset of $G(\Phi,R)$. Then define the following two subsets of maximal ideals in $R:$
\begin{enumerate}
\item{$V(I):=\{m\text{ maximal ideal in }R|I\subset m\}$ and}
\item{$\Pi(S):=\{ m\text{ maximal ideal of $R$}|\ \forall A\in S:\pi_m(A)\text{ central in }G(\Phi,R/m)\}$}
\end{enumerate}
\end{mydef}
We also note the following observation:
\begin{Lemma}\label{intersection_v_Pi}
Let $R$ be a commutative ring with $1$, $I_1,I_2$ two ideals in $R,$ $\Phi$ an irreducible root system and $S,T$ two subsets of $G(\Phi,R).$ Then
$V(I_1+I_2)=V(I_1)\cap V(I_2)$ and $\Pi(S\cup T)=\Pi(S)\cap\Pi(T)$ holds.
\end{Lemma}
The following corollary is crucial for the later analysis:
\begin{Corollary}\cite[Corollary~3.11]{General_strong_bound}
\label{necessary_cond_conj_gen}
Let $R$ be a commutative ring with $1$, $\Phi$ an irreducible root system of rank at least $3$ and assume $G(\Phi,R)=E(\Phi,R)$.
Then a subset $S$ of $G(\Phi,R)$ normally generates $G(\Phi,R)$ precisely if $\Pi(S)=\emptyset.$
\end{Corollary}
\section{Generalized Hessenberg forms for ${\rm Sp}_{2n}(R)$ and level ideals}\label{section_matrix_calculations_sp_2n}
This section is quite similar to the proof of \cite[Theorem~6.1]{KLM}. The main problem with the following argument is not so much the actual argument, but the temptation to start the investigation with $n=2$ instead of $n\geq 3$. For this section, we use a representation of the complex, simply-connected Lie group ${\rm Sp}_{2n}(\mathbb{C})$ that gives the following, classical definition of $G(C_n,R)={\rm Sp}_{2n}(R):$
\begin{mydef}
Let $R$ be a commutative ring with $1$ and let
\begin{equation*}
{\rm Sp}_{2n}(R):=\{A\in R^{2n\times 2n}|A^TJA=J\}
\end{equation*}
be given with
\begin{equation*}
J=
\left(\begin{array}{c|c}
0_n & I_n \\
\midrule
-I_n & 0_n
\end{array}\right)
\end{equation*}
\end{mydef}
This implies the following:
\begin{Lemma}
Let $R$ be a commutative ring with $1$ and let $A\in Sp_{2n}(R)$ be given with
\begin{equation*}
A=\left(\begin{array}{c|c}
A_1 & A_2 \\
\midrule
A_3 & A_4
\end{array}\right)
\end{equation*}
for $A_1,A_2,A_3,A_4\in R^{n\times n}.$ Then the equation
\begin{equation*}
A^{-1}=-JA^TJ=
\left(\begin{array}{c|c}
A_1^T & -A_2^T \\
\midrule
-A_3^T & A_4^T
\end{array}\right)
\end{equation*}
holds.
\end{Lemma}
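The inversion formula can also be verified numerically. The sketch below (an illustration only) checks both the defining relation $A^TJA=J$ and $A^{-1}=-JA^TJ$ for a sample matrix, using the standard fact that a block matrix $\left(\begin{smallmatrix}I_n&B\\0_n&I_n\end{smallmatrix}\right)$ is symplectic whenever $B$ is symmetric:

```python
# Verify A^{-1} = -J A^T J for a sample matrix in Sp_4(Z).
# The block matrix [[I, B], [0, I]] with B symmetric is symplectic;
# this particular choice of A is just an illustration.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def neg(A):
    return [[-x for x in row] for row in A]

n = 2
I = [[1 if i == j else 0 for j in range(2 * n)] for i in range(2 * n)]
# J = [[0, I_n], [-I_n, 0]]
J = [[0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    J[i][n + i] = 1
    J[n + i][i] = -1

B = [[1, 2], [2, 3]]      # symmetric n x n block
A = [row[:] for row in I]
for i in range(n):
    for j in range(n):
        A[i][n + j] = B[i][j]

assert mat_mul(mat_mul(transpose(A), J), A) == J   # A is symplectic
A_inv = neg(mat_mul(mat_mul(J, transpose(A)), J))  # claimed inverse
assert mat_mul(A, A_inv) == I
```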
We use this identity frequently in the following matrix calculations, usually without reference. Every symplectic matrix can be written as a $2\times 2$-block matrix of four $n\times n$-matrices, and this decomposition shows up naturally in the calculations. Therefore we will often signify this decomposition into blocks using vertical and horizontal lines in the following matrices, as done in the above lemma for example. These lines serve merely as a visual aid for reading the calculations and have no further meaning. Let $n\geq 2$ be given. We can choose a system of positive simple roots $\{\alpha_1,\dots,\alpha_{n-1},\beta\}$ in $C_n$ such that the Dynkin-diagram of this system of positive simple roots has the following form
\begin{center}
\begin{tikzpicture}[
shorten >=1pt, auto, thick,
node distance=2.5cm,
main node/.style={circle,draw,font=\sffamily\small\bfseries},
mynode/.style={rectangle,fill=white,anchor=center}
]
\node[main node] (1) {$\beta$};
\node[main node] (3) [left of=1] {$\alpha_1$};
\node[mynode] (4) [left of=3] {$\cdot\cdot\cdot$};
\node[main node] (5) [left of=4] {$\alpha_{n-1}$};
\node[mynode] (6) [left of=5] {$C_n:$};
\path (3) edge [double,<-] node {} (1);
\path (4) edge [] node {} (3);
\path (5) edge [] node {} (4);
\end{tikzpicture}
\end{center}
Then subject to the choice of the maximal torus in ${\rm Sp}_{2n}(\mathbb{C})$ as diagonal matrices in ${\rm Sp}_{2n}(\mathbb{C})$, the root elements for simple roots in $G(C_n,R)={\rm Sp}_{2n}(R)$ can be chosen as: $\varepsilon_{\alpha_i}(t)=I_{2n}+t(e_{n-i,n-i+1}-e_{2n-i+1,2n-i})$ for $1\leq i\leq n-1$ and $\varepsilon_{\beta}(t)=I_{2n}+te_{n,2n}$ for all $t\in R.$
More generally, the root elements $\varepsilon_{\phi}(x)$ for short, positive roots $\phi\in C_n$ and $x\in R$ are then either $I_{2n}+x(e_{ij}-e_{n+j,n+i})$ for $1\leq i<j\leq n$ or $I_{2n}+x(e_{i,n+j}+e_{j,n+i})$ for $1\leq i<j\leq n.$ The root elements $\varepsilon_{\psi}(x)$ for long, positive roots $\psi\in C_n$ and $x\in R$ are then $I_{2n}+xe_{i,n+i}$ for $1\leq i\leq n$. Root elements for negative roots $\phi\in C_n$ and $x\in R$ are then $\varepsilon_{\phi}(x)=\varepsilon_{-\phi}(x)^T.$ The goal of this section is to prove the following:
\begin{Theorem}\label{level_ideal_explicit_Sp_2n}
Let $R$ be a principal ideal domain, $n\geq 3$ and let $A\in{\rm Sp}_{2n}(R)$ be given. Then there is an ideal $I(A)$ in $R$ such that
\begin{enumerate}
\item{$V(I(A))\subset\Pi(\{A\})$ and}
\item{$I(A)\subset\varepsilon_s(A,320n)$ hold.}
\end{enumerate}
\end{Theorem}
\subsubsection{The first Hessenberg form}
We start with a Lemma that gives us a conjugate of a matrix $A$ with a lot of zero entries similar to the Hessenberg forms used in \cite{KLM}:
\begin{Lemma}\label{first_Hessenberg_sp_2n}
Let $R$ be a principal ideal domain, $n\geq 3$ and $A\in Sp_{2n}(R)$ be given. Then there is an element
$B\in Sp_{2n}(R)$ such that $A':=B^{-1}AB$ has the following form
\begin{equation*}
A'=\left(\begin{array}{c|c}
\begin{matrix}
a'_{1,1} & a'_{1,2} & a'_{1,3} & \cdot & a'_{1,n-2} & a'_{1,n-1} & a'_{1,n}\\
a'_{2,1} & a'_{2,2} & a'_{2,3} & \cdot & a'_{2,n-2} & a'_{2,n-1} & a'_{2,n}\\
0 & a'_{3,2} & a'_{3,3} & \cdot & a'_{3,n-2} & a'_{3,n-1} & a'_{3,n}\\
0 & 0 & a'_{4,3} & \cdot & a'_{4,n-2} & a'_{4,n-1} & a'_{4,n}\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
0 & 0 & 0 & \cdot & 0 & a'_{n,n-1} & a'_{n,n}
\end{matrix}
& A'_2 \\
\midrule
A'_3 & A'_4
\end{array}\right)
\end{equation*}
with $a'_{11}=a_{11}$ and $a'_{21}=\gcd(a_{21},a_{31},\dots,a_{n1})$ up to multiplication with a unit in $R$ and $A'_2,A'_3,A'_4\in R^{n\times n}$. We call a matrix of the form of $A'$ in $Sp_{2n}(R)$ a matrix in \textit{first Hessenberg form.}
\end{Lemma}
\begin{proof}
If $a_{3,1}=0$, then define $A^{(3)}:=A$.
Otherwise choose $t_3:=\gcd(a_{2,1},a_{3,1})$. Observe that $x_3:=-\frac{a_{3,1}}{t_3}$ and $y_3:=\frac{a_{2,1}}{t_3}$ are coprime elements of $R$ and hence, we can find elements $u_3,v_3\in R$ with $u_3y_3-x_3v_3=1.$ This implies that the matrix
\begin{align*}
T_3:=
\left(\begin{array}{c|c}
\begin{matrix}
1 & \ & \ & \ \\
\ & u_3 & v_3 & \ \\
\ & x_3 & y_3 & \ \\
\ & \ & \ & I_{n-3}
\end{matrix}
& 0_n\\
\midrule
0_n &
\begin{matrix}
1 & \ & \ & \ \\
\ & y_3 & -x_3 & \ \\
\ & -v_3 & u_3 & \ \\
\ & \ & \ & I_{n-3}
\end{matrix}
\end{array}
\right)
\end{align*}
is an element of ${\rm Sp}_{2n}(R).$ The matrix $A^{(3)}:=T_3AT_3^{-1}$ has the $(1,1)$-entry $a_{1,1}$ and the $(3,1)$-entry
$x_3a_{2,1}+y_3a_{3,1}=-\frac{a_{3,1}}{t_3}a_{2,1}+\frac{a_{2,1}}{t_3}a_{3,1}=0.$ The entries of $A^{(3)}$ are denoted by $a_{k,l}^{(3)}.$ Next, if $a_{4,1}^{(3)}=0$, then define $A^{(4)}:=A^{(3)}.$ Otherwise choose $t_4:=\gcd(a_{2,1}^{(3)},a_{4,1}^{(3)})$. Observe that $x_4:=-\frac{a_{4,1}^{(3)}}{t_4}$ and $y_4:=\frac{a_{2,1}^{(3)}}{t_4}$ are coprime elements of $R$ and hence, we can find elements $u_4,v_4\in R$ with $u_4y_4-x_4v_4=1.$ This implies that the matrix
\begin{align*}
T_4:=
\left(\begin{array}{c|c}
\begin{matrix}
1 & \ & \ & \ & \ \\
\ & u_4 & 0 & v_4 & \ \\
\ & 0 & 1 & 0 & \ \\
\ & x_4 & 0 & y_4 & \ \\
\ & \ & \ & \ & I_{n-4}
\end{matrix}
& 0_n\\
\midrule
0_n &
\begin{matrix}
1 & \ & \ & \ & \ \\
\ & y_4 & 0 &-x_4 & \ \\
\ & 0 & 1 & 0 & \ \\
\ & -v_4 & 0 & u_4 & \ \\
\ & \ & \ & \ & I_{n-4}
\end{matrix}
\end{array}
\right)
\end{align*}
is an element of ${\rm Sp}_{2n}(R).$ The matrix $A^{(4)}:=T_4A^{(3)}T_4^{-1}$ has the $(1,1)$-entry $a_{1,1}^{(3)}=a_{1,1},$ the $(3,1)$-entry $0$
and the $(4,1)$-entry $x_4a_{2,1}^{(3)}+y_4a_{4,1}^{(3)}=-\frac{a_{4,1}^{(3)}}{t_4}a_{2,1}^{(3)}+\frac{a_{2,1}^{(3)}}{t_4}a_{4,1}^{(3)}=0.$
The entries of $A^{(4)}$ are denoted by $a_{k,l}^{(4)}.$ Carrying on this way, we find that the matrix $A^{(n)}$ is conjugate to $A$ in ${\rm Sp}_{2n}(R)$ and has the $(1,1)$-entry $a_{1,1}$ and $a_{3,1}^{(n)}=a_{4,1}^{(n)}=\cdots=a_{n,1}^{(n)}=0.$ Further, the construction implies the existence of a matrix $D\in{\rm SL}_{n-1}(R)$ with
\begin{equation*}
\left(\begin{array}{cc}
1 &
\begin{matrix}
0 & \cdots & 0
\end{matrix}
\\
\begin{matrix}
0\\ \cdot\\ \cdot\\ \cdot\\ 0
\end{matrix}
& D
\end{array}\right)\cdot
\begin{pmatrix}
a_{1,1}\\ a_{2,1}\\ a_{3,1}\\ \cdot\\ \cdot\\ a_{n,1}
\end{pmatrix}
=\begin{pmatrix}
a_{1,1}\\ a_{2,1}^{(n)}\\ 0\\ \cdot\\ \cdot\\ 0
\end{pmatrix}
\end{equation*}
But this implies that $a_{2,1}^{(n)}$ is a multiple of $\gcd(a_{2,1},\dots,a_{n,1}).$ Further, note $D^{-1}\in{\rm SL}_{n-1}(R)$ and hence
\begin{equation*}
\left(\begin{array}{cc}
1 &
\begin{matrix}
0 & \cdots & 0
\end{matrix}
\\
\begin{matrix}
0\\ \cdot\\ \cdot\\ \cdot\\ 0
\end{matrix}
& D^{-1}
\end{array}\right)\cdot
\begin{pmatrix}
a_{1,1}\\ a_{2,1}^{(n)}\\ 0\\ \cdot\\ \cdot\\ 0
\end{pmatrix}
=\begin{pmatrix}
a_{1,1}\\ a_{2,1}\\ a_{3,1}\\ \cdot\\ \cdot\\ a_{n,1}
\end{pmatrix}
\end{equation*}
implies that all of the elements $a_{2,1},\dots,a_{n,1}$ are multiples of $a_{2,1}^{(n)}$ and hence $\gcd(a_{2,1},\dots,a_{n,1})$ is also a multiple of
$a_{2,1}^{(n)}.$ So, up to multiplication with a unit, $a_{2,1}^{(n)}=\gcd(a_{2,1},\dots,a_{n,1}).$
Hence the first column of the matrix $A^{(n)}$ has the form described in the Lemma. The remaining columns of $A^{(n)}$ can be brought to the desired form in a similar way, by conjugating with a matrix of the form
\begin{equation*}
\left(\begin{array}{c|c}
\begin{matrix}
I_2 & \ \\
\ & D
\end{matrix}
& 0_n\\
\midrule
0_n &
\begin{matrix}
I_2 & \ \\
\ & D^{-T}
\end{matrix}
\end{array}\right)
\end{equation*}
for $D\in {\rm SL}_{n-2}(R).$ Note that under conjugation with such a matrix, the first column of $A^{(n)}$ stays fixed and hence this yields the lemma.
\end{proof}
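The elimination step in the proof hinges only on Bézout's identity: from $t_3=\gcd(a_{2,1},a_{3,1})$ one builds $u_3,v_3$ with $u_3y_3-x_3v_3=1$, and the conjugation kills the $(3,1)$-entry because $x_3a_{2,1}+y_3a_{3,1}=0$. A small sketch of this arithmetic, with made-up sample entries, could look as follows.

```python
from math import gcd

# hypothetical sample entries a_{2,1}, a_{3,1} of the first column
a21, a31 = 84, -30

def egcd(a, b):
    # returns (g, s, t) with s*a + t*b = g = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, s, t = egcd(b, a % b)
    return (g, t, s - (a // b) * t)

t3 = gcd(a21, a31)
x3, y3 = -a31 // t3, a21 // t3      # coprime by construction
g, s, tt = egcd(y3, x3)             # s*y3 + tt*x3 = g with g = +-1 here
u3, v3 = s * g, -tt * g             # arranged so that u3*y3 - x3*v3 = 1

assert u3 * y3 - x3 * v3 == 1
# the conjugation by T_3 kills the (3,1)-entry:
assert x3 * a21 + y3 * a31 == 0
```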
\begin{remark}
\begin{enumerate}
\item{Upper Hessenberg matrices in $R^{n\times n}$ are matrices $A=(a_{ij})$ with $a_{ij}=0$ for $i>j+1.$ They are commonly used tools in numerical mathematics \cite{MR2978290} and define subvarieties of flag varieties which have been extensively studied \cite{MR1115324} as well.
}
\item{The proof strategy for Lemma~\ref{first_Hessenberg_sp_2n} is an adaption of \cite[Theorem~III.1]{MR0340283} to the group ${\rm Sp}_{2n}(R).$ Lemma~\ref{first_Hessenberg_sp_2n} (and Lemma~\ref{second_Hessenberg} describing the second Hessenberg form) are actually the only steps in the proof of Theorem~\ref{level_ideal_explicit_Sp_2n} requiring $R$ to be a principal ideal domain.}
\end{enumerate}
\end{remark}
The strategy to prove Theorem~\ref{level_ideal_explicit_Sp_2n} is to calculate carefully chosen nested commutators of matrices in first (and, in the next subsection, second) Hessenberg form with increasingly fewer nonzero entries until one arrives at root elements.
\begin{Lemma}
\label{first_commutator_formula_first_Hessenberg}
Let $R$ be a commutative ring with $1$ and $n\geq 3$ and let $A$ be a matrix in first Hessenberg form in $Sp_{2n}(R)$ and $B:=A^{-1}.$ Then
$X:=(A,I_{2n}+e_{1,n+1})$ has the following form:
\begin{equation*}
X=\left(\begin{array}{c|c}
\begin{matrix}
x_{1,1} & x_{1,2} & \cdot & x_{1,n}\\
x_{2,1} & x_{2,2} & \cdot & x_{2,n}\\
0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & 0 & \cdot & 1 \\
\hline\
x_{n+1,1} & x_{n+1,2} & \cdot & x_{n+1,n}\\
\cdot & \cdot & \cdot & \cdot\\
x_{2n,1} & x_{2n,2} & \cdot & x_{2n,n}
\end{matrix}
&
\begin{matrix}
x_{1,n+1} & x_{1,n+2} & 0 & \cdot & 0\\
x_{2,n+1} & x_{2,n+2} & 0 & \cdot & 0\\
0 & 0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot&\cdot&\cdot\\
0 & 0 & 0 & \cdot & 0\\
\hline\
x_{n+1,n+1} & x_{n+1,n+2} & 0 & \cdot & 0\\
\cdot & \cdot & \cdot&\cdot&\cdot\\
x_{2n,n+1} & x_{2n,n+2} & 0 & \cdot & 1\\
\end{matrix}
\end{array}\right)
\end{equation*}
with $x_{1,n+1}=a_{11}(b_{n+1,n+1}-b_{n+1,1})-1$ and $x_{2,n+1}=a_{21}(b_{n+1,n+1}-b_{n+1,1}).$
\end{Lemma}
\begin{proof}
Let $A_2,A_3,A_4\in R^{n\times n}$ be given such that $A$ has the following form:
\begin{equation*}
A=\left(\begin{array}{c|c}
\begin{matrix}
a_{11} & a_{12} & a_{13} & \cdot & a_{1,n-2} & a_{1,n-1} & a_{1n}\\
a_{21} & a_{22} & a_{23} & \cdot & a_{2,n-2} & a_{2,n-1} & a_{2n}\\
0 & a_{32} & a_{33} & \cdot & a_{3,n-2} & a_{3,n-1} & a_{3n}\\
0 & 0 & a_{43} & \cdot & a_{4,n-2} & a_{4,n-1} & a_{4n}\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
0 & 0 & 0 & \cdot & 0 & a_{n,n-1} & a_{n,n}
\end{matrix}
& A_2 \\
\midrule
A_3 & A_4
\end{array}\right)
\end{equation*}
Then for the matrices $B_2:=-A_2^T,B_3:=-A_3^T,B_1:=A_4^T\in R^{n\times n},$ one has:
\begin{equation*}
B=\left( \begin{array}{c|c}
B_1 & B_2 \\
\midrule
B_3 &
\begin{matrix}
b_{n+1,n+1} & b_{n+1,n+2} & 0 & \cdot & 0 & 0 & 0\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
b_{2n-3,n+1} & b_{2n-3,n+2} & b_{2n-3,n+3} & \cdot & b_{2n-3,2n-2} & 0 & 0\\
b_{2n-2,n+1} & b_{2n-2,n+2} & b_{2n-2,n+3} & \cdot & b_{2n-2,2n-2} & b_{2n-2,2n-1} & 0\\
b_{2n-1,n+1} & b_{2n-1,n+2} & b_{2n-1,n+3} & \cdot & b_{2n-1,2n-2} & b_{2n-1,2n-1} & b_{2n-1,2n}\\
b_{2n,n+1} & b_{2n,n+2} & b_{2n,n+3} & \cdot & b_{2n,2n-2} & b_{2n,2n-1} & b_{2n,2n}
\end{matrix}
\end{array}\right)
\end{equation*}
Observe first:
\begin{align*}
&Ae_{1,n+1}A^{-1}=Ae_{1,n+1}B\\
&=\left(\begin{array}{c|c}
\begin{matrix}
0 &\cdot & 0\\
0 &\cdot & 0\\
0 &\cdot & 0\\
\cdot &\cdot & \cdot\\
0 &\cdot & 0\\
\hline\
0 &\cdot & 0\\
\cdot & \cdot & \cdot\\
0 &\cdot & 0\\
\end{matrix}
&
\begin{matrix}
a_{1,1} & 0 & 0 & \cdot & 0 & 0 & 0\\
a_{2,1} & 0 & 0 & \cdot & 0 & 0 & 0\\
0 & 0 & 0 & \cdot & 0 & 0 & 0\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
0 & 0 & 0 & \cdot & 0 & 0 & 0\\
\hline\
a_{n+1,1} & 0 & 0 & \cdot & 0 & 0 & 0\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot\\
a_{2n,1} & 0 & 0 & \cdot & 0 & 0 & 0
\end{matrix}
\end{array}\right)
B
\end{align*}
\begin{align*}
&=
\left(\begin{array}{c|c}
\begin{matrix}
a_{11}b_{n+1,1} & a_{11}b_{n+1,2} & \cdot & a_{11}b_{n+1,n}\\
a_{21}b_{n+1,1} & a_{21}b_{n+1,2} & \cdot & a_{21}b_{n+1,n}\\
0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & 0 & \cdot & 0\\
\hline\
a_{n+1,1}b_{n+1,1} & a_{n+1,1}b_{n+1,2} & \cdot & a_{n+1,1}b_{n+1,n}\\
\cdot & \cdot & \cdot & \cdot\\
a_{2n,1}b_{n+1,1} & a_{2n,1}b_{n+1,2} & \cdot & a_{2n,1}b_{n+1,n}\\
\end{matrix}
&
\begin{matrix}
a_{11}b_{n+1,n+1} & a_{11}b_{n+1,n+2} & 0 & \cdot & 0\\
a_{21}b_{n+1,n+1} & a_{21}b_{n+1,n+2} & 0 & \cdot & 0 \\
0 & 0 & 0 & \cdot & 0 \\
\cdot &\cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\hline\
a_{n+1,1}b_{n+1,n+1} & a_{n+1,1}b_{n+1,n+2} & 0 & \cdot & 0 \\
\cdot & \cdot & \cdot&\cdot&\cdot\\
a_{2n,1}b_{n+1,n+1} & a_{2n,1}b_{n+1,n+2} & 0 & \cdot & 0
\end{matrix}
\end{array}\right)
\end{align*}
This implies that
\begin{align*}
&(A,I_{2n}+e_{1,n+1})=A(I_{2n}+e_{1,n+1})A^{-1}(I_{2n}-e_{1,n+1})=(I_{2n}+Ae_{1,n+1}A^{-1})(I_{2n}-e_{1,n+1})\\
&=I_{2n}+Ae_{1,n+1}A^{-1}-e_{1,n+1}-Ae_{1,n+1}A^{-1}e_{1,n+1}\\
&\\
&=
\tiny\left(\begin{array}{c|c}
\begin{matrix}
1+a_{11}b_{n+1,1} & a_{11}b_{n+1,2} & \cdot & a_{11}b_{n+1,n}\\
a_{21}b_{n+1,1} & 1+a_{21}b_{n+1,2} & \cdot & a_{21}b_{n+1,n}\\
0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & 0 & \cdot & 1 \\
\hline\
a_{n+1,1}b_{n+1,1} & a_{n+1,1}b_{n+1,2} & \cdot & a_{n+1,1}b_{n+1,n}\\
\cdot & \cdot & \cdot & \cdot\\
a_{2n,1}b_{n+1,1} & a_{2n,1}b_{n+1,2} & \cdot & a_{2n,1}b_{n+1,n}\\
\end{matrix}
&
\begin{matrix}
a_{11}(b_{n+1,n+1}-b_{n+1,1})-1 & a_{11}b_{n+1,n+2} & 0 & \cdot & 0\\
a_{21}(b_{n+1,n+1}-b_{n+1,1}) & a_{21}b_{n+1,n+2} & 0 & \cdot & 0\\
0 & 0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot&\cdot&\cdot\\
0 & 0 & 0 & \cdot & 0\\
\hline\
a_{n+1,1}(b_{n+1,n+1}-b_{n+1,1})+1 & a_{n+1,1}b_{n+1,n+2} & 0 & \cdot & 0 \\
\cdot & \cdot & \cdot&\cdot&\cdot\\
a_{2n,1}(b_{n+1,n+1}-b_{n+1,1}) & a_{2n,1}b_{n+1,n+2} & 0 & \cdot & 1
\end{matrix}
\end{array}\right)
\end{align*}
This is precisely the form claimed in the lemma.
\end{proof}
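The expansion of the commutator at the start of the computation, $(A,I_{2n}+e_{1,n+1})=I_{2n}+Ae_{1,n+1}A^{-1}-e_{1,n+1}-Ae_{1,n+1}A^{-1}e_{1,n+1}$, rests only on $e_{1,n+1}^2=0$ and holds for any invertible $A$. A quick numerical sketch, assuming numpy and using a random invertible (not necessarily symplectic) matrix $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
N = 2 * n

e = np.zeros((N, N))
e[0, n] = 1.0                        # e_{1,n+1}; note e @ e == 0

A = rng.normal(size=(N, N)) + N * np.eye(N)   # generically invertible
Ainv = np.linalg.inv(A)
I = np.eye(N)

# e^2 = 0 implies (I + e)^{-1} = I - e
assert np.allclose(np.linalg.inv(I + e), I - e)

# commutator (A, I+e) := A (I+e) A^{-1} (I+e)^{-1}
comm = A @ (I + e) @ Ainv @ (I - e)
expanded = I + A @ e @ Ainv - e - A @ e @ Ainv @ e
assert np.allclose(comm, expanded)
```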
Next, we use the commutator from the previous lemma to obtain the following:
\begin{Lemma}\label{second_commutator_formula_first_Hessenberg}
Let $R$ be a commutative ring with $1$ and $n\geq 3$ and let $A$ be a matrix in first Hessenberg form in $Sp_{2n}(R)$ and $B=A^{-1}.$ Then
\begin{equation*}
(a_{11}(b_{n+1,n+1}-b_{n+1,1})-1,a_{21}(b_{n+1,n+1}-b_{n+1,1}))\subset\varepsilon_s(A,32).
\end{equation*}
\end{Lemma}
\begin{proof}
Let $X$ be the matrix obtained from $A$ as in Lemma~\ref{first_commutator_formula_first_Hessenberg} and $Y$ its inverse. We will prove the lemma by showing that
the two principal ideals $(a_{21}(b_{n+1,n+1}-b_{n+1,1}))$ and $(a_{11}(b_{n+1,n+1}-b_{n+1,1})-1)$ are both subsets of $\varepsilon_s(A,16).$ In order to show the first inclusion, we will show for $x\in R$ arbitrary that
\begin{align*}
B_A(16)\ni&(((X,I_{2n}+e_{2n,1}+e_{n+1,n}),I_{2n}+e_{n+1,1}),I_{2n}+x(e_{12}-e_{n+2,n+1}))\\
&=I_{2n}-\left(a_{11}(b_{n+1,n+1}-b_{n+1,1})-1\right)x(e_{2n,2}+e_{n+2,n}).
\end{align*}
We first study the following term:
\begin{align*}
&X(e_{2n,1}+e_{n+1,n})X^{-1}=(Xe_{2n,1})X^{-1}+X(e_{n+1,n}X^{-1})=e_{2n,1}Y+Xe_{n+1,n}\\
&=\left(\begin{array}{c|c}
\begin{matrix}
0 & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot\\
0 & 0 & \cdot & 0\\
0 & 0 & \cdot & 0\\
\hline\
0 & 0 & \cdot & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & 0 & \cdot & 0\\
y_{11} & y_{12} & \cdot & y_{1n}
\end{matrix}
&
\begin{matrix}
0 & 0 & 0 & \cdot & 0\\
\cdot & \cdot&\cdot &\cdot &\cdot\\
0 & 0 & 0 & \cdot & 0\\
0 & 0 & 0 & \cdot & 0\\
\hline\
0 & 0 & 0 & \cdot & 0\\
\cdot & \cdot&\cdot &\cdot &\cdot\\
0 & 0 & 0 & \cdot & 0\\
y_{1,n+1} & y_{1,n+2} & 0 & \cdot & 0
\end{matrix}
\end{array}\right)
+
\left(\begin{array}{c|c}
\begin{matrix}
0 & \cdot & 0 & x_{1,n+1}\\
0 & \cdot & 0 & x_{2,n+1}\\
0 & \cdot & 0 & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & 0\\
\hline\
0 & \cdot & 0 & x_{n+1,n+1}\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & x_{2n,n+1}
\end{matrix}
&
\begin{matrix}
0 & \cdot & 0\\
0 & \cdot & 0\\
0 & \cdot & 0\\
\cdot & \cdot & \cdot\\
0 & \cdot & 0\\
\hline\
0 & \cdot & 0\\
\cdot & \cdot & \cdot\\
0 & \cdot & 0
\end{matrix}
\end{array}\right)\\
&\\
&=
\left(\begin{array}{c|c}
\begin{matrix}
0 & \cdot & 0 & x_{1,n+1}\\
0 & \cdot & 0 & x_{2,n+1}\\
0 & \cdot & 0 & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & 0\\
\hline\
0 & \cdot & 0 & x_{n+1,n+1}\\
0 & \cdot & 0 & x_{n+2,n+1}\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & x_{2n-1,n+1}\\
y_{11} & \cdot & y_{1,n-1} & y_{1n}+x_{2n,n+1}
\end{matrix}
&
\begin{matrix}
0 & 0 &0 & \cdot & 0\\
0 & 0 &0 &\cdot & 0\\
0 & 0 &0 &\cdot & 0\\
\cdot &\cdot & \cdot & \cdot & \cdot\\
0 & 0 &0 &\cdot & 0\\
\hline\
0 & 0 &0 &\cdot & 0\\
0 & 0 &0 &\cdot & 0\\
\cdot &\cdot & \cdot & \cdot & \cdot\\
0 & 0 &0 &\cdot & 0 \\
y_{1,n+1} & y_{1,n+2} &0 & \cdot & 0
\end{matrix}
\end{array}\right)
\end{align*}
Next, observe that $X(e_{2n,1}+e_{n+1,n})X^{-1}(e_{2n,1}+e_{n+1,n})=y_{1,n+1}e_{2n,n}.$ Hence the innermost commutator $Z\in B_A(4)$ has the form:
\begin{align*}
Z&=(X,I_{2n}+e_{2n,1}+e_{n+1,n})=(I_{2n}+X(e_{2n,1}+e_{n+1,n})X^{-1})(I_{2n}-e_{2n,1}-e_{n+1,n})\\
&=I_{2n}+X(e_{2n,1}+e_{n+1,n})X^{-1}-(e_{2n,1}+e_{n+1,n})
-X(e_{2n,1}+e_{n+1,n})X^{-1}(e_{2n,1}+e_{n+1,n})\\
&=I_{2n}+e_{2n,1}X^{-1}+Xe_{n+1,n}-e_{2n,1}-e_{n+1,n}-y_{1,n+1}e_{2n,n}
\end{align*}
Next, set $U:=Z^{-1}$. Then $U$ also has the form
\begin{align*}
\left(\begin{array}{c|c}
\begin{matrix}
1 & \cdot & 0 & u_{1,n}\\
0 & \cdot & 0 & u_{2,n}\\
0 & \cdot & 0 & 0\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & 1\\
\hline\
0 & \cdot & 0 & u_{n+1,n}\\
0 & \cdot & 0 & u_{n+2,n}\\
\cdot & \cdot & \cdot & \cdot\\
0 & \cdot & 0 & u_{2n-1,n}\\
u_{2n,1} & \cdot & u_{2n,n-1} & u_{2n,n}
\end{matrix}
&
\begin{matrix}
0 & 0 &0 & \cdot & 0\\
0 & 0 &0 &\cdot & 0\\
0 & 0 &0 &\cdot & 0\\
\cdot &\cdot & \cdot & \cdot & \cdot\\
0 & 0 &0 &\cdot & 0\\
\hline\
1 & 0 &0 &\cdot & 0\\
0 & 1 &0 &\cdot & 0\\
\cdot &\cdot & \cdot & \cdot & \cdot\\
0 & 0 &0 &\cdot & 0 \\
u_{2n,n+1} & u_{2n,n+2} &0 & \cdot & 1
\end{matrix}
\end{array}\right)
\end{align*}
First, observe
\begin{equation*}
Ze_{n+1,1}Z^{-1}=(e_{n+1,1}+z_{2n,n+1}e_{2n,1})U=e_{n+1,1}+u_{1,n}e_{n+1,n}+z_{2n,n+1}(e_{2n,1}+u_{1,n}e_{2n,n}).
\end{equation*}
This implies for the second inner-most commutator $S\in B_A(8)$ that
\begin{align*}
S:=(Z,I_{2n}+e_{n+1,1})&=(I_{2n}+Ze_{n+1,1}Z^{-1})(I_{2n}-e_{n+1,1})\\
&=(I_{2n}+e_{n+1,1}+u_{1,n}e_{n+1,n}+z_{2n,n+1}(e_{2n,1}+u_{1,n}e_{2n,n}))(I_{2n}-e_{n+1,1})\\
&=I_{2n}+u_{1,n}e_{n+1,n}+z_{2n,n+1}(e_{2n,1}+u_{1,n}e_{2n,n})
\end{align*}
with $u_{1,n}=z_{2n,n+1}=-z_{1,n}=-(a_{11}(b_{n+1,n+1}-b_{n+1,1})-1).$ But now one can easily check that in fact
$(S,I_{2n}+x(e_{12}-e_{n+2,n+1}))=I_{2n}-\left(a_{11}(b_{n+1,n+1}-b_{n+1,1})-1\right)x(e_{2n,2}+e_{n+2,n})$ holds for all $x\in R$ and this nested commutator is an element of $B_A(16)$. This finishes the proof of the first inclusion. For the second inclusion, it suffices to show for $x\in R$ arbitrary that
\begin{align*}
B_A(16)\ni&(((X,I_{2n}+e_{2n,1}+e_{n+1,n}),I_{2n}+e_{n+2,2}),I_{2n}+x(e_{2,1}-e_{n+1,n+2}))\\
&=I_{2n}-xa_{21}(b_{n+1,n+1}-b_{n+1,1})(e_{2n,1}+e_{n+1,n}).
\end{align*}
But this calculation works the same way as the one for the first inclusion, so we omit it.
\end{proof}
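The final commutator identity asserted in the proof, $(S,I_{2n}+x(e_{12}-e_{n+2,n+1}))=I_{2n}-cx(e_{2n,2}+e_{n+2,n})$ for $S=I_{2n}+u_{1,n}e_{n+1,n}+z_{2n,n+1}(e_{2n,1}+u_{1,n}e_{2n,n})$ with $u_{1,n}=z_{2n,n+1}=-c$, only involves nilpotent elementary matrices, so it can be sanity-checked numerically for sample scalar values (the values of $c$ and $x$ below are arbitrary, assuming numpy):

```python
import numpy as np

n = 3
N = 2 * n
c, x = 2.7, -1.3                     # arbitrary sample scalars

def e(i, j):
    # elementary matrix e_{ij}, 1-based indices as in the text
    M = np.zeros((N, N))
    M[i - 1, j - 1] = 1.0
    return M

I = np.eye(N)
u = z = -c                           # u_{1,n} = z_{2n,n+1} = -(a_{11}(...) - 1)
S = I + u * e(n + 1, n) + z * (e(2 * n, 1) + u * e(2 * n, n))
F = I + x * (e(1, 2) - e(n + 2, n + 1))

# commutator (S, F) := S F S^{-1} F^{-1}
comm = S @ F @ np.linalg.inv(S) @ np.linalg.inv(F)
assert np.allclose(comm, I - c * x * (e(2 * n, 2) + e(n + 2, n)))
```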
\subsubsection{The second Hessenberg Form}
\begin{Lemma}
\label{second_Hessenberg}
Let $R$ be a principal ideal domain and let $n\geq 3$ be given. Then for each $A\in{\rm Sp}_{2n}(R)$ there is a matrix $B\in{\rm Sp}_{2n}(R)$ such that
$A':=BAB^{-1}$ has the form:
\begin{equation*}
A'=\left(\begin{array}{c|c}
A'_1 & A'_2 \\
\midrule
\begin{matrix}
a'_{n+1,1} & a'_{n+1,2} & a'_{n+1,3} & \cdot & a'_{n+1,n-2} & a'_{n+1,n-1} & a'_{n+1,n}\\
a'_{n+2,1} & a'_{n+2,2} & a'_{n+2,3} & \cdot & a'_{n+2,n-2} & a'_{n+2,n-1} & a'_{n+2,n}\\
0 & a'_{n+3,2} & a'_{n+3,3} & \cdot & a'_{n+3,n-2} & a'_{n+3,n-1} & a'_{n+3,n}\\
0 & 0 & a'_{n+4,3} & \cdot & a'_{n+4,n-2} & a'_{n+4,n-1} & a'_{n+4,n}\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
0 & 0 & 0 & \cdot & 0 & a'_{2n,n-1} & a'_{2n,n}
\end{matrix}
& A'_4
\end{array}\right)
\end{equation*}
with $a'_{n+2,1}=\gcd(a_{n+2,1},a_{n+3,1},\dots,a_{2n,1})$ up to a multiplication by a unit in $R.$ We call a matrix of the form of $A'$ in $Sp_{2n}(R)$ a matrix in \textit{second Hessenberg form.}
\end{Lemma}
We omit the proof, as it is very similar to that of Lemma~\ref{first_Hessenberg_sp_2n}. One can then prove the following lemma by running through a chain of calculations analogous to the one proving Lemma~\ref{second_commutator_formula_first_Hessenberg}.
\begin{Lemma}
\label{first_commutator_formula_second_Hessenberg}
Let $R$ be a commutative ring with $1$ and $n\geq 3$ and let $A$ be a matrix in second Hessenberg form in $Sp_{2n}(R)$ and $B=A^{-1}.$ Then
\begin{equation*}
(a_{n+2,1}(b_{n+1,n+1}-b_{n+1,1}),a_{11}(b_{n+1,n+1}-b_{n+1,1})-1)\subset\varepsilon_s(A,32).
\end{equation*}
\end{Lemma}
\subsubsection{Constructing the level ideal}
We will apply the previous calculations to various matrices. First, note the following proposition:
\begin{Proposition}
\label{first_column}
Let $R$ be a principal ideal domain, $n\geq 3$ and $A=(a_{ij})_{1\leq i,j\leq 2n}\in{\rm Sp}_{2n}(R)$ be given. Then there are ideals
\begin{enumerate}
\item{$I_1^{(1)}(A)\subset\varepsilon_s(A,32)$ with $a_{2,1},\dots,a_{n,1}\in I_1^{(1)}(A)$ and}
\item{$I_1^{(2)}(A)\subset\varepsilon_s(A,32)$ with $a_{n+2,1},\dots,a_{2n,1}\in I_1^{(2)}(A)$.}
\end{enumerate}
We denote the ideal $I_1^{(1)}(A)+I_1^{(2)}(A)\subset\varepsilon_s(A,64)$ by $I_1(A).$
\end{Proposition}
\begin{proof}
The proof will be split into two parts. First we are going to construct the ideal $I_1^{(1)}(A)$ containing $a_{2,1},\dots,a_{n,1}$ and then the second ideal
$I_1^{(2)}(A)$ containing $a_{n+2,1},\dots,a_{2n,1}.$ For the first ideal put $A$ in first Hessenberg form and call the resulting matrix $A'=(a'_{ij})_{1\leq i,j\leq 2n}$ with inverse $B'=(b'_{ij})_{1\leq i,j\leq 2n}.$ Then apply Lemma~\ref{second_commutator_formula_first_Hessenberg} to $A'$ to obtain
\begin{equation*}
I^{(1)}_1(A):=(a'_{11}(b'_{n+1,n+1}-b'_{n+1,1})-1,a'_{21}(b'_{n+1,n+1}-b'_{n+1,1}))\subset\varepsilon_s(A,32).
\end{equation*}
Note $a'_{11}(b'_{n+1,n+1}-b'_{n+1,1})\equiv 1\text{ mod }I^{(1)}_1(A)$ and hence it follows
\begin{equation*}
0=0\cdot a'_{11}\equiv a'_{21}(b'_{n+1,n+1}-b'_{n+1,1})a'_{11}\equiv a'_{21}\cdot 1=a'_{21}\text{ mod }I^{(1)}_1(A).
\end{equation*}
Thus $a'_{21}\in I^{(1)}_1(A)$ holds. But according to Lemma~\ref{first_Hessenberg_sp_2n}, the entry $a'_{21}$ of the matrix $A'$ agrees with ${\rm gcd}(a_{2,1},\dots,a_{n,1})$ up to multiplication with a unit for the entries $a_{21},\dots,a_{n,1}$ of the initial matrix $A$. So in particular, we obtain for an arbitrary matrix $A\in{\rm Sp}_{2n}(R)$ that $(a_{21},\dots,a_{n,1})$ is a subset of $I^{(1)}_1(A).$ Using the second Hessenberg form and Lemma~\ref{first_commutator_formula_second_Hessenberg} yields the ideal $I^{(2)}_1(A)\subset\varepsilon_s(A,32)$ with $a_{n+2,1},\dots,a_{2n,1}\in I^{(2)}_1(A)$.
\end{proof}
The proposition yields all off-diagonal entries of the first column, save for the single entry $a_{n+1,1}$, as arguments $x$ for root elements $\varepsilon_{\phi}(x)$
for short roots $\phi\in C_n$. We can now prove Theorem~\ref{level_ideal_explicit_Sp_2n}:
\begin{proof}
First, define for $2\leq k\leq n$ the elements:
\begin{equation*}
w_k:=e_{1,k}-e_{k,1}+e_{n+1,n+k}-e_{n+k,n+1}+\sum_{1\leq j\leq 2n, j\neq 1,k,n+1,n+k} e_{j,j}\in{\rm Sp}_{2n}(R).
\end{equation*}
The first column of the matrix $A_k:=w_kAw_k^{-1}$ is
\begin{equation*}
\tiny{(a_{k,k},a_{2,k},\dots,a_{k-1,k},-a_{1,k},a_{k+1,k},\dots,a_{n,k},a_{n+k,k},a_{n+2,k},\dots,a_{n+k-1,k},-a_{n+1,k},a_{n+k+1,k},\dots,a_{2n,k})^T.}
\end{equation*}
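The effect of conjugating by $w_k$ on the first column (entries $1$ and $k$ as well as $n+1$ and $n+k$ are swapped with a sign) can be verified numerically. The following sketch assumes numpy and uses the illustrative parameters $n=4$, $k=3$ with a random integer matrix $A$; the statement about the column does not use that $A$ is symplectic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
N = 2 * n

# w_k = e_{1,k} - e_{k,1} + e_{n+1,n+k} - e_{n+k,n+1} + sum of remaining e_{j,j}
w = np.zeros((N, N))
w[0, k - 1], w[k - 1, 0] = 1.0, -1.0
w[n, n + k - 1], w[n + k - 1, n] = 1.0, -1.0
for j in range(N):
    if j not in (0, k - 1, n, n + k - 1):
        w[j, j] = 1.0

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(w.T @ J @ w, J)          # w_k is symplectic
assert np.allclose(w.T, np.linalg.inv(w))   # and orthogonal, so w_k^{-1} = w_k^T

A = rng.integers(-9, 10, size=(N, N)).astype(float)
c = A[:, k - 1].copy()                      # k-th column of A
first_col = (w @ A @ w.T)[:, 0]             # first column of A_k = w_k A w_k^{-1}

expected = c.copy()
expected[0], expected[k - 1] = c[k - 1], -c[0]
expected[n], expected[n + k - 1] = c[n + k - 1], -c[n]
assert np.allclose(first_col, expected)
```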
Hence, applying Proposition~\ref{first_column} to each of the matrices $A_2,\dots,A_n$ and to the matrix $A_1:=A$ yields ideals
$I_1(A_1),\dots,I_1(A_n)$, all of them contained in $\varepsilon_s(A,64)$, with
\begin{equation*}
a_{1,k},\dots,a_{k-1,k},a_{k+1,k},\dots,a_{n,k},a_{n+1,k},\dots,a_{n+k-1,k},a_{n+k+1,k},\dots,a_{2n,k}\in I_1(A_k)
\end{equation*}
for $k\geq 2$ and $a_{2,1},\dots,a_{n,1},a_{n+2,1},\dots,a_{2n,1}\in I_1(A_1).$ So, the ideal $I_2(A):=I_1(A_1)+\cdots+I_1(A_n)$ is contained in $\varepsilon_s(A,64n).$
Further, $I_2(A)$ contains all off-diagonal entries of the first $n$ columns of $A$ except possibly the entries $a_{n+1,1},a_{n+2,2},\dots,a_{2n,n}.$ Next, observe that $J$ itself is an element of ${\rm Sp}_{2n}(R)$ and choose $M_1,M_2,M_3,M_4\in R^{n\times n}$ with
\begin{equation*}
A=\left(\begin{array}{c|c}
\begin{matrix}
M_1\\
\hline\
M_3
\end{matrix}
&
\begin{matrix}
M_2\\
\hline\
M_4
\end{matrix}
\end{array}\right).
\end{equation*}
Then we obtain
\begin{align*}
A':=J^{-1}AJ&=
\left(\begin{array}{c|c}
\begin{matrix}
0_n\\
\hline\
I_n
\end{matrix}
&
\begin{matrix}
-I_n\\
\hline\
0_n
\end{matrix}
\end{array}\right)
\cdot
\left(\begin{array}{c|c}
\begin{matrix}
M_1\\
\hline\
M_3
\end{matrix}
&
\begin{matrix}
M_2\\
\hline\
M_4
\end{matrix}
\end{array}\right)
\cdot J=
\left(\begin{array}{c|c}
\begin{matrix}
-M_3\\
\hline\
M_1
\end{matrix}
&
\begin{matrix}
-M_4\\
\hline\
M_2
\end{matrix}
\end{array}\right)\cdot
\left(\begin{array}{c|c}
\begin{matrix}
0_n\\
\hline\
-I_n
\end{matrix}
&
\begin{matrix}
I_n\\
\hline\
0_n
\end{matrix}
\end{array}\right)\\
&\\
&=\left(\begin{array}{c|c}
\begin{matrix}
M_4\\
\hline\
-M_2
\end{matrix}
&
\begin{matrix}
-M_3\\
\hline\
M_1
\end{matrix}
\end{array}\right)
\end{align*}
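The block computation for $A'=J^{-1}AJ$ uses only the block form of $J$, so it can be confirmed for arbitrary blocks $M_1,\dots,M_4$. A short numerical sketch, assuming numpy and random real blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M1, M2, M3, M4 = (rng.normal(size=(n, n)) for _ in range(4))

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
A = np.block([[M1, M2], [M3, M4]])

# J^{-1} A J swaps the blocks: [[M1,M2],[M3,M4]] -> [[M4,-M3],[-M2,M1]]
Aprime = np.linalg.inv(J) @ A @ J
assert np.allclose(Aprime, np.block([[M4, -M3], [-M2, M1]]))
```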
This implies that if we apply the previous construction of $I_2(A)$ to the matrix $A'$, then we obtain an ideal
$I_2(A')\subset\varepsilon_s(A',64n)=\varepsilon_s(A,64n)$ that contains all off-diagonal entries of the last $n$ columns of $A$,
except possibly the entries $a_{1,n+1},\dots,a_{n,2n}.$ Thus if we consider the ideal $I'_3(A):=I_2(A)+I_2(A')\subset\varepsilon_s(A,128n)$, it follows:
\begin{align*}
A\equiv
\tiny{\left(\begin{array}{c|c}
\begin{matrix}
a_{11} & 0 & 0 & \cdot & 0\\
0 & a_{22} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{nn} \\
\hline\
a_{n+1,1} & 0 & 0 & \cdot &0\\
0 & a_{n+2,2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & a_{2n,n} \\
\end{matrix}
&
\begin{matrix}
a_{1,n+1} & 0 & 0 & \cdot &0\\
0 & a_{2,n+2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & a_{n,2n} \\
\hline\
a_{n+1,n+1} & 0 & 0 & \cdot & 0\\
0 & a_{n+2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{2n,2n} \\
\end{matrix}
\end{array}\right)}
{\rm mod}\ I'_3(A).
\end{align*}
Thus the ideal $I_3(A):=(a_{ij},a_{i,n+j},a_{n+i,j},a_{n+i,n+j}|1\leq i\neq j\leq n)$ is contained in $I'_3(A)\subset\varepsilon_s(A,128n).$
Consequently, one also has
\begin{align*}
A^{-1}\equiv
\tiny{\left(\begin{array}{c|c}
\begin{matrix}
a_{n+1,n+1} & 0 & 0 & \cdot & 0\\
0 & a_{n+2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{2n,2n} \\
\hline\
-a_{n+1,1} & 0 & 0 & \cdot &0\\
0 & -a_{n+2,2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & -a_{2n,n} \\
\end{matrix}
&
\begin{matrix}
-a_{1,n+1} & 0 & 0 & \cdot &0\\
0 & -a_{2,n+2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & -a_{n,2n} \\
\hline\
a_{1,1} & 0 & 0 & \cdot & 0\\
0 & a_{2,2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{n,n} \\
\end{matrix}
\end{array}\right)}
{\rm mod}\ I_3(A).
\end{align*}
These congruences for $A$ and $A^{-1}$ imply
\begin{align*}
& A'':=(A,I_{2n}+e_{1,2}-e_{n+2,n+1})\\
&=\left(I_{2n}+A(e_{1,2}-e_{n+2,n+1})A^{-1}\right)\cdot(I_{2n}-e_{1,2}+e_{n+2,n+1})\\
&\equiv \left[I_{2n}+(a_{11}e_{12}+a_{n+1,1}e_{n+1,2}-a_{2,n+2}e_{2,n+1}-a_{n+2,n+2}e_{n+2,n+1})A^{-1}\right]\\
&\ \ \ \ \cdot (I_{2n}-e_{1,2}+e_{n+2,n+1})\\
&\equiv
[I_{2n}+a_{11}(a_{n+2,n+2}e_{12}-a_{2,n+2}e_{1,n+2})
+a_{n+1,1}(a_{n+2,n+2}e_{n+1,2}-a_{2,n+2}e_{n+1,n+2})\\
&\ \ \ \ -a_{2,n+2}(-a_{n+1,1}e_{2,1}+a_{11}e_{2,n+1})-a_{n+2,n+2}(-a_{n+1,1}e_{n+2,1}+a_{11}e_{n+2,n+1})]\\
&\ \ \ \ \cdot(I_{2n}-e_{1,2}+e_{n+2,n+1})\\
&=I_{2n}+a_{11}(a_{n+2,n+2}e_{12}-a_{2,n+2}e_{1,n+2})
+a_{n+1,1}(a_{n+2,n+2}e_{n+1,2}-a_{2,n+2}e_{n+1,n+2})\\
&\ \ \ \ -a_{2,n+2}(-a_{n+1,1}e_{2,1}+a_{11}e_{2,n+1})-a_{n+2,n+2}(-a_{n+1,1}e_{n+2,1}+a_{11}e_{n+2,n+1})\\
&\ \ \ \ -e_{1,2}+e_{n+2,n+1}-a_{n+1,1}a_{2,n+2}e_{22}-a_{n+1,1}a_{2,n+2}e_{n+1,n+1}\\
&\ \ \ \ -a_{n+2,n+2}a_{n+1,1}e_{n+2,2}-a_{11}a_{2,n+2}e_{1,n+1}\\
&=
\tiny\left(\begin{array}{c|c}
\begin{matrix}
1 & a_{11}a_{n+2,n+2}-1 & 0 & \cdot & 0\\
a_{2,n+2}a_{n+1,1} & 1-a_{n+1,1}a_{2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & 1 \\
\hline\
0 & a_{n+1,1}a_{n+2,n+2} & 0 & \cdot &0\\
a_{n+2,n+2}a_{n+1,1} & -a_{n+2,n+2}a_{n+1,1} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\end{matrix}
&
\begin{matrix}
-a_{11}a_{2,n+2} & -a_{2,n+2}a_{11} & 0 & \cdot &0\\
-a_{2,n+2}a_{11} & 0 & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\hline\
1-a_{n+1,1}a_{2,n+2} & -a_{n+1,1}a_{2,n+2} & 0 & \cdot & 0\\
1-a_{n+2,n+2}a_{11} & 1 & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & 1 \\
\end{matrix}
\end{array}\right)
{\rm mod}\ I_3(A).
\end{align*}
Note that the $(n+2,1)$-entry $a''_{n+2,1}$ of $A''$ is congruent to $a_{n+2,n+2}a_{n+1,1}$ modulo $I_3(A)$ and the
$(1,2)$-entry of $A''$ is congruent to $a_{n+2,n+2}a_{11}-1$ modulo $I_3(A).$ Further, note that $A''\in B_A(2).$
Next, apply Proposition~\ref{first_column}(2) to the matrix $A''$ to obtain an ideal
\begin{equation*}
I_4^{(1)}(A):=I_1^{(2)}(A'')\subset\varepsilon_s(A'',32)\subset\varepsilon_s(A,64)
\end{equation*}
that contains $a''_{n+2,1}$, an element which is congruent to $a_{n+2,n+2}a_{n+1,1}$ modulo $I_3(A).$
So for each element $X=(x_{ij})$ of ${\rm Sp}_{2n}(R)$, there is an ideal $I_4^{(1)}(X)\subset\varepsilon_s(X,64)$ which contains modulo $I_3(X)$ the element
$x_{n+2,n+2}x_{n+1,1}.$
Consider next the matrix $B_A(2)\ni A''_{2}:=w_2A''w_2^{-1}$ and note that its $(2,1)$-entry is congruent modulo $I_3(A)$ to
$a_{11}a_{n+2,n+2}-1.$ Apply Proposition~\ref{first_column}(1) to $A''_2$ to obtain an ideal
\begin{equation*}
I_4^{(2)}(A):=I_1^{(1)}(A''_2)\subset\varepsilon_s(A''_2,32)\subset\varepsilon_s(A,64)
\end{equation*}
that contains the $(2,1)$-entry of $A''_2$, which is congruent to $a_{11}a_{n+2,n+2}-1$ modulo $I_3(A).$
The properties of these ideals imply that the ideal $I^{(3)}_4(A):=I_4^{(1)}(A)+I_4^{(2)}(A)$ is contained in $\varepsilon_s(A,64+64)=\varepsilon_s(A,128)$ and
contains, modulo $I_3(A)$, the elements $a_{n+2,n+2}a_{11}-1$ and $a_{n+2,n+2}a_{n+1,1}$ and consequently also the element $a_{n+1,1}$ modulo $I_3(A).$
Phrased differently, for each matrix $X\in{\rm Sp}_{2n}(R)$, there is an ideal $I^{(3)}_4(X)\subset\varepsilon_s(X,128)$, which contains the elements $x_{n+1,1}$ and $x_{n+2,n+2}x_{11}-1$ modulo the ideal $I_3(X).$
Observe that for $k=3,\dots,n$, the conjugate $A_k$ of $A$ defined before has
\begin{enumerate}
\item{$(n+1,1)$-entry equal to $a_{n+k,k}$,}
\item{$(n+2,n+2)$-entry equal to $a_{n+2,n+2}$ and}
\item{$(1,1)$-entry equal to $a_{k,k}$.}
\end{enumerate}
Further, the conjugate $A_2$ of $A$ defined before has
\begin{enumerate}
\item{$(n+1,1)$-entry equal to $a_{n+2,2}$,}
\item{$(n+2,n+2)$-entry equal to $a_{n+1,n+1}$ and}
\item{$(1,1)$-entry equal to $a_{2,2}$.}
\end{enumerate}
Hence applying the previous construction of the ideal $I^{(3)}_4(X)$ to the conjugates $A_2,A_3,\dots,A_n$ of $A$ then yields ideals
$I^{(3)}_4(A_2),\dots,I^{(3)}_4(A_n)\subset\varepsilon_s(A,128)$
with the properties that
\begin{enumerate}
\item{for $k=2,3\dots,n$, the ideal $I^{(3)}_4(A_k)$ contains the elements $a_{n+k,k}$ modulo the ideal $I_3(A_k)=I_3(A)$ and}
\item{for $k=3,\dots,n$, the ideal $I^{(3)}_4(A_k)$ contains the element $a_{n+2,n+2}a_{k,k}-1$ modulo the ideal $I_3(A_k)=I_3(A).$}
\end{enumerate}
To summarize, the ideal
\begin{equation*}
I_4(A):=I_3(A)+I^{(3)}_4(A)+I^{(3)}_4(A_2)+\cdots+I^{(3)}_4(A_n)\subset\varepsilon_s(A,256n)
\end{equation*}
contains all the entries $a_{n+1,1},\dots,a_{2n,n}$ and $a_{n+2,n+2}a_{1,1}-1,a_{n+2,n+2}a_{3,3}-1,\dots,a_{n+2,n+2}a_{n,n}-1$.
This implies:
\begin{align*}
A\equiv
\tiny{\left(\begin{array}{c|c}
\begin{matrix}
a_{11} & 0 & 0 & \cdot & 0\\
0 & a_{22} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{nn} \\
\hline\
0 & 0 & 0 & \cdot &0\\
0 & 0 & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\end{matrix}
&
\begin{matrix}
a_{1,n+1} & 0 & 0 & \cdot &0\\
0 & a_{2,n+2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & a_{n,2n} \\
\hline\
a_{n+1,n+1} & 0 & 0 & \cdot & 0\\
0 & a_{n+2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{2n,2n} \\
\end{matrix}
\end{array}\right)}
{\rm mod}\ I_4(A).
\end{align*}
But $A$ is an element of ${\rm Sp}_{2n}(R)$ and hence $a_{ll}a_{n+l,n+l}\equiv 1\text{ mod }I_4(A)$ holds for all $l=1,\dots,n$. Thus $(a_{n+l,n+l}+I_4(A))^{-1}=a_{l,l}+I_4(A)$ holds in $R/I_4(A)$. On the other hand, $a_{n+2,n+2}a_{1,1}-1,a_{n+2,n+2}a_{3,3}-1,\dots,a_{n+2,n+2}a_{n,n}-1$ are all elements of $I_4(A)$ and hence
\begin{equation*}
a_{1,1}+I_4(A)=a_{3,3}+I_4(A)=\dots=a_{n,n}+I_4(A)=(a_{n+2,n+2}+I_4(A))^{-1}=a_{2,2}+I_4(A)
\end{equation*}
holds in the ring $R/I_4(A)$ as well. Thus we obtain
\begin{align*}
A\equiv
\tiny{\left(\begin{array}{c|c}
\begin{matrix}
a_{22} & 0 & 0 & \cdot & 0\\
0 & a_{22} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{22} \\
\hline\
0 & 0 & 0 & \cdot &0\\
0 & 0 & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\end{matrix}
&
\begin{matrix}
a_{1,n+1} & 0 & 0 & \cdot &0\\
0 & a_{2,n+2} & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & a_{n,2n} \\
\hline\
a_{n+2,n+2} & 0 & 0 & \cdot & 0\\
0 & a_{n+2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{n+2,n+2} \\
\end{matrix}
\end{array}\right)}
{\rm mod}\ I_4(A).
\end{align*}
Note in particular that all diagonal entries of $A$ reduce to units in $R/I_4(A).$
Similarly, for $A'=J^{-1}AJ$ consider the conjugates $A'_k:=w_kA'w_k^{-1}$ for $k=2,\dots,n$. Observe that for $k=3,\dots,n$ the $(n+1,1)$-entry of $A'_k$ is
$-a_{k,n+k}$ and the $(n+2,n+2)$-entry is $a_{2,2}.$ For $A'_2$ the
$(n+1,1)$-entry is $-a_{2,n+2}$ and the $(n+2,n+2)$-entry is $a_{1,1}.$
Further, for $A'$ the $(n+1,1)$-entry is $-a_{1,n+1}$ and the $(n+2,n+2)$-entry is $a_{2,2}.$\\
Next, consider the ideals $I_4^{(1)}(A'),I_4^{(1)}(A'_2),\dots,I_4^{(1)}(A'_n)\subset\varepsilon_s(A,64)$ and observe that according to the construction of these ideals, one has that
\begin{enumerate}
\item{the ideal $I_4^{(1)}(A')$ contains the element $-a_{1,n+1}a_{2,2}$ modulo $I_3(A')=I_3(A),$}
\item{for $k=3,\dots,n$, the ideal $I_4^{(1)}(A'_k)$ contains the element $-a_{k,n+k}a_{2,2}$ modulo $I_3(A'_k)=I_3(A)$ and}
\item{the ideal $I_4^{(1)}(A'_2)$ contains the element $-a_{2,n+2}a_{1,1}$ modulo $I_3(A'_2)=I_3(A).$}
\end{enumerate}
Next, consider the ideal:
\begin{align*}
I'(A)&:=I_3(A)+I^{(3)}_4(A)+I^{(3)}_4(A_2)+\cdots+I^{(3)}_4(A_n)+I_4^{(1)}(A')+I_4^{(1)}(A'_2)+\cdots+I_4^{(1)}(A'_n)\\
&\subset\varepsilon_s(A,256n+64n)=\varepsilon_s(A,320n).
\end{align*}
As $I_3(A)\subset I'(A)$, one concludes that
\begin{enumerate}
\item{$-a_{1,n+1}a_{2,2}$ is an element of $I'(A),$}
\item{for $k=3,\dots,n$, the element $-a_{k,n+k}a_{2,2}$ is contained in $I'(A)$ and}
\item{the element $-a_{2,n+2}a_{1,1}$ is contained in $I'(A).$}
\end{enumerate}
But remember that all diagonal entries of $A$ reduce to units in $R/I_4(A)$ and consequently also reduce to units in $R/I'(A).$ Hence as
$a_{1,n+1}a_{2,2},a_{3,n+3}a_{2,2},\dots,a_{n,2n}a_{2,2}$ and $a_{2,n+2}a_{1,1}$ are all elements of $I'(A),$ we obtain that
$a_{1,n+1},a_{3,n+3},\dots,a_{n,2n},a_{2,n+2}$ are also elements of $I'(A).$ Hence we obtain
\begin{align*}
A\equiv
\tiny{\left(\begin{array}{c|c}
\begin{matrix}
a_{22} & 0 & 0 & \cdot & 0\\
0 & a_{22} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{22} \\
\hline\
0 & 0 & 0 & \cdot &0\\
0 & 0 & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\end{matrix}
&
\begin{matrix}
0 & 0 & 0 & \cdot &0\\
0 & 0 & 0 & \cdot &0\\
\cdot & \cdot & \cdot & \cdot & \cdot\\
0 & 0 & 0 & \cdot & 0 \\
\hline\
a_{n+2,n+2} & 0 & 0 & \cdot & 0\\
0 & a_{n+2,n+2} & 0 & \cdot & 0\\
\cdot &\cdot &\cdot &\cdot &\cdot \\
0 & 0 & 0 & \cdot & a_{n+2,n+2} \\
\end{matrix}
\end{array}\right)}
{\rm mod}\ I'(A).
\end{align*}
Next, consider the ideal
\begin{align*}
I(A)&:=I'_3(A)+I^{(3)}_4(A)+I^{(3)}_4(A_2)+\cdots+I^{(3)}_4(A_n)+I_4^{(1)}(A')+I_4^{(1)}(A'_2)+\cdots+I_4^{(1)}(A'_n).
\end{align*}
Remember that $I'_3(A)$, just like $I_3(A)$, is contained in $\varepsilon_s(A,128n).$ Thus the ideal $I(A)$ is also contained in $\varepsilon_s(A,320n)$, just like
$I'(A).$ Further, $I(A)$ contains $I'(A)$, because $I'_3(A)$ contains $I_3(A).$ Thus, abusing notation and remembering
$a_{11}\equiv a_{22}\text{ mod }I(A)$, we obtain
\begin{equation*}
A\equiv a_{11}I_n\oplus a_{11}^{-1}I_n\text{ mod }I(A).
\end{equation*}
Next, remember that $I'_3(A)$ contains the ideal $I_1^{(1)}(A)$ and so according to the construction of $I_1^{(1)}(A)$ in the proof of Proposition~\ref{first_column},
the ideal $I'_3(A)$ contains the element $a'_{11}(b'_{n+1,n+1}-b'_{n+1,1})-1$, where now, abusing notation, $A'=(a'_{ij})$ denotes the matrix $A$ put in first Hessenberg form and $B':=(A')^{-1}.$
However, we know from the proof of Lemma~\ref{first_Hessenberg_sp_2n} that $A'=DAD^{-1}$ for $D=D'\oplus (D')^{-T}$ for $D'\in{\rm SL}_n(R).$ Thus
\begin{equation*}
A\equiv A'\text{ mod }I(A)\text{ and }B\equiv B'\text{ mod }I(A)
\end{equation*}
hold for $B:=A^{-1}.$ So, we conclude $a_{11}(b_{n+1,n+1}-b_{n+1,1})-1$ is an element of $I(A).$ However, $A$ is a diagonal matrix modulo $I(A)$ and so
$B$ is as well. Thus $b_{n+1,1}$ is an element of $I(A)$ and further $b_{n+1,n+1}+I(A)=a_{11}+I(A)$ holds, too. So summarizing, we conclude that $a_{11}^2-1$ is an element of $I(A).$
To finish the proof, let $m$ be an element of $V(I(A)).$ Then $(a_{11}-1)\cdot(a_{11}+1)=a_{11}^2-1$ is an element of $m$ and thus either
\begin{equation*}
a_{11}\equiv 1\text{ mod }m\text{ or }a_{11}\equiv -1\text{ mod }m
\end{equation*}
holds. But in either case $a_{11}+m=(a_{11}+m)^{-1}$ holds and so $A$ reduces to a scalar matrix modulo $m.$ Thus $m\in\Pi(\{A\})$ and this finishes the proof.
\end{proof}
\begin{remark}
For a given element $A\in {\rm Sp}_{2n}(R),$ it is possible that one of the many intermediate ideals $I$ making up $I(A)$ in the previous proof is already the entire ring $R.$ In this case, it is problematic to speak about units in the quotients $R/I$ or $R/I(A)$. However, if any of the intermediate ideals $I$ is already the entire ring $R$, then the claim of Theorem~\ref{level_ideal_explicit_Sp_2n} is obvious anyway, because then $V(I(A))=\emptyset$ holds.
\end{remark}
We also note the following corollary:
\begin{Corollary}
\label{sum_decomposition_level_ideal_local}
Let $R$ be a principal ideal domain, $n\geq 3, A\in{\rm Sp}_{2n}(R)$. Then the ideal $I(A)$ of Theorem~\ref{level_ideal_explicit_Sp_2n}
is a sum of ideals $J_1(A),\dots,J_{7n}(A)$ such that $J_i(A)\subset\varepsilon_s(A,64)$ holds for all $1\leq i\leq 7n.$
\end{Corollary}
\begin{proof}
Recall the Weyl group elements
\begin{equation*}
w_k:=e_{1,k}-e_{k,1}+e_{n+1,n+k}-e_{n+k,n+1}+\sum_{1\leq j\leq 2n, j\neq 1,k,n+1,n+k} e_{j,j}\in{\rm Sp}_{2n}(R)
\end{equation*}
for $k=2,\dots,n.$
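For orientation, the smallest instance of these elements may be written out explicitly (an illustration added here; it assumes the standard symplectic form $J=\left(\begin{smallmatrix}0 & I_n\\ -I_n & 0\end{smallmatrix}\right)$, while the sign conventions of Section~\ref{section_matrix_calculations_sp_2n} may differ):

```latex
% For n=2 and k=2, the sum over j with j\neq 1,k,n+1,n+k is empty, so
\begin{equation*}
w_2=e_{1,2}-e_{2,1}+e_{3,4}-e_{4,3}=
\left(\begin{matrix}
0 & 1 & 0 & 0\\
-1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & -1 & 0
\end{matrix}\right),
\end{equation*}
% and a direct computation gives w_2^T J w_2 = J, so w_2 is symplectic.
```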
For an arbitrary $X\in{\rm Sp}_{2n}(R)$ and $k=2,\dots,n$, let $X_k$ denote the conjugate $w_kXw_k^{-1}.$
Going through the construction of $I(A)$ in the proof of Theorem~\ref{level_ideal_explicit_Sp_2n}, one can see that $I(A)$ is (contained in) the sum of the following ideals:
\begin{enumerate}
\item{
$I^{(1)}_1(A),I^{(1)}_1(A_2),\dots,I^{(1)}_1(A_n)$, $I^{(2)}_1(A),I^{(2)}_1(A_2),\dots,I^{(2)}_1(A_n)$,
$I^{(1)}_1(A'),I^{(1)}_1(A'_2),\dots,I^{(1)}_1(A'_n)$ and $I^{(2)}_1(A'),I^{(2)}_1(A'_2),\dots,I^{(2)}_1(A'_n)$ for $A':=J^{-1}AJ.$ These $4n$ ideals are all individually contained in $\varepsilon_s(A,32).$}
\item{$I_4^{(1)}(A),I_4^{(1)}(A_2),\dots,I_4^{(1)}(A_n)$ and $I_4^{(2)}(A),I_4^{(2)}(A_2),\dots,I_4^{(2)}(A_n).$
These $2n$ ideals are all individually contained in $\varepsilon_s(A,64).$}
\item{$I_4^{(1)}(A'),I_4^{(1)}(A'_2),\dots,I_4^{(1)}(A'_n).$ These $n$ ideals are all individually contained in $\varepsilon_s(A,64).$}
\end{enumerate}
So to summarize: $I(A)$ is the sum of $7n$ ideals that are all individually contained in $\varepsilon_s(A,64).$
\end{proof}
\section{Stable range conditions, matrix decompositions and semi-local rings}
We first define the stable range of rings:
\begin{mydef}\cite[Ch.~1,§4]{MR0174604}
The \textit{(Bass) stable range} of a commutative ring $R$ with $1$ is the smallest $n\in\mathbb{N}$ with the following property:
If any $v_0,\dots,v_m\in R$ generate the unit ideal $R$ for $m\geq n$, then there are $t_1,\dots,t_m\in R$ such that the elements
$v_1':=v_1+t_1v_0,\dots,v_m':=v_m+t_mv_0$ also generate the unit ideal. If no such $n$ exists, $R$ has stable range $+\infty.$
\end{mydef}
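Two standard examples may help to fix the definition (added for illustration only):

```latex
Every field $K$ has stable range $1$: if $v_0,v_1$ generate the unit ideal
(the case $m=1$), then some $v_i$ is a unit, and $v_1':=v_1+t_1v_0$ is a unit
for $t_1=0$ or $t_1=1.$ By contrast, $\mathbb{Z}$ does not have stable range
$1$: the pair $(v_0,v_1)=(5,2)$ generates $\mathbb{Z}$, but
$2+5t\equiv 2\text{ mod }5$ holds for every $t\in\mathbb{Z}$, so $2+5t$ is
never $\pm 1.$ One checks that $\mathbb{Z}$ has stable range $2.$
```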
Recalling the choices made for the symplectic group in Section~\ref{section_matrix_calculations_sp_2n}, we obtain the following decomposition for symplectic groups:
\begin{Proposition}
\label{Sp_2n_stable_upper_lower_decomposition}
Let $R$ be a ring of stable range at most $2$ such that the group ${\rm Sp}_4(R)$ is generated by its root elements and let $n\geq 2$ be given.
Then identifying ${\rm Sp}_4(R)$ with the subgroup
\begin{equation*}
{\rm Sp}_4(R)
=\left\{
\left(\begin{array}{c|c}
\begin{matrix}
I_{n-2} & \ \\
\ & A
\end{matrix}
&
\begin{matrix}
0_{n-2} & \ \\
\ & B
\end{matrix}\\
\midrule
\begin{matrix}
0_{n-2} & \ \\
\ & C
\end{matrix}
&
\begin{matrix}
I_{n-2} & \ \\
\ & D
\end{matrix}
\end{array}\right)
|\
\left(\begin{array}{c|c}
A & B\\
\midrule
C & D
\end{array}
\right)\in{\rm Sp}_4(R)
\right\}
\end{equation*}
of ${\rm Sp}_{2n}(R)$, the following decomposition holds for the elementary subgroup $E(C_n,R)$ of ${\rm Sp}_{2n}(R):$
$E(C_n,R)=(U^+(C_n,R)\cdot U^-(C_n,R))^2\cdot{\rm Sp}_4(R).$
\end{Proposition}
\begin{remark}
The product $(U^+(C_n,R)\cdot U^-(C_n,R))^2$ is shorthand for $\{A\cdot B\cdot C\cdot D|A,C\in U^+(C_n,R),B,D\in U^-(C_n,R)\}\subset{\rm Sp}_{2n}(R)$ and not a Cartesian product.
\end{remark}
\begin{proof}
In Section~\ref{section_matrix_calculations_sp_2n}, we chose a system of positive simple roots $\{\alpha_1,\dots,\alpha_{n-1},\beta\}$ in $C_n$ whose Dynkin diagram has the following form:
\begin{center}
\begin{tikzpicture}[
shorten >=1pt, auto, thick,
node distance=2.5cm,
main node/.style={circle,draw,font=\sffamily\small\bfseries},
mynode/.style={rectangle,fill=white,anchor=center}
]
\node[main node] (1) {$\beta$};
\node[main node] (3) [left of=1] {$\alpha_1$};
\node[mynode] (4) [left of=3] {$\cdot\cdot\cdot$};
\node[main node] (5) [left of=4] {$\alpha_{n-1}$};
\node[mynode] (6) [left of=5] {$C_n:$};
\path (3) edge [double,<-] node {} (1);
\path (4) edge [] node {} (3);
\path (5) edge [] node {} (4);
\end{tikzpicture}
\end{center}
We only sketch the proof of this proposition, which proceeds by induction on $n\in\mathbb{N}.$ First, the statement is obvious for $n=2.$ Next, set
\begin{equation*}
X:=U^+(C_n,R)U^-(C_n,R)U^+(C_n,R)U^-(C_n,R){\rm Sp}_4(R).
\end{equation*}
Then $I_{2n}\in X$ holds and so quite similarly to classical proofs in algebraic K-theory like \cite[Theorem~2.5]{197877}, it suffices to show that for all simple roots $\phi\in C_n$ and $x\in R,$ one has $\varepsilon_{-\phi}(x)\cdot X\subset X.$ This in turn is done by distinguishing the cases $\phi=\alpha_{n-1}$ and
$\phi\neq\alpha_{n-1}$ and arguing similarly as in the proof of Tavgen's \cite[Proposition~1]{MR1044049}. If $\phi\neq\alpha_{n-1}$, one uses the induction hypothesis and if $\phi=\alpha_{n-1},$ one uses a similar decomposition result for ${\rm SL}_n(R)$ by Vaserstein \cite[Lemma~9]{MR961333} for the subgroup $E(A_{n-1},R)$ given by the positive, simple roots $\alpha_1,\dots,\alpha_{n-1}$ in $C_n$.
\end{proof}
In a similar fashion to Proposition~\ref{Sp_2n_stable_upper_lower_decomposition}, one can prove the following proposition invoking \cite[Lemma~9]{MR961333} for the stable range $1$-case:
\begin{Proposition}
\label{rank_1_boundedness}
Let $R$ be a commutative ring with $1$ and $N\in\mathbb{N}$ such that
\begin{align*}
&G(A_1,R)=E(A_1,R)=(U^+(A_1,R)U^-(A_1,R))^N,\\
&G(A_1,R)=E(A_1,R)=U^-(A_1,R)(U^+(A_1,R)U^-(A_1,R))^N
\end{align*}
or
\begin{equation*}
G(A_1,R)=E(A_1,R)=(U^+(A_1,R)U^-(A_1,R))^NU^+(A_1,R)
\end{equation*}
holds.
Then
\begin{align*}
&E(\Phi,R)=(U^+(\Phi,R)U^-(\Phi,R))^N,\\
&E(\Phi,R)=U^-(\Phi,R)(U^+(\Phi,R)U^-(\Phi,R))^N
\end{align*}
or
\begin{equation*}
E(\Phi,R)=(U^+(\Phi,R)U^-(\Phi,R))^NU^+(\Phi,R)
\end{equation*}
holds, respectively, for all irreducible root systems $\Phi.$ Further,
\begin{equation*}
E(\Phi,R)=(U^+(\Phi,R)U^-(\Phi,R))^2
\end{equation*}
holds for $R$ a ring of stable range $1.$
\end{Proposition}
Next, we give a more detailed analysis of the asymptotics of bounded generation for ${\rm Sp}_{2n}.$ In this context, recall the word norm $\|\cdot\|_{{\rm EL}_Q}$ from Definition~\ref{root_elements_word_norms}.
\begin{Proposition}
\label{sp_2n_sl_n_stable_root_elements_principal_ideal_domain}
Let $R$ be a principal ideal domain and let $n\geq 3$. If ${\rm Sp}_4(R)$ and ${\rm Sp}_{2n}(R)$ are generated by their root elements and there is a $K\in\mathbb{N}$ with
\begin{equation*}
\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}\leq K,
\end{equation*}
then
\begin{equation*}
\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 12(n-2)+K.
\end{equation*}
\end{Proposition}
\begin{proof}
Considering ${\rm Sp}_4(R)$ as a subgroup of ${\rm Sp}_{2n}(R)$ as done in Proposition~\ref{Sp_2n_stable_upper_lower_decomposition},
we first prove by induction that:
\begin{Claim}\label{claim_upper_triangular_elq}
For each $A\in U^+(C_n,R)$ there is an $A'\in U^+(C_2,R)$ with $\|A'^{-1}A\|_{{\rm EL}_Q}\leq 3(n-2)$ for $n\geq 2.$
\end{Claim}
First, the claim is clear for $n=2$. Let $A\in U^+(C_n,R)$ be given. Then it has the form
\begin{align*}
A=
\left(\begin{array}{c|c}
\begin{matrix}
\begin{matrix}
1 & a_{1,2} & \cdot & \cdot & a_{1,n}\\
\ & 1 & a_{2,3} & \cdot & a_{2,n}\\
\ & \ & 1 & \cdot & \cdot \\
\ & \ & \ & \cdot & \cdot \\
\ & \ & \ & \ & 1 \\
\end{matrix}\\
\midrule
\ \\
\ \\
0_n
\ \\
\ \\
\
\end{matrix}
&
\begin{matrix}
a_{1,n+1} & \cdot & \cdot & \cdot & a_{1,2n}\\
a_{2,n+1} & \cdot & \cdot & \cdot & a_{2,2n}\\
a_{3,n+1} & \cdot & \cdot & \cdot & a_{3,2n}\\
\cdot & \cdot & \cdot & \cdot & \cdot \\
a_{n,n+1} & \cdot & \cdot & \cdot & a_{n,2n}\\
\midrule
\ 1 & \ & \ & \ & \ \\
-a_{1,2} & \ 1 & \ & \ & \ \\
\cdot & -a_{2,3} & 1 & \ & \ \\
\cdot & \cdot & \cdot & \cdot & \ \\
-a_{1,n} & -a_{2,n} & \cdot & \cdot & \ 1
\end{matrix}
\end{array}\right)
\end{align*}
Multiplying $A$ with the matrix
\begin{align*}
&T:=(I_{2n}-a_{1,2}(e_{1,2}-e_{n+2,n+1}))\cdot(I_{2n}-a_{1,3}(e_{1,3}-e_{n+3,n+1}))\cdots(I_{2n}-a_{1,n}(e_{1,n}-e_{2n,n+1}))
\end{align*}
from the right yields an element $B$ of $U^+(C_n,R)$ with the first $n$ entries of the first row of $B$ being $0$, except for the $(1,1)$-entry, which is $1.$
However, according to the proof of Lemma~\ref{first_Hessenberg_sp_2n}, there is a matrix $D\in{\rm Sp}_{2n}(R)$ of the form
\begin{align*}
D=
\left(
\begin{array}{c|c}
\begin{matrix}
1 & \ \\
\ & D'
\end{matrix}
& 0_n\\
\midrule
0_n &
\begin{matrix}
1 & \ \\
\ & D'^{-T}
\end{matrix}
\end{array}
\right)
\end{align*}
for $D'\in{\rm SL}_{n-1}(R)$ such that the first column of $DT^TD^{-1}$ has the form
\begin{equation*}
(1,t,0,\dots,0)^T
\end{equation*}
for $t={\rm gcd}(-a_{1,2},-a_{1,3},\dots,-a_{1,n}).$ However, due to the form of $T^T$ and $D$, this implies that
$DT^TD^{-1}=I_{2n}+t(e_{21}-e_{n+1,n+2})$ and hence $T=D^T(I_{2n}+t(e_{12}-e_{n+2,n+1}))D^{-T}$ holds. This implies $\|T\|_{{\rm EL}_Q}\leq 1.$ Then $B=A\cdot T$ has the form
\begin{align*}
B=\left(\begin{array}{c|c}
\begin{matrix}
\begin{matrix}
1 & 0 & \cdot & \cdot & 0\\
\ & 1 & b_{2,3} & \cdot & b_{2,n}\\
\ & \ & 1 & \cdot & \cdot \\
\ & \ & \ & \cdot & \cdot \\
\ & \ & \ & \ & 1 \\
\end{matrix}\\
\midrule
\ \\
\ \\
0_n
\ \\
\ \\
\
\end{matrix}
&
\begin{matrix}
b_{1,n+1} & \cdot & \cdot & \cdot & b_{1,2n}\\
b_{2,n+1} & \cdot & \cdot & \cdot & b_{2,2n}\\
b_{3,n+1} & \cdot & \cdot & \cdot & b_{3,2n}\\
\cdot & \cdot & \cdot & \cdot & \cdot \\
b_{n,n+1} & \cdot & \cdot & \cdot & b_{n,2n}\\
\midrule
\ 1 & \ & \ & \ & \ \\
\ 0 & \ 1 & \ & \ & \ \\
\ 0 & -b_{2,3} & 1 & \ & \ \\
\ \cdot & \cdot & \cdot & \cdot & \ \\
\ 0 & -b_{2,n} & \cdot & \cdot & \ 1
\end{matrix}
\end{array}\right)
\end{align*}
Next, multiplying $B$ with
\begin{align*}
S:=&(I_{2n}-b_{1,n+1}e_{1,n+1})\cdot(I_{2n}-b_{1,n+2}(e_{1,n+2}+e_{2,n+1}))\cdot(I_{2n}-b_{1,n+3}(e_{1,n+3}+e_{3,n+1}))\\
&\cdots(I_{2n}-b_{1,2n}(e_{1,2n}+e_{n,n+1}))
\end{align*}
from the right yields an element $C\in U^+(C_n,R)$ whose first row is $(1,0,\dots,0).$ But applying the proof of Lemma~\ref{second_Hessenberg}, we can find a matrix of the form
\begin{align*}
E=
\left(
\begin{array}{c|c}
\begin{matrix}
1 & \ \\
\ & E'
\end{matrix}
& 0_n\\
\midrule
0_n &
\begin{matrix}
1 & \ \\
\ & E'^{-T}
\end{matrix}
\end{array}
\right)
\end{align*}
for $E'\in{\rm SL}_{n-1}(R)$ such that the first column of $ES^TE^{-1}$ has the form $(1,0,\dots,0,-b_{1,n+1},s,0,\dots,0)^T$
for $s={\rm gcd}(b_{1,n+2},b_{1,n+3},\dots,b_{1,2n}).$ However, due to the form of $S^T$ and $E$, this implies that
$ES^TE^{-1}=(I_{2n}-b_{1,n+1}e_{n+1,1})\cdot(I_{2n}+s(e_{n+1,2}+e_{n+2,1}))$ and hence
\begin{equation*}
S=E^T(I_{2n}-b_{1,n+1}e_{1,n+1})\cdot(I_{2n}+s(e_{1,n+2}+e_{2,n+1}))E^{-T}
\end{equation*}
holds. This implies that $\|S\|_{{\rm EL}_Q}\leq 2.$
But note that $C$ must be an element of the subgroup $U^+(C_{n-1},R)$ of $U^+(C_n,R)$, as its first row is $(1,0,\dots,0).$
This yields by induction that there is a $C'\in U^+(C_2,R)$ such that
\begin{equation*}
\|C'^{-1}C\|_{{\rm EL}_Q}\leq 3(n-1-2)=3(n-3)
\end{equation*}
holds. Hence setting $A'$ as $C'$, one obtains from $C=ATS$ that
\begin{align*}
\|A'^{-1}A\|_{{\rm EL}_Q}&=\|C'^{-1}CS^{-1}T^{-1}\|_{{\rm EL}_Q}\leq\|C'^{-1}C\|_{{\rm EL}_Q}+\|T\|_{{\rm EL}_Q}+\|S\|_{{\rm EL}_Q}\\
&\leq 3(n-3)+3=3(n-2).
\end{align*}
Thus the claim holds for all $n\geq 2.$ Let $A\in{\rm Sp}_{2n}(R)$ be given. Principal ideal domains have stable range at most $2$ and so Proposition~\ref{Sp_2n_stable_upper_lower_decomposition} yields that
\begin{equation*}
{\rm Sp}_{2n}(R)=(U^+(C_n,R)U^-(C_n,R))^2{\rm Sp}_4(R)
\end{equation*}
for all $n\geq 2.$ Hence there are $u_1^+,u_2^+\in U^+(C_n,R),u_1^-,u_2^-\in U^-(C_n,R)$ as well as
$Z\in{\rm Sp}_4(R)$ with $A=u_1^+u_1^-u_2^+u_2^-Z$.
But $U^+(C_n,R)$ and $U^-(C_n,R)$ are conjugate in ${\rm Sp}_{2n}(R)$. Hence applying the claim of the first part of the proof to the $u_1^+,u_1^-,u_2^+,u_2^-$ yields
$X_1,X_2,Y_1,Y_2\in{\rm Sp}_{2n}(R)$ with
\begin{equation*}
\|X_1\|_{{\rm EL}_Q},\|X_2\|_{{\rm EL}_Q},\|Y_1\|_{{\rm EL}_Q},\|Y_2\|_{{\rm EL}_Q}\leq 3(n-2)
\end{equation*}
and $v_1^+,v_2^+\in U^+(C_2,R)$ and $v_1^-,v_2^-\in U^-(C_2,R)$ such that $u_1^+=v_1^+X_1,u_2^+=v_2^+X_2,u_1^-=v_1^-Y_1,u_2^-=v_2^-Y_2.$
But this implies
\begin{align*}
A&=u_1^+u_1^-u_2^+u_2^-Z=(v_1^+X_1)\cdot(v_1^-Y_1)\cdot(v_2^+X_2)\cdot(v_2^-Y_2)Z\\
&=(v_1^+X_1(v_1^+)^{-1})\cdot(v_1^+v_1^-Y_1(v_1^+v_1^-)^{-1})\cdot(v_1^+v_1^-v_2^+X_2(v_1^+v_1^-v_2^+)^{-1})\\
&\ \ \ \cdot(v_1^+v_1^-v_2^+v_2^-Y_2(v_1^+v_1^-v_2^+v_2^-)^{-1})\cdot(v_1^+v_1^-v_2^+v_2^-)\cdot Z\\
&=(X_1^{v_1^+})\cdot(Y_1^{v_1^+v_1^-})\cdot(X_2^{v_1^+v_1^-v_2^+})\cdot(Y_2^{v_1^+v_1^-v_2^+v_2^-})\cdot(v_1^+v_1^-v_2^+v_2^-)\cdot Z.
\end{align*}
But $(v_1^+v_1^-v_2^+v_2^-)\cdot Z$ is an element of ${\rm Sp}_4(R)$ and hence $\|(v_1^+v_1^-v_2^+v_2^-)\cdot Z\|_{{\rm EL}_Q}\leq K$ holds. This implies
\begin{align*}
\|A\|_{{\rm EL}_Q}
&=\|(X_1^{v_1^+})\cdot(Y_1^{v_1^+v_1^-})\cdot(X_2^{v_1^+v_1^-v_2^+})\cdot(Y_2^{v_1^+v_1^-v_2^+v_2^-})\cdot(v_1^+v_1^-v_2^+v_2^-)\cdot Z\|_{{\rm EL}_Q}\\
&\leq\|X_1\|_{{\rm EL}_Q}+\|Y_1\|_{{\rm EL}_Q}+\|X_2\|_{{\rm EL}_Q}+\|Y_2\|_{{\rm EL}_Q}+\|(v_1^+v_1^-v_2^+v_2^-)\cdot Z\|_{{\rm EL}_Q}\\
&\leq 4\cdot 3\cdot(n-2)+K=12(n-2)+K.
\end{align*}
This yields the statement of the proposition.
\end{proof}
One also obtains:
\begin{Corollary}\label{sp_2n_sl_n_stable_root_elements_principal_ideal_domain}
Let $R$ be a principal ideal domain of stable range $1$ with ${\rm Sp}_{2n}(R)=E(C_n,R)$ for $n\geq 2.$ Then
$\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 9n-6.$
\end{Corollary}
\begin{proof}
First, note that according to Proposition~\ref{rank_1_boundedness}, we have ${\rm Sp}_{2n}(R)=(U^+(C_n,R)U^-(C_n,R))^2.$ Thus for each $A\in{\rm Sp}_{2n}(R),$ there are $u_1^+,u_2^+\in U^+(C_n,R)$ and $u_1^-,u_2^-\in U^-(C_n,R)$ with $A=u_1^+u_1^-u_2^+u_2^-.$ Then
\begin{align*}
\|A\|_{{\rm EL}_Q}&=\|u_1^+u_1^-u_2^+u_2^-\|_{{\rm EL}_Q}=\|(u_1^-)^{u_1^+}\cdot(u_1^+u_2^+)\cdot u_2^-\|_{{\rm EL}_Q}\\
&\leq\|(u_1^-)^{u_1^+}\|_{{\rm EL}_Q}+\|u_1^+u_2^+\|_{{\rm EL}_Q}+\|u_2^-\|_{{\rm EL}_Q}\\
&=\|u_1^-\|_{{\rm EL}_Q}+\|u_1^+u_2^+\|_{{\rm EL}_Q}+\|u_2^-\|_{{\rm EL}_Q}\leq 3\|U^+(C_n,R)\|_{{\rm EL}_Q}.
\end{align*}
The last inequality follows from the fact that $u_1^+u_2^+$ lies in $U^+(C_n,R)$ and that $U^+(C_n,R)$ and $U^-(C_n,R)$ are conjugate in ${\rm Sp}_{2n}(R).$ Next, according to
Claim~\ref{claim_upper_triangular_elq}, for each $u\in U^+(C_n,R)$ there is a $u'\in U^+(C_2,R)$ such that $\|u'^{-1}u\|_{{\rm EL}_Q}\leq 3(n-2).$ But
$C_2^+$ only contains four roots and hence $\|u'\|_{{\rm EL}_Q}\leq 4$, so $\|u\|_{{\rm EL}_Q}\leq 3n-2.$ This finishes the proof.
\end{proof}
We are now able to prove the first part of Theorem~\ref{strong_bound_explicit_semi_local}:
\begin{proof}
Let $S=\{A_1,\dots,A_k\}\subset{\rm Sp}_{2n}(R)$ normally generate ${\rm Sp}_{2n}(R)$. For $1\leq i\leq k$, let $I(A_i)\subset\varepsilon_s(A_i,320n)$ be the ideal given by Theorem~\ref{level_ideal_explicit_Sp_2n} with $V(I(A_i))\subset\Pi(\{A_i\}).$ However, Corollary~\ref{necessary_cond_conj_gen} yields $V(I(A_1)+\cdots+I(A_k))\subset\Pi(S)=\emptyset$ and so no maximal ideal can contain the ideal $I(A_1)+\cdots+I(A_k)$. Thus $\sum_{i=1}^k I(A_i)=R$ and so
\begin{equation}\label{explicit_upper_bounds_semi_local_rings_eq_1}
R=\varepsilon_s(S,320nk).
\end{equation}
According to Corollary~\ref{sum_decomposition_level_ideal_local}, each of the $I(A_i)$ is a sum of $7n$ ideals $J_1(A_i),\dots,J_{7n}(A_i),$
each of which is contained in $\varepsilon_s(A_i,64)$. Hence
\begin{equation*}
\sum_{i=1}^k \sum_{j=1}^{7n}J_j(A_i)=R
\end{equation*}
holds. Next, let $m$ be one of the maximal ideals of $R.$ Clearly not all of the ideals $J_j(A_i)$ can be contained in $m.$ Hence there are
$i(m)\in\{1,\dots,k\}$ and $j(m)\in\{1,\dots,7n\}$ with
\begin{equation*}
J_{j(m)}(A_{i(m)})\not\subset m.
\end{equation*}
But this implies that
\begin{equation*}
\sum_{m\text{ maximal ideal in }R} J_{j(m)}(A_{i(m)})
\end{equation*}
cannot be contained in any maximal ideal and thus must be the entire ring $R.$ As there are only $q$ maximal ideals in $R$, this implies that
\begin{equation}\label{explicit_upper_bounds_semi_local_rings_eq_2}
R=\varepsilon_s(S,64q)
\end{equation}
holds.
Combining (\ref{explicit_upper_bounds_semi_local_rings_eq_1}) and (\ref{explicit_upper_bounds_semi_local_rings_eq_2}), we obtain that
\begin{equation}
R=\varepsilon_s(S,64\min\{q,5nk\})
\end{equation}
holds. Now let $\alpha$ be a short, positive, simple root in $C_n$ and $\beta$ a long, positive, simple root in $C_n$ such that the root subsystem of $C_n$ spanned by $\alpha$ and $\beta$ is isomorphic to $C_2.$ Thus $\varepsilon_{\alpha}(x)$ and $\varepsilon_{\alpha+\beta}(\pm x)$ are elements of
$B_S(64\min\{q,5nk\})$ for every $x\in R$ and hence
\begin{equation*}
\varepsilon_{2\alpha+\beta}(\pm x)=(\varepsilon_{\alpha}(x),\varepsilon_{\beta}(1))\cdot\varepsilon_{\alpha+\beta}(\mp x)
\end{equation*}
is an element of $B_S(192\min\{q,5nk\}).$ Phrased differently, the set $B_S(192\min\{q,5nk\})$ contains all root elements of ${\rm Sp}_{2n}(R).$
But $R$ is semi-local and hence of stable range $1$ by \cite[Lemma~6.4, Corollary~6.5]{MR0174604}. So Corollary~\ref{sp_2n_sl_n_stable_root_elements_principal_ideal_domain} yields $\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 9n-6.$ This bound together with the fact that $B_S(192\min\{q,5nk\})$ contains all root elements, implies
\begin{align*}
\|{\rm Sp}_{2n}(R)\|_S&\leq\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\cdot\|{\rm EL}_Q\|_S\leq (9n-6)\cdot 192\min\{q,5nk\}\\
&=576(3n-2)\min\{q,5nk\}.
\end{align*}
This finishes the proof.
\end{proof}
\section{Rings of S-algebraic integers}
First, recall the definition of S-algebraic integers:
\begin{mydef}\cite[Chapter~I, \S 11]{MR1697859}\label{S-algebraic_numbers_def}
Let $K$ be a finite field extension of $\mathbb{Q}$ and let $S$ be a finite subset of the set $V$ of all valuations of $K$ such that $S$ contains all archimedean valuations. Then the ring $\C O_S$ is defined as
\begin{equation*}
\C O_S:=\{a\in K|\ \forall v\in V-S: v(a)\geq 0\}
\end{equation*}
and $\C O_S$ is called \textit{the ring of $S$-algebraic integers in $K.$} Rings of the form $\C O_S$ are called \textit{rings of S-algebraic integers.}
\end{mydef}
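A standard special case may help to fix ideas (added here for illustration):

```latex
For $K=\mathbb{Q}$, the set $V$ consists of the usual archimedean absolute
value together with the $p$-adic valuations $v_p$ for the primes $p$. Taking
$S=\{|\cdot|_{\infty},v_p\}$ for a single prime $p$ yields
\begin{equation*}
\C O_S=\{a\in\mathbb{Q}|\ \forall \ell\neq p\text{ prime}: v_{\ell}(a)\geq 0\}
=\mathbb{Z}[1/p],
\end{equation*}
while the minimal choice $S=\{|\cdot|_{\infty}\}$ recovers $\C O_S=\mathbb{Z}.$
```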
Remember the word norm $\|\cdot\|_{\rm EL}$ from Definition~\ref{root_elements_word_norms}. Then for $R$ a ring of S-algebraic integers, the group ${\rm Sp}_4(R)$ is boundedly generated by root elements as observed by Tavgen:
\begin{Theorem}\cite{MR1044049}
\label{Tavgen}
Let $K$ be a number field and $R$ a ring of S-algebraic integers in $K.$ Further let
\begin{equation*}
\Delta:=\max\{|\{p|\ p\text{ a prime divisor of }{\rm discr}_{K|\mathbb{Q}}\}|,1\}
\end{equation*}
be given. Then $\|{\rm Sp}_{4}(R)\|_{\rm EL}\leq 180\Delta+27.$ Furthermore, if $R$ is a principal ideal domain or $\Delta=1$, then the bound can be improved to
$\|{\rm Sp}_{4}(R)\|_{\rm EL}\leq 159.$
\end{Theorem}
\begin{remark}
This is not the bounded generation result as stated in \cite{MR1044049}. Instead, the first inequality is a summary of \cite[Corollary~4]{MR1044049} and
\cite[Proposition~1]{MR1044049}. The second inequality comes from applying possible improvements appearing in Carter and Keller's paper
\cite{MR704220} in the principal ideal domain case and the case $\Delta=1$.
\end{remark}
Also note the following bounded generation result for ${\rm SL}_2(R)$ by Rapinchuk, Morgan and Sury:
\begin{Theorem}\cite[Theorem~1.1]{MR3892969}\label{Rapinchuck_bounded_generation}
Let $R$ be a ring of S-algebraic integers with infinitely many units. Then $\|{\rm SL}_2(R)\|_{{\rm EL}}\leq 9.$
\end{Theorem}
We can prove the first part of Theorem~\ref{strong_bound_explicit_alg_integer} now:
\begin{proof}
Let $S=\{A_1,\dots,A_k\}$ be a normal generating set of ${\rm Sp}_{2n}(R).$ The proof proceeds in two steps: We first show that all root elements
of ${\rm Sp}_{2n}(R)$ are contained in the ball $B_S(960n|S|).$ This implies that ${\rm EL}_Q=\{A\varepsilon_{\phi}(x)A^{-1}|x\in R,\phi\in C_n,A\in{\rm Sp}_{2n}(R)\}$
is a subset of $B_S(960n|S|).$ Second, we show that $\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 12n+\Delta(R)$. These two steps then imply the first part of
Theorem~\ref{strong_bound_explicit_alg_integer} as follows:
\begin{align*}
\|{\rm Sp}_{2n}(R)\|_S\leq \|{\rm EL}_Q\|_S\cdot\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 960n|S|\cdot(12n+\Delta(R)).
\end{align*}
For the first step, note that Theorem~\ref{level_ideal_explicit_Sp_2n} implies that there are ideals $I(A_1),\dots,I(A_k)$ such that for all $i=1,\dots,k$, one has
$I(A_i)\subset\varepsilon_s(A_i,320n)$ and $V(I(A_i))\subset\Pi(\{A_i\}).$ But this implies first that $I_S:=I(A_1)+\cdots+I(A_k)\subset\varepsilon_s(S,320n\cdot|S|)$
and second using Lemma~\ref{intersection_v_Pi}:
\begin{equation*}
V(I_S)=V(I(A_1))\cap\dots\cap V(I(A_k))\subset\Pi(A_1)\cap\dots\cap\Pi(A_k)=\Pi(S).
\end{equation*}
However, we know $\Pi(S)=\emptyset$ from Corollary~\ref{necessary_cond_conj_gen} and so $I_S$ is not contained in any maximal ideal of $R$. Thus $I_S=R$ and
$\varepsilon_s(S,320n\cdot|S|)=R.$ Then proceeding as in the proof of the first part of Theorem~\ref{strong_bound_explicit_semi_local}, one obtains
that $B_S(960n\cdot|S|)$ contains all root elements of ${\rm Sp}_{2n}(R)$ and hence the first step of the proof is finished.
For the second step, we first give upper bounds on $\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}$ depending on $R$. First, if $R$ is a quadratic imaginary ring of integers or $\mathbb{Z}$, we have $\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}\leq\|{\rm Sp}_4(R)\|_{\rm EL}\leq 159$ according to Theorem~\ref{Tavgen}.
On the other hand, if $R$ is neither a quadratic imaginary ring of integers nor $\mathbb{Z}$, then $R$ has infinitely many units according to \cite[Corollary~11.7]{MR1697859}. This implies $\|{\rm SL}_2(R)\|_{\rm EL}\leq 9$ for those rings by Theorem~\ref{Rapinchuck_bounded_generation}.
But rings of algebraic integers have stable range at most $2$ and so according to Proposition~\ref{rank_1_boundedness}, this implies
\begin{equation*}
{\rm Sp}_4(R)=(U^+(C_2,R)U^-(C_2,R))^4U^+(C_2,R)\text{ or }{\rm Sp}_4(R)=U^-(C_2,R)(U^+(C_2,R)U^-(C_2,R))^4.
\end{equation*}
But $C_2$ has four positive roots and hence $\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}\leq\|{\rm Sp}_4(R)\|_{\rm EL}\leq 4\cdot 9=36$ holds. Hence setting
\begin{equation*}
\Delta'(R):=
\begin{cases}
159,&\text{if }R\text{ is a quadratic imaginary ring of integers or }\mathbb{Z},\\
36,&\text{otherwise}
\end{cases}
\end{equation*}
implies $\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}\leq\Delta'(R)$ for all rings of S-algebraic integers with class number $1.$
Proposition~\ref{sp_2n_sl_n_stable_root_elements_principal_ideal_domain} then implies
\begin{equation*}
\|{\rm Sp}_{2n}(R)\|_{{\rm EL}_Q}\leq 12(n-2)+\|{\rm Sp}_4(R)\|_{{\rm EL}_Q}\leq 12(n-2)+\Delta'(R)=12n+\Delta(R).
\end{equation*}
This finishes the second step and the proof.
\end{proof}
\begin{remark}
One could improve the upper bounds on $\Delta_k({\rm Sp}_{2n}(R))$ in Theorem~\ref{strong_bound_explicit_alg_integer} and Theorem~\ref{strong_bound_explicit_semi_local} by a factor of $1.5$ by using that short root elements have better upper bounds with respect to $\|\cdot\|_S$ than long ones. However, this would require a more cumbersome argument.
\end{remark}
\section{Lower bounds on $\Delta_k({\rm Sp}_{2n}(R))$}
First, we need the following:
\begin{Proposition}
\label{dimension_counting_sln_sp_2n}
Let $K$ be a field, $t\in K-\{0\}, n\geq 2$ and $\phi\in C_n$ long. Then the element $E:=\varepsilon_{\phi}(t)$ normally generates ${\rm Sp}_{2n}(K)$ and
$\|{\rm Sp}_{2n}(K)\|_E\geq 2n.$
\end{Proposition}
\begin{proof}
First, we show that $E$ indeed normally generates ${\rm Sp}_{2n}(K).$ To this end, first note that it is well-known that the group ${\rm Sp}_{2n}(K)$ is generated by its root elements. Hence according to Corollary~\ref{necessary_cond_conj_gen}, the element $E$ normally generates ${\rm Sp}_{2n}(K)$ if $\Pi(\{E\})=\emptyset.$ However, the field $K$ has only one maximal ideal, namely $(0),$ and so $\Pi(\{E\})$ can only be non-empty if $E$ is trivial, which is not the case.
Next, using the conventions from Section~\ref{section_matrix_calculations_sp_2n}, we can (possibly after conjugation with Weyl group elements) assume
$E=I_{2n}+te_{1,n+1}.$ We define the subspace $I(l):=\{v\in K^{2n}|l(v)=v\}$ for a linear map $l:K^{2n}\to K^{2n}.$ We prove next that for linear maps $l_1,l_2:K^{2n}\to K^{2n}$, one has
\begin{equation}
\label{fix_space_dim_formula}
{\rm dim}_K(I(l_1l_2))\geq{\rm dim}_K(I(l_1))+{\rm dim}_K(I(l_2))-2n.
\end{equation}
To see this, observe first that $I(l_1)\cap I(l_2)\subset I(l_1l_2)$ and hence
\begin{align*}
{\rm dim}_K(I(l_1l_2))&\geq{\rm dim}_K(I(l_1)\cap I(l_2))={\rm dim}_K(I(l_1))+{\rm dim}_K(I(l_2))-{\rm dim}_K(\langle I(l_1),I(l_2)\rangle)\\
&\geq {\rm dim}_K(I(l_1))+{\rm dim}_K(I(l_2))-2n.
\end{align*}
Observe that the linear map $E:K^{2n}\to K^{2n}$ induced by $E$ has
\begin{equation*}
I(E^{-1})=I(E)=Ke_1\oplus\cdots\oplus Ke_n\oplus Ke_{n+2}\oplus\cdots\oplus Ke_{2n}.
\end{equation*}
Hence ${\rm dim}_K I(E)=2n-1={\rm dim}_K I(E^{-1})$ holds. Note further that for $X\in K^{2n\times 2n}$,
$A\in {\rm GL}_{2n}(K)$ and $v\in K^{2n}$, the following holds:
\begin{equation*}
v\in I(X)\text{ precisely if }Av\in I(AXA^{-1}).
\end{equation*}
Hence $I(AXA^{-1})=AI(X)$ holds and thus ${\rm dim}_K I(X)={\rm dim}_K I(AXA^{-1}).$ Hence for each conjugate $X$ of $E$ or $E^{-1}$ in ${\rm Sp}_{2n}(K)$, one has ${\rm dim}_K(I(X))=2n-1.$ Next, let $X_1,\dots,X_k$ be either conjugates of $E$ or $E^{-1}$ in ${\rm Sp}_{2n}(K)$ or $I_{2n}.$ Then ${\rm dim}_K(I(X_1\cdots X_k))\geq 2n-k$ follows by induction on $k\in\mathbb{N}$ from (\ref{fix_space_dim_formula}). This implies in particular that for each $A\in B_E(2n-1)$ there is a non-trivial vector $v(A)\in K^{2n}$ fixed by $A.$ Hence each element of $B_E(2n-1)$ has eigenvalue $1.$ So if $\|{\rm Sp}_{2n}(K)\|_E\leq 2n-1$ or equivalently $B_E(2n-1)={\rm Sp}_{2n}(K)$ were to hold, then each element $A\in{\rm Sp}_{2n}(K)$ would have eigenvalue $1.$ Thus it suffices to give an element $A\in {\rm Sp}_{2n}(K)$ without the eigenvalue $1$ to finish the proof. To this end, observe that for $B\in{\rm SL}_n(K)$, the matrix
\begin{align*}
A=\left(
\begin{array}{c|c}
B & 0_n\\
\midrule
0_n & B^{-T}
\end{array}
\right)
\end{align*}
is an element of ${\rm Sp}_{2n}(K)$ with characteristic polynomial
\begin{equation*}
\chi_A(x)=\chi_B(x)\chi_{B^{-T}}(x)=\chi_B(x)\chi_{B^{-1}}(x).
\end{equation*}
But this implies that $A$ has eigenvalue $1$ precisely if either $B$ or $B^{-1}$ has eigenvalue $1.$ Yet $B^{-1}$ has eigenvalue $1$ precisely if $B$ does. Thus it suffices to provide an element $B\in{\rm SL}_n(K)$ without eigenvalue $1$ to finish the proof, but such matrices $B$ clearly exist.
\end{proof}
\begin{remark}
This `dimension counting' strategy is quite well-known and was mentioned to me by B. Karlhofer in a different context, but it is also alluded to in Lawther's and Liebeck's paper \cite[p.~120]{lawther1998diameter}.
\end{remark}
We can finish the proof of Theorem~\ref{strong_bound_explicit_semi_local} now by providing lower bounds on $\Delta_k({\rm Sp}_{2n}(R))$:
\begin{proof}
Let $q<+\infty$ be the number of maximal ideals in $R.$ Note that $\Delta_{k}({\rm Sp}_{2n}(R))\leq\Delta_{k+1}({\rm Sp}_{2n}(R))$ holds for all $k\in\mathbb{N}.$ Thus we can restrict ourselves to the case $k\leq q.$ Next, let $\C M_1,\dots,\C M_q$ be the maximal ideals in $R.$ Note that, being maximal ideals, $\C M_i$ and $\C M_j$ are coprime for $1\leq i\neq j\leq q$. Hence we obtain using the Chinese Remainder Theorem that
\begin{equation*}
p:R\to\prod_{i=1}^q R/\C M_i,x\mapsto (x+\C M_1,\dots,x+\C M_q)
\end{equation*}
is an epimorphism. Thus we can pick elements $x_1,\dots,x_k\in R$ such that
\begin{equation*}
p(x_j)=(0+\C M_1,\dots,0+\C M_{j-1},1+\C M_j,0+\C M_{j+1},\dots,0+\C M_k,1+\C M_{k+1},\dots,1+\C M_q)
\end{equation*}
holds for all $1\leq j\leq k.$ Next, let $\phi$ be a long root in $C_n$ and set $S:=\{\varepsilon_{\phi}(x_1),\dots,\varepsilon_{\phi}(x_k)\}.$ We finish the proof now by showing two claims:
First, that $S$ is a normally generating subset of ${\rm Sp}_{2n}(R)$ and second that $\|{\rm Sp}_{2n}(R)\|_S\geq 2nk$ holds.
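The residue pattern demanded of $x_1,\dots,x_k$ is a standard Chinese Remainder computation. As a concrete toy instance (our own, and purely illustrative), take $R=\mathbb{Z}$ with the maximal ideals $(2),(3),(5),(7)$, so $q=4$, and $k=2$:

```python
from math import prod

mods = [2, 3, 5, 7]          # generators of the maximal ideals (2), (3), (5), (7)
q, k = len(mods), 2

def crt(residues, mods):
    # standard CRT: x = sum_i r_i * M_i * (M_i^{-1} mod m_i)  (mod M)
    M = prod(mods)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, mods)) % M

# x_j is 1 mod M_j, 0 mod M_i for the other i <= k, and 1 mod M_i for i > k
xs = [crt([1 if (i == j or i >= k) else 0 for i in range(q)], mods)
      for j in range(k)]
```

By construction no maximal ideal contains both $x_1$ and $x_2$, mirroring the condition $\Pi(S)=\emptyset$ used in the first claim.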
To show the first claim, note that due to the choice of the $x_1,\dots,x_k$ and the fact that the $\C M_1,\dots,\C M_q$ are all the maximal ideals of $R$, we obtain for $1\leq j\leq k$ that:
\begin{equation*}
\Pi(\{\varepsilon_{\phi}(x_j)\})=\{\C M_i|1\leq i\neq j\leq k\}
\end{equation*}
and hence $\Pi(S)=\emptyset.$ But then Corollary~\ref{necessary_cond_conj_gen} implies that $S$ is indeed a normal generating set of ${\rm Sp}_{2n}(R).$
Next, consider the map
\begin{equation*}
\pi:{\rm Sp}_{2n}(R)\to\prod_{i=1}^k{\rm Sp}_{2n}(K_i),A\mapsto(\pi_{\C M_1}(A),\dots,\pi_{\C M_k}(A))
\end{equation*}
for the fields $K_i$ defined by $K_i:=R/\C M_i$ and the $\pi_{\C M_i}:{\rm Sp}_{2n}(R)\to{\rm Sp}_{2n}(K_i)$ being the corresponding reduction homomorphisms.
Then note that
\begin{align*}
\pi(\varepsilon_{\phi}(x_i))&=
(\varepsilon_{\phi}(x_i+\C M_1),\dots,\varepsilon_{\phi}(x_i+\C M_{i-1}),\varepsilon_{\phi}(x_i+\C M_i),\varepsilon_{\phi}(x_i+\C M_{i+1}),\dots,\varepsilon_{\phi}(x_i+\C M_k))\\
&=(1,\dots,1,\varepsilon_{\phi}(1+\C M_i),1,\dots,1).
\end{align*}
Thus the only non-trivial component of $\pi(\varepsilon_{\phi}(x_i))$ is the ${\rm Sp}_{2n}(K_i)$-component equal to $\varepsilon_{\phi}(x_i+\C M_i)$ and
${\rm Sp}_{2n}(K_i)$ is normally generated by $\varepsilon_{\phi}(x_i+\C M_i)$. This also implies that the only non-trivial component of any conjugate of
$\pi(\varepsilon_{\phi}(x_i))$ is the ${\rm Sp}_{2n}(K_i)$-component. Together this implies that $\pi(S)$ normally generates $\prod_{i=1}^k {\rm Sp}_{2n}(K_i)$ and
\begin{equation*}
\|{\rm Sp}_{2n}(R)\|_S\geq \|\prod_{i=1}^k {\rm Sp}_{2n}(K_i)\|_{\pi(S)}=\sum_{i=1}^k\|{\rm Sp}_{2n}(K_i)\|_{\varepsilon_{\phi}(x_i+\C M_i)}.
\end{equation*}
Thus to finish the proof, it suffices to now apply Proposition~\ref{dimension_counting_sln_sp_2n} to obtain
\begin{equation*}
\|{\rm Sp}_{2n}(K_i)\|_{\varepsilon_{\phi}(x_i+\C M_i)}\geq 2n
\end{equation*}
for all $i=1,\dots,k$.
\end{proof}
The proof for the second part of Theorem~\ref{strong_bound_explicit_alg_integer} works the same way. The only difference is that the $x_1,\dots,x_k$ are instead chosen as $x_i:=p_1\cdots\hat{p_i}\cdots p_k$ for $p_1,\dots,p_k$ the generators of $k$ distinct maximal ideals and the hat denoting the omission of the corresponding prime factor.
\section*{Closing remarks}
In the course of this paper, we assumed that the ring $R$ is a principal ideal domain. This however is mainly a method to simplify the calculations. Instead one can also use that both semi-local rings and rings of algebraic integers have stable range at most $2$ and then argue similarly to the proof of
Bass' \cite[Theorem~4.2(e)]{MR0174604}. However, this requires further intermediate steps and does not change the overall asymptotic of the bounds on
$\Delta_k({\rm Sp}_{2n}(R))$ in $k$ and $n$, but merely the coefficients that appear.
Kedra, Libman and Martin \cite{KLM} have shown that $\Delta_k({\rm SL}_n(R))$ for $n\geq 3$ also has an upper bound proportional to $n^2k$ for $R$ a ring of S-algebraic integers with class number $1$. It is clear that arguments similar to those in the present paper will also work for all S-arithmetic Chevalley groups $G(\Phi,R)$ for $\Phi$ an irreducible root system of rank at least $3$ to show that
\begin{equation*}
{\rm rank}(\Phi)\cdot k\lesssim\Delta_k(G(\Phi,R))\lesssim{\rm rank}(\Phi)^2\cdot k.
\end{equation*}
This raises the question regarding the true asymptotic behavior of $\Delta_k(G(\Phi,R))$. At the time of writing, we were not aware of results that would help decide this. However, the anonymous referee for \cite{General_strong_bound} suggested a strategy that would, among other things, imply that the asymptotic of $\Delta_k(G(\Phi,R))$ agrees with the lower linear bound, if for each irreducible root system $\Phi$ of rank at least $3$ and each non-trivial ideal $I$ in the principal ideal domain $R$, the following conjecture holds:
\begin{Conjecture}\label{fundamental_conjecture}
Let $X_{I,\Phi}:=\{A\varepsilon_{\phi}(x)A^{-1}|x\in I,A\in G(\Phi,R),\phi\in\Phi\}$ be given and let $N_{I,\Phi}$ be the subgroup of $G(\Phi,R)$ generated by $X_{I,\Phi}.$ Then there is a constant $K:=K(I,\Phi)$ proportional to ${\rm rank}(\Phi)$ and independent of $R$ and $I$ such that $N_{I,\Phi}=X_{I,\Phi}^K$ holds.
\end{Conjecture}
There is only one such result known to us, a theorem by Morris \cite[Theorem~6.1(1)]{MR2357719} implying that there is such a $K(I,A_n)$ for $n\geq 3$, but this $K(I,A_n)$ depends on the cardinality of $R/I,$ the number $n$ and the degree $[K:\mathbb{Q}]$ of the number field $K$ containing $R.$
It is easy to see that a constant $K(I,A_n)$ as required must depend on $n$ (or equivalently ${\rm rank}(A_n)$), but whether such a constant $K(I,A_n)$ must depend on $I$, $|R/I|$ and $[K:\mathbb{Q}]$ is somewhat unclear.
In an upcoming paper about bounds for $\Delta_k({\rm Sp}_4(R))$, we will deal with a relatively simple special case for $I=2R$ and $\Phi=C_2$ to show that at least in certain cases $K(I,\Phi)$ can be made to not depend on $R$ and the field extension $K|\mathbb{Q}$.
\section{Introduction}
The virial expansion was developed in the beginning of the 20th century \cite{onnes1901expression} to provide an estimate of the deviation of the behavior of real gases from the ideal gas equation. Since then, the second virial coefficient $b_2$ has been studied in great detail because it can be calculated analytically and gives the leading order correction for real gases. The virial coefficients reflect, order by order, the effects of interactions on the $n$-body problem: the $n$-th order virial coefficient is determined by solving the $n$-body problem. Therefore, one can obtain the shift in $b_2$ from the free particle case {\it analytically} by solving the 2-body Schr{\"o}dinger equation with an interaction potential. Beth and Uhlenbeck, in their famous paper~\cite{beth1937quantum}, showed that this shift $\delta b_2$ is related to the scattering phase shift in 3D. Since then the result has been generalized to lower dimensions as well \cite{portnoi1998levinson,kristensen2016second,cui2012quasi}. Recently, it was proven that $\delta b_2$ is the imprint of the quantum scale anomaly in dilute quantum gases in 1D with three-body and 2D with two-body local interactions \cite{daza2018virial,drut2018quantum} using a many-body path integral formalism \cite{ordpa}. In an effort to extend this work with a 1D local derivative-delta interaction, it was demonstrated that the derivative-delta potential shares a number of properties with the delta-function potential \cite{camblong2019quantum}. One of them is that a straightforward application of the Beth-Uhlenbeck (BU) formula misses a $-1/2$ term to give a \say{wrong} result in the limit $E_B \rightarrow 0$ ($E = -E_B$, $E_B>0$ is the bound state energy). It has been previously shown that in 1D, the correct BU formula has an additional $-1/2$ term which produces the correct result for the 1D delta-function in the limit $E_B\rightarrow 0$. 
The origin of this mysterious factor in 1D has been mentioned in several works \cite{amaya2005second,dodd1974cluster,sassoli1994levinson,barton1985levinson}.
\noindent The present article is an effort to make the formalism more understandable and to connect different contexts that share the same property with regards to the $-1/2$ term. In this note, we use the spectral-density approach to show in detail that this term is due to the non-normalizability of the scattering states and the contribution of the zero energy state of the system. We have used an infrared (large volume, length in 1D) cutoff regularization method to systematically control the divergences in the calculation, which gives us the desired $-1/2$ term. For simplicity, and for the sake of better understanding, we have used the results of scattering from a delta-function potential, but the method can be implemented for general potentials as well. In addition to providing the correction term in the BU formula, the spectral density approach can also be used to derive the 1D Levinson's theorem \cite{boya2007theorem}. We have also investigated similar subtleties using different methods based on quantum field theoretical treatments of the partition function \cite{inPrep}. These methods also give the correct $-1/2$ term, but in some of them its appearance is simpler than in others. We have chosen to give a sketch of the calculation using the method of \cite{PhysRevA.84.053633} in appendix \ref{appendixLeyronas}, since the subtleties there appear to be closely related to the ones in the spectral density method investigated here. A more comprehensive and detailed review of these issues will appear elsewhere \cite{inPrep}.
\noindent This article is organized as follows: in Sec. \ref{sec:BUf} we show how a naive application of the BU formula gives an incorrect result for the 1D delta-function interaction. In Sec. \ref{sec:Spectral} we use the spectral density method to find the correction factor to the BU formula. Finally, we discuss the relation between Levinson's theorem and the spectral density method in Sec. \ref{sec:Levinson} followed by a brief discussion of the whole topic in Sec. \ref{sec:Conclusion}.
\section{Beth-Uhlenbeck formula\label{sec:BUf}}
The BU formula \cite{beth1937quantum} connects the shift of the second virial coefficient from the free case $(\delta b_2)$ to the scattering phase shift of the potential. For a phase shift $\delta_l(k)$ corresponding to the $l$-th partial wave, the BU formula in 1D is given by [see appendix \ref{appendixbu1d}],
\begin{equation}
\delta b_2 = \sum_{B} e^{\beta E_B} + \frac{1}{\pi}\sum_{l=0,1} \int_{0}^{\infty}dk \hskip 0.2em e^{-\beta k^2} \frac{d\delta_l(k)}{dk},\label{BUrelation}
\end{equation}
where $E=-E_B$ $(E_B>0)$ is a bound state energy and $\beta = 1/k_BT$. For the 1D delta-function potential, there is only $s$-wave scattering phase shift, and hence the only possible value of $l$ is 0. The scattering phase shift for this potential is given by $\delta_0(k) = \arctan\left(\frac{\sqrt{E_B}}{k}\right)$ in the units of reduced mass $\mu= 1/2$ \cite{lapidus1969phase}. From here on we will omit the $l$ subscript on the phase shift and just denote it by $\delta(k)$. Using the BU formula given in Eq. (\ref{BUrelation}), and plugging in the phase shift of the delta-function one obtains
\begin{equation}
\delta b_2 = e^{\beta E_B} - \frac{1}{2} e^{\beta E_B}\left(1-\text{erf}\left[\sqrt{\beta E_B} \right]\right).
\end{equation}
It is easy to see that this expression is incorrect by checking the limit $E_B \rightarrow 0$. When the potential strength goes to zero, one should expect that the shift in the second virial coefficient vanishes. In this case, instead of obtaining zero, we get $\delta b_2 = 1/2$ in the $E_B\rightarrow 0$ limit. Clearly, this is incorrect and Eq. (\ref{BUrelation}) is missing terms. To determine the missing terms, we evaluate the quantity $\delta b_2$ using the spectral density method.
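The failure of this limit is easy to exhibit numerically. The snippet below (our own check, using only the Python standard library) evaluates the expression above for decreasing $E_B$ and shows the values approaching $1/2$ rather than $0$:

```python
from math import erf, exp, sqrt

def delta_b2_naive(beta, E_B):
    # naive BU result for the 1D delta-function phase shift:
    # exp(beta*E_B) - (1/2) exp(beta*E_B) (1 - erf(sqrt(beta*E_B)))
    return exp(beta * E_B) - 0.5 * exp(beta * E_B) * (1 - erf(sqrt(beta * E_B)))

# as E_B -> 0 the values decrease toward 1/2 instead of the expected 0
vals = [delta_b2_naive(1.0, E_B) for E_B in (1e-1, 1e-3, 1e-5, 1e-7)]
```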
\section{Spectral density method \label{sec:Spectral}}
Green's functions have extensive use in quantum mechanics \cite{economou1983green}. One of them is the usage of the spectral representation of the retarded Green's function to find out the spectral function. The spectral function denotes the probability that a certain state with momentum $k$ has energy $E$. This is exactly what the local density of states is. The local density of states can then be integrated with respect to real space to give the global density of states, a quantity which is related to scattering phase shifts. In this section, we are going to exploit this relation between the density of states and the retarded Green's function to find the missing term in the BU formula.
\noindent The global density of states in the spectral density method is given by \cite{gasparian1996partial}
\begin{equation}
\frac{dN}{dE} = -\frac{1}{\pi}\int_{-\infty}^{\infty} dx \text{ Im}[G(x,x)], \label{dos_spectral}
\end{equation}
where $G(x,x')$ is the retarded Green's function of the system. For an attractive 1D delta-function potential we have exactly one bound state and a continuum of scattering states. So, the Hilbert space can be separated into two parts. One part includes the bound states and the other includes the scattering states. With this structure, the Green's function can be written as
\begin{equation}
G(x,x') = \lim_{\epsilon\rightarrow 0} \frac{\pb{x}\psi_B^*(x')}{E+E_B+i\epsilon} + \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{\pk{x}\psi_k^*(x')}{E-k^2+i\epsilon},\label{gf_spectral}
\end{equation}
where the first term corresponds to the bound state $(\pb{x})$ and the integral is over the momentum of scattering states $(\pk{x})$. In Eq. (\ref{gf_spectral}) it is implicitly understood that the scattering states are defined only for positive energy of the system ($E>0$) and bound states are defined for negative energy of the system ($E<0$).
\noindent The density of states changes when the interaction is turned on. Without the interaction potential, we will only have a free particle with a continuous energy spectrum. The appearance of the bound states when the interaction is nonzero indicates the change in density of states. This change can be written in terms of Eq. (\ref{dos_spectral}),
\begin{equation}
\frac{d\Delta N}{dE} = -\frac{1}{\pi}\int_{-\infty}^{\infty} dx \text{ Im}[G(x,x)] + \frac{1}{\pi}\int_{-\infty}^{\infty} dx \text{ Im}[G_0(x,x)].
\end{equation}
$G_0(x,x')$ is the Green's function for the non-interacting case (free particle). The change in the density of states is related to the difference between the classical and quantum calculation of the second virial coefficient $\delta b_2$ by (see appendix \ref{appendixbu1d})
\begin{equation}
\delta b_2 = \int_{-\infty}^{\infty} dE \hskip 0.2em e^{-\beta E} \frac{d\Delta N}{dE}. \label{spectraldeltab2}
\end{equation}
Writing out the whole expression explicitly we get
\begin{align}
\delta b_2 &= \int_{-\infty}^{\infty} dE \hskip 0.2em e^{-\beta E}\left(-\frac{1}{\pi}\right)\int_{-\infty}^{\infty} dx \text{ Im}[G(x,x)-G_0(x,x)]\\
&= \int_{-\infty}^{\infty} dE \hskip 0.2em e^{-\beta E}\left(-\frac{1}{\pi}\right)\int_{-\infty}^{\infty} dx \text{ Im}\left[ \frac{|\pb{x}|^2}{E+E_B+i\epsilon} + \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{|\pk{x}|^2- |\psi_0(x)|^2}{E-k^2+i\epsilon}\right].\label{deltab2}
\end{align}
where $\psi_0(x)$ is the free particle wave function (it is still dependent on $k$, but we use $\psi_0(x)$ for notational convenience). In appendix \ref{appendixSpectraltoBU} we verify that this form of writing the second virial coefficient using the spectral density function is equivalent to the BU formula. So, it may seem that this method of calculating the shift in the second virial coefficient will yield the same incorrect result for $\delta b_2$ in the case of a delta-function potential. Indeed, with a naive straightforward calculation one would end up with the same result. However, a careful consideration of divergent quantities (like the normalization of scattering states), which can explicitly be recognized in this method, yields the correct expression for $\delta b_2$ and provides a modification of the BU formula.
\noindent To arrive at the BU formula involving phase shift from the expression with density of states, we have used a certain approximation (see appendix \ref{appendixbu1d}). First, we evaluated the number of states in a large box of volume $L$ and then took the limit $L\rightarrow \infty$. The error in following this argument is in general negligible but in certain cases this boundary condition can contribute significantly. We will see in the following discussion how this approximation contributes to the evaluation of $\delta b_2$ in the spectral density formalism.
\noindent From Eq. (\ref{deltab2}), the calculation of the bound state energy part is rather straightforward and gives a contribution of $e^{\beta E_B}$ (see appendix \ref{appendixSpectraltoBU}). So, here we consider the scattering part of $\delta b_2$ and denote it by $\delta b_2^{sc}$ which is given by
\begin{equation}
\delta b_2^{sc} = \int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E}\left(-\frac{1}{\pi}\right)\int_{-\infty}^{\infty} dx \text{ Im}\left[ \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{|\pk{x}|^2- |\psi_0(x)|^2}{E-k^2+i\epsilon}\right].
\end{equation}
The imaginary part of the denominator in the limit $\epsilon \rightarrow 0$ gives a delta-function in $k$ (Sokhotski-Plemelj theorem), yielding
\begin{align}
\delta b_2^{sc} = \int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E}\int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} \frac{dk}{2\pi} \left(\left|\pk{x}\right|^2- \left|\psi_0(x)\right|^2\right) \delta(E-k^2).
\end{align}
We can write the scattering states for a delta-function potential as \cite{griffiths2018introduction}
\begin{equation}
\pk{x} = e^{ikx} + R(|k|) e^{i|k||x|}, \hskip 1em R(|k|) = -\frac{\kappa}{\kappa + i|k|},
\end{equation}
where $\kappa = \sqrt{E_B}$ is the wave vector corresponding to the bound state energy. After performing the integration over $k$ (using $\delta(f(x)) = \sum_{x_0} \frac{\delta(x-x_0)}{|f'(x_0)|}$, where $f(x_0)=0$) we get
\begin{align}
\delta b_2^{sc} = -\int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E}\int_{-\infty}^{\infty} dx \frac{\sqrt{E_B}}{\pi\sqrt{E}(E+E_B)}\bigg[&\sqrt{E_B} \cos\left(\sqrt{E}(|x|-x)\right)\nonumber\\
&+\sqrt{E}\sin\left(\sqrt{E}(|x|-x)\right)-\frac{\sqrt{
E_B}}{2}\bigg].
\end{align}
The $x$ integrals of the oscillating terms are divergent. The physical origin of this divergence is the non-normalizability of the scattering states. To work around this divergence, we introduce an infrared cutoff regularization procedure. In this process we change the limit of integration $\int_{-\infty}^{\infty} dx \rightarrow \int_{-a}^{a}dx$ with $a\rightarrow\infty$. This is conceptually the same as considering a box of length $2a$ and then taking the boundaries to infinity. We only take the limit $a\rightarrow\infty$ after performing the $E$ integral and hope that the divergences disappear. The quantity with the cutoff regularization becomes,
\begin{align}
\delta b_2^{sc} = -\lim_{a\rightarrow \infty}\int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E}\int_{-a}^{a} dx \frac{\sqrt{E_B}}{\pi\sqrt{E}(E+E_B)}\bigg[&\sqrt{E_B} \cos\left(\sqrt{E}(|x|-x)\right)\nonumber\\
&+\sqrt{E}\sin\left(\sqrt{E}(|x|-x)\right)-\frac{\sqrt{
E_B}}{2}\bigg].
\end{align}
The integration over $x$ is now finite and can be carried out to yield
\begin{align}
-\lim_{a\rightarrow \infty}\int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E} \frac{\sqrt{E_B}}{2\pi\sqrt{E}(E+E_B)}\left[1-\cos(2\sqrt{E}a)+\frac{\sqrt{E_B}}{\sqrt{E}}\sin(2\sqrt{E}a)\right].
\end{align}
The first term in the bracket is independent of the cutoff parameter $a$, and produces the same result as the BU formula,
\begin{equation}
-\int_{0}^{\infty} dE \hskip 0.2em e^{-\beta E} \frac{\sqrt{E_B}}{2\pi\sqrt{E}(E+E_B)} = -\frac{
1}{2} e^{\beta E_B}\left(1-\text{erf}\left[\sqrt{\beta E_B}\right]\right).
\end{equation}
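This identity can be confirmed by direct quadrature after substituting $E=k^2$, which removes the integrable $1/\sqrt{E}$ singularity at the origin; the parameter values $\beta=E_B=1$ below are arbitrary and the check is our own:

```python
import numpy as np
from math import erf, exp, sqrt, pi

beta, E_B = 1.0, 1.0

# substitute E = k^2: dE = 2k dk cancels the 1/sqrt(E), leaving a smooth integrand
k = np.linspace(0.0, 10.0, 200_001)
f = np.exp(-beta * k**2) * sqrt(E_B) / (pi * (k**2 + E_B))
lhs = -np.sum((f[:-1] + f[1:]) * np.diff(k)) / 2   # trapezoidal rule

rhs = -0.5 * exp(beta * E_B) * (1 - erf(sqrt(beta * E_B)))
```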
So, the correction term to the BU formula appears from the cutoff regularization and needs to be evaluated explicitly. With the substitution of $E=k^2$, the term involving the cutoff parameter becomes
\begin{align}
\lim_{a\rightarrow \infty}\int_{0}^{\infty} dk \hskip 0.2em e^{-\beta k^2} \frac{\sqrt{E_B}}{\pi(k^2+E_B)}\left[\cos(2ka)-\frac{\sqrt{E_B}}{k}\sin(2ka)\right]. \label{cutoffcorrection}
\end{align}
The first term within the brackets can be rewritten as
\begin{equation}
\lim_{a\rightarrow \infty}\frac{1}{2\pi} \int_{-\infty}^{\infty} dk \hskip 0.2em e^{-\beta k^2} \frac{\sqrt{E_B}}{(k^2+E_B)} e^{2ika} = 0.\label{cutoffcos}
\end{equation}
The integration can be done by considering a semicircular contour in the upper half plane because $a>0$ (Fig. \ref{fig:contours}(a)). The pole $k=+i\sqrt{E_B}$ contributes to the residue and in the limit $a\rightarrow \infty$, the residue vanishes. The second term in Eq. (\ref{cutoffcorrection}), however, has three poles, at $k=\pm i \sqrt{E_B}$ and $k=0$. We consider a similar semicircular contour in the upper half plane but now there is another pole on the real axis $k=0$ (Fig. \ref{fig:contours}(b)). To avoid this, we take an infinitesimal semicircular detour around it (with radius $\epsilon\rightarrow 0$). The contribution from the pole inside the contour gives a zero contribution in the limit $a\rightarrow\infty$ like before but the small infinitesimal arc contributes a $-\frac{1}{2}$ term,
\begin{equation}
-\lim_{a\rightarrow \infty}\frac{1}{2\pi i} \int_{-\infty}^{\infty} dk \hskip 0.2em e^{-\beta k^2} \frac{E_B}{k(k^2+E_B)} e^{2ika} = -\frac{1}{2}. \label{cutoffsin}
\end{equation}
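The contour result can also be checked on the real axis: for large but finite $a$, numerical quadrature of the oscillatory term already sits close to $-1/2$ (the parameter values are arbitrary; this check is our own):

```python
import numpy as np

beta, E_B, a = 1.0, 1.0, 60.0

# fine grid resolving the sin(2ak) oscillations; sin(2ak)/k stays finite at k -> 0
k = np.linspace(1e-9, 12.0, 1_200_001)
f = -np.exp(-beta * k**2) * E_B * np.sin(2 * a * k) / (np.pi * k * (k**2 + E_B))
val = np.sum((f[:-1] + f[1:]) * np.diff(k)) / 2   # trapezoidal rule
```

Increasing $a$ pushes the value closer to $-1/2$, in agreement with the pole contribution at $k=0$.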
\begin{figure}
\centering
\begin{subfigure}{0.47\textwidth}
\begin{tikzpicture}
\draw (-3.3, 0) -- (3.3,0)
(0, -3.3) -- (0, 3.3);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.25 with {\arrow{latex}},
mark=at position 0.75 with {\arrow{latex}}},
postaction={decorate}]
(-3,0) -- (3,0);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.2 with {\arrow{latex}},
mark=at position 0.4 with {\arrow{latex}},
mark=at position 0.6 with {\arrow{latex}},
mark=at position 0.8 with {\arrow{latex}}},
postaction={decorate}]
(3,0) arc (0:180:3) -- (-3,0);
\begin{scriptsize}
\draw [fill=black] (0,2)circle (3.5pt);
\draw [fill=black] (0,-2)circle (3.5pt);
\end{scriptsize}
\node at (3,-0.45){$Re\{k\}$};
\node at (-0.1,3.7) {$Im\{k\}$};
\node at (1,2) {$i\sqrt{E_B}$};
\node at (1,-2) {$-i\sqrt{E_B}$};
\end{tikzpicture}
\caption{} \label{ta}
\end{subfigure}
\hspace*{\fill}
\begin{subfigure}{0.47\textwidth}
\begin{tikzpicture}
\draw (-3.3, 0) -- (3.3,0)
(0, -3.3) -- (0, 3.3);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.5 with {\arrow{latex}}},
postaction={decorate}]
(-3,0) -- (-0.5,0);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.5 with {\arrow{latex}}},
postaction={decorate}]
(0.5,0) -- (3,0);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.2 with {\arrow{latex}},
mark=at position 0.4 with {\arrow{latex}},
mark=at position 0.6 with {\arrow{latex}},
mark=at position 0.8 with {\arrow{latex}}},
postaction={decorate}]
(3,0) arc (0:180:3) -- (-3,0);
\draw[thick,black,xshift=2pt,
decoration={ markings,
mark=at position 0.5 with {\arrow{latex}}},
postaction={decorate}]
(-0.5,0) arc (180:0:0.5) -- (0.5,0);
\begin{scriptsize}
\draw [fill=black] (0,0) circle (3.5pt);
\draw [fill=black] (0,2)circle (3.5pt);
\draw [fill=black] (0,-2)circle (3.5pt);
\end{scriptsize}
\node at (3,-0.45){$Re\{k\}$};
\node at (-0.45,3.7) {$Im\{k\}$};
\node at (1,2) {$i\sqrt{E_B}$};
\node at (1,-2) {$-i\sqrt{E_B}$};
\end{tikzpicture}
\caption{} \label{tb}
\end{subfigure}
\hspace*{\fill}
\caption{(a) Contour for the integration in Eq. (\ref{cutoffcos}). The poles are at $k=\pm i\sqrt{E_B}$ but there is no pole at $k=0$. (b) Integration contour for Eq. (\ref{cutoffsin}). The poles are at $k=\pm i\sqrt{E_B}$ and at $k=0$. The small detour around $k=0$ contributes the extra $-1/2$ term.}
\label{fig:contours}
\end{figure}
\noindent Combining Eq. (\ref{cutoffcos}), (\ref{cutoffsin}) and the contribution from the bound state part we now have the complete result,
\begin{equation}
\delta b_2 = e^{\beta E_B} -\frac{
1}{2} e^{\beta E_B}\left(1-\text{erf}\left[\sqrt{\beta E_B}\right]\right) - \frac{1}{2} . \label{deltab2corrected}
\end{equation}
We see that in the limit $E_B\rightarrow 0$, this now gives the correct answer which is $\delta b_2 = 0$. The $-1/2$ term came from the residue of the pole at $k=0$, which is equivalent to the contribution of the zero-energy state. It shows that we must be careful while dealing with the boundary terms that produce a divergence (taking the limit that the box length goes to infinity).
\section{Correction to BU formula in 1D and Levinson's theorem \label{sec:Levinson}}
In the previous section we found that there are subtle correction terms to the BU formula appearing from the infrared regularization procedure and carefully taking the length of the box to infinity. So, where did it go wrong in the BU formula derivation? As shown in appendix \ref{appendixSpectraltoBU}, the scattering part of Eq. (\ref{deltab2}) can also be rewritten as
\begin{equation}
\delta b_2^{sc} = \int_{0}^{\infty} \frac{dk}{\pi} \hskip 0.2em e^{-\beta k^2} \int_{-\infty}^{\infty} dx \left(|\pk{x}|^2- |\psi_0(x)|^2\right)\label{probdensitymain}.
\end{equation}
Looking closely at Eq. (\ref{probdensitymain}), one can recognize that the $x$ integral over the scattering states is divergent as well. Instead of treating this divergence carefully, we rewrote the probability density (density of states) in terms of phase shifts assuming that there is no consequence of ignoring the effects of taking the box boundaries to infinity. In fact, one can carry out the same cutoff regularization scheme starting from Eq. (\ref{probdensitymain}) (considering $\pk{x} = \cos(k|x|+\delta(k))$) and get a more generalized correction term involving phase shifts equivalent to Eq. (\ref{cutoffcorrection}). This approach gives us the modification to the BU formula
\begin{gather}
\delta b_2 = \sum_{B} e^{\beta E_B} + \int_{0}^{\infty} \frac{dk}{\pi} \frac{d\delta(k)}{dk}\hskip 0.2em e^{-\beta k^2} + I,\label{BUcorrected}\\
\text{where } I = \lim_{a\rightarrow \infty}\int_{0}^{\infty} dk \hskip 0.2em \frac{e^{-\beta k^2}}{2\pi k}\left[\sin(2ka)(\cos(2\delta(k))-1) + \cos(2ka)\sin(2\delta(k))\right]. \label{correctionphaseshift}
\end{gather}
One can check that Eq. (\ref{correctionphaseshift}) and Eq. (\ref{cutoffcorrection}) are identical by plugging in the phase shift for the delta-function potential $\delta(k) = \arctan\left(\frac{\sqrt{E_B}}{k}\right)$. The contribution of the correction term depends on the nature of the interacting system at $k=0$. For a system with a bound state at zero energy ($k=0$), the integral $I$ vanishes in the limit $a\rightarrow \infty$ (because there is no pole anymore at $k=0$). Otherwise it will contribute a nonzero correction to Eq. (\ref{BUcorrected}). This statement has a very close relation to Levinson's theorem \cite{levinson1949determination,levinson1949uniqueness,newton1977noncentral,wellner1964levinson}. Levinson's theorem relates the number of bound states for a symmetric potential to the phase shift at zero energy and infinite energy.
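The claimed equivalence is a short trigonometric identity: for $\delta(k)=\arctan(\sqrt{E_B}/k)$ one has $\cos(2\delta)-1=-2E_B/(k^2+E_B)$ and $\sin(2\delta)=2\sqrt{E_B}\,k/(k^2+E_B)$. The snippet below (our own, with arbitrary parameter values) verifies that the two integrands agree pointwise on a grid:

```python
import numpy as np

beta, E_B, a = 1.0, 1.0, 7.0
kappa = np.sqrt(E_B)

k = np.linspace(0.01, 20.0, 4001)
delta = np.arctan(kappa / k)

# integrand of the general phase-shift form of the correction term
general = (np.exp(-beta * k**2) / (2 * np.pi * k)) * (
    np.sin(2 * a * k) * (np.cos(2 * delta) - 1)
    + np.cos(2 * a * k) * np.sin(2 * delta))

# integrand specialized to the delta-function potential
delta_potential = (np.exp(-beta * k**2) * kappa / (np.pi * (k**2 + E_B))) * (
    np.cos(2 * a * k) - (kappa / k) * np.sin(2 * a * k))
```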
\noindent To see how the spectral density formalism relates to Levinson's theorem in 1D, we refer back to Eq. (\ref{dos_spectral}). Instead of finding $\delta b_2$, now we want to calculate $\Delta N$ by integrating over all energy. Integrating over all energies should give $\Delta N = 0$ because the introduction of the potential only changes the density of states, but we still are within the same Hilbert space \cite{weinberg2015lectures}. From Eq. (\ref{dos_spectral}), integrating both sides w.r.t. $E$, we obtain
\begin{equation}
\Delta N = \int_{-\infty}^{\infty} dE \hskip 0.2em \left(-\frac{1}{\pi}\right)\int_{-\infty}^{\infty} dx \text{ Im}\left[\sum_B \frac{|\pb{x}|^2}{E+E_B+i\epsilon} + \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{|\pk{x}|^2- |\psi_0(x)|^2}{E-k^2+i\epsilon}\right] \label{deltaN}.
\end{equation}
Here, we have introduced a sum over the number of bound states in the expression of the Green's function. In case of the attractive delta-function potential there is just one bound state, but in general there can be more than one bound state for a given potential. In Eq. (\ref{deltaN}) the $k$ and $x$ integration can be carried out in the same way as in Sec. \ref{sec:Spectral} to arrive at the result
\begin{equation}
\Delta N = \sum_{B}\int_{-\infty}^{0}dE \hskip 0.2em \delta(E+E_B) + \frac{1}{\pi}\int_{0}^{\infty} dk \hskip 0.2em \frac{d\delta}{dk} + I_1,
\end{equation}
where
\begin{equation}
I_1 = \lim_{a\rightarrow \infty}\int_{0}^{\infty} dk \hskip 0.2em \frac{1}{2\pi k}\left[\sin(2ka)(\cos(2\delta(k))-1) + \cos(2ka)\sin(2\delta(k))\right].
\end{equation}
Carrying out the $E$ integral on the RHS now gives
\begin{align}
\Delta N &= N_B + \frac{\delta(\infty)-\delta(0)}{\pi} + I_1\\
\implies N_B &= \frac{\delta(0)-\delta(\infty)}{\pi} - I_1. \label{Levinson}
\end{align}
where $N_B$ is the number of bound states. Eq. (\ref{Levinson}) is precisely Levinson's theorem in 1D for the even parity case, which relates the number of bound states to the scattering phase shift. For a potential with $E_B> 0$ ($E=-E_B<0$), $I_1$ is non-zero; for a half bound state ($E_B=0$) it vanishes. In the case of the delta-function potential, which does not support any half bound state, $I_1 = -1/2$ and $\delta(0) = \pi/2$, giving $N_B = 1$, which is exactly the number of bound states supported by the potential.
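This bookkeeping can be reproduced numerically for the delta-function potential: evaluating $I_1$ at large but finite $a$ with $\delta(k)=\arctan(\sqrt{E_B}/k)$ gives $I_1\approx -1/2$ and hence $N_B\approx 1$ (the parameter values are arbitrary; the check is our own):

```python
import numpy as np

E_B, a = 1.0, 30.0
kappa = np.sqrt(E_B)

# fine grid resolving the sin(2ak) oscillations; the integrand is finite at k -> 0
k = np.linspace(1e-9, 60.0, 1_200_001)
delta = np.arctan(kappa / k)
f = (1.0 / (2 * np.pi * k)) * (np.sin(2 * a * k) * (np.cos(2 * delta) - 1)
                               + np.cos(2 * a * k) * np.sin(2 * delta))
I1 = np.sum((f[:-1] + f[1:]) * np.diff(k)) / 2   # trapezoidal rule

delta_0, delta_inf = np.pi / 2, 0.0              # phase shift at k = 0 and k -> infinity
N_B = (delta_0 - delta_inf) / np.pi - I1
```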
\noindent One can show Levinson's theorem in 1D using a number of methods, including the Sturm--Liouville method and the $S$-matrix method \cite{dong2000levinson,sassoli1994levinson,barton1985levinson,eberly1965quantum}. From those calculations one ends up with the following form of Levinson's theorem (in the even parity case),
\begin{equation}
N_B = \begin{cases}
\frac{\delta(0)-\delta(\infty)}{\pi} \text{ \hskip 2.9em for critical case}\\
\frac{\delta(0)-\delta(\infty)}{\pi} + \frac{1}{2} \text{ \hskip 1em for non-critical case}.
\end{cases}
\end{equation}
where by the critical case we mean the case with $E_B = 0$ (half bound state) and by the non-critical case $E_B>0$. So, we see that the value of the correction term $I_1$ is actually universal and is equal to $-1/2$ (for $E_B>0$) irrespective of the exact form of the potential. We have used the even parity case $(\pk{x} = \cos(k|x|+\delta(k)))$ to arrive at Levinson's theorem and the correction term to the BU formula using the spectral density method. One can similarly use an odd parity scattering wave function and follow the same regularization procedure to arrive at the odd parity results of Levinson's theorem as well.
\section{Discussion \label{sec:Conclusion}}
Cutoff regularization schemes, both ultraviolet (high energy/momentum) and infrared (large volume/small energy/momentum), are widely used in quantum field theory to deal with infinities in the calculations. In this note we saw how using an infrared cutoff can yield the correction term in the original BU formula in 1D, and how it leads to the unusual form of Levinson's theorem in this case. Compared to other methods, the spectral density method provides a rather straightforward and physically insightful way to derive the extra $-1/2$ correction term in the BU formula, as well as the corresponding Levinson's theorem in 1D. We have also used a quantum field theory method that shows that it is only the zero-energy behavior of the theory, regardless of the actual functional form of the potential, that dictates these correction terms; hence we see an apparently surprising universality of the values of the integrals $I$ and $I_1$.
\section*{Acknowledgment}
This work was supported in part by the U.S. National Science Foundation under Grant No. PHY1452635 (Computational Physics Program), the US Army Research Office Grant No. W911NF-15-1-0445, and the University of San Francisco Faculty Development Fund.
\section{Introduction}
Rotational core collapse occurs in the first phase of the collapse-driven
supernova explosion. The collapse takes place in a dynamical time scale and
the bounce occurs when the central part of the star collapses into a nuclear
density and then stiffens its equation of state. The final outcome of the
collapse forms a compact object, such as a neutron star or a black hole,
which is a candidate for sources of gravitational waves \citep{CT02}. There
are two different approaches to investigate the rotational core collapse. One
is to take into account the realistic picture of the rotational collapse such
as realistic equation of state of the neutron star and/or neutrino effect in
Newtonian gravity \citep[e.g.][]{FH00,KYS03,MRBJS}. The other is to
illustrate simplified physics in relativistic gravitation
\citep[e.g.][]{DFM01,DFM02}. Since one of our aims in this paper is to
investigate the possibility of gravitational wave sources in rotational core
collapse, we should at least take into account relativistic gravitation,
which significantly affects the quantitative behavior of gravitational waves
from rotational core collapse \citep{DFDFIMS}.
Dynamical bar instability in a rotating equilibrium star takes place when the
ratio $\beta (\equiv T/W)$ between rotational kinetic energy $T$ and the
gravitational binding energy $W$ exceeds the critical value $\beta_{\rm dyn}$.
Determining the onset of the dynamical bar-mode instability, as well as the
subsequent evolution of an unstable star, requires a fully nonlinear
hydrodynamic simulation. Simulations performed in Newtonian gravity
\citep[e.g.][]{TDM,DGTB,WT,HCS,SHC,HC,TIPD,NCT,LL,Liu02} have shown that
$\beta_{\rm dyn}$ depends only very weakly on the stiffness of the equation of
state. $\beta_{\rm dyn}$ becomes small for stars with high degree of
differential rotation \citep{TH90,PDD,SKE}. Simulations in relativistic
gravitation \citep{SBS00,SSBS01} have shown that $\beta_{\rm dyn}$ decreases
with the compaction of the star, indicating that relativistic gravitation
enhances the bar-mode instability. The dynamical bar instability potentially
occurs during the collapse since the nondimensional value $\beta$ scales as
$R^{-1}$ in the dimensional analysis where $R$ is the radius of the star. Let
us briefly discuss a picture of core bounce. When the star collapses, $\beta$
increases and the star starts to deform its shape to form a bar when the star
exceeds the critical value of dynamical instability. During the bounce phase,
$\beta$ falls below the threshold of dynamical instability, and the star
cannot deform further. In such a situation, what happens to the
nonaxisymmetric deformation of the star?
There are several papers that investigate rotational core collapse as a
potential source of gravitational waves in axisymmetric spacetime in Newtonian
gravity \citep{ZM,KYS03}, in conformally flat spacetime
\citep{DFM01,DFM02,Siebel03}, and in full general relativity
\citep{SAF,SS04,Shibata04} to estimate the amount of gravitational radiation
\citep{FHH02}. Recently, 3D calculations in full general relativity have been
established to investigate the nonaxisymmetric deformation of the star in
rotational core collapse. \citet{DSY} investigated the collapse of a
differentially rotating $n=1$ polytropic star in 3D by depleting the pressure
and found that the collapsing star forms a torus which fragments into
nonaxisymmetric clumps. \citet{SS05} investigated in rotational core collapse
and found that a burst type of gravitational waves was emitted. In addition,
they argued that a very limited window for the rotating star satisfies to
exceed the threshold of dynamical instability in the core collapse.
\citet{Zink05} presented the fragmentation of an $n=3$ toroidal polytropic
star into a binary system by inducing an $m=2$ density perturbation, and
claimed that this constitutes a new scenario for forming a binary black hole.
Our purpose in this paper is twofold. One is to investigate a
necessary condition and mechanism to enhance the dynamical bar instability in
a collapsing star. \citet{Brown} performed long time integration of the
rotational core collapse to investigate whether a dynamical bar can
significantly form during the evolution. He found that the dynamical bar
instability sets in at $\beta_{\rm dyn} \approx 0.23$, which is far below the
standard value $\beta_{\rm dyn} \approx 0.27$. He found that the role of
dynamical bar in rotational core collapse may be quite different from that in
the equilibrium star. He also found that the bar instability grows slowly in
core bounce. His finding can be interpreted as indicating that the bar
instability is initiated by the interaction between the core and the
surrounding part of the star. Therefore the ``dynamical'' bar instability in
the collapsing star is quite different from that in an equilibrium star.
The other is the importance of probing whether the rotational core collapse
becomes a promising source of gravitational waves. Direct detection of
gravitational waves by ground based and space based interferometers is of
great importance in general relativity, in astrophysics, and in cosmology.
Once a bar has formed in the neutron star, we may expect quasi-periodic
gravitational waves in the {\rm kHz} band, which may be detectable in advanced
LIGO \citep{CT02}. \citet{RMR} investigated the rotational core collapse
using parametric equation of state in Newtonian gravity, including the thermal
pressure and the nuclear matter when the density exceeds the threshold of
nuclear density. They found that although the rotating star has a
nonaxisymmetric deformation, the amplitude of gravitational waves does not
significantly grow during the deformation. In fact, they compared the
gravitational waveform computed in their simulation with that in the previous
2D calculation \citep{ZM} and found that the two are quite similar.
Therefore, even if the nonaxisymmetric instability takes place during the
collapse, the bar does not significantly grow at the core bounce. What else
do we need to enhance the bar formation in the rotational core collapse?
Our interest is to focus on the core bounce of the rotational core collapse
in order to investigate the angular momentum redistribution and to investigate
the possibility of gravitational wave sources. Accordingly we construct a
simplified model to investigate the above two issues. We deplete the pressure of
the equilibrium star to initiate collapse and bounce. We also concentrate on
the structure of the star, spheroidal and toroidal, to investigate the
dynamical bar instability in the collapsing star. Stellar collapses and
mergers may also lead to differentially rotating stars \citep[e.g.][]{Ott04}.
For the coalescence of binary irrotational neutron stars
\citep{SU00,SU02,STU03}, the presence of differential rotation may temporarily
stabilize the ``hypermassive'' remnant, which develops a toroidal structure
and may therefore have important dynamical effects. Although they use a stiff
equation of state ($n=1$), it is thus possible for a toroidal star to form in
nature.
This paper is organized as follows. In Sec. \ref{sec:bequation} we summarize
our basic equation of relativistic hydrodynamics in conformally flat
spacetime. In Sec. \ref{sec:NR} we show our numerical results of dynamical
bar instability in rotational core collapse, and summarize our findings in
Sec. \ref{sec:Discussion}. Throughout this paper we use geometrized units
($G = c = 1$) and adopt Cartesian coordinates $(x,y,z)$ with the coordinate
time $t$. Greek and Latin indices run over $(t, x, y, z)$ and $(x, y, z)$,
respectively.
\section{Relativistic Hydrodynamics in Conformally Flat Spacetime}
\label{sec:bequation}
In this section, we describe the basic equations in conformally flat spacetime
\citep[e.g.][]{IN,WM89,Saijo04}. We solve the fully relativistic equations of
hydrodynamics, but neglect nondiagonal spatial metric components.
\subsection{The gravitational field equations}
We define the spatial projection tensor $\gamma^{\mu\nu} \equiv g^{\mu\nu} +
n^{\mu} n^{\nu}$, where $g^{\mu\nu}$ is the spacetime metric, $n^{\mu} =
(1/\alpha, -\beta^i/\alpha)$ the unit normal to a spatial hypersurface, and
where $\alpha$ and $\beta^i$ are the lapse and shift. Within a first
post-Newtonian approximation, the spatial metric $g_{ij} = \gamma_{ij}$ may
always be chosen to be conformally flat
\begin{eqnarray}
\gamma_{ij} = \psi^{4} \delta_{ij},
\end{eqnarray}
where $\psi$ is the conformal factor \citep[see][]{Chandra65, BDS}. The
spacetime line element then reduces to
\begin{eqnarray}
&&
ds^{2} =
( - \alpha^{2} + \beta_{k} \beta^{k} ) dt^{2} + 2 \beta_{i} dx^{i} dt
+ \psi^{4} \delta_{ij} dx^{i} dx^{j}.
\nonumber
\\
\end{eqnarray}
We adopt maximal slicing, for which the trace of the extrinsic curvature
$K_{ij}$ vanishes,
\begin{equation}
K \equiv \gamma^{ij} K_{ij} = 0.
\end{equation}
The gravitational field equations in conformally flat spacetime for the five
unknowns $\alpha$, $\beta^i$, and $\psi$ can then be derived conveniently
from the 3+1 formalism. The equations for the lapse $\alpha$, shift
$\beta^{i}$, and conformal factor $\psi$, with maximal slicing implying
$\partial_t K = 0$, are written as
\begin{widetext}
\begin{eqnarray}
&&
\triangle (\alpha \psi) = 2 \pi \alpha \psi^{5}
(\rho_{\rm H} + 2 S) + \frac{7}{8} \alpha \psi^{5} K_{ij} K^{ij},
\\
&&
\delta_{il} \triangle \beta^{l} +
\frac{1}{3} \partial_{i} \partial_{l} \beta^{l} =16 \pi \alpha J_{i}
+
\left(\partial_{j}\ln \left( \frac{\alpha}{\psi^{6}} \right)\right)
\left(
\partial_{i} \beta^{j} + \delta_{il} \delta^{jk} \partial_{k} \beta^{l} -
\frac{2}{3} \delta_{i}^{j} \partial_{l} \beta^{l}
\right),
\\
&&
\triangle \psi =
- 2 \pi \psi^{5} \rho_{\rm H} - \frac{1}{8} \psi^{5} K_{ij} K^{ij},
\label{eqn:HamiltonianC}
\end{eqnarray}
\end{widetext}
where $S = \gamma_{jk} T^{jk}$, $\Delta \equiv \delta^{ij} \partial_i
\partial_j$ is the flat space Laplacian and $J_{i} \equiv -n^{\mu}
\gamma^{\nu}_{~i} T_{\mu \nu}$ is the momentum density. In the definition of
$J_{i}$, $T_{\mu\nu}$ is the stress energy tensor, $\rho_{\rm H} \equiv
n^{\mu} n^{\nu} T_{\mu \nu}$ is the mass-energy density measured by a normal
observer.
\subsection{The matter equations}
For a perfect fluid, the energy momentum tensor takes the form
\begin{equation}
T^{\mu \nu} =
\rho \left( 1 + \varepsilon + \frac{P}{\rho} \right) u^{\mu} u^{\nu} +
Pg^{\mu\nu},
\end{equation}
where $\rho$ is the rest-mass density, $\varepsilon$ the specific internal
energy, $P$ the pressure, and $u^{\mu}$ the four-velocity. We adopt a
$\Gamma$-law equation of state in the form
\begin{equation}
P = (\Gamma - 1) \rho \varepsilon,
\label{eqn:gammalaw1}
\end{equation}
where $\Gamma$ is the adiabatic index which we set $\Gamma = 5/3$, $3/2$,
$7/5$ in this paper. In the absence of thermal dissipation,
Eq.~(\ref{eqn:gammalaw1}), together with the first law of thermodynamics,
implies a polytropic equation of state
\begin{equation}
P = \kappa \rho^{1+1/n},
\end{equation}
where $n=1/(\Gamma-1)$ is the polytropic index and $\kappa$ is a constant.
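As a quick numerical illustration (not code from the paper), the following Python sketch checks that the $\Gamma$-law and polytropic forms agree for adiabatic flow, where $\varepsilon = \kappa \rho^{1/n}/(\Gamma-1)$; all numerical values are arbitrary examples.

```python
# Hedged sketch (not from the paper): check that the Gamma-law EOS and the
# polytropic EOS agree for adiabatic flow with eps = kappa*rho**(1/n)/(Gamma-1).

def pressure_gamma_law(rho, eps, gamma):
    """P = (Gamma - 1) * rho * eps."""
    return (gamma - 1.0) * rho * eps

def pressure_polytrope(rho, kappa, n):
    """P = kappa * rho**(1 + 1/n) with n = 1/(Gamma - 1)."""
    return kappa * rho ** (1.0 + 1.0 / n)

gamma = 5.0 / 3.0            # one of the adiabatic indices used in the paper
n = 1.0 / (gamma - 1.0)      # corresponding polytropic index, n = 1.5
rho, kappa = 2.0, 1.0        # illustrative values
eps = kappa * rho ** (1.0 / n) / (gamma - 1.0)
p1 = pressure_gamma_law(rho, eps, gamma)
p2 = pressure_polytrope(rho, kappa, n)
assert abs(p1 - p2) < 1e-12
```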
From $\nabla_{\mu} T^{\mu\nu}=0$ together with the equation of state (Eq.
[\ref{eqn:gammalaw1}]), we can derive the energy and Euler equations according
to
\begin{widetext}
\begin{eqnarray}
&&
\frac{\partial e_{*}}{\partial t}+
\frac{\partial (e_{*} v^{j})}{\partial x^{j}} =
- \frac{1}{\Gamma}(\rho \epsilon)^{-1+1/\Gamma}
P_{\rm vis}
\frac{\partial}{\partial x^{i}}
( \alpha u^{t} \psi^{6} v^{i} )
\label{eqn:Energy}
,\\
&&
\frac{\partial(\rho_{*} \tilde u_{i})}{\partial t}
+ \frac{\partial (\rho_* \tilde u_{i} v^{j})}{\partial x^{j}}
=
- \alpha \psi^{6} (P + P_{\rm vis})_{,i}
- \rho_{*} \alpha \tilde u^{t} \alpha_{,i}
+ \rho_{*} \tilde u_{j} \beta^{j}_{~,i}
+ \frac{2 \rho_{*} \tilde u_{k} \tilde u_{k}}{\psi^{5} \tilde u^{t}}
\psi_{,i}
,
\label{eqn:Euler}
\end{eqnarray}
\end{widetext}
where
\begin{eqnarray}
e_{*} &=& (\rho \varepsilon)^{1/\Gamma} \alpha u^{t} \psi^{6},\\
v^{i} &=& {dx^i \over dt}=\frac{u^{i}}{u^{t}},\\
\rho_{*} &=& \rho \alpha u^{t} \psi^{6},\\
\tilde{u}^{t} &=& ( 1 + \Gamma \varepsilon ) u^{t}
,\\
\tilde{u}_{i} &=& ( 1 + \Gamma \varepsilon ) u_{i}
,
\end{eqnarray}
and $v^{i}$, $u^{\mu}$, and $P_{\rm vis}$ are the 3-velocity, the 4-velocity,
and the viscous pressure, respectively. Note that we treat the matter fully
relativistically; the conformally flat approximation only enters through
simplifications in the coupling to the gravitational fields. In order to
treat shocks we also need to solve the continuity equation
\begin{equation}
\frac{\partial \rho_{*}}{\partial t}
+\frac{\partial (\rho_{*} v^{i})}{\partial x^{i}} = 0,
\label{eqn:BConservation}
\end{equation}
separately.
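The conservative form of Eq. (\ref{eqn:BConservation}) is what makes the rest mass conserved to round-off error in a flux-conservative scheme. A minimal 1D Python sketch with an upwind flux (illustrative data and scheme, not the paper's code) shows the telescoping of interface fluxes:

```python
# Illustrative 1D sketch of a flux-conservative update for the continuity
# equation d(rho_*)/dt + d(rho_* v)/dx = 0; upwind interface fluxes,
# periodic boundaries, v > 0. The telescoping of interface fluxes is why
# the rest mass is conserved to round-off in such schemes.
import numpy as np

def advect_step(rho, v, dx, dt):
    """One conservative upwind step (v > 0, periodic boundaries)."""
    flux = rho * v                          # upwind flux F_{i+1/2} = rho_i v
    return rho - dt / dx * (flux - np.roll(flux, 1))

n = 64
dx, dt = 1.0 / n, 0.005                     # CFL = v*dt/dx = 0.32
rho = 1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(n) * dx)
m0 = rho.sum() * dx                         # initial "rest mass"
for _ in range(100):
    rho = advect_step(rho, 1.0, dx, dt)
assert abs(rho.sum() * dx - m0) < 1e-12     # conserved to round-off
```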
\begin{table*}
\caption
{Relativistic Rotating Equilibrium stars}
\begin{center}
\begin{tabular}{c c c c c c c c c c c}
\hline
\hline
Model &
$n$\footnotemark[1] &
$\hat{A}$\footnotemark[2] &
$R_{p}/R_{e}$\footnotemark[3] &
$\rho_{0}^{\rm max}$\footnotemark[4] &
$R_{c}$\footnotemark[5] &
$M$\footnotemark[6] &
$J$\footnotemark[7] &
$T/W$\footnotemark[8] &
$R_{c}/M$ &
$P_{\rm dep}$\footnotemark[9]
\\
\hline
I-a & $1.5 $ & $0.5$ & $0.467$ &
$8.26 \times 10^{-4}$ & $6.56$ &
$1.64 \times 10^{-1}$ & $4.49 \times 10^{-2}$ &
$1.59 \times 10^{-1}$ & $3.99 \times 10^{1}$ &
$70 \%$
\\
I-b & $1.5$ & $0.8$ & $0.467$ &
$1.36 \times 10^{-3}$ & $7.22$ &
$1.79 \times 10^{-1}$ & $5.25 \times 10^{-2}$ &
$1.59 \times 10^{-1}$ & $4.04 \times 10^{1}$ &
$70 \%$
\\
\hline
II-a & $2.0$ & $0.5$ & $0.450$ &
$1.10 \times 10^{-4}$ & $2.53 \times 10^{1}$ &
$6.29 \times 10^{-1}$ & $6.33 \times 10^{-1}$ &
$1.63 \times 10^{-1}$ & $4.03 \times 10^{1}$ &
$60 \%$
\\
II-b & $2.0$ & $0.6$ & $0.417$ &
$2.19 \times 10^{-4}$ & $2.64 \times 10^{1}$ &
$6.62 \times 10^{-1}$ & $6.69 \times 10^{-1}$ &
$1.62 \times 10^{-1}$ & $3.98 \times 10^{1}$ &
$60 \%$
\\
\hline
III-a & $2.5$ & $0.3$ & $0.417$ &
$1.05 \times 10^{-5}$ & $8.63 \times 10^{1}$ &
$2.16$ & $6.90$ &
$1.59 \times 10^{-1}$ & $3.99 \times 10^{1}$ &
$50 \%$
\\
III-b & $2.5$ & $0.42$ & $0.417$ &
$2.24 \times 10^{-5}$ & $8.54 \times 10^{1}$ &
$2.13$ & $6.58$ &
$1.60 \times 10^{-1}$ & $4.00 \times 10^{1}$ &
$50 \%$
\\
\hline
\end{tabular}
\label{tbl:initial}
\footnotetext[1]{Polytropic index}
\footnotetext[2]{Parameter of the degree of differential rotation}
\footnotetext[3]{Ratio of the polar proper radius to the
equatorial proper radius}
\footnotetext[4]{Maximum rest-mass density}
\footnotetext[5]{Equatorial circumferential radius}
\footnotetext[6]{Gravitational mass}
\footnotetext[7]{Angular momentum}
\footnotetext[8]{$T$: Rotational kinetic energy;
$W$: Gravitational binding energy}
\footnotetext[9]{Ratio of pressure depletion to the equilibrium star}
\end{center}
\end{table*}
\subsection{Numerical techniques for solving gravitational field equations}
We have reduced the Einstein equations in conformally flat spacetime to ten
elliptic equations for the ten variables ($B_{i}$, $\chi$, $\psi$,
$\alpha \psi$, $P_{i}$, $\eta$), using the same techniques as in our previous
paper \citep{Saijo04}
\begin{eqnarray}
\Delta B_{i} &=& 8 \pi \psi^{6} J_{i} \equiv 4 \pi S_{B_{i}},
\label{eqn:CFBx}
\\
\Delta \chi &=& - 8 \pi \psi^{6} J_{i} x^{i} \equiv 4 \pi S_{\chi},
\\
\Delta \psi &=&
- 2 \pi \psi^{5} \rho_{\rm H} -
\frac{1}{8} \psi^{-7} \hat{A}_{ij} \hat{A}^{ij}
\equiv 4 \pi S_{\psi},
\\
\Delta (\alpha \psi) &=&
2 \pi \alpha \psi (\rho_{\rm H} + 2 S) +
\frac{7}{8} \alpha \psi^{-7} \hat{A}_{ij} \hat{A}^{ij}
\nonumber \\
&\equiv& 4\pi S_{\alpha\psi},
\\
\Delta P_{i} &=& 4 \pi \alpha \hat{J}_{i} \equiv 4 \pi S_{P_{i}},
\\
\Delta \eta &=& -4 \pi \alpha \hat{J}_{i} x^{i} \equiv 4 \pi S_{\eta}
\label{eqn:CFeta}
.
\end{eqnarray}
We use the asymptotic fall-off behavior of the metric quantities at large
radius to set appropriate boundary conditions at the grid edge
\citep{Saijo04}. The definitions of $B_{i}$, $\chi$, $P_{i}$, $\eta$ and the
basic procedure for solving the ten elliptic equations are given in Ref.
\citep{Saijo04}. Since we have parallelized our code, we now solve the
elliptic equations with a preconditioned conjugate gradient (PCG) method
\citep[e.g.][]{PCG}.
We monitor the rest mass $M_{0}$, gravitational mass $M$, and the angular
momentum $J$
\begin{eqnarray}
M_{0} &=& \int \rho_{*} d^{3}x
,\\
M &=& - \frac{1}{2 \pi} \oint_{\infty} \nabla^{i} \psi dS_{i}
\nonumber \\
&=&
\int
\biggl[
\left[
( \rho + \rho \varepsilon + P ) (\alpha u^{t})^{2} - P
\right]
\psi^{5}
\nonumber \\
&&
+ \frac{1}{16 \pi} \psi^{5} K_{ij} K^{ij}
\biggl] d^{3}x
,\\
J &=&
- \frac{1}{2 \pi} \oint_{\infty} ( x K^{j}_{y} - y K^{j}_{x} ) \psi^{6} dS_{i}
\nonumber \\
&=&
\int (x J_{y} - y J_{x}) \psi^{6} d^{3} x
,
\end{eqnarray}
during the evolution. We also compute rotational kinetic energy $T$, proper
mass $M_{p}$, gravitational binding energy $W$ as
\begin{eqnarray}
M_{p} &=&
\int \rho u^{t} ( 1 + \epsilon ) \sqrt{-g} d^{3} x
\nonumber \\
&=&
\int \rho_{*} ( 1 + \varepsilon ) d^{3} x
,\\
T &=&
\frac{1}{2} \int \Omega T^{t}_{\phi} \sqrt{-g} d^{3} x
\nonumber \\
&=& \frac{1}{2} \int \Omega (x J_{y} - y J_{x}) \psi^{6} d^{3} x
,\\
W &=&
M_{p} + T - M,
\end{eqnarray}
where $\Omega$ is the angular velocity of the star.
Since we use a polytropic equation of state at $t=0$, it is convenient to
rescale all quantities with respect to $\kappa$. Since $\kappa^{n/2}$ has
dimensions of length, we introduce the following nondimensional variables
\citep[e.g.][]{CST92}
\begin{equation}
\begin{array}{c c c}
\bar{t} = \kappa^{-n/2} t
, &
\bar{x} = \kappa^{-n/2} x
, &
\bar{y} = \kappa^{-n/2} y
, \\
\bar{z} = \kappa^{-n/2} z
, &
\bar{\Omega} = \kappa^{n/2} \Omega
, &
\bar{M} = \kappa^{-n/2} M
, \\
\bar{R} = \kappa^{-n/2} R
, &
\bar{J} = \kappa^{-n} J
. &
\end{array}
\end{equation}
Henceforth, we adopt nondimensional quantities, but omit the bars for
convenience (equivalently, we set $\kappa = 1$).
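A small Python helper illustrating the rescaling above; the values of $\kappa$, $n$, and the dimensional quantities are arbitrary examples.

```python
# Illustrative helper for the kappa = 1 rescaling above: lengths, times and
# masses scale by kappa**(-n/2), J by kappa**(-n), Omega by kappa**(n/2).
# All input values are arbitrary examples.

def to_nondimensional(kappa, n, *, t=None, M=None, J=None, Omega=None):
    s = kappa ** (n / 2.0)      # kappa**(n/2) has dimensions of length
    out = {}
    if t is not None:
        out["t"] = t / s
    if M is not None:
        out["M"] = M / s
    if J is not None:
        out["J"] = J / s**2
    if Omega is not None:
        out["Omega"] = Omega * s
    return out

bar = to_nondimensional(kappa=4.0, n=1.5, t=8.0, M=16.0, J=64.0, Omega=0.5)
s = 4.0 ** 0.75                 # = 2*sqrt(2)
assert abs(bar["M"] - 16.0 / s) < 1e-12
```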
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig1.eps}
\end{center}
\caption{
Evolution of the quadrupole diagnostic in 6 different collapsing stars. The
Roman numeral in each panel corresponds to the model type in Table
\ref{tbl:initial}. Solid and dashed lines represent toroidal and spheroidal
stars, which correspond to models $a$ and $b$ in Table \ref{tbl:initial},
respectively.
\label{fig:dig}
}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig2.eps}
\end{center}
\caption{
Snapshots of the coordinate density along the $x$-axis for 6 different stars
(see Table \ref{tbl:initial}). Solid and dashed lines represent snapshots at
$t/P_{\rm c}=$
I(a) (9.61, 16.0), I(b) (9.67, 15.3),
II(a) (2.39, 12.0), II(b) (3.25, 12.0),
III(a) (3.19, 7.77), III(b) (3.20, 7.77), respectively.
\label{fig:density}
}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig3.eps}
\end{center}
\caption{
Density contour in the equatorial plane of 2 collapsing stars (Model I[a] and
[b] of Table \ref{tbl:initial}). Snapshots are plotted at the parameter
($t/t_{\rm dyn}$, $\rho^{*}_{\rm max}$) $=$
I(a)-i ($9.61$, $9.13 \times 10^{-3}$),
I(a)-ii ($16.0$, $6.04 \times 10^{-3}$),
I(b)-i ($9.67$, $1.33 \times 10^{-2}$),
I(b)-ii ($15.3$, $1.57 \times 10^{-2}$),
respectively. The contour lines denote coordinate densities
$\rho^{*} = \rho^{*}_{\rm max} \times 10^{- 0.220 (16-i)} (i=1, \cdots, 15)$.
\label{fig:n15con}
}
\end{figure*}
Our code is based on the conformally flat hydrodynamics scheme of
Ref. \cite{Saijo04}, to which the reader is referred for a more detailed
description, discussion, and tests. We choose the axis of rotation to align
with the $z$ axis, and assume planar symmetry across the equator. The
equations of hydrodynamics are then solved on a uniform grid of size $200
\times 200 \times 60$. We terminate our simulations after a sufficient
number of central rotation periods (between 8 and 16) in order for us to
detect dynamical instabilities. Because of our flux-conservative difference
scheme, the rest mass $M_{0}$ is conserved up to round-off error, except when
matter leaves the computational grid (which was never more than 0.01\% of the
total rest mass). In all cases reported in Sec.~\ref{sec:NR} the total
gravitational mass $M$ and the angular momentum $J$ were conserved to within
$\sim 0.1 \%$ and less than about $5 \%$ of their initial values,
respectively.
\section{Rotational Core Collapse}
\label{sec:NR}
We basically follow the scheme of Ref. \citep{KEH89} to construct a
differentially rotating equilibrium star. The detailed procedure we used
for constructing it is given in Ref. \citep{Saijo04}. We construct the
equilibrium star under the condition of fixing $\beta$ and $R_{\rm c}/M$,
where $R_{\rm c}$ is the circumferential radius. This is because we keep
approximately the same conditions when probing the threshold of the dynamical
bar instability by depleting the pressure of the star to initiate collapse.
We set one toroidal star (model a in Table \ref{tbl:initial}) and one
spheroidal star (model b in Table \ref{tbl:initial}) for each polytropic
index in order to focus on the structure of the star.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig4.eps}
\end{center}
\caption{
Density contour in the equatorial plane of 2 collapsing stars (Model II[a] and
[b] of Table \ref{tbl:initial}). Snapshots are plotted at the parameter
($t/t_{\rm dyn}$, $\rho^{*}_{\rm max}$) $=$
II(a)-i ($3.19$, $1.13 \times 10^{-3}$),
II(a)-ii ($12.0$, $4.62 \times 10^{-3}$),
II(b)-i ($3.25$, $3.39 \times 10^{-3}$),
II(b)-ii ($12.0$, $8.71 \times 10^{-3}$), respectively.
The contour lines denote coordinate densities
$\rho^{*} = \rho^{*}_{\rm max} \times 10^{- 0.200 (16-i)} (i=1, \cdots, 15)$.
\label{fig:n20con}
}
\end{figure*}
We choose the rotation profile of the star as \citep{KEH89}
\begin{equation}
u^{t} u_{\varphi} = A^2 (\Omega_{c} - \Omega)
,
\end{equation}
which reduces in the Newtonian limit ($u^{t} \rightarrow 1$,
$u_{\varphi} \rightarrow \varpi^2 \Omega$) to the so-called $j$-constant law,
\begin{equation}
\Omega = \frac{A^{2} \Omega_{\rm c}}{\varpi^{2} + A^{2}}
,
\end{equation}
where $A$ is a parameter representing the degree of differential rotation and
$\varpi$ is the cylindrical radius of the star. Since $A$ has the dimension
of length, we normalize it by the proper equatorial radius $\bar{R}_{e}$
($A = \bar{R}_{e} \hat{A}$). We summarize our 6 different initial data sets
of relativistic rotating equilibrium stars in Table \ref{tbl:initial}.
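The Newtonian-limit rotation law above is easy to sanity-check numerically; the parameter values in the sketch below are arbitrary.

```python
# Illustrative check of the Newtonian j-constant law
# Omega(varpi) = A**2 * Omega_c / (varpi**2 + A**2); parameters are arbitrary.
import numpy as np

def omega_j_constant(varpi, A, omega_c):
    return A**2 * omega_c / (varpi**2 + A**2)

A, omega_c = 0.5, 1.0
varpi = np.linspace(0.0, 2.0, 5)
om = omega_j_constant(varpi, A, omega_c)
assert np.isclose(om[0], omega_c)       # Omega -> Omega_c on the axis
assert np.all(np.diff(om) < 0)          # monotonically decreasing outward
# Far outside, Omega ~ varpi**-2, i.e. j ~ varpi**2 Omega -> A**2 Omega_c.
```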
To monitor the development of $m=2$ modes we compute a ``quadrupole
diagnostic'' \citep{SBS03}
\begin{equation}
Q = \left< e^{i m \varphi} \right>_{m=2} =
\frac{1}{M_{0}} \int \rho_{*}
\frac{(x^{2}-y^{2}) + i (2 x y)}{x^{2}+y^{2}} d^3 x,
\label{eqn:quadrupole}
\end{equation}
where the bracket denotes the density-weighted average. In the following we
only plot the real part of $Q$.
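Equation (\ref{eqn:quadrupole}) can be evaluated on a discrete density grid as in the following Python sketch; the test density is an artificial $m=2$ (bar-like) distribution, not simulation data.

```python
# Illustrative discrete evaluation of the quadrupole diagnostic Q on a grid;
# the density below is an artificial m=2 ("bar") test profile.
import numpy as np

def quadrupole_diagnostic(rho_star, x, y, dV):
    """Q = (1/M0) * sum rho_* [(x^2 - y^2) + 2 i x y] / (x^2 + y^2) dV."""
    r2 = np.maximum(x**2 + y**2, 1e-30)     # numerator vanishes at the origin
    phase = ((x**2 - y**2) + 2j * x * y) / r2
    M0 = np.sum(rho_star) * dV
    return np.sum(rho_star * phase) * dV / M0

npts = 101
xs = np.linspace(-1.0, 1.0, npts)
x, y = np.meshgrid(xs, xs, indexing="ij")
phi = np.arctan2(y, x)
rho = np.exp(-8.0 * (x**2 + y**2)) * (1.0 + 0.3 * np.cos(2.0 * phi))
Q = quadrupole_diagnostic(rho, x, y, (xs[1] - xs[0]) ** 2)
# For rho ~ f(r) (1 + 0.3 cos 2 phi), Q.real ~ 0.3/2 = 0.15 and Q.imag ~ 0.
assert 0.1 < Q.real < 0.2 and abs(Q.imag) < 1e-8
```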
To enhance any $m=2$ instability, we disturb the initial equilibrium density
$\rho_{\rm eq}$ by a nonaxisymmetric perturbation:
\begin{equation}
\rho = \rho_{\rm eq}
\left( 1 +
\delta \frac{x^{2}-y^{2}}{R_{\rm eq}^{2}}
\right),
\label{eqn:DPerturb}
\end{equation}
with $\delta = 0.01$ in all our simulations.
As for computing the gravitational waveform, we use the same method that we
used in the previous paper \citep{SSBS01}. For observers along the rotational
axis ($z$-axis), we have
\begin{eqnarray}
\frac{r h_{+}}{M} &=&
\frac{1}{2 M} \frac{d^{2}}{d t^{2}} (I_{xx} - I_{yy}), \label{h+}
\\
\frac{r h_{\times}}{M} &=&
\frac{1}{M} \frac{d^{2}}{d t^{2}} I_{xy} \label{h-}
,
\end{eqnarray}
where $h_{+}$ and $h_{\times}$ are the two polarization modes of gravitational
waves, $r$ is the distance to the source, $I_{ij}$ the approximate quadrupole
moment of the mass distribution defined as
\begin{equation}
I_{ij} = \int \rho_{*} x^{i} x^{j} d^{3}x.
\end{equation}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig5.eps}
\end{center}
\caption{
Density contour in the equatorial plane of 2 collapsing stars (Model III[a] and
[b] of Table \ref{tbl:initial}). Snapshots are plotted at the parameter
($t/t_{\rm dyn}$, $\rho^{*}_{\rm max}$) $=$
III(a)-i ($3.19$, $2.70 \times 10^{-4}$),
III(a)-ii ($7.77$, $4.02 \times 10^{-3}$),
III(b)-i ($3.20$, $3.28 \times 10^{-4}$),
III(b)-ii ($7.77$, $1.92 \times 10^{-3}$), respectively.
The contour lines denote coordinate densities
$\rho^{*} = \rho^{*}_{\rm max} \times 10^{- 0.220 (16-i)} (i=1, \cdots, 15)$.
\label{fig:n25con}
}
\end{figure*}
We show the quadrupole diagnostic $Q$ throughout the evolution in Fig.
\ref{fig:dig}. We determine that the system is stable to the $m=2$ mode when
the quadrupole diagnostic remains small throughout the evolution, and that it
is unstable when the diagnostic grows exponentially at an early stage of the
evolution. Figure \ref{fig:dig} clearly shows that the star is more unstable
to the bar mode when its equilibrium structure is toroidal than when it is
spheroidal. Even in a spheroidal case, the dynamical instability sets in
after the core bounce for model I(b) (see Table \ref{tbl:initial}). Note also
that the diagnostic damps out, especially for the soft equations of state
($n=2$, $2.5$). This damping corresponds to the destruction of the toroidal
structure of the star (see Fig. \ref{fig:density}).
We show the density contour in the intermediate stage and in the final stage
of the rotational core collapse in Figs. \ref{fig:n15con} -- \ref{fig:n25con}.
In the intermediate stage of the rotational core collapse, the bar grows due
to the nonaxisymmetric instability. The deformation rate of the bar is
significantly higher for a star that is toroidal at equilibrium than for a
spheroidal one. At the termination of the integration, the equatorial shape
of the star is almost spherical except for model I(a). The coincidence with
the destruction of the toroidal structure in Fig. \ref{fig:density} at the
termination of the integration indicates that the toroidal structure plays a
significant role in enhancing the bar instability.
We show the gravitational waveforms for an observer along the rotational axis
in Fig. \ref{fig:gw}. The gravitational waves are amplified during bar
formation. Since the dynamical bar plays an important role in our case of
rotational core collapse, our result is quite different from the previous
picture drawn from the 3D Newtonian results \citep{RMR}. In fact, the
amplitude observed from the rotational axis in our calculation grows to about
20 times that of the core-bounce peak in the waveform. Note also that the
quasiperiodic waves persist for several rotation periods for the stiff
equation of state, due to the persistence of the toroidal structure.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig6.eps}
\end{center}
\caption{
Gravitational waveforms seen by a distant observer along the rotational axis
of the equilibrium star. The Roman numeral in each panel corresponds to the
model type in Table \ref{tbl:initial}. Solid and dashed lines represent
toroidal and spheroidal stars, which correspond to models $a$ and $b$ in
Table \ref{tbl:initial}, respectively.
\label{fig:gw}
}
\end{figure*}
We also show the evolution of $\beta$ in Fig. \ref{fig:tw}, which is regarded
as a diagnostic of the dynamical bar instability in the equilibrium star.
Although $\beta$ behaves quite similarly for the different polytropic
indices, the bar structure persists for at least several rotation periods for
model I(a), which corresponds to the persistence of the toroidal structure
(see Fig. \ref{fig:density}). We also show the distribution of the angular
velocity in the intermediate stage and at the termination of the integration
in Fig. \ref{fig:omg}. The sharp dip at the center in the angular velocity of
model I(a) at $t=9.61 P_{\rm c}$ indicates a potential redistribution of the
angular momentum.
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig7.eps}
\end{center}
\caption{
Diagnostic of the dynamical bar instability, $\beta$, as a function of time.
The Roman numeral in each panel corresponds to the model type in
Table~\ref{tbl:initial}. Solid and dashed lines represent toroidal and
spheroidal stars, which correspond to models $a$ and $b$ in
Table~\ref{tbl:initial}, respectively.
\label{fig:tw}
}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{fig8.eps}
\end{center}
\caption{
Snapshots of the orbital angular velocity along the $x$-axis.
Solid and dashed lines represent snapshots at $t/P_{\rm c}=$
I(a) (9.61, 16.0), I(b) (9.67, 15.3),
II(a) (2.39, 12.0), II(b) (3.25, 12.0),
III(a) (3.19, 7.77), III(b) (3.20, 7.77), respectively.
\label{fig:omg}
}
\end{figure*}
\section{Discussion}
\label{sec:Discussion}
We investigate the role of the dynamical bar instability in rotational core
collapse by means of hydrodynamic simulations in the conformally flat
approximation to general relativity. We specifically focus on the structure
of the star to see whether it plays a significant role in enhancing the
dynamical bar instability.
We find that the structure of the star plays a significant role in enhancing
the dynamical bar instability at core bounce. Since the angular velocity of
the collapsing star has already reached its maximum inside a certain
cylindrical radius to produce a toroidal structure, the angular momentum can
only shift outward at the bounce. For a spheroidal star, the angular
momentum can shift both inward and outward at bounce since it does not reach
the ``Keplerian'' limit. This means that rotational core collapse in the
spheroidal case cannot significantly disrupt the central core of the star.
Consequently, in the case of a toroidal star a bar structure is easily formed
during the evolution. Note that for a soft equation of state ($n=2.0$, $2.5$)
the amplitude of gravitational waves decreases, as the torus is destroyed in
the rotational core collapse. Therefore the torus is the key ingredient for
triggering bar formation.
Once a bar has formed, the amplitude of gravitational waves increases
significantly due to the nonaxisymmetric deformation of the star. The
previous 2D calculation shows that the peak amplitude of gravitational waves
comes from the core bounce of the star, and its behavior coincides with that
of the 3D calculation. In our results, by contrast, gravitational radiation
is dominantly generated in the bar formation process. Our results for the
soft equation of state qualitatively agree with the results in full general
relativity \citep{SS05}.
Thus, the characteristic amplitude and frequency of gravitational waves in
the collapsing star can be written as
\begin{eqnarray}
f_{\rm gw} &\approx& \frac{1}{2 \pi t_{\rm dyn}} =
100~[{\rm Hz}] \left( \frac{M_{\odot}}{M} \right)
\left( \frac{40M}{R} \right)^{3/2}
,\\
h_{\rm gw} &\approx& 4.8 \times 10^{-23}
\left( \frac{M}{M_{\odot}} \right)
\left( \frac{10 {\rm M pc}}{d} \right)
\left( \frac{rh/M}{0.01} \right).
\nonumber
\\
\end{eqnarray}
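The scalings above can be evaluated directly; the helper below reproduces the quoted fiducial values and gives an example for hypothetical parameters ($M = 1.4\,M_\odot$, $rh/M \sim 0.1$, chosen purely for illustration).

```python
# Direct evaluation of the frequency/amplitude scalings quoted in the text;
# the 1.4 Msun / rh/M = 0.1 example is hypothetical, for illustration only.

def f_gw_hz(M_solar, R_over_M):
    """f ~ 100 Hz * (Msun/M) * (40 M/R)**1.5."""
    return 100.0 * (1.0 / M_solar) * (40.0 / R_over_M) ** 1.5

def h_gw(M_solar, d_mpc, rh_over_M):
    """h ~ 4.8e-23 * (M/Msun) * (10 Mpc/d) * ((rh/M)/0.01)."""
    return 4.8e-23 * M_solar * (10.0 / d_mpc) * (rh_over_M / 0.01)

# Fiducial values quoted in the text:
assert abs(f_gw_hz(1.0, 40.0) - 100.0) < 1e-9
assert abs(h_gw(1.0, 10.0, 0.01) - 4.8e-23) < 1e-33
# Hypothetical example: a 1.4 Msun core, R = 40 M, d = 10 Mpc, rh/M = 0.1
f = f_gw_hz(1.4, 40.0)       # about 71 Hz
h = h_gw(1.4, 10.0, 0.1)     # about 6.7e-22
```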
Therefore, gravitational waves from a dynamical bar in a collapsing neutron
star may be detectable by advanced LIGO.
\begin{table}
\caption
{Magnitude of higher order correction for the central
lapse at bounce}
\begin{center}
\begin{tabular}{c c c c c}
\hline
\hline
Model &
${t_{\rm bce}/P_{\rm c}}$\footnotemark[10] &
$\alpha_{\rm c}$\footnotemark[11] &
$M_{\rm src}/R_{\rm src}$\footnotemark[12] &
$(M_{\rm src}/R_{\rm src})^3 [\%]$
\\
\hline
I-a & $1.22$ & $0.872$ & $0.137$ & $0.257$
\\
I-b & $1.00$ & $0.848$ & $0.166$ & $0.457$
\\
\hline
II-a & $1.05$ & $0.843$ & $0.172$ & $0.509$
\\
II-b & $0.88$ & $0.806$ & $0.218$ & $1.04$
\\
\hline
III-a & $1.08$ & $0.835$ & $0.181$ & $0.593$
\\
III-b & $0.90$ & $0.803$ & $0.222$ & $1.09$
\\
\hline
\end{tabular}
\label{tbl:lapse}
\footnotetext[10]{$t_{\rm bce}$: Bounce time}
\footnotetext[11]{Central lapse}
\footnotetext[12]{$M_{\rm src}$: Characteristic mass of the source;
$R_{\rm src}$: Characteristic radius of the source}
\end{center}
\end{table}
Finally, we mention the validity of the conformally flat approximation
in our 6 different collapsing stars. The approximation has two issues:
it contains only the first post-Newtonian order of general relativity,
and it neglects gravitational radiation. First, we investigate the
central lapse $\alpha_{\rm c}$ at core bounce to check whether the first
post-Newtonian order of general relativity is a reasonable approximation in
our model. The central lapse can be expanded by the compaction
(characteristic mass $M_{\rm src}$ and radius $R_{\rm src}$) of the source
[Eq. (19.13) of Ref. \citep{MTW}] as
\begin{equation}
\alpha_{\rm c} =
1 - \frac{M_{\rm src}}{R_{\rm src}}
+ \frac{1}{2} \left( \frac{M_{\rm src}}{R_{\rm src}} \right)^{2}
+ O \left( \left( \frac{M_{\rm src}}{R_{\rm src}} \right)^{3} \right).
\label{eqn:lapsec}
\end{equation}
Note that the shift appears in the $g_{tt}$ component of the 4-metric at the
second post-Newtonian order of general relativity. All models of our 6
collapsing stars bounce at $\alpha_{\rm c} \gtrsim 0.8$, and therefore the
value of the central lapse corresponds to a compaction of the source
$M_{\rm src} / R_{\rm src} \lesssim 0.22$ from Eq. (\ref{eqn:lapsec}).
For each model, we summarize the magnitude of the higher order correction
[the $(M_{\rm src}/R_{\rm src})^{3}$ term] beyond the first post-Newtonian order
in Table \ref{tbl:lapse}. Note that the central lapse at bounce is the
minimal one throughout the evolution. Since the magnitude of the higher
order correction is $(M_{\rm src} / R_{\rm src})^{3} \lesssim 0.011$, we can
roughly claim that first post-Newtonian gravity approximates full general
relativity to within a relative error of a few percent in our calculation,
depending on the coefficient of the higher order term. Second, it is also
quite a satisfactory approximation to neglect gravitational radiation in our
dynamics, since our main target in this paper is the enhancement of the
dynamical bar instabilities, which occurs on a dynamical time scale.
Gravitational waves affect the whole dynamics on a secular time scale, which
is much longer than the dynamical one, and hence we can safely neglect such
an effect in this paper. Therefore, the conformally flat approximation is a
satisfactory one in our computation.
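The compaction column of Table~\ref{tbl:lapse} can be reproduced by inverting Eq.~(\ref{eqn:lapsec}) truncated at second order (a small sketch of the arithmetic: solve the quadratic for $x = M_{\rm src}/R_{\rm src}$ and check model I-a):

```python
import math

def compaction_from_lapse(alpha_c: float) -> float:
    """Invert alpha = 1 - x + x^2/2 (truncated at 2nd order) for x = M/R."""
    # x^2 - 2x + 2(1 - alpha) = 0  ->  take the root with x < 1.
    return 1.0 - math.sqrt(2.0 * alpha_c - 1.0)

# Model I-a: alpha_c = 0.872 gives M/R ~ 0.137 and a cubic
# correction (M/R)^3 of about 0.26%, matching the table.
x = compaction_from_lapse(0.872)
print(round(x, 3), round(100 * x**3, 2))
```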
\acknowledgments
We would like to thank Thomas Janka and Ewald M\"uller for their kind
hospitality at the Max-Planck-Institut f\"ur Astrophysik, where part of this
work was done. We thank Takashi Nakamura for his continuous encouragement
and valuable advice. We also thank Yukiya Aoyama and Jun Nakano for their
constructive comments on parallelizing the code. This work was supported in
part by MEXT Grant-in-Aid for young scientists (No. 200200927). Numerical
computations were performed on the VPP-5000 machine in the Astronomical Data
Analysis Center, National Astronomical Observatory of Japan, on the SGI Origin
3000 machine in the Yukawa Institute for Theoretical Physics, Kyoto
University, on the FUJITSU HPC2500 machine in the Academic Center for
Computing and Media Studies, Kyoto University, and on the Pentium-4 cluster
machine in the Theoretical Astrophysics Group at Kyoto University.
\section{Proof of Theorem~3}
\begin{proof}
{\flushleft \em (Case~\ref{pro:3-ratios-1}): }
Let $G$ be an $n$-clique and $\Theta = \times_{e\in E} [0, 1]$, i.e., for every edge $e$, $l_e=0$ and $r_e=1$.
For an arbitrary set $S=\{v_1,\cdots, v_k\}$, there exists a valid parameter vector $\theta = (p_e)_{e \in E} \in \Theta$ with $p_e=0$ for all $e \in E_S=\{e=(u,v)\mid u\in S \text{ or } v\in S\}$ and $p_e=1$ for all $e\notin E_S$.
Then, $\sigma_{\theta}(S)=k$ and $\sigma_{\theta}(S^{*}_{\theta})=n-1$,
which implies that
$g(\Theta, S) =
\min_{\theta \in \Theta} \frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})}
\leq \frac{k}{n-1}
$.
Since the above holds for any set $S$ of size $k$, we conclude that
\[
\max_{|S|=k}g(\Theta,S) = O\left(\frac{k}{n}\right) \mbox{.}
\]
{\flushleft \em (Case~\ref{pro:3-ratios-2}): }
Consider a graph $G=(V,E)$ such that $V=A\cup B$, $|A|=|B|=\frac{n}{2}$, and $E=\{(u,v)\mid u,v\in A \text{ or }u,v\in B\}$; let $E(A)$ be the set of edges with both endpoints in $A$, and define $E(B)$ similarly. The problem is to find a single seed ($k=1$) such that the influence spread is maximized. Let $p=\frac{2}{n}$, and let the input instance be $l_e=p-\epsilon$ and $r_e=p+\epsilon$ for every edge $e$, such that $[l_e,r_e]$ covers the critical interval of an Erd\H{o}s-R\'{e}nyi random graph with $\frac{n}{2}$ nodes.
Now since every node looks the same to any algorithm, suppose the algorithm chooses a seed $u\in A$;
then consider the worst-case $\theta$ where $p_e=l_e$ for every $e\in E(A)$ and $p_e=r_e$ for every $e\in E(B)$.
One can verify that the optimal solution is an arbitrary node $v\in B$.
Since $\sigma_{\theta}(\{u\}) = O(\log n)$ and $\sigma_{\theta}(\{v\})=\Theta(n)$, the ratio is $r = O(\frac{\log n}{n})$.
{\flushleft \em (Case~\ref{pro:3-ratios-3}): }
Consider graph $G=(V,E)$ such that
$V$ is composed of disjoint sets $A_1, A_2,\ldots, A_{\sqrt{n}}$ where each $|A_i|=\sqrt{n}$,
and
$E=\{(u,v)\mid u,v\in A_i \text{ for some } i=1,\cdots,\sqrt{n}\}$.
Let $E(A_i)$ be the set of edges with two endpoints in $A_i$.
The problem is to find a single seed ($k=1$) such that the influence spread is maximized.
Let $p=\frac{1}{\sqrt{n}}$, and the input instance is $l_e=p-\epsilon$ and $r_e=p+\epsilon$ for every edge $e$
such that $[l_e,r_e]$ covers the critical interval of Erd\H{o}s-R\'{e}nyi random graph with $\sqrt{n}$ nodes.
Now, given only the input, every node appears symmetric.
Denote $q_{i}$ as the probability of choosing a node in $A_{i}$.
Consider any distribution assigned on
$A_{1}, A_{2},\ldots,A_{\sqrt{n}}$, i.e. $q_1+q_2+\cdots+q_{\sqrt{n}}=1$,
and let the random seed set be $\tilde{S}$.
Without loss of generality, let $q_1$ be the smallest one. Then consider the worst-case $\theta$ where $p_e=r_e$ for every $e\in E(A_1)$ and $p_e=l_e$ for every $e\in E(A_i)$, $i\ge 2$. It is obvious that the optimal solution $S_{\theta}^{*}$ is an arbitrary node $v\in A_1$.
Since
\[
\E\left[ \sigma_{\theta}(\tilde{S})\right] \le \frac{1}{\sqrt{n}} \cdot \sqrt{n} + \left( 1-\frac{1}{\sqrt{n}} \right) O(\log \sqrt{n})=O(\log n) \mbox{,}
\]
and
\[
\sigma_{\theta}(S_{\theta}^{*})=\Theta(\sqrt{n}) \mbox{,}
\]
the expected ratio is $O\left(\frac{\log n}{\sqrt{n}}\right)$, which completes the proof.
\end{proof}
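The constructions in Cases 2 and 3 exploit the classical Erd\H{o}s--R\'{e}nyi phase transition: with edge probability $c/n$, the largest component is $O(\log n)$ for $c<1$ and $\Theta(n)$ for $c>1$. A quick simulation sketch of that contrast (plain $G(n,p)$ sampling with union-find, not the exact constructions above):

```python
import random
from collections import Counter

def largest_component(n: int, p: float, seed: int = 0) -> int:
    """Size of the largest connected component of G(n, p), via union-find."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(a: int) -> int:
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p and find(u) != find(v):
                parent[find(u)] = find(v)
    return max(Counter(find(u) for u in range(n)).values())

n = 400
sub = largest_component(n, 0.5 / n)  # subcritical: largest component O(log n)
sup = largest_component(n, 2.0 / n)  # supercritical: a giant component emerges
print(sub, sup)
```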
\section{Proof of Lemmas}
\begin{proof}[Proof of Lemma~\ref{lem:mul}]
When $S$ is fixed and $\sigma_{\theta}(S)$ is regarded as a function of $\theta$, it is monotonically increasing;
thus it suffices to consider the case that $r_e=(1+\lambda)l_e$ for all $e\in E$.
Flip a coin for every edge according to the probability parameter $\theta$, and we obtain a (random) live-edge graph $L$.
Let $E(L)$ denote the set of edges in $L$, and $\Pr_{\theta}[L]$ be the probability of yielding $L$.
We use $R_L(S)$ to denote the reachable set from $S$ in $L$. Then, the influence spread function has a linear form as follows,
\[
\sigma_{\theta}(S)=\sum_L\Pr_{\theta}[L]\cdot|R_L(S)| \mbox{.}
\]
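On a tiny instance, this linear form can be evaluated exactly by enumerating all $2^m$ live-edge graphs (an illustrative sketch, exponential in $m$; the directed path below is our own toy example):

```python
from itertools import product

def exact_spread(n, edges, probs, S):
    """sigma_theta(S) = sum_L Pr_theta[L] * |R_L(S)|, by enumerating all
    2^m live-edge graphs of a directed graph on nodes 0..n-1."""
    total = 0.0
    for live in product([False, True], repeat=len(edges)):
        pr = 1.0
        for alive, p in zip(live, probs):
            pr *= p if alive else 1.0 - p
        # BFS from S over the live edges to get the reachable set R_L(S).
        reached, frontier = set(S), list(S)
        while frontier:
            u = frontier.pop()
            for (a, b), alive in zip(edges, live):
                if alive and a == u and b not in reached:
                    reached.add(b)
                    frontier.append(b)
        total += pr * len(reached)
    return total

# Directed path 0 -> 1 -> 2 with p = 0.5 on each edge:
# sigma({0}) = 1 + 0.5 + 0.25 = 1.75.
print(exact_spread(3, [(0, 1), (1, 2)], [0.5, 0.5], [0]))  # 1.75
```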
As a convention, for any edge $e\in E$, we denote conditional probability
$\Pr_{\theta}[L|e] = \Pr_{\theta}\left[L|e\in E(L)\right]$,
and
$\Pr_{\theta}[L|\bar{e}] = \Pr_{\theta}\left[L|e\notin E(L)\right]$.
Then, we have
\begin{align*}
&\frac{\sigma_{\theta^+}(S)}{\sigma_{\theta^-}(S)} \\
=&\frac{\displaystyle\sum_{L:e\in E(L)}r_e |R_L(S)| \Pr_{\theta^{+}}{[L|e]} + \sum_{L:e\not\in E(L)}(1 - r_e) |R_L(S)|\Pr_{\theta^{+}}{[L|\bar{e}]}}
{\displaystyle\sum_{L:e\in E(L)}l_e |R_L(S)| \Pr_{\theta^{-}}{[L|e]} + \sum_{L:e\not\in E(L)}(1 - l_e) |R_L(S)| \Pr_{\theta^{-}}{[L|\bar{e}]}} \mbox{.}
\end{align*}
Fixing $l_{e'}$ for all $e'\not=e$, we have
\[
\frac{\sigma_{\theta^+}(S)}{\sigma_{\theta^-}(S)}=\frac{Al_e+B}{Cl_e+D} \mbox{,}
\]
where $A,B,C,D$ do not depend on $l_e$. Observe that the ratio is monotone in $l_e$, and is thus maximized either when $l_e=0$ or when $l_e=\frac{1}{1+\lambda}$.
Applying a similar analysis to the other edges, we conclude that when the ratio is maximized, it must hold that $l_e=0$ or $l_e=\frac{1}{1+\lambda}$ for all $e\in E$.
When $l_e=0$ we have $r_e=0$, so we can simply delete this edge from the graph.
Deleting all such edges, we end up with a graph $G_1=(V,E_1)$ in which the probability interval on every edge is $[\frac{1}{1+\lambda},1]$.
Note that $R_{G_1}(S)$ is the reachable set from $S$ when the probabilities on all edges are $1$.
Given set $S$, denote the influence spread for any graph $G$ under any parameter vector $\theta$ as $\sigma_{\theta}^{G}(S)$ explicitly.
Suppose there exists a directed cycle $v_0\to v_1\to\cdots \to v_i\to v_0$ in graph $G_1$.
Then either all nodes in this cycle are in $R_{G_1}(S)$, or none of them is.
In both cases, we can remove some edge (e.g. $v_i\to v_0$) from $E_1$ and obtain a new graph $G_2$ (e.g. $G_2=(V,E_1 \setminus \{(v_i,v_0)\})$) such that $\sigma_{\theta^+}^{G_1}(S)=\sigma_{\theta^+}^{G_2}(S)$ while $\sigma_{\theta^-}^{G_1}(S)\geq\sigma_{\theta^-}^{G_2}(S)$. Thus,
\[
\frac{\sigma_{\theta^+}^{G_1}(S)}{\sigma_{\theta^-}^{G_1}(S)}\leq \frac{\sigma_{\theta^+}^{G_2}(S)}{\sigma_{\theta^-}^{G_2}(S)} \mbox{.}
\]
Such a removal is possible because if none of the nodes in the cycle is in $R_{G_1}(S)$, then deleting one edge changes neither $\sigma_{\theta^+}^{G_1}(S)$ nor $\sigma_{\theta^-}^{G_1}(S)$; and if all of them are in $R_{G_1}(S)$, then there must exist some $v_0$ in the cycle such that $v_0\in S$ or $v_0$ is reached by a path from some node in $S$ outside the cycle, and deleting the edge $(v_i,v_0)$ can be shown to satisfy the above property.
Repeatedly delete edges until the remaining graph is a directed acyclic graph (DAG), denoted by $G'$. It can then be split into finitely many subgraphs $T_1,T_2,\ldots,T_j$,
where each $T_i$ is a connected DAG, and it is immediate that
\[
\frac{\sigma_{\theta^+}^{G'}(S)}{\sigma_{\theta^-}^{G'}(S)}\le \max_{1\le i\le j}\frac{\sigma_{\theta^+}^{T_i}(S)}{\sigma_{\theta^-}^{T_i}(S)} \mbox{.}
\]
It remains to analyze the ratio in a connected DAG $T_i$, and we need more notation before that. First, the DAG $T_i$ naturally induces a topological order on its nodes (we can therefore write the nodes of $T_i$ as $V(T_i):=\{v_1,\cdots,v_{|T_i|}\}$), in which every edge in $E(T_i)$ directs from a node with smaller order to one with larger order. Let $S_i=S\cap V(T_i)$, and let $R(S_i)$ be the subset of nodes in $V(T_i)$ that are reachable from $S_i$ with positive probability ($R(S_i)$ therefore contains the nodes of $S_i$). Besides, for any $v\notin S_i$, let $d(S_i,v)$ denote the length of the shortest path directed from some node in $S_i$ to $v$, and for any $v\in S_i$, define $d(S_i,v)=0$. Thus,
\[
\sigma_{\theta^+}^{T_i}(S_i)=|R(S_i)| \mbox{.}
\]
Let $\beta=\frac{1}{1+\lambda}$. For any path of length $l \geq 0$ from $S_i$ to $v$, the activating probability of
that path is $\beta^l$ under $\theta^-$. Then, we have
\[
\begin{aligned}
\sigma_{\theta^{-}}^{T_i}(S_i)&=|S_i|+\sum_{v\in V(T_i) \setminus S_i}\Pr\left[\text{v is reached}\right]\\
&\geq |S_i|+\sum_{v\in R(S_i) \setminus S_i}\beta^{d(S_i,v)}\\
&\geq \sum_{v \in R(S_i)} \beta^{d(S_i,v)}\\
&\geq |R(S_i)| \beta^{|T_i|} \mbox{.}
\end{aligned}
\]
Therefore,
\[
\frac{\sigma_{\theta^+}^{G}(S)}{\sigma_{\theta^-}^{G}(S)}
\leq \frac{\sigma_{\theta^+}^{G'}(S)}{\sigma_{\theta^-}^{G'}(S)}
\leq \max_{1\le i\le j}\frac{\sigma_{\theta^+}^{T_i}(S)}{\sigma_{\theta^-}^{T_i}(S)}
\leq \max_{1\le i\le j} \beta^{-|T_i|}
\leq (1+\lambda)^{n} \mbox{.}
\]
To prove the second inequality of this lemma, by definition we have
$S_{\theta}^{*}=\arg\max_{|S|\leq k}\sigma_{\theta}(S)$.
Note that for all $\theta\in\Theta$,
$\sigma_{\theta^{+}}(S_{\theta^{+}}^{*})\geq\sigma_{\theta}(S_{\theta}^{*})$.
Then we have
\begin{align*}
\max_{|S|\leq k}\min_{\theta\in\Theta}\frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S_{\theta}^{*})}&\geq
\max_{|S|\leq k}\min_{\theta\in\Theta}\frac{\sigma_{\theta}(S)}{\sigma_{\theta^{+}}(S_{\theta^{+}}^{*})}\\
&=\frac{\sigma_{\theta^{-}}(S_{\theta^{-}}^{*})}{\sigma_{\theta^{+}}(S_{\theta^{+}}^{*})}\\
&\geq \frac{\sigma_{\theta^{-}}(S_{\theta^{+}}^{*})}{\sigma_{\theta^{+}}(S_{\theta^{+}}^{*})}\\
&\geq \min_{S\subseteq V}\frac{\sigma_{\theta^{-}}(S)}{\sigma_{\theta^{+}}(S)}\\
&\geq \frac{1}{(1+\lambda)^{n}} \mbox{.}
\end{align*}
This completes the proof for Lemma~\ref{lem:mul}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:conf}]
First, we focus on one fixed edge $e$. According to Chernoff bound, we have
\[
\Pr[|\hat{p}_e - p_e|\le p_e\delta] \ge 1 - 2e^{-\frac{1}{3}\delta^2p_et_e}.
\]
Let $\gamma = 2me^{-\frac{1}{3}\delta^2p_et_e}$, and
$\delta = \sqrt{\frac{3}{t_e}\ln\frac{2m}{\gamma}}\frac{1}{\sqrt{p_e}} = \frac{c_e}{\sqrt{p_e}}$.
Then, with probability no less than $1 - \frac{\gamma}{m}$,
we see that $p_e$ should satisfy the constraint
\[
|\hat{p}_e - p_e|\le c_e\sqrt{p_e},
\]
thus we have
\[
\hat{p}_e + \frac{c_e^2}{2} - c_e\sqrt{\frac{c_e^2}{4} + \hat{p}_e}\le p_e \le \hat{p}_e + \frac{c_e^2}{2} + c_e\sqrt{\frac{c_e^2}{4} + \hat{p}_e}.
\]
By the definition of $l_e$ and $r_e$ and the fact that $p_e \in [0,1]$,
we have
\[
\Pr[l_e \le p_e \le r_e] \ge 1 - \frac{\gamma}{m}.
\]
By union bound, we can conclude that
\[
\Pr[l_e \le p_e \le r_e,\, \forall e\in E] \ge 1 - \gamma,
\]
which completes the proof of Lemma~\ref{lem:conf}.
\end{proof}
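The interval construction in this proof amounts to inverting the constraint $|\hat{p}_e - p_e| \le c_e\sqrt{p_e}$, a quadratic in $p_e$. A small sketch of that computation (clipping to $[0,1]$ reflects the fact that $p_e \in [0,1]$; function and argument names are ours):

```python
import math

def confidence_interval(p_hat: float, t: int, m: int, gamma: float):
    """[l_e, r_e] from the empirical mean p_hat of t probes of edge e,
    for a graph with m edges and overall failure probability gamma."""
    # c_e = sqrt((3 / t_e) * ln(2m / gamma)), as in the proof.
    c = math.sqrt(3.0 / t * math.log(2.0 * m / gamma))
    # Roots of p^2 - (2*p_hat + c^2)*p + p_hat^2 = 0:
    half = c * math.sqrt(c * c / 4.0 + p_hat)
    l = max(0.0, p_hat + c * c / 2.0 - half)
    r = min(1.0, p_hat + c * c / 2.0 + half)
    return l, r

l, r = confidence_interval(p_hat=0.1, t=10_000, m=50, gamma=0.05)
print(round(l, 4), round(r, 4))  # interval containing 0.1; shrinks as t grows
```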
\section{Proof of Theorem~6}
\begin{proof}
{\flushleft \em Setting~\ref{thm-add-case}:}
First, since every $e$ is probed for $t=\frac{2m^2n^2 \ln \frac{2m}{\gamma}}{k^2\epsilon^2}$ times, using the additive form of Chernoff-Hoeffding Inequality we have
\[
\Pr\left[ \abs{\frac{1}{t}\sum_{i=1}^{t}x^i_e-p_e} >\frac{k\epsilon}{2mn} \right] \le 2\exp\left(-\frac{k^2\epsilon^2}{2m^2n^2}\cdot t\right)\le \frac{\gamma}{m} \mbox{.}
\]
Then by union bound, it holds that
\[
\Pr\left[\exists e\in E, \abs{\frac{1}{t}\sum_{i=1}^{t} x^i_e-p_e} >\frac{k\epsilon}{2mn}\right]\le \gamma \mbox{.}
\]
For every $e\in E$, we set $l_e=\frac{1}{t}\sum_{i=1}^{t}x^i_e-\frac{k\epsilon}{2mn}$, and $r_e=\frac{1}{t}\sum_{i=1}^{t}x^i_e+\frac{k\epsilon}{2mn}$, then with probability $\ge 1-\gamma$, it holds that $\theta\in \Theta$.
Therefore, for every $S$, according to Lemma~\ref{lem:add},
\begin{align*}
\frac{\sigma_{\theta^{-}}(S)}{\sigma_{\theta^{+}}(S)}&
\ge 1-\frac{\sigma_{\theta^{+}}(S)-\sigma_{\theta^{-}}(S)}{\sigma_{\theta^{+}}(S)}\\
&\ge 1-\frac{mn\cdot\frac{k\epsilon}{mn}}{k}\\
&=1-\epsilon \mbox{.}
\end{align*}
Thus, $\frac{\sigma_{\theta^{-}}(S^g_{\theta^+})}{\sigma_{\theta^{+}}(S^g_{\theta^+})} \geq 1-\epsilon$
also holds.
Now, since we use $S^\lu_\Theta$ as the solution, applying \Cref{thm:main}, we have
\begin{align*}
g(\Theta,S^\lu_\Theta)
\ge \alpha(\Theta)\left(1-\frac{1}{e}\right)
= \frac{\sigma_{\theta^{-}}(S^\lu_\Theta)}{\sigma_{\theta^{+}}(S_{\theta^{+}}^g)} \left(1-\frac{1}{e}\right) \\
\ge \frac{\sigma_{\theta^{-}}(S^g_{\theta^+})}{\sigma_{\theta^{+}}(S^g_{\theta^+})} \left(1-\frac{1}{e}\right)
\ge \left( 1-\epsilon \right) \left(1-\frac{1}{e}\right),
\end{align*}
where the second inequality holds due to $\sigma_{\theta^{-}}(S^\lu_\Theta) \ge \sigma_{\theta^{-}}(S^g_{\theta^+})$
by definition of \eqref{def:lu-greedy-solution}.
{\flushleft \em Setting~\ref{thm-mul-case}:}
Denote $a = \frac{\ln \frac{1}{1-\epsilon}}{2n + \ln \frac{1}{1-\epsilon}}$ for convenience.
Since every edge $e$ is probed for $t=\frac{3 \ln \frac{2m}{\gamma} }{p a^2} \geq \frac{3 \ln \frac{2m}{\gamma} }{p_e a^2}$ times,
the probability of upper and lower tails derived by the multiplicative form of Chernoff-Hoeffding Inequality is
\begin{align*}
& \Pr\left[\frac{1}{t}\sum_{i=1}^{t}x^i_e \ge (1+a)p_e\right]\le e^{-\frac{a^2}{3}\cdot p_e t} \le \frac{\gamma}{2m}
\\
& \Pr\left[\frac{1}{t}\sum_{i=1}^{t}x^i_e \le (1-a)p_e\right]\le e^{-\frac{a^2}{3}\cdot p_e t} \le \frac{\gamma}{2m} \mbox{.}
\end{align*}
Then by union bound, it holds that
\[
\Pr\left[\forall e\in E, \frac{1}{1+a}\frac{\sum_{i=1}^{t} x^i_e}{t} \le p_e \le \frac{1}{1-a} \frac{\sum_{i=1}^{t} x^i_e}{t} \right] \ge 1 - \gamma \mbox{.}
\]
Now suppose the above bound is satisfied.
For every edge $e \in E$, let $r_e = (1+a) p_e$ and $l_e = (1-a) p_e$.
Then, we have
$r_e \leq \frac{1+a}{1-a} \frac{\sum_{i=1}^{t}x^i_e}{t} \le (1 + \frac{1}{n} \ln \frac{1}{1-\epsilon}) \frac{\sum_{i=1}^{t}x^i_e}{t}$.
On the other hand, it is easy to check that $r_e = \frac{1+a}{1-a} l_e \le (1 + \frac{1}{n} \ln \frac{1}{1-\epsilon}) l_e$.
According to Lemma~\ref{lem:mul}, for any set $S$,
\begin{align*}
\frac{\sigma_{\theta^{-}}(S)}{\sigma_{\theta^{+}}(S)}&
\ge \left( 1+\frac{1}{n} \ln\frac{1}{1-\epsilon} \right)^{-n}
\ge 1-\epsilon \mbox{.}
\end{align*}
Thus, $\frac{\sigma_{\theta^{-}}(S^g_{\theta^+})}{\sigma_{\theta^{+}}(S^g_{\theta^+})} \geq 1-\epsilon$
also holds. Similar to Setting~\ref{thm-add-case},
we can then apply Theorem~\ref{thm:main} to derive the theorem.
\end{proof}
\section{Introduction} \label{sect:intro}
In social and economic networks, the {\em Influence Maximization} problem has been extensively studied
over the past decade, due to its wide applications to viral marketing \cite{domingos2001mining,kempe2003maximizing}, outbreak detection \cite{leskovec2007cost},
rumor monitoring \cite{budak2011limiting}, etc.
For example, a company may conduct a promotion campaign in social networks
by sending free samples to the initial users (termed seeds),
and via the word-of-mouth (WoM) effect, more and more users are influenced by social links to join the campaign
and propagate messages of the promotion.
This problem was first introduced by Kempe et al.~\cite{kempe2003maximizing} under an algorithmic framework to find the most influential seeds,
and they propose the {\em independent cascade} model and {\em linear threshold} model,
which consider the social-psychological factors of information diffusion to simulate such a random process of adoptions.
Since Kempe et al.'s seminal work, extensive research has been done on influence
maximization, especially on improving the efficiency of influence maximization
in the independent cascade model~\cite{chen2009efficient, chen2010scalable, goyal2011celf,borgs14,tang14}, all of which
assume that the ground-truth influence probabilities on edges are exactly known.
Separately, a number of studies \cite{saito2008prediction,tang2009social,goyal2010learning,gomez2011uncover,Netrapalli12} propose learning methods to extract edge influence probabilities.
Due to inherent data limitations, no learning method can recover the
exact values of the edge probabilities; what can be achieved are estimates
of the true edge probabilities, with confidence intervals indicating that
the true values lie within them with high probability.
The uncertainty in edge probability estimates, however, may adversely affect
the performance of the influence maximization task, yet this topic has
been left mostly unexplored.
The only attempt addressing this question is a recent study in~\cite{he2015stability},
but due to a technical issue as explained in~\cite{he2015stability},
the results achieved by the study are rather limited.
In this paper, we utilize the concept of robust optimization~\cite{ben2002robust}
from operations research to address the issue of influence maximization
with uncertainty.
In particular, we consider that the input to the influence maximization task is
no longer the exact influence probability on every edge of a social graph, but instead
an interval in which the true probability may lie.
Thus the input is actually a parameter space $\Theta$, which is the product of
all intervals on all edges.
For any seed set $S$, let $\sigma_\theta(S)$ denote the {\em influence spread} of $S$
under parameter setting $\theta\in \Theta$.
Then we define {\em robust ratio} of $S$ as
$g(\Theta,S) = \min_{\theta\in \Theta} \frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})} $,
where $S^{*}_{\theta}$ is the optimal seed set achieving the maximum influence
spread under parameter $\theta$.
Intuitively, robust ratio of $S$ indicates the (multiplicative) gap between
its influence spread and the optimal influence spread under the worst-case
parameter $\theta \in \Theta$, since we are unsure which $\theta \in \Theta$
is the true probability setting.
Then our optimization task is to find a seed set of size $k$ that
maximizes the robust ratio under the known parameter space $\Theta$
--- we call this task {\em Robust Influence Maximization (RIM)}.
It is clear that when there is no uncertainty on edge probabilities, which means
$\Theta$ collapses to the single true parameter $\theta$,
RIM degenerates to the classical influence maximization problem.
However, when uncertainty exists, solving RIM may be a more difficult task.
In this paper, we first propose an algorithm {\lugreedy} that
solves the RIM task with a solution-dependent bound on
its performance, which means
that one can verify its performance after it selects the seed
set (Section~\ref{sect:rim}).
We then show that if the input parameter space $\Theta$ is given as-is and cannot
be improved, even the best robust ratio
in certain graph instances could be very small (e.g. $O(\log n / \sqrt{n})$,
with $n$ being the number of nodes in the graph).
This motivates us to study sampling methods to further tighten parameter
space $\Theta$, and thus improving the robustness of our algorithm
(Section~\ref{sect:sample}).
In particular, we study both uniform sampling and adaptive sampling for
improving RIM performance.
For uniform sampling, we provide theoretical results on the sample complexity
for achieving a given robust ratio of the output seed set.
For adaptive sampling, we propose an information cascade based sampling heuristic
to adaptively bias our sampling effort to important edges often traversed
by information cascades.
Through extensive empirical evaluations (Section~\ref{sect:experiments}), we show that
(a) robust ratio is sensitive to the width of the confidence interval, and
it decreases rapidly when the width of the confidence interval increases; as
a result, prior studies that learned edge probabilities may result in poor robust
ratios due to relatively large confidence intervals (and thus high uncertainty);
(b) information cascade based adaptive sampling method performs better than
uniform sampling and other baseline sampling methods, and can significantly
improve the robustness of the influence maximization task.
In summary, the contribution of our paper includes: (a) proposing the problem
of robust influence maximization to address the important issue of
uncertainty in parameter estimates adversely impacting
the influence maximization task;
(b) providing the {\lugreedy} algorithm that guarantees a solution-dependent
bound; and
(c) studying uniform and adaptive sampling methods to
improve robust influence maximization.
\texarxiv{Note that proofs of some technical results can be found in the appendix.}
\texkdd{Due to space constraint, the proofs of some technical results are omitted.
The complete proofs of all results can be found in the full technical report~\cite{ChenLTZZ16}.}
\subsection{Additional Related Work} \label{sect:related}
Influence maximization has been extensively studied and we already point out
a number of closely related studies to our work in the introduction.
For a comprehensive survey, one can refer to the monograph~\cite{chen2013information}.
We discuss a few most relevant work in more detail here.
To the best of our knowledge, the study by He and Kempe~\cite{he2015stability}
is the only attempt prior to our work that also tries to address the
issue of uncertainty
of parameter estimates impacting the influence maximization tasks.
However, besides the similarity in motivation, the technical treatments are quite
different.
First, their central problem, called influence difference maximization, is to
find a seed set of size $k$ that maximizes the additive difference between
the two influence spreads of the {\em same} seed set using different parameter values.
Their purpose is to see how large the influence gap could be due to the
uncertainty in parameter space.
However, our goal is still to find the best possible seed set for influence
maximization purpose, while considering the adverse effect of the uncertainty,
and thus we utilize the robust optimization concept and use the worst-case
multiplicative ratio between the influence spread of the chosen seed set and
the optimal seed set as our objective function.
Second, their influence difference maximization turns out to be hard to approximate
to any reasonable ratio, while we provide an actual algorithm for robust
influence maximization that has a theoretical solution-dependent bound and
performs reasonably well in experiments.
Third, we further consider using sampling methods to improve RIM, which is not
discussed in~\cite{he2015stability}.
In the context of robust optimization, Krause et al.'s work on robust
submodular optimization~\cite{krause2008robust} is possibly the closest to ours.
Our RIM problem can be viewed as a specific instance of robust submodular
optimization studied in~\cite{krause2008robust}.
However, due to the generality of problem scope studied in~\cite{krause2008robust},
they show strong hardness results and thus have to resort to
bi-criteria solutions.
Instead, we are working on a particular instance of robust submodular optimization,
and their bi-criteria solution may greatly enlarge the selected seed set size,
which may not be allowed in our case.
Furthermore, they work on finite set of submodular functions, but in our case
our objective function is parametrized with $\theta$ from a continuous
parameter space $\Theta$, and it is unclear how their results work for
the continuous case.
\texkdd{In a parallel work that will appear in the same proceeding, }
\texarxiv{In a parallel work, }
He and Kempe study the
same subject of robust influence maximization~\cite{HeKempe16}, but they follow
the bi-criteria approximation approach of~\cite{krause2008robust}, and thus
in general their results are orthogonal to ours.
In particular, they use essentially the same objective function, but they work on
a finite set of influence spread functions $\Sigma$, and require to find
$k\cdot \ln |\Sigma|$ seeds to achieve $1-1/e$ approximation ratio comparing to
the optimal seed set of size $k$; when working on continuous parameter space
$\Theta$, they show that it is equivalent to a finite spread function space
of size $2^n$, thus requiring $\varTheta(kn)$ seeds for a bi-criteria solution,
which renders the bi-criteria solution useless.
Thus their bi-criteria approach is suitable when the set of possible spread functions
$\Sigma$ is small.
Adaptive sampling for improving RIM bears some resemblance to pure exploration
bandit research~\cite{bubeck2011pure}, especially to combinatorial pure exploration
\cite{chen2014combinatorial} recently studied.
Both use adaptive sampling and achieve some optimization objective in the end.
However, the optimization problem modeled in combinatorial pure exploration
\cite{chen2014combinatorial} does not have a robustness objective.
Studying robust optimization together with combinatorial pure exploration
could be a potentially interesting topic for future research.
Another recent work \cite{lei2015online} uses online algorithms to maximize the
expected coverage of the union of influenced nodes in multiple rounds based on online
feedback, and thus is different from our adaptive sampling objective: we use feedback
to adjust adaptive sampling in order to find a seed set nearly maximizing
the robust ratio after the sampling is done.
\section{Model and Problem Definition} \label{sect:model}
As in \cite{kempe2003maximizing}, the {\em independent cascade (IC)} model can be
equivalently modeled as a stochastic diffusion process from
seed nodes or as reachability from seed nodes
in random live-edge graphs.
For brevity, we provide the live-edge graph description below.
Consider a graph $G=(V,E)$ comprising a set $V$ of nodes and a set $E$ of directed edges,
where every edge $e$ is associated with probability $p_e \in [0,1]$,
and let $n = |V|$ and $m = |E|$.
To generate a random live-edge graph, we declare each edge $e$
as {\em live} if flipping a biased random coin with success probability $p_e$
returns success, and as {\em blocked} otherwise (with probability $1-p_e$).
The coin flips on all edges are mutually independent.
We define the subgraph $L$ consisting of $V$ and the set of
live edges as the (random) {\em live-edge graph}.
Given any set $S \subseteq V$ (referred to as {\em seeds}), let $R_L(S) \subseteq V$ denote the {\em reachable set} of nodes from $S$ in live-edge graph $L$, i.e.,
(1) $S\subseteq R_L(S)$, and (2) for a node $v\notin S$, $v \in R_L(S)$ iff there is a path in $L$ directing from some node in $S$ to $v$.
For convenience, we use {\em parameter vector} $\theta=(p_e)_{e\in E}$ to denote the probabilities on all edges.
The {\em influence spread} function $\sigma_{\theta}(S)$ is defined as the expected size of the reachable set from $S$, that is
\[
\sigma_{\theta}(S) := \sum_{L}\Pr_{\theta}[L]\cdot |R_L(S)| \mbox{,}
\]
where $\Pr_{\theta}[L]$ is the probability of yielding live-edge graph $L$ under vector $\theta$.
From \cite{kempe2003maximizing}, we know that the influence spread function is non-negative ($\forall S \subseteq V$, $\sigma_{\theta}(S) \geq 0$), monotone ($\forall S \subseteq T \subseteq V$, $\sigma_{\theta}(S) \leq \sigma_{\theta}(T)$), and
submodular ($\forall S \subseteq T \subseteq V$, $\forall v \in V$ $\sigma_{\theta}(S \cup \{ v \}) - \sigma_{\theta}(S) \geq \sigma_{\theta}(T \cup \{ v \}) - \sigma_{\theta}(T)$).
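In practice, $\sigma_{\theta}(S)$ is estimated by Monte Carlo simulation over sampled live-edge graphs rather than by exact enumeration (a standard sketch for illustration, not the implementation used in the paper):

```python
import random

def estimate_spread(n, edges, theta, S, rounds=2000, seed=0):
    """Monte Carlo estimate of sigma_theta(S): sample live-edge graphs and
    average the size of the reachable set R_L(S)."""
    rng = random.Random(seed)
    out = {u: [] for u in range(n)}  # adjacency lists, refilled per sample
    total = 0
    for _ in range(rounds):
        for u in out:
            out[u].clear()
        for (u, v), p in zip(edges, theta):
            if rng.random() < p:     # edge e is live with probability p_e
                out[u].append(v)
        reached, frontier = set(S), list(S)
        while frontier:              # BFS from the seeds over live edges
            u = frontier.pop()
            for v in out[u]:
                if v not in reached:
                    reached.add(v)
                    frontier.append(v)
        total += len(reached)
    return total / rounds

# Directed path 0 -> 1 -> 2 with p = 0.5 on both edges: sigma({0}) = 1.75.
print(estimate_spread(3, [(0, 1), (1, 2)], [0.5, 0.5], [0]))
```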
The well-known problem of \emph{Influence Maximization}
raised in \cite{kempe2003maximizing} is stated in the following.
\begin{problem}[Influence Maximization \cite{kempe2003maximizing}]
Given a graph $G=(V,E)$, parameter vector $\theta=(p_e)_{e\in E}$ and a fixed budget $k$,
we are required to find a seed set $S \subseteq V$ of $k$ vertices, such that the influence spread function $\sigma_{\theta}(S)$ is maximized, that is,
\begin{align*}
S^*_{\theta} := \argmax_{S \subseteq V, |S| = k} \sigma_{\theta}(S) \mbox{.}
\end{align*}
\end{problem}
It has been shown that the Influence Maximization problem is NP-hard \cite{kempe2003maximizing}.
Since the objective function $\sigma_{\theta}(S)$ is submodular,
we have a $(1-\frac{1}{e})$ approximation using the standard greedy policy $\greedy(G,k,\theta)$ in Algorithm~\ref{alg:greedy} (assuming a value oracle on function $\sigma_\theta(\cdot)$).
Let $S^{g}_{\theta}$ be the solution of $\greedy(G,k,\theta)$.
As a convention, we assume that both optimal seed set $S^*_{\theta}$ and greedy seed set $S^{g}_{\theta}$
in this paper are of fixed size $k$ implicitly.
On the other hand, Feige~\cite{feige1998threshold} proved that this approximation ratio
cannot be improved for the $k$-max cover problem, which is a special case of the
influence maximization problem under the IC model.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE Graph $G$, budget $k$, parameter vector $\theta$
\STATE $S_0 \gets \emptyset$
\FOR {$i = 1,2,\ldots,k$}
\STATE $v \gets \argmax_{v \notin S_{i-1}} \left\{ \sigma_{\theta}(S_{i-1}\cup \{v\})-\sigma_{\theta}(S_{i-1}) \right\}$
\STATE $S_i \gets S_{i-1} \cup \{v\}$
\ENDFOR
\RETURN $S_k$
\end{algorithmic}
\caption{{\sf Greedy}($G,k,\theta$)}
\label{alg:greedy}
\end{algorithm}
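A direct, unoptimized instantiation of Algorithm~\ref{alg:greedy} might look as follows (a sketch; the Monte Carlo oracle is our own stand-in for the value oracle, and practical implementations use lazy evaluation or reverse-reachable-set accelerations instead):

```python
import random

def spread(n, edges, theta, S, rounds=500, seed=0):
    """Crude Monte Carlo value oracle for sigma_theta(S)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        live = [rng.random() < p for p in theta]
        reached, frontier = set(S), list(S)
        while frontier:
            u = frontier.pop()
            for (a, b), alive in zip(edges, live):
                if alive and a == u and b not in reached:
                    reached.add(b)
                    frontier.append(b)
        total += len(reached)
    return total / rounds

def greedy(n, edges, theta, k):
    """Greedy(G, k, theta): repeatedly add the node with the largest
    estimated marginal gain in spread."""
    S = []
    for _ in range(k):
        base = spread(n, edges, theta, S)
        v = max((u for u in range(n) if u not in S),
                key=lambda u: spread(n, edges, theta, S + [u]) - base)
        S.append(v)
    return S

# Star 0 -> {1, 2, 3} with p = 1 on all edges: node 0 is the best single seed.
print(greedy(4, [(0, 1), (0, 2), (0, 3)], [1.0, 1.0, 1.0], k=1))  # [0]
```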
However, the knowledge of the probability on edges is usually acquired by learning from the real-world
data \cite{saito2008prediction,tang2009social,goyal2010learning,gomez2011uncover,Netrapalli12},
and the obtained estimates always have some inaccuracy compared to the
true values.
Therefore, it is natural to assume that, from observations of edge $e$,
we can obtain a statistically significant neighborhood $[l_e,r_e]$,
i.e., the {\em confidence interval} in which the true probability $p_e$
lies with high probability.
This confidence interval prescribes the uncertainty on the true probability
$p_e$ of the edge $e$, and such uncertainty on edges may adversely
impact the influence maximization task.
Motivated by this, we study the problem of {\em robust influence maximization}
as specified below.
Suppose for every edge $e$, we are given an interval $[l_e,r_e]$ ($0\le l_e\le r_e\le 1$)
indicating the range of the probability, and the ground-truth probability $p_e \in [l_e,r_e]$ of this edge is unknown.
Denote $\Theta=\times_{e\in E}[l_e,r_e]$ as the \emph{parameter space} of network $G$, and
$\theta=(p_e)_{e \in E}$ as the latent parameter vector.
Specifically, let $\theta^{-}(\Theta)=(l_e)_{e\in E}$ and $\theta^{+}(\Theta)=(r_e)_{e\in E}$ be the minimum and maximum parameter vectors, respectively; when the context is clear, we simply write $\theta^{-}$ and $\theta^+$.
For a seed set $S \subseteq V$ and $|S| = k$, define its \emph{robust ratio} under
parameter space $\Theta$ as
\begin{equation}\label{robustratio}
g(\Theta,S) := \min_{\theta \in \Theta} \frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})} \mbox{,}
\end{equation}
where $S^{*}_{\theta}$ is the optimal solution of size $k$ when the probability on every edge is given by $\theta$.
Given $\Theta$ and solution $S$, the robust ratio $g(\Theta,S)$
characterizes the {\em worst-case}
ratio of influence spread of $S$ and the underlying optimal one,
when the true probability vector $\theta$ is unknown (except knowing that
$\theta \in \Theta$).
Then, the {\em Robust Influence Maximization} (RIM) problem is defined as follows.
\begin{problem}[Robust Influence Maximization]
Given a graph $G=(V,E)$, parameter space $\Theta=\times_{e\in E}[l_e,r_e]$ and a fixed budget $k$, we are required to find a set $S\subseteq V$ of $k$ vertices,
such that robust ratio $g(\Theta,S)$ is maximized, i.e.,
\begin{align*}
S^*_{\Theta}
:= \argmax_{S \subseteq V, |S| = k} g(\Theta,S)
= \argmax_{S \subseteq V, |S| = k} \min_{\theta\in \Theta}\frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})} \mbox{.}
\end{align*}
\end{problem}
The objective of this problem is to find a seed set $S^*_{\Theta}$ that has the largest robust ratio, that is, $S^*_{\Theta}$ should maximize the
worst-case ratio between its influence spread and the optimal influence spread,
when the true probability vector $\theta$ is unknown.
When there is no uncertainty, which means $\Theta$ collapses to the true
probability $\theta$, we can see that the RIM problem is reduced back
to the original influence maximization problem.
In RIM, the knowledge of the confidence interval is assumed to be the input.
Alternatively, it can be viewed as being given an estimated probability vector
$\hat{\theta} = (\hat{p}_e)_{e \in E}$ with a perturbation level $\delta_e$ on each edge $e$,
such that the true probability $p_e \in [\hat{p}_e - \delta_e, \hat{p}_e + \delta_e] = [l_e, r_e]$;
these intervals constitute the parameter space $\Theta = \times_{e \in E}[l_e,r_e]$.
Notice that, in reality, such an estimate could be obtained via edge sampling,
i.e., sampling each edge and computing the fraction of times that the edge is live.
Alternatively, we can observe information cascades over the edges when collecting
diffusion traces in the real world,
from which the corresponding probabilities can be learned.
However, when the number of observed information cascades is small, the best robust ratio $\max_{S}g(\Theta,S)$ for the given $\Theta$ can be low, so the output of a RIM algorithm does not carry a good enough worst-case guarantee. A natural question is then: given $\Theta$, how can we take further samples on edges (e.g., activating the source node $u$ of an edge $e=(u,v)$ and observing whether the sink node $v$ is activated through $e$) so that $\max_{S}g(\Theta,S)$ is efficiently improved? Specifically, how should we sample edges and output $\Theta'$ and $S'$ based on the outcomes so that (a) with high probability the true vector $\theta$ lies in the output parameter space $\Theta'$, where the randomness is taken with respect to $\theta$,
and (b) $g(\Theta',S')$ is large?
This sub-problem is called \emph{Sampling for Improving Robust Influence Maximization}, and will be addressed in Section~\ref{sect:sample}.
\section{Algorithm and Analysis for RIM} \label{sect:rim}
Consider the RIM problem: the parameter space $\Theta = \times_{e \in E}[l_e,r_e]$ is given, and we do not know the true probability vector $\theta \in \Theta$.
Let $\theta^{-}=(l_e)_{e\in E}$ and $\theta^{+}=(r_e)_{e\in E}$.
Our first observation is that, when $\Theta$ is a single vector ($l_e = r_e$ for all $e \in E$),
RIM trivially reduces to the classical Influence Maximization problem.
Therefore, we still have the following hardness result on RIM \cite{kempe2003maximizing,feige1998threshold}:
\begin{theorem} \label{pro:RIM-nphard}
The RIM problem is \NP-hard, and for any $\varepsilon > 0$, it is \NP-hard
to find a seed set $S$ with robust ratio at least
$1-\frac{1}{e} + \varepsilon$.
\end{theorem}
To circumvent the above hardness result, we develop algorithms that achieve a reasonably
large robust ratio.
When we are not allowed to take new samples on the edges to narrow the input intervals, it is natural to
use the greedy algorithm for submodular maximization in \cite{kempe2003maximizing} (i.e., Algorithm~\ref{alg:greedy})
as a subroutine for computing the solution.
In light of this, we first propose the Lower-Upper Greedy algorithm together with a solution-dependent bound on $g(\Theta,S)$,
and then discuss $g(\Theta,S)$ in the worst-case scenario.
\subsection{Lower-Upper Greedy Algorithm}
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE Graph $G = (V,E)$, budget $k$, parameter space $\Theta = \times_{e \in E}[l_e, r_e]$
\STATE $S_{\theta^{-}}^g \gets \greedy(G,k,\theta^-)$
\STATE $S_{\theta^{+}}^g \gets \greedy(G,k,\theta^+)$
\RETURN $\argmax_{S \in \left\{ S_{\theta^{-}}^g, S_{\theta^{+}}^g \right\}} \left\{ \sigma_{\theta^-}(S) \right\}$
\end{algorithmic}
\caption{{\lugreedy}($G,k,\Theta$)}
\label{alg:lugreedy}
\end{algorithm}
Given parameter space $\Theta=\times_{e\in E}[l_e,r_e]$ with the minimum and maximum parameter vectors $\theta^{-}=(l_e)_{e\in E}$ and $\theta^{+}=(r_e)_{e\in E}$,
our \emph{Lower-Upper Greedy algorithm} ($\lugreedy(G, k, \Theta)$) is described in Algorithm~\ref{alg:lugreedy};
it outputs the seed set $S^\lu_\Theta$ that performs best under the minimum parameter vector $\theta^-$, namely
\begin{align}\label{def:lu-greedy-solution}
S^\lu_\Theta := \argmax_{S \in \left\{ S_{\theta^{-}}^g, S_{\theta^{+}}^g \right\}}
\left\{ \sigma_{\theta^-}(S) \right\}.
\end{align}
To evaluate the performance of this output, we first define the \emph{gap ratio} $\alpha(\Theta) \in [0,1]$ of the input parameter space to be
\begin{equation} \label{eq:def-alpha}
\alpha(\Theta):=
\frac{\sigma_{\theta^{-}}(S^\lu_\Theta)}
{\sigma_{\theta^{+}}(S_{\theta^{+}}^g)} \mbox{.}
\end{equation}
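For concreteness, {\lugreedy} and the gap ratio $\alpha(\Theta)$ can be sketched as follows, written against abstract spread oracles for $\theta^-$ and $\theta^+$; the oracle-based greedy helper is an illustrative assumption, not the paper's implementation.

```python
def greedy(nodes, k, sigma):
    """Standard greedy (Algorithm 1) against a value oracle sigma(S)."""
    S = set()
    for _ in range(k):
        S.add(max((v for v in nodes if v not in S),
                  key=lambda v: sigma(S | {v}) - sigma(S)))
    return S

def lu_greedy(nodes, k, sigma_lo, sigma_hi):
    """Algorithm 2 (LUGreedy): run greedy under theta- and theta+, and keep
    whichever seed set does better under the pessimistic theta-.
    Also returns the gap ratio alpha(Theta) of Eq. (2)."""
    s_lo = greedy(nodes, k, sigma_lo)
    s_hi = greedy(nodes, k, sigma_hi)
    s_lu = max((s_lo, s_hi), key=sigma_lo)
    alpha = sigma_lo(s_lu) / sigma_hi(s_hi)
    return s_lu, alpha
```

For instance, on $2k$ disjoint stars with spreads $\sigma_{\theta^-}(S)=2|S|$ and $\sigma_{\theta^+}(S)=6|S|$, the returned gap ratio is $2/6 = 1/3$, matching the closed form $(1+tl)/(1+tr)$ of the example below.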
Then, {\lugreedy} achieves the following result:
\begin{theorem}[solution-dependent bound] \label{thm:main}
Given a graph $G$, parameter space $\Theta$ and budget limit $k$, {\lugreedy} outputs a seed set
$S^\lu_\Theta$ of size $k$ such that
\begin{displaymath}
g(\Theta, S^\lu_\Theta) \ge \alpha(\Theta)\left(1-\frac{1}{e}\right) \mbox{,}
\end{displaymath}
where $\alpha(\Theta):=
\frac{\sigma_{\theta^{-}}(S^\lu_\Theta)}
{\sigma_{\theta^{+}}(S_{\theta^{+}}^g)}$.
\end{theorem}
\begin{proof}
For any seed set $S$,
$g(\Theta,S) = \min_{\theta\in \Theta} \frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})}$
by definition.
Clearly, $\sigma_\theta(S)$ is monotone in $\theta$ for any fixed $S$.
From the definition of optimal solutions and the greedy algorithm, we can get
$
\sigma_{\theta}(S^{*}_{\theta}) \le \sigma_{\theta^+}(S^{*}_{\theta}) \le \sigma_{\theta^{+}}(S^{*}_{\theta^{+}}) \le \frac{ \sigma_{\theta^{+}}(S^{g}_{\theta^{+}}) }{1 - 1/e }.
$
It follows that
\begin{align*}
g(\Theta,S) \ge \min_{\theta\in \Theta} \frac{ \sigma_{\theta}(S) }{ \sigma_{\theta^{+}}(S^{g}_{\theta^{+}}) } \left(1- \frac{1}{e} \right)
= \frac{ \sigma_{\theta^{-}}(S) }{ \sigma_{\theta^{+}}(S^{g}_{\theta^{+}}) } \left(1- \frac{1}{e} \right).
\end{align*}
Taking the seed set $S^{\lu}_{\Theta}$ output by {\lugreedy}, it follows immediately that
$
g(\Theta, S^{\lu}_{\Theta})
\ge \frac{ \sigma_{\theta^{-}}(S^{\lu}_{\Theta}) }{ \sigma_{\theta^{+}}(S^{g}_{\theta^{+}}) } \left(1- \frac{1}{e} \right)
= \alpha(\Theta) \left(1 - \frac{1}{e} \right).
$
\end{proof}
We refer to $\alpha(\Theta) (1- \frac{1}{e})$ as the {\em solution-dependent bound} on $g(\Theta,S^\lu_{\Theta})$
achieved by {\lugreedy},
because it depends on the solution $S^\lu_{\Theta}$.
The advantage is that it can be evaluated once we have the solution, which tells us
that the robust ratio is at least this lower bound.
The bound is useful if $\alpha(\Theta)$ is not too small, which in turn indicates that
the seed set $S^\lu_{\Theta}$
achieves a good influence spread under any probability vector $\theta \in \Theta$.
It is worth remarking that we choose
$\alpha(\Theta) = \sigma_{\theta^{-}}(S^\lu_\Theta) / \sigma_{\theta^{+}}(S_{\theta^{+}}^g)$
as the measure for the following reasons:
(a) Intuitively, $S_{\theta^{-}}^g$ is expected to be the best possible seed set we can find that maximizes $\sigma_{\theta^{-}}(\cdot)$;
(b) Meanwhile, we consider $S_{\theta^{+}}^g$ as a potential seed set for
the later theoretical analysis (in the proof of \Cref{thm:uniform}),
which requires the alignment of the same seed set for the numerator and denominator.
Thus, $\alpha(\Theta) = \max\{ \sigma_{\theta^{-}}(S_{\theta^{-}}^g), \sigma_{\theta^{-}}(S_{\theta^{+}}^g) \} / \sigma_{\theta^{+}}(S_{\theta^{+}}^g)$.
In particular, when $\theta^+$ and $\theta^-$ tend to the same value $\theta$,
RIM tends towards classical Influence Maximization,
and thus the influence spread $\sigma_{\theta}(S^\lu_\Theta)$
approaches the best result achievable by the greedy algorithm, $\sigma_{\theta}(S_{\theta}^g)$.
The approach adopted by {\lugreedy} is similar to the sandwich approximation used
in~\cite{lu2015competition}.
The following example shows that for certain problem instances,
the gap ratio $\alpha(\Theta)$ of {\lugreedy} could match the robust ratio
$g(\Theta,S^\lu_\Theta)$, which also matches
the best possible robust ratio $\max_{|S|=k}g(\Theta,S)$.
\begin{example} \label{exp:tight}
Consider a graph $G=(V,E)$ whose node set is equally partitioned into $2k$ subsets $V=\cup_{i=1}^{2k}V_i$ such that every $V_i$ contains $t+1$ nodes.
Let $V_i=\{v_i^{j}\mid 1\le j\le t+1\}$ and set $E=\cup_{i=1}^{2k}E_i$,
where $E_i=\{(v_i^{1},v_i^{j})\mid 2\le j\le t+1\}$.
That is, every $(V_i,E_i)$ forms a star with $v_i^1$ as its center node,
and all stars are disconnected from one another.
For the parameter space, we set the interval on every edge to be $[l,r]$.
When {\lugreedy} selects $k$ nodes, since all $v^1_i$'s have the same (marginal)
influence spread, w.l.o.g., suppose that {\lugreedy} selects
$\{v^1_1, v^1_2,\ldots, v^1_k \}$.
Then if we set the true probability vector $\theta\in \Theta$ such that $p_e=l$ for every $e \in \cup_{i=1}^{k} E_i$, and $p_e=r$
for every $e\in \cup_{i=k+1}^{2k} E_i$, it is easy to check that
$
\max_{|S|=k}g(\Theta,S)=g(\Theta,S^\lu_{\Theta})=\alpha(\Theta)=\frac{1+tl}{1+tr}.
$
\end{example}
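The closed-form quantities in Example~\ref{exp:tight} are easy to verify numerically: the expected spread of a star center with $t$ leaves under probability $p$ is $1+tp$, so the worst-case ratio is $(1+tl)/(1+tr)$. The snippet below computes it (the helper name is illustrative).

```python
def star_ratio(k, t, l, r):
    """Worst-case ratio from the star example: the adversary puts
    probability l on the k chosen stars and r on the k untouched ones.
    Expected spread of a star center with t leaves is 1 + t*p."""
    spread_chosen = k * (1 + t * l)   # sigma_theta(S) for the chosen centers
    spread_best = k * (1 + t * r)     # sigma_theta(S*_theta), the r-stars
    return spread_chosen / spread_best
```

For example, with $t=10$, $l=0.1$, $r=0.9$ the best achievable robust ratio is $2/10 = 0.2$ regardless of $k$.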
The intuition from the above example is that, when there are many alternative choices
for the best seed set, and these alternative seed sets do not have much overlap
in their influence coverage, the gap ratio $\alpha(\Theta)$ is a good indicator
of the best possible robust ratio one can achieve.
In the next subsection, we will show that the best robust ratio could be very bad
for the worst possible graph $G$ and parameter space $\Theta$,
which motivates us
to do further sampling to improve $\Theta$.
\subsection{Discussion on the robust ratio} \label{sect:analysis-sub:hardness}
From a theoretical perspective, we show in this part that if we make no assumption, or only impose loose constraints, on the input parameter space $\Theta$, then no algorithm can guarantee good performance on certain worst-case graphs $G$.
\begin{theorem}
\label{thm:3-ratios}
For RIM,
\begin{enumerate}
\item \label{pro:3-ratios-1}
There exists a graph $G=(V,E)$ and parameter space $\Theta = \times_{e\in E} [l_{e}, r_{e}]$, such that
\[
\max_{|S|=k}g(\Theta,S)=\max_{|S|=k}\min_{\theta\in \Theta} \frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S^{*}_{\theta})}=O\left(\frac{k}{n}\right) \mbox{.}
\]
\item \label{pro:3-ratios-2}
There exists a graph $G=(V,E)$, constant $\delta = \varTheta\left(\frac{1}{n}\right)$ and
parameter space $\Theta = \times_{e \in E}[l_{e}, r_{e}]$ where
$r_{e} - l_{e} \leq \delta$ for every $e \in E$,
such that
\[
\max_{|S|=k}g(\Theta,S)=O\left(\frac{\log n}{n}\right) \mbox{.}
\]
\item \label{pro:3-ratios-3}
Consider a random seed set $\tilde{S}$ of size $k$.
There exists a graph $G=(V,E)$, constant $\delta = \varTheta\left(\frac{1}{\sqrt{n}}\right)$ and
parameter space $\Theta = \times_{e \in E}[l_{e}, r_{e}]$ with
$r_{e} - l_{e} \leq \delta$ for every $e \in E$, such that
\[
\max_{\Omega} \min_{\theta\in\Theta}
\E_{\tilde{S}\in \Omega}\left[
\frac{\sigma_{\theta}(\tilde{S})}{\sigma_{\theta}(S_{\theta}^*)} \right]=O\left(\frac{\log n}{\sqrt{n}}\right) \mbox{,}
\]
where $\Omega$ is any probability distribution over seed sets of size $k$, and
$\E_{\tilde{S}\in \Omega}[\cdot]$ is the expectation of random set
$\tilde{S}$ taken from the distribution $\Omega$.
\end{enumerate}
\end{theorem}
In the first case, we allow the input $\Theta$ to be an arbitrary parameter space.
It is possible that $\Theta=\times_{e\in E}[0,1]$ for some graph $G$, meaning that
we have no knowledge at all about the edge probabilities.
Then, in the worst case, any seed set may achieve only an $O\left(\frac{k}{n}\right)$-approximation
of the optimal solution.
Intuitively, a selected seed set $S$ may activate few other nodes (i.e., $O(k)$ in total), while the optimal solution (for the latent $\theta$) may cover almost the whole graph (i.e., $\Omega(n)$ nodes).
In the second case, an additional constraint
$\norm{\theta^+ - \theta^-}_{\infty} \leq \delta$ is imposed on the parameter space, i.e.,
$r_e-l_e\le \delta$ for every $e\in E$, to see whether we could obtain better performance when $\delta$ is small.
However, even when $\delta$ is on the order of $O(1/{n})$, the robust ratio can be as small as $O(\log{n}/{n})$.
The proof is related to the phase transition for the emergence of the giant component
in the Erd\H{o}s-R\'{e}nyi graph.
In particular, if we have a graph $G$ consisting of two disconnected, equal-sized
Erd\H{o}s-R\'{e}nyi random graphs with edge probabilities close to the
critical value of generating a giant connected component, then whenever
we select a seed in one component, that component could be just below
the threshold resulting in $O(\log n)$ influence spread while the other
component is just above the threshold leading to $\varTheta(n)$ influence spread.
Thus, the worst-case ratio for any one-node seed set is always
$O(\log{n}/{n})$.
A similar discussion can be found in \cite{he2015stability}.
In the third case, we allow the algorithm to be randomized,
namely the output seed set $\tilde{S}$ is a random set of size $k$.
Even in this case, the robust ratio could be as bad as
$O(\log n/\sqrt{n})$.
\section{Sampling for Improving RIM}\label{sect:sample}
In the previous section, we proposed the {\lugreedy} algorithm together with a solution-dependent bound on the robust ratio,
and pointed out that the worst-case bound could be small if $\Theta$ is not tight enough.
In particular, Theorem~\ref{thm:3-ratios}
shows that the best possible robust ratio $\max_{S}g(\Theta,S)$ may be
so low that the output of a RIM algorithm cannot provide a satisfactory seed set in the worst case.
A natural question is then: given the input $\Theta$, can we sample edges
efficiently
so that $\Theta$ is narrowed to $\Theta'$ (with the true $\theta\in \Theta'$ still holding with high probability), and then output a seed set $S'$ that makes $g(\Theta',S')$ large?
This problem is called \emph{Sampling for Improving RIM}.
In this section we study both uniform sampling and adaptive sampling
for improving RIM.
By the Chernoff bound, the more samples we take on an edge, the narrower
the confidence interval that contains the true probability
with a desired confidence level.
After sampling yields a narrower parameter space, we can use the
{\lugreedy} algorithm
to obtain the seed set.
\subsection{Uniform Sampling} \label{sect:analysis-sub:Uniform}
In Sampling for Improving RIM, the goal is to design a sampling-and-maximization algorithm $\A$ that outputs $\Theta'$ and $S'$ such that, with high probability, the robust ratio of $S'$ in $\Theta'$ is large.
After sampling edges, we can use the Chernoff bound to compute confidence intervals,
which can be further narrowed with more samples.
However, the key issue is to connect the width of the confidence intervals with
the stability of the influence spread.
We propose two ideas to address this issue,
exploiting properties of additive and multiplicative confidence intervals respectively,
and incorporate them into the Uniform Sampling algorithm (Algorithm~\ref{alg:uniform-sampling})
with theoretical justification (Theorem~\ref{thm:uniform}).
Our first idea builds this connection in additive form, inspired by the following lemma from \cite{ChenWY14a}.
\begin{lemma}[Lemma~7 in \cite{ChenWY14a}]
\label{lem:add}
Given graph $G$ and parameter space $\Theta$ such that $\forall \theta_1,\theta_2\in \Theta$, $\norm{\theta_1-\theta_2}_{\infty}\leq \delta$, then, $\forall S\subseteq V$,
\[
\abs{\sigma_{\theta_1}(S)-\sigma_{\theta_2}(S)} \leq mn\delta \mbox{.}
\]
\end{lemma}
We use an example that is tight in the order of $|V|$ and $|E|$ to illustrate the connection and give insight into this lemma.
Consider graph $G = (V,E)$ with $|V|=n$ and $|E| = m$ ($m \gg n$).
Let $G$ be two disjoint cycles, each containing exactly $\frac{n}{2}$ nodes and $\frac{n}{2}$ edges.
We arbitrarily place the remaining $m-n$ edges between the two cycles.
Then, for every edge $e$ in the cycles, we set $l_e = r_e = 1$,
and $l_e = 0$, $r_e = \delta$ for the edges between the two cycles,
which constitutes $\Theta = \times_{e \in E} [l_e, r_e]$.
Suppose $\delta > 0$ is sufficiently small, and let budget $k=1$. For any single-node set $S$,
it is easy to check that for $\theta^- = (l_e)_{e \in E}$, $\sigma_{\theta^-}(S) = \frac{n}{2}$,
and for $\theta^+ = (r_e)_{e \in E}$,
$\sigma_{\theta^+}(S) \approx \frac{n}{2} + \frac{n}{2} (m - n) \delta$,
thus $\abs{\sigma_{\theta^+}(S)-\sigma_{\theta^-}(S)} \approx \frac{1}{2} n(m-n)\delta$ in this case.
As a comparison, from Lemma~\ref{lem:add}, we know that $\abs{\sigma_{\theta^+}(S)-\sigma_{\theta^-}(S)} \leq m n \delta$.
Therefore, the above lemma provides the guidance that we may sample every edge sufficiently many times to shrink the confidence intervals in $\Theta$,
and then feed $\Theta$ to {\lugreedy} as in solving RIM; the performance is then guaranteed by Theorem~\ref{thm:main},
matching our intuition that {\lugreedy} performs well given a satisfactory $\Theta$.
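As a sanity check on the two-cycle example above, the following snippet compares the example's actual spread gap with the $mn\delta$ bound of Lemma~\ref{lem:add}; it is closed-form arithmetic only, and the helper name is illustrative.

```python
def additive_gap_example(n, m, delta):
    """Two-cycle example: approximate actual spread gap for a single-node
    seed set versus the m*n*delta guarantee of the additive lemma."""
    actual = 0.5 * n * (m - n) * delta   # |sigma_{theta+}(S) - sigma_{theta-}(S)|, approx.
    bound = m * n * delta                # the lemma's upper bound
    return actual, bound
```

For $n=100$, $m=5000$, $\delta=10^{-4}$ the actual gap is about $24.5$ against a bound of $50$, so the bound is tight up to a constant factor in this instance.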
On the other hand, our second idea is to use the multiplicative confidence interval to bound the fluctuation of the influence spread,
so that {\lugreedy} still applies.
The next lemma is crucial to achieve this goal.
\begin{lemma}
\label{lem:mul}
Given graph $G=(V,E)$ and parameter space $\Theta$, if there exists $\lambda \geq 0$ such that
$r_e \leq (1+\lambda)l_e$ for every edge $e\in E$, then for any nonempty set $S \subseteq V$,
\begin{align}
\frac{\sigma_{\theta^+}(S)}{\sigma_{\theta^-}(S)} \leq (1+\lambda)^{n} \mbox{,}
\end{align}
and
\begin{equation}
\max_{|S|= k}\min_{\theta\in\Theta}\frac{\sigma_{\theta}(S)}{\sigma_{\theta}(S_{\theta}^{*})}\geq (1+\lambda)^{-n} \mbox{.}
\end{equation}
\end{lemma}
In this lemma, the ratio of influence spreads is bounded based on the multiplicative relation between $l_e$ and $r_e$.
To unify both ideas mentioned above, we propose \emph{Uniform Sampling for RIM} algorithm ({\sf US-RIM}) in Algorithm~\ref{alg:uniform-sampling},
and the theoretical result is presented in Theorem~\ref{thm:uniform}.
Basically, the algorithm samples every edge the same number of
times,
and uses {\lugreedy} to obtain the seed set.
We set different $t$ and $\delta_e$ for the two ideas.
Henceforth,
we refer to the first setting as {\em Uniform Sampling with Additive form} ({\sf US-RIM-A}), and to the second as {\em Uniform Sampling with Multiplicative form} ({\sf US-RIM-M}).
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE Graph $G=(V,E)$, budget $k$, $(\epsilon, \gamma)$
\ENSURE Parameter space $\Theta_{out}$, seed set $S_{out}$
\FORALL{$e \in E$}
\STATE Sample $e$ for $t$ times, and observe $x^1_e, \ldots, x^t_e$
\STATE $p_e\gets\frac{1}{t}\sum_{i=1}^{t}x_e^i$, and set $\delta_e$ according to Theorem~\ref{thm:uniform}
\STATE $r_e\gets\min\{1,p_e+\delta_e\}$, $l_e\gets\max\{0,p_e-\delta_e\}$
\ENDFOR
\STATE
$\Theta_\text{out} \gets \times_{e \in E} [l_e,r_e]$
\STATE $S_\text{out} \gets \lugreedy(G,k,\Theta_\text{out})$
\RETURN $(\Theta_\text{out}$,$S_\text{out})$
\end{algorithmic}
\caption{{\sf US-RIM}}
\label{alg:uniform-sampling}
\end{algorithm}
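The sampling phase of {\sf US-RIM} can be sketched as follows; the interval half-width $\delta$ is left generic here (Theorem~\ref{thm:uniform} prescribes the concrete $t$ and $\delta_e$ for the two settings), and the helper name is illustrative. The resulting $\Theta_\text{out}$ would then be fed to {\lugreedy}.

```python
import random

def us_rim_intervals(edges, true_p, t, delta, seed=0):
    """Sampling phase of US-RIM: sample each edge t times, then build
    Theta_out = prod_e [max(0, p_hat - delta), min(1, p_hat + delta)].
    true_p stands in for the unknown ground-truth theta."""
    rng = random.Random(seed)
    theta = {}
    for e in edges:
        hits = sum(rng.random() < true_p[e] for _ in range(t))
        p_hat = hits / t
        theta[e] = (max(0.0, p_hat - delta), min(1.0, p_hat + delta))
    return theta
```

With $t$ large relative to $1/\delta^2$, each interval contains the true edge probability with high probability, as the Chernoff bound prescribes.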
\begin{theorem}\label{thm:uniform}
Given a graph $G=(V,E)$, budget $k$, and accuracy parameter $\epsilon,\gamma>0$, let $n=|V|$ and $m=|E|$, then for any unknown ground-truth parameter vector $\theta=(p_e)_{e \in E}$, Algorithm {\sf US-RIM} outputs $(\Theta_\text{out}$,$S_\text{out})$ such that
\[
g(\Theta_\text{out}, S_\text{out})\ge \left(1-\frac{1}{e}\right)(1-\epsilon),
\]
with $\Pr[\theta \in \Theta_\text{out}]\ge 1-\gamma$,
where the randomness is taken according to $\theta$,
if we follow either of the two settings:
\begin{enumerate}
\item \label{thm-add-case}
Set $t=\frac{2m^2n^2 \ln (2m/\gamma)}{k^2\epsilon^2}$, and for all $e$, set $\delta_e=\frac{k\epsilon}{mn}$;
\item \label{thm-mul-case}
Assume we have $p'$ such that $0 < p' \leq \min_{e\in E} p_e$,
set $t=\frac{3 \ln (2m/\gamma)}{ p' } \left( \frac{2n}{\ln (1/(1-\epsilon))} + 1 \right)^2$,
and for every edge $e$, set $\delta_e=\frac{1}{n} p_e\log\frac{1}{\gamma}$.
\end{enumerate}
\end{theorem}
In general, the total number of samples summed over all edges is $O(\frac{m^3n^2\log (m/\gamma)}{k^2\epsilon^2})$ for {\USRIMA},
and $O(\frac{mn^2 \log (m/\gamma)}{p' \epsilon^2})$ for {\USRIMM}, where the additional constant $p'$ is a lower bound on all edge probabilities.
The difference is that the former has a higher order in $m$, while the latter requires knowledge of $p'$ and has an extra dependency of $O(1/p')$.
Since the sample complexity of both settings can be calculated in advance,
one may compare the two values and choose the smaller one when running the uniform sampling algorithm.
An intuitive interpretation is: (1) with high probability ($\geq 1-\gamma$),
the algorithm always outputs a $(1-\frac{1}{e}-\epsilon)$-approximate solution, as guaranteed by {\USRIMA};
(2) if $p'=\Omega(\frac{k^2}{m^2})$ (a loose assumption naturally satisfied in practice),
we may choose {\USRIMM} to achieve a better sample complexity.
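As a quick illustration of this comparison, the following sketch evaluates both total sample budgets ($m \cdot t$ for each setting of Theorem~\ref{thm:uniform}) on a hypothetical instance; the function and parameter names are illustrative.

```python
import math

def total_samples(n, m, k, eps, gamma, p_min):
    """Total edge samples (m * t) for the additive and multiplicative
    settings of the uniform sampling theorem; one can precompute both
    and run the cheaper setting. p_min plays the role of p'."""
    t_add = 2 * m**2 * n**2 * math.log(2 * m / gamma) / (k**2 * eps**2)
    t_mul = (3 * math.log(2 * m / gamma) / p_min) * \
            (2 * n / math.log(1 / (1 - eps)) + 1)**2
    return m * t_add, m * t_mul
```

For a moderately dense instance with $p'$ not too small, the multiplicative setting is orders of magnitude cheaper, consistent with the $O(1/p')$ versus $O(m^2/k^2)$ trade-off above.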
\subsection{Non-uniform and Adaptive Sampling} \label{sect:heuristic}
In a real network, the importance of edges in an influence diffusion process varies significantly:
some edges have larger influence probabilities than others, or connect two important nodes in the network.
Therefore, it is crucial to allocate samples to edges appropriately.
Moreover, we can adapt our sampling strategy dynamically, putting more sampling effort on
critical edges as we learn the edge probabilities
more accurately over time.
For convenience, given graph $G = (V, E)$, we define the \emph{observation set} $\mathcal{M} = \left\{ M_e \right\}_{e \in E}$ as a collection of sets, where
$M_e = \{ x^1_e, x^2_e, \cdots, x^{t_e}_e \}$ denotes the observed values from the first $t_e$ samples of edge $e$.
We allow a parameter space $\Theta_0 \subseteq \times_{e \in E} [0,1]$ to be given,
which can be obtained from some initial samples $\mathcal{M}_0$
(e.g., uniformly sampling each edge of the graph a fixed number of times).
The following lemma is used to calculate the confidence intervals; it
combines the additive and multiplicative Chernoff bounds.
We adopt this bound in the experiments because some edges in the graph have large influence probabilities while others have small ones,
and using the additive or multiplicative bound alone may not yield
a small enough confidence interval.
The bound is adapted from \cite{badanidiyuru2013bandit}
and is crucial for our experiments.
\begin{lemma} \label{lem:conf}
For each $e \in E$, let $M_e = \left\{ x^1_e, x^2_e, \dots, x^{t_e}_e \right\}$ be the samples of $e$ in $\mathcal M = \{ M_e \}_{e \in E}$, where $t_e$ is the number of samples.
Given any $\gamma > 0$, let confidence intervals for all edges be
$\Theta = \times_{e\in E} [l_e, r_e]$, such that, for any $e \in E$,
\begin{equation*}
\begin{aligned}
l_e &= \max\left\{\hat{p}_e + \frac{c_e^2}{2} - c_e\sqrt{\frac{c_e^2}{4} + \hat{p}_e}, ~0\right\}\\
r_e &= \min\left\{\hat{p}_e + \frac{c_e^2}{2} + c_e\sqrt{\frac{c_e^2}{4} + \hat{p}_e}, ~1\right\},
\end{aligned}
\end{equation*}
where $\hat{p}_e=\frac{ \sum_{i=1}^{t_e} x^{i}_e }{t_e}$, $c_e = \sqrt{\frac{3}{t_e} \ln\frac{2m}{\gamma}}$.
Then, with probability at least $1-\gamma$, the true probability $\theta= \left(p_e\right)_{e\in E}$ satisfies that $\theta\in\Theta$.
\end{lemma}
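A direct transcription of this interval for a single edge, with the interval clipped to $[0,1]$ as in Algorithm~\ref{alg:uniform-sampling}; the function name is illustrative.

```python
import math

def conf_interval(samples, m, gamma):
    """Combined additive/multiplicative confidence interval for one edge,
    given its list of 0/1 observations, the edge count m, and gamma."""
    t = len(samples)
    p_hat = sum(samples) / t
    c = math.sqrt(3.0 / t * math.log(2 * m / gamma))
    half = c * math.sqrt(c * c / 4 + p_hat)
    l = max(p_hat + c * c / 2 - half, 0.0)   # clipped below at 0
    r = min(p_hat + c * c / 2 + half, 1.0)   # clipped above at 1
    return l, r
```

Since $c\sqrt{c^2/4 + \hat{p}_e} \ge c^2/2$, the interval always contains $\hat{p}_e$, and its width shrinks as $t_e$ grows.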
Our intuition for non-uniform sampling is that the edges along the information cascades
of important seeds determine the influence spread, and hence they should be estimated more accurately than other edges not along important information
cascade paths.
Thus, we use the following \emph{Information Cascade Sampling} method to select edges.
Starting from the seed set $S$, once a node $v$ is activated, $v$ tries to activate its out-neighbors.
In other words,
for every out-edge $e$ of $v$,
the edge $e$ is sampled once to generate a new observation of $e$ drawn from the latent Bernoulli distribution with success probability $p_e$,
and the sample count $t_e$ is increased by $1$.
The process continues until the information cascade ends.
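One round of this information-cascade sampling can be sketched as follows; the adjacency-map graph encoding and helper names are illustrative assumptions.

```python
import random

def cascade_sample(graph, true_p, seeds, counts, rng):
    """One information-cascade round: whenever a node v becomes active,
    every out-edge of v is sampled once against the latent Bernoulli
    distribution, and the 0/1 observation is appended to counts[e]."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            x = 1 if rng.random() < true_p[(u, v)] else 0
            counts.setdefault((u, v), []).append(x)   # record observation x_e
            if x and v not in active:
                active.add(v)
                frontier.append(v)
    return active
```

Repeating this round accumulates more observations on edges reachable from the current seed set, which is exactly where the non-uniform sampling effort should go.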
We propose \emph{Information Cascade Sampling for RIM} ({\ICSRIM}) algorithm in Algorithm~\ref{alg:information-cascade-sampling},
which adopts information cascade sampling described above to select edges.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE Graph $G=(V,E)$, budget $k$, initial sample $\mathcal{M}_0$, threshold $\kappa$, $\gamma$.
\ENSURE Parameter space $\Theta_\text{out}$, seed set $S_\text{out}$
\STATE $i\gets 0$
\REPEAT
\STATE Get $\Theta_i$ based on $\mathcal{M}_i$ (see Lemma~\ref{lem:conf}).
\STATE $S^\lu_{\Theta_i} = \lugreedy(G,k, \Theta_i)$
\STATE $\mathcal{M}_{i+1} \gets \mathcal{M}_i$
\FOR{$j=1,2,\ldots,\tau$}
\STATE Do information cascade with the seed set $S^\lu_{\Theta_i}$
\STATE During the cascade, once $v\in V$ is activated, sample all out-edges of $v$ and update $\mathcal{M}_{i+1}$
\ENDFOR
\STATE $i\gets i+1$
\UNTIL{$\alpha(\Theta_i)>\kappa$}
\STATE $S_\text{out} \gets S^\lu_{\Theta_{i-1}}$
\STATE $\Theta_\text{out} \gets \Theta_{i-1}$
\RETURN $(\Theta_\text{out}, S_\text{out})$
\end{algorithmic}
\caption{{\ICSRIM}$(\tau)$: Information Cascade Sampling}
\label{alg:information-cascade-sampling}
\end{algorithm}
Algorithm~\ref{alg:information-cascade-sampling} is an iterative procedure. In the $i$-th iteration,
Lemma~\ref{lem:conf} is used to compute the confidence interval $\Theta_i$ from observation set $\mathcal{M}_i$.
Then according to $\Theta_i$, we find the lower-upper greedy set $S^{\lu}_{\Theta_{i}}$ and use information cascade to
update observation set $\mathcal{M}_{i+1}$ by absorbing new samples.
Since the robust ratio $g(\Theta_i, S^{\lu}_{\Theta_{i}})$ cannot be calculated efficiently, we calculate $\alpha(\Theta_i)$ (defined in \eqref{eq:def-alpha}) instead.
In our algorithm, we use a pre-determined threshold $\kappa \in (0,1)$ as the stopping criterion.
Therefore, for $S_\text{out}$, the robust ratio satisfies $g(\Theta_\text{out},S_\text{out})\ge \alpha(\Theta_\text{out})\left(1-\frac{1}{e}\right) > \kappa\left(1-\frac{1}{e}\right)$ by Theorem~\ref{thm:main},
and the true probability vector satisfies $\theta \in \Theta_\text{out}$ with probability at least $1-\gamma$ by Lemma~\ref{lem:conf}.
Compared with drawing an information cascade sample, calculating a greedy set is time-consuming.
Therefore, in Algorithm \ref{alg:information-cascade-sampling}, we call $\lugreedy$ only once every $\tau$ rounds of information cascades
to reduce the cost.
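The control flow of Algorithm~\ref{alg:information-cascade-sampling} can be sketched in Python. Every non-trivial component below is a simplified stand-in, not the paper's actual machinery: a clipped empirical interval replaces the bound of Lemma~\ref{lem:conf}, a degree heuristic replaces $\lugreedy$, and the widest remaining interval replaces the $\alpha(\Theta)$ stopping test.

```python
import math
import random

def interval(successes, trials):
    """Clipped empirical confidence interval -- a schematic stand-in for
    the interval of Lemma lem:conf (the real half-width depends on gamma)."""
    t = max(trials, 1)
    p = successes / t
    w = math.sqrt(1.0 / (2 * t))
    return max(p - w, 0.0), min(p + w, 1.0)

def cascade(succ, seeds, theta, rng):
    """One independent-cascade run; every out-edge of an activated node is
    one Bernoulli observation of its true probability."""
    active, frontier, obs = set(seeds), list(seeds), []
    while frontier:
        u = frontier.pop()
        for v in succ.get(u, ()):
            fired = rng.random() < theta[(u, v)]
            obs.append(((u, v), fired))
            if fired and v not in active:
                active.add(v)
                frontier.append(v)
    return obs

def icsrim(succ, theta_true, k, rounds, tau, kappa_width, seed=0):
    """Sketch of ICSRIM: alternate seed selection with cascade-driven
    sampling.  A degree heuristic stands in for LUGreedy, and the widest
    remaining interval stands in for the alpha(Theta) stopping rule."""
    rng = random.Random(seed)
    trials = {e: 0 for e in theta_true}
    hits = {e: 0 for e in theta_true}
    for _ in range(rounds):
        seeds = sorted(succ, key=lambda u: -len(succ[u]))[:k]  # LUGreedy stand-in
        for _ in range(tau):                                   # tau cascades per greedy call
            for e, fired in cascade(succ, seeds, theta_true, rng):
                trials[e] += 1
                hits[e] += fired
        widths = [interval(hits[e], trials[e])[1] - interval(hits[e], trials[e])[0]
                  for e in theta_true if trials[e] > 0]
        if widths and max(widths) < kappa_width:               # stop: intervals tight enough
            break
    return {e: interval(hits[e], trials[e]) for e in theta_true}, trials
```

Note how the cascade itself decides which edges get sampled: edges reachable from good seed sets accumulate observations fastest, which is exactly the adaptivity that distinguishes {\ICSRIM} from uniform sampling.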
\section{Empirical Evaluation} \label{sect:experiments}
We conduct experiments on two datasets,
Flixster\footnote{http://www.cs.sfu.ca/$\sim$sja25/personal/datasets/} and
NetHEPT\footnote{http://research.microsoft.com/en-us/people/weic/projects.aspx}
to verify the robustness of influence maximization and our sampling methods.
\subsection{Experiment Setup}
\subsubsection{Data Description}
\paragraph*{Flixster}
The Flixster dataset is a network from the American social movie discovery service Flixster (www.flixster.com). To transform the dataset into a weighted graph, each user is represented by a node, and a directed edge from node $u$ to $v$ is formed
if $v$ rates a movie shortly after $u$ rates the same movie.
The dataset is analyzed in \cite{barbieri2013topic}, and the influence probabilities are learned by the topic-aware model.
We use the learned probabilities from \cite{barbieri2013topic} in our experiment, which form a graph containing 29357 nodes and 212614 directed edges.
There are 10 probabilities on each edge, and each probability represents the influence from the source user to the sink on a specific topic.
Since most movies belong to at most two topics,
we only consider 3 out of 10 topics in our experiment, and get two induced graphs whose number of edges are 23252 and 64934 respectively. For the first graph, probabilities of topic 8 are directly used as the ground truth parameter (termed as Flixster(Topic~8)).
For the second graph, we mix the probabilities of Topic 1 and Topic 4 on each edge evenly to obtain the ground-truth probability (termed as Flixster(Mixed)).
After removing isolated nodes, the number of nodes in the two graphs are 14473 and 7118 respectively.
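The Flixster(Mixed) ground truth amounts to a per-edge average; a minimal sketch, assuming that ``mix evenly'' means the arithmetic mean of the two topic probabilities:

```python
def mix_probs(p_topic1, p_topic4):
    """Ground-truth probabilities for Flixster(Mixed): an even per-edge
    mix of the Topic 1 and Topic 4 probabilities (we read "mix evenly"
    as the arithmetic mean)."""
    return {e: 0.5 * (p_topic1[e] + p_topic4[e]) for e in p_topic1}
```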
In~\cite{barbieri2013topic}, the probability for every edge $(u,v)$ is learned
from rating cascades that reach $u$ and may or may not reach $v$; in this
case we view edge $(u,v)$ as sampled.
According to the data reported in~\cite{barbieri2013topic}, on average
every edge is sampled $318$ times for their learning process.
We then use $318$ samples on each edge as our initial sample
${\cal M}_0$.
\paragraph*{NetHEPT}
The NetHEPT dataset \cite{chen2009efficient} is extensively used in many influence
maximization studies.
It is an academic collaboration network from the ``High Energy
Physics-Theory'' section of arXiv from 1991 to 2003, where nodes represent the authors
and each edge in the network represents one paper co-authored by the two endpoint nodes.
It contains $15233$ nodes and $58891$ undirected edges (including duplicated edges).
We remove those duplicated edges and obtain a directed graph $G=(V,E), |V|=15233, |E|=62774$ (directed edges).
Since the NetHEPT dataset does not contain the data of influence probability on edges,
we set the probability on edges according to the \emph{weighted cascade} model \cite{kempe2003maximizing}
as the ground truth parameter, i.e.,
$\forall e = (v, u)\in E$, let $x_u$ be the in-degree of $u$ in the
edge-duplicated graph, $y_{e}$ be the number of edges connecting node $v$ and $u$,
then the true probability is $p_e = 1 - (1-\frac{1}{x_u})^{y_e}$.
Following the same baseline of Flixster, we initially sample each edge
for 318 times as $\mathcal{M}_0$.
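The weighted cascade assignment above can be computed directly from the edge-duplicated edge list; a minimal sketch:

```python
from collections import Counter

def weighted_cascade_probs(edges):
    """Ground-truth probabilities under the weighted cascade model:
    for edge e=(v,u), p_e = 1 - (1 - 1/x_u)^{y_e}, where x_u is the
    in-degree of u in the edge-duplicated graph and y_e is the number
    of parallel edges between v and u."""
    in_deg = Counter(u for _, u in edges)   # x_u, duplicates included
    mult = Counter(edges)                   # y_e per distinct directed edge
    return {e: 1 - (1 - 1 / in_deg[e[1]]) ** y for e, y in mult.items()}
```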
\subsubsection{Algorithms}
\label{algorithms}
We test both the uniform sampling algorithm {\USRIM} and the adaptive sampling
algorithm {\ICSRIM}, as well as another adaptive algorithm
{\OESRIM} (Out-Edge Sampling) as the baseline (to be described shortly).
Each algorithm is given a graph $G$ and initial observation set $\mathcal{M}_0$.
Note that the methods used to estimate the parameter space from sampling results differ between Algorithm~\ref{alg:uniform-sampling} and Algorithm~\ref{alg:information-cascade-sampling}. To make the comparison meaningful, in this section a common method based on Lemma~\ref{lem:conf} is used to estimate the parameter space for all three algorithms. In all tests, we set the size of the seed set to $k=50$. To reduce the running time, we use a faster approximation algorithm, PMIA (proposed in \cite{chen2010scalable}), in place of the well-known greedy algorithm proposed in \cite{kempe2003maximizing} throughout the experiments. The accuracy requirement $\gamma=o(1)$ is set to $\gamma=m^{-0.5}$, where $m$ is the number of edges.
\paragraph*{\USRIM}
The algorithm is slightly modified from Algorithm~\ref{alg:uniform-sampling} for a better comparison of performance. The modified algorithm proceeds in an iterative fashion: In each iteration, the algorithm makes $\tau_1$
samples on each edge, updates $\Theta$ according to Lemma~\ref{lem:conf} and computes $\alpha(\Theta)$. The algorithm stops when $\alpha(\Theta)\ge \kappa=0.8$.
$\tau_1$ is set to 1000, 1000 and 250 for NetHEPT, Flixster(Topic~8) and Flixster(Mixed), respectively,
to achieve fine granularity
and to produce visible differences in $\alpha(\Theta)$ in our results.
\paragraph*{\ICSRIM}
As stated in Algorithm~\ref{alg:information-cascade-sampling}, in each iteration the algorithm performs
$\tau_2 = 5000$
rounds of information cascade sampling based on the seed set from the last iteration,
and then it updates $\Theta$ according to Lemma~\ref{lem:conf}, computes $\alpha(\Theta)$ and uses the {\lugreedy} algorithm to compute the seed set for the next round. The algorithm stops when $\alpha(\Theta)\ge \kappa=0.8$.
\paragraph*{\OESRIM}
This algorithm acts as a baseline, and it proceeds in a similar way to {\ICSRIM}.
Instead of sampling information cascades starting from the current seed set
as in {\ICSRIM}, {\OESRIM} only samples the {\em out-edges} of the seed set.
More specifically, in each iteration the algorithm samples every out-edge of
the seed set from the last iteration $5000$ times (for all three graphs), and then it updates $\Theta$ according to Lemma~\ref{lem:conf}, computes $\alpha(\Theta)$ and uses the {\lugreedy} algorithm to compute the seed set for the next round.
Note that for {\OESRIM}, $\alpha(\Theta)$ remains small (even as the number of samples increases) and cannot exceed the threshold $\kappa$ no matter how many iterations are processed;
therefore we terminate it once $\alpha(\Theta)$ is stable.
\subsubsection{$\bar{\alpha}$ as an Upper Bound}
Theorem~\ref{thm:main} shows that $\alpha(\Theta)\left(1-\frac{1}{e}\right)$ is
a lower bound for the robust ratio $g(\Theta,S^\lu_{\Theta})$.
We would also like to find some upper bound of $g(\Theta, S^\lu_{\Theta})$:
If the upper bound is reasonably close to the lower bound or match in trend of
changes, it indicates that $\alpha(\Theta)\left(1-\frac{1}{e}\right)$ is
a reasonable indicator of the robust ratio achieved by the {\lugreedy}
output $S^\lu_{\Theta}$.
For any $\theta\in \Theta$, we define $\bar{\alpha}(\Theta, \theta)
= \frac{\sigma_{\theta}\left( S^\lu_{\Theta} \right)}{\sigma_{\theta}(S_{\theta}^g)}$.
The following shows that $\bar{\alpha}(\Theta, \theta)$ is an upper bound for
$g(\Theta,S^\lu_{\Theta})$:
\begin{equation*}
\bar{\alpha}(\Theta, \theta) =
\frac{\sigma_{\theta}(S^\lu_{\Theta})}{\sigma_{\theta}(S_{\theta}^g)}
\ge
\frac{\sigma_{\theta}(S^\lu_{\Theta})}{\sigma_{\theta}(S^{*}_{\theta})}
\ge
\min_{\theta' \in \Theta} \frac{\sigma_{\theta'}(S^\lu_{\Theta})}{\sigma_{\theta'}(S^{*}_{\theta'})}
=
g(\Theta, S^\lu_{\Theta}) \mbox{.}
\end{equation*}
The next question is how to find a $\theta=(\theta_e)_{e\in E}\in \Theta$
to make the upper bound
$\bar{\alpha}(\Theta,\theta)$ as small as possible.
In our experiments, we use the following two heuristics and take their minimum.
The first heuristic borrows the intuition from Example~\ref{exp:tight}, which
says that the gap ratio $\alpha(\Theta)$ is close to the robust ratio
$g(\Theta, S^\lu_{\Theta})$ when (a) there are two disjoint seed sets with
similar influence spread, (b) their cascade overlap is small, and
(c) the edges reachable from one seed set take their lower-end parameter values while the edges reachable from the other take their upper-end values.
Thus, in our heuristic, we use the PMIA algorithm to find another seed set $S'$
of $k$ nodes
after removing all nodes in $S^\lu_{\Theta}$.
We then do information cascades from both $S^\lu_{\Theta}$ and $S'$ for an
equal number of times.
Finally, for every edge $e$, if it is sampled more in the information cascade with seed set $S^\lu_{\Theta}$ than with $S'$, we set $\theta_e=l_e$, otherwise we set $\theta_e=r_e$.
The second heuristic is a variant of the first one, where we run a number of
information cascades from $S^\lu_{\Theta}$, and for any edge $e$
that is sampled in at least $10\%$ of cascades, we set $\theta_e=l_e$,
otherwise we set $\theta_e=r_e$.
Other, more sophisticated heuristics are possible, but finding tighter upper
bounds for the robust ratio could be a separate research topic, and thus
we only use the simple combination of the above two in this
paper, which is already indicative.
We henceforth use $\bar{\alpha}(\Theta)$ to represent
the upper bound found by the minimum of the above two heuristics.
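The two heuristics for choosing $\theta$ can be sketched as below, where `counts_lu` and `counts_alt` record how often each edge was reached in the cascades from $S^\lu_{\Theta}$ and from $S'$; evaluating the resulting $\bar{\alpha}(\Theta,\theta)$ values additionally requires an influence-spread estimator, which is omitted here.

```python
def theta_pair(counts_lu, counts_alt, lower, upper):
    """Heuristic 1: edges reached more often from S^lu than from the
    disjoint alternative seed set S' take their lower-end value; all
    other edges take their upper-end value."""
    return {e: lower[e] if counts_lu.get(e, 0) > counts_alt.get(e, 0) else upper[e]
            for e in lower}

def theta_frac(counts_lu, n_cascades, lower, upper, frac=0.10):
    """Heuristic 2: edges sampled in at least `frac` of the cascades
    from S^lu take their lower-end value."""
    return {e: lower[e] if counts_lu.get(e, 0) >= frac * n_cascades else upper[e]
            for e in lower}
```

Both heuristics deliberately push the parameters of the edges that $S^\lu_{\Theta}$ relies on toward their lower ends, which is what makes the resulting $\bar{\alpha}$ a useful (small) upper bound.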
\subsection{Results}
\subsubsection{$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
with Predetermined Intervals}
In the first experiment we explore the relationship between
the width of confidence interval $\Theta=\times_{e\in E}[l_e,r_e]$ and $\alpha(\Theta)$ together
with $\bar{\alpha}(\Theta)$.
For a given interval width $W$,
we set $l_e=\max\{p_e-\frac{W}{2},0\}$ and $r_e=\min\{p_e+\frac{W}{2},1\}$
for all $e\in E$, where $p_e$ is the ground-truth probability of $e$.
Then we calculate $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$.
We vary the width $W$ to see the trend of changes of
$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$.
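The construction of $\Theta$ for a given width $W$, with each interval clipped to $[0,1]$, can be sketched as:

```python
def interval_space(p_true, width):
    """Parameter space Theta for interval width W: each edge e gets
    [p_e - W/2, p_e + W/2], clipped to the valid probability range."""
    return {e: (max(p - width / 2, 0.0), min(p + width / 2, 1.0))
            for e, p in p_true.items()}
```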
Figure~\ref{fig1} reports the result on the three graphs with seed set size $k=50$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{AR.pdf}
\caption{$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
for different widths of confidence interval $W$.}
\label{fig1}
\end{figure}
First, we observe that as the parameter space $\Theta$ becomes wider,
the values of both $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$ become smaller,
which matches our intuition that larger uncertainty results in worse
robustness.
Second, there is a sharp decrease of $\alpha(\Theta)$
between $W\in [0,0.1]$ and a much slower decrease afterwards
for all three graphs.
The decrease of $\bar{\alpha}(\Theta)$ is not as sharp as that of $\alpha(\Theta)$,
but it also slows down for larger $W$, beyond $0.2$.
The overall trends of $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
suggest that the robust ratio may be sensitive
to the uncertainty of the parameter space, and only when this uncertainty
is reduced to a certain level can we obtain a reasonable
guarantee on the robustness of our solution.
As a comparison, we know that the average number
of samples on each edge is $318$ for the learned probabilities in the
Flixster dataset.
This corresponds to an average interval width of
0.293 for topic 8 and 0.265 for the mixed topic.
At these interval widths, $\alpha(\Theta)$ values are approximately
$0.04$ and $0.08$ respectively for the two graphs, and
$\bar{\alpha}(\Theta)$ are approximately $0.12$ and $0.2$ respectively.
This means that, even considering the upper bound $\bar{\alpha}(\Theta)$,
the robust ratio is rather low, and thus the learned probabilities
reported in~\cite{barbieri2013topic} may result in quite poor performance for
robust influence maximization.
Of course, our results on $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$ only
target the robustness of our {\lugreedy} algorithm, and there could
exist better algorithms with higher robustness at the same
uncertainty level.
Finding a better RIM algorithm seems to be a difficult task, and
we hope that our study could motivate more research in searching for such better
RIM algorithms.
Besides $S^\lu_{\Theta}$, we also independently test the classical
greedy seed set $S^g_{\theta}$ for $\theta=(p_e)_{e\in E}$
on the lower parameter vector $\theta^-$
(that is, $\frac{\sigma_{\theta^-}(S^g_{\theta})}{\sigma_{\theta^+}(S^g_{\theta^+})}$ versus $\alpha(\Theta)$);
its average performance per data point is $2.45\%$, $1.05\%$ and $6.11\%$ worse than that of $S^\lu_{\Theta}$
for Flixster(Mixed), Flixster(Topic~8) and NetHEPT, respectively.
This shows that $S^\lu_{\Theta}$ outperforms $S^g_{\theta}$ in the worst-case scenario,
and henceforth we only use $S^\lu_{\Theta}$ in the following experiments.
\subsubsection{Results for Sampling algorithms}
Figures~\ref{fig2}, \ref{fig3} and \ref{fig4} report the results of
$\alpha=\alpha(\Theta)$ and $\bar{\alpha}=\bar{\alpha}(\Theta)$
for the three tested graphs respectively,
as the average number of samples per edge increases.
For better presentation, we trim each figure once
$\alpha$ of {\USRIM} reaches $0.7$.
(For example, in Flixster(Topic 8),
{\USRIM} requires $77318$ samples on average for $\alpha$ to reach $0.8$,
while {\ICSRIM} only needs $33033$, and for {\OESRIM} $\alpha$ sticks at $0.118$.)
For the sampling algorithms, after the $i$-{th} iteration, the observation
set is updated from $\mathcal{M}_{i-1}$ to $\mathcal{M}_i$,
and the average number of samples per edge in the network is calculated.
Markers on each curve in these figures represent the result after one
iteration of the corresponding sampling algorithm.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{NetHEPT.pdf}
\caption{$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
for different average number of samples per edge on graph NetHEPT.}
\label{fig2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Flixster8.pdf}
\caption{$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
for different average number of samples per edge on graph Flixster(Topic~8).}
\label{fig3}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{FlixsterMix.pdf}
\caption{$\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$
for different average number of samples per edge on graph Flixster(Mixed).}
\label{fig4}
\end{figure}
The results on all three graphs are consistent.
First, for each pair of $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$, even though
there is still some gap,
indicating that either the lower bound
or the upper bound may not yet be tight, the trends of
both $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$ are consistent:
both increase with the number of samples, with similar slopes at each
point; and across the different
algorithms, the ranking order and relative changes are consistent
under both $\alpha(\Theta)$ and $\bar{\alpha}(\Theta)$.
All of this consistency suggests that the gap ratio $\alpha(\Theta)$ can be used
as an indicator for the robustness of Algorithm {\lugreedy}, and that
it is reasonable to use $\alpha(\Theta)$ when comparing the performance
of different algorithms.
Second, comparing the performance of three algorithms, we see that
both {\USRIM} and {\ICSRIM} are helpful in improving the robust ratio of
the selected seed set, and {\ICSRIM} is better than {\USRIM}, especially
when the sample size increases.
The baseline algorithm {\OESRIM}, however, performs significantly worse
than the other two, even though it is an adaptive algorithm like
{\ICSRIM}.
The reason is that the lower-upper greedy set $S^{\lu}_{\Theta}$ changes little
after a certain number of iterations in {\OESRIM},
and thus only a small number of edges (the out-edges of $S^{\lu}_{\Theta}$)
are repeatedly sampled.
The probabilities on these edges are already estimated very accurately,
while the other edge probabilities remain far from accurate.
It is these inaccurate edges that keep $\alpha(\Theta)$ and the best robust ratio small.
In contrast, {\ICSRIM} uses information cascades to sample
not only edges directly connecting to the seed set but also edges
that can be potentially reached.
This suggests that it is important for a sampling method to balance the
sampling between critical edges and other potentially useful edges
in order to achieve better robustness in influence maximization.
Overall, the results suggest that the information cascade based sampling method
stands out as a competitive choice when we can adaptively sample more
edges to achieve better robustness.
If adaptive sampling is not possible, predetermined uniform sampling
may also perform reasonably well.
\section{Conclusion} \label{sect:conclusion}
In this paper, we propose the study of robust influence maximization to address
the impact on the influence maximization task of the uncertainty in edge
probability estimates that inevitably arises in practice.
We propose the {\lugreedy} algorithm with a proven solution-dependent bound,
and further propose sampling methods, in particular an information cascade
based adaptive sampling method, to effectively reduce the uncertainty and
increase the robustness of the {\lugreedy} algorithm.
The experimental results validate the usefulness of the {\lugreedy} algorithm
and the information cascade based sampling method {\ICSRIM}.
Moreover, the results indicate that robustness may be sensitive to the uncertainty
of parameter space, and learning algorithms may need more data to achieve
accurate learning results for robust influence maximization.
Our work opens up a number of research directions.
First, it is unclear what the upper bound on the best robust ratio is for
an actual network and learned parameter space.
Answering this question would help us understand whether robust
influence maximization is intrinsically
difficult for a particular network, or whether it is merely our algorithm that
does not perform well.
If it is the latter case, then an important direction is to
design better robust influence maximization algorithms.
Another direction is how to improve sampling methods and learning methods
to achieve more accurate parameter learning, which seems to be crucial
for robust influence maximization.
In summary, our work indicates a big data challenge on social influence research
--- the data on social influence analysis is still not big enough,
such that the uncertainty level in model learning may result in
poor performance for influence maximization.
We hope that our work will encourage further research to meet this
challenge from multiple aspects, including
data collection, data analysis, and algorithm design.
\section*{Acknowledgment} \label{sect:acknowledge}
The research of Wei Chen is partially supported by the National
Natural Science Foundation of China (Grant No. 61433014).
\section{Introduction} \label{intro}
It is now thought that most, if not all, of the stars in our Galaxy were
born in star clusters \citep[e.g.][]{lada95, lada03, mckee07}. And yet,
there remain several key details of the star formation process that are still
not understood. Part of the
problem lies in the fact that populations of young stars are typically
hidden by a dense veil of optically-thick gas and dust. This
prevents the escape of most of the light produced by infant stars, and
often renders these regions difficult to observe \citep[e.g.][]{grenier05,
lada07}. Most of
these clusters are sparsely populated and are of relatively low mass (M
$\lesssim$ 10$^4$ M$_{\odot}$) \citep[e.g.][]{lada85}. They are also
very young since clusters of such low mass are unlikely to survive for more than
1 Gyr \citep[e.g.][]{portegieszwart10}.
At the other end of the cluster mass spectrum, most massive star clusters (M
$\gtrsim$ 10$^4$ M$_{\odot}$) in our Galaxy tend to be at least a few
Gyrs old, and
in many cases are nearly as old as the Universe itself
\citep[e.g.][]{harris96, deangeli05}. These clusters have the advantage that
they are no longer obscured by the primordial gas
from which they formed, however they are a
dynamically active environment. As a result, the conditions present at the
time of their formation have now been largely erased
\citep[e.g.][]{portegieszwart01, hurley05, murray09}. This presents a
considerable challenge for studying star formation in the regime of
cluster masses and metallicities that characterize Milky Way globular
clusters. This is unfortunate since these old star clusters contain
the fossil record of a very
early episode of star formation in the Universe, and are the only
means of studying it locally in massive star clusters.
One of the primary
observational tests for star formation theories is the stellar initial
mass function (IMF). Current observational evidence suggests that the
IMF is very similar in different regions of our Galaxy,
including the disk and young star clusters
\citep[e.g.][]{elmegreen99, kroupa11}. However, this is still being debated
throughout the literature \citep[e.g.][]{scalo98, parravano11}.
Different star formation theories tend to predict different IMFs. These
vary with the properties of the gas clouds from which the
stars are born, including density, temperature and composition
\citep[e.g.][]{elmegreen01, bonnell07, mckee07, kroupa11}.
Given the sensitive nature of the observations, a
large sample of IMFs spanning the entire range of cluster properties
exhibited by star clusters in the Milky Way,
including total mass and chemical composition, has yet to be
compiled.
This is a sorely needed step in order to advance our understanding
of star formation by providing direct comparisons between observations
and theoretical
predictions. This is especially true of massive, metal-poor star
clusters since we are particularly
lacking observations of IMFs in this regime of cluster masses and
metallicites \citep[e.g.][]{mckee07,
portegieszwart10}. Important steps in this direction were
recently taken by \citet{demarchi10} and \citet{paust10}, who studied
the present-day mass
functions of a large sample of Galactic clusters
and considered the effects of the cluster dynamics in modifying them
from their primordial forms.
For the very first time, the Advanced Camera for Surveys (ACS) Survey for Globular Clusters has
provided photometry for a large sample of Milky Way globular clusters
(GCs) that reaches down to unprecedented faint magnitudes. This offers a
large sample of current stellar mass functions spanning the stellar mass range
$\approx 0.2 - 0.8$ M$_{\odot}$. All of the
clusters are massive and very old, with total masses and ages
ranging from $\approx$ 10$^4$ - 10$^6$ M$_{\odot}$ and $\approx$ 10-12 Gyrs,
respectively
\citep{harris96, deangeli05}. This has allowed significant time for
their stellar mass functions to have been modified from their
primordial forms due to both stellar evolution and stellar dynamics.
However, most of the processes responsible for this evolution are now
largely understood. Therefore, in principle, it is possible to use
current observations of old star clusters together with theoretical models
for their evolution to extrapolate backwards in time and indirectly
probe their IMFs.
For most of the life of a massive star cluster, two-body relaxation is the
dominant physical mechanism driving its evolution
\citep[e.g.][]{henon60, spitzer87, heggie03, gieles11}.
The term describes the cumulative effects of long-range gravitational
interactions that occur between pairs of
stars, which act to alter their orbits within the cluster. This results in a
phenomenon known as mass segregation, which is the tendency for
heavier stars to accumulate in the central cluster regions and
low-mass stars to be dispersed to wider orbits. This mechanism also causes
stars to escape from their host cluster, with
the probability of ejection increasing with decreasing cluster mass.
Therefore, two-body relaxation acts to slowly modify the distribution of stellar
masses within clusters, and can cause very dynamically evolved clusters to
appear severely depleted of their low-mass stars.
Evidence in favour of this process having actually
occurred in real star clusters has been reported by several authors
\citep[e.g.][]{vonhippel98, demarchi10}.
A number of theoretical studies have been conducted to learn how
the evolution of the stellar mass function (MF) in GCs is affected
by two-body relaxation, stellar evolution,
disc shocking, and tidal effects from the Galaxy (see \citet{baumgardt03}
for a detailed review). In the absence of these effects, we expect the MF
to continually rise toward lower stellar masses.
By performing a series of N-body simulations,
\citet{vesperini97} showed that tidal effects from the Galaxy, disc shocking,
and a higher initial central concentration all act to increase the rate of
stellar evaporation, and
accelerate the depletion of preferentially low-mass stars. These results
were confirmed and built upon by \citet{baumgardt03} who showed that the
depletion of low-mass stars can be sufficiently dramatic to change the
sign of the slope of the MF at the low-mass end.
Interestingly, these results were not
supported by the observational study of \citet{demarchi07}. These authors
analyzed the MFs in a sample of 20 Galactic GCs, and found that the slope of the
MF decreases with increasing central concentration. They argued that
this contradicts what is expected from theory since two-body relaxation is
responsible both for increasing the central density and flattening the MF at
the low-mass end. In an effort to explain this, they suggested that
many of the clusters in their sample could be post-core collapse, and therefore
had much higher central densities in the past. Alternatively,
\citet{marks08} argued that this can be explained by residual gas-expulsion
from initially mass segregated clusters \citep[e.g.][]{tutukov78}, and cautioned
that unresolved binaries could also be contributing.
Several theoretical studies have also been conducted to study the
dynamical histories of individual globular clusters \citep[e.g.][]{heggie08,
heggie09}. For example, \citet{zonoozi11} recently performed the first ever
direct N-body simulations of a Milky Way (MW) GC over its entire lifetime. This
was done for the distant GC Palomar 14, which has an unusually low-density and
large radius. The emphasis of this paper is to use the ensemble information
of many GCs to learn about the universality of the IMF in old massive star clusters.
Individual cases, in particular Pal 14, are often chosen for their peculiar
characteristics and may not be representative of the bulk of the GCs in the MW.
In this paper, we present a new technique to quantify
cluster-to-cluster variations
in the observed stellar mass functions of a large sample of clusters
spanning a diverse range of properties. Our method offers the
advantage that it is insensitive to the precise functional form of
the MF. We have applied it to a sample of 27 MFs
taken from the ACS Survey for Globular Clusters \citep{sarajedini07}.
Can the present-day MFs be explained by a universal IMF and stellar evaporation
induced by two-body relaxation? Or are cluster-specific IMFs needed
to reproduce the observed MFs? To address these questions,
we compared the results of our observational analysis to
268 Monte Carlo simulations for GC evolution.
The models spanned a range of initial masses, virial radii, central
concentrations and IMFs. Therefore, by evolving all of these models to
the current ages of the GCs in our sample and comparing the resulting
MFs to the present-day observed ones, we have quantified the
dynamical evolution of the MFs in our observed sample. This has
allowed us to take the first steps toward constraining both the exact
functional forms of the IMFs of MW GCs and
the conditions present at the time of their formation.
In Section~\ref{method}, we present our sample of observed stellar mass
functions and describe both our technique for analyzing the
observations and the models for globular cluster evolution to which
they are compared. The results of our analysis of
the ACS observations are presented in Section~\ref{results}, along
with an example comparison between the observations and the models. This
example demonstrates how our method can be used to compare a large
number of observed MFs to analogous samples of simulated MFs.
Finally, we discuss in Section~\ref{discussion} the implications of
our results for the conditions present in our observed clusters at the
time of their formation and the role played by two-body relaxation in
modifying the
stellar MF to its present-day
form.
\section{Method} \label{method}
In this section, we describe how we acquired our sample of mass
functions from the ACS data, as well as the Monte Carlo simulations
for globular cluster evolution used for comparison to the observations.
\subsection{The Data} \label{data}
The data used in this study was taken from the sample of 35 MW GCs
used in \citet{leigh11}, which was in turn taken
from the ACS Survey for Globular Clusters
\citep{sarajedini07}.\footnote[1]{The
data can be found at http://www.astro.ufl.edu/$\sim$ata/public\_hstgc/.}
The ACS Survey provides unprecedented deep photometry in the F606W ($\approx$
V) and F814W ($\approx$ I) filters
that is nearly complete down to $\approx 0.2$ M$_{\odot}$. In other
words, the colour-magnitude diagrams (CMDs) extend reliably from the
horizontal branch all the way down to about 7 magnitudes below the main-sequence
turn-off (MSTO). A list of the GCs used in this study is shown in
Table~\ref{table:list} along with their core radii (r$_c$), half-mass
radii (r$_h$), central luminosity densities ($\rho_0$), and central
concentration parameters (c).
These were taken directly from \citet{harris96}, with the exception of
the core and half-mass radii. The latter quantities are
given in parsecs and were calculated using the distance moduli and
extinction corrections provided in \citet{harris96}.
Each cluster was centred in the ACS field, which
extends out to several core radii from the cluster
centre in most cases. Coordinates for the cluster centres were
taken from
\citet{goldsbury10}. These authors found their centres by fitting
a series of ellipses to the density distributions within the inner 2'
of the cluster centre, and computing an average value.
\begin{table}
\centering
\caption{List of globular clusters and their structural parameters
\label{table:list}}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Cluster & Alternate & r$_c$ & r$_h$ & $\rho_0$ & c \\
ID & ID & (in pc) & (in pc) & (in L$_{\odot}$ pc$^{-3}$) & \\
\hline
104 & 47 Tuc & 0.47 & 4.11 & 4.88 & 2.07 \\
1261 & & 1.66 & 3.22 & 2.99 & 1.16 \\
2298 & & 0.97 & 3.08 & 2.90 & 1.38 \\
4147 & & 0.51 & 2.70 & 3.63 & 1.83 \\
4590 & M 68 & 1.73 & 4.51 & 2.57 & 1.41 \\
5024 & M 53 & 1.82 & 6.80 & 3.07 & 1.72 \\
5272 & M 3 & 1.10 & 6.84 & 3.57 & 1.89 \\
5286 & & 0.95 & 2.48 & 4.10 & 1.41 \\
5904 & M 5 & 0.96 & 3.85 & 3.88 & 1.73 \\
5927 & & 0.94 & 2.46 & 4.09 & 1.60 \\
5986 & & 1.43 & 2.97 & 3.41 & 1.23 \\
6093 & M 80 & 0.44 & 1.78 & 4.79 & 1.68 \\
6121 & M 4 & 0.59 & 2.18 & 3.64 & 1.65 \\
6171 & M 107 & 1.04 & 3.21 & 3.08 & 1.53 \\
6205 & M 13 & 1.29 & 3.51 & 3.55 & 1.53 \\
6218 & M 12 & 1.11 & 2.49 & 3.23 & 1.34 \\
6254 & M 10 & 0.98 & 2.49 & 3.54 & 1.38 \\
6304 & & 0.36 & 2.43 & 4.49 & 1.80 \\
6341 & M 92 & 0.63 & 2.45 & 4.30 & 1.68 \\
6535 & & 0.71 & 1.68 & 2.34 & 1.33 \\
6584 & & 1.02 & 2.86 & 3.33 & 1.47 \\
6637 & M 69 & 0.84 & 2.15 & 3.84 & 1.38 \\
6779 & M 56 & 1.21 & 3.02 & 3.28 & 1.38 \\
6838 & M 71 & 0.74 & 1.96 & 2.83 & 1.15 \\
6934 & & 1.00 & 3.14 & 3.44 & 1.53 \\
6981 & M 72 & 2.28 & 4.60 & 2.38 & 1.21 \\
7089 & M 2 & 1.08 & 3.56 & 4.00 & 1.59 \\
\hline
\end{tabular}
\end{table}
\subsection{Measuring the Stellar Mass Function} \label{criteria}
First, we used the available photometry to obtain estimates for
the masses of the stars in our sample. To do this,
we fit theoretical isochrones taken from \citet{dotter07}
to the CMDs of every cluster. Each isochrone
was generated using the metallicity and age of the cluster, and fit to
its CMD using the corresponding distance modulus and extinction
provided in \citet{dotter10}.
The MSTO was then defined using our isochrone fits by selecting the
bluest point along the main-sequence (MS).
We considered five stellar mass bins along the MS, spanning
0.25 - 0.75 M$_{\odot}$ in increments of 0.1 M$_{\odot}$. This
range was chosen to help ensure complete sampling in all bins since the
lowest MSTO mass in our sample corresponds to $\approx$ 0.75 M$_{\odot}$,
and the photometric errors remain small ($\lesssim$ 0.05 mag) within
the magnitude range for each stellar mass bin in every cluster.
We obtained number counts for
all stellar mass bins in the annulus r$_c <$ r $<$ 2r$_c$, where r is
the distance from the cluster center.
This reduced our sample size by five clusters since the spatial coverage
offered by the ACS field of view is incomplete in these cases.
We obtained completeness corrections for each stellar mass bin
in the annulus immediately outside the core (r$_c <$ r $<$ 2r$_c$).
This was done using the results of artificial star tests taken from
\citet{anderson08}.
Number counts for each mass bin were then multiplied by their
corresponding completeness corrections. We did not include core
number counts in our analysis
since our completeness corrections begin to exceed 50\% somewhere
inside the core for every cluster in our sample. This is due to
crowding and the high central surface brightnesses at the centres of
our clusters. We have entirely removed three clusters
from our original sample used in \citet{leigh11}, namely NGC 1851,
NGC 5139, and NGC 6652. This is because their completeness corrections
exceeded 50\% in every mass bin in the
annulus immediately outside the core. We also removed additional
clusters from our samples for the lowest three mass bins whenever their
completeness corrections exceeded 50\%. These clusters typically had
the highest MSTO masses. In total, this left us with
27, 27, 23, 20, and 15 clusters in each of the five mass bins, in order of
decreasing stellar mass. The completeness-corrected number counts for each
stellar mass bin have been provided in Table~\ref{table:counts}.
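The completeness-correction step described above reduces, in essence, to scaling each raw number count by the inverse of its completeness fraction and discarding bins that fall below the 50 per cent threshold. The sketch below illustrates this bookkeeping; the 50 per cent cut-off mirrors the text, but the function and its interface are our own illustration rather than the actual reduction pipeline.

```python
import numpy as np

def completeness_corrected_counts(raw_counts, completeness):
    """Scale raw number counts by 1/completeness, as described in the text.

    Bins whose completeness fraction falls below 50% are flagged with
    NaN so that they can be excluded from the subsequent analysis.
    """
    raw = np.asarray(raw_counts, dtype=float)
    comp = np.asarray(completeness, dtype=float)
    corrected = raw / comp        # correction factor is 1/completeness
    corrected[comp < 0.5] = np.nan
    return corrected
```

For example, a bin that is 80 per cent complete with 100 detected stars yields a corrected count of 125, while a bin that is only 40 per cent complete is dropped.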
\begin{table}
\centering
\caption{Completeness-corrected number counts for all five stellar mass
bins in the annulus r$_c$ $<$ r $<$ 2r$_c$
\label{table:counts}}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Cluster & MS1 & MS2 & MS3 & MS4 & MS5 \\
ID & (0.65 - 0.75 M$_{\odot}$) & (0.55 - 0.65 M$_{\odot}$) & (0.45 - 0.55 M$_{\odot}$) & (0.35 - 0.45 M$_{\odot}$) & (0.25 - 0.35 M$_{\odot}$) \\
\hline
104 & 15113 & 16056 & -- & -- & -- \\
1261 & 4652 & 4840 & 4747 & 4647 & 4441 \\
2298 & 1022 & 865 & 828 & 673 & 595 \\
4147 & 525 & 355 & 257 & 154 & 100 \\
4590 & 2830 & 3179 & 3793 & 4477 & 5543 \\
5024 & 9773 & 9626 & 9725 & 8114 & 5859 \\
5272 & 9763 & 10447 & 12097 & 12886 & 14339 \\
5286 & 9394 & 8330 & 6436 & 2765 & 941 \\
5904 & 5382 & 6726 & 9226 & -- & -- \\
5927 & 4304 & 4208 & 5244 & 6043 & -- \\
5986 & 10328 & 10936 & 12070 & 13012 & 13635 \\
6093 & 4356 & 2658 & -- & -- & -- \\
6121 & 1111 & 879 & -- & -- & -- \\
6171 & 1207 & 1049 & 968 & 1064 & -- \\
6205 & 11757 & 13012 & 16176 & -- & -- \\
6218 & 2480 & 2337 & 2348 & 2589 & -- \\
6254 & 4631 & 4826 & 5375 & 6394 & 7342 \\
6304 & 1806 & 1941 & -- & -- & -- \\
6341 & 4127 & 4019 & 3771 & 2456 & -- \\
6535 & 292 & 200 & 188 & 143 & 134 \\
6584 & 3083 & 3330 & 3624 & 3880 & 4498 \\
6637 & 3166 & 3192 & 3710 & 3818 & 1714 \\
6779 & 3273 & 2941 & 2983 & 3047 & 3088 \\
6838 & 592 & 634 & 654 & -- & -- \\
6934 & 2615 & 2416 & 2228 & 1836 & 1302 \\
6981 & 2731 & 2774 & 2914 & 2865 & 2847 \\
7089 & 12549 & 12388 & 10113 & 3458 & -- \\
\hline
\end{tabular}
\end{table}
\clearpage
The field of view of the ACS images is about
200'' on a side, which gives physical scales ranging between 1.5 and
16 pc (for the closest and furthest clusters in our sample). Based on
this, we expect foreground contamination by field stars to be
negligible for most of the clusters in our sample given their current
locations in the Galaxy. For example, \citet{dacosta82} considered
star count data
in a similar area and over a comparable range of stellar masses for
three nearby globular clusters. The author found that the
corrections resulting from field contamination were always less than
10\% over nearly the entire range of stellar masses we are
considering.
\subsection{Weighted Lines of Best-Fit} \label{lines}
In order to quantify cluster-to-cluster differences in the present-day
stellar mass functions of the clusters in our sample, we obtained
lines of best-fit for
(the logarithm of) the number of stars belonging to each stellar mass bin
versus (the logarithm of) the total number of stars spanning all five
mass bins, which provides a proxy for the total cluster mass.
This can be written:
\begin{equation}
\label{eqn:frac-bin}
\log N_{bin,i} = {\gamma_i}\log \Big( \frac{N_{tot}}{10^3} \Big) + \delta_i,
\end{equation}
where N$_{bin,i}$ is the number of stars belonging to mass bin $i$, N$_{tot}$
is the total number of stars spanning all five mass bins, and $\gamma_i$
and $\delta_i$ are both constants.
Our motivation for adopting this technique is as follows. If the
fraction of stars belonging
to each mass bin, or f$_{bin,i}$ $=$ N$_{bin,i}$/N$_{tot}$,
is constant for all cluster masses, then we would expect N$_{bin,i}$ to
scale linearly with N$_{tot}$. Or, equivalently, $\gamma_i$ $\approx$ 1 in
Equation~\ref{eqn:frac-bin}.
However, if there is any systematic dependence of f$_{bin,i}$ on the
total cluster mass, then we should find that N$_{bin,i}$ does
\textit{not} scale linearly with N$_{tot}$. In log-log space, the
slope of the line of best-fit for stellar mass bin $i$
should be less than unity (i.e. $\gamma_i$ $< 1$) if f$_{bin,i}$
systematically decreases with increasing cluster mass. Conversely,
we expect $\gamma_i$ $> 1$ if f$_{bin,i}$ systematically increases
with increasing cluster mass. This means that, for a sample of clusters
with a wide range of total masses, we expect $\gamma_i < 1$ for
the highest mass stars and $\gamma_i > 1$ for the lowest mass stars.
This is because clusters lose preferentially low-mass stars due to
two-body relaxation, and this process operates the fastest in lower
mass clusters.
Equation~\ref{eqn:frac-bin} quantifies the number of stars belonging
to each stellar mass bin as a function of the total cluster mass. More
generally, it provides a means of quantifying cluster-to-cluster
differences in the stellar mass function as a function of both the
stellar mass and the total cluster mass.
The lines of best-fit have been weighted by adopting uncertainties for the
number of stars in each mass bin using Poisson statistics.
Uncertainties for the slopes (i.e. for $\gamma_i$ in Equation~\ref{eqn:frac-bin})
were found using a bootstrap methodology in which we generated 1,000 fake
data sets by randomly sampling (with replacement) number counts from
the observations. We obtained lines of best fit for each fake data
set, fit a Gaussian to the subsequent distribution and extracted its
standard deviation.
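The fitting and bootstrap procedure can be sketched as follows. This is an illustrative reconstruction rather than the code actually used: we assume Poisson errors $\sigma(N) = \sqrt{N}$ propagated into log space, and we take the sample standard deviation of the resampled slopes directly instead of fitting a Gaussian.

```python
import numpy as np

def weighted_loglog_fit(n_bin, n_tot):
    """Weighted fit of log10(N_bin) vs log10(N_tot/1e3), as in the text.

    Assumes Poisson uncertainties sigma(N) = sqrt(N), which propagate to
    an error of 1/(sqrt(N) * ln 10) on log10(N). Returns (gamma, delta).
    """
    x = np.log10(np.asarray(n_tot) / 1e3)
    y = np.log10(np.asarray(n_bin))
    sigma_y = 1.0 / (np.sqrt(np.asarray(n_bin)) * np.log(10.0))
    w = 1.0 / sigma_y**2
    xbar = np.average(x, weights=w)
    ybar = np.average(y, weights=w)
    gamma = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar)**2)
    return gamma, ybar - gamma * xbar

def bootstrap_slope_error(n_bin, n_tot, n_boot=1000, seed=0):
    """Std of the slope over n_boot resamplings (with replacement)."""
    rng = np.random.default_rng(seed)
    n = len(n_bin)
    slopes = np.empty(n_boot)
    for k in range(n_boot):
        s = rng.integers(0, n, size=n)
        slopes[k] = weighted_loglog_fit(np.asarray(n_bin)[s],
                                        np.asarray(n_tot)[s])[0]
    return slopes.std()
```

For a perfect power law N$_{bin}$ $\propto$ N$_{tot}$ the fit recovers $\gamma = 1$ exactly, and the bootstrap scatter vanishes; departures from unity then carry the mass-function signal discussed below.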
\subsection{Monte Carlo Models} \label{models}
We have generated 268 Monte Carlo simulations for
globular cluster evolution spanning a range of initial total
numbers of stars, concentrations, virial radii and IMFs.
The models realistically take into account
both stellar and binary evolution, and track both short- and
long-range gravitational interactions between both single and binary
stars. Detailed explanations concerning the development of these models
can be found in \citet{joshi00}, \citet{joshi01}, \citet{fregeau03},
\citet{fregeau07}, and \citet{chatterjee10}.
For every combination of initial concentration (W$_0$), virial radius
(r$_{vir}$) and IMF slope ($\alpha$ in Equation~\ref{eqn:kroupa}), we
generated a series of models with different initial total numbers of
stars (i.e. total cluster masses). We adopted an IMF of the form:
\begin{equation}
\label{eqn:kroupa}
\frac{dN}{dm} = {\beta}m^{-\alpha},
\end{equation}
where $\alpha$ and $\beta$ are constants. This was taken from
\citet{kroupa01}, who fit a three-part
power-law to this function with $\alpha = 2.3$ for 0.50 $<$
$m$/M$_{\odot}$ $<$ 1.00, $\alpha = 1.3$ for 0.08 $<$
$m$/M$_{\odot}$ $<$ 0.50, and $\alpha = 0.3$ for 0.01 $<$
$m$/M$_{\odot}$ $<$ 0.08. We varied $\alpha$ only in the stellar
mass range 0.08 $<$ $m$/M$_{\odot}$ $<$ 0.50.
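For concreteness, the broken power law of Equation~\ref{eqn:kroupa} can be drawn from by inverse-transform sampling each segment, with the per-segment normalisations chosen to keep $dN/dm$ continuous at the break masses. The sketch below is our own illustration — it is not the initial-condition generator of the Monte Carlo code — using the fiducial Kroupa slopes; replacing the middle-segment slope with 0.4, 0.0, or -0.4 reproduces the grid of Table~\ref{table:initial-conditions}.

```python
import numpy as np

# Kroupa (2001) three-part power law, dN/dm proportional to m**(-alpha).
BREAKS = np.array([0.01, 0.08, 0.50, 1.00])   # segment edges in M_sun
ALPHAS = np.array([0.3, 1.3, 2.3])            # slope in each segment

def _segment_norms():
    """Per-segment prefactors enforcing continuity of dN/dm at the breaks."""
    betas = [1.0]
    for i in range(1, len(ALPHAS)):
        m = BREAKS[i]
        betas.append(betas[-1] * m ** (ALPHAS[i] - ALPHAS[i - 1]))
    return np.array(betas)

def _segment_integrals(betas):
    """Integral of beta * m**(-alpha) over each segment."""
    lo, hi, a = BREAKS[:-1], BREAKS[1:], ALPHAS
    return betas * (hi ** (1 - a) - lo ** (1 - a)) / (1 - a)

def sample_imf(n, seed=0):
    """Draw n stellar masses by inverse-transform sampling each segment."""
    rng = np.random.default_rng(seed)
    betas = _segment_norms()
    probs = _segment_integrals(betas)
    probs = probs / probs.sum()
    seg = rng.choice(len(ALPHAS), size=n, p=probs)
    u = rng.random(n)
    lo, hi, a = BREAKS[seg], BREAKS[seg + 1], ALPHAS[seg]
    return (lo ** (1 - a) + u * (hi ** (1 - a) - lo ** (1 - a))) ** (1.0 / (1 - a))
```

With these fiducial slopes, roughly half of all stars fall in the 0.08 - 0.50 M$_{\odot}$ segment, which is precisely the range in which $\alpha$ is varied in the models.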
Each model run was evolved for a period of 12 Gyrs,
which roughly coincides with the ages of the clusters in our sample
\citep[e.g.][]{deangeli05, marin-franch09}.
The resulting collection of simulations spanned roughly the same range of
total masses as our observed sample. The initial cluster
parameters considered in this paper are shown in
Table~\ref{table:initial-conditions}. With this suite of simulations, we
have only scratched
the surface in terms of exploring the total initial parameter space that
could be relevant to the GCs in our observed sample. However, our goal in this
paper is to demonstrate the strength of our technique for quantifying
cluster-to-cluster differences in the observed present-day MFs, and to show
by example how our method can be used to compare a large number of
observed MFs to analogous samples of simulated MFs. We defer a more
complete exploration of the total possible
parameter space of initial conditions to a future paper.
\begin{table}
\caption{Initial Model Parameters
\label{table:initial-conditions}}
\begin{tabular}{|l|c|}
\hline
Parameter & Initial Values \\
\hline
IMF slope ($\alpha$) & 1.3, 0.4, 0.0, -0.4 \\
Number of Stars & 1e5, 2e5, 4e5, 6e5, 8e5, 1e6, 2e6, 4e6 \\
Concentration (W$_0$) & 5.0, 5.5, 6.0, 6.5, 7.0 \\
Virial Radius (in pc) & 3, 4, 5 \\
\hline
\end{tabular}
\end{table}
The simulated clusters were placed on circular orbits at a distance
of 4 kpc from the
Galactic centre, and the resulting tidal effects from the Galaxy were
accounted for. We note that these effects are reduced by adopting
a smaller initial virial radius. The effects of tides
were typically small in all but those models
for which both the initial mass and concentration were very low, which
agrees with the results of previous studies \citep[e.g.][]{vesperini97}.
We assumed a metallicity of Z $= 0.001$ for all simulated clusters.
This roughly agrees with what is typically observed in
Galactic GCs \citep[e.g.][]{harris96}, and primarily
affects the rate of stellar mass loss due to winds early on in the
cluster lifetime when massive stars are still present
\citep[e.g.][]{chernoff90}. Although this does affect the rate of
dynamical evolution, the effect should be very similar from
cluster-to-cluster for the initial conditions considered in this paper.
This is because the mass loss that occurs early on
in the cluster lifetime due to stellar evolution has the greatest impact
on clusters with very low initial concentrations, and can significantly
reduce the time for cluster dissolution in these cases. However,
\citet{baumgardt03} showed that this effect is not severe for
the range of initial concentrations (W$_0$ = 5 - 7) considered here.
Finally, we assumed an initial global binary fraction of 10\% for all
model runs using the same binary orbital parameter distributions as
adopted in \citet{chatterjee10}. We will return to these assumptions
in Section~\ref{discussion}.
We generated simulated CMDs for every model run by converting the bolometric
luminosity of every star to its corresponding magnitude in
the ACS F814W band. This was done using the colour conversion routine of
\citet{pols98}, which uses the spectral libraries of \citet{lejeune97}
and \citet{lejeune98}. For binary stars, the magnitudes of the components
were combined in order to position them in the CMD as single objects.
Observations of star clusters are projected onto the plane of the sky,
whereas the output from our models provides only a 3-D distance from
the cluster centre for
every single and binary star. Therefore, it was necessary to
convert these 3-D distances to corresponding 2-D values. This was
done by assigning each star a random component along the line-of-sight
to the cluster and projecting its 3-D distance onto the plane of the
sky to obtain a 2-D value. Using these projected 2-D distances from the cluster centre,
we also generated surface brightness profiles for every model run and
re-calculated a 2-D core radius (defined as the distance from the
cluster centre at which the surface brightness falls to half
its central value). These 2-D core radii were then used to count
the number of objects (i.e. single and binary stars) belonging to each stellar
mass bin located within the annulus immediately outside the core
(i.e. r$_c <$ r $<$ 2r$_c$).
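The projection step can be written compactly. We assume an isotropic viewing angle for every star — the line-of-sight component is drawn uniformly in $\cos\theta$ — which is one standard choice; the exact randomisation used in the models is not spelled out above, so treat this as a sketch.

```python
import numpy as np

def project_radii(r3d, seed=0):
    """Project 3-D cluster-centric radii onto the plane of the sky.

    Assumes a random, isotropic viewing angle for every star: the
    line-of-sight component is z = r * u with u uniform in [-1, 1],
    so the projected radius is r_2d = r * sqrt(1 - u**2).
    """
    rng = np.random.default_rng(seed)
    r3d = np.asarray(r3d, dtype=float)
    u = rng.uniform(-1.0, 1.0, size=len(r3d))
    return r3d * np.sqrt(1.0 - u**2)
```

The same projected radii can then feed the surface brightness profiles and the 2-D core radius measurement described above; on average a star's projected radius is $\pi/4 \approx 0.79$ of its 3-D value.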
\subsection{Comparing the Observed and Simulated Present-Day Mass Functions}
For every combination of initial concentration, virial radius, and IMF
slope, the different runs corresponding to different initial numbers of
stars were grouped together. This gave us a sample of MFs spanning a range of
total cluster masses for every combination of initial conditions. The
selection criteria described in
Section~\ref{criteria} were then applied to each model, and lines of
best-fit were found
for each stellar mass. The slopes of these lines of best-fit (i.e. $\gamma_i$
in Equation~\ref{eqn:frac-bin})
were then compared to the corresponding observed slopes for
every stellar mass, and both the
chi-squared value and the probability that the two samples (i.e. the observed
$\gamma_i$'s for all five mass bins and a given set of theoretically-derived
$\gamma_i$'s for all mass bins) are drawn from the same distribution were found.
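The comparison reduces to a chi-squared over the five mass bins. A minimal sketch follows; the weighting by the observed slope uncertainties is our reading of the text, and the quoted probabilities would then come from a chi-squared distribution with five degrees of freedom (e.g. via `scipy.stats.chi2.sf`).

```python
import numpy as np

def chi_squared(gamma_obs, sigma_obs, gamma_model):
    """Chi-squared of model slopes against the observed slopes.

    Each mass bin contributes ((obs - model) / sigma_obs)**2; the
    associated probability can then be read off a chi-squared
    distribution with len(gamma_obs) degrees of freedom.
    """
    g_o = np.asarray(gamma_obs, dtype=float)
    s_o = np.asarray(sigma_obs, dtype=float)
    g_m = np.asarray(gamma_model, dtype=float)
    return float(np.sum(((g_o - g_m) / s_o) ** 2))
```

With the observed slopes and uncertainties of Table~\ref{table:bestfit}, a model whose $\gamma_i$'s all sit within one observed error bar gives $\chi^2 \lesssim 5$ for five bins.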
\section{Results} \label{results}
In this section, we present the results of both our observational
analysis and its comparison to the models.
\subsection{Observational Analysis} \label{obs_analysis}
We have plotted the logarithm of the number of stars in each stellar mass bin
versus the logarithm of the total number of stars spanning all five
mass bins in Figure~\ref{fig:Ncore_vs_Nms_2rc}.
The slopes and y-intercepts for the weighted lines of best-fit performed for
each of these relations
provided values for $\gamma_i$ and $\delta_i$ in Equation~\ref{eqn:frac-bin}.
These are shown in Table~\ref{table:bestfit},
along with their corresponding uncertainties ($\Delta$$\gamma$ and
$\Delta$$\delta$). Each table entry has been
provided in the form ($\gamma$ $\pm$ $\Delta$$\gamma$; $\delta$ $\pm$
$\Delta$$\delta$). The values for $\gamma_i$ have also been plotted
in Figure~\ref{fig:slopes}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig1.eps}
\end{center}
\caption[Logarithm of the number of
stars belonging to each stellar mass bin as a function of the
logarithm of the total number of stars spanning all five mass
bins in the annulus immediately outside the core]{The logarithm of the number of
stars belonging to each stellar mass bin (N$_{bin}$) as a function of the
logarithm of the total number of stars spanning all five mass
bins in the annulus immediately outside the core (N$_{r_c < r < 2r_c}$). In
descending order from top to bottom, the plots correspond
to number counts in the mass ranges 0.65 - 0.75 M$_{\odot}$ (MS1),
0.55 - 0.65 M$_{\odot}$ (MS2), 0.45 - 0.55 M$_{\odot}$ (MS3), 0.35 -
0.45 M$_{\odot}$ (MS4), and 0.25 - 0.35 M$_{\odot}$ (MS5). Lines of
best fit are shown for each mass bin by solid lines.
\label{fig:Ncore_vs_Nms_2rc}}
\end{figure}
\begin{table}
\centering
\caption{Lines of Best Fit for log N$_{bin,i}$ = ($\gamma$ $\pm$ $\Delta$$\gamma$)log (N$_{tot}$/10$^3$) + ($\delta$ $\pm$ $\Delta$$\delta$)
\label{table:bestfit}}
\begin{tabular}{|l|c|}
\hline
Stellar Mass & $\gamma$ $\pm$ $\Delta$$\gamma$; $\delta$ $\pm$ $\Delta$$\delta$ \\
\hline
MS1 (0.65-0.75 M$_{\odot}$) & 0.81 $\pm$ 0.09; 2.64 $\pm$ 0.12 \\
MS2 (0.55-0.65 M$_{\odot}$) & 0.91 $\pm$ 0.09; 2.50 $\pm$ 0.10 \\
MS3 (0.45-0.55 M$_{\odot}$) & 0.93 $\pm$ 0.03; 2.43 $\pm$ 0.04 \\
MS4 (0.35-0.45 M$_{\odot}$) & 0.99 $\pm$ 0.05; 2.33 $\pm$ 0.08 \\
MS5 (0.25-0.35 M$_{\odot}$) & 1.12 $\pm$ 0.11; 2.14 $\pm$ 0.17 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig2.eps}
\end{center}
\caption[Slopes for the lines of best-fit (i.e. $\gamma_i$) plotted as a
function of stellar mass]{Slopes for the lines of best-fit,
given by $\gamma_i$ in Equation~\ref{eqn:frac-bin}, plotted as a
function of stellar mass.
\label{fig:slopes}}
\end{figure}
As shown in Figure~\ref{fig:slopes}, $\gamma_i$ tends to
systematically increase with decreasing stellar mass. The
uncertainty for $\gamma_i$ is the highest
for the lowest mass bin (MS5 in Table~\ref{table:bestfit}).
This is because the photometric errors are the
highest at these dim magnitudes. However, the errors are consistently
at most $\approx$
10\% of the width in magnitude of their corresponding mass bin.
In an attempt to improve
upon these statistics, we have also calculated reduced chi-squared
values with added intrinsic dispersion for the relations for each mass
bin. That is, for each mass bin we added a constant term to the uncertainty for each
data point, found the uncertainty that yielded a reduced chi-squared of one,
and looked at the subsequent effects on the uncertainties for the
line of best-fit. Based on this, we appear to be slightly over-estimating the
uncertainties for the MS1, MS2 and MS3 mass bins using our bootstrap
approach, and slightly under-estimating them for the MS4 and MS5
bins.
The change in the distribution of stellar masses as a function of the
total cluster mass can be illustrated using pie charts, as shown in
Figure~\ref{fig:pie_charts}. Using the values for $\gamma_i$ and $\delta_i$
provided in Table~\ref{table:bestfit},
we have generated pie charts for three total numbers of stars
(spanning all five mass bins), namely N$_{tot}$ = 10$^5$, 10$^4$,
10$^3$ (from top to bottom in Figure~\ref{fig:pie_charts}). As is
clear, low-mass stars become more and more preferentially depleted
with decreasing total cluster mass.
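Each pie slice is simply Equation~\ref{eqn:frac-bin} evaluated with the best-fit coefficients of Table~\ref{table:bestfit} and renormalised. A short sketch of that calculation:

```python
import numpy as np

# Best-fit (gamma, delta) per mass bin from Table 3, MS1 ... MS5.
GAMMA = np.array([0.81, 0.91, 0.93, 0.99, 1.12])
DELTA = np.array([2.64, 2.50, 2.43, 2.33, 2.14])

def mass_bin_fractions(n_tot):
    """Fraction of stars in each mass bin implied by the best-fit relations."""
    n_bin = 10.0 ** DELTA * (n_tot / 1e3) ** GAMMA
    return n_bin / n_bin.sum()
```

For N$_{tot}$ = 10$^3$ the lowest-mass bin (MS5) holds roughly 10 per cent of the stars, growing to roughly 23 per cent at N$_{tot}$ = 10$^5$ — the depletion trend visible in the pie charts.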
From top to
bottom, the pie charts can be interpreted as depicting the evolution of
the stellar mass function with increasing dynamical age. This can be
understood as follows.
The inverse of the half-mass relaxation time can be used as a
proxy for the rate of two-body relaxation throughout the entire
cluster. The half-mass relaxation time ranges from several
million years to the age of the
Universe or longer, and is approximated by \citep{spitzer87}:
\begin{equation}
\label{eqn:t-rh}
t_{rh} = 1.7 \times 10^5[r_h(pc)]^{3/2}N^{1/2}[m/M_{\odot}]^{-1/2}
years,
\end{equation}
where $r_h$ is the half-mass radius (i.e. the radius enclosing half
the mass of the cluster), $N$ is the total number of stars
within $r_h$ and $m$ is the average stellar mass.
Simulations have shown that $r_h$ changes by a factor of at
most a few
over the course of a cluster's lifetime \citep{henon73, murray09}.
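Plugging representative numbers into Equation~\ref{eqn:t-rh} makes the proxy argument explicit; the parameter values below are illustrative, not measurements of any particular cluster.

```python
def half_mass_relaxation_time(r_h_pc, n_stars, mean_mass_msun):
    """Half-mass relaxation time in years (Spitzer 1987), as in the text."""
    return 1.7e5 * r_h_pc ** 1.5 * n_stars ** 0.5 * mean_mass_msun ** -0.5

# At nearly fixed r_h and mean stellar mass, t_rh scales as N**0.5,
# so the total number of stars tracks the degree of dynamical evolution.
t_small = half_mass_relaxation_time(3.0, 1e4, 0.5)   # ~1.2e8 yr
t_large = half_mass_relaxation_time(3.0, 1e6, 0.5)   # ~1.2e9 yr
```

Since the clusters in the sample have comparable ages and half-mass radii, the factor of $\sim$10 in $t_{rh}$ between these two cases is driven almost entirely by $N$, which is why total cluster mass serves as a rough dynamical-age proxy.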
The GCs that comprise the ACS sample show a range of masses spanning
roughly 3 orders of magnitude (10$^4$-10$^6$ M$_{\odot}$), and have
comparably old ages ($\approx$ 10-12 Gyrs) \citep{deangeli05,
marin-franch09}. Moreover,
their half-mass radii typically differ by less than a factor of 2 (see
Table~\ref{table:list}).
Therefore, Equation~\ref{eqn:t-rh} suggests that the total cluster
mass provides a rough proxy for the degree of
dynamical evolution due to two-body relaxation. In other words, the
effects of two-body relaxation on the evolution of
the stellar mass function should be the most pronounced in the least
massive clusters in the ACS sample \citep[e.g.][]{demarchi07,
baumgardt08, kruijssen09}. Said another
way, dynamical age increases with decreasing cluster mass. Therefore,
Figure~\ref{fig:pie_charts} shows that our results are consistent with
the general picture that two-body relaxation
is the cause of the observed depletion of low-mass stars in low-mass
clusters, as opposed to some unknown feature of the star formation
process.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig3.eps}
\end{center}
\caption[Mass Functions in Pie Chart Form]{
Stellar mass functions depicted in pie chart form. The total
area of each circle corresponds to the total number of stars
spanning all five stellar mass bins, and each pie slice shows the
fraction of this total corresponding to each mass bin. Each of
these fractions was calculated using the weighted lines of
best-fit provided in Table~\ref{table:bestfit}. From top to
bottom, the total number of stars used to generate each pie chart
was 10$^5$, 10$^4$, and 10$^3$. Darker pie slices correspond to
more massive bins. The sequence of pie charts progressing from top
to bottom effectively shows the evolution of
the stellar mass function with increasing dynamical age.
\label{fig:pie_charts}}
\end{figure}
\subsection{Theoretical Analysis} \label{theory_analysis}
In this section, we compare the results of our observational analysis
to 268 Monte Carlo simulations for globular
cluster evolution spanning a range of initial conditions.
Figure~\ref{fig:slopes-compare} shows a comparison between all five
$\gamma_i$ values found from the observed MFs and the corresponding
model $\gamma_i$ values for every combination of initial conditions.
As is clear, the agreement is excellent for nearly every combination of
initial IMF, concentration and virial radius. This was confirmed by
our chi-squared values, and the probability that the observed and
model $\gamma_i$'s are drawn from the same distribution exceeded 64\%
for all comparisons.
Our results suggest that a Kroupa IMF (i.e. $\alpha = 1.3$ in
Equation~\ref{eqn:kroupa}) typically gives the best agreement with the
observations. Every set of models with this IMF yielded a probability
greater than 93\% that the observed and model
$\gamma_i$'s are drawn from the same distribution.
Our results also appear to be relatively
insensitive to the initial concentration and virial radius. This agrees
with what was found by \citet{vesperini97} and \citet{baumgardt03} given
the limited ranges we have explored for these parameters.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig4colour.eps}
\end{center}
\caption[Comparison of the observed $\gamma_i$ values to
the corresponding model $\gamma_i$ values for every combination of initial
conditions]{Comparison of the observed $\gamma_i$ values
to the corresponding model $\gamma_i$ values for every combination of initial
conditions. The observed $\gamma_i$'s are shown as solid circles, whereas
the model $\gamma_i$'s are shown as different symbols. Each inset shows the model
$\gamma_i$'s for a different IMF. Starting at the upper right
and rotating clockwise, the insets correspond to IMF slopes
(in Equation~\ref{eqn:kroupa}) of -0.4, 0.0, 0.4, and 1.3. The online
version of this plot shows the model $\gamma_i$'s either as open or coloured
symbols which
have been used to indicate different initial concentrations. The solid blue
circles, solid red squares, solid green triangles, open black triangles, and
open five-point stars correspond to initial concentrations (W$_0$) of 5.0, 5.5,
6.0, 6.5, and 7.0, respectively. A small offset has been implemented for
every stellar mass bin to indicate different initial virial radii, with
the virial radius increasing from left to right. A horizontal line has also
been included on every observed data point in order to indicate the
width of the corresponding stellar mass bin.
\label{fig:slopes-compare}}
\end{figure}
\section{Summary \& Discussion} \label{discussion}
In this paper, we have presented a new technique to quantify
cluster-to-cluster variations in a large sample of observed stellar mass
functions. Our method quantifies these differences as a function of both
the stellar mass and the total cluster mass, and offers the advantage
that it is insensitive to
the exact functional form of the MF. We have applied our technique to
completeness-corrected stellar mass
functions in the range 0.25-0.75 M$_{\odot}$ for a sample of 27
globular clusters taken from the ACS Survey for GCs,
and have compared the results to a series of Monte Carlo models for
GC evolution.
In the subsequent sections, we discuss the implications of our results for the formation
and evolution of Milky Way GCs.
\subsection{The Effects of Two-Body Relaxation} \label{2-body}
We have shown that the observed
differences in the present-day MFs in our sample can be reproduced
by assuming (1) a universal initial mass function for all clusters, and
(2) that internal two-body relaxation is the dominant mechanism
contributing to the cluster-to-cluster variations.
Our results are
the most reliable in the mass range 0.45 - 0.75 M$_{\odot}$. This
is due to the larger photometric errors at the fainter magnitudes
corresponding
to lower stellar masses, and incompleteness resulting from crowding.
Despite the high quality of
the data used in this study, these issues are currently unavoidable
given the nature of the observations. This will be a key challenge
for future studies to resolve; however, the method we have presented
in this paper offers a robust means of performing future analyses.
We note that the results of our observational analysis
are consistent with those of \citet{demarchi10}, who fit a tapered
power-law distribution function with an exponential truncation to the
stellar mass functions of a sample of 30 clusters containing both
young and old members. We have also verified that our results are
consistent with those of
\citet{paust10}, who performed power-law fits to the MFs of 17 GCs
taken from the ACS Survey.
In this paper, we have focused on the \textit{local}
MFs in the central cluster regions of the GCs in our sample. However,
previous studies have shown that the \textit{local} MF can differ
considerably from the \textit{global} MF \citep[e.g.][]{vesperini97}.
In principle, this should not have affected our comparisons
between the observed and simulated MFs since we have considered the same
structural areas of the clusters in all cases.
\subsection{The Effects of Binary Stars} \label{binaries}
Binary stars are unresolved in GCs, appearing as single
objects located above the MS in the
cluster CMD. Therefore, we have included some objects in our number
counts that are in fact binaries masquerading as single stars.
Previous studies have shown that unresolved binaries can contribute to
flattening, or even inverting, the stellar mass function in the range
of stellar masses considered in this paper \citep[e.g.][]{marks08, marks10}.
Moreover, observational evidence suggests that
the binary fractions in GCs are inversely proportional to the total
cluster mass \citep[e.g.][]{sollima08, milone08, knigge09}.
In particular, the most massive MW GCs tend to have binary fractions on the order of
only a few percent \citep[e.g.][]{rubenstein97, cool02, davis08}, whereas
the least massive MW GCs tend to have larger binary fractions that can
even exceed 50\% in some cases \citep[e.g.][]{milone11}.
This suggests that unresolved binaries should have had the
largest effect on the MFs of the lowest mass clusters in our sample.
Therefore, unresolved binaries could also be contributing to the general
trend we have found of increasing $\gamma_i$ with
decreasing stellar mass.
In an effort to quantify the effects of unresolved binaries on our results,
we removed all binaries from our simulated MFs and re-performed our
weighted lines of best-fit. This confirmed that unresolved binaries
could indeed be contributing to the general trend we have found of
increasing $\gamma_i$ with decreasing stellar mass.
However, this effect was not significant in the models, in large part
because we assumed a constant initial global binary fraction of 10\%
for all clusters.
Reproducing the observed trend in $\gamma_i$ would require the
range in binary fractions between low- and high-mass clusters
to exceed the observed range by a factor $\gtrsim$ 2
\citep[e.g.][]{milone11}. Although the binary fractions generally
increase in the core over time in our simulations (see \citet{fregeau09}
for more details), they are not sufficiently high to have
significantly contributed to the trend of increasing $\gamma_i$ with
decreasing stellar mass. At the end of our simulations, the core
binary fractions are typically in the range 10-30\%.
In order to properly assess these effects,
objects that are in fact unresolved binaries should be identified and our
analysis of the observations should be re-performed. This could
be done using multi-band photometry since, if a given binary happens to
fall on a single star evolution track in one CMD, it is unlikely to fall
on the corresponding tracks in
other CMDs constructed using different wavelength bands. Stellar
evolution models could then be used to constrain the masses of the
component stars. We intend to address this issue in future work.
We expect that the influence of binaries on GC evolution
by, for example, acting as heat sources via hardening encounters
with single stars \citep[e.g.][]{hut83a, hut83b, fregeau09}, should
have had a negligible
impact on our results. This is because most of the clusters
in our sample should still be undergoing core contraction
\citep[e.g.][]{gieles11}. It follows that their central densities
have not yet
become sufficiently high for encounters involving binaries to occur
frequently enough that they could have significantly affected the cluster
evolution. Notwithstanding, future studies should incorporate
models spanning a range of realistic initial binary fractions and distributions
of orbital parameters in order to properly assess all of these effects.
\subsection{The Effects of Tides from the Galaxy} \label{tides}
Tides from the Galaxy effectively reduce the time-scale
on which two-body relaxation operates \citep[e.g.][]{heggie03}.
This primarily serves to make clusters appear more dynamically evolved
than they otherwise would. The same effect is caused by disc shocking
which, as with tidal effects, should most severely affect clusters with
the lowest masses and the smallest Galactocentric distances. Therefore,
the locations of clusters within their host galaxies have been shown to
play an important role in determining the degree of flattening of their
stellar MFs \citep[e.g.][]{vesperini97}. In an effort to quantify the
effects caused by tides on our observational results,
we performed several cuts in perigalacticon distance
and re-performed our weighted lines of best-fit. Estimates for the
perigalacticon distances were obtained from \citet{dinescu99} and
\citet{dinescu07} for every cluster in our sample. Despite
removing clusters with small perigalacticon distances
for which it is typically argued that tidal effects should be the most
severe \citep[e.g.][]{heggie03}, our lines of best-fit
remained the same. We caution that these effects
have not been fully accounted for in the simulations of GC evolution
performed in this paper, for which we adopted circular orbits and a
Galactocentric distance of 4 kpc in all cases. We intend to adopt
a more realistic distribution of Galactic orbits for our model clusters
in a future paper in order to properly assess the effects caused by tides
in determining the present-day MFs of the GCs in our observed sample.
\subsection{The Effects of Primordial Gas Expulsion} \label{gas-expulsion}
As discussed in \citet{marks08} and \citet{marks10}, the expulsion of primordial
gas early on in the cluster lifetime can have a dramatic effect on the stellar
MFs of clusters. In particular, these authors showed that
clusters that began their lives with smaller concentrations are more likely
to have lost a larger fraction of their low-mass stars as a result of this
effect.
Primordial gas expulsion could therefore contribute to improving the agreement we
have found between the observed and simulated MFs. This would be
accomplished if our $\gamma_i$ values for the models were
simultaneously increased for low stellar masses and decreased for high
stellar masses, as is evident from Figure~\ref{fig:slopes-compare}. This
would occur if primordial gas expulsion had a more pronounced effect on
the MFs of the lowest mass clusters in our sample.
This is not unreasonable since the depth of
the cluster potential increases with increasing cluster mass.
\subsection{Future Work} \label{future}
Given the very old ages and
therefore low metallicities ([Fe/H] $\approx -2.28$ to
$-0.37$) of the clusters in our sample, our technique could potentially
be used to better constrain the IMFs of old massive
star clusters and, more generally, star formation in the very early
Universe. This will be done in a forthcoming paper
by considering a larger range of IMFs and initial cluster
conditions than we have considered here.
Finally, we wish to point out that the method we have presented
can be generalized to compare any large sample of distribution functions.
We intend to illustrate this in a
future paper by using our technique to quantify cluster-to-cluster
differences in the orbital distributions (period, eccentricity, and
mass-ratio) of the binary populations in GCs as a function of the
total cluster mass.
\section*{Acknowledgments}
We would like to thank an anonymous referee for several
suggestions that helped to improve our manuscript. We also
wish to thank Christopher McKee for useful discussions, Aaron Dotter
and Roger Cohen for their support in analyzing the observations, and
Robert Cockcroft for a critical read of our manuscript. N.L. was
supported by Ontario Graduate Scholarships (OGS) and the European
Space Agency (ESA). S.U. was supported by NSF Grant AST-0607498
and NASA ATP Grant NNX09AO36G at Northwestern University.
\section{Introduction}
The Allen Brain Atlas (ABA, \cite{AllenGenome,AllenAtlasMol}) put neuroanatomy on a genetic basis by releasing voxelized,
\emph{in situ} hybridization data for the expression of the entire genome in the mouse brain ({\ttfamily{www.mouse-brain.org}}). These data were co-registered to
the Allen Reference Atlas of the mouse brain (ARA, \cite{ARA}). About 4,000 genes of special neurobiological interest
were prioritized. For these genes
an entire brain was sliced coronally and processed (giving rise to the coronal ABA). For the rest of the genome
the brain was sliced sagittally, and only the left hemisphere was processed (giving rise to the sagittal ABA).\\
From a computational viewpoint, gene-expression data from the ABA can be studied collectively, thousands of genes at a time.
Indeed the collective behaviour of gene-expression data is crucial for the analysis of \cite{cellTypeBased},
in which the brain-wide correlation between the ABA and cell-type-specific microarray data was studied.
These microarray data characterize the transcriptome of $64$ different cell types, microdissected
from the mouse brain, and collated in \cite{OkatyComparison}. However, for a given cell characterized in this way,
it is not known where other cells of the same type are located in the brain.
A linear model was proposed in \cite{cellTypeBased,supplementary1,supplementary2} (see also \cite{KoCellTypes,TanFrenchPavlidis,JiCellTypes}), and used to estimate
the region-specificity of cell types by linear regression with
positivity constraint. The model was fitted using the coronal ABA only, which allowed us to
obtain brain-wide results. However, this restriction implies that only one ISH expression profile per gene
was used to fit the model. This poses the problem
of the error bars on the results of the model.\\
\section{Spatial densities of cell types in the mouse brain from the ABA and transcriptome profiles}
All the ISH data in the ABA were co-registered to the
voxelized ARA, so that data for the sagittal and coronal atlases
can be treated computationally in the same way. However, the ABA does not specify from which cell type(s) the expression of each gene comes.\\
{\bf{Gene expression energies from the Allen Brain Atlas.}} In the ABA, the adult mouse brain is partitioned into $V=49,742$ cubic voxels of side 200 microns, to which ISH data are registered \cite{AllenGenome,AllenAtlasMol,ARA} for thousands of genes.
For computational purposes, these gene-expression data can be arranged into
a voxel-by-gene matrix. For a cubic voxel labeled $v$, the {\it{expression energy}} \cite{AllenGenome,AllenAtlasMol} of the gene $g$ is a
weighted sum of the greyscale-value intensities evaluated at the
pixels intersecting the voxel:
\begin{equation}
E(v,g) = {\mathrm{expression\;energy\;of\;gene\;labeled\;}}g\;{\mathrm{in\;voxel\;labeled\;}}v.
\label{ExpressionEnergy}
\end{equation}
The analysis of \cite{cellTypeBased} is restricted to digitized image series from the coronal ABA,
for which the entire mouse brain was processed in
the ABA pipeline (whereas only the left hemisphere was processed for the sagittal atlas).\\
{\bf{Cell-type-specific transcriptomes and density profiles.}} On the other hand, the
cell-type-specific microarray reads collated in \cite{OkatyComparison} (for $T=64$
different cell-type-specific samples studied in \cite{foreBrainTaxonomy, ChungCells, ArlottaCells, RossnerCells, HeimanCells,CahoyCells,DoyleCells,OkatyCells}) can be arranged in a type-by-gene matrix denoted by $C$, such that
\begin{equation}
C(t,g) = {\mathrm{expression\;of\;gene\;labeled\;}}g\;{\mathrm{in\;cell\;type\;labeled\;}}t,
\label{typeByGene}
\end{equation}
and the columns are arranged in the same order as in the matrix $E$ of expression energies defined in Eq. \ref{ExpressionEnergy}.\\
We proposed the following linear model in \cite{cellTypeBased} for a voxel-based gene-expression atlas in terms
of the transcriptome profiles of individual cell types and their spatial densities:
\begin{equation}
E(v,g) = \sum_t \rho_t(v)C( t,g) + {\mathrm{Residual}}(v,g),
\label{modelEquation}
\end{equation}
where the index $t$ labels the cell type, and $\rho_t(v)$ denotes its (unknown) density at the voxel labeled $v$.
The values
of the cell-type-specific density profiles were computed in \cite{cellTypeBased} by minimizing the value
of the residual term over all the (positive) density profiles, which amounts to solving a quadratic
optimization problem (with positivity constraint) at each voxel. These computations can be reproduced on a desktop
computer using the MATLAB toolbox {\ttfamily{Brain Gene Expression Analysis}} (BGEA) \cite{qbBGEA,BGEAManual}. For other applications
of the toolbox see \cite{markerGenes} for marker genes of brain regions, \cite{autismCoExpr, autismCoExpr2} for co-expression
properties of some autism-related genes, and \cite{eikonal} for computations of stereotactic coordinates.\\
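As an illustration, the constrained fit of Eq. \ref{modelEquation} at one voxel can be sketched with a minimal projected-gradient solver. This is a stand-in for the BGEA routines, with toy matrices that are purely hypothetical:

```python
import numpy as np

def nnls_projected_gradient(A, b, n_iter=5000):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient descent.

    A simple stand-in for the quadratic optimization with positivity
    constraint solved at each voxel."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
    return x

# Toy type-by-gene matrix C (T=2 cell types, G=3 genes) and one voxel's energies
C = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
rho_true = np.array([2.0, 3.0])
E_v = rho_true @ C                 # E(v, g) = sum_t rho_t(v) C(t, g), no residual
rho_fit = nnls_projected_gradient(C.T, E_v)
```

For the full atlas this fit is repeated independently at every voxel, which is what makes the computation tractable on a desktop machine.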
\section{Monte Carlo simulation of variability of spatial densities of cell types}
The optimization procedure in our model is deterministic. On the other hand, decomposing the density of a cell type
into the sum of its mean and Gaussian noise is
a difficult statistics problem (see \cite{Meinshausen2013}).
Some error estimates on the value of $\rho_t(v)$ were obtained in \cite{cellTypeBased}
using sub-sampling techniques (i.e.
sub-sampling the data repeatedly by keeping only a random 10\% of the
coronal ABA). This induced
a ranking of the cell types based on the stability of the results against sub-sampling.
However, the 10\% fraction is arbitrary (even though it is close to the
fraction of the genome covered by our coronal data set).\\
\begin{figure}
\includegraphics[width=0.99\textwidth]{meanFittings16.png}
\caption{{\bf{Heat map of the average density of cell types in the left hemisphere}},
$\langle \rho_t(v) \rangle$, defined in Eq. \ref{meanFittingDef}, for medium spiny neurons, labeled $t=16$ in our data set. The restriction
to the left hemisphere comes from the use we made of sagittal image series, which cover the left hemisphere only.}
\label{meanFittings}
\end{figure}
In the present work we simulated the variability of the spatial density of cell types by integrating the digitized
sagittal image series into the data set.
For gene labeled $g$, the ABA provides $N(g)$ expression profiles, where
$N(g)$ is the number of image series in the ABA for this gene.
Hence, instead of just one voxel-by-gene matrix,
the ABA gives rise to a family of $\prod_{g=1}^G N(g)$ voxel-by-gene matrices, with voxels belonging to the
left hemisphere. A quantity computed from the coronal ABA can be recomputed from any of these matrices, thereby inducing
a distribution for this quantity. This is a finite but prohibitively large number of
computations, so we took a Monte Carlo
approach based on $R$ random choices of image series, described by the following pseudo-code:\\
{\small\ttfamily
for all $i$ in $[1..R]$\\
1. for all $g$ in $[1..G]$, choose an image series labeled by the integer $n_i(g)$ in $[1..N(g)]$;\\
2. construct the matrix $E_{[i]}$ with entries $E_{[i]}(v,g) = E^{(n_i(g))}(v,g)$;\\
3. estimate the density of cell type labeled $t$ using this matrix, call the result $\rho_{t,[i]}$;\\
end
}
The larger $R$ is, the more precise the estimates for the distribution of the spatial density of cell types will be.
The only price we have to pay for the integration of the sagittal ABA is the restriction of the
results to the left hemisphere in step 2 of the pseudo-code.
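In code, the random draws of one image series per gene could look like the following sketch. The toy shapes are arbitrary; the actual data access goes through the BGEA toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

V, G, R = 50, 4, 100          # voxels, genes, random draws (toy sizes)
# series[g] holds the N(g) registered image series of gene g, each over V voxels
series = [rng.random((rng.integers(1, 4), V)) for _ in range(G)]

def draw_energy_matrix():
    """One voxel-by-gene matrix E_[i]: pick one image series n_i(g) per gene."""
    return np.column_stack([s[rng.integers(s.shape[0])] for s in series])

draws = [draw_energy_matrix() for _ in range(R)]
# each draw would then be fed to the per-voxel density fit to obtain rho_{t,[i]}
```

The distribution of any downstream quantity across the $R$ draws is then an estimate of its variability with respect to the choice of image series.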
\section{Anatomical analysis of results}
The average
density across random draws of image series for cell type labeled $t$ reads:
\begin{equation}
\langle \rho_t(v) \rangle = \frac{1}{R}\sum_{i=1}^R\rho_{t,[i]}(v).
\label{meanFittingDef}
\end{equation}
\begin{figure}
\includegraphics[width=1.1\textwidth]{fittingDistr16.png}
\caption{{\bf{Estimated probability densities of fractions of density
agglomerated in a few regions of the coarsest versions of the ARA}} (see Eq.
\ref{fittingDistrDef}), for medium spiny neurons, labeled $t=16$, based on $R=1000$ random draws.
The right-most peak, corresponding to the striatum, is well-decoupled from the others; furthermore,
the other peaks are all centered close to zero (making most of them almost invisible). Medium spiny neurons
have $93(\pm3)$ percent of their density supported in the striatum,
with no other region gathering more than 5 percent of the signal in any of the random draws.}
\label{fittingDistrs}
\end{figure}
A heat map of this average for medium spiny neurons (extracted from the striatum)
is presented in Fig. \ref{meanFittings}. It is visually very similar to the shape of the (left) striatum,
which leads the model to predict that medium spiny neurons are specific to the striatum (which confirms prior neurobiological knowledge
and therefore serves
as a proof of concept for the model).\\
To compare the results to classical neuroanatomy, we can group the voxels by region according to the ARA.
Since the number of cells of a given type is an extensive quantity, we compute the
fraction of the total density contributed by a given brain region denoted by $V_r$ (see the legend of Fig. \ref{fittingDistrs} for a list of possible values of
$V_r$):
\begin{equation}
\phi_{r,[i]}(t) = \frac{1}{\sum_{v\in\mathrm{left\;hemisphere}}\rho_{t,[i]}(v) }\sum_{v\in V_r} \rho_{t,[i]}(v).
\label{fittingDistrDef}
\end{equation}
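Given a region label per voxel, these fractions can be accumulated in one pass. The following is a sketch with hypothetical arrays, not the toolbox code:

```python
import numpy as np

def region_fractions(rho, region_of_voxel, n_regions):
    """phi_r = (sum of rho over voxels v in region r) / (sum of rho over all v)."""
    per_region = np.bincount(region_of_voxel, weights=rho, minlength=n_regions)
    return per_region / rho.sum()

rho = np.array([1.0, 2.0, 3.0, 4.0])        # density of one cell type per voxel
region_of_voxel = np.array([0, 0, 1, 1])    # ARA region index of each voxel
phi = region_fractions(rho, region_of_voxel, n_regions=2)
```

By construction the fractions over all regions sum to one for each cell type and each random draw.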
We can plot the distribution of these $R$ values for a given cell type and all brain regions (see Fig.
\ref{fittingDistrs} for medium spiny neurons, which give rise to the best-decoupled right-most peak
in the distribution of simulated densities). Moreover, we estimated the densities of the contribution of each region in the coarsest version
of the ARA to the total density of each cell type in the data set. For most cell types, this
confirms the ranking of cell types by stability obtained in \cite{cellTypeBased}, but based on error bars
obtained from the same set of genes in every fitting of the model (see the accompanying preprint \cite{accompanying}
for exhaustive results for all cell types in the panel). The most stable results against sub-sampling
tend to correspond to cell types for which the anatomical distribution of results is more peaked. The present analysis can be repeated
when the panel of cell-type-specific microarray data expands.\\
\ack{Microarray data were made available by Ken Sugino, Benjamin Okaty and Sacha B. Nelson. The Allen Atlas data were analysed under the guidance of Michael Hawrylycz and Lydia Ng. This work is supported by the Research Development Fund and the Research Conference Fund of Xi'an Jiaotong--Liverpool University.}
\section*{References}
\section{Introduction}
There has been a considerable interest in studying quarkonium properties at non-zero temperature since the
seminal paper of Matsui and Satz, that suggested that suppression of quarkonium production in heavy
ion collisions can signal creation of a deconfined medium \cite{Matsui:1986dk}. In-medium properties
of heavy quarkonium are encoded in the spectral functions, which in turn can be related to
Euclidean time meson correlation functions, see e.g. Ref. \cite{Bazavov:2009us} for a review.
Reconstruction of the quarkonium spectral functions from a discrete set of data points turned
out to be a very challenging task, see e.g. Refs. \cite{Wetzorke:2001dk,Karsch:2002wv,Datta:2003ww}.
At zero temperature quarkonium properties can be estimated quite well using potential models with
static quark antiquark potential
calculated in lattice QCD, see e.g. Ref. \cite{QuarkoniumWorkingGroup:2004kpm}.
It has been also proposed to calculate the quarkonium spectral function at non-zero temperature
using potential model with complex potential from lattice QCD \cite{Petreczky:2010tk,Burnier:2015tda}.
The complex potential at non-zero temperature is also important for real time modeling of
quarkonium production in heavy ion collisions \cite{Yao:2021lus,Rothkopf:2019ipj}.
The complex potential at non-zero temperature can be defined in terms of Wilson loops or correlators
of Wilson lines in Coulomb gauge, $W(r,\tau,T)$ through their spectral decomposition \cite{rothkopf2009proper,Rothkopf:2011db}
\begin{align*}
W(r,\tau,T)&=\int_{-\infty}^\infty d\omega \rho_r(\omega, T)e^{-\omega\tau}.
\end{align*}
Here $r$ is the spatial separation between the static quark and antiquark and acts as a label
for the spectral function of the static $Q\bar Q$ pair, $\rho_r(\omega, T)$.
If the spectral function has a peak at some value of $\omega$, the peak position gives the real part of
the potential
$ReV (r, T )$, while its width gives the imaginary part of the potential, $ImV (r, T )$.
We still need to reconstruct the spectral function in order to determine the potential, but
the structure of this spectral function is much simpler than that of the quarkonium spectral function, and thus should be easier to reconstruct.
One needs a large number of data points in the time direction to accomplish this task.
Current state-of-the-art calculations in 2+1 flavor QCD mostly use
lattices with temporal extent $N_\tau = 12$, with some $N_{\tau}=16$ lattices available only at high temperatures \cite{nt12pap}.
In this contribution we report preliminary calculations of the Wilson line correlators
on very fine lattices with lattice spacing $a^{-1} = 7.04$ GeV and temporal
extent up to $N_{\tau}=32$.
\section{Lattice setup}
We perform calculations at non-zero temperature in 2+1 flavor QCD on $96^3\times N_\tau$ lattices
with lattice spacing $a^{-1} = 7.04$ GeV. The strange quark mass, $m_s$, is fixed to its physical value,
while for the light quark mass we use $m_l=m_s/5$, which corresponds to a pion mass of about $300$ MeV
in the continuum limit.
We consider
$N_\tau = 32,~24,~20$ and $16$, which correspond to $T = 220,~294,~353$ and $441$ MeV, respectively.
We also generated
additional $T = 0$ ($64^4$) lattices to serve as a reference.
We use temporal Wilson line correlators in Coulomb gauge instead of Wilson
loops to obtain a better signal for the potential. However, this approach turns out to be insufficient
for our very fine lattices. Therefore, the temporal Wilson lines are constructed from HYP smeared \cite{Hasenfratz:2001hp}
temporal gauge links for $N_{\tau}=32$ and $24$, as well as for the zero temperature case.
We use 5 and 10 steps of HYP smearing. For the two highest temperatures we consider an alternative
noise reduction approach, which will be discussed below.
\section{Effective Masses}
Before reconstructing the spectral function $\rho_r(\omega,T)$ we need to understand
the $\tau$-dependence of the Wilson line correlators at different temperatures. This can be done
in terms of the effective mass
\begin{align*}
m_{\text{eff}}(r,\tau, T)&=\partial_\tau \ln W(r,\tau, T)
\\ &\simeq \frac{1}{a}\ln \frac{W(r,\tau, T)}{W(r,\tau+a, T)}.
\end{align*}
At zero temperature the effective mass approaches a plateau that corresponds to the ground state energy, i.e.
the static potential, $V(r)$. Our results for the effective masses at zero temperature are shown in Fig. \ref{fig:meff0}.
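On a discrete lattice, the logarithmic derivative reduces to the finite-difference expression in the second line above. A minimal sketch with a synthetic single-exponential correlator (the array names are ours, not part of the analysis code):

```python
import numpy as np

def effective_mass(W, a=1.0):
    """m_eff(tau) = (1/a) ln[ W(tau) / W(tau + a) ] for a correlator
    sampled at tau = 0, a, 2a, ..."""
    return np.log(W[:-1] / W[1:]) / a

# a pure ground-state correlator W = A exp(-V tau) gives a flat effective mass
tau = np.arange(12.0)
W = 3.0 * np.exp(-0.7 * tau)
m_eff = effective_mass(W)        # constant, equal to 0.7
```

Deviations from such a plateau are precisely what signal excited-state contamination at small $\tau$ and thermal broadening at large $\tau$.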
\begin{figure}
\includegraphics[width=8cm]{tzm8.pdf}
\caption{The effective masses for Wilson line correlator for $r/a=8$ for 0, 5 and 10 steps of HYP smearing.}
\label{fig:meff0}
\end{figure}
Since the potential has a divergent part proportional to the inverse lattice spacing and the coefficient
of this divergence depends on the number of smearing steps, the effective masses have been shifted by a
constant to match them in the plateau region. As one can see from the figure, for $\tau/a>3$ more smearing
steps lead to a faster approach to the plateau. However, for the two smaller $\tau$ values this is not the case,
and for the smallest $\tau$, the effective mass is the largest for 10 steps of HYP smearing.
Without HYP smearing we lose the signal for $\tau/a \gtrsim 12$ and therefore, we do not show the corresponding results.
At larger spatial separations the situation is even worse, and no hint of a plateau can be obtained from
unsmeared results. The HYP smearing distorts the $r$-dependence of the static potential. But these distortions
are limited to small distances. For distances $r/a>5$, which are relevant for our study, the distortions due to
HYP smearing are negligible compared to other sources of errors.
Next, we examine the temperature dependence of the effective masses. In Fig. \ref{fig:meffTdep} we show
the effective mass for $T=0$, $T=220$ MeV and $T=293$ MeV at two different distances, $r/a=8$ and $r/a=16$
and 10 steps of HYP smearing. The results for 5 steps of HYP smearing are similar.
We see that for small $\tau$ the temperature effects are quite small and grow with increasing $\tau$.
At non-zero temperature the effective masses do not show a plateau at large $\tau$ but an approximately linear
decrease followed by a very rapid strongly non-linear drop as $\tau$ approaches $1/T$. These features follow
from the general properties of the spectral function $\rho_r(\omega,T)$ \cite{nt12pap}.
The approximately linear decrease in the effective masses is due to the fact that the ground
state acquires a thermal width.
The sharp drop at large $\tau$ is caused by the low $\omega$ tail of the broadened peak \cite{nt12pap}.
The thermal effects are obviously larger at larger separations.
\begin{figure}
\includegraphics[width=7cm]{tvar_m8_h10.pdf}
\includegraphics[width=7cm]{tvar_m16_h10.pdf}
\caption{The effective masses at different temperature as function of $\tau$ for $r/a=8$ (left) and
$r/a=16$ (right). The results for 10 steps of HYP smearing are shown.}
\label{fig:meffTdep}
\end{figure}
For $N_\tau=20$ we tried another procedure for noise reduction. It is based on the idea that
for any fixed $\tau$ the Wilson line correlator is a smoothly decaying function of $r$.
The noise problem shows up at large values of $r$, where the correlation
function is available for many different spatial separations around a particular $r$ value that differ by fractions of
the lattice spacing.
Since the correlation function should be smooth in $r$ (lattice artifacts at large distances are negligible),
we can reduce the fluctuations
in the data set by performing
a smooth interpolation in $r$ in a narrow region of $r$.
We used second order polynomial interpolations in intervals of $r/a$ that are smaller than $0.9$ for $r/a\ge 20$.
The statistical fluctuations are largely reduced as a result of these interpolations.
Using the corresponding interpolations for each $\tau$
we can calculate the effective mass using the smoothened data for prescribed $r$ values.
In Fig. \ref{fig:meffnt20} we show the result of such an analysis. We indeed see that the interpolation
in $r$ really helps to reduce the statistical noise in the correlator. The right panel of this
figure shows that the procedure is very effective for large $r$ values, $r/a\ge 20$.
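The interpolation-based noise reduction can be sketched as a local quadratic fit in a sliding window of width $0.9$ in $r/a$. The data below are synthetic, and the actual analysis restricts the procedure to $r/a \ge 20$:

```python
import numpy as np

def smooth_in_r(r, W, window=0.9):
    """Replace W(r) by the value at r of a second-order polynomial fitted
    to all points with |r' - r| < window/2 (local quadratic smoothing)."""
    out = np.array(W, dtype=float)
    for i, ri in enumerate(r):
        mask = np.abs(r - ri) < window / 2.0
        if mask.sum() >= 3:              # need at least 3 points for a quadratic
            # fit in centered coordinates for numerical stability
            coef = np.polyfit(r[mask] - ri, W[mask], 2)
            out[i] = np.polyval(coef, 0.0)
    return out

r = np.linspace(20.0, 40.0, 101)         # r/a grid with spacing 0.2
W = np.exp(-0.1 * r)                     # smooth, decaying stand-in correlator
W_smooth = smooth_in_r(r, W)             # essentially unchanged for smooth input
```

Because the fit is local and low-order, a smooth underlying correlator passes through nearly unchanged while uncorrelated point-to-point noise is averaged down.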
\begin{figure}
\includegraphics[width=7cm]{Nt20_efmass_r20_extrapolated_r_vs_not.pdf}
\includegraphics[width=7cm]{Nt20_efmass_r20_to_40_extrapolated_r.pdf}
\caption{The effective masses for $T=353$ MeV ($N_\tau=20$) for different separations.
In the left panel we show the effective mass obtained using interpolation in $r$ and compared
to the standard result on the effective mass. In the right panel we show results obtained
from the interpolation procedure for $r/a=20,~25,~30,~35$ and $40$.}
\label{fig:meffnt20}
\end{figure}
\subsection{Subtracted correlators and comparison with the previous results}
In the previous section we have seen that at very small $\tau$
the temperature dependence of
the Wilson line
correlators is very small. This is expected to be due to the fact that at small Euclidean
times the correlation function mostly receives contributions from the high $\omega$ part of the spectral
function, and this part of the spectral function is largely temperature
independent, see e.g. Ref. \cite{Bazavov:2009us}.
\begin{figure}
\includegraphics[width=7cm]{t220_dmt8.pdf}
\includegraphics[width=7cm]{t220_dmt16.pdf}
\includegraphics[width=7cm]{t300_dmt6.pdf}
\includegraphics[width=7cm]{t300_dmt12.pdf}
\caption{The effective masses corresponding to the subtracted correlators at two distances:
$rT=1/4$ (left) and $rT=1/2$ (right). The top panel shows the results for $T=220$ MeV, while
the bottom panels show the results for $T=300$ MeV. We compare our results with the previous
$N_{\tau}=12$ and $N_{\tau}=16$ results without smearing, i.e. bare results.}
\label{fig:meff_subtr}
\end{figure}
Therefore, for the spectral function we can write \cite{Larsen:2019bwy,Larsen:2019zqv}
\begin{equation}
\rho_r(\omega, T)=\rho_r^{\text{peak}}(\omega, T)+ \rho_r^{\text{high}}(\omega),
\end{equation}
with $\rho_r^{\text{peak}}(\omega,T)$ corresponding to the ground state peak in the spectral function that is broadened at
non-zero temperature and may have a large low $\omega$ tail \cite{nt12pap}.
This equation implies that
\begin{equation}
W(r,\tau,T)=W^{\text{peak}}(r,\tau,T)+W^{\text{high}}(r,\tau).
\end{equation}
The zero temperature spectral function has a delta peak corresponding to the ground state,
$\rho_r(\omega, T=0)=A \delta(\omega-V^{T=0}(r))+\rho_r^{\text{high}}(\omega)$. Therefore,
by fitting the ground state contribution at zero temperature and then subtracting it from
$W(r,\tau,T=0)$ we can estimate $W^{\text{high}}(r,\tau)$. If we subtract $W^{\text{high}}(r,\tau)$
from the finite temperature Wilson line correlator we obtain the subtracted correlator that is mainly
sensitive to the temperature dependent peak part of the spectral function, $\rho_r^{\text{peak}}(\omega, T)$.
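This subtraction can be sketched as follows with synthetic correlators; the actual fit window and ground-state parametrization used in the analysis may differ:

```python
import numpy as np

def subtracted_correlator(tau, W_T, W_T0, plateau):
    """W_sub(tau) = W(tau, T) - W_high(tau), where
    W_high = W(tau, T=0) - A exp(-V tau) from a ground-state fit at T=0."""
    # fit ln W(T=0) = ln A - V tau on the plateau (large-tau) region
    slope, ln_A = np.polyfit(tau[plateau], np.log(W_T0[plateau]), 1)
    V = -slope
    W_high = W_T0 - np.exp(ln_A) * np.exp(-V * tau)
    return W_T - W_high

tau = np.arange(1.0, 17.0)
W_T0 = 2.0 * np.exp(-0.30 * tau) + 0.5 * np.exp(-1.5 * tau)  # ground + high part
W_T = 2.0 * np.exp(-0.28 * tau) + 0.5 * np.exp(-1.5 * tau)   # shifted peak, same high part
W_sub = subtracted_correlator(tau, W_T, W_T0, plateau=slice(8, None))
# W_sub is now close to the (thermally modified) ground-state piece alone
```

In this toy setup the high-$\omega$ part is identical at zero and non-zero temperature by construction, which is exactly the assumption behind the subtraction.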
Therefore, it is important to analyze the effective masses for the subtracted correlator. The
corresponding results are shown in Fig. \ref{fig:meff_subtr}. We also compare our results with
the ones obtained on $N_{\tau}=12$ and $N_{\tau}=16$ lattice without HYP smearing \cite{nt12pap}.
The effective masses show a linear decrease in $\tau$ when the Euclidean time separation is far away from $1/T$.
For $\tau$ close to $1/T$ we see a faster non-linear dependence. These features of the subtracted
effective masses are also present in the unsmeared $N_{\tau}=12$ and $N_{\tau}=16$ data and we find
a very good agreement with the corresponding results, see Fig. \ref{fig:meff_subtr}.
At large times the new results have significantly smaller errors as a result of HYP smearing.
The significant smearing dependence of the effective masses at small $\tau$ is largely reduced due to the subtraction, i.e.
HYP smearing was mostly affecting the high energy part of the spectral function. The subtracted
effective masses show some non-monotonic behavior at very small $\tau$. This
is due to distortions from HYP smearing, which affect the zero-
and finite-temperature correlators slightly differently. The unsmeared data, on the other hand, show the expected behavior.
\section{Conclusion}
To obtain the complex static $Q \bar Q$ potential at non-zero temperature one needs to
calculate Wilson loops or Wilson line correlators on fine lattices with large temporal extent.
This, however, is challenging because of the poor signal to noise ratio. Therefore, in this contribution
we explored two possible avenues for noise reduction. The first one is to use HYP smearing on the
temporal links. We observed that using 5 and 10 steps of HYP smearing on lattices with $a^{-1}=7.04$ GeV
at $T=220$ MeV and $T=330$ MeV provides a good signal for all relevant $Q\bar Q$ separations.
The distortions due to HYP smearing are limited to very small $\tau$, namely $\tau/a<5$. Therefore,
HYP smearing is a viable strategy for noise reduction. We also explored interpolation in $r$ to
reduce the noise. This approach worked well at the two highest temperatures, $T=353$ and $441$ MeV.
The subtracted effective masses agree well with the previous calculations performed on $N_{\tau}=12$
and $N_{\tau}=16$ lattices but have much smaller errors for large $\tau$ values. Using the presented
data on the effective masses of the Wilson line correlators we can obtain the complex static $Q\bar Q$ potential
if a suitable Ansatz for the spectral function is introduced. The corresponding analysis is currently underway.
\section*{Acknowledgement}
This material is based upon work supported by the U.S. Department of Energy, Office of Science,
Office of Nuclear Physics: (i) Through the Contract No. DE-SC0012704;
(ii) Through the Scientific Discovery through Advance Computing (SciDAC) award
Computing the Properties of Matter with Leadership Computing Resources.
R.L., G.P. and A.R. acknowledge funding by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. Part of the data analysis has been carried out on computing resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions".
J.H.W.’s research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 417533893/GRK2575 ``Rethinking Quantum Field Theory''.
D.B. and O.K. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'– project number 315477589 – TRR 211.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
Some of the numerical calculations have been performed on JUWELS Booster J\"ulich Supercomputing Center using allocation from PRACE.
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:Introduction}
Granular packing is ubiquitous in everyday life. It is common knowledge that a denser granular pack can be achieved by tapping the pack. Clogged granular flow can be unjammed and structural foundations of buildings strengthened with tapping. Indeed, the first thing one does when in trouble with handling granular materials is to tap the container. Nevertheless, the physical mechanisms concerning the effect of tapping on granular packs are not yet completely understood. Recent investigations on granular compaction have used the dimensionless maximum acceleration $\Gamma=\alpha_{max}/g$ to characterize the strength of tapping and/or vibration applied to a granular pack, where $\alpha_{max}$ is the maximum acceleration and $g = 9.8$ m/s$^{2}$ is the gravitational acceleration~\cite{Katsuragi2015,Knight1995,Nowak1997,Josserand2000,Dijksman2009,Philippe2002,Philippe2003,Lumay2005,Lumay2006,Arsenovi2006,Ribiere2007}. Most previous studies have used steady vibration to cause granular compaction. The final state attained by steady vibration is solely determined by the $\Gamma$ value~\cite{Knight1995,Nowak1997,Philippe2002,Arsenovi2006,Ribiere2007}. The most efficient compaction is induced at a value of $\Gamma \simeq 2$~\cite{Knight1995,Nowak1997,Philippe2003}. When $\Gamma$ is too small, the compaction proceeds slowly, growing only logarithmically with time. When $\Gamma$ is too large, on the other hand, it is difficult to attain the highly compacted state as a large amount of kinetic energy is delivered to the granular pack in such a strong vibration~\cite{Philippe2003}. However, granular compaction also depends on vibration history~\cite{Josserand2000,Dijksman2009}. Although steady vibration has a well-defined maximum acceleration value, it represents one particular instance of granular compaction. In general, natural vibration or tapping applied to a granular pack is somewhat irregular.
Hence, granular compaction induced by irregular perturbations such as manual tapping must be examined to understand compaction processes that occur in nature.
To diagnose the physical mechanism of granular compaction, access to the inner stress structure created by the granular pack is necessary. In general, granular packs exhibit an inhomogeneous stress distribution which can be characterized by a network of force chains. This force chain structure is peculiar to granular assemblies and causes their complex rheological behaviors. The force chain structure can be visualized in the two-dimensional (2D) case. Using a 2D pack of photoelastic disks, the force chains can be observed via the retardation due to stress-induced birefringence of the photoelastic material~\cite{Howell1999,Geng2001,Majmudar2005,Bandi2013,Puckett2013,Zheng2014,Behringer2014}. Using photoelastic disks, the force applied to each disk can be measured~\cite{Howell1999,Geng2001}. More recently, the applied force has been decomposed into the normal and tangential components using a computational image matching method~\cite{Majmudar2005,Puckett2013}. Relations among shearing, isotropic compression, and jamming have been intensively studied using photoelastic disks~\cite{Majmudar2005,Bandi2013,Puckett2013,Zheng2014,Behringer2014}. To the best of our knowledge, however, tapping-induced granular compaction has not been studied with photoelastic disks. Therefore, we carry out an experiment with photoelastic disks to approach the physics of granular compaction via manual tapping.
In this paper, we report the details of the experimental methodology developed to study granular compaction. A 2D granular pack consisting of photoelastic disks is constructed. Then, manual taps are applied to the granular pack. The evolution of the packing fraction and force chain structure in the granular pack is characterized by image analysis of the photoelastic disks. Using the analysed results, the relation between the packing fraction and the force chain structure is discussed to reveal what happens in the compacted granular pack.
\section{Experiment}
\subsection{Setup}
\label{sec:Setup}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Setup_1200dpi.eps}
\end{center}
\caption{Top view of the optical setup of the experiment. The experiment is carried out in a dark room to prevent stray light. The distance between the light source and the camera is about 1 m to make uniform angles of incident light into the camera. The 2D experimental chamber is placed vertically in front of the light source, which is attached to a circular polarizer. A snapshot of the chamber is taken with a CCD camera (Nikon D70). The circular polarizer in front of the camera is set at $90^{\circ}$ (cross-polarisation mode) relative to the other. Two types of images are obtained with and without the circular polarizer in front of the camera.}
\label{fig:device_manual}
\end{figure}
The experimental setup, as shown in Fig.~\ref{fig:device_manual}, consists of a 2D experimental chamber constructed with acrylic plates along the front and back and held together by aluminium bars along the sides and bottom. The chamber dimensions are $0.3 \times 0.5 \times 0.011$ m in height, width, and thickness, respectively. An accelerometer (EMIC Corp. Model: 710-C) is mounted on the top right corner of the chamber to measure the maximum acceleration ($\alpha_{max}$) experienced during the experiment. The chamber is filled with a bidisperse (to avoid crystallisation) set of 350 large (diameter 0.015 m) and 700 small (diameter 0.01 m) photoelastic disks of 0.01 m thickness (Vishay Micro Measurements, PSM-4). The chamber is vertically placed between a circularly polarised LED light source and a CCD camera (Nikon D70), which acquires two types of images, $2000 \times 3008$ pixels in size, corresponding to a spatial resolution of $1.76 \times 10^{-4}$ m/pixel (MPP). The camera is placed 1 m in front of the experimental chamber. First, a bright field image (Fig.~\ref{fig:Bright_and_dark_field_pictures}a) of the granular pack is acquired to measure the packing fraction as well as the disk centres and diameters for estimation of the force per disk. A second, dark field image (Fig.~\ref{fig:Bright_and_dark_field_pictures}b) is acquired by placing a second circular polariser between the experimental chamber and the camera in cross-polarisation mode. This image provides the photoelastic intensities of the granular force chains. Images are acquired under dark room conditions to minimise ambient noise from extraneous illumination.\\
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Bright_and_dark_field_pictures_h.eps}
\end{center}
\caption{(a) An example of a bright field image from which the packing fraction and the positions of the photoelastic disks are obtained. (b) The corresponding dark field image from which the structure of the force chains is analysed.}
\label{fig:Bright_and_dark_field_pictures}
\end{figure}
\subsection{Experimental Protocol}
\label{sec:Experimental Protocol}
Prior to the start of the experiment, an initial configuration of low packing fraction is generated. It is preferable that the initial packing fraction be small, since this study focuses on granular compaction via manual tapping. However, when disks are introduced into a vertically standing chamber, initial compaction occurs from disk impacts. Therefore, the disks were introduced by spreading them in the chamber while it was laid down horizontally, and the chamber was then fixed vertically, thus assuring a small initial packing fraction. A pair of bright and dark field images is then acquired for this initial configuration.
The system is then perturbed by applying manual taps to the experimental chamber. In particular, each manual tap is defined as adding two impulses to each bottom edge of the experimental chamber. Whereas this tapping protocol is not systematically controlled as in the case of an electromagnetic shaker, for instance, it was specifically chosen to mimic the stochastic impulse forcing observed in many natural processes. In any event, the accelerometer attached to the experimental chamber measures the acceleration experienced during taps, from which the dimensionless acceleration is defined as $\Gamma \equiv \frac{\alpha_{max}}{g}$. The experiments reported here are in the regime $\Gamma \simeq 3 - 4$. This tapping acceleration is large enough to achieve efficient compaction. Following each manual tap, a pair of bright and dark field images is acquired for subsequent analysis to determine the evolution of the packing fraction ($\phi$), force per disk ($F_d$), and force chain segment length ($l$) as a function of the tapping number $\tau$. Each experimental run consists of the initial configuration followed by nine manual taps ($\tau = 9$), thus providing ten pairs of bright and dark field images per run. Nine experimental runs under identical experimental conditions were conducted.
\subsection{Image Analysis}
\label{sec:Image Analysis}
Here we explain the image analysis methods employed to extract the packing fraction $\phi$, force per disk $F_d$, and force chain segment length $l$, from the bright and dark field images. The image analysis software, ImageJ \cite{abramoff2004}, was used to analyse the experimental image data.\\
\subsubsection{Determination of Packing Fraction}
\label{sec:Determination of Packing Fraction}
In this study, we define the packing fraction as $\phi = S_t/(S_m + S_v)$, where $S_t$ is the theoretical total area of the photoelastic disks, $S_m$ is the total area of photoelastic disks measured from the bright field images, and $S_v$ is the total void area measured from the bright field images. Whereas theoretically, $S_t = S_m$, in reality $S_m/S_t \simeq 1.1$ due to the optical distortion of the image between the centre and edges of the bright field image (see Fig.~\ref{fig:Bright_and_dark_field_pictures}a) and the thickness gap between disks and the chamber wall. When granular compaction occurs under manual tapping, whereas $S_m$ remains almost constant, $S_v$ decreases due to reduction in area of the voids between disks. Therefore, a measurable increase in packing fraction is observed with tapping number $\tau$. Often, the packing fraction is calculated as the ratio of the area occupied by the photoelastic disks to the total chamber area. This definition is reasonable when the granular pack is enclosed on all sides. However, since this experiment is conducted with the upper side of the experimental chamber left open, an accurate estimation of the total chamber area is not possible. The same situation also arises in estimation of packing fraction for granular heaps or sand piles \cite{Clement1992}.
For calculation of the packing fraction $\phi$, the theoretical area of the disks $S_t$ was first calculated from the known number of large and small disks, whose diameters were already available, yielding $S_t = 1.17 \times 10^{-1}$ m$^2$ for the experiments reported here. For calculation of $S_m$ and $S_v$, the bright field image was first binarized using ImageJ, which resulted in dark disks on a bright background. The number of dark pixels multiplied by the square of the spatial resolution (MPP$^2$) then provided the value of $S_m$, and inversion of the image was used to obtain the value of $S_v$.\\
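As a minimal illustrative sketch (not the authors' actual ImageJ workflow), the packing-fraction computation described above can be expressed as follows; the resolution value is the MPP quoted in section~\ref{sec:Setup}, and the binary image is assumed to be already cropped to the region occupied by the granular pack:

```python
import numpy as np

MPP = 1.76e-4  # m/pixel, spatial resolution of the setup


def packing_fraction(binary_img, s_t):
    """phi = S_t / (S_m + S_v), following the paper's definition.

    binary_img : 2D bool array, True on disk pixels, False on void pixels
                 (assumed cropped to the granular pack region).
    s_t        : theoretical total disk area in m^2 (1.17e-1 here).
    """
    pixel_area = MPP ** 2
    s_m = binary_img.sum() * pixel_area     # measured disk area
    s_v = (~binary_img).sum() * pixel_area  # measured void area
    return s_t / (s_m + s_v)


# toy check: a fully occupied region with S_t equal to S_m gives phi = 1
img = np.ones((100, 100), dtype=bool)
s_t = img.size * MPP ** 2
print(packing_fraction(img, s_t))
```

In the experiment $S_m/S_t \simeq 1.1$ due to optical distortion, so $\phi$ stays below 1 even for a dense pack.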
\subsubsection{Characterization of photoelastic intensity gradient}
\label{sec:Characterization of photoelastic intensity gradient}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Identification_h2.eps}
\caption{Disk identification and measurement of force per disk from image analysis. The area and centre of each disk are obtained from the bright field image (a). A sample disk centre and circumference are shown in red for a large disk. This information is then used in the corresponding dark field image (b) to obtain the photoelastic signal at disk contacts, and subsequent analysis is employed to measure the force per disk.}
\label{fig:Identification}
\end{center}
\end{figure}
Extant studies have used the photoelastic signal to measure contact forces in one of two ways. The first method estimates the force per disk by using the photoelastic intensity gradient \cite{Howell1999, Zheng2014}. The second method estimates the force per disk via computational image matching \cite{Majmudar2005, Puckett2013}. This study applies the former method for measurement of force per disk, as the image resolution obtained is insufficient to measure forces by the latter computational matching scheme. The algorithm applied here for force measurement is similar to the one adopted by Howell et al. \cite{Howell1999}.
For the given intensity $I$ of each image pixel (8 bit, gray scale), a Sobel filter was applied to obtain the squared horizontal $(\nabla I_h)^2$ and vertical $(\nabla I_v)^2$ gradients in intensity. Their sum $|\nabla I|^2 \equiv (\nabla I_h)^2 + (\nabla I_v)^2$ provides the squared gradient in intensity per pixel. The mean squared intensity gradient over all pixels within a disk was then defined as $\langle G^2 \rangle \equiv \langle |\nabla I|^2\rangle$. Computation of $\langle G^2 \rangle$ for each disk requires knowledge of each disk centre and area; this information was obtained from the bright field image (Fig.~\ref{fig:Identification}) in three steps: (1) binarizing the bright field image, (2) splitting disk areas of contiguous binarized intensity into individual disks, and (3) measuring each disk centre position and area. Step 1 is identical to the packing fraction measurement method. In step 2, a watershed algorithm was employed to discriminate between sharp gradients in intensity among disks, usually referred to as mountain (low intensity gradient) and river (high intensity gradient), and to separate out individual disks. This is necessary to identify the disk perimeters along which the photoelastic intensities of contact forces exist. Following this watershed procedure, each disk centre position and area were measured in step 3. Applying these results to the dark field image, the mean squared intensity gradient of the photoelastic signal $\langle G^2 \rangle$ was then obtained for each disk.\\
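The per-disk quantity $\langle G^2 \rangle$ described above can be sketched as follows; this is an illustrative reimplementation rather than the original ImageJ procedure, and the disk mask is assumed to come from the bright-field segmentation of steps (1)--(3):

```python
import numpy as np
from scipy.ndimage import sobel


def mean_squared_gradient(intensity, disk_mask):
    """<G^2>: mean over the disk's pixels of |grad I|^2,
    with |grad I|^2 = (grad_h I)^2 + (grad_v I)^2 from Sobel filters.

    intensity : 2D array of gray-scale values (the dark field image).
    disk_mask : 2D bool array, True on the pixels of one disk.
    """
    img = intensity.astype(float)
    gh = sobel(img, axis=1)  # horizontal gradient
    gv = sobel(img, axis=0)  # vertical gradient
    g2 = gh ** 2 + gv ** 2
    return g2[disk_mask].mean()


# sanity check: a uniform (unstressed) region has zero gradient
flat = np.full((12, 12), 100.0)
print(mean_squared_gradient(flat, np.ones((12, 12), dtype=bool)))
```

A disk carrying more force shows denser photoelastic fringes, hence a larger $\langle G^2 \rangle$.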
\subsubsection{Force Calibration}
\label{sec:Force Calibration}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Calibration_h2.eps}
\caption{Calibration method using a vertical 1D chain of photoelastic disks. Bright (a) and dark (b) field images of the vertical 1D chain. Image analysis of the 1D chain images using algorithms explained in section~\ref{sec:Characterization of photoelastic intensity gradient} provided values of $\langle G^2 \rangle$. Force per disk estimated from gravitational forcing $F_c$ was then used to relate $F_c$ versus $\langle G^2 \rangle$ for both large (solid red circles) and small (solid blue triangles) disks (c). Solid lines through the calibration data are quadratic fits, which were used in experimental measurement of force per disk $F_d$.}
\label{fig:Calibration_curve}
\end{center}
\end{figure}
The force per disk was calibrated using a vertical one-dimensional (1D) chain and the measurement method explained in section~\ref{sec:Characterization of photoelastic intensity gradient}, to obtain force calibration curves which convert $\langle G^2 \rangle$ to force. A vertical 1D chain of photoelastic disks of height 0.3 m, as shown in Fig.~\ref{fig:Calibration_curve}a \& b, was constructed. The 1D chain consisted of either 20 large disks or 30 small disks. A pair of bright and dark field images was then obtained, and the image analysis methods (section~\ref{sec:Characterization of photoelastic intensity gradient}) were applied to calculate $\langle G^2 \rangle$ for each photoelastic disk. The applied force per disk in the vertical 1D chain, $F_c(n)$ in Newtons (where $n$ is the position of the disk from the top of the chain), was estimated from the relation $F_c = n \times mg$, where $m$ is the mass per disk ($1.8 \times 10^{-3}$ kg for a large disk and $0.8 \times 10^{-3}$ kg for a small disk, respectively). In Fig.~\ref{fig:Calibration_curve}b, fringes on the boundary between disks and sidewalls cannot be observed in the dark field image; therefore, the effect of sidewalls was neglected in the calibration measurements. Figure~\ref{fig:Calibration_curve}c shows the calibration data obtained for both disk sizes. The quadratic fits of the calibration data were then used as the final calibration curves for measurements of force per disk in the experimental data. Since the adopted procedure does not involve computational image matching of photoelastic fringes, only the total force applied to each disk can be measured in this study; it cannot be decomposed into normal and tangential components.\\
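The calibration step, i.e. relating $F_c = n\,mg$ to the measured $\langle G^2 \rangle$ through a quadratic fit, can be sketched as follows. The $\langle G^2 \rangle$ values below are hypothetical stand-ins for measured data, used only to illustrate the fitting and inversion:

```python
import numpy as np

G_CONST = 9.8    # m/s^2
M_LARGE = 1.8e-3  # kg per large disk

# disk positions n = 1..20 in a 20-disk chain and their gravitational loads
n = np.arange(1, 21)
F_c = n * M_LARGE * G_CONST  # force per disk, F_c = n * m * g

# hypothetical <G^2> values measured for the same disks (stand-in data)
G2 = np.linspace(10.0, 400.0, 20)

# quadratic calibration curve F_d = p(<G^2>), as for the fits in Fig. 4c
coeffs = np.polyfit(G2, F_c, deg=2)
force_from_G2 = np.poly1d(coeffs)

# converting a measured <G^2> back to a force per disk
print(force_from_G2(G2[9]))  # recovers F_c for n = 10 on this synthetic data
```

In the experiment, separate calibration curves of this form are built for the large and small disks and then applied to the $\langle G^2 \rangle$ values measured in the pack.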
\subsubsection{Force chain segment length measurement}
\label{sec:Force chain segment length measurement}
The segment length $l$ of force chains forms one of the structural variables measured in this experimental study. We employed a standard image analysis technique known as the thinning method, an example of which is shown in Fig.~\ref{fig:thinning}. A dark field image (Fig.~\ref{fig:thinning}a) was binarized (Fig.~\ref{fig:thinning}b), and a skeletonize procedure (also known as the erosion method or the bleeding algorithm) in ImageJ was used to thin each segment down to a line of single pixel thickness. The force chain segment length was then defined as the linear distance between intersections or end points of the chain in the thinned force chain image (Fig.~\ref{fig:thinning}c).\\
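A minimal sketch of the segment-length bookkeeping on an already-thinned skeleton is given below. The neighbour-counting rule used to classify end points and intersections is an illustrative assumption, not taken from ImageJ:

```python
import numpy as np
from scipy.ndimage import convolve

MPP = 1.76e-4  # m/pixel


def chain_points(skel):
    """Classify pixels of a 1-pixel-wide binary skeleton:
    an end point has exactly one 8-neighbour on the skeleton,
    an intersection has three or more (assumed rule)."""
    k = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
    nb = convolve(skel.astype(int), k, mode='constant')
    ends = skel & (nb == 1)
    joints = skel & (nb >= 3)
    return ends, joints


def segment_length(p, q):
    """Linear distance in metres between two such points, (row, col)."""
    return np.hypot(p[0] - q[0], p[1] - q[1]) * MPP


# toy skeleton: a horizontal 5-pixel line has two end points, no joints
skel = np.zeros((3, 7), dtype=bool)
skel[1, 1:6] = True
ends, joints = chain_points(skel)
print(ends.sum(), joints.sum())  # -> 2 0
```

The segment length between the two end points here is $4\,$pixels${}\times{}$MPP, i.e. the straight-line distance used in the definition above.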
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{thinning_h2.eps}
\caption{Method of force chain thinning and definition of the force chain segment length. A thinned force chain (c) is obtained by binarizing the original image ($(a) \rightarrow (b)$) and thinning it ($(b) \rightarrow (c)$). A force chain segment length is defined as the linear distance between intersections or end points in the thinned force chain image.}
\label{fig:thinning}
\end{center}
\end{figure}
\section{Results}
\subsection{Packing Fraction}
\label{sec:Packing Fraction}
The calculated results for packing fraction at each tapping number $\tau$ are shown in Fig.~\ref{fig:packing_fraction}, where $\tau = 0$ represents the initial configuration. The experimental data for $\phi(\tau)$ are fit with the function $\phi(\tau) = \phi_0 + A\exp(-\frac{\tau}{\tau_0})$, where $\phi_0$, $A$ and $\tau_{0}$ are fit parameters. The fit parameter values for this study were found to be $\phi_0 = 0.79$, $A = -1.39 \times 10^{-2}$ and $\tau_{0} = 2.27$, respectively. As a result, Fig.~\ref{fig:packing_fraction} reveals that the packing fraction exponentially asymptotes towards a final steady state packing fraction value, in agreement with prior results reported by Bandi et al. \cite{Bandi2013}. Figure~\ref{fig:packing_fraction} reports the mean over nine experimental runs, with the error bars being their standard error.
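The exponential fit quoted above can be reproduced with a standard nonlinear least-squares routine. The data below are synthetic, generated from the quoted fit parameters rather than the measured values, so the fit simply recovers them:

```python
import numpy as np
from scipy.optimize import curve_fit


def compaction(tau, phi0, A, tau0):
    """phi(tau) = phi0 + A * exp(-tau / tau0)."""
    return phi0 + A * np.exp(-tau / tau0)


# synthetic data from the paper's fitted parameters (tau = 0..9)
tau = np.arange(10, dtype=float)
phi = compaction(tau, 0.79, -1.39e-2, 2.27)

popt, _ = curve_fit(compaction, tau, phi, p0=(0.8, -0.01, 2.0))
print(popt)  # recovers (phi0, A, tau0) on noise-free data
```

With real data, the nine-run means and standard errors shown in Fig.~\ref{fig:packing_fraction} would replace the synthetic $\phi$ values.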
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{packing_fraction.eps}
\end{center}
\caption{The variation of packing fraction with manual tapping. Packing fraction increases with each manual tapping and approaches the steady state ($\phi_0 = 0.79$). Mean value of nine runs is reported, and the error bars represent the standard error of nine runs. The dotted curve is the fit, $\phi(\tau) = \phi_0 + A\exp(-\frac{\tau}{\tau_0})$, where $\phi_0 = 0.79$, $A = -1.39 \times 10^{-2}$ and $\tau_0= 2.27$ are values obtained for the fit parameters.}
\label{fig:packing_fraction}
\end{figure}
\subsection{Force per disk}
\label{sec:Force per disk}
The force on each disk in the granular pack was measured using the method described in sections~\ref{sec:Characterization of photoelastic intensity gradient} \& \ref{sec:Force Calibration}. Figure~\ref{fig:Force applied to particle} shows the cumulative number distribution of force per disk at each tapping number $\tau$ in the granular pack. The range of force per disk $F_d$ in Fig.~\ref{fig:Force applied to particle} is wider than the calibration range (Fig.~\ref{fig:Calibration_curve}c). However, the calibration is performed under 1D diametral compression, i.e., the coordination number is 2 in the calibration. In the granular pack, on the other hand, the average coordination number is almost 4. Thus, the force applied to each disk can be approximately two times greater than in the calibration case, and the force magnitude at each contact point in the granular pack is then almost within the calibration range. The distribution can be approximated by the exponential form:
\begin{equation}
N_{cum}(F_d) = A_F\exp\left(-\frac{F_d}{F_{0}}\right),
\label{eq:forces applied to particle}
\end{equation}
where $N_{cum}(F_d)$ is the number of disks on which the applied force is equal to or greater than $F_d$, and $A_F$ and $F_0$ are fit parameters, with $F_0$ having dimensions of force. Figure~\ref{fig:Force applied to particle} is obtained from the mean value of nine identical experimental runs and exhibits a roughly exponential distribution with almost constant slopes across all values of $\tau$, for the initial as well as the final compacted state. The fit parameters were found to be $A_F = 1.53 \times 10^{3}$ and $F_0 = 9.06 \times 10^{-2}$ N at the initial configuration ($\tau = 0$). This result suggests that the functional form of the cumulative force distribution itself is invariant under compaction by manual tapping, as it yields the same slope for the exponential tail at all $\tau$ values. This result is qualitatively consistent with previous studies in which Liu et al. and Coppersmith et al. measured the cumulative distribution of force exerted by a three-dimensional (3D) granular pack on the container walls and showed that it follows an exponential distribution \cite{Liu1995,Coppersmith1996}. Note, however, that the coefficient $A_F$ does vary with $\tau$, as shown in Fig.~\ref{fig:Af_variation}.\\
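The exponential form of Eq.~(\ref{eq:forces applied to particle}) can be checked on synthetic data as follows; the sample is drawn from an exponential distribution with the quoted $F_0$, and the slope is extracted by a log-linear regression:

```python
import numpy as np


def cumulative_counts(forces, thresholds):
    """N_cum(F_d): number of disks whose force is >= F_d."""
    forces = np.asarray(forces)
    return np.array([(forces >= F).sum() for F in thresholds])


# synthetic per-disk forces with the quoted F0 = 9.06e-2 N
rng = np.random.default_rng(0)
forces = rng.exponential(scale=9.06e-2, size=1530)

F = np.linspace(0.0, 0.4, 20)
N = cumulative_counts(forces, F)

# log-linear fit: ln N_cum = ln A_F - F / F0
slope, intercept = np.polyfit(F[N > 0], np.log(N[N > 0]), 1)
print(-1.0 / slope)  # estimate of F0, close to 9.06e-2 N
```

For an exponential parent distribution, $N_{cum}$ is itself exponential, so the straight tail in log-linear scale (Fig.~\ref{fig:Force applied to particle}) directly yields $F_0$ from the slope.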
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Force_on_particle.eps}
\caption{Cumulative number distributions of force per disk at each tapping number $\tau$ in log-linear scale. The black solid line represents the initial configuration ($\tau = 0$) whereas colored lines represent the respective compacted states for various $\tau$ values. The data represent the mean value of nine experimental runs.}
\label{fig:Force applied to particle}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Af_variation.eps}
\caption{The variation of the fit parameter $A_F$ as a function of tapping number $\tau$. $A_F$ increases with each manual tap. The $A_F$ values are calculated by fitting Eq.~(\ref{eq:forces applied to particle}) with fixed $F_0 = 0.09$ N. The error bars represent the uncertainty of the fit.}
\label{fig:Af_variation}
\end{center}
\end{figure}
\subsection{Force chain segment length}
\label{sec:Force chain segment length}
Recent studies have analysed force chain segment lengths under pure shear and isotropic compression and found that they are exponentially distributed \cite{Peters2005, Sanfratello2011, Zhang2014.1}:
\begin{equation}
N_{cum}(l) = A_l\exp\left(-\frac{l}{l_{0}}\right),
\label{eq:a stress chain material}
\end{equation}
where $N_{cum}(l)$ is the number of segments of length equal to or greater than $l$. $A_l$ and $l_0$ are fit parameters and $l_0$ has dimensions of length. The unit of length used is the mean disk diameter $D = \frac{0.015 + 0.01}{2} = 0.0125$ m.
In agreement with previous works, the cumulative number distribution of the force chain segment length in this study is also found to be exponentially distributed (Fig.~\ref{fig:stress_chain_material_length}) with the functional form of Eq.~(\ref{eq:a stress chain material}). The fit parameters are $A_l = 9.45 \times 10^{2}$ and $l_0 = 0.82D \simeq 0.01$ m at the initial configuration ($\tau = 0$). The characteristic length $l_0$, which corresponds to the diameter of the small disks, merely reflects a limitation of the analysis method: a segment length smaller than the small disk size is meaningless, which is natural because the force chain structure consists of disks. This is also clearly reflected in Fig.~\ref{fig:stress_chain_material_length}, where the steady exponential slope is observed only for $l > 1D$. The slope and coefficient of the exponential distributions are almost constant across all $\tau$ values, rendering this distribution invariant under the manual tapping protocol.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{stress_chain_material_length.eps}
\caption{Cumulative number distributions of force chain segment lengths in log-linear scale. The force chain segment lengths are quoted in units of the mean disk diameter, $D$. The black solid line represents the initial configuration ($\tau = 0$) whereas colored lines represent the respective compacted states for various $\tau$ values. The data represent the mean value of nine experimental runs.}
\label{fig:stress_chain_material_length}
\end{center}
\end{figure}
\section{Discussion}
\label{sec:Discussion}
The experimental results discussed thus far show that the packing fraction varies with tapping number $\tau$ through the relation $\phi(\tau) = \phi_0 + A \exp(-\frac{\tau}{\tau_0})$, saturating at an asymptotic value of $\phi_0 = 0.79$. Additionally, the cumulative distribution of force per disk at each $\tau$ was found to be exponentially distributed as $N_{cum}(F_d) = A_F \exp(-\frac{F_d}{F_0})$. In particular, whereas the characteristic force $F_0$ remains invariant to $\tau$, the coefficient $A_F$ varies with $\tau$ (Fig.~\ref{fig:Af_variation}). On the other hand, the cumulative distribution of force chain segment length, which is also exponentially distributed ($N_{cum}(l) = A_l\exp(-\frac{l}{l_{0}})$), exhibits no dependence on tapping number $\tau$ (Fig.~\ref{fig:stress_chain_material_length}). This suggests that the evolution of the packing fraction $\phi(\tau)$ leads to an increase of the internal force within the compacted granular pack, but does not lead to the creation of new force chain segments. Following from these trends, we now explore a speculative relation between $\phi(\tau)$ and the total force $F_{tot}$ of the granular pack. Since $\phi$ is a globally averaged structural quantity, it ought to be compared with the total force.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Ft_vs_tau.eps}
\caption{Tapping number dependence of the normalized total force $F_{tot}^*$. Mean value of nine runs is reported, and the error bars represent the standard error of nine runs. The dashed curve is the fit, $F_{tot}^*(\tau) = F_{t0}^* + A_t^*\exp(-\frac{\tau}{\tau_{t0}})$, where $F_{t0}^* = 1.2$, $A_t^* = -0.2$ and $\tau_{t0}= 1.67$ are values obtained for the fit parameters. }
\label{fig:Ft_vs_tau}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{Ft_vs_PF.eps}
\caption{Relation between $F_{tot}^*$ and packing fraction $\phi$. $F_{tot}^*$ is defined by $\displaystyle F_{tot}^* \equiv \frac{F_{tot}(\tau)}{F_{tot}(\tau =0)}$, where $F_{tot}$ is the sum of force per disk in the granular pack. Mean value of nine runs is reported, and the error bars represent the standard error of nine runs. The black solid line indicates the linear relation, $(F_{tot}^*(\tau)-F_{t0}^*)/(\phi(\tau)-\phi_{0}) = 14\pm2$.}
\label{fig:Ft_vs_PF}
\end{center}
\end{figure}
The total force $F_{tot}$ is defined as $F_{tot} = \sum_{i=1}^k F_i$, where $F_i$ is the force on the $i$th disk and the summation is carried over all disks in the granular pack ($k$ represents the total number of disks), with a force threshold placed at 0.1 N; forces below this threshold are not included in the summation. The total force is measured for the initial configuration and after each manual tap. Accordingly, we define a normalized total force as $F^*_{tot}(\tau) \equiv \frac{F_{tot}(\tau)}{F_{tot}(\tau=0)}$, where $F_{tot}(\tau = 0)$ represents the total force of the initial configuration. In Fig.~\ref{fig:Ft_vs_tau}, we show the normalized total force $F_{tot}^*$ as a function of $\tau$. We can confirm an asymptotic behavior of $F_{tot}^*(\tau)=F_{t0}^*+A_{t}^*\exp(-\frac{\tau}{\tau_{t0}})$, where $F_{t0}^*=1.2$, $A_{t}^*=-0.2$, and $\tau_{t0}=1.67$. This functional form is similar to that of $\phi(\tau)$. The comparison of $\phi(\tau)$ and $F_{tot}^*(\tau)$ reveals that $\tau_{t0} \simeq \tau_{0}$. Therefore, the ratio $(F_{tot}^*(\tau)-F_{t0}^*)/(\phi(\tau)-\phi_{0})$ should be approximated by $A_{t}^*/A = 14$. We independently confirm that $F_{tot}^*(\tau)$ vs $\phi(\tau)$ scales linearly, as shown in Fig.~\ref{fig:Ft_vs_PF}. The slope of this scaling is $14\pm2$, in excellent agreement with the estimated result. This linear relation suggests that the process of compaction by tapping leads to a linear increase of the internal granular force. This linear dependence may possibly result from the optimal perturbation amplitude ($\Gamma \simeq 3-4$) representing the linear response regime of the system; stronger perturbations may not exhibit a similar dependence between $F^*_{tot}(\tau)$ and $\phi(\tau)$. This linear relation could be potentially useful for estimating the increase of internal force within a compacted granular pack from packing fraction measurements, in applications that involve compaction processes.
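The thresholded total force and its normalization described above amount to the following; the force values in the check are toy numbers, not measured data:

```python
import numpy as np

THRESHOLD = 0.1  # N, forces below this are excluded from the sum


def total_force(forces, threshold=THRESHOLD):
    """F_tot: sum of per-disk forces at or above the 0.1 N threshold."""
    forces = np.asarray(forces)
    return forces[forces >= threshold].sum()


def normalized_total_force(forces_tau, forces_initial):
    """F*_tot(tau) = F_tot(tau) / F_tot(tau = 0)."""
    return total_force(forces_tau) / total_force(forces_initial)


# toy check: doubling every above-threshold force doubles F*_tot,
# while the sub-threshold 0.05 N entry is ignored in both states
f0 = np.array([0.05, 0.2, 0.3])
f1 = np.array([0.05, 0.4, 0.6])
print(normalized_total_force(f1, f0))
```

Plotting such $F^*_{tot}$ values against the corresponding $\phi$ values reproduces the linear relation of Fig.~\ref{fig:Ft_vs_PF}.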
In this study, we developed a systematic methodology to analyse a 2D granular pack comprising photoelastic disks. Using the developed method, granular compaction by manual tapping was analysed. Although interesting structural evolution was revealed in this study, it is still a first step towards understanding granular compaction by tapping using photoelastic disks. The result should be compared with the case of controlled tapping using an electromagnetic shaker. Further studies concerning this comparison are currently in progress, and the results will be published elsewhere.
\section{Conclusion}
\label{sec:Conclusion}
In this study, the structural evolution of a 2D compacted granular pack has been experimentally studied using photoelastic disks. First, we developed a method to measure the packing fraction, contact forces, and force chain segment lengths using image analysis. Then, the dependence of these quantities on manual tapping was experimentally measured. From the experimental results, an exponentially asymptotic behavior of the packing fraction was observed. The distributions of applied force per disk and of force chain segment length at each $\tau$ were characterized by exponential forms; the former depends on the tapping number $\tau$, whereas the latter does not. The $\tau$-dependent total force is also shown to exhibit asymptotic exponential behavior. A linear relation between the two quantities $\phi$ and $F_{tot}^*$ is confirmed from their measurements.
\acknowledgments
The authors acknowledge S. Watanabe, H. Kumagai, S. Sirono, and T. Morota for fruitful discussions and suggestions. H. Katsuragi was supported by JSPS KAKENHI Grant Number 26610113. M. M. Bandi was supported by the Collective Interactions Unit at the Okinawa Institute of Science and Technology Graduate University.
Let $M$ be a smooth closed oriented manifold of dimension 4.
A second cohomology class of $M$ is called a \emph{monopole class}
if it arises as the first Chern class of a Spin$^c$ structure for
which the Seiberg-Witten equations
$$\left\{
\begin{array}{ll} D_A\Phi=0\\
F_{A}^+=\Phi\otimes\Phi^*-\frac{|\Phi|^2}{2}\textrm{Id},
\end{array}\right.
$$
admit a solution for every choice of a Riemannian metric. Clearly
a basic class, i.e. the first Chern class of a Spin$^c$ structure with a nonzero Seiberg-Witten invariant, is a monopole class. However, the ordinary Seiberg-Witten invariant, which is obtained by intersection
theory on the moduli space of solutions $(A,\Phi)$ of the above equations, is trivial in many
important cases, for example connected sums of 4-manifolds with $b_2^+>0$.
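Recall that this intersection theory is carried out on a moduli space whose expected dimension is given by the standard index formula
$$d = \frac{c_1^2(\frak{s}) - \left(2\chi(M) + 3\sigma(M)\right)}{4},$$
where $\chi(M)$ and $\sigma(M)$ denote the Euler characteristic and the signature of $M$ respectively.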
Bauer and Furuta \cite{BF, bau} made a breakthrough in detecting a
monopole class on certain connected sums of 4-manifolds. Their new
brilliant idea is to generalize the Pontryagin-Thom construction
to a proper map between infinite-dimensional spaces, which is the
sum of a linear Fredholm map and a compact map, and take some sort of a
stably-framed bordism class of the Seiberg-Witten moduli space as an
invariant. However, its applications are still limited in that
this new invariant, which is expressed as a stable cohomotopy class,
is difficult to compute, and further refined
invariants of the Seiberg-Witten moduli space are still sought.
In the meantime, sometimes we need a solution of the
Seiberg-Witten equations for a specific metric rather than any
Riemannian metric. The case we have in mind is the one when a
manifold $M$ and its Spin$^c$ structure $\frak{s}$ admit a smooth
orientation-preserving action by a compact Lie group $G$ and we
are concerned with finding a solution of the Seiberg-Witten
equations for any $G$-invariant metric.
Thus for a $G$-invariant metric on $M$ and a $G$-invariant
perturbation of the Seiberg-Witten equations, we consider the
\emph{$G$-monopole moduli space} $\frak{X}$ consisting of their
$G$-invariant solutions modulo gauge equivalence. One can easily
see that the ordinary moduli space $\frak{M}$ is acted on by $G$,
and $\frak{X}$ turns out to be a subset of its $G$-fixed point
set. The intersection theory on $\frak{X}$ will give the
\emph{$G$-monopole invariant} $SW^{G}_{M,\frak{s}}$ defined first
by Y. Ruan \cite{ruan}, which encodes the information of the given
$G$-action along with $M$, and may sometimes be sharper than the
ordinary Seiberg-Witten invariant $SW_{M,\frak{s}}$. To be
precise, we need $b_2^+(M)^G$, the maximal dimension of a subspace
of the $G$-invariant second cohomology of $M$ on which the
intersection form is positive-definite, to be greater than 1.
In view of this, the following definition is relevant:
\begin{defn}
Suppose that $M$ admits a smooth action by a compact Lie group $G$
preserving the orientation of $M$.
A second cohomology class of $M$ is called a $G$-\emph{monopole
class} if it arises as the first Chern class of a $G$-equivariant
Spin$^c$ structure for which the Seiberg-Witten equations admit a
$G$-invariant solution for every $G$-invariant Riemannian metric of $M$.
\end{defn}
When the $G$-monopole invariant is nonzero, its first Chern class
has to be a $G$-monopole class. As explained in \cite{sung3}, the
cases we are aiming at are those for finite $G$. If a compact
connected Lie group $G$ has positive dimension and is not a torus
$T^n$, then $G$ contains a Lie subgroup isomorphic to $S^3$ or
$S^3/\Bbb Z_2$, and hence $M$ admitting an effective action of
such $G$ must have a $G$-invariant metric of positive scalar
curvature by the well-known Lawson-Yau theorem \cite{law-yau}.
Therefore when $b_2^+(M)^G>1$, $M$ has no $G$-monopole class for
such $G$. On the other hand, the Seiberg-Witten invariants of a
4-manifold with an effective $S^1$ action were extensively studied
by S. Baldridge \cite{bal1, bal2, bal3}.
Using $G$-monopole invariants, we find $G$-monopole classes in some
connected sums which have vanishing Seiberg-Witten invariants :
\begin{thm}\label{firstth}
Let $M$ and $N$ be smooth closed oriented connected 4-manifolds satisfying
$b_2^+(M)> 1$ and $b_2^+(N)=0$, and $\bar{M}_k$ for any $k\geq 2$ be the connected sum
$M\#\cdots \#M\# N$ where there are $k$ summands of $M$.
Suppose that a finite group $G$ with $|G|=k$ acts effectively on $N$ in a smooth
orientation-preserving way such that it is free or has at least one fixed point, and that $N$ admits a Riemannian metric of positive scalar curvature invariant under the $G$-action and a $G$-equivariant Spin$^c$ structure $\frak{s}_N$ with $c_1^2(\frak{s}_N)=-b_2(N)$.
Define a $G$-action on $\bar{M}_k$ induced from that of $N$
permuting $k$ summands of $M$ glued along
a free orbit in $N$, and let $\bar{\frak{s}}$ be the
Spin$^c$ structure on $\bar{M}_k$ obtained by gluing $\frak{s}_N$ and a Spin$^c$
structure $\frak{s}$ of $M$.
Then for any $G$-action on $\bar{\frak{s}}$ covering the above
$G$-action on $\bar{M}_k$, $SW^{G}_{\bar{M}_k,\bar{\frak{s}}}$ mod 2 is nontrivial if
$SW_{M,\frak{s}}$ mod 2 is nontrivial.
\end{thm}
The precise computation of $SW^{G}_{\bar{M}_k,\bar{\frak{s}}}$ mod 2 will be given in Section 3, and some examples of such $N$ will be given in the last section. The condition on $N$ may be generalized further.
This article is a refined published version of the original results announced in the preprint \cite{sung4}. In a subsequent paper \cite{sung5}, we will use $G$-monopole invariants to detect smooth exotic actions of finite groups on 4-manifolds. The existence of a $G$-monopole class also has applications to Riemannian geometry, such as $G$-invariant Einstein metrics and the $G$-Yamabe invariant, which are dealt with in \cite{sung3}.
\section{$G$-monopole invariant}
Let $M$ be a smooth closed oriented 4-manifold. Suppose that a
compact Lie group $G$ acts on $M$ smoothly preserving the
orientation, and this action lifts to an action on a Spin$^c$
structure $\mathfrak{s}$ of $M$. Once there is a lifting, any
other lifting differs from it by an element of $Map(G\times M,
S^1)$. We fix a lifting and put a $G$-invariant Riemannian metric
$g$ on $M$. Then the associated spinor bundles $W_{\pm}$ are also
$G$-equivariant, and we let $\Gamma(W_{\pm})^G$ be the set of its
$G$-invariant sections. When we put $G$ as a superscript on a set,
we always mean the subset consisting of its $G$-invariant
elements. Thus $\mathcal{A}(W_+)^G$ is the space of $G$-invariant
connections on $\det (W_+)$, which is identified as the space of
$G$-invariant purely-imaginary valued 1-forms
$\Gamma(\Lambda^1(M;i\Bbb R))^G$, and $\mathcal{G}^G=
Map(M,S^1)^G$ is the group of $G$-invariant gauge transformations.
The perturbed Seiberg-Witten equations give a smooth map $$H:
\mathcal{A}(W_+)^G\times \Gamma(W_+)^G\times \Gamma(\Lambda^2_+(M;i\Bbb
R))^G\rightarrow \Gamma(W_-)^G\times \Gamma(\Lambda^2_+(M;i\Bbb R))^G$$ defined by
$$H(A,\Phi,\varepsilon)=(D_A\Phi,
F_A^+-\Phi\otimes\Phi^*+\frac{|\Phi|^2}{2}\textrm{Id}+\varepsilon),$$
where the domain and the range are endowed with $L_{l+1}^2$ and
$L_l^2$ Sobolev norms for a positive integer $l$ respectively,
and $D_A$ is a Spin$^c$ Dirac operator. The $G$-monopole moduli
space $\frak{X}_\varepsilon$ for a perturbation $\varepsilon$ is
then defined as
$$\frak{X}_\varepsilon:=H^{-1}_\varepsilon(0)/ \mathcal{G}^G$$ where $H_\varepsilon$ denotes
$H$ restricted to $\mathcal{A}(W_+)^G\times \Gamma(W_+)^G\times\{\varepsilon\}$.
In what follows, we give a detailed proof that $\frak{X}_\varepsilon$ for generic $\varepsilon$ and finite $G$ is a smooth compact manifold, because some statements in \cite{ruan, cho}
are incorrect or given without proof.
\begin{lem}\label{saveyou}
The quotient map $$p : (\mathcal{A}(W_+)^G\times (\Gamma(W_+)^G-\{0\}))/ \mathcal{G}^G\rightarrow (\mathcal{A}(W_+)^G\times (\Gamma(W_+)^G-\{0\}))/
\mathcal{G}$$ is bijective, and hence $\mathfrak{X}_\varepsilon$ is a subset of the
ordinary Seiberg-Witten moduli space $\frak{M}_\varepsilon$.
\end{lem}
\begin{proof}
Obviously $p$ is surjective, and to show that $p$ is injective, suppose that $(A_1,\Phi_1)$ and
$(A_2,\Phi_2)$ in $\mathcal{A}(W_+)^G\times (\Gamma(W_+)^G-\{0\})$ are
equivalent under $\gamma\in\mathcal{G}$. Then $$A_1=A_2-2d\ln \gamma, \ \ \ \textrm{and}\ \ \
\Phi_1=\gamma\Phi_2.$$ By the first equality, $d\ln \gamma$ is $G$-invariant.
Let $S$ be the subset of $M$ on which $\gamma$ is $G$-invariant, i.e. $S=\{x\in M\ |\ \gamma(gx)=\gamma(x)\ \textrm{for all}\ g\in G\}$. By the continuity of $\gamma$, $S$ must be a closed subset. Since $S$ contains the nonempty subset $$\{x\in M|\ \Phi_1(x)\ne 0\},$$ $S$ is nonempty.
It suffices to show that ${S}$ is open. Let $x_0\in {S}$. Then we have that for any $g\in G$, $$g^*\ln\gamma(x_0)=\ln\gamma(x_0), \ \ \ \textrm{and}\ \ \ g^*d\ln\gamma=d\ln\gamma,$$ which implies that $g^*\ln\gamma=\ln\gamma$ on an open neighborhood of $x_0$ on which $g^*\ln\gamma$ and $\ln\gamma$ are well-defined. By the compactness of $G$, there exists an open neighborhood of $x_0$ on which $g^*\ln\gamma$ is well-defined for all $g\in G$, and $\ln \gamma$ is $G$-invariant. This proves the openness of $S$.
\end{proof}
As in the ordinary Seiberg-Witten moduli space, the transversality is
obtained by a generic perturbation $\varepsilon$:
\begin{lem}
$H$ is a submersion at each $(A,\Phi,\varepsilon)\in H^{-1}(0)$ for nonzero
$\Phi$.
\end{lem}
\begin{proof}
Obviously $d{H}$ restricted to the last factor of the domain is onto
the last factor of the range. Using the surjectivity in the ordinary
setting, for any element $\psi\in \Gamma(W_-)^G$, there exists
an element $(a,\varphi)\in \mathcal{A}(W_+)\times \Gamma(W_+)$
such that $d{H}(a,\varphi,0)=\psi$. The average
$$(\tilde{a},\tilde{\varphi}):=\int_G h^*(a,\varphi)\
d\mu(h):=( \int_G h^*a\ d\mu(h) , \int_G h^*\varphi\ d\mu(h))$$
where $\mu$ is the normalized Haar measure on $G$, is an element of
$\mathcal{A}(W_+)^G\times \Gamma(W_+)^G$. It follows from the
smoothness of the $G$-action that every $h^*(a,\varphi)$ and hence
$(\tilde{a},\tilde{\varphi})$ belong to the same Sobolev space as
$(a,\varphi)$.
Moreover it still satisfies
\begin{eqnarray*}
d{H}(\tilde{a},\tilde{\varphi},0)&=& \int_G dH (h^*(a,\varphi,0))\
d\mu(h)\\ &=& \int_G h^* dH ((a,\varphi,0))\
d\mu(h)\\ &=& \int_G h^* \psi\
d\mu(h)\\ &=& \psi,
\end{eqnarray*}
where we used the fact that $d{H}$ is a $G$-equivariant differential operator. This completes the proof.
\end{proof}
Assuming that $b_2^{+}(M)^G$ is nonzero,
$\mathfrak{X}_\varepsilon$ consists of irreducible solutions. By
the above lemma, $\cup_{\varepsilon}\mathfrak{X}_\varepsilon$ is a
smooth submanifold, and applying the Sard-Smale theorem to the
projection map onto $\Gamma(\Lambda^2_+(M;i\Bbb R))^G$,
$\mathfrak{X}_\varepsilon$ for generic $\varepsilon$ is also
smooth. (Nevertheless $\frak{M}_\varepsilon$ for that $\varepsilon$ may not be smooth in general. Its obstruction is explained in \cite{cho}.) From now on, we will always assume that a generic
$\varepsilon$ is chosen so that $\frak{X}_\varepsilon$ is smooth,
and often omit the notation of $\varepsilon$, if no confusion
arises.
Its dimension and orientability can be obtained in the same way as the ordinary Seiberg-Witten moduli space. The linearization $dH$ is deformed by a homotopy to
$$d^++2d^* : \Gamma(\Lambda^1)^G\rightarrow
\Gamma(\Lambda^0\oplus\Lambda^2_+)^G$$ and $$D_A :
\Gamma(W_+)^G\rightarrow \Gamma(W_-)^G$$ so that the virtual dimension of $\mathfrak{X}$ is equal to $$\dim H_1(M;\Bbb R)^G-b_2^+(M)^G-1+2(\dim_{\Bbb C}(\ker D_A)^G-\dim_{\Bbb C}(\textrm{coker} D_A)^G),$$
and its orientation can be assigned by fixing the homology orientation of $H_1(M;\Bbb R)^G$ and $H_2^+(M;\Bbb R)^G$. When $G$ is finite, one can use a Lefschetz-type formula to explicitly compute the last term $\textrm{ind}^G D_A$ in the above formula. For more details, one may consult \cite{cho}.
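As a consistency check, note that when $G$ is trivial every invariant is the ordinary one, the complex index theorem gives $\textrm{ind}_{\Bbb C}D_A=\frac{c_1^2(\frak{s})-\sigma(M)}{8}$, and the above formula reduces to the familiar dimension of the ordinary Seiberg-Witten moduli space:
$$b_1(M)-b_2^+(M)-1+\frac{c_1^2(\frak{s})-\sigma(M)}{4}=\frac{c_1^2(\frak{s})-(2\chi(M)+3\sigma(M))}{4},$$
using $\chi(M)=2-2b_1(M)+b_2(M)$ and $\sigma(M)=b_2^+(M)-b_2^-(M)$.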
\begin{thm}\label{cpt}
When $G$ is finite, $\mathfrak{X}_\varepsilon$ for any $\varepsilon$ is compact.
\end{thm}
\begin{proof}
Following the proof for the ordinary Seiberg-Witten moduli space, we need the $G$-equivariant version of the gauge fixing lemma.
\begin{lem}
Let $\frak{L}$ be a $G$-equivariant complex line bundle over $M$ with a hermitian metric, and $A_0$ be a fixed $G$-invariant smooth unitary connection on it.
Then for any $l\geq 0$ there are constants $K, C>0$ depending on $A_0$ and $l$ such that for any $G$-invariant $L^2_l$ unitary connection $A$ on $\frak{L}$ there is a $G$-invariant $L^2_{l+1}$ change of gauge $\sigma$ so that $\sigma^*(A)=A_0+\alpha$ where $\alpha\in L^2_l(T^*M\otimes i\Bbb R)^G$ satisfies
$$d^*\alpha=0,\ \ \ \textrm{and}\ \ \ \ ||\alpha||^2_{L^2_l}\leq C||F^+_A||^2_{L^2_{l-1}}+K.$$
\end{lem}
\begin{proof}
We know that a gauge-fixing $\sigma$ with the above estimate always exists, but we need to prove the existence of a $G$-invariant $\sigma$.
Write $A$ as $A_0+a$ where $a\in L^2_l(T^*M\otimes i\Bbb R)^G$. Let $a=a^{harm}+df+d^*\beta$ be the Hodge decomposition. Since $a$ is $G$-invariant, so are $a^{harm}$, $df$, and $d^*\beta$.
Applying the ordinary gauge fixing lemma to $A_0+d^*\beta$, we have $$||d^*\beta||^2_{L^2_l}\leq C'||F^+_{A_0+d^*\beta}||^2_{L^2_{l-1}}+K'=C'||F^+_A||^2_{L^2_{l-1}}+K'$$ for some constants $C',K'>0$.
Defining a $G$-invariant $i\Bbb R$-valued function $f_{av}=\frac{1}{|G|}\sum_{g\in G}g^*f$, we have
$$df=\frac{1}{|G|}\sum_{g\in G}g^*df=d(f_{av})=-d\ln \exp(-f_{av}),$$ and hence $df$ can be gauged away by the $G$-invariant gauge transformation $\exp(-f_{av})$.
Write $a^{harm}$ as $(n|G|+m)a^{h}$ for $m\in [0,|G|)$ and an integer $n\geq 0$, where $a^h\in H^1(M;\Bbb Z)^G$ is not a positive multiple of another element of $H^1(M;\Bbb Z)^G$. There exists $\frak{g}\in \mathcal{G}$ such that $a^h=-d\ln \frak{g}$. In general $\frak{g}$ is not $G$-invariant, but $$|G|a^h=\sum_{g\in G}g^*a^h=-d\ln \prod_{g\in G}g^*\frak{g},$$ and hence $n|G|a^h$ can be gauged away by a $G$-invariant gauge transformation $\prod_{g\in G}g^*\frak{g}^n$.
In summary, $A_0+a$ is equivalent to $A_0+ma^{h}+d^*\beta$ after a $G$-invariant gauge transformation, and
\begin{eqnarray*}
||ma^{h}+d^*\beta||^2_{L^2_l}&\leq& (||ma^{h}||_{L^2_l}+||d^*\beta||_{L^2_l})^2\\ &\leq& |G|^2||a^{h}||_{L^2_l}^2+2|G|||a^{h}||_{L^2_l}||d^*\beta||_{L^2_l}+||d^*\beta||_{L^2_l}^2\\ &\leq& 3|G|^2||a^{h}||^2_{L^2_l}+ 3||d^*\beta||_{L^2_l}^2 \\ &\leq& K''+3C'||F^+_A||^2_{L^2_{l-1}}+3K'
\end{eqnarray*}
for a constant $K''>0$. This completes the proof.
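We note that in the displayed chain of inequalities, the second step uses $m<|G|$, and the third follows from the elementary estimate
$$2|G|xy\leq |G|^2x^2+y^2\ \ \ \ (x,y\geq 0),$$
applied with $x=||a^{h}||_{L^2_l}$ and $y=||d^*\beta||_{L^2_l}$.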
\end{proof}
Now the rest of the compactness proof proceeds in the same way as in the ordinary case, using the Weitzenb\"ock formula and standard elliptic and Sobolev estimates. For details the reader is referred to \cite{morgan}.
\end{proof}
\begin{rmk}
If $G$ is not finite, $\mathfrak{X}_\varepsilon$ may not be compact.
For example, consider $M=S^1\times Y$ with the trivial Spin$^c$
structure and its obvious $S^1$ action, where $Y$ is a closed
oriented 3-manifold. For any $n\in\Bbb Z$, $n d\theta$ where
$\theta$ is the coordinate on $S^1$ is an $S^1$-invariant
reducible solution. Although $n d\theta$ is gauge equivalent to 0,
it is never so via an $S^1$-invariant gauge transformation, which is an
element of the pull-back of $C^\infty(Y,S^1)$. Therefore as
$n\rightarrow \infty$, $n d\theta$ diverges modulo
$\mathcal{G}^{S^1}$, which proves that $\mathfrak{X}_0$ is
non-compact.
\end{rmk}
In the rest of this paper, we assume that $G$ is finite. Then note that $G$ induces smooth actions on $$\mathcal{C}:=\mathcal{A}(W_+)\times \Gamma(W_+),$$ $$\mathcal{B}^*=(\mathcal{A}(W_+)\times (\Gamma(W_+)-\{0\}))/ \mathcal{G},$$ and also the Seiberg-Witten moduli space $\frak{M}$ whenever it is smooth.
Since $\frak{X}_\varepsilon$ is a subset of $\frak{M}_\varepsilon$ (actually a subset of the fixed locus $\mathfrak{M}^G_\varepsilon$ of the $G$-space $\frak{M}_\varepsilon$), we can define the
$G$-monopole invariant $SW^G_{M,\mathfrak{s}}$ by integrating the same universal cohomology classes as in the ordinary Seiberg-Witten invariant $SW_{M,\mathfrak{s}}$. Thus using the $\Bbb Z$-algebra isomorphism $$\mu_{M,\frak{s}} : \Bbb Z[H_0(M;\Bbb Z)]\otimes \wedge^*H_1(M;\Bbb Z)/\textrm{torsion}\ \tilde{\rightarrow}\ H^*(\mathcal{B}^*;\Bbb Z),$$
we define it as a function $$SW^G_{M,\frak{s}}: \Bbb Z[H_0(M;\Bbb
Z)]\otimes \wedge^*H_1(M;\Bbb Z)/\textrm{torsion}\rightarrow \Bbb
Z$$ $$\alpha\mapsto \langle [\frak{X}],\mu_{M,\frak{s}}(\alpha)
\rangle,$$ which is set to be 0 when the degree of
$\mu_{M,\frak{s}}(\alpha)$ does not match $\dim \frak{X}$. To be
specific, for $[c]\in H_1(M,\Bbb Z)$,
$$\mu_{M,\frak{s}}([c]):=Hol_c^*([d\theta])$$ where $[d\theta]\equiv 1\in H^1(S^1,\Bbb Z)$ and $Hol_c: \mathcal{B}^*\rightarrow S^1$ is given by the holonomy of each connection around $c$, and $\mu_{M,\frak{s}}(U)$ for $U\equiv 1\in H_0(M,\Bbb Z)$ is given by the first
Chern class of the $S^1$-bundle
$$\mathcal{B}^*_o=(\mathcal{A}(W_+)\times (\Gamma(W_+)-\{0\}))/
\mathcal{G}_o$$ over $\mathcal{B}^*$ where $\mathcal{G}_o=\{g\in
\mathcal{G}| g(o)=1\}$ is the based gauge group for a fixed base
point $o\in M$. (The $S^1$-bundles obtained by choosing a
different base point are all isomorphic by the connectedness of
$M$.)
As in the ordinary case, a different choice of a $G$-invariant
metric and a $G$-invariant perturbation $\varepsilon$ gives a
cobordant $\frak{X}$ so that $SW^G_{M,\mathfrak{s}}$ is
independent of such choices, if $b_2^{+}(M)^G> 1$. When
$b_2^{+}(M)^G= 1$, one should get an appropriate wall-crossing
formula.
When $\frak{M}$ happens to be smooth for a $G$-invariant
perturbation, the induced $G$-action on it is a smooth action, and
hence $\mathfrak{M}^G$ is a smooth submanifold. Moreover if the
finite group action is free, then $\pi: M\rightarrow M/G$ is a
covering, and $\frak{s}$ is the pull-back of a Spin$^c$ structure
on $M/G$, which is determined up to
the kernel of $\pi^*: H^2(M/G,\Bbb Z)\rightarrow H^2(M,\Bbb Z)$, and the irreducible solutions upstairs are precisely the pull-backs of the corresponding irreducible solutions downstairs:
\begin{thm}[\cite{RW, naka}]\label{nakamur}
Let $M$, $\mathfrak{s}$, and $G$ be as above. Under the assumption that $G$ is finite and
the action is free, for a $G$-invariant generic perturbation $$\frak{X}_{M,\mathfrak{s}}=\mathfrak{M}_{M/G,\mathfrak{s}'} \ \ \ \ \textrm{and} \ \ \ \ \mathfrak{M}^G_{M,\mathfrak{s}}\backsimeq\coprod_{c\in \ker \pi^*}\mathfrak{M}_{M/G,\mathfrak{s}'+c},$$ where the second one is a homeomorphism in general, and $\mathfrak{s}'$ is the Spin$^c$ structure on $M/G$ induced from $\frak{s}$ and its $G$-action.
\end{thm}
Finally we remark that the $G$-monopole invariant may change when
a homotopically different lift of the $G$-action to the Spin$^c$
structure is chosen.
\section{Connected sums and $G$-monopole invariant}
For $(\bar{M}_k,\bar{\frak{s}})$ described in Theorem
\ref{firstth}, there is at least one $G$-action lifted to
$\bar{\frak{s}}$ coming from the given $G$-action on
$(N,\frak{s}_N)$ and the $G$-equivariant gluing of $k$ copies
of $(M,\frak{s})$.
In general, there may be homotopically inequivalent liftings of the $G$-action on $\bar{M}_k$ to $\bar{\frak{s}}$.
Take a $G$-invariant metric of positive scalar curvature on $N$. In order to do the connected sum with $k$ copies of $M$, we perform a Gromov-Lawson type surgery \cite{GL,sung1} around each point of a free orbit of $G$ keeping the positivity of scalar curvature to get a Riemannian manifold $\hat{N}$ with cylindrical ends with each end isometric to a Riemannian product of a round $S^3$ and $\Bbb R$. We suppose that this is done in a symmetric way so that the $G$-action on $\hat{N}$ is isometric.
On the $M$ part, we put any metric and perform a Gromov-Lawson surgery with the same cylindrical end as above. Let's denote the resulting manifold by $\hat{M}$. Now chop the cylinders at a sufficiently large length and then glue $\hat{N}$ and $k$ copies of $\hat{M}$ along the boundaries to get a desired $G$-invariant metric $g_k$ on $\bar{M}_k$. Sometimes we mean $(\bar{M}_k,g_k)$ by $\bar{M}_k$.
\begin{thm}
Let $(\bar{M}_k,\bar{\frak{s}})$ in Theorem
\ref{firstth} be endowed with $g_k$ as above. Then for any sufficiently large cylindrical length and some generic perturbation, $\mathfrak{X}_{\bar{M}_k,\bar{\mathfrak{s}}}$ is diffeomorphic to $\frak{M}_{M,\mathfrak{s}}\times T^{\nu}$, where $\nu=\dim H_1(N;\Bbb R)^{G}$.
\end{thm}
\begin{proof}
First we consider the case when the $G$-action on $N$ has a fixed point.
Let's figure out the ordinary moduli space $\frak{M}_{\bar{M}_k}$ of
$(\bar{M}_k,\bar{\mathfrak{s}})$. Let $\frak{M}_{\hat{M}}$ and
$\frak{M}_{\hat{N}}$ be the moduli spaces of finite-energy
solutions of Seiberg-Witten equations on $(\hat{M},\mathfrak{s})$ and
$(\hat{N},\mathfrak{s}_N)$ respectively. From now on, $[\ \cdot\ ]$ of a configuration $\cdot$ denotes its gauge equivalence class.
By the gluing theory\footnote{For more details, one may consult
\cite{KM, nicol, safari, vid1, sung2}.} of Seiberg-Witten moduli
space, which is now a standard method in gauge theory, $\frak{M}_{M}$
is diffeomorphic to $\frak{M}_{\hat{M}}$. In
$\frak{M}_{\hat{M}}$, we use a compact-supported self-dual 2-form
for a generic perturbation.
Since $\hat{N}$ has a metric of positive scalar curvature and satisfies $b_2^+(\hat{N})=0$ and $c_1^2(\frak{s}_{\hat{N}})=-b_2(\hat{N})$, $\hat{N}$ also has no gluing obstruction even without perturbation, so that $\frak{M}_N$ is diffeomorphic to
$\frak{M}_{\hat{N}}=\frak{M}_{\hat{N}}^{red},$ which can be identified with the space of $L^2$-harmonic 1-forms on $\hat{N}$ modulo gauge, i.e. $$H^1_{cpt}(\hat{N},\Bbb R)/H^1_{cpt}(\hat{N},\Bbb Z)\simeq T^{b_1(N)}.$$ (Here by $T^0$ we mean a point, and $\frak{M}^{red}\subset \frak{M}$ denotes the moduli space of reducible solutions.)
As is well-known, approximate solutions on $\bar{M}_k$ are
obtained by chopping-off solutions on each $\hat{M}$ and $\hat{N}$
at a sufficiently large cylindrical length and then grafting them
to $\bar{M}_k$ via a sufficiently slowly-varying partition of
unity in a $G$-invariant way. A more precise prescription of the grafting is as follows. First, let's label the $k$ $M$ parts of $\bar{M}_k$. Choose one of the $k$ $M$ parts and call it the 1st $M$ part. To assign the other $k-1$ $M$ parts, let's denote $G$ by $\{\sigma_1, \sigma_2,\cdots,\sigma_k=e\}$ where $e$ is the identity element. Since each $M$ part of $\bar{M}_k$ is the image of the 1st $M$ part under exactly one $\sigma_i\in G$, let's call it the $i$-th $M$ part. Now choose an identification of the Spin$^c$ structure on the 1st $M$ part with that of $\hat{M}$; the identifications of the Spin$^c$ structures on the other $M$ parts with that of $\hat{M}$ can then be made using the $G$-action on $\bar{\frak{s}}$. Once there are such identifications, we can graft a cut-off solution on $\hat{M}$ to each $M$ part of $\bar{M}_k$.
In taking cut-offs of solutions on $\hat{N}$, we use a special gauge-fixing condition. Fix a $G$-invariant connection $\eta_0$ such that $[\eta_0]\in\frak{M}_{\hat{N}}$,
which exists by taking the $G$-average of any reducible
solution, and take compact-supported closed 1-forms
$\beta_1,\cdots,\beta_{b_1(N)}$ which generate $H^1_{cpt}(\hat{N};\Bbb Z)$ and vanish on the cylindrical gluing regions. Any element $[\eta]\in\frak{M}_{\hat{N}}$ can be
expressed as $$\eta=\eta_0+\sum_{i}c_i\beta_i$$ for $c_i\in \Bbb
R/ \Bbb Z$, and the gauge equivalence class of its cut-off
$$\tilde{\eta}:=\rho\eta=\rho\eta_0+\sum_{i}c_i\beta_i$$ using a
$G$-invariant cut-off function $\rho$ which is equal to 1
on the support of every $\beta_i$ is well-defined independently of
the mod $\Bbb Z$ ambiguity of each $c_i$.
Similarly, for the cut-off procedure to be well-defined independently of the choice of a gauge representative on $\frak{M}_{\hat{M}}$, one needs to take a gauge-fixing so that the homotopy classes of gauge transformations on $\hat{M}$ are parametrized by $H^1_{cpt}(\hat{M},\Bbb Z)$, whose representatives are gauge transformations constant on the gluing regions.
Thus the gluing produces a smooth map from $$(\prod_{i=1}^k\frak{M}_{\hat{M}})\times \frak{M}_{\hat{N}}:=(\underbrace{\frak{M}_{\hat{M}}\times \cdots \times
\frak{M}_{\hat{M}}}_k)\times \frak{M}_{\hat{N}}$$
to a so-called approximate moduli space
$\tilde{\frak{M}}_{\bar{M}_k}$ in $\frak{B}^*_{\bar{M}_k}$. This gluing map is one-to-one because of the unique continuation principle (\cite{KM}) of the Seiberg-Witten equations. From the
unobstructedness of gluing, $\tilde{\frak{M}}_{\bar{M}_k}\subset
\frak{B}^*_{\bar{M}_k}$ is a smoothly embedded submanifold diffeomorphic to
\begin{eqnarray*}
((\prod_{i=1}^k\frak{M}_{\hat{M}}^o)/S^1)\times\frak{M}_{\hat{N}} &=&
((\prod_{i=1}^k\frak{M}_{\hat{M}})\tilde{\times} T^{k-1})\times T^{b_1(N)},
\end{eqnarray*}
where $\frak{M}_{\hat{M}}^o$ is the based moduli space fibering
over $\frak{M}_{\hat{M}}$ with fiber $\mathcal G_o/ \mathcal
G=S^1$, and $\tilde{\times}$ means a $T^{k-1}$-bundle over
$\prod_{i=1}^k\frak{M}_{\hat{M}}$.
As the length of the cylinders in $\bar{M}_k$ increases,
approximate solutions get close to genuine solutions
exponentially. Once we choose smoothly-varying normal subspaces to
tangent spaces of $\tilde{\frak{M}}_{\bar{M}_k}\subset
\frak{B}^*_{\bar{M}_k}$, the Newton method gives a diffeomorphism
$$\Upsilon : \tilde{\frak{M}}_{\bar{M}_k}\rightarrow
\frak{M}_{\bar{M}_k}$$ given by a very small isotopy along the
normal directions. A bit more explanation will be given in Lemma
\ref{saveme}.
An important fact for us is that the same $k$ copies of a compactly supported self-dual 2-form can be used for the perturbation on $M$ parts, while no perturbation is put on the $N$ part. Along with the $G$-invariance of the Riemannian metric $g_k$, the perturbed Seiberg-Witten equations for $(\bar{M}_k,g_k)$ are $G$-equivariant so that the induced smooth $G$-action on $\mathcal{B}^*_{\bar{M}_k}$ maps $\frak{M}_{\bar{M}_k}$ to itself.
Let's describe elements of $\tilde{\frak{M}}_{\bar{M}_k}$ for
$(\bar{M}_k,g_k)$ more explicitly. For $[\xi]\in
\frak{M}_{\hat{M}}$, let $\tilde{\xi}$ be an approximate solution
for $\xi$ cut-off at a large cylindrical length, and
$\tilde{\xi}(\theta)$ be its gauge-transform under the gauge
transformation by $e^{i\theta}\in C^\infty(\hat{M},S^1)$. (From
now on, the tilde $\tilde{\ }$ of a solution will mean its
cut-off.) Any element in $\tilde{\frak{M}}_{\bar{M}_k}$ can be
written as an ordered $(k+1)$-tuple
\begin{eqnarray}\label{paul}
[(\tilde{\xi}_1(\theta_1),\cdots,\tilde{\xi}_{k-1}(\theta_{k-1}),\tilde{\xi}_k(0),
\tilde{\eta})]
\end{eqnarray}
for each $[\xi_i]\in \frak{M}_{\hat{M}}$ and
constants $\theta_i$'s, where the $i$-th term for $i=1,\cdots , k$
represents the approximate solution grafted on the
$i$-th $M$ part, and the last term is a cut-off of $\eta\in
\frak{M}_{\hat{N}}^{red}$. In fact, there is a bijective
correspondence
\begin{eqnarray}\label{general}
\tilde{\frak{M}}_{\bar{M}_k}\ \simeq\ \{ [(\tilde{\xi}_1(\theta_1),\cdots , \tilde{\xi}_{k-1}(\theta_{k-1}),\tilde{\xi}_k(0), \tilde{\eta})]\ |\ [\eta]\in \frak{M}_{\hat{N}}, [\xi_i]\in \frak{M}_{\hat{M}}, \theta_i\in [0,2\pi)\ \forall i
\}.
\end{eqnarray}
\begin{lem}
The $G$-action on $\mathcal{B}^*_{\bar{M}_k}$ maps $\tilde{\frak{M}}_{\bar{M}_k}$ to itself.
\end{lem}
\begin{proof}
The $G$-action on $(\bar{M}_k,\bar{\frak{s}})$ can be obviously extended to an action on the Spin$^c$ structure of $\hat{N}\cup \amalg_{i=1}^k\hat{M}$ and also its moduli space of finite-energy monopoles.
Let $\sigma\in G$. By the $G$-invariance of $\rho$, $$\sigma^*\tilde{\eta}=\sigma^*(\rho\eta)=\rho\sigma^*\eta=\widetilde{\sigma^*\eta}.$$
Since $\sigma^*\beta_i$ also represents an element of $H^1_{cpt}(\hat{N};\Bbb Z)$, say that $\sigma^*\beta_i$ is cohomologous to $\sum_j d_{ij}\beta_j$ for each $i$, where $d_{ij}\in\Bbb Z$. Thus
$$\widetilde{\sigma^*\eta}=\rho\sigma^*(\eta_0+\sum_{i}c_i\beta_i)=\rho\eta_0+\sum_{i}c_i\sigma^*\beta_i$$ is gauge-equivalent to $$\rho\eta_0+\sum_{i,j}c_id_{ij}\beta_j$$ which is the cut-off of $\eta_0+\sum_{i,j}c_id_{ij}\beta_j.$
The $G$-action on the first $k$ components of $(\ref{paul})$ just permutes them.
Thus a constant gauge transform of $\sigma^*(\tilde{\xi}_1(\theta_1),\cdots,\tilde{\xi}_{k-1}(\theta_{k-1}),\tilde{\xi}_k(0),
\tilde{\eta})$ is also of the type (\ref{paul}).
\end{proof}
Moreover we may assume that the map $\Upsilon$ is $G$-equivariant by the following lemma.
\begin{lem}\label{saveme}
$\Upsilon$ can be made $G$-equivariant, and the smooth submanifold $\frak{M}_{\bar{M}_k}^{G}$
fixed pointwise under the action is isotopic to
$\tilde{\frak{M}}_{\bar{M}_k}^{G}$, the fixed point set in
$\tilde{\frak{M}}_{\bar{M}_k}$.
\end{lem}
\begin{proof}
To get a $G$-equivariant $\Upsilon$, we need to choose a
smooth normal bundle of
$\tilde{\frak{M}}_{\bar{M}_k}\subset\frak{B}^*_{\bar{M}_k}$ in a $G$-equivariant way. This can be achieved by taking the $G$-average of any smooth Riemannian metric defined in a small
neighborhood of $\tilde{\frak{M}}_{\bar{M}_k}$.
A smooth Riemannian metric on a Hilbert manifold is a smoothly
varying bounded positive-definite symmetric bilinear form on its tangent spaces. In
order to have a well-defined exponential map as a diffeomorphism
on a neighborhood of the origin, we want the metric to be
``strong" in the sense that the metric on each tangent space
induces the same topology as the original Hilbert space topology.
(For a proof, see \cite{kling}.)
Since $\tilde{\frak{M}}_{\bar{M}_k}$ is compact, we use a
partition of unity on it to glue together obvious Hilbert space
metrics in local charts, thereby constructing a smooth Riemannian
metric in a neighborhood of $\tilde{\frak{M}}_{\bar{M}_k}$ in a
Hilbert manifold $\frak{B}^*_{\bar{M}_k}$. Taking its average under the $G$-action, we get a desired Riemannian metric, which is easily
checked to be strong.
Taking the orthogonal complement to the tangent bundle of
$\tilde{\frak{M}}_{\bar{M}_k}$ under the above-obtained metric, we
get its normal bundle which is trivial by being
infinite-dimensional. In the same way as the finite dimensional case, the inverse function theorem implies that a small neighborhood of the zero section in the normal bundle is mapped
diffeomorphically into $\frak{B}^*_{\bar{M}_k}$ by the exponential map. Thus we can view a small
neighborhood of $\tilde{\frak{M}}_{\bar{M}_k}$ as
$\tilde{\frak{M}}_{\bar{M}_k}\times \Bbb H$ where $\Bbb H$ is
the Hilbert space isomorphic to the orthogonal complement of the
tangent space of $\tilde{\frak{M}}_{\bar{M}_k}$ at any point.
Applying the Newton method, $\Upsilon$ is pointwise a vertical
translation along the $\Bbb H$
direction. Now the assertion follows from the $G$-invariance of the normal directions.
\end{proof}
As a preparation for finding $G$-fixed points of $\tilde{\frak{M}}_{\bar{M}_k}$,
\begin{lem}\label{adam}
$\frak{M}_{\hat{N}}^{G}$ is diffeomorphic to $T^{\nu}$, the space of $G$-invariant $L^2$-harmonic 1-forms on $\hat{N}$ modulo $\Bbb Z$.
\end{lem}
\begin{proof}
Let $[\eta]\in
\frak{M}_{\hat{N}}^{G}$, i.e. $[\sigma^*\eta]=[\eta]$ for any $\sigma\in G$. Then
$$\bar{\eta}:=\frac{1}{k}\sum_{\sigma\in G}\sigma^*\eta$$ satisfies that $\sigma^*\bar{\eta}=\bar{\eta}$ for any $\sigma\in G$, and $\bar{\eta}$ is cohomologous to $\eta$ so that $[\bar{\eta}]=[\eta]$.
When $\nu\ne 0$, let $b_1,\cdots,b_{b_1(N)}\in H_1(N;\Bbb Z)$ be a basis of $H_1(N;\Bbb R)$ such that $b_1,\cdots,b_\nu\in H_1(N;\Bbb Z)^G$, where we used that $$\textrm{rank}(H_1(N;\Bbb Z)^{G})=\dim H_1(N;\Bbb R)^{G},$$ simply because $G$ also acts on $H_1(N;\Bbb Z)$. Also let $b_1^*,\cdots,b_{b_1(N)}^*\in H^1_{cpt}(\hat{N};\Bbb R)$ be the corresponding dual cohomology classes under the isomorphism $$H^1_{cpt}(\hat{N};\Bbb R)\simeq H_1(N;\Bbb R)^*.$$ Since
$b_i^*(b_j)=\delta_{ij}$ for all $i,j=1,\cdots,b_1(N)$, simple linear algebra shows that
$b_1^*,\cdots,b_\nu^*$ are not only in $H^1_{cpt}(\hat{N};\Bbb Z)^{G}$, but also
form a basis of $H^1_{cpt}(\hat{N};\Bbb R)^{G}$. Therefore $\frak{M}_{\hat{N}}^{G}$ is a
$\nu$-dimensional torus spanned by $b_1^*,\cdots,b_\nu^*$.
When $\nu=0$, $\frak{M}_{\hat{N}}^{G}$ is a point.
\end{proof}
As an easy case,
\begin{lem}
If $G=\Bbb Z_k$, then $\tilde{\frak{M}}_{\bar{M}_k}^{\Bbb Z_k}$ is diffeomorphic to $k$ copies of $\frak{M}_{{M}}\times T^{\nu}$, where $T^0$ means a point.
\end{lem}
\begin{proof}
Let $\sigma$ be a generator of $\Bbb Z_k$, and take the numbering of elements of $G=\{\sigma_1,\cdots,\sigma_k=e\}$ such that $\sigma_i=\sigma^i$.
Thus the condition for a fixed point is that
$$(\tilde{\xi}_k(0),\tilde{\xi}_1(\theta_1),\cdots,
\tilde{\xi}_{k-1}(\theta_{k-1}),\widetilde{\sigma^*\eta})\equiv
(\tilde{\xi}_1(\theta_1),\cdots,\tilde{\xi}_{k-1}(\theta_{k-1}),\tilde{\xi}_k(0),\tilde{\eta}
)$$ modulo gauge transformations. By (\ref{general}) this implies
$$[\xi_{1}]=[\xi_{2}]=\cdots =[\xi_{k}] \in\frak{M}_{\hat{M}},\ \textrm{and }\ [\sigma^*{\eta}]=[{\eta}]\in \frak{M}_{\hat{N}},$$
and
$$0 \equiv \theta_1+\theta,\ \ \theta_1 \equiv \theta_2+\theta,\cdots, \theta_{k-1}\equiv 0+\theta\ \ \ \textrm{mod}\ 2\pi$$
for some constant $\theta\in [0,2\pi)$. Summing up the above $k$ equations gives $$0\equiv k\theta\ \ \ \textrm{mod}\ 2\pi,$$ and hence
$$\theta=0,\frac{2\pi}{k},\cdots,\frac{2(k-1)\pi}{k},$$ which lead to the corresponding $k$ solutions
\begin{eqnarray}\label{temp}
[(\tilde{\xi}((k-1)\theta),\tilde{\xi}((k-2)\theta),\cdots
,\tilde{\xi}(\theta) ,
\tilde{\xi}(0),\tilde{\eta})],
\end{eqnarray}
where we let $\xi_i=\xi$ for all $i$ and $[\eta]\in \frak{M}_{\hat{N}}^{\Bbb Z_k}$.
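For concreteness, we spell out the back-substitution behind (\ref{temp}): the first congruence gives $\theta_1\equiv -\theta\equiv (k-1)\theta$, and inductively
$$\theta_i\equiv \theta_{i-1}-\theta\equiv (k-i)\theta\ \ \ \textrm{mod}\ 2\pi,\ \ \ i=1,\cdots,k-1,$$
while the last congruence $\theta_{k-1}\equiv \theta$ holds automatically since $k\theta\equiv 0$ mod $2\pi$. These are exactly the phases appearing in (\ref{temp}).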
Therefore $\tilde{\frak{M}}_{\bar{M}_k}^{\Bbb Z_k}$ is diffeomorphic to $k$ copies of $\frak{M}_{\hat{M}}\times\frak{M}_{\hat{N}}^{\Bbb Z_k}\simeq\frak{M}_{M}\times T^{\nu}.$
\end{proof}
\begin{lem}
If $G=\Bbb Z_k$, then $\mathfrak{X}_{\bar{M}_k}$ is diffeomorphic to $\frak{M}_{M}\times T^{\nu}$.
\end{lem}
\begin{proof}
For $\xi\in \frak{M}_{\hat{M}}$, $\eta\in \mathfrak{X}_{\hat{N}}$, and $\theta$ as above, let $$\tilde{\Xi}_{\theta}=(\tilde{\xi}((k-1)\theta),\tilde{\xi}((k-2)\theta),\cdots,\tilde{\xi}(\theta), \tilde{\xi}(0),\tilde{\eta}),$$ and denote $\Upsilon([\tilde{\Xi}_\theta])$ by $[\Xi_\theta]$. From the above lemma, we have that $$\sigma^*\tilde{\Xi}_\theta=e^{i\theta}\cdot\tilde{\Xi}_\theta,$$ where $\sigma$ is a generator of $\Bbb Z_k$, and $\cdot$ denotes the gauge action.
We will show that $k-1$ copies of $\frak{M}_{M}\times T^{\nu}$ corresponding to nonzero $\theta$ do not belong to $\mathfrak{X}_{\bar{M}_k}$.
Let $\theta=\frac{2\pi}{k},\cdots,\frac{2(k-1)\pi}{k}$. By the $\Bbb Z_k$-equivariance of $\Upsilon$,
$[\sigma^*\Xi_\theta]=\sigma^*[\Xi_\theta]$, and so write $$\sigma^*\Xi_\theta=e^{i\vartheta}\cdot\Xi_\theta \ \ \ \textrm{for}\ e^{i\vartheta}\in Map(\bar{M}_k,S^1).$$ By taking the cylindrical length sufficiently large, $e^{i\vartheta}$ can be made arbitrarily close to the constant $e^{i\theta}$ in a Sobolov norm and hence $C^0$-norm too by the Sobolov embedding theorem. (The Sobelev embedding constant does not change, if the cylindrical length gets large, because the local geometries remain unchanged.)
Assume to the contrary that $\sigma^*(\frak{g}\cdot\Xi_\theta)=\frak{g}\cdot\Xi_\theta$ for some $\frak{g} \in \mathcal{G}$.
Then combined with that
\begin{eqnarray*}
\sigma^*(\frak{g}\cdot\Xi_\theta)&=& \sigma^*(\frak{g})\cdot\sigma^*(\Xi_\theta)\\ &=& \sigma^*(\frak{g})\cdot (e^{i\vartheta}\cdot\Xi_\theta)\\ &=& (\sigma^*(\frak{g})e^{i\vartheta})\cdot \Xi_\theta,
\end{eqnarray*}
it follows that
$$\frak{g}\cdot \Xi_\theta = (\sigma^*(\frak{g})e^{i\vartheta})\cdot \Xi_\theta,$$
which implies that
\begin{eqnarray}\label{prayer2}
\sigma^*(\frak{g})=\frak{g}e^{-i\vartheta},
\end{eqnarray}
where we used the continuity of $\frak{g}$ and the fact that the spinor part of $\Xi_\theta$ is not identically zero on an open subset by the unique continuation property.
Choose a fixed point $p\in \bar{M}_k$ under the $\Bbb
Z_k$-action.\footnote{This and the next two paragraphs are the only places where we use the condition
that the action on $N$ has a fixed point, which was assumed at the
beginning of the proof of the current theorem.} Then evaluating
(\ref{prayer2}) at the point $p$ gives $$\frak{g}(p)=\frak{g}(p)e^{-i\vartheta(p)}\approx \frak{g}(p)e^{-i\theta},$$ which yields a desired contradiction.
It remains to show that $\frak{M}_{M}\times T^{\nu}$ corresponding to $\theta=0$ belongs to $(\mathcal{A}(W_+)^G\times (\Gamma(W_+)^G-\{0\}))/ \mathcal{G}^G$. Let $\Xi_0=\tilde{\Xi}_0+(a,\varphi)$ where $a\in \Gamma(\Lambda^1(\bar{M}_k;i\Bbb R))$ satisfies the Lorentz gauge condition $d^*a=0$. Since
$$\sigma^*\Xi_0=\tilde{\Xi}_0+(\sigma^*a,\sigma^*\varphi)$$ belongs to the same gauge equivalence class as $\Xi_0$, and $$d^*(\sigma^*a)=\sigma^*(d^*a)=0$$ using the isometric action of $G$, we have that $\sigma^*a\equiv a$ modulo $H^1(\bar{M}_k;\Bbb Z)=\Bbb Z^{b_1(\bar{M}_k)}$. Applying the obvious identity $(\sigma^*)^k=\textrm{Id}$, it follows that $\sigma^*a=a$. This implies that $\sigma^*\Xi_0$ is a constant gauge transform $e^c\cdot \Xi_0$ of $\Xi_0$. If $e^c\ne 1$, it leads to a contradiction by the same method as above using the existence of a fixed point. Therefore $\sigma^*\Xi_0=\Xi_0$ as desired, and we conclude that $\mathfrak{X}_{\bar{M}_k}$ is equal to $\frak{M}_{M}\times T^{\nu}$.
\end{proof}
Now we will consider the case of any finite group $G$. We will show that $\mathfrak{X}_{\bar{M}_k}$ is diffeomorphic to $\Upsilon(S)$ where
$$S:=\{[(\tilde{\xi}(0),\cdots , \tilde{\xi}(0),\tilde{\xi}(0), \tilde{\eta})]\ |\ [\eta]\in \frak{M}_{\hat{N}}^G, [\xi]\in \frak{M}_{\hat{M}}\}.
$$
Since $(\tilde{\xi}(0),\cdots , \tilde{\xi}(0),\tilde{\xi}(0), \tilde{\eta})$ is $G$-invariant, $\Upsilon([(\tilde{\xi}(0),\cdots , \tilde{\xi}(0),\tilde{\xi}(0), \tilde{\eta})])$ is also represented by a $G$-invariant element by the same method as in the above paragraph using the existence of a fixed point. Hence $\Upsilon(S)\subset \mathfrak{X}_{\bar{M}_k}$.
To show the reverse inclusion, first note that any element of $\mathfrak{X}_{\bar{M}_k}$ can be written as $\Upsilon([(\tilde{\xi}(\theta_1),\cdots , \tilde{\xi}(\theta_{k-1}),\tilde{\xi}(0), \tilde{\eta})])$ for $[\xi]\in \frak{M}_{\hat{M}}$. We only need to show that all $\theta_i$ are zero, and $[\eta]\in \frak{M}_{\hat{N}}^G$. For $\sigma_i\in G$, let $\langle\sigma_i\rangle$ be the cyclic subgroup generated by $\sigma_i$, and $\mathfrak{X}_{\bar{M}_k,\langle\sigma_i\rangle}$ be the $\langle\sigma_i\rangle$-monopole moduli space. Since $\mathfrak{X}_{\bar{M}_k}$ is a subset of $\mathfrak{X}_{\bar{M}_k,\langle\sigma_i\rangle}\subset \frak{M}_{\bar{M}_k}$, we can use the above lemma to deduce that $\theta_i$ is 0, and $[\eta]\in \frak{M}_{\hat{N}}^{\langle\sigma_i\rangle}$. Since $i$ is arbitrary, we get the desired conclusion.
Finally let's prove the theorem when the action on $N$ is free. In this case, directly from Theorem \ref{nakamur} and the gluing theory, we have diffeomorphisms
\begin{eqnarray*}
\frak{X}_{\bar{M}_k,\bar{\mathfrak{s}}}&=& \mathfrak{M}_{M\# N/G,\mathfrak{s}\# \frak{s}_N'}\\
&\simeq& \mathfrak{M}_{M,\mathfrak{s}}\times \mathfrak{M}^{red}_{N/G, \frak{s}_N'}\\
&\simeq& \mathfrak{M}_{M,\mathfrak{s}}\times T^\nu,
\end{eqnarray*}
where $\frak{s}_N'$ is the Spin$^c$ structure on $N/G$ induced from $\frak{s}_N$ and its $G$ action induced from that of $\bar{\frak{s}}$. This completes the proof.
\end{proof}
Now we come to the main theorem which implies Theorem \ref{firstth}.
\begin{thm}\label{myLord}
Let $(\bar{M}_k,\bar{\frak{s}})$ be as in Theorem \ref{firstth} and $d\geq 0$ be an integer.
If $\nu:=\dim H_1(N;\Bbb R)^{G}=0$, then for $A=1$ or
$a_1\wedge\cdots\wedge a_{j}$
$$SW^{G}_{\bar{M}_k,\bar{\frak{s}}}(U^d A)\equiv
SW_{M,\frak{s}}(U^d A)\ \ \ \textrm{mod}\ 2,$$ where $U$ denotes the
positive generator of the zeroth homology of $\bar{M}_k$ or $M$,
and each $a_i\in H_1(M;\Bbb Z)/\textrm{torsion}$ also denotes any
of $k$ corresponding elements in $H_1(\bar{M}_k;\Bbb Z)$ by abuse
of notation.
If $\nu\ne 0$, then
$$SW^{G}_{\bar{M}_k,\bar{\frak{s}}}(U^d A\wedge b_1\wedge\cdots\wedge b_\nu)\equiv
SW_{M,\frak{s}}(U^d A)\ \ \ \textrm{mod}\ 2,$$ where $A$ is as above, and
$b_1,\cdots, b_\nu\in H_1(N;\Bbb Z)$ is a basis of $H_1(N;\Bbb R)^{G}$.
\end{thm}
\begin{proof}
As before, let's first consider the case when the action has a fixed point. We continue to use the same notation and context as the previous theorem.
\begin{lem}\label{LHW}
The $\mu$ cocycles on $\frak{M}_{{M}}\times T^\nu$ and $\mathfrak{X}_{\bar{M}_k}$ coincide, i.e.
$$\mu_{M}(a_i)=\mu_{\bar{M}_k}(a_i),\ \ \ \ \mu_{N}(b_i)=\mu_{\bar{M}_k}(b_i),\ \ \ \ \mu_{M}(U)=\mu_{\bar{M}_k}(U)$$ where the equality means the identification under the above diffeomorphism.
\end{lem}
\begin{proof}
The first equality comes from the fact that the holonomy
maps $Hol_{a_i}$ defined on $\frak{M}_{{M}}$ and
$\tilde{\frak{M}}_{\bar{M}_k}^{G}$ are just the same, when
the representative of $a_i$ is chosen away from the gluing
regions. Using the isotopy between $\frak{M}_{\bar{M}_k}^{G}$ and $\tilde{\frak{M}}_{\bar{M}_k}^{G}$, the induced maps $Hol^*_{a_i}$ from $H^1(S^1;\Bbb Z)$ to $H^1(\frak{M}_{{M}};\Bbb Z)$ and
$H^1(\frak{M}_{\bar{M}_k}^{G};\Bbb Z)$ are the same so that
$$\mu_{M}(a_i)=Hol^*_{a_i}([d\theta])=\mu_{\bar{M}_k}(a_i)$$
for each $i$. Likewise for the second equality.
For the third equality, note that the $S^1$-fibrations on
$\frak{M}_{\hat{M}}\times T^\nu$ and
$\tilde{\frak{M}}_{\bar{M}_k}^{G}$ induced by the
$\mathcal{G}/\mathcal{G}_o$ action are isomorphic in an obvious
way, where the $T^\nu$ part is fixed under the
$\mathcal{G}/\mathcal{G}_o$ action. Since the isotopy between
$\tilde{\frak{M}}_{\bar{M}_k}$ and $\frak{M}_{\bar{M}_k}$ can be
extended to the $S^1$-fibrations induced by the
$\mathcal{G}/\mathcal{G}_o$ action, those $S^1$-fibrations are
isomorphic. In the same way using gluing theory, there are
isomorphisms of $S^1$-fibrations on $\frak{M}_{M}$, its approximate moduli space
$\tilde{\frak{M}}_{M}$, and $\frak{M}_{\hat{M}}$. Therefore we
have an isomorphism between those $S^1$-fibrations on
$\frak{M}_{M}\times T^\nu$ and $\mathfrak{X}_{\bar{M}_k}$.
\end{proof}
We are ready for the evaluation of the Seiberg-Witten invariant
on $\mathfrak{X}_{\bar{M}_k}$. Suppose $\nu\ne 0$. Let
$l_1,\cdots,l_{b_1(N)}$ be loops representing homology classes $b_1,\cdots,b_{b_1(N)}$
respectively. Then $b_i^*$ introduced in Lemma \ref{adam}
restricts to a nonzero element of
$H^1(l_j;\Bbb Z)$ iff $i=j$. Moreover $b_i^*$ is a generator of $H^1(l_j;\Bbb Z)$, and hence
$\{\mu(b_1),\cdots,\mu(b_\nu)\}$ is a standard set of generators of the first
cohomology of $T^\nu\simeq \Bbb R\langle
b_1^*,\cdots,b_\nu^*\rangle/\Bbb Z\langle
b_1^*,\cdots,b_\nu^*\rangle$. Combining the fact that $\mu(b_1)\wedge
\cdots \wedge \mu(b_{\nu})$ is a generator of $H^\nu(T^{\nu};\Bbb
Z)$ with the above identification of $\mu$-cocycles, we can conclude that
$$SW^{G}_{\bar{M}_k,\bar{\frak{s}}}(U^dA\wedge
b_1\wedge\cdots\wedge b_\nu)\equiv SW_{M,\frak{s}}(U^dA)\ \ \
\textrm{mod}\ 2$$ for $A=1$ or $a_1\wedge\cdots\wedge a_j$. The
case of $\nu=0$ is just a special case.
When the action is free, the theorem is obvious from the identification $\frak{X}_{\bar{M}_k,\bar{\mathfrak{s}}}=\mathfrak{M}_{M\# N/G,\mathfrak{s}\# \frak{s}_N'}$.
\end{proof}
\begin{rmk}
If the diffeomorphism between $\frak{X}_{\bar{M}_k}$ and $\mathfrak{M}_{M}\times T^\nu$
is orientation-preserving, then $G$-monopole invariants and Seiberg-Witten invariants are exactly the same.
We conjecture that the diffeomorphism between $\frak{X}_{\bar{M}_k}$ and $\frak{M}_{M}\times T^{\nu}$ is orientation-preserving, when the homology orientations are appropriately chosen.
One may try to prove $\frak{X}_{\bar{M}_k}\simeq \frak{M}_{M}\times T^{\nu}$ by gluing $G$-monopole moduli spaces directly. But the above method of proof by gluing ordinary moduli spaces also shows that for $G=\Bbb Z_k$, $\frak{M}_{\bar{M}_k}^{\Bbb Z_k}$ is diffeomorphic to $k$ copies of $\frak{M}_{M}\times T^{\nu}$. Lemma \ref{LHW} is also true for any other component of $\frak{M}_{\bar{M}_k}^{\Bbb Z_k}$.
\end{rmk}
\section{Examples of $(N,\frak{s}_N)$ of Theorem \ref{firstth}}
In this section, $G, H$ and $K$ denote compact Lie groups. Let's recall some elementary facts on equivariant principal bundles.
\begin{defn}
A principal $G$ bundle $\pi : P \rightarrow M$ is said to be $K$-equivariant if $K$ acts on the left on
both $P$ and $M$ in such a way that
(1) $\pi$ is $K$-equivariant :
$$\pi(k\cdot p) = k\cdot\pi(p)$$ for all $k\in K$ and $p\in P$,
(2) the left action of $K$ commutes with the right action of $G$ :
$$k\cdot(p\cdot g) = (k\cdot p)\cdot g$$ for all $k\in K, p\in P$, and $g\in G$.
\end{defn}
If $H$ is a normal subgroup of $G$, then one can define a principal $G/H$ bundle $P/H$ by taking the fiberwise quotient of $P$ by $H$. Moreover, if $P$ is $K$-equivariant under a left $K$ action, then there is an induced $K$ action on $P/H$ making $P/H$ $K$-equivariant.
\begin{lem}\label{jacob}
Let $P$ and $\tilde{P}$ be a principal $G$ and $\tilde{G}$ bundle
respectively over a smooth manifold $M$ such that $\tilde{P}$
double-covers $P$ fiberwise. For a normal subgroup $H$
containing $\Bbb Z_2$ in both $\tilde{G}$ and $S^1$ where the
quotient of $\tilde{G}$ by that $\Bbb Z_2$ gives $G$, let
$$\tilde{P}\otimes_{H}S^1:=(\tilde{P}\times_M (M\times S^1))/ H$$
be the quotient of the fiber product of $\tilde{P}$ and the
trivial $S^1$ bundle $M\times S^1$ by $H$, where the right $H$
action is given by $$(p,(x,e^{i\vartheta}))\cdot h=(p\cdot h,
(x,e^{i\vartheta}h^{-1})).$$
Suppose that $M$ and $P$ admit a smooth $S^1$ action such that $P$
is $S^1$-equivariant. Then a principal $\tilde{G}\otimes_{H}S^1$
bundle $\tilde{P}\otimes_{H}S^1$ is also $S^1$-equivariant by
lifting the action on $P$. In particular, any smooth $S^1$-action
on a smooth spin manifold lifts to its trivial Spin$^c$ bundle so
that the Spin$^c$ structure is $S^1$-equivariant.
\end{lem}
\begin{proof}
Any left $S^1$ action on $P$ can be lifted, at least locally, to a unique $S^1$ action on $\tilde{P}$ commuting with the right $\tilde{G}$ action. If
the monodromy is trivial for any orbit, then the $S^1$ action can
be globally well-defined on $\tilde{P}$, and hence on
$\tilde{P}\otimes_{H}S^1$, where the $S^1$ action on the latter
$S^1$ fiber can be any left action, e.g. the trivial action,
commuting with the right $S^1$ action.
If the monodromy is not trivial, it has to be $\Bbb Z_2$ for any
orbit, because the orbit space is connected. In that case, we
need the trivial $S^1$ bundle $M\times S^1$ with an ``ill-defined''
$S^1$ action with monodromy $\Bbb Z_2$ defined as follows.
First consider the double covering map from $M\times S^1$ to
itself defined by $(x,z)\mapsto (x,z^2)$. Equip the downstairs
$M\times S^1$ with the left $S^1$ action which acts on the base as
given and on the fiber $S^1$ by multiplication as complex
numbers. Then this downstairs action can be locally lifted upstairs, commuting with the right $S^1$ action. Most importantly,
it has $\Bbb Z_2$ monodromy as desired. Explicitly,
$e^{i\vartheta}$ for $\vartheta\in[0,2\pi)$ acts on the fiber
$S^1$ by the multiplication of $e^{i\frac{\vartheta}{2}}$.
Combining this with the local action on $\tilde{P}$, we get a
well-defined $S^1$ action on $\tilde{P}\otimes_{H}S^1$, because
the two $\Bbb Z_2$ monodromies cancel each other.
Once the $S^1$ action on $\tilde{P}\otimes_{H}S^1$ is globally well-defined, it commutes with the right $\tilde{G}\otimes_{H}S^1$ action, because the local $S^1$ action on $\tilde{P}\times S^1$ commutes with the right $\tilde{G}\times S^1$ action.
If $S^1$ acts on a smooth manifold, the orthonormal frame bundle is always $S^1$-equivariant under the action. Then by the above result any $S^1$ action on a smooth spin manifold lifts to the trivial Spin$^c$ bundle which is $(\textrm{spin bundle})\otimes_{\Bbb Z_2}S^1$.
\end{proof}
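The $\Bbb Z_2$ monodromy of the local lift can be made concrete in coordinates. The following Python sketch (not part of the proof; the sample point and angles are arbitrary) models the double cover $w\mapsto w^2$ of the fiber $S^1$ and checks that the local lift of the rotation by $\vartheta$, namely multiplication by $e^{i\vartheta/2}$, covers the downstairs rotation but returns $-w$ after a full turn:

```python
import cmath

# Downstairs action on the fiber S^1: rotation z -> e^{i*theta} z.
# Local lift through the double cover w -> w^2: w -> e^{i*theta/2} w.
def lift(theta, w):
    return cmath.exp(1j * theta / 2) * w

w0 = cmath.exp(1j * 0.7)   # an arbitrary point on the upstairs fiber S^1
z0 = w0 ** 2               # its image downstairs

# The lift covers the downstairs rotation: (lift w)^2 = e^{i*theta} w^2.
theta = 1.3
assert abs(lift(theta, w0) ** 2 - cmath.exp(1j * theta) * z0) < 1e-12

# Monodromy: going once around the orbit (theta = 2*pi) returns -w0,
# not w0, so the local lifts glue only up to a Z_2 ambiguity.
assert abs(lift(2 * cmath.pi, w0) + w0) < 1e-12
```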
\begin{lem}\label{joseph}
Let $P$ be a flat principal $G$ bundle over a smooth manifold $M$ with a smooth $S^1$ action. Suppose that the action can be lifted to the universal cover $\tilde{M}$ of $M$. Then it can be also lifted to $P$ so that $P$ is $S^1$-equivariant.
\end{lem}
\begin{proof}
For the covering map $\pi: \tilde{M}\rightarrow M$, the pull-back
bundle $\pi^*P$ is the trivial bundle $\tilde{M}\times G$. By
letting $S^1$ act on the fiber $G$ trivially, $\pi^*P$ can be made
$S^1$-equivariant. For the deck transformation group $\pi_1(M)$,
$P$ is obtained from an element of $\textrm{Hom}(\pi_1(M),G)$. Any
deck transformation acts on each fiber $G$ as the left
multiplication of a constant in $G$ so that it commutes with not
only the right $G$ action but also the left $S^1$ action which is
trivial on the fiber $G$. Therefore the $S^1$ action on $\pi^*P$
projects down to an $S^1$ action on $P$. To see whether this $S^1$
action commutes with the right $G$ action, it's enough to check
for the local $S^1$ action, which can be seen upstairs on
$\pi^*P$.
\end{proof}
\begin{lem}
On a smooth closed oriented 4-manifold $N$ with $b_2^+(N)=0$, any Spin$^c$ structure $\frak{s}$ satisfies $$c_1^2(\frak{s})\leq -b_2(N),$$ and the choice of a Spin$^c$ structure $\frak{s}_N$ satisfying $c_1^2(\frak{s}_N)=-b_2(N)$ is always possible.
\end{lem}
\begin{proof}
If $b_2(N)=0$, it is obvious. The case of $b_2(N)>0$ can be seen
as follows. Using Donaldson's theorem \cite{donal1,donal2}, we
diagonalize the intersection form $Q_N$ on $H^2(N;\Bbb
Z)/\textrm{torsion}$ over $\Bbb Z$ with a basis
$\{\alpha_1,\cdots,\alpha_{b_2(N)}\}$ satisfying
$Q_N(\alpha_i,\alpha_i)=-1$ for all $i$. Then for any Spin$^c$
structure $\frak{s}$, the rational part of $c_1(\frak{s})$ should
be of the form $$\sum_{i=1}^{b_2(N)}a_i\alpha_i$$ where each
$a_i\equiv 1$ mod 2, because $$Q_N(c_1(\frak{s}),\alpha)\equiv
Q_N(\alpha,\alpha)\ \ \ \ \textrm{mod}\ 2$$ for any $\alpha\in
H^2(N;\Bbb Z)$. Consequently $|a_i|\geq 1$ for all $i$ which means
$$c_1^2(\frak{s})=\sum_{i=1}^{b_2(N)}-a_i^2\leq -b_2(N),$$ and we
can get a Spin$^c$ structure $\frak{s}_N$ with
$$c_1(\frak{s}_N)\equiv\sum_i \alpha_i\ \ \ \ \textrm{modulo
torsion}$$ by tensoring any $\frak{s}$ with a line bundle $L$
satisfying $$2c_1(L)+c_1(\frak{s})\equiv\sum_i \alpha_i\ \ \ \
\textrm{modulo torsion},$$ completing the proof.
\end{proof}
\begin{thm}\label{Nexam}
Let $X$ be one of $$S^4,\ \ \overline{\Bbb CP}_2,\ \ S^1\times
(L_1\#\cdots\#L_n),\ \ \textrm{and}\ \ \widehat{S^1\times L}$$
where each $L_i$ and $L$ are quotients of $S^3$ by free actions
of finite groups, and $\widehat{S^1\times L}$ is the manifold
obtained from the surgery on $S^1\times L$ along an $S^1\times
\{pt\}$.
Then for any integer $l\geq 0$ and any smooth
closed oriented 4-manifold $Z$ with $b_2^+(Z)=0$ admitting a
metric of positive scalar curvature, $$X\ \#\ kl Z$$ satisfies the properties of $N$ with $G=\Bbb Z_k$ in
Theorem \ref{firstth}, where the Spin$^c$ structure of $X \# kl Z$ is given by gluing any Spin$^c$ structure $\frak{s}_X$ on $X$ and any Spin$^c$ structure $\frak{s}_Z$ on $Z$ satisfying $c_1^2(\frak{s}_X)=-b_2(X)$ and $c_1^2(\frak{s}_Z)=-b_2(Z)$ respectively.
\end{thm}
\begin{proof}
First, we will define $\Bbb Z_k$ actions preserving a metric of positive scalar curvature.
In fact, our actions on $X$ will be induced from such $S^1$ actions.
For $X=S^4$, one can take a $\Bbb Z_k$-action coming from a nontrivial action of $S^1\subset SO(5)$ preserving a round metric. In this case, one can choose either a free action or an action with fixed points.
If $X=\overline{\Bbb CP}_2$, then one can use the following actions for some integers $m_1, m_2$:
\begin{eqnarray}\label{exam}
j\cdot [z_0,z_1,z_2]=[z_0,e^{\frac{2jm_1}{k}\pi i}z_1,e^{\frac{2jm_2}{k}\pi i}z_2]
\end{eqnarray}
for $j\in \Bbb Z_k$, which preserve the Fubini-Study metric and have at least 3 fixed points $[1,0,0], [0,1,0], [0,0,1]$.
Before considering the next example, recall that every finite
group acting freely on $S^3$ is in fact conjugate to a subgroup of
$SO(4)$, and hence its quotient 3-manifold admits a metric of
constant positive curvature. This follows from the well-known
result of G. Perelman. (See \cite{morgan-tian1, morgan-tian2}.)
In $S^1\times (L_1\#\cdots\#L_n)$, the action is defined as a rotation along the $S^1$-factor, which is
obviously free and preserves a product metric. By endowing $L_1\#\cdots\#L_n$ with a metric of positive scalar curvature via the Gromov-Lawson surgery \cite{GL}, $S^1\times (L_1\#\cdots\#L_n)$ has a desired metric.
Finally the above-mentioned $S^1$ action on
$S^1\times L$ can be naturally extended to $\widehat{S^1\times
L}$, and moreover the Gromov-Lawson surgery \cite{GL} on
$S^1\times\{pt\}$ produces an $S^1$-invariant metric of positive
scalar curvature. Its fixed point set is $\{0\}\times S^2$ in the attached $D^2\times S^2$.
Now $X\# kl Z$ has an obvious $\Bbb Z_k$-action induced
from that of $X$ and a $\Bbb Z_k$-invariant metric which has positive
scalar curvature again by the Gromov-Lawson surgery.
It remains to prove that the above $\Bbb Z_k$-action on $X \# kl Z$ can be lifted to the Spin$^c$ structure obtained by gluing the above $\frak{s}_X$ and $\frak{s}_Z$.
For this, we will only prove that any such $\frak{s}_X$ is $\Bbb Z_k$-equivariant. Then one can glue $k$ copies of $lZ$ in an obvious $\Bbb Z_k$-equivariant way. Recalling that the $\Bbb Z_k$ action on $X$ actually comes from an $S^1$ action, we will actually show the $S^1$-equivariance of $\frak{s}_X$ on $X$.
On $S^4$, the unique Spin$^c$ structure is trivial. Since $S^4$ is spin, any smooth $S^1$ action on it can be lifted to this trivial Spin$^c$ structure by Lemma \ref{jacob}.
Any smooth $S^1$ action on $\overline{\Bbb CP}_2$ is uniquely lifted to its orthonormal frame bundle $F$, and any Spin$^c$ structure on $\overline{\Bbb CP}_2$
satisfying $c_1^2=-1$ is one of the double covers $P_1$ and $P_2$ of $F\oplus P$ and $F\oplus P^*$ respectively, where $P$ is the principal $S^1$ bundle over $\overline{\Bbb CP}_2$ with $c_1(P)=[H]$ and $P^*$ is its dual. Note that there is a base-preserving diffeomorphism between $P$ and $P^*$, whose total space is $S^5$. Obviously the action
(\ref{exam}) is extended to $S^5\subset \Bbb C^3$ commuting with the principal $S^1$ action of the Hopf fibration. By Lemma \ref{jacob} the $S^1$-action can be lifted to $P_i\otimes_{S^1} S^1$ in an $S^1$-equivariant way, which is isomorphic to $P_i$ for $i=1,2$.
In case of $S^1\times (L_1\#\cdots\#L_n)$, any Spin$^c$ structure is
the pull-back from $L_1\#\cdots\#L_n$, and satisfies $c_1^2=0=-b_2$. Because the
tangent bundle is trivial, a free $S^1$-action is obviously
defined on its trivial spin bundle. Then the action can be
obviously extended to any Spin$^c$ structure, because it is
pulled-back from $L_1\#\cdots\#L_n$.
\begin{lem}
$\widehat{S^1\times L}$ is a rational homology 4-sphere, and $$H^2(\widehat{S^1\times L};\Bbb Z)=H_1(L;\Bbb Z).$$ Its universal cover is $(|\pi_1(L)|-1)S^2\times S^2$ where $0(S^2\times S^2)$ means $S^4$.
\end{lem}
\begin{proof}
Since the Euler characteristic is easily computed to be 2 from the
surgery description, and $b_1(\widehat{S^1\times L})=b_1(L)=0$,
it follows that $\widehat{S^1\times L}$ is a rational homology
4-sphere.
By the universal coefficient theorem,
\begin{eqnarray*}
H^2(\widehat{S^1\times L};\Bbb Z)
&=&\textrm{Hom}(H_2(\widehat{S^1\times L};\Bbb Z),\Bbb Z)\oplus \textrm{Ext}(H_1(\widehat{S^1\times L};\Bbb
Z),\Bbb Z)\\ &=& H_1(\widehat{S^1\times L};\Bbb Z)\\ &=& H_1(L;\Bbb Z).
\end{eqnarray*}
The universal cover is equal to the manifold obtained from $S^1\times S^3$ by performing surgery along $S^1\times \{ |\pi_1(L)|\ \textrm{points in } S^3\}$, and hence it must be $(|\pi_1(L)|-1)S^2\times S^2$.
\end{proof}
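The Euler characteristic count in the lemma can be spelled out. Performing surgery on $S^1\times\{pt\}$ replaces a tubular neighborhood $S^1\times D^3$ by $D^2\times S^2$, glued along $S^1\times S^2$; since $\chi$ vanishes for every piece carrying an $S^1$ factor,

```latex
\chi(\widehat{S^1\times L})
  =\chi(S^1\times L)-\chi(S^1\times D^3)+\chi(D^2\times S^2)
  =0-0+2=2.
```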
By the above lemma, there are $|H_1(L;\Bbb Z)|$ Spin$^c$
structures on $\widehat{S^1\times L}$, all of which have torsion $c_1$ and hence satisfy
$c_1^2=0=-b_2(\widehat{S^1\times L})$. Since any $S^1$ bundle on $\widehat{S^1\times L}$ is flat, and the $S^1$-action on $\widehat{S^1\times L}$ can be obviously lifted to its universal cover, Lemma \ref{joseph} says that any $S^1$ bundle is $S^1$-equivariant under the $S^1$ action.
By the construction, $\widehat{S^1\times L}$ is spin, and hence the trivial Spin$^c$ bundle is $S^1$-equivariant by Lemma \ref{jacob}.
Any other Spin$^c$ structure is given by the tensor product over $S^1$ of the trivial Spin$^c$ bundle and an $S^1$ bundle, both of which are $S^1$-equivariant bundles. Therefore any Spin$^c$ bundle of $\widehat{S^1\times L}$ is $S^1$-equivariant.
This completes the proof.
\end{proof}
\bigskip
\section{\label{sec1}Introduction}
Several types of time-dependent oscillators have been studied along the past years. Examples are: (i) the harmonic oscillator\cite{1}; (ii) the pseudo-harmonic oscillator\cite{1, 2}; (iii) the parametric oscillator\cite{3}; and (iv) the inverted harmonic oscillator\cite{4}. Recently, another interesting class of time-dependent oscillators, named log-periodic oscillators, was studied\cite{5}.
In Ref. \onlinecite{5}, \"{O}zeren\cite{5} considered the time evolution of five different one-dimensional classical oscillators. The coherent states for each system were constructed by using the SU(1, 1) algebra and their time evolution was investigated.
In this work, we use the Lewis and Riesenfeld\cite{6} (LR) invariant method and a unitary transformation to obtain the exact Schr\"{o}dinger wave function for three out of the five log-periodic-type oscillators investigated by \"{O}zeren\cite{5}, namely: (i) $m(t)=m_0\frac{t}{t_0}$ and $k(t)=k_0\frac{t_0}{t}$; (ii) $m(t)=m_0$ and $k(t)=k_0\left(\frac{t_0}{t}\right)^{2}$; (iii) $m(t)=m_0\left(\frac{t}{t_0}\right)^{2}$ and $k(t)=k_0$. In all three cases $\omega(t)=\omega_0\frac{t_0}{t}$.
The wave functions $\psi_n (q,t)$ for the time dependent harmonic oscillator ($H(t)=\frac{p^2}{2m(t)}+\frac{1}{2}m(t)\omega^2(t)q^2$) obtained in Ref. \onlinecite{1} are written in terms of $\rho$, a c-number quantity satisfying the generalized Milne-Pinney equation ($\ddot{\rho}+\gamma(t)\dot{\rho}+\omega^2(t)\rho=\frac{1}{m^2(t)\rho^3}$), whose solution can be found following the procedure reported in Refs. \onlinecite{7, 8}.
Here we write the solution of the Milne-Pinney equation for each system to obtain the exact wave functions for the oscillators. This paper is outlined as follows. In Sec. \ref{sec2} we briefly review the LR invariant method for the time-dependent harmonic oscillator. In Sec. \ref{sec3} we obtain the wave functions for the oscillators considered, and calculate the correlations between position and momentum and the uncertainty product. For oscillator (i) we construct the coherent states, while for oscillators (ii) and (iii) we construct the squeezed states. The analysis of the phase diagram for the three oscillators is also presented. Finally, some concluding remarks are added in Sec. \ref{sec4}.
\section{\label{sec2}THE LEWIS AND RIESENFELD INVARIANT METHOD - WAVE FUNCTIONS FOR A TIME-DEPENDENT HARMONIC OSCILLATOR}
Consider a time-dependent harmonic oscillator described by the Hamiltonian
\begin{equation}\label{1}H(t)=\frac{p^2}{2m(t)}+\frac{1}{2}m(t)\omega^2(t)q^2,\end{equation}whose mass ($m(t)$) and angular frequency ($\omega(t)$) depend on time explicitly, and the variables $q$ and $p$ are canonical coordinates with $[q,p]=i\hbar$. From Eq. (\ref{1}), we obtain the equation of motion
\begin{equation}\label{2}\ddot{q}+\gamma(t)\dot{q}+\omega^2(t)q=0,\end{equation}where
\begin{equation}\label{3}\gamma(t)=\frac{d}{dt}\ln{m(t)}.\end{equation}
It is well known that an invariant for Eq. (\ref{1}) is given by\cite{6}
\begin{equation}\label{4}I=\frac{1}{2}\left[ \left(\frac{q}{\rho}\right)^2+(\rho p-m\dot{\rho}q)^{2} \right] \end{equation}where $q(t)$ satisfies Eq. (\ref{2}) and $\rho(t)$ satisfies the generalized Milne-Pinney\cite{7} equation
\begin{equation}\label{5}\ddot{\rho}+\gamma(t)\dot{\rho}+\omega^2(t)\rho=\frac{1}{m^2(t)\rho^3}.\end{equation}
The invariant $I(t)$ satisfies the equation
\begin{equation}\label{6}\frac{dI}{dt}=\frac{\partial I}{\partial t}+\frac{1}{i\hbar}[I, H]=0\end{equation}and can be considered hermitian if we choose only the real solutions of Eq. (\ref{5}). Its eigenfunctions, $\phi_n(q,t)$, are assumed to form a complete orthonormal set with time-independent discrete eigenvalues, $\lambda_n$. Thus
\begin{equation}\label{7}I\phi_n(q, t)=\lambda_n\phi_n(q, t),\end{equation}with $\langle\phi_n,\phi_{n^\prime}\rangle=\delta_{nn^\prime}$.
Consider the Schr\"odinger equation (SE)
\begin{equation}\label{8}i\hbar\frac{\partial\psi(q, t)}{\partial t}=H(t)\psi(q, t),\end{equation}where $H(t)$ is given by Eq. (\ref{1}) with $p=-i\hbar\frac{\partial}{\partial q}$. Lewis and Riesenfeld\cite{6} showed that the solutions $\psi_n (q,t)$ of the SE (see Eq. (\ref{8})) are related to the functions $\phi_n (q,t)$ by
\begin{equation}\label{9}\psi_n(q, t)=e^{i\theta_n(t)}\phi_n(q, t),\end{equation}where the phase functions $\theta_n(t)$ satisfy the equation
\begin{equation}\label{10}\hbar\frac{d\theta_n(t)}{dt}=\langle\phi_n(q, t)|\left[i\hbar\frac{\partial}{\partial t}-H(t)\right]|\phi_n(q, t)\rangle.\end{equation}
The general solution of the SE (Eq. (\ref{8})) may be written as
\begin{equation}\label{11}\psi_n(q, t)=\sum_nc_ne^{i\theta_n(t)}\phi_n(q, t),\end{equation}where $c_n$ are time-independent coefficients.
Next, consider the unitary transformation
\begin{equation}\label{12}\phi_n^\prime(q, t)=\mathcal{U}\phi_n(q, t)\end{equation}where
\begin{equation}\label{13}\mathcal{U}=\exp{\left\{-i\left[\frac{m(t)\dot{\rho}}{2\hbar\rho}\right]q^2\right\}}.\end{equation}Under this transformation and defining $\sigma=q/\rho$, Eq. (\ref{7}) now reads
\begin{align}\label{14}I^\prime\varphi_n(\sigma)&=\left[-\left(\frac{\hbar^2}{2}\right)\frac{\partial^2}{\partial\sigma^2}+\left(\frac{\sigma^2}{2}\right)\right]\varphi_n(\sigma)\nonumber\\
&=\lambda_n\varphi_n(\sigma),\quad\lambda_n=\left(n+\frac{1}{2}\right)\hbar,\end{align}where $I^\prime=\mathcal{U}I\mathcal{U}^\dagger$ and $\frac{\varphi_n(\sigma)}{\rho^{1/2}}=\phi_n^\prime$. The factor $\rho^{1/2}$ ensures the normalization condition
\begin{equation}\label{15}\int{\phi_n^{\prime *}(q, t)\phi_n^\prime(q, t)}dq=\int{\varphi_n^{*}(\sigma)\varphi_n(\sigma)}d\sigma=1.\end{equation}
The solution of Eq. (\ref{14}) corresponds to that of the time-independent harmonic oscillator with $\lambda_n=\left(n+\frac{1}{2}\right)\hbar$. Then, by using Eqs. (\ref{12}), (\ref{13}) and (\ref{15}) we obtain
\begin{equation}\label{16}\phi_n(q,t)=\left[\frac{1}{\pi^{1/2}\hbar^{1/2}n!2^n\rho}\right]^{1/2}\exp{\left[\frac{im(t)}{2\hbar}\left(\frac{\dot{\rho}}{\rho}+\frac{i}{m(t)\rho^2}\right)q^2\right]}\times H_n\left[\left(\frac{1}{\hbar}\right)^{1/2}\frac{q}{\rho}\right],\end{equation}where $H_n$ is the usual Hermite polynomial of order $n$.
Applying $\mathcal{U}$ to the right-hand side of Eq. (\ref{10}) and after some algebra, we obtain
\begin{equation}\label{17}\theta_n(t)=-\left(n+\frac{1}{2}\right)\int_{t_0}^t{\frac{1}{m(t^\prime)\rho^2(t^\prime)}}dt^\prime.\end{equation}
Finally, using Eqs. (\ref{9}) and (\ref{16}) the exact solution of the SE for the time-dependent harmonic oscillator reads
\begin{equation}\label{18}\psi_n(q,t)=e^{i\theta_n(t)}\left[\frac{1}{\pi^{1/2}\hbar^{1/2}n!2^n\rho}\right]^{1/2}\exp{\left[\frac{im(t)}{2\hbar}\left(\frac{\dot{\rho}}{\rho}+\frac{i}{m(t)\rho^2}\right)q^2\right]}\times H_n\left[\left(\frac{1}{\hbar}\right)^{1/2}\frac{q}{\rho}\right].\end{equation}
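As a numerical sanity check of Eq. (\ref{18}) (a sketch with illustrative values of $\hbar$ and $\rho$; the phase $e^{i\theta_n}$ and the imaginary part of the exponent drop out of $|\psi_n|^2$), the following Python snippet verifies by quadrature that the states are normalized for the first few $n$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# |psi_n|^2 = exp(-q^2/(hbar rho^2)) H_n(q/(hbar^{1/2} rho))^2
#             / (pi^{1/2} hbar^{1/2} n! 2^n rho)   -- from Eq. (18).
hbar, rho = 1.0, 0.8            # illustrative values
q = np.linspace(-12.0, 12.0, 20001)
dq = q[1] - q[0]
norms = []
for n in range(4):
    c = np.zeros(n + 1)
    c[n] = 1.0                                   # coefficient vector selecting H_n
    Hn = hermval(q / (np.sqrt(hbar) * rho), c)   # physicists' Hermite polynomial
    dens = (np.exp(-q**2 / (hbar * rho**2)) * Hn**2
            / (np.sqrt(np.pi * hbar) * math.factorial(n) * 2**n * rho))
    norms.append(dens.sum() * dq)                # Riemann-sum quadrature
assert all(abs(nrm - 1.0) < 1e-6 for nrm in norms)
```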
\section{\label{sec3}WAVE FUNCTIONS OF TIME-DEPENDENT LOG-PERIODIC OSCILLATORS}
In Ref. \onlinecite{5}, \"{O}zeren considered five different variations of $m(t)$ and $k(t)$, namely: (i) $m(t)=m_0$ and $k(t)=k_0\left(\frac{t_0}{t}\right)^2$; (ii) $m(t)=m_0 \left(\frac{t}{t_0}\right)^2$ and $k(t)=k_0$; (iii) $m(t)=m_0\left(\frac{t}{t_0}\right)^\alpha$ and $k(t)=k_0 \left(\frac{t_0}{t}\right)^{(\alpha+2)}$; (iv) $m(t)=m_0 \left(\frac{t}{t_0}\right)$ and $k(t)=k_0 \left(\frac{t_0}{t}\right)$; and (v) $m(t)=m_0 \left(\frac{t}{t_0}\right)^\alpha$ and $k(t)=k_0\left(\frac{t}{t_0}\right)^\alpha$. Here we consider only three ((i), (ii) and (iv)) out of the five oscillators studied by \"{O}zeren\cite{5}, for which $\omega(t)=\sqrt{\frac{k(t)}{m(t)}}=\omega_0\frac{t_0}{t}$.
\subsection{$\bm{m(t)=m_0\frac{t}{t_0}}$ and $\bm{k(t)=k_0\frac{t_0}{t}}$}
In this case Eqs. (\ref{2}) and (\ref{5}) read
\begin{equation}\label{19}\ddot{q}+\frac{1}{t}\dot{q}+\frac{\omega_0^2t_0^2}{t^2}q=0\end{equation}and
\begin{equation}\label{20}\ddot{\rho}+\frac{1}{t}\dot{\rho}+\frac{\omega_0^2t_0^2}{t^2}\rho=\frac{t_0^2}{m_0^2}\frac{1}{t^2\rho^3},\end{equation}respectively.
Following the procedure described in Ref. \onlinecite{7}, we find $\rho=c=\frac{1}{\sqrt{m_0\omega_0}}$. From Eqs. (\ref{17}) and (\ref{18}) we have
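That the constant $\rho=(m_0\omega_0)^{-1/2}$ indeed solves Eq. (\ref{20}) can be checked directly: with $\dot\rho=\ddot\rho=0$, the equation reduces to $\omega_0^2\rho^4=1/m_0^2$. A minimal Python check, with illustrative parameter values:

```python
# With rho_dot = rho_ddot = 0, Eq. (20) reduces to
#   (omega0^2 t0^2 / t^2) * rho = (t0^2 / m0^2) / (t^2 * rho^3),
# i.e. omega0^2 * rho^4 = 1/m0^2.  Parameter values are illustrative.
m0, omega0, t0 = 1.3, 10.0, 1.0
rho = (m0 * omega0) ** -0.5
for t in (0.5, 1.0, 7.0, 42.0):
    lhs = (omega0**2 * t0**2 / t**2) * rho
    rhs = t0**2 / (m0**2 * t**2 * rho**3)
    assert abs(lhs - rhs) < 1e-9
```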
\begin{equation}\label{21}\psi_n(q,t)=e^{-i\left(n+\frac{1}{2}\right)\omega_0t_0\ln{\frac{t}{t_0}}}\left[\frac{m_0\omega_0}{\pi\hbar(n!)^22^{2n} }\right]^{1/4}\exp{\left[-\frac{m_0\omega_0q^2}{2\hbar}\right]}\times H_n \left[\left(\frac{m_0\omega_0}{\hbar}\right)^{1/2}q \right],\end{equation}which, except for the phase factor, is similar to the well-known wave function for the time-independent harmonic oscillator.
The coherent states for the time-dependent harmonic oscillator (Eq.(\ref{1})) are constructed as follows\cite{9}. Consider the time-dependent creation ($a^\dagger(t)$) and annihilation ($a(t)$) operators defined as
\begin{equation}\label{22}a^\dagger(t)=\left(\frac{1}{2\hbar}\right)^{1/2}\left[ \left(\frac{q}{\rho}\right)-i(\rho p-m\dot{\rho}q)\right]\end{equation}
\begin{equation}\label{23}a(t)=\left(\frac{1}{2\hbar}\right)^{1/2}\left[ \left(\frac{q}{\rho}\right)+i(\rho p-m\dot{\rho}q)\right],\end{equation}where $[a(t),a^\dagger(t)]=1$. In terms of $a(t)$ and $a^\dagger(t)$ the invariant $I$ (see Eq. (\ref{4})) can be written as
\begin{equation}\label{24}I=\hbar\left(a^\dagger(t)a(t)+\frac{1}{2}\right).\end{equation}
Let $|n,t\rangle$ be the eigenstates of $I$. Therefore the following relations hold
\begin{equation}\label{25}a(t)|n,t\rangle=\sqrt{n}|n-1,t\rangle\end{equation}
\begin{equation}\label{26}a^\dagger(t)|n,t\rangle=\sqrt{n+1}|n+1,t\rangle,\end{equation}
\begin{equation}\label{27}I|n,t\rangle=\hbar\left(n+\frac{1}{2}\right)|n,t\rangle.\end{equation}
Since the coherent states for $I$ can be easily constructed, the coherent states for the time-dependent harmonic oscillator are straightforwardly obtained:
\begin{equation}\label{28}|\alpha,t\rangle=e^{-|\alpha|^2/2} \sum_{n=0}^\infty{\frac{\alpha^n}{(n!)^{1/2}}e^{i\theta_n(t)} |n,t\rangle},\end{equation}where $\theta_n$ is given by Eq. (\ref{17}), and the complex number $\alpha(t)$ satisfies the eigenvalue equation
\begin{equation}\label{29}a(t)|\alpha,t\rangle=\alpha(t)|\alpha,t\rangle,\end{equation}with
\begin{equation}\label{30}\alpha(t)=\alpha(t_0)e^{2i\theta_0(t)}\end{equation}and
\begin{equation}\label{31}\theta_0(t)=-\frac{1}{2}\int_{t_0}^t\frac{dt^\prime}{m(t^\prime)\rho^2(t^\prime)}.\end{equation}
The fluctuations in $q$ ($\Delta q$) and $p$ ($\Delta p$) and the uncertainty product ($\Delta q\Delta p$) in the coherent state $|\alpha,t\rangle$, read
\begin{equation}\label{32}\Delta q_\alpha=\sqrt{\langle q^2\rangle_\alpha-\langle q\rangle_\alpha^2}=\sqrt{\frac{\hbar}{2}}\rho,\end{equation}
\begin{equation}\label{33}\Delta p_\alpha=\sqrt{\langle p^2\rangle_\alpha-\langle p\rangle_\alpha^2}=\sqrt{\frac{\hbar}{2}}\frac{1}{\rho}\left(1+m^2\dot{\rho}^2\rho^2\right)^{1/2}\end{equation}and
\begin{equation}\label{34}\Delta q_\alpha\Delta p_\alpha=\frac{\hbar}{2}\left(1+m^2\dot{\rho}^2\rho^2\right)^{1/2},\end{equation}respectively.
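Equations (\ref{32})--(\ref{34}) can also be verified numerically from the Gaussian profile of the $n=0$ state of Eq. (\ref{18}), whose fluctuations coincide with those of $|\alpha,t\rangle$. A Python quadrature sketch, with illustrative values of $\hbar$, $m$, $\rho$ and $\dot\rho$:

```python
import numpy as np

# Gaussian n = 0 state of Eq. (18): psi ~ exp[(i*A - B) q^2] with
# A = m*rho_dot/(2 hbar rho) and B = 1/(2 hbar rho^2).
hbar, m, rho, rho_dot = 1.0, 1.4, 0.9, 0.6   # illustrative values
A = m * rho_dot / (2 * hbar * rho)           # imaginary part of the exponent
B = 1.0 / (2 * hbar * rho**2)                # real (decaying) part

q = np.linspace(-15, 15, 40001)
dq = q[1] - q[0]
psi = (np.pi * hbar) ** -0.25 * rho ** -0.5 * np.exp((1j * A - B) * q**2)
dpsi = 2 * (1j * A - B) * q * psi            # analytic d(psi)/dq

dens = np.abs(psi) ** 2
assert abs(dens.sum() * dq - 1.0) < 1e-9     # normalization

var_q = (q**2 * dens).sum() * dq                   # <q^2>, since <q> = 0
var_p = hbar**2 * (np.abs(dpsi) ** 2).sum() * dq   # <p^2>, since <p> = 0

assert abs(var_q - hbar * rho**2 / 2) < 1e-9                      # Eq. (32)
assert abs(var_p - hbar * (1 + m**2 * rho_dot**2 * rho**2)
           / (2 * rho**2)) < 1e-9                                  # Eq. (33)
assert abs(np.sqrt(var_q * var_p) - hbar / 2
           * np.sqrt(1 + m**2 * rho_dot**2 * rho**2)) < 1e-9       # Eq. (34)
```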
If $m(t)\dot{\rho}\rho\neq0$, $\Delta q_\alpha\Delta p_\alpha$ is not minimum, indicating that the coherent states $|\alpha,t\rangle$ are not minimum-uncertainty (coherent) states. In fact, the states $|\alpha,t\rangle$ for the time-dependent harmonic oscillator are equivalent to the well-known squeezed states, as pointed out in Refs. \onlinecite{10, 11}.
For $\rho=c$, $\dot{\rho}=0$ and $\Delta q_\alpha\Delta p_\alpha=\frac{\hbar}{2}$, indicating that the states $|\alpha,t\rangle$ are ``true'' coherent states. This is an interesting result since the minimum uncertainty product is assumed to be satisfied only for the time-independent harmonic oscillator, unless the solution of Eq. (\ref{5}) is a constant\cite{1}.
Next, let us analyze the time behavior of $\langle q\rangle_\alpha$, $\langle p\rangle_\alpha$ and the phase diagram $\langle q\rangle_\alpha\times\langle p\rangle_\alpha$. By setting $\alpha(t_0 )=u+iv$ and using Eqs. (\ref{22}) and (\ref{23}), we find
\begin{equation}\label{35}\langle q\rangle_\alpha=\sqrt{2\hbar}\,\rho\left[u\cos{\left(2\theta_0(t)\right)}-v\sin{\left(2\theta_0(t)\right)}\right]\end{equation}
\begin{equation}\label{36}\langle p\rangle_{\alpha}=\sqrt{2\hbar}\left[\left(\frac{v}{\rho}+um\dot{\rho}\right)\cos{\left(2\theta_0(t)\right)}+\left(\frac{u}{\rho}-vm\dot{\rho}\right)\sin{\left(2\theta_0(t)\right)}\right].\end{equation}
The constants $u$ and $v$ are determined from the initial conditions $\langle q(t_0)\rangle_\alpha=q_0$ and $\langle p(t_0)\rangle_\alpha=p_0=m(t_0)v_0$. For $q_0=1$ and $v_0=0$, we find
\begin{equation}\label{37}\langle q\rangle_\alpha=\cos{\left(t_0\omega_0\ln{\frac{t}{t_0}}\right)},\end{equation}
\begin{equation}\label{38}\langle p\rangle_\alpha=-m_0\omega_0\sin{\left(t_0\omega_0\ln{\frac{t}{t_0}}\right)}.\end{equation}
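A quick numerical check of Eqs. (\ref{37}) and (\ref{38}) (illustrative parameters) confirms that the phase-space orbit is the ellipse $\langle q\rangle^2+\langle p\rangle^2/(m_0\omega_0)^2=1$, i.e. the amplitude stays constant while the period stretches:

```python
import numpy as np

# Eqs. (37) and (38) with illustrative parameters: the phase-space orbit
# <q> vs <p> is the ellipse <q>^2 + (<p>/(m0*omega0))^2 = 1, traversed
# with constant amplitude even though the total energy decays like 1/t.
m0, omega0, t0 = 1.0, 10.0, 1.0
t = np.linspace(t0, 50 * t0, 5000)
phase = t0 * omega0 * np.log(t / t0)
q_avg = np.cos(phase)                       # Eq. (37)
p_avg = -m0 * omega0 * np.sin(phase)        # Eq. (38)
radius = q_avg**2 + (p_avg / (m0 * omega0)) ** 2
assert np.allclose(radius, 1.0)
```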
Figures \ref{fig1}(a) and (b) show the time-dependent behavior of $\langle q\rangle_\alpha$ and $\langle p\rangle_\alpha$, respectively. In all plots we used $t_0=1.0$, $\omega_0=10.0$ and $m_0=1.0$. From Fig. \ref{fig1}(a) we observe that the system oscillates back and forth between the classical turning points with an increasing period and constant amplitude. The phase diagram is shown in Fig. \ref{fig1}(c). Even though this system is dissipative (total energy $E=\frac{1}{2t}$), it behaves like the usual time-independent harmonic oscillator ($E=$ constant). This can be seen from the relation $A=\sqrt{\frac{2E}{k}}$, where $A$ is the amplitude of motion. Since $k\propto\frac{1}{t}$ and $E\propto\frac{1}{t}$, $A$ is a constant. As $t$ increases, the frequency $\omega(t)$ decreases ($\propto\frac{1}{t}$) while the period increases ($\propto\frac{t}{\ln t}$), leading to the ``exact'' log-periodic behavior shown in Fig. \ref{fig1}(a).
\begin{figure}[t]
\centering
\includegraphics{104354_0_figure_202875_l9kqn9.eps}
\caption{Plots of (a) $\langle q\rangle_\alpha$, (b) $\langle p\rangle_\alpha$, and (c) the phase diagram $\langle p\rangle_\alpha$ vs $\langle q\rangle_\alpha$. In the plots we used $t_0=1.0$, $q_0=1.0$, $v_0=0.0$, $\omega_0=10.0$ and $m_0=1.0$.}
\label{fig1}
\end{figure}
Pedrosa et al.\cite{12} have combined linear invariants and the LR method to obtain the exact wave function for a particle trapped by oscillating fields, which was written in terms of Mathieu functions. They calculated $\Delta q\Delta p$ and the quantum correlation between $q$ and $p$, defined by $C_{1,1}=\frac{1}{2}\langle\left(qp+pq\right)\rangle-\langle q\rangle\langle p\rangle$\cite{13}. They are related through the equation
\begin{equation}\label{39}\Delta q\Delta p=\frac{\hbar}{2}\sqrt{1+\left(\frac{2}{\hbar}C_{1,1}\right)^2},\end{equation}which shows that $\Delta q\Delta p$ is minimum whenever $C_{1,1}=0$, as happens for $C_{1,1}$ calculated in the coherent state $|\alpha,t\rangle$, i.e., $(C_{1,1})_\alpha=0$. The fact that $C_{1,1}=0$ does not mean that $q$ and $p$ are uncorrelated. In order to probe the correlation between $q$ and $p$, one may study the function $C_{n,m}=\frac{1}{2}\langle\left(q^np^m+p^mq^n\right)\rangle-\langle q^n\rangle\langle p^m\rangle$. For the coherent state $|\alpha,t\rangle$, we find that $(C_{2,2})_\alpha=-\frac{\hbar^2}{2}$, indicating that $q$ and $p$, even when regarded as ``classical'' quantities, are correlated.
The uncertainty product and correlations in the state $\psi_n$ (Eq. (\ref{21})) are more easily calculated using the relation $|\psi_n(q,t)\rangle=e^{i\theta_n(t)}|n,t\rangle$. They are given by
\begin{equation}\label{40}\Delta q_{\psi_n}\Delta p_{\psi_n}=\left(n+\frac{1}{2}\right)\hbar,\end{equation}
\begin{equation}\label{41}\left(C_{1,1}\right)_{\psi_n}=0\end{equation}and
\begin{equation}\label{42}\left(C_{2,2}\right)_{\psi_n}=-\left(n^2+n+\frac{1}{2}\right)\hbar^2.\end{equation}
We notice that Eq. (\ref{39}) is satisfied only for $n=0$. In this case $\psi_0$, given by
\begin{equation}\label{43}\psi_0(q,t)=e^{-i\frac{t_0\omega_0}{2}\ln{\left(\frac{t}{t_0}\right)}}\left(\frac{m_0\omega_0}{\pi\hbar}\right)^{1/4}\exp{\left[-\frac{m_0^2\omega_0^2 q^2}{2\hbar}\right]}\end{equation}is the coordinate representation of the coherent state\cite{14}.
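Since $\rho$ is constant here, the states $|n,t\rangle$ coincide, up to a phase, with ordinary harmonic-oscillator number states, so Eqs. (\ref{40}) and (\ref{41}) can be checked with truncated ladder-operator matrices (an illustrative sketch in units $\hbar=1$):

```python
import numpy as np

N = 60                                      # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator
ad = a.T.conj()                             # creation operator

hbar = 1.0                                  # units with hbar = m0*omega0 = 1
q = np.sqrt(hbar / 2.0) * (a + ad)
p = 1j * np.sqrt(hbar / 2.0) * (ad - a)

uncertainty = []                            # Delta q * Delta p, Eq. (40)
correlation = []                            # (C_{1,1})_{psi_n}, Eq. (41)
for n in range(5):                          # <q> = <p> = 0 in a number state
    dq = np.sqrt((q @ q)[n, n].real)
    dp = np.sqrt((p @ p)[n, n].real)
    c11 = 0.5 * (q @ p + p @ q)[n, n].real
    uncertainty.append(dq * dp)
    correlation.append(c11)
```

The products $\Delta q\,\Delta p$ come out as $(n+\tfrac{1}{2})\hbar$ while $C_{1,1}$ vanishes, as in Eqs. (\ref{40}) and (\ref{41}).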
\subsection{$\bm{m(t)=m_0}$ and $\bm{k(t)=k_0\left(\frac{t_0}{t}\right)^2}$}
In this case Eqs. (\ref{2}) and (\ref{5}) are, respectively, given by
\begin{equation}\label{44}\ddot{q}+\omega_0^2\frac{t_0^2}{t^2}q=0\end{equation}and
\begin{equation}\label{45}\ddot{\rho}+\omega_0^2\frac{t_0^2}{t^2}\rho=\frac{1}{m_0^2\rho^3}.\end{equation}
Following the procedure described in Refs. \onlinecite{7, 8}, we find $\rho=\sqrt{\frac{2}{m_0}}\frac{\sqrt{t}}{\left(4\omega_0^2 t_0^2-1\right)^{1/4}}$ and from Eqs. (\ref{17}) and (\ref{18}) we have
\begin{align}\label{46}\psi_n(q,t)=&e^{-\frac{i}{2}\left(n+\frac{1}{2}\right)\left(4\omega_0^2t_0^2-1\right)^{1/2}\ln\left(\frac{t}{t_0}\right)}\left[\frac{m_0\left(4\omega_0^2t_0^2-1\right)^{1/2}}{\pi\hbar(n!)^22^{2n+1}}\right]^{1/4}\times\frac{1}{t^{1/4}}\nonumber\\ \times&\exp{\left\{\frac{m_0}{4\hbar t}\left[i-\left(4\omega_0^2t_0^2-1\right)^{1/2}\right]q^2\right\}}
\times H_n\left[\left(\frac{m_0}{2\hbar}\right)^{1/2}\frac{\left(4\omega_0^2t_0^2-1\right)^{1/4}}{\sqrt{t}}q\right]\end{align}
The values of $\Delta q\Delta p$ and $C_{1,1}$ in the state $|\psi_n(q,t)\rangle$ are given, respectively, by
\begin{equation}\label{47}\Delta q_{\psi_n}\Delta p_{\psi_n}=\frac{2\omega_0 t_0}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\left(n+\frac{1}{2}\right)\hbar,\end{equation}and
\begin{equation}\label{48}\left(C_{1,1}\right)_{\psi_n}=-\left(\frac{1}{4\omega_0^2t_0^2-1}\right)^{1/2}\left(n+\frac{1}{2}\right)\hbar.\end{equation}
For $n=0$, $\Delta q_{\psi_0}\Delta p_{\psi_0}=\frac{\omega_0 t_0}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\hbar$, and the state
\begin{align}\label{49}\psi_0(q,t)=e^{-\frac{i}{4}\left(4\omega_0^2t_0^2-1\right)^{1/2}\ln{\left(\frac{t}{t_0}\right)}}&\left[\frac{m_0\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2\pi\hbar}\right]^{1/4}\times\frac{1}{t^{1/4}}\nonumber\\
\times&\exp{\left\{\frac{m_0}{4\hbar t}\left[i-\left(4\omega_0^2t_0^2-1\right)^{1/2}\right]q^2\right\}}\end{align}corresponds to the coordinate representation of the squeezed state\cite{14}.
For the sake of comparison with case \textbf{A}, let us discuss the behavior of the classical variables $q$ and $p$ on time, as well as the phase diagram. By solving Eq.(\ref{44}), the solutions for $q$ and $p$ satisfying the initial conditions $q_0=1$ and $v_0=0$ are, respectively, given by
\begin{equation}\label{50}q(t)=\sqrt{\frac{t}{t_0}}\left[\cos{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}-\frac{1}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\sin{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}\right]\end{equation}
and
\begin{equation}\label{51}p(t)=-\sqrt{\frac{t_0}{t}}\frac{2m_0\omega_0^2t_0}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\sin{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}.\end{equation}
Figures \ref{fig2}(a) and \ref{fig2}(b) show the variation of $q$ and $p$ with time, respectively. Unlike case \textbf{A}, where the system oscillates back and forth between the turning points with constant amplitude, here the amplitude of $q$ increases while that of $p$ decreases as time increases. Figure \ref{fig2}(c) shows the phase diagram for this oscillator. Initially at rest, the particle is sped up and then slowed down, indicating that the system is also dissipative. Since $E\propto 1/t$ and $k\propto 1/t^2$, the amplitude $A$ increases as $A\propto \sqrt{t}$. Due to the presence of the factor $\sqrt{t}$ in Eq. (\ref{50}), this oscillator exhibits a pseudo-log-periodic behavior.
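These closed-form expressions can be cross-checked numerically (an illustrative sketch with the parameters of Fig. \ref{fig2}): Eq. (\ref{50}) should satisfy Eq. (\ref{44}) with $q(t_0)=1$ and $\dot q(t_0)=0$, and Eq. (\ref{51}) should coincide with $m_0\dot{q}$:

```python
import numpy as np

t0, w0, m0 = 1.0, 10.0, 1.0
beta = np.sqrt(4.0 * w0**2 * t0**2 - 1.0)

def q(t):   # Eq. (50)
    th = 0.5 * beta * np.log(t / t0)
    return np.sqrt(t / t0) * (np.cos(th) - np.sin(th) / beta)

def p(t):   # Eq. (51)
    th = 0.5 * beta * np.log(t / t0)
    return -np.sqrt(t0 / t) * 2.0 * m0 * w0**2 * t0 / beta * np.sin(th)

t, h = np.linspace(1.5 * t0, 20.0 * t0, 200), 1.0e-5

# Eq. (44): qddot + (w0 t0 / t)^2 q = 0, checked by central differences
qdd = (q(t + h) - 2.0 * q(t) + q(t - h)) / h**2
ode_residual = np.max(np.abs(qdd + (w0 * t0 / t)**2 * q(t)))

# canonical momentum p = m0 * qdot for constant mass
qdot = (q(t + h) - q(t - h)) / (2.0 * h)
p_residual = np.max(np.abs(m0 * qdot - p(t)))
```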
\begin{figure}[t]
\centering
\includegraphics{104354_0_figure_202876_ldkqnd.eps}
\caption{Plots of (a) q, (b) p, and (c) the phase diagram $p$ vs $q$. In the plots we used $t_0=1.0$, $q_0=1.0$, $v_0=0.0$, $\omega_0=10.0$ and $m_0=1.0$.}
\label{fig2}
\end{figure}
\subsection{$\bm{m(t)=m_0\left(\frac{t}{t_0}\right)^2}$ and $\bm{k(t)=k_0}$}
Now Eqs. (\ref{2}) and (\ref{5}) are, respectively, given by
\begin{equation}\label{52}\ddot{q}+\frac{2}{t}\dot{q}+\omega_0^2\frac{t_0^2}{t^2}q=0\end{equation}and
\begin{equation}\label{53}\ddot{\rho}+\frac{2}{t}\dot{\rho}+\omega_0^2\frac{t_0^2}{t^2}\rho=\frac{t_0^4}{m_0^2}\frac{1}{t^4\rho^3}.\end{equation}
Again, by following the procedure described in Refs. \onlinecite{7, 8}, we find $\rho=\sqrt{\frac{2}{m_0}}\frac{t_0}{\left(4\omega_0^2 t_0^2-1\right)^{1/4}}\frac{1}{\sqrt{t}}$ and from Eqs. (\ref{17}) and (\ref{18}) we obtain
\begin{align}\label{54}\psi_n(q,t)=&e^{-\frac{i}{2}\left(n+\frac{1}{2}\right)\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{t_0}\ln\left(\frac{t}{t_0}\right)}\left[\frac{m_0\left(4\omega_0^2t_0^2-1\right)^{1/2}}{\pi\hbar t_0(n!)^22^{2n+1}}\right]^{1/4}\times t^{1/4}\nonumber\\ \times&\exp{\left\{\frac{m_0t}{4\hbar }\left[i-\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{t_0}\right]q^2\right\}}
\times H_n\left[\left(\frac{m_0}{2\hbar t_0}\right)^{1/2}\left(4\omega_0^2t_0^2-1\right)^{1/4}\sqrt{t}q\right]\end{align}
The values of $\Delta q\Delta p$ and $C_{1,1}$ in the state $|\psi_n(q,t)\rangle$ are given, respectively, by
\begin{equation}\label{55}\Delta q_{\psi_n}\Delta p_{\psi_n}=\frac{2\omega_0 t_0}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\left(n+\frac{1}{2}\right)\hbar\end{equation}and
\begin{equation}\label{56}\left(C_{1,1}\right)_{\psi_n}=-\left(\frac{1}{4\omega_0^2t_0^2-1}\right)^{1/2}\left(n+\frac{1}{2}\right)\hbar.\end{equation}
For $n=0$, the coordinate representation of the squeezed state reads
\begin{align}\label{57}\psi_0(q,t)=e^{-\frac{i}{4}\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{t_0}\ln{\left(\frac{t}{t_0}\right)}}&\left[\frac{m_0\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2\pi\hbar t_0^2}\right]^{1/4}\times\frac{1}{t^{1/4}}\nonumber\\
\times&\exp{\left\{\frac{m_0t}{4\hbar }\left[i-\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{t_0}\right]q^2\right\}}.\end{align}
By solving Eq. (\ref{52}) and using the initial conditions $q_0=1$ and $v_0=0$, we find
\begin{equation}\label{58}q(t)=\sqrt{\frac{t_0}{t}}\left[\cos{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}+\frac{1}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\sin{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}\right]\end{equation}and
\begin{equation}\label{59}p(t)=-\sqrt{\frac{t}{t_0}}\frac{2m_0\omega_0^2t_0}{\left(4\omega_0^2t_0^2-1\right)^{1/2}}\sin{\left(\frac{\left(4\omega_0^2t_0^2-1\right)^{1/2}}{2}\ln{\frac{t}{t_0}}\right)}.\end{equation}
\begin{figure}[t]
\centering
\includegraphics{104354_0_figure_202877_lckqnc.eps}
\caption{Plots of (a) $q$, (b) $p$, and (c) the phase diagram $p$ vs $q$. In the plots we used $t_0=1.0$, $q_0=1.0$, $v_0=0.0$, $\omega_0=10.0$ and $m_0=1.0$.}
\label{fig3}
\end{figure}
The time dependence of the classical variables $q$ and $p$ is displayed in Figs. \ref{fig3}(a) and (b), respectively. Despite the oscillating cosine and sine terms in Eqs. (\ref{58}) and (\ref{59}), $q$ and $p$ exhibit the opposite behavior to that found in case \textbf{B}: here the amplitude of $q$ decreases while that of $p$ increases as $t$ increases. Figure \ref{fig3}(c) shows the phase diagram. We observe that the amplitude $A$ decreases as $A\propto 1/\sqrt{t}$. This oscillator also exhibits a pseudo-log-periodic character.
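The pseudo-log-periodic character can also be verified by integrating Eq. (\ref{52}) directly (an illustrative sketch; a fourth-order Runge--Kutta step is assumed): the zeros of $q(t)$ should be equally spaced in $\ln t$, with ratio $t_{k+1}/t_k=\exp\left[2\pi/\left(4\omega_0^2t_0^2-1\right)^{1/2}\right]$, while the envelope of $q$ decays as $1/\sqrt{t}$:

```python
import numpy as np

t0, w0 = 1.0, 10.0
beta = np.sqrt(4.0 * w0**2 * t0**2 - 1.0)

def acc(t, q, v):
    # Eq. (52): qddot = -(2/t) qdot - (w0^2 t0^2 / t^2) q
    return -2.0 * v / t - w0**2 * t0**2 / t**2 * q

q, v, t, h = 1.0, 0.0, t0, 2.0e-4          # q(t0)=1, qdot(t0)=0
ts, qs = [t], [q]
while t < 10.0 * t0:                        # classic RK4 step
    k1q, k1v = v, acc(t, q, v)
    k2q, k2v = v + h/2*k1v, acc(t + h/2, q + h/2*k1q, v + h/2*k1v)
    k3q, k3v = v + h/2*k2v, acc(t + h/2, q + h/2*k2q, v + h/2*k2v)
    k4q, k4v = v + h*k3v, acc(t + h, q + h*k3q, v + h*k3v)
    q += h/6*(k1q + 2*k2q + 2*k3q + k4q)
    v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    t += h
    ts.append(t)
    qs.append(q)
ts, qs = np.array(ts), np.array(qs)

# zeros of q (linear interpolation); consecutive ratios = exp(2 pi / beta)
i = np.where(np.sign(qs[:-1]) != np.sign(qs[1:]))[0]
zeros = ts[i] - qs[i] * (ts[i + 1] - ts[i]) / (qs[i + 1] - qs[i])
ratios = zeros[1:] / zeros[:-1]

envelope = np.abs(qs) * np.sqrt(ts / t0)    # stays of order unity
```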
\section{\label{sec4}CONCLUDING REMARKS}
In this paper we have used a unitary transformation and the LR invariant method in the Schr\"{o}dinger picture to obtain the exact wave functions for oscillators exhibiting either log-periodic or pseudo-log-periodic behavior. It is well known that a challenge in obtaining the exact solution (see Eq. (\ref{18})) of the SE (Eq. (\ref{8})) for the $H(t)$ given in Eq. (\ref{1}) is solving the auxiliary equation for the c-number quantity $\rho$ (see Eq. (\ref{5})). Here we find $\rho$ for each case using the methods described in Refs. \onlinecite{7, 8}.
For case \textbf{A}, we find $\rho=c$ and, as a consequence, the solution for $\psi_n (q,t)$ (see Eq. (\ref{21})), except for the phase factor $e^{-i\left(n+\frac{1}{2}\right)t_0\omega_0\ln{\frac{t}{t_0}}}$, is similar to the well-known wave function of the time-independent harmonic oscillator of mass $m_0$ and frequency $\omega_0$. In Ref. \onlinecite{1}, we observed that for $m(t)=m_0$, $\omega(t)=\omega_0$, and $\rho(t)=\left(\frac{1}{m_0\omega_0}\right)^{1/2}$, which is a particular solution of Eq. (\ref{5}), the wave function obtained also corresponds to that of the time-independent harmonic oscillator. In case \textbf{A}, even with $m\propto t$ and $\omega\propto \frac{1}{t}$, we obtain the same solution for $\rho$ ($\rho=c$), indicating that this oscillator behaves as the harmonic oscillator with constant $m$ and $\omega$.
We have constructed the ``true'' coherent states, $|\alpha,t\rangle$, whose coordinate representation is given by Eq. (\ref{43}). We verified that Eq. (\ref{39}) holds for $|\alpha,t\rangle$. We calculated the quantum fluctuations in the coordinate and momentum as well as the quantum correlations between the coordinate and momentum in the state $\psi_n (q,t)$.
We analyzed the time behavior of $\langle q\rangle_\alpha$ and $\langle p\rangle_\alpha$, as well as the phase diagram $\langle q\rangle_\alpha\times\langle p\rangle_\alpha$ (see Figs. \ref{fig1}(a)--(c)). We observed that $\langle q\rangle_\alpha$ and $\langle p\rangle_\alpha$ exhibit the exact log-periodic behavior, and that the phase diagram indicates, as already anticipated, that the log-periodic oscillator behaves as the classical harmonic oscillator with $m(t)=m_0$ and $\omega(t)=\omega_0$.
For cases \textbf{B} and \textbf{C}, we obtained the wave functions given by Eqs. (\ref{46}) and (\ref{54}), respectively.
\section{Introduction}
A century after Einstein predicted the existence of gravitational waves (GWs),
the Laser Interferometer Gravitational-Wave Observatory (LIGO)
observed the first direct GW signal GW150914
from a merger of two black holes (BHs)
with masses of $36_{-4}^{+5} M_{\odot}$ and $29_{-4}^{+4} M_{\odot}$
and radiated energy $3_{-0.4}^{+0.5} M_{\odot} c^2$ \citep{LIGO_1st}.
This is also the first discovery of a binary BH.
During Advanced LIGO's first observing period (O1),
September 12, 2015 to January 19, 2016 \citep{LIGO_O1},\footnote{
O1 was officially September 18, 2015 to January 12, 2016
before the detection of GW150914.}
the second event GW151226 with masses
$14.2_{-3.7}^{+8.3} M_{\odot}$ and $7.5_{-2.3}^{+2.3} M_{\odot}$
and radiated energy $1.0_{-0.2}^{+0.1} M_{\odot} c^2$
\citep{LIGO_GW151226}
and a candidate LVT151012
with $23_{-6}^{+18} M_{\odot}$, $13_{-5}^{+4} M_{\odot}$
and $1.5_{-0.4}^{+0.3} M_{\odot} c^2$
have also been detected,
and the existence of a population of merging BHs has been established.
These $\sim 2.5$ events give a relatively certain estimate
on the merger rate in the range
$\mathscr{R}_{\rm GW} \sim 9$--$240$ Gpc$^{-3}$ yr$^{-1}$ \citep{LIGO_O1}.
A new era of GW astrophysics has finally opened
and will be driven by a network of LIGO, Virgo, KAGRA, and IndiGO,
and by eLISA and DECIGO satellites in the future
\citep{Sesana16,Kyutoku_Seto16,Nakamura+16b}.
The binary BH mergers are the most luminous events in the universe,
even brighter than gamma-ray bursts (GRBs).
The peak luminosities are $3.6_{-0.4}^{+0.5} \times 10^{56}$ erg s$^{-1}$,
$3.3_{-1.6}^{+0.8} \times 10^{56}$ erg s$^{-1}$
and $3.1_{-1.8}^{+0.8} \times 10^{56}$ erg s$^{-1}$
for GW150914, GW151226 and LVT151012, respectively \citep{LIGO_O1},
which reach $\sim 0.1\%$ of the Planck luminosity
$c^5/G=3.6 \times 10^{59}$ erg s$^{-1}=2.0 \times 10^{5} M_{\odot} c^2$ s$^{-1}$.
Merged BHs also retain huge energy in the spin.
The spin energy is about
\begin{eqnarray}
E_{\rm spin} &=& \left(1-\sqrt{\frac{1+\sqrt{1-a_*^2}}{2}}\right)Mc^2
\sim 1 \times 10^{54}\ {\rm erg} \left(\frac{M}{10 M_\odot}\right),
\end{eqnarray}
where the spin parameter is typically $a_*=a/M \sim 0.7$ after a merger
\citep[e.g.,][]{Zlochower_Lousto15}.
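Plugging in numbers (a quick sketch in cgs units):

```python
import numpy as np

Msun, c = 1.989e33, 2.998e10                  # solar mass (g), speed of light (cm/s)

def E_spin(M, a):
    # extractable rotational energy of a Kerr BH of mass M and spin a*
    return (1.0 - np.sqrt((1.0 + np.sqrt(1.0 - a**2)) / 2.0)) * M * c**2

E = E_spin(10.0 * Msun, 0.7)                  # ~1e54 erg for a 10 Msun BH
```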
Post-merger spinning BHs should also exist in our Galaxy,
carrying a large amount of energy in their spin.
The number of such BHs is estimated as
\begin{eqnarray}
N_{\rm BH} \sim \mathscr{R}_{\rm GW} n_{\rm gal}^{-1} H_0^{-1}
\sim 7 \times 10^{4}\ {\rm galaxy}^{-1}
\left(\frac{\mathscr{R}_{\rm GW}}{70\,{\rm Gpc}^{-3}\,{\rm yr}^{-1}}\right),
\label{eq:NBH}
\end{eqnarray}
where we use $\mathscr{R}_{\rm GW} \sim 70$ Gpc$^{-3}$ yr$^{-1}$ \citep{LIGO_O1},
$n_{\rm gal} \sim 0.01$ Mpc$^{-3}$ is the number density of galaxies,
and $H_0^{-1} \sim 10^{10}$ yr is the Hubble time.
This estimate is applicable
unless the merger rate changes very rapidly
in a time much shorter than the Hubble time.
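The arithmetic behind this estimate can be reproduced directly (an illustrative sketch):

```python
R_gw  = 70.0 / 1.0e3**3      # merger rate: 70 Gpc^-3 yr^-1, in Mpc^-3 yr^-1
n_gal = 0.01                 # number density of galaxies, Mpc^-3
t_H   = 1.0e10               # Hubble time, yr

N_BH = R_gw / n_gal * t_H    # merged BHs per galaxy, ~7e4
```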
Note that, although the large mass in GW150914 suggests
a low-metallicity environment with $Z \lesssim Z_{\odot}/2$
\citep{LIGO_astro16},
our Galaxy had a low-metallicity environment in the past,
and also incorporated low-metallicity galaxies
in the hierarchical structure formation.
The total spin energy stored in the merged BHs in our Galaxy is
\begin{eqnarray}
E_{\rm tot}=N_{\rm BH} E_{\rm spin}
\sim 9 \times 10^{58}\ {\rm erg}
\sim 9 \times 10^{7} E_{\rm SN},
\label{eq:Etot}
\end{eqnarray}
where $E_{\rm SN} \sim 10^{51}$ erg is the kinetic energy of a supernova (SN).
This is comparable to the total energy of SNe that ever happened in our Galaxy,
i.e., $\sim 10^8$ SNe exploded during the Hubble time!
This is a robust lower limit on the total spin energy,
obtained by the GW observations for the first time.
A natural question arises:
How much spin energy is extracted from the merged BHs in our Galaxy?
The spin energy of a BH can be extracted
by a large-scale poloidal magnetic field threading the BH,
i.e., through Blandford-Znajek effect \citep{BZ77},
which is thought to produce a BH jet.
We show that a sufficient magnetic field is
advected by the Bondi-Hoyle accretion from the interstellar medium (ISM)
and the jet power becomes comparable to the accretion power ${\dot M} c^2$,
which is larger than the radiative power of the accretion disk.
By taking into account the distributions of the ISM density,
the BH mass and velocity,
we estimate the luminosity function and the total power of the BH jets.
Based on the estimate of the luminosities and the acceleration energy,
we suggest that the BH jets are potentially the origin of
high energy particles in our Galaxy.
There are enigmatic high-energy sources in our Galaxy,
such as still-unknown PeVatrons accelerating cosmic rays (CRs)
up to the knee energy $\varepsilon_{\rm knee} \sim 3 \times 10^{15}$ eV and beyond,
sources of TeV CR positrons,
and unidentified TeV sources (TeV unIDs)
that are dominant in the very-high-energy gamma-ray sky.
These sources require only a small fraction of the spin energy $E_{\rm tot}$
and could be powered by the BH jets.
Our examination of the BH accretion and jet also suggests that
it is very difficult to detect
an electromagnetic counterpart to a BH merger after a GW event.
In particular, the report of a GRB around the time of GW150914
by the Fermi Gamma-ray Burst Monitor (GBM)
\citep{Connaughton+16}
is most likely irrelevant to the GW event.
This is consistent with a large number of follow-up searches after GW150914
\citep{Ackermann+16,Kasliwal+16,Troja+16,KamLAND16,Adrian-Martinez+16,Tavani+16,Abbott+16,Adriani+16,Evans+16,Palliyaguru+16,Auger+16,Abe+16,Morokuma+16,Evans+16b}.
The organization of this paper is as follows.
In Section~\ref{sec:mechanism},
we discuss the physical mechanism of energy extraction from a spinning BH.
We find that the accretion disk typically results in
the so-called magnetically arrested disk (MAD) state
and the magnetic field extracts the spin energy with the maximum efficiency
for producing a jet.
In Section~\ref{sec:LF},
we calculate the luminosity function of the BH jets
by considering the distributions of
the BH mass, the peculiar velocity, the GW recoil velocity,
and the ISM density.
The luminosity function also gives the total power of the BH jets.
In Section~\ref{sec:obs}, we discuss the connections of the BH jets
with high energy sources in our Galaxy,
such as PeVatrons, CR positron sources, and TeV unIDs.
In Section~\ref{sec:uncertain},
we bracket the uncertainties of our estimate of the total power
within a factor of $10^{\pm 3}$
by taking into account
various effects such as the initial spin, the BH formation scenario,
and the wind feedbacks.
This is much better than before;
the factor was almost $10^{\pm \infty}$ before the GW detections.
In Section~\ref{sec:Fermi},
we show that it is difficult for BHs to keep accretion disks until the merger
that are massive enough to produce a detectable electromagnetic counterpart
for GW150914.
Section~\ref{sec:discuss} is devoted to the summary and discussions.
In Section~\ref{sec:history},
we clarify novel points of our work compared with previous studies.
\section{Extracting spin energy of GW150914-like Galactic BHs}
\label{sec:mechanism}
The spin energy of a BH can be extracted
by a large-scale magnetic field threading the BH ergosphere.
The BH spin twists the magnetic field
and the twisted magnetic field carries energy outward as a Poynting jet.
This is the so-called BZ effect \citep{BZ77,Koide+02}.
Although the BH itself cannot keep the magnetic field
because of the no-hair theorem,
accretion onto the BH can maintain the magnetic field on the BH.
In this section we consider a BH in the ISM
and estimate the luminosity of a BZ jet powered by the BH spin.
For typical parameters,
we find that
the luminosity of a BH jet is comparable to the accretion rate
$L_{j} \approx {\dot M} c^2$,
with the accretion disk in the state of the so-called MAD.
\subsection{Bondi accretion from the ISM}\label{sec:Bondi}
The accretion rate onto a BH from the ISM is given by the Bondi-Hoyle rate
\citep{Hoyle_Lyttleton39,Bondi_Hoyle44,Bondi52},
\begin{eqnarray}
\dot{M}
&=& 4 \pi r_B^2 V \rho
=\frac{4\pi G^2 M^2 n \mu m_u}{V^{3}}
\nonumber\\
&\sim&
5 \times 10^{35} \ {\rm erg}\ {\rm s}^{-1}
\frac{1}{c^2}
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)
\left(\frac{M}{10 M_\odot}\right)^2
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-3}
\nonumber\\
&\sim& 4\times 10^{-4} \frac{{L}_{\rm Edd}}{c^2}
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)
\left(\frac{M}{10 M_\odot}\right)
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-3},
\label{eq:Mdot}
\end{eqnarray}
where $n$ is the number density of the ISM,
$m_u$ is the unified atomic mass unit,
the mean molecular weight is $\mu=1.41$ for the Milky Way abundance
and $\mu=2.82$ for molecular clouds \citep[e.g.,][]{Kauffmann+08},
$M$ is the mass of the merged BH,
$L_{\rm Edd}$ is the Eddington luminosity,
\begin{eqnarray}
r_B = \frac{GM}{V^2}
\sim 1 \times 10^{15}\ {\rm cm}
\left(\frac{M}{10 M_\odot}\right)
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-2}
\end{eqnarray}
is the Bondi radius, and
\begin{eqnarray}
V=\sqrt{c_s^2+v^2+v_{\rm GW}^2}
\end{eqnarray}
includes the (effective) sound speed $c_s$ of the ISM,
the center-of-mass velocity $v$ of the BH before the merger
in the local ISM,
and the recoil velocity $v_{\rm GW}$ due to the GW emission at the merger.
The accretion rate ${\dot M}$ is proportional to $M^2 V^{-3} n$.
The discovery of a massive BH with mass $M \sim 60 M_{\odot}$ in GW150914
significantly increases the estimate of ${\dot M}$,
while the GW recoil tends to reduce it.
The ISM density spans many decades.
Thus we have to consider the distributions of mass, velocity, and density
to estimate the total power in Section~\ref{sec:LF}.
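The fiducial numbers in Equation (\ref{eq:Mdot}) can be reproduced in a few lines (an illustrative cgs sketch; the Eddington luminosity is evaluated for pure hydrogen):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                 # cgs
Msun, m_u, m_p = 1.989e33, 1.661e-24, 1.673e-24
sigma_T = 6.652e-25                       # Thomson cross section, cm^2

M, n, mu, V = 10.0 * Msun, 10.0, 1.41, 1.0e6   # fiducial values; V = 10 km/s

Mdot = 4.0 * np.pi * G**2 * M**2 * n * mu * m_u / V**3   # Bondi-Hoyle rate
L_acc = Mdot * c**2                        # ~5e35 erg/s
r_B = G * M / V**2                         # Bondi radius, ~1e15 cm
L_Edd = 4.0 * np.pi * G * M * m_p * c / sigma_T
edd_ratio = L_acc / L_Edd                  # ~4e-4
```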
\subsection{Formation of an accretion disk and ADAF}\label{sec:disk}
The accreted matter forms an accretion disk for typical parameters
\citep{Fujita+98,Agol_Kamionkowski02}.
The ISM density has a turbulent fluctuation with a Kolmogorov spectrum
$\delta \rho/\rho \sim [L/(6 \times 10^{18}\,{\rm cm})]^{1/3}$
down to $\sim 10^{8}$ cm \citep{Armstrong+95,Draine11}.
As a BH travels in the ISM,
the accreting matter acquires a net specific angular momentum
\begin{eqnarray}
\ell \sim \frac{1}{4}\frac{\Delta \rho}{\rho} V r_B,
\end{eqnarray}
where $\Delta \rho/\rho=\delta \rho/\rho|_{L=2r_B}$
is the density difference across the accretion cylinder.\footnote{
A factor $1/4$ comes from the average over the accretion cylinder,
$\int_0^{r_B} r dr \int_0^{2\pi} d\theta (r \cos \theta)^2
/ r_B^2 \int_0^{r_B} r dr \int_0^{2\pi} d\theta = \frac{1}{4}$.
If a turbulent velocity dominates $V$,
the factor has a fluctuation
depending on the velocity direction.}
By equating this with the Keplerian angular momentum
$\ell_K=\sqrt{GM r_{\rm disk}}$,
we obtain the radius of the resulting accretion disk,
\begin{eqnarray}
\frac{r_{\rm disk}}{r_S} &\sim&
\frac{1}{16}\left(\frac{2 GM/V^2}{6\times 10^{18}\,{\rm cm}}\right)^{2/3}
\frac{c^2}{2V^2}
\nonumber\\
&\sim& 2\times 10^{5}
\left(\frac{M}{10 M_{\odot}}\right)^{2/3}
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-10/3},
\label{eq:rdisk}
\end{eqnarray}
where $r_S=2GM/c^2$ is the Schwarzschild radius.
The disk radius could be decreased if the magnetic braking is effective.
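Evaluating Equation (\ref{eq:rdisk}) for the fiducial parameters (an illustrative sketch):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                 # cgs
Msun = 1.989e33
M, V = 10.0 * Msun, 1.0e6                 # fiducial 10 Msun BH, V = 10 km/s

r_B = G * M / V**2                        # Bondi radius
L_inj = 6.0e18                            # outer scale of the turbulence, cm

# circularization radius in units of the Schwarzschild radius
rdisk_over_rS = (1.0 / 16.0) * (2.0 * r_B / L_inj)**(2.0 / 3.0) \
                * c**2 / (2.0 * V**2)     # ~2e5
```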
The accretion disk most likely forms hot, geometrically-thick accretion flow
such as advection-dominated accretion flow (ADAF)
\citep{Fujita+98}.
The accreted matter is heated and eventually ionized because
the collisional ionization rate is larger than
the accretion rate as well as the recombination rate
for typical parameters (see also Section~\ref{sec:Fermi}).
The accretion rate is much lower than the Eddington rate
as in Equation (\ref{eq:Mdot})
and hence the low density makes the cooling ineffective
\citep[][]{Ichimaru77,Narayan_Yi94,Narayan_Yi95,Kato+08,Yuan_Narayan14}.
The radiated energy from ADAF is much less than the total generated energy
and almost all energy is advected into the BH
(see also Section~\ref{sec:wind}).
For example, the luminosity of bremsstrahlung emission from electrons
is only
$L_{\rm brem}\sim (\alpha_{\rm QED}/\alpha^2)(m_e/m_u)
({\dot M}c^2/L_{\rm Edd}) {\dot M} c^2
\ll {\dot M} c^2$,
where $\alpha_{\rm QED}$ is the fine-structure constant,
$\alpha$ is the viscous parameter,
and $m_e$ is the electron mass.
As shown below, this is much smaller than the jet luminosity.
Thus we concentrate on the jet in this paper
and consider the disk emission in the future papers
\citep{Matsumoto+16}.
A transition to a cold standard disk outside the hot disk is not expected
for typical parameters,
although this is common in BH X-ray binaries \citep[e.g.,][]{Esin+97,Kato+08}.
The reason is that at the initial radius in Equation~(\ref{eq:rdisk}),
the disk is already hot (ionized) and
the maximum accretion rate of the ADAF solution \citep{Abramowicz+95}
is larger than the accretion rate in Equation (\ref{eq:Mdot}),
i.e., cooling is ineffective.
We also do not expect soft X-ray transients
(or X-ray novae) caused by the accumulation of the accreted matter
at some radius
because the thermal instability
due to recombination is absent for the ionized flow \citep[e.g.,][]{Kato+08}.
\subsection{Blandford-Znajek jet from a MAD state}\label{sec:BZ}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig1.eps}
\end{center}
\caption{
Schematic picture of a Blandford-Znajek jet from a spinning BH
that is accreting from the ISM.
}
\label{fig:BZ}
\end{figure}
The accretion of the ISM also drags magnetic fields into the BH
(see Figure~\ref{fig:BZ}).
The magnetic fields are well frozen in the accreting fluid
because the loss time of the magnetic flux in the ISM is much longer
than the accretion time \citep{Nakano+02}.
The formed disk is also thick, being able to
advect the magnetic flux inward \citep{Lubow+94,Cao11}.
The coherent length of the magnetic field in the ISM is
much larger than the Bondi radius,
roughly the scale of energy injection by SNe and stellar winds
$\sim 1$--$10$ pc \citep{Han+04}.
Then the magnetic flux conservation implies
the magnetic field strength on the horizon
\begin{eqnarray}
B_H \sim \left(\frac{r_B}{r_H}\right)^2 B_{\rm ISM},
\label{eq:flux_cons}
\end{eqnarray}
where $B_{\rm ISM}$ is the magnetic field strength in the ISM,
and $r_H=\frac{1}{2}\left(1+\sqrt{1-a_*^2}\right) r_S$
is the radius of the BH horizon.
On the other hand, for a given accretion rate,
there is a maximum strength of the magnetic field on the horizon,
\begin{eqnarray}
B_H &\sim & \left.\sqrt{\frac{4GM{\dot M}}{r^3 v_r}}\right|_{r=r_H}
\nonumber\\
&\sim & 4 \times 10^{7}\ {\rm G}
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)^{1/2}
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-3/2},
\label{eq:B_MAD}
\end{eqnarray}
because the pressure of the magnetic field,
\begin{eqnarray}
p_B=\frac{B^2}{8\pi},
\end{eqnarray}
cannot exceed the ram pressure of the accreting matter,
\begin{eqnarray}
p_{a} = \frac{GM\Sigma}{r^2}
\sim \frac{GM {\dot M}}{2\pi r^3 v_r},
\end{eqnarray}
where $\Sigma={\dot M}/2\pi r v_r$ is the surface density,
$v_r \equiv \epsilon v_{\rm ff}$ is the radial velocity,
$v_{\rm ff}=\sqrt{3GM/4\pi r}$ is the free-fall velocity,
and $\epsilon \sim 0.05$ is suggested
by the numerical simulations and observations
\citep{Tchekhovskoy+11,Zamaninasab+14}.
Although accumulation of the magnetic flux with the same polarity
makes a magnetic barrier \citep{BR76},
the accretion continues through the magnetic barrier
via magnetic interchange instability \citep[e.g.,][]{Arons_Lea76,McKinney+12}.
Such a magnetically-dominated state is the so-called MAD
\citep{BR76,Narayan+03,Tchekhovskoy+11}.
The MAD state is realized
if $B_H$ in Equation (\ref{eq:flux_cons}) is larger than that in Equation (\ref{eq:B_MAD}), i.e.,
\begin{eqnarray}
B_{\rm ISM} &>& B_{\rm crit}
\equiv \left(\frac{r_H}{r_B}\right)^2
\left.\sqrt{\frac{4GM{\dot M}}{r^3 v_r}}\right|_{r=r_H}
\nonumber\\
&\sim & 1 \times 10^{-10}\,{\rm G}
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{5/2}
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)^{1/2}.
\end{eqnarray}
This is usually satisfied for typical $B_{\rm ISM} \sim 3 \mu$G.
Thus the formed disk is likely MAD.
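Both field strengths can be checked numerically (an order-of-magnitude sketch; the exact prefactors depend on the assumed $a_*$ through $r_H$):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                 # cgs
Msun, m_u = 1.989e33, 1.661e-24

M, n, mu, V = 10.0 * Msun, 10.0, 1.41, 1.0e6   # fiducial values
a, eps = 0.7, 0.05                             # spin; v_r = eps * v_ff

Mdot = 4.0 * np.pi * G**2 * M**2 * n * mu * m_u / V**3
r_S = 2.0 * G * M / c**2
r_H = 0.5 * (1.0 + np.sqrt(1.0 - a**2)) * r_S
r_B = G * M / V**2

v_r = eps * np.sqrt(3.0 * G * M / (4.0 * np.pi * r_H))
B_H = np.sqrt(4.0 * G * M * Mdot / (r_H**3 * v_r))   # MAD field, ~1e7 G
B_crit = (r_H / r_B)**2 * B_H                        # MAD threshold, ~1e-10 G
```

Since $B_{\rm crit}$ is far below the typical $B_{\rm ISM}\sim 3\,\mu$G, the MAD condition is comfortably satisfied.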
A spinning BH immersed in large-scale poloidal magnetic fields
releases energy through the BZ effect
with a Poynting luminosity
\begin{eqnarray}
L_{j} \approx \frac{\kappa}{4\pi c} \Omega_H^2 \Psi_{\rm BH}^2,
\label{eq:LBZ}
\end{eqnarray}
where $\kappa \approx 0.05$ \citep{Tchekhovskoy+11},
$\Omega_H=a_*c/2r_H$ is the angular frequency of the BH,
and $\Psi_{\rm BH} \sim \pi r_H^2 B_H$ is a magnetic flux on the BH.
For a MAD state $p_B \sim p_a$, the BZ luminosity
is calculated as
\begin{eqnarray}
L_{j} \sim \left(\frac{\kappa}{\epsilon}
\sqrt{\frac{\pi^3}{12}\frac{r_S}{2r_H}}\right)
a_*^2 {\dot M} c^2
\approx {\dot M} c^2
\label{eq:LBZ=Mdot}
\end{eqnarray}
for a typical spin parameter after the merger $a_*\approx 0.7$.
In the following we will use $L_{j} \approx {\dot M} c^2$
to estimate the jet luminosity of the merged BHs.\footnote{
We have confirmed that the results are almost similar even if we use
$L_{j} \approx a_*^2 {\dot M} c^2$.
}
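Evaluating the prefactor of Equation (\ref{eq:LBZ=Mdot}) (an illustrative sketch):

```python
import numpy as np

kappa, eps, a = 0.05, 0.05, 0.7
rH_over_rS = 0.5 * (1.0 + np.sqrt(1.0 - a**2))   # r_H / r_S for a* = 0.7

# prefactor of Mdot c^2 in the MAD Blandford-Znajek luminosity
f = (kappa / eps) * np.sqrt(np.pi**3 / 12.0 * 0.5 / rH_over_rS) * a**2
```

The prefactor is of order unity, confirming $L_j\approx{\dot M}c^2$ for $a_*\approx 0.7$.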
Note that the net angular momentum direction of the accretion flow
changes on a timescale of crossing the density fluctuation
$t_a \sim r_B/V \sim 40\,{\rm yr}
(M/10M_{\odot}) (V/10\,{\rm km}\,{\rm s}^{-1})^{-3}$.
However the angular momentum vector near the BH
is forced to align with the BH spin
by the Bardeen-Petterson effect \citep{Bardeen_Petterson75}.
In addition, although the direction of the poloidal magnetic fields
is generally different from the BH spin direction,
the magneto-spin alignment is also realized by
the frame-dragging effect \citep{McKinney+13}.
Therefore we can consider that
the direction of the jet is the same as that of the BH spin.
\section{Luminosity function of GW150914-like Galactic BH jets}\label{sec:LF}
Since the accretion rate depends on the combination $n M^2 V^{-3}$, which spans many decades,
we calculate the luminosity function of jets from GW150914-like merged BHs
in our Galaxy as
\begin{eqnarray}
\frac{dN}{d{\dot{M}}}
&=&
N_{\rm BH} \int dm_1\, \frac{dp(m_1)}{dm_1}
\int dm_2\, \frac{dp(m_2|m_1)}{dm_2}
\int dv\, \frac{df(v)}{dv}
\nonumber\\
&\times&
\int dn\, \frac{d\xi(n)}{dn}
h(m_1,m_2,v)
\delta\left[\dot{M}(n, m_1, m_2, v)-\dot{M}\right],
\label{eq:LF1}
\end{eqnarray}
where $dp(m_1)/dm_1$ and $dp(m_2|m_1)/dm_2$ are the distributions of BH masses
(Section~\ref{sec:mass}),
$df(v)/dv$ is the distribution of the pre-merger velocity
(Section~\ref{sec:velocity}),
$d\xi(n)/dn$ is the distribution of the ISM density
(Section~\ref{sec:density}),
and $h(m_1,m_2,v)$ is the correction factor
due to the scale heights of the ISM phases and BH distributions
(Section~\ref{sec:height}).
First, the delta function can be integrated over $v$ analytically as
\begin{eqnarray}
\frac{dN}{d{\dot M}}
&=&
N_{\rm BH} \int dm_1\, \frac{dp(m_1)}{dm_1}
\int dm_2\, \frac{dp(m_2|m_1)}{dm_2}
\int dn\, \frac{d\xi(n)}{dn}
\nonumber\\
&\times&
h(m_1,m_2,v_0)
\frac{df(v_0)}{dv}
\frac{V_{v=v_0}^2}{3 v_0 {\dot M}},
\end{eqnarray}
where $v_0^2\equiv (4\pi G^2 M^2 n \mu m_u/{\dot M})^{2/3}-c_s^2-v_{\rm GW}^2$
should be positive; otherwise the integrand vanishes.
The other integrals are computed numerically.
We adopt $N_{\rm BH} \sim 7 \times 10^4$ BHs galaxy$^{-1}$ as a fiducial value,
corresponding to the GW event rate
$\mathscr{R}_{\rm GW} \sim 70$ Gpc$^{-3}$ yr$^{-1}$ \citep{LIGO_O1}
in Equation~(\ref{eq:NBH}).
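The Jacobian $|dv/d\dot{M}|=V^2/(3v\dot{M})$ used in this integration can be verified by finite differences (an illustrative sketch; the values of $c_s$ and $v_{\rm GW}$ are arbitrary):

```python
import numpy as np

G, Msun, m_u = 6.674e-8, 1.989e33, 1.661e-24     # cgs
M, n, mu = 10.0 * Msun, 10.0, 1.41
cs, v_gw = 3.0e5, 5.0e6                          # illustrative c_s and v_GW

K = 4.0 * np.pi * G**2 * M**2 * n * mu * m_u     # so that Mdot = K / V^3

def Mdot(v):
    return K / (cs**2 + v**2 + v_gw**2)**1.5

v, h = 2.0e6, 1.0
V2 = cs**2 + v**2 + v_gw**2

jac_num = 1.0 / abs((Mdot(v + h) - Mdot(v - h)) / (2.0 * h))
jac_ana = V2 / (3.0 * v * Mdot(v))               # |dv/dMdot| used in the text
```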
\subsection{Mass function}\label{sec:mass}
We assume a Salpeter-like mass function for the primary BH,
\begin{eqnarray}
\frac{dp(m_1)}{dm_1} = C m_1^{-\gamma},
\end{eqnarray}
with a uniform distribution of the secondary mass,
\begin{eqnarray}
\frac{dp(m_2|m_1)}{dm_2} = \frac{1}{m_1-M_{\min}},
\end{eqnarray}
where $\gamma=2.35$,
$M_{\min} \le m_2 \le m_1 \le M_{\max}$,
$M_{\min}=5M_{\odot}$,
$M_{\max} = 50 M_\odot$, and
$C=(\gamma-1)/(M_{\min}^{1-\gamma}-M_{\max}^{1-\gamma})$.
Such mass functions are inferred from the observations of massive stars
\citep{Sana+12,Kobulnicky+14}.
Similar mass functions\footnote{
LIGO imposes $m_1 + m_2 < 100M_{\odot}$ instead of $M_{\max} = 50 M_\odot$.
} are adopted
by the analysis of LIGO O1 data \citep{LIGO_O1},
and are consistent with the GW observations.
Note that the total luminosity is dominated by heavy masses for $\gamma<3$.
In this respect, GW150914 is crucial: it raises the maximum mass $M_{\max}$,
and hence the expected luminosity, beyond previous estimates
(cf. \citet{Agol_Kamionkowski02}, who adopted $M_{\max}=13 M_{\odot}$
and $\gamma=0.35$).
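The normalization $C$ and the heavy-mass dominance for $\gamma<3$ can be verified numerically. The following Python sketch is our own illustrative check; the integration grid and the $25\,M_\odot$ split are arbitrary choices, not part of the model:

```python
# Numerical check of the adopted BH mass function (paper's fiducial values).
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (avoids the NumPy 2.0 trapz rename)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

gamma, M_min, M_max = 2.35, 5.0, 50.0   # Salpeter-like index; masses in Msun
C = (gamma - 1.0) / (M_min**(1 - gamma) - M_max**(1 - gamma))

m = np.linspace(M_min, M_max, 200001)
p = C * m**(-gamma)
norm = trapezoid(p, m)                  # should integrate to ~1

# The accretion power scales as M^2, so weight the mass function by m^2:
heavy = m > 25.0
frac_number = trapezoid(p[heavy], m[heavy])   # few % of BHs by number
frac_power = trapezoid((p * m**2)[heavy], m[heavy]) / trapezoid(p * m**2, m)
print(norm, frac_number, frac_power)    # heavy BHs dominate the m^2 weight
```

The top half of the mass range holds only a few per cent of the BHs by number but nearly half of the $M^2$-weighted power, illustrating why raising $M_{\max}$ matters.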
\subsection{Velocity distribution before a merger}\label{sec:velocity}
The velocity distribution for GW150914-like BHs before mergers is described by
a Maxwell-Boltzmann distribution,
\begin{eqnarray}
\frac{df(v)}{dv}=\sqrt{\frac{2}{\pi}} \frac{v^2}{\sigma_v^3}
\exp \left(-\frac{v^2}{2 \sigma_v^2}\right),
\label{eq:velocity}
\end{eqnarray}
where an isotropic Gaussian approximation
is sufficient for our order-of-magnitude estimates.
As a fiducial value, we take the velocity dispersion $\sigma_v=40$ km s$^{-1}$
by considering the isolated binary formation scenario.
From a theoretical point of view, massive star progenitors are born
from molecular clouds and their velocity dispersion is initially low
$\sigma_v \sim 10$ km s$^{-1}$ \citep{Binney_Merrifield98}.
Unless the BH formation is associated with an exceptionally large kick,
e.g., due to asymmetric mass ejection,
the resulting BHs also have low velocities.
If the kick velocity is inversely proportional to the mass
following the momentum conservation,
the kick velocity of neutron stars implies
$\sigma_v \sim 200$ km s$^{-1} (1.4 M_\odot/M)
\sim 30$ km s$^{-1} (M/10M_{\odot})^{-1}$.
Older stars tend to have larger velocity dispersion and
$\sigma_v=40$ km s$^{-1}$ is reasonable
for progenitors with metallicity $Z \lesssim 0.5 Z_\odot$
\citep{Binney_Merrifield98}.
From an observational point of view,
the rms distance $\sim 410$ pc from the Galactic plane
for BH low-mass X-ray binaries,
corresponding to a scale height of $290$ pc,
suggests a velocity dispersion of $\sigma_v \sim 40$ km s$^{-1}$
\citep{White_vanParadijs96}.
Although there are exceptions such as
GRO 1655-40 with a peculiar velocity $v \sim -114$ km s$^{-1}$
\citep{Brandt+95,Mirabel+02}
and XTE J1118+480 with $v \sim 145$ km s$^{-1}$ \citep{Mirabel+01},
two populations likely exist with low and high kick velocities,
similarly to neutron stars \citep{Cordes_Chernoff98,Pfahl+02}.
On the other hand, these observations are not for high-mass systems.
In addition, these estimates are subject to systematic errors in the distance
\citep{Repetto+12}.
The most reliable estimate is based on the astrometric observations
\citep{Miller-Jones14}.
Although there is only one sample for a high-mass system,
the BH high-mass X-ray binary Cygnus X-1 has a relatively low proper motion
$\sim 20$ km s$^{-1}$
\citep{Chevalier_Ilovaisky98,Mirabel_Rodrigues03,Reid+11}.
We discuss a high-velocity case $\sigma_v=200$ km s$^{-1}$
later in Section~\ref{sec:scenario}.
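As a quick consistency check of Equation~(\ref{eq:velocity}) and of the momentum-conservation kick scaling quoted above, the following Python sketch (our own illustration; the grid extent is arbitrary) integrates the speed distribution numerically:

```python
# Check that Eq. (eq:velocity) is a normalized Maxwell-Boltzmann speed
# distribution, plus the momentum-conservation kick scaling from the text.
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

sigma_v = 40.0                          # km/s, fiducial dispersion
v = np.linspace(0.0, 20 * sigma_v, 200001)
f = np.sqrt(2 / np.pi) * v**2 / sigma_v**3 * np.exp(-v**2 / (2 * sigma_v**2))

norm = trapezoid(f, v)                  # -> 1
v_mean = trapezoid(v * f, v)            # = sqrt(8/pi) sigma_v ~ 64 km/s

v_kick = 200.0 * (1.4 / 10.0)           # NS kick scaled to a 10 Msun BH
print(norm, v_mean, v_kick)             # ~1, ~64 km/s, ~28 km/s
```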
\subsection{ISM density}\label{sec:density}
\begin{table*}[t]
\centering
\caption{Five ISM phases.
The density distribution from $n_1$ to $n_2$ with an index $\beta$,
the volume filling fraction $\xi_0$,
the (effective) sound velocity $c_s$
(including the turbulent velocity for cold phases),
and the scale height of the disk $H_d$ are shown.
}
\label{tab:ISM}
\begin{tabular}{lllllll} \hline
Phase & $n_1$ [cm$^{-3}$] & $n_2$ [cm$^{-3}$] & $\beta$ & $\xi_0$ & $c_s$ [km s$^{-1}$] & $H_d$ \\ \hline \hline
Molecular clouds & $10^2$ & $10^5$ & $2.8$ & $10^{-3}$ & 10 & 75 pc \\
Cold ${\rm H}_{\rm I}$ & $10$ & $10^2$ & $3.8$ & $0.04$ & 10 & 150 pc \\
Warm ${\rm H}_{\rm I}$ & 0.3 & $-$ & $-$ & 0.35 & 10 & 0.5 kpc \\
Warm ${\rm H}_{\rm II}$ & 0.15 & $-$ & $-$ & 0.2 & 10 & 1 kpc \\
Hot ${\rm H}_{\rm II}$ & 0.002 & $-$ & $-$ & 0.4 & 150 & 3 kpc \\ \hline
\end{tabular}
\end{table*}
We consider five phases of the ISM as listed in Table~\ref{tab:ISM};
the molecular clouds consisting mostly of H$_2$,
the cold neutral medium consisting of H$_{\rm I}$ clouds (cold H$_{\rm I}$),
the warm neutral medium in thermal equilibrium
with cold H$_{\rm I}$ (warm H$_{\rm I}$),
the warm ionized medium (warm H$_{\rm II}$),
and the hot ionized medium (hot H$_{\rm II}$)
\citep{Bland-Hawthorn_Reynolds00,Ferriere01,Heyer_Dame15,Inutsuka+15}.
For each phase, we use the probability distribution of the number density,
\begin{eqnarray}
\frac{d\xi(n)}{dn} = D \xi_0 n^{-\beta},\quad (n_1<n<n_2)
\label{eq:density}
\end{eqnarray}
where $n_1$, $n_2$ and $\beta$ are given in Table~\ref{tab:ISM}
\citep{Berkhuijsen99},
$D=(\beta-1)/(n_1^{1-\beta}-n_2^{1-\beta})$,
and $\xi_0 = \int \frac{d\xi}{dn} dn$ is the volume filling fraction
\citep{Scoville_Sanders87,Clemens+88,Agol_Kamionkowski02}.\footnote{
For the cases without $n_2$ in Table~\ref{tab:ISM}, we use a delta function.}
Each phase has its scale height $H_d$ in the Galactic disk.
We assume that the hot H$_{\rm II}$ phase has a sound velocity $c_s=150$ km s$^{-1}$
corresponding to a temperature $T \sim 10^{6}$ K,
while the other phases have an effective $c_s \sim 10$ km s$^{-1}$
(corresponding to $T \sim 2 \times 10^{4}$ K)
because these phases (even those with $T < 2 \times 10^{4}$ K)
also have a turbulent velocity $\sim 10$ km s$^{-1}$,
keeping them in approximate pressure balance with each other.
The parameters in Table~\ref{tab:ISM} are similar to those in
\citet{Agol_Kamionkowski02}.
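The normalization $D$ in Equation~(\ref{eq:density}) can be checked numerically; the sketch below (our own check, using the two phases of Table~\ref{tab:ISM} that have a density range) verifies that the power law integrates to the filling fraction $\xi_0$:

```python
# Verify D = (beta-1)/(n1^(1-beta) - n2^(1-beta)) normalizes Eq. (density)
# to the volume filling fraction xi_0 (values from Table ISM).
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

phases = {"molecular": (1e2, 1e5, 2.8, 1e-3),
          "cold_HI":   (1e1, 1e2, 3.8, 0.04)}   # (n1, n2, beta, xi_0)

ratios = {}
for name, (n1, n2, beta, xi0) in phases.items():
    D = (beta - 1.0) / (n1**(1 - beta) - n2**(1 - beta))
    n = np.logspace(np.log10(n1), np.log10(n2), 200001)
    ratios[name] = trapezoid(D * xi0 * n**(-beta), n) / xi0   # -> 1
print(ratios)
```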
\subsection{Scale height}\label{sec:height}
The BHs have their scale height $H(v_z)$ in the Galactic disk.
Each phase of the ISM has also its own scale height $H_d$
in Table~\ref{tab:ISM}.
Then the number of BHs in each phase is corrected by a factor
\begin{eqnarray}
h(m_1,m_2,v)=\min\left[1,\frac{H_d}{H(v_z)}\right].
\end{eqnarray}
For simplicity, we make a one-dimensional analysis of the vertical structure,
neglecting the coupling of the vertical and horizontal motions.
The scale height $H\left(v_z\right)$
is determined by the velocity in the $z$-direction,
\begin{eqnarray}
\frac{1}{2}v_z^2=\Phi_z\left[H(v_z)\right],
\end{eqnarray}
where the $z$-component of the velocity is $v_z^2=\frac{1}{3}(v^2+v_{\rm GW}^2)$
and the gravitational potential in the $z$-direction is
\begin{eqnarray}
\frac{\Phi_z(z)}{2\pi G}=K \left(\sqrt{z^2+Z^2}-Z\right)+Fz^2,
\end{eqnarray}
where $Z \sim 180$ pc, $K=48 M_\odot$ pc$^{-2}$, and $F=0.01 M_\odot$ pc$^{-3}$
\citep{Kuijken_Gilmore89a,Kuijken_Gilmore89b}.
This simple model is sufficient for our order-of-magnitude estimates.
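The implicit equation $\frac{1}{2}v_z^2=\Phi_z[H(v_z)]$ is easy to solve numerically. The following minimal sketch (our own; the bisection bracket is an arbitrary choice, and $G$ is expressed in pc (km s$^{-1}$)$^2\,M_\odot^{-1}$) recovers the scale heights used later:

```python
# Solve (1/2) v_z^2 = Phi_z(H) for the BH scale height H by bisection,
# with the Kuijken & Gilmore potential parameters quoted above.
import math

G = 4.301e-3                  # pc (km/s)^2 Msun^-1
Z, K, F = 180.0, 48.0, 0.01   # pc, Msun pc^-2, Msun pc^-3

def phi_z(z):                 # vertical potential in (km/s)^2
    return 2 * math.pi * G * (K * (math.sqrt(z**2 + Z**2) - Z) + F * z**2)

def scale_height(v, v_gw=0.0, lo=1e-3, hi=1e5):
    vz2 = (v**2 + v_gw**2) / 3.0        # z-component of the velocity squared
    for _ in range(200):                # bisection on phi_z(H) = vz2/2
        mid = 0.5 * (lo + hi)
        if phi_z(mid) < 0.5 * vz2:
            lo = mid
        else:
            hi = mid
    return mid

print(scale_height(40.0))          # ~3e2 pc for the fiducial sigma_v
print(scale_height(40.0, 100.0))   # ~1.3e3 pc with a ~100 km/s GW recoil
```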
\subsection{GW recoil velocity}
Merged BHs receive a recoil due to the anisotropic GW emission
\citep{Bonnor_Rotenberg61,Peres62,Bekenstein73}.
GWs carry linear momentum if two merging BHs have different masses and/or finite spins.
Fitting formulas for the recoil velocity are obtained
by using numerical simulations in the post-Newtonian-inspired forms
\citep{Zlochower_Lousto15,Campanelli+07,Baker+07}.
The recoil velocity for a merger of non-spinning BHs is well approximated by
\begin{eqnarray}
v_{\rm GW} &=& A \eta^2 \sqrt{1-4\eta} \left(1+B \eta\right),
\end{eqnarray}
where
$A=1.20 \times 10^{4}$ km s$^{-1}$, $B=-0.93$ \citep{Gonzalez+07,Fitchett83},
and $\eta={m_1 m_2}/{(m_1+m_2)^2}$ is the symmetric mass ratio.
The maximum value is $v_{\rm GW} \sim 175$ km s$^{-1}$ for $\eta=0.195$.
GW150914 has $\eta \sim 0.247$ and hence $v_{\rm GW} \sim 61$ km s$^{-1}$,
and GW151226 has $\eta \sim 0.226$ and hence $v_{\rm GW} \sim 150$ km s$^{-1}$.
Although we have to extrapolate the formula to small $\eta$,
this does not significantly affect our results.
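The recoil values quoted above follow directly from the fitting formula; the short Python check below evaluates it at the $\eta$ values given in the text:

```python
# Check of the non-spinning GW recoil fitting formula against the
# quoted values (eta taken directly from the text).
A, B = 1.20e4, -0.93     # km/s, dimensionless

def v_recoil(eta):
    return A * eta**2 * (1 - 4 * eta) ** 0.5 * (1 + B * eta)

print(v_recoil(0.195))   # maximum, ~175 km/s
print(v_recoil(0.247))   # GW150914, ~62 km/s (quoted as ~61)
print(v_recoil(0.226))   # GW151226, ~150 km/s
```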
We do not consider the spin-induced recoil
because we are now assuming that the pre-merger spin is low
to make a conservative estimate on the released energy from BHs.
If the pre-merger spin is high,
the pre-merger BHs, which are much more abundant than the merged BHs,
can release energy through the BZ mechanism
even before the merger without the GW recoil.
This case will be discussed in Section~\ref{sec:spin}.
Current observations of GWs show
that the primary BH has a spin of $<0.7$ at 90\% confidence
with no evidence for spins being both large and strongly aligned.
For GW151226, the effective spin parameter is $0.21^{+0.20}_{-0.10}$
and may imply spinning BHs \citep{LIGO_O1}.
For reference, if the spin parameters parallel to the orbital axis are
$a_{*2\parallel} \sim 0.2$ and $a_{*1\parallel} \sim 0$
with the same masses $\eta \sim 1/4$,
the recoil velocity is about
$v_{\rm GW} \sim 40$ km s$^{-1}$,
while it is $v_{\rm GW} \sim 260$ km s$^{-1}$
if the in-plane spins are $a_{*2\perp} \sim 0.2$ and $a_{*1\perp} \sim 0$
with the same masses $\eta \sim 1/4$ \citep{Zlochower_Lousto15}.
\subsection{Results of the luminosity function}
Figure~\ref{fig:LF1} shows the luminosity function of
the BH jets from accreting BHs in our Galaxy
for the fiducial case (see Table~\ref{tab:model} for the other cases),
calculated from Equation~(\ref{eq:LF1}).
Each line corresponds to the ISM phase in which the BHs reside.
As the accretion rate is proportional to the ISM density
in Equation~(\ref{eq:Mdot}),
the jet luminosity is brighter for BHs in denser ISM such as molecular clouds.
On the other hand, brighter jets are rarer
because the volume filling fraction
of denser medium is smaller in the ISM
as in Table~\ref{tab:ISM}.
We can find that the brightest sources in our Galaxy
(with the number $dN/d\,{\rm log}\,{\dot M} \sim 1$)
have $L_{j} \sim 10^{36}$ erg s$^{-1}$
mostly residing in the cold H$_{\rm I}$,
while fainter sources are more abundant.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig2.eps}
\end{center}
\caption{
Luminosity function of the BH jets
from GW150914-like merged BHs in our Galaxy
for the isolated binary formation scenario
with the velocity dispersion $\sigma_v=40$ km s$^{-1}$,
calculated from Equation~(\ref{eq:LF1}).
Each line corresponds to the ISM phase in which the BHs reside
(see Table~\ref{tab:ISM}).
We can see that the most luminous source in our Galaxy has
$L_{j} \sim 10^{36}$ erg s$^{-1}$ for this scenario.
Dotted lines are calculated by setting the GW recoil velocity $v_{\rm GW}=0$.
The upper horizontal axis is the maximum energy
of accelerated particles allowed by the Hillas condition
for a given luminosity with the charge of accelerated particles $Z=1$
in Equation~(\ref{eq:emax}).
}
\label{fig:LF1}
\end{figure}
The GW recoil effect reduces the luminosity by an order of magnitude
as we can see from the dotted lines,
which are calculated by setting $v_{\rm GW}=0$.
This reduction is approximately determined
by the $V^{-3}$ dependence of the accretion rate in Equation~(\ref{eq:Mdot}) as
$\sim (v_{\rm GW}/\sigma_v)^{-3}
\sim (100\,{\rm km}\,{\rm s}^{-1}/40\,{\rm km}\,{\rm s}^{-1})^{-3}
\sim 0.06$.
Note also that the luminosity function for the hot ${\rm H}_{\rm II}$ phase has a peak
because this phase has a large $c_s$ and so $V \sim c_s$ has little dispersion.
Figure~\ref{fig:power} is the same as Figure~\ref{fig:LF1} but
with the vertical axis multiplied by $L_{j} \sim {\dot M}c^2$.
This makes it clear that most of the energy is generated
by BHs in the cold H$_{\rm I}$ medium.
We can read the total power,
\begin{eqnarray}
P_{\rm tot}=\int L_{j} \frac{dN}{d{\dot M}} d{\dot M}
\sim 10^{37}\ {\rm erg}\ {\rm s}^{-1}
\left(\frac{N_{\rm BH}}{7 \times 10^{4}}\right),
\label{eq:budget}
\end{eqnarray}
which is very roughly derived by
$P_{\rm tot} \sim N_{\rm BH} \times \xi_0^{{\rm H}_{\rm I}}
\times {\dot M}(10\,{\rm cm}^{-3}, 5 M_{\odot}, 5 M_{\odot}, 100\,{\rm km}\,{\rm s}^{-1}) c^2
\times (M_{\max}/M_{\min})^{3-\gamma}
\sim 7 \times 10^{4} \times 0.04 \times 5 \times 10^{32}\,{\rm erg}\,{\rm s}^{-1}
\times 4.5
\sim 0.6 \times 10^{37}\,{\rm erg}\,{\rm s}^{-1}$.
Note that the velocity dependences of $\dot M$ and $f(v)$ cancel with each other
and the low-velocity BHs have a smaller scale height than the cold H$_{\rm I}$.
The total power is $\sim 3 \times 10^{-5}$ of that of SN explosions
${E_{\rm SN}}/{100\,{\rm yr}}
\sim 3 \times 10^{41}\ {\rm erg}\ {\rm s}^{-1}$.
This is small but comparable to the required power
for some high-energy particles in our Galaxy.
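The rough estimate of Equation~(\ref{eq:budget}) can be reproduced with a few lines of Python; the single accretion-rate value $5\times10^{32}$ erg s$^{-1}$ is the one quoted in the text for $(n, m_1, m_2, V)=(10\,{\rm cm}^{-3}, 5\,M_\odot, 5\,M_\odot, 100\,{\rm km\,s}^{-1})$:

```python
# Back-of-envelope reproduction of Eq. (budget):
# P_tot ~ N_BH * xi_0(cold HI) * Mdot c^2 * (M_max/M_min)^(3-gamma).
N_BH = 7e4
xi0_cold_HI = 0.04
Mdot_c2 = 5e32                               # erg/s, value quoted in the text
gamma, M_min, M_max = 2.35, 5.0, 50.0
mass_boost = (M_max / M_min) ** (3 - gamma)  # ~4.5

P_tot = N_BH * xi0_cold_HI * Mdot_c2 * mass_boost
P_SN = 3e41                                  # erg/s, E_SN / 100 yr
print(P_tot, P_tot / P_SN)                   # ~6e36 erg/s, a few 1e-5 of SNe
```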
Based on these results, we will discuss observational implications in the next section.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig3.eps}
\end{center}
\caption{
Same as Figure~\ref{fig:LF1} but with the vertical axis
multiplied by $L_{j} \sim {\dot M}c^2$.
We can find that the most energy is generated by BHs
in the cold H$_{\rm I}$ medium.
Dashed line plots the required power spectrum for supplying the observed CRs.
We can see that the total power is comparable to
that of CRs above the knee energy at around $\sim 3$ PeV
within the model uncertainties that are listed in Table~\ref{tab:model}.
}
\label{fig:power}
\end{figure}
\section{Observational implications}\label{sec:obs}
\subsection{PeVatrons}\label{sec:PeVatron}
Particle acceleration is ubiquitous in the BH jet system
as manifested in active galactic nuclei (AGNs),
gamma-ray bursts (GRBs) and X-ray binaries \citep{Longair11}.
The maximum acceleration energy is limited by the source size, i.e.,
the so-called Hillas condition,
$\varepsilon_{\max}=Z q B r/\Gamma$,
where $Z$ is the charge of accelerated particles,
$\Gamma$ is the Lorentz factor of the acceleration region,
and $B$ is the lab-frame magnetic field.
This can be written in terms of the Poynting luminosity
$L_{j} \sim 2 \pi \theta_j^2 r^{2} c (B^2/8\pi)$,
where the magnetic field carries an energy density $B^2/8\pi$ at a radius $r$
with the jet opening angle $\theta_j$ \citep{Norman+95,Blandford00,Waxman04}.
With the causality condition $\Gamma \theta_j \gtrsim 1$, we have
\begin{eqnarray}
\varepsilon_{\max}=Zq B \frac{r}{\Gamma}\sim \frac{2Zq}{\Gamma \theta_j} \sqrt{\frac{L_{j}}{c}}
\lesssim 3.5\ {\rm PeV}\ Z \left(\frac{L_{j}}{10^{36}\,{\rm erg}\,{\rm s}^{-1}}\right)^{1/2}.
\label{eq:emax}
\end{eqnarray}
Therefore bright sources, i.e., massive BHs in dense ISM,
are potential ``PeVatrons'' accelerating particles beyond PeV energy
\citep{Barkov+12,Kotera_Silk16}.
We plot $\varepsilon_{\max}$ on the upper horizontal axis
in Figures~\ref{fig:LF1} and \ref{fig:power}.
Possible acceleration sites are discussed in Section~\ref{sec:discuss}.
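The prefactor in Equation~(\ref{eq:emax}) follows from Gaussian cgs constants alone; the sketch below checks it for $\Gamma\theta_j\sim 1$ and $Z=1$:

```python
# Check of Eq. (emax): Hillas-type maximum energy for a Poynting
# luminosity L_j, with Gamma*theta_j ~ 1 (Gaussian cgs units).
import math

q = 4.803e-10            # esu (elementary charge)
c = 2.998e10             # cm/s
erg_to_eV = 6.242e11

def eps_max_PeV(L_j, Zchg=1):
    eps = 2 * Zchg * q * math.sqrt(L_j / c)   # erg
    return eps * erg_to_eV / 1e15             # PeV

print(eps_max_PeV(1e36))   # ~3.5 PeV, as quoted in Eq. (emax)
```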
In Figure~\ref{fig:power},
we also plot the required power spectrum for supplying the observed CRs
by using
$L_0 (\varepsilon/\varepsilon_{\min}^{\rm CR})^{2-s} \frac{d\varepsilon}{\varepsilon}
=L_0 (\varepsilon/\varepsilon_{\min}^{\rm CR})^{2-s} \frac{d{\dot M}}{2{\dot M}}$
where the spectral index is $s=2.34$ below the knee \citep{Genolini+15}\footnote{
The intrinsic spectral index of CRs $s$ is unobservable and
different from the observed one by the diffusion coefficient index,
which is usually obtained from observations of the boron-to-carbon ratio
\citep{Evoli+15,Oliva+15,Genolini+15}.
}
and $0.3$ higher above the knee \citep{Blumer+09,Gaisser+13}.
SN remnants are commonly believed to supply
most Galactic CRs from
the peak energy $\varepsilon_{\min}^{\rm SN}=1$ GeV
to the knee $\varepsilon_{\max}^{\rm SN}=3$ PeV.
The normalization $L_0$ is determined by the fact
that a fraction $\epsilon_{\rm CR}=0.1$ of the SN kinetic energy
can yield CRs,
\begin{eqnarray}
\frac{\epsilon_{\rm CR} E_{\rm SN}}{100\,{\rm yr}}
=\int_{\varepsilon_{\min}^{\rm SN}}^{\varepsilon_{\max}^{\rm SN}} L_0
\left(\frac{\varepsilon}{\varepsilon_{\min}^{\rm SN}}\right)^{2-s}
\frac{d\varepsilon}{\varepsilon}.
\end{eqnarray}
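Under the assumption $E_{\rm SN}/100\,{\rm yr}\sim 3\times10^{41}$ erg s$^{-1}$ quoted earlier, the normalization $L_0$ can be evaluated in closed form; the sketch below is our own illustrative evaluation:

```python
# Normalization L_0 of the required CR power spectrum below the knee,
# assuming eps_CR * E_SN / 100 yr = 0.1 * 3e41 erg/s (quoted earlier).
s = 2.34
eps_min, eps_max = 1e9, 3e15          # eV: 1 GeV to the knee at 3 PeV

# integral of (eps/eps_min)^(2-s) deps/eps, done analytically
integral = (1 - (eps_max / eps_min) ** (2 - s)) / (s - 2)
L0 = 0.1 * 3e41 / integral
print(L0)    # ~1e40 erg/s per logarithmic interval at eps_min
```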
From Figure~\ref{fig:power} we can see that
the BH jets can produce comparable energy to that required for
the observed CRs at and beyond the knee energies $\gtrsim 3$ PeV,
taking the model uncertainties into account
(see Section~\ref{sec:uncertain}).
The origin of these CRs is not known \citep[e.g.,][]{Hillas05,Blumer+09,Gaisser+13}.
Currently known gamma-ray sources, including even SN remnants,
do not show the characteristic PeVatron spectrum extending
without a cutoff or break to tens of TeV \citep{Aharonian13},
with a possible exception of the Galactic center Sagittarius A* \citep{HESS16}.
Even if the SN remnants are responsible for CRs up to the knee,
the transition from Galactic to extragalactic CRs
occurs between the knee and the ankle.
Ultra-high-energy CRs above the ankle are extragalactic
because of the observed isotropy \citep{Abreu+10,Abbasi+16}.\footnote{
There could be possible hot spots \citep{Abbasi+14,Aab+15}.}
If the knee corresponds to the proton cutoff
and the source composition is solar,
the rigidity-dependent cutoffs extending beyond the knee
are not sufficient to fill the observed all-particle flux
\citep{Hillas05}.
This implies a second (Galactic) component at energies between the knee and the ankle, sometimes called ``component B''.
Our results suggest that the BH jets might be PeVatrons
and/or fill the gap between the knee and the ankle.
An unnatural point of this possibility is that
the BH jets are entirely unrelated to the SN remnants,
so it would be a mere coincidence that the CR fluxes
from the two kinds of sources agree within a factor of a few.
There are also orders-of-magnitude uncertainties
in the estimate of the total power of the BH jets
(see Section~\ref{sec:uncertain}).
Furthermore it is difficult to calculate the CR spectrum and
the acceleration efficiency at present.
Nevertheless the BH jets can potentially accelerate
the CRs at and beyond PeV energy
with the flux comparable to the observations.
\subsection{Cosmic-ray positrons and electrons}\label{sec:positron}
The CR positron fraction (the ratio of positrons to electrons plus positrons)
has been measured by the PAMELA satellite \citep{Adriani+09}
and more precisely by the AMS-02 experiment \citep{Aguilar+13}.
The observed positron fraction rises from $\sim 10$ GeV at least to $\sim 300$ GeV,
indicating the presence of nearby positron sources within $\sim 1$ kpc.
Although the dark matter annihilation or decay scenario is now severely constrained by
other messengers,
there are still many astrophysical candidates and the true origin is unclear
\citep[e.g.,][]{Serpico12,Ioka10,Kashiyama+11,Fujita+09,Kohri+16}.
The BH jets could accelerate electrons and positrons preferentially
if the jets are not contaminated by baryons \citep{Barkov+12}
but associated with the pair cascade
(see also Section~\ref{sec:discuss}).
The maximum energy of particle acceleration is sufficient for producing
the observed positrons as in Equation~(\ref{eq:emax}).
The required total power for the positron excess is
about $10^{-4}$ of that of SN explosions, i.e., $\sim 3 \times 10^{37}$ erg s$^{-1}$.
This is comparable to that of the BH jets in Equation~(\ref{eq:budget})
within the model uncertainties
(see Section~\ref{sec:uncertain}).
Therefore the BH jets qualify as a candidate source,
although it is again difficult to estimate the spectrum and efficiency of
the positron acceleration.
A BH jet likely forms an extended nebula in the ISM,
similarly to an AGN cocoon/lobe \citep{Begelman+89b} and a GRB cocoon
\citep{Ramirez-Ruiz+02,Mizuta_Ioka13}.
A BH jet collides with the ISM at the jet head.
The shocked matter goes sideways forming a cocoon.
Although the cocoon pressure initially collimates the jet,
the collimation radius expands and finally reaches
the termination (reverse) shock.
The maximum size of the termination shock is given by the condition that
the jet pressure balances with the ram pressure of the ISM,
$L_{j}/2 \pi \theta^2 r_h^2 c \sim n \mu m_u V^2$,
i.e.,
\begin{eqnarray}
r_h &\sim & \sqrt{\frac{L_{j}}{2\pi \theta^2 c n \mu m_u V^2}}
\sim 2\ {\rm pc}
\left(\frac{L_{j}}{10^{36}\,{\rm erg}\,{\rm s}^{-1}}\right)^{1/2}
\nonumber\\
&\times&
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)^{-1/2}
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-1}
\left(\frac{\theta}{0.1}\right)^{-1},
\label{eq:rhead}
\end{eqnarray}
at which point the jet is completely bent by the ISM
and dissipated into the cocoon.
The cocoon is also extended along the direction of the proper motion,
leading to a more or less spherical shape.
The forward shock of the BH jet nebula expands with
a velocity $v_c \sim (L_{j}/n \mu m_u)^{1/5} t^{-2/5}$
and a size $r_c \sim v_c t$,
slowing down to $v_c \sim V$ at the maximum size
$r_{c,\max} \sim (L_{j}/n m_u V^3)^{1/2}
\sim 80$ pc $(L_{j}/10^{36}\,{\rm erg}\,{\rm s}^{-1})^{1/2}
(n/10\,{\rm cm}^{-3})^{-1/2}
(V/10\,{\rm km}\,{\rm s}^{-1})^{-3/2}$.
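The jet-head radius of Equation~(\ref{eq:rhead}) and the maximum cocoon size quoted above can be checked numerically; in the sketch below the mean molecular weight $\mu=1.3$ is our own assumption, as the text does not specify it:

```python
# Numerical check of r_h (Eq. rhead) and r_c,max in Gaussian cgs units.
import math

pc = 3.086e18            # cm
m_u = 1.661e-24          # g
mu = 1.3                 # mean molecular weight (our assumption)

L_j = 1e36               # erg/s
n = 10.0                 # cm^-3
V = 1e6                  # 10 km/s in cm/s
theta = 0.1
c = 2.998e10             # cm/s

r_h = math.sqrt(L_j / (2 * math.pi * theta**2 * c * n * mu * m_u * V**2))
r_c_max = math.sqrt(L_j / (n * m_u * V**3))
print(r_h / pc, r_c_max / pc)   # ~1.6 pc and ~80 pc (text quotes ~2, ~80)
```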
The BH jet nebula is similar to an old pulsar wind nebula (PWN).
Then the CR electrons and positrons likely escape from the nebula to the ISM
without adiabatic cooling \citep{Kashiyama+11}.
Radiative cooling such as synchrotron emission
limits the maximum energy of electrons and positrons,
which depends on the propagation time and the magnetic field in the nebula
\citep{Kawanaka+10}.
Future observations beyond TeV energies by CALET, DAMPE and CTA will
probe such leptonic PeVatrons \citep{Kobayashi+04,Kawanaka+11}.
\subsection{Unidentified TeV gamma-ray sources}\label{sec:unID}
The Galactic Plane survey carried out by HESS led to the discovery of
dozens of TeV gamma-ray sources.
Among these, the most abundant category is dark accelerators, so-called
TeV unidentified sources (TeV unIDs),
which have no clear counterpart at other wavelengths
\citep{Aharonian+05,Aharonian+06,Aharonian+08}.
They lie close to the Galactic plane, suggesting Galactic sources.
Their power-law spectra with an index of 2.1--2.5 imply
a connection with CR accelerators.
They are extended $\Delta \Theta \sim 0.05$--$0.3^{\circ}$,
corresponding to a physical size of
$\sim 3\,{\rm pc}\,(\Delta \Theta/0.2^{\circ}) (D/{\rm kpc})$
for an unknown distance $D$.
Still, their unID nature prevents us from identifying their origin
\citep[e.g.,][]{Yamazaki+06,deJager+09,Ioka_Meszaros10}.
In Figure~\ref{fig:flux}, we plot the observed flux distribution of the TeV unIDs
at energies $\varepsilon_\gamma >0.2$ TeV
in terms of the cumulative number of sources above a flux $N(>F)$,
i.e., log $N$--log $F$ plot.
In order to compare it with the BH jets,
we calculate the flux distribution from the luminosity function
in Equation~(\ref{eq:LF1})
by integrating the number of sources above a given (bolometric) flux
$F = L_{j}/4\pi D^2 \sim {\dot M} c^2/4\pi D^2$
as
\begin{eqnarray}
\frac{dN}{dF}=\int \frac{dN}{d{\dot M}}
\frac{4\pi D^2}{c^2}
\frac{DdD d\theta}{\pi R_d^2},
\label{eq:dN/dF}
\end{eqnarray}
where we approximate the spatial distribution of the BH jets by
a thin uniform disk with a radius $R_d=15$ kpc
and the distance of the Sun to the Galactic center $R_\odot=8$ kpc.
The thin-disk approximation is applicable if the observed distance is
larger than the scale height $\sim 300$ pc
for the fiducial case (see Table~\ref{tab:model}).
Figure~\ref{fig:flux} shows that
the flux distribution is comparable with that of TeV unIDs
if the gamma-ray efficiency is about $\epsilon_\gamma \sim 10^{-2}$
for the fiducial parameters (see Table~\ref{tab:model}).
Note that the inverse-Compton (IC) cooling time of 10 TeV electrons is $\sim 10^5$ yr.
If the age is $\sim 10^5$ yr, the TeV gamma-ray flux is
$\sim 0.1$--$0.02\, L_e$,
implying that $\epsilon_e \sim 0.1$--$0.5$.
This is comparable with values considered in GRB jets and PWNe.
If this is the case,
the CTA (Cherenkov Telescope Array) observatory
will increase the number of TeV unIDs up to $\sim 300$
by improving the sensitivity by about an order of magnitude
in the near future \citep{CTA13}.
Note that the flux distribution follows $N(>F) \propto D^{2} \propto F^{-1}$
if the spatial distribution is disk-like,
which is different from $N(>F) \propto F^{-1.5}$ for a 3D Euclidean distribution.
The uniform disk approximation is acceptable
for the current observations, which have not reached the Galactic center yet.
For future observations, we have to consider
the high density region near the Galactic center.
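The $N(>F)\propto F^{-1}$ slope for a thin disk can be demonstrated with a short Monte Carlo sketch (our own illustration; the sample size, flux cuts, and random seed are arbitrary choices):

```python
# Monte Carlo sketch of the log N - log F slope for standard-candle
# sources in a thin uniform disk (R_d = 15 kpc, Sun at 8 kpc):
# N(>F) ~ F^-1 at the bright end, unlike F^-1.5 for 3D Euclidean space.
import numpy as np

rng = np.random.default_rng(1)
R_d, R_sun, N = 15.0, 8.0, 400_000     # kpc, kpc, sample size

r = R_d * np.sqrt(rng.random(N))       # uniform over the disk area
phi = 2 * np.pi * rng.random(N)
D2 = r**2 + R_sun**2 - 2 * r * R_sun * np.cos(phi)  # distance to Sun, squared
F = 1.0 / D2                           # flux for unit luminosity (arb. units)

# slope of log N(>F) vs log F over the bright (nearby) end
F_cuts = np.array([2.0, 4.0, 8.0, 16.0])
counts = np.array([(F > fc).sum() for fc in F_cuts])
slope = np.polyfit(np.log(F_cuts), np.log(counts), 1)[0]
print(slope)    # ~ -1
```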
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig4.eps}
\end{center}
\caption{
Flux distribution expressed by the cumulative number of sources
above a given flux, i.e., log $N(>F)$--log $F$ plot,
with Equations~(\ref{eq:dN/dF}) and (\ref{eq:LF1}).
This is compared with the observations of TeV unIDs.
Both are comparable if the gamma-ray efficiency is
$\epsilon_\gamma \sim 10^{-2}$.
CTA could detect additional $\sim 300$ BH jets in the near future.
}
\label{fig:flux}
\end{figure}
The nebular size in Equation~(\ref{eq:rhead}) is also consistent with the
extended nature of TeV unIDs.
The BH jet nebula also evades
strong upper limits in X-rays with a TeV to X-ray flux ratio up to $\gtrsim 50$
\citep{Matsumoto+07,Bamba+07,Matsumoto+08,Bamba+09,Fujinaga+11,Sakai+11}.
This is because the physical parameters such as the energy density
and the magnetic field are similar to those of an old PWN.
Their emission spectra have the unID nature
thanks to the old age \citep{Yamazaki+06,deJager+09,Ioka_Meszaros10}.
In addition, the ADAF disk is radiatively inefficient.
The X-ray flux of the ADAF disk is about
$F_{\nu} \sim
(\alpha_{\rm QED}/\alpha^2) ({m_e}/{m_u})
({{\dot M}c^2}/{L_{\rm Edd}})
{\dot M}/4\pi D^2 m_e
\sim 1 \times 10^{-18}\,{\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-2}\,{\rm keV}^{-1}$
$\left(\alpha/0.1\right)^{-2}$
$\left({n}/{10\,{\rm cm}^{-3}}\right)^2$
$\left({M}/{10 M_\odot}\right)^3$
$\left({V}/{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-6}$
$\left({D}/{{\rm kpc}}\right)^{-2}$,
below the current limit.
\section{Model uncertainties}\label{sec:uncertain}
Although the GW observations significantly narrow down
the possible parameter space,
in particular putting a lower bound on the number of spinning BHs
in Equation~(\ref{eq:NBH}),
there are still large uncertainties about the model parameters
and the estimate for the BH jet power.
In this section, we clarify the range of the uncertainties by considering
four representative effects:
the initial BH spin (Section~\ref{sec:spin}),
the velocity distribution depending on the binary BH formation scenario
(Section~\ref{sec:scenario}),
the accretion rate profile changed by the disk wind (Section~\ref{sec:wind}),
and the feedback on the ISM by the outflow (Section~\ref{sec:feedback}).
These effects on the model parameters and
the resulting total power are summarized
in Table~\ref{tab:model}.
We constrain the uncertainty in the total power of the BH jets
to within a factor of $10^{\pm 3}$,
which is much tighter than previous estimates.
\begin{table*}
\centering
\caption{Possible uncertainties of the model parameters and
the resulting total power $P_{\rm tot}$ are summarized.
The isolated binary formation scenario is the fiducial case.
Entries marked ``$-$'' equal the fiducial values.
The model parameters are the number of BH jets $N_{\rm BH}$,
the dispersion of the velocity distribution $\sigma_v$ in Equation~(\ref{eq:velocity}),
the initial BH spin $a_*^i$,
the accretion rate profile $s$ in Equation~(\ref{eq:profile}),
and the duty cycle $\mathscr{D}$ reduced by the feedback
on the ISM by the disk outflow.
}
\label{tab:model}
\begin{tabular}{lllllll} \hline
& Number & Velocity & Spin & Disk & Duty cycle & Total power \\
& $N_{\rm BH}$ & $\sigma_v$ [km s$^{-1}$] & $a_*^i$ & $s$ & $\mathscr{D}$ & $P_{\rm tot}$ [erg s$^{-1}$] \\ \hline \hline
Isolated binary (fiducial) & $7 \times 10^4$ & $40$ & $0$ & $0$ & $1$ & $\sim 10^{37}$ \\
High spin & $10^{8}$ & $-$ & $0.2$ & $-$ & $-$ & $\sim 10^{37} \times 10^{3}$ \\
Stellar cluster & $-$ & $200$ & $-$ & $-$ & $-$ & $\sim 10^{37} \times 10^{-2}$ \\
Wind & $-$ & $-$ & $-$ & $1$ & $-$ & $\sim 10^{37} \times 10^{-2}$ \\
Feedback & $-$ & $-$ & $-$ & $1$ & $10^{-1}$ & $\sim 10^{37} \times 10^{-3}$ \\ \hline
\end{tabular}
\end{table*}
\subsection{Initial spin}\label{sec:spin}
If BHs have spins before the mergers,
they can launch BZ jets even without merging.
Such spinning BHs could result from the massive stellar collapse.
The total number of BHs in our Galaxy is
about $N_{\rm BH} \sim 10^{8}$ \citep{Shapiro_Teukolsky83},
$\sim 10^{3}$ times larger than that of the merged BHs in Equation~(\ref{eq:NBH}).\footnote{
The number would be $N_{\rm BH} \sim 10^{10}$
if dark matter were composed of primordial BHs that are relevant to GWs,
while it is suggested that the fraction of primordial BHs in dark matter
is small $\sim 10^{-4}$
by observations of GWs and CMB spectral distortion \citep{Sasaki+16}.
}
In addition the GW recoil is absent without a merger,
increasing the total power by a factor of ten
as shown in Figure~\ref{fig:power}.
Then the total power is larger than the fiducial value
by a factor of $\sim 10^{4} (a_*^i/0.7)^2$ altogether,
that is, $P_{\rm tot} \sim 10^{41}\ {\rm erg}\ {\rm s}^{-1}\ (a_*^i/0.7)^2$,
where the $(a_*^{i})^2$ dependence comes from
that of the BZ luminosity in Equation~(\ref{eq:LBZ}).
GW observations show no evidence for large spins.
The initial spin is probably small for most BHs
because massive star progenitors with solar metallicity
lose angular momentum via stellar winds \citep{Heger+03,Hirschi+05}.
For the same reason, the BH mass is also smaller than in the fiducial case
\citep{LIGO_astro16}, reducing the total power of the BH jets.
At low metallicity, the winds are weak and the resulting BH spin may be high
\citep{Yoon_Langer05,Hirschi+05,Kinugawa+16c}.
A rapid rotation of the progenitors could lead to
a chemically homogeneous evolution without a common envelope phase,
avoiding a merger before the BH formation
\citep{Mandel_deMink16,Marchant+16}.
However the number of such BHs is much less than $N_{\rm BH} \sim 10^{8}$.
The event rate of GRBs, which likely produce spinning BHs,
is comparable to that of the BH mergers.
Although some BHs in X-ray binaries might have high spins,
these measurements are subject to systematic errors
\citep{Remillard_McClintock06}.
For GW151226, the effective spin parameter is $0.21^{+0.20}_{-0.10}$
\citep{LIGO_O1}.
So we tentatively take $a_*^i \sim 0.2$ as an upper limit
in Table~\ref{tab:model}.
This is the most extreme case because the total power
is comparable to that of SN explosions,
${E_{\rm SN}}/{100\,{\rm yr}} \sim 3 \times 10^{41}\ {\rm erg}\ {\rm s}^{-1}$.
\subsection{Binary BH formation scenario}\label{sec:scenario}
The accretion rate and the resulting jet luminosity sensitively depend on
the velocity of the BH in Equation~(\ref{eq:Mdot}).
We have adopted $\sigma_v=40$ km s$^{-1}$ as a fiducial value
for the isolated binary formation scenario
\citep{Tutukov_Yungelson93,Dominik+15,Belczynski+16,Lipunov+16}
in Equation~(\ref{eq:velocity}).
The GW150914 masses favor low metallicity below $0.5 Z_{\odot}$
\citep{LIGO_astro16}.
The extreme case is zero metallicity Population III stars
\citep{Kinugawa+14,Kinugawa+16a,Kinugawa+16b,Kinugawa+16c}.
If BHs form in very low metallicity $< 0.01 Z_{\odot}$,
the GW events may be dominated by recent BH mergers
in dwarf galaxies \citep{Lamberts+16}
because the low metallicity allows a small initial separation of a BH binary.
Then the merged, spinning BHs are incorporated into our Galaxy
relatively recently,
joining the halo component with a velocity dispersion of
$\sigma_v \sim 200$ km s$^{-1}$.
Another scenario is
the dynamical binary formation in a dense stellar cluster
\citep{Kulkarni+93,Sigurdsson_Hernquist93,PortegiesZwart_McMillan00,Rodriguez+15,Rodriguez+16a,Mapelli16}.
In a high-density stellar environment,
BHs dynamically interact and form binaries.
Since the interaction is frequent in the clusters,
most of the BH mergers may occur outside the clusters
following dynamical ejection.
The escape velocity of the clusters is smaller than that of our Galaxy.
Thus the merged BHs are floating in our halo
with a velocity dispersion of $\sigma_v \sim 200$ km s$^{-1}$.
Primordial BHs are also a possible candidate
\citep[e.g.,][]{Bird+16,Sasaki+16,Nakamura+97,Ioka+98,Ioka+99},
although this scenario requires a fine tuning
in the primordial density fluctuation.
In this case, the BHs reside in our halo with $\sigma_v \sim 200$ km s$^{-1}$.
Figure~\ref{fig:LF2} shows the case of $\sigma_v \sim 200$ km s$^{-1}$.
Compared with the fiducial case $\sigma_v \sim 40$ km s$^{-1}$
(gray dashed line),
the luminosity and hence the total power
are reduced by a factor of $\sim 10^{2}$.
This factor is roughly given by the velocity dependence of the accretion rate,
$\sim (40\,{\rm km}\,{\rm s}^{-1}/200\,{\rm km}\,{\rm s}^{-1})^3 \sim 0.008$.
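This scaling is simple to verify numerically; the sketch below is our own check (not part of the paper's calculation) of the $\propto V^{-3}$ suppression:

```python
# Bondi-Hoyle accretion scales as Mdot ~ V^-3, so raising the velocity
# dispersion from 40 km/s to 200 km/s suppresses the accretion rate, and
# hence the jet luminosity, by (40/200)^3 ~ 0.008, i.e. roughly 10^-2.
suppression = (40.0 / 200.0) ** 3
print(f"suppression factor = {suppression:.4f}")
```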
The GW recoil effect becomes less significant than the fiducial case
because the velocity dispersion is comparable with the recoil velocity.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig5.eps}
\end{center}
\caption{
Same as the fiducial case in Figure~\ref{fig:LF1}
except for the dispersion of the velocity distribution
$\sigma_v=200$ km s$^{-1}$.
The total luminosity function for the fiducial case
$\sigma_v=40$ km s$^{-1}$ is also plotted by
a gray dashed line.
}
\label{fig:LF2}
\end{figure}
\subsection{Wind}\label{sec:wind}
It remains highly uncertain how much of the accreting matter
at the Bondi radius reaches the BH \citep{Yuan_Narayan14}.
Some supermassive BH systems with jets
seem to require the Bondi accretion rates calculated from the observed
gas temperature and density to power the observed jets
\citep{Allen+06,Rafferty+06,Russell+13}.
On the other hand,
the ADAF model implies positive Bernoulli parameter for the inflow
in the self-similar regime,
which suggests that hot accretion flows could have outflows
\citep{Narayan_Yi94,Narayan_Yi95}.
The mass outflows make the accretion profile decrease inward
approximately in a power-law form,
${\dot M}(r)={\dot M}(r_{\rm disk}) \left({r}/{r_{\rm disk}}\right)^{s}$,
as in the adiabatic inflow-outflow solutions (ADIOS) model
\citep{Blandford_Begelman99,Blandford_Begelman04}.
The index is limited to $0 \le s < 1$ by the mass and energy conservation,
but has not been determined yet \citep{Yuan_Narayan14}.
The least accretion case corresponds to $s \approx 1$.
Recent 3D general relativistic MHD simulations
suggest that $s \approx 1$ continues down to $20 r_S$,
below which the mass flux is constant $s=0$ \citep{Yuan+15}.
This is also implied by an analytical study \citep{Begelman12}.
If we adopt this least accretion case,
the accretion rate of the BH is given by
\begin{eqnarray}
{\dot M}_{\rm BH} \approx {\dot M} \left(\frac{20 r_S}{r_{\rm disk}}\right)^{s},
\quad {\rm if}\ r_{\rm disk}>20r_S,
\label{eq:profile}
\end{eqnarray}
with $s \approx 1$ where the disk radius $r_{\rm disk}$
is given by Equation~(\ref{eq:rdisk}).
Correspondingly, the luminosity of the BH jet is reduced by the same factor
$\left({20 r_S}/{r_{\rm disk}}\right)^{s}$.
Figure~\ref{fig:LF3} shows the luminosity function
using the accretion rate of a BH in Equation~(\ref{eq:profile}).
Compared with the fiducial case $s=0$ (gray dashed line),
the luminosity and the total power
are reduced by a factor of $\sim 10^{2}$.
This factor is roughly given by the ratio
$r_{\rm disk}/20 r_S$ in Equation~(\ref{eq:rdisk}).
The GW recoil effect becomes less significant than the fiducial case
because the disk radius $r_{\rm disk}$ and the accretion rate ${\dot M}$
have similar dependences on the velocity.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig6.eps}
\end{center}
\caption{
Same as the fiducial case in Figure~\ref{fig:LF1}
except for the accretion rate of the BH
${\dot M}_{\rm BH}={\dot M} (20 r_S/r_{\rm disk})^{s}$
with $s=1$, which is reduced by the wind.
The total luminosity function for the fiducial case
$s=0$ is also plotted by a gray dashed line.
}
\label{fig:LF3}
\end{figure}
\subsection{Feedback}\label{sec:feedback}
Feedback from radiation, jets and winds on the surrounding ISM
could be crucial for estimating the total power of the BH jets,
as frequently argued in the context of supermassive BHs
\citep{Yuan_Narayan14}.
In the Galactic BH case,
the radiative feedback is weak because the ADAF disk is much fainter than
the Eddington luminosity.
The radiation may ionize the ISM around the Bondi radius,
but once ionized, the cross section
for the interaction between the ISM and photons
decreases by many orders of magnitude, reducing the radiative feedback.
The jet feedback is also not strong
because, although the jet dominates the energy output,
its penetration ability makes the dissipation scale large
as shown in Equation~(\ref{eq:rhead}).
A large amount of ISM is capable of radiating the injected energy.
The most influential feedback would be due to the wind from the disk,
if it exists.
If the wind is efficient with $s \approx 1$ in Equation~(\ref{eq:profile}),
even a small efficiency of the wind feedback
$\epsilon_w \gtrsim 10^{-6} (M/10 M_{\odot})^{2/3}
(V/10\,{\rm km}\,{\rm s}^{-1})^{-4/3}$
is able to heat the ISM at the Bondi radius to blow away,
$\epsilon_w {\dot M}_{\rm BH} c^2 > {\dot M} V^2$.
A larger efficiency $\epsilon_w\sim 0.03$--$0.001$
is implied by simulations \citep{Sadowski+16,Yuan+15,Ohsuga_Mineshige11}.
On the other hand, the wind will stop
if the mass accretion at the Bondi radius is terminated.
Therefore we expect that the BH activity is intermittent
with some duty cycle $\mathscr{D}$ if the wind feedback exists.
A rough estimate of the duty cycle is as follows.
The wind is somewhat collimated initially when it is released from the disk.
If it were spherical, the wind would not be launched because
the ram pressure of the Bondi accretion onto the disk exceeds
that of the wind.
The $4\pi$ solid angle of the ISM is affected by the wind
after the wind is decelerated by the ISM,
which will happen outside the Bondi radius
because the ram pressure of the accretion is a decreasing function of the radius
and the wind goes straight inside the Bondi radius.
Thus the accretion continues at least
for the dynamical time at the Bondi radius,
\begin{eqnarray}
t_a = \frac{r_B}{V} \sim 40\ {\rm yr}
\left(\frac{M}{10 M_{\odot}}\right)
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-3}.
\end{eqnarray}
The injected energy during the active time is about
\begin{eqnarray}
E_{a}&=&\epsilon_w {\dot M}_{\rm BH} c^2 t_a
\sim 3 \times 10^{40}\ {\rm erg}
\left(\frac{\epsilon_w}{3\%}\right)
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)
\nonumber\\
&\times&\left(\frac{M}{10 M_{\odot}}\right)^{11/3}
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-8/3}.
\end{eqnarray}
The injected energy produces a wind remnant
and the lifetime of the remnant is about
$t_{\rm merge} \sim t_{\rm PDS} \times 153 (E_{51}^{1/14} n_0^{1/7}/V_6)^{10/7}
\sim 2\times 10^{6}$ yr $E_{51}^{3/14} n_0^{-4/7} (E_{51}^{1/14} n_0^{1/7}/V_6)^{10/7}$
according to the notation in \citet{Cioffi+88},
which gives
\begin{eqnarray}
t_{\rm merge} &\sim& 300\ {\rm yr}
\left(\frac{E_a}{10^{40}\,{\rm erg}}\right)^{31/98}
\left(\frac{n}{10\,{\rm cm}^{-3}}\right)^{-18/49}
\nonumber\\
&\times& \left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-10/7}.
\end{eqnarray}
Therefore the duty cycle is roughly
\begin{eqnarray}
\mathscr{D} &\sim& \frac{t_a}{t_{\rm merge}} \sim 0.1
\left(\frac{V}{10\,{\rm km}\,{\rm s}^{-1}}\right)^{-0.73}
\nonumber\\
&\times& \left(\frac{n}{10\,{\rm cm}^{-3}}\right)^{-0.051}
\left(\frac{M}{10\,M_{\odot}}\right)^{-0.16}.
\end{eqnarray}
We use $\mathscr{D} \sim 10^{-1}$ in Table~\ref{tab:model}.
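The chain of estimates above can be reproduced with a short script. This is our own numerical check in cgs units; it assumes the convention $r_B = GM/V^2$ for the Bondi radius (which reproduces the quoted $t_a \sim 40$ yr) and takes the wind-remnant lifetime $t_{\rm merge} \sim 300$ yr directly from the text:

```python
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33      # solar mass, g
yr = 3.156e7          # one year, s

M = 10 * M_sun        # fiducial BH mass
V = 10e5              # 10 km/s in cm/s

r_B = G * M / V**2    # Bondi radius (convention assumed here)
t_a = r_B / V / yr    # dynamical time at the Bondi radius, in years
print(f"t_a ~ {t_a:.0f} yr")    # ~40 yr, as quoted in the text

t_merge = 300.0       # wind-remnant lifetime quoted in the text, yr
D = t_a / t_merge     # duty cycle
print(f"duty cycle ~ {D:.2f}")  # ~0.1
```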
\section{On Fermi GBM events associated with GW150914}\label{sec:Fermi}
The Gamma-ray Burst Monitor (GBM) on board the Fermi satellite
reported a weak gamma-ray burst lasting 1 second, occurring 0.4 seconds after GW150914.
Assuming the redshift of GW150914, $z=0.09^{+0.03}_{-0.04}$,
the luminosity in 1 keV--10 MeV is $1.8^{+1.5}_{-1.0} \times 10^{49}$ erg s$^{-1}$
\citep{Connaughton+16}.
This was unexpected
and prompted many theoretical speculations
\citep{Loeb16,Zhang16,Perna+16,Li+16,Veres+16,Cardoso+16,Kimura+16}.
The anti-coincidence shield (ACS) of the Spectrometer on board INTEGRAL (SPI)
put upper limits on the gamma-ray emission
with similar fluxes \citep{Savchenko+16}.
The GBM result also depends on the analysis of low-count statistics
\citep{Greiner+16}.
No counterpart is observed for GW151226 and LVT151012
\citep{Racusin+16}.
Future follow-up observations will ultimately be necessary
to confirm or refute the GBM detection
\citep{Yamazaki+16,Morsony+16,Murase+16}.
If the signal were caused by the merged BH,
the BH would be surrounded by matter.
The size of the matter distribution is
$r_{m} \sim 1 \times 10^{8}$ cm
so that the accretion time is
\begin{eqnarray}
t_{\rm acc}&=&\frac{1}{\alpha \Omega_K} \left(\frac{r_{m}}{H}\right)^2
\sim 1.4\ {\rm sec}
\left(\frac{\alpha}{0.1}\right)^{-1}
\left(\frac{M}{60 M_{\odot}}\right)^{-1/2}
\nonumber\\
&\times&\left(\frac{r_m}{1 \times 10^{8}\,{\rm cm}}\right)^{3/2}
\left(\frac{H/r_m}{0.3}\right)^{-2},
\end{eqnarray}
where $\alpha$ is the viscosity parameter,
$H$ is the disk scale height,
and $\Omega_K=\sqrt{GM/r_m^3}$ is the Kepler rotation frequency.
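Plugging the fiducial numbers into the viscous time recovers the second-scale estimate; below is our own quick check in cgs units:

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
M_sun = 1.989e33      # g

alpha = 0.1           # viscosity parameter
M = 60 * M_sun        # merged BH mass
r_m = 1e8             # cm, size of the matter distribution
H_over_r = 0.3        # disk aspect ratio H/r_m

Omega_K = math.sqrt(G * M / r_m**3)              # Kepler frequency at r_m
t_acc = (1.0 / (alpha * Omega_K)) * H_over_r**-2 # viscous accretion time
print(f"t_acc ~ {t_acc:.1f} s")  # ~1.2 s, consistent with the quoted ~1.4 s
```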
The mass of the matter should satisfy
$M_{m} \gtrsim 10^{-5} \theta_j^2 M_{\odot}$
where $\theta_j$ is the opening angle of the GRB jet.
The accretion from the ISM affects the matter surrounding a BH.
In particular, it can evaporate a possible dead disk
which was invoked for the GBM event
\citep{Perna+16,Kimura+16}.
A dead disk is assumed to be cold and neutral due to the small mass,
suppressing the magnetorotational instability and hence the viscosity,
and remains unaccreted, keeping matter for producing the gamma-ray event.
However, the accretion from the ISM forms
a hot disk that sandwiches the dead disk and heats its surface.\footnote{
Note that the accretion does not stop even onto a binary
because the disk is thick.
}
The surface temperature develops a gradient,
being greater than $T \gtrsim 10^4$ K
(corresponding to the sound velocity
$v_i \sim \sqrt{k_B T/m_u} \sim 10$ km s$^{-1}$)
for the ionized atmosphere.
The density $n_i$ at the base of the ionized atmosphere
is determined by the pressure balance
$n_i v_i^{2} \sim n(r) v_h^2$
where $n(r) \sim n (r_B/r)^{3/2}$ and $v_h \sim \sqrt{GM/r}$
are the density and the thermal velocity of the hot disk.
Given $n_i$ and $v_i$,
we can estimate the mass evaporation rate
\citep[cf.][]{Hollenbach+94} from the dead disk
as
\begin{eqnarray}
{\dot M}_{\rm eva}
&\sim& 2\pi r^2 n_i v_i m_u
\sim 1\times 10^{15}\,{\rm g}\,{\rm s}^{-1}
\left(\frac{r}{10^{12}\,{\rm cm}}\right)^{-5/2}
\nonumber\\
&\times& \left(\frac{M}{60 M_{\odot}}\right)^{4}
\left(\frac{n}{1\,{\rm cm}^{-3}}\right)^{5/2}
\left(\frac{V}{40\,{\rm km}\,{\rm s}^{-1}}\right)^{-9/2}.
\end{eqnarray}
Then the evaporation time of the dead disk with mass $M_{m}$ is
\begin{eqnarray}
t_{\rm eva} &\sim& 10^{6}\,{\rm yr}
\left(\frac{M_{m}}{10^{-5} M_{\odot}}\right)
\left(\frac{r}{10^{12}\,{\rm cm}}\right)^{5/2}
\left(\frac{M}{60 M_{\odot}}\right)^{-4}
\nonumber\\
&\times& \left(\frac{n}{1\,{\rm cm}^{-3}}\right)^{-5/2}
\left(\frac{V}{40\,{\rm km}\,{\rm s}^{-1}}\right)^{9/2},
\end{eqnarray}
which is shorter than $\sim 10^{10}$ yr,
the merger time of the BH binary with a separation $r \sim 10^{12}$ cm,
for the fiducial case.
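The evaporation estimate can be checked step by step. The sketch below is our own cgs calculation; it assumes the Bondi radius convention $r_B = GM/V^2$ and uses the density profile $n(r) \sim n\,(r_B/r)^{3/2}$, the hot-disk thermal velocity $v_h \sim \sqrt{GM/r}$, and the pressure balance quoted above:

```python
import math

G, M_sun, m_u, yr = 6.674e-8, 1.989e33, 1.66e-24, 3.156e7  # cgs

M = 60 * M_sun
n_ism = 1.0            # ISM density, cm^-3
V = 40e5               # BH velocity, cm/s
r = 1e12               # cm, dead-disk radius / binary separation
v_i = 10e5             # cm/s, sound speed of the ionized atmosphere

r_B = G * M / V**2                       # Bondi radius (convention assumed)
n_r = n_ism * (r_B / r) ** 1.5           # hot-disk density at r
v_h = math.sqrt(G * M / r)               # thermal velocity of the hot disk
n_i = n_r * (v_h / v_i) ** 2             # pressure balance n_i v_i^2 ~ n(r) v_h^2

Mdot_eva = 2 * math.pi * r**2 * n_i * v_i * m_u
print(f"Mdot_eva ~ {Mdot_eva:.1e} g/s")  # ~1e15 g/s

M_m = 1e-5 * M_sun                       # dead-disk mass
t_eva = M_m / Mdot_eva / yr
print(f"t_eva ~ {t_eva:.1e} yr")         # ~1e6 yr, well below the ~1e10 yr merger time
```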
One should keep in mind that the above equation is rather sensitive to
parameters $M$, $n$ and $V$.
For example, BH binaries could have a dead disk if they are formed
in a low-density environment.
However, for typical parameters, the merged BH would not have a dead disk,
implying that the GBM event is not related to GW150914 in the dead disk scenario.
We can also make a second argument:
tracing the event backward in time reveals a physical difficulty.
Let us go back in time, say, $t_b \sim 1000$ sec before the merger.
Still the two BHs should be surrounded by the matter.
The size of the matter distribution should be larger than
$r_{m} \sim 10^{10}\,{\rm cm}\, (t_b/10^{3}\,{\rm sec})^{2/3}
(\alpha/0.1)^{2/3}
(M/60 M_{\odot})^{1/3}
(H/r_m/0.3)^{4/3}$,
otherwise the matter is swallowed by the BHs before the merger.
The binding energy of this matter is only
a fraction of the rest mass energy of the matter,
\begin{eqnarray}
\frac{G M M_{m}/r_{m}}{M_m c^2}
&\sim& 10^{-3}
\left(\frac{t_b}{10^{3}\,{\rm sec}}\right)^{-2/3}
\left(\frac{\alpha}{0.1}\right)^{-2/3}
\nonumber\\
&\times& \left(\frac{M}{60 M_{\odot}}\right)^{2/3}
\left(\frac{H/r_m}{0.3}\right)^{-4/3}.
\end{eqnarray}
This ratio is much smaller than the wind efficiency
$\epsilon_w \sim 0.1$ of a super-Eddington accretion disk,
so that such matter is easily blown away by the disk wind.
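The radius $r_m$ follows from inverting the viscous time $t_{\rm acc}$ for $t_b$; the script below is our own numerical check of that inversion and of the resulting binding-energy ratio:

```python
import math

G, M_sun, c = 6.674e-8, 1.989e33, 2.998e10  # cgs

M = 60 * M_sun
alpha, H_over_r = 0.1, 0.3
t_b = 1e3              # s, time before the merger

# Invert t_acc = (1/(alpha*Omega_K)) (r/H)^2 for the radius r_m at which
# the accretion time equals t_b.
r_m = (t_b * alpha * math.sqrt(G * M) * H_over_r**2) ** (2.0 / 3.0)
print(f"r_m ~ {r_m:.1e} cm")          # ~1e10 cm, as quoted

ratio = G * M / (r_m * c**2)          # binding / rest-mass energy
print(f"GM/(r_m c^2) ~ {ratio:.1e}")  # ~1e-3, well below eps_w ~ 0.1
```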
As long as a possible dead disk is ionized by the ISM accretion
(that occurs unless we consider low $n$ and high $V$),
we have encountered an unlikely setup.
Note that a fraction of the matter $M_{\rm m}$ should accrete onto the BHs
before the merger,
otherwise a fine-tuning is needed
because the time $t_b$ is much larger than the event duration $t_{\rm acc}$.
The accretion is super-Eddington,
even if only a fraction of the matter accretes,
and should be accompanied by a strong disk wind
as suggested by numerical simulations \citep{Ohsuga+05,Jiang+14,Sadowski+14}.
Therefore it is difficult to keep matter near the BHs before the merger,
and BH mergers are unlikely to be accompanied by
observable prompt electromagnetic counterparts.
\section{Summary and Discussions}\label{sec:discuss}
We suggest possible connections between
the BH mergers observed by GWs
and the high energy sources of TeV-PeV particles in our Galaxy.
The GW observations give a lower limit on the number of
merged and hence highly-spinning BHs as
$\sim 70000 (\mathscr{R}_{\rm GW}/70\,{\rm Gpc}^{-3}\,{\rm yr}^{-1})$,
and the spinning BHs produce relativistic jets
by accreting matter and magnetic fields from the ISM.
We calculate the luminosity function, the total power, and
the maximum acceleration energy of the BH jets,
and find that
the BH jets are eligible for PeVatrons, sources of CR positrons, and TeV unIDs.
The BH jets form extended nebulae like PWNe.
If they are observed as TeV unIDs,
additional $\sim 300$ nebulae will be discovered by CTA.
We quantify the uncertainties of the estimate for the total power of the BH jets
within a factor of $10^{\pm 3}$,
which is much better than before the GW detections,
by considering the initial BH spin,
the velocity distribution depending on the formation scenario,
the accretion profile changed by the wind,
and the feedback by the outflow (Table~\ref{tab:model}).
The uncertainties will be reduced by the GW observations,
in particular, of the BH spins.
It is also important to clarify
the feedback by the wind from the sub-Eddington accretion disk
on the Bondi-Hoyle accretion.
Our considerations on the BH accretion and jet
imply that the electromagnetic counterparts to BH mergers
including the Fermi GBM event after GW150914 are difficult to detect
with the current sensitivity.
The accretion from the ISM can evaporate
the cold neutral dead disk around the BH.
A slight accretion before the merger can also
blow away the surrounding matter if any.
These should be considered as constraints on dead disk models
for prompt electromagnetic counterparts of the BH-BH merger.
Although we do not go into detail in this paper,
there are several sites of particle acceleration for a BH jet.
First, the BH magnetosphere acts as a particle accelerator like pulsars
if a gap arises with an electric field along the magnetic field
\citep{Hirotani+16,Hirotani_Pu16}.
The gamma-ray emission associated with leptonic acceleration
may be detectable for nearby sources
although its luminosity is usually much smaller than the BZ luminosity.
Second, internal shocks in the jet are possible, as in GRBs and AGNs.
As long as $B \propto \Gamma/r$ during the propagation,
the maximum acceleration energy is the same as Equation~(\ref{eq:emax}).
Third, the jet dissipates the magnetic energy
when the MHD approximation breaks down.
This happens when the plasma density
drops below the Goldreich-Julian density
\citep{Goldreich_Julian69},
which is the minimum density required for shielding the electric field.
The comoving plasma density is given by
$n'_p \sim L/4\pi r^2 m_u c^3 \Gamma^2 (1+\sigma)$
where $L \sigma/(1+\sigma)$ is the BZ luminosity in Equation~(\ref{eq:LBZ}),
$\sigma$ is the ratio of the Poynting to particle energy flux,
$\Gamma$ is the Lorentz factor of the jet,
and we should make an appropriate correction if jets are leptonic.
The comoving Goldreich-Julian density beyond the light cylinder
$r_{\ell}=c/\Omega_H = 2r_H/a_*$ is
$n'_{\rm GJ} \sim (\Omega_H/2\pi q c) (r_H/r_{\ell})^3 (r_{\ell}/r \Gamma)$.
By equating $n'_p$ with $n'_{\rm GJ}$, we obtain
the radius at which the MHD breaks down,
\begin{eqnarray}
r_{\rm MHD} &\sim& \sqrt{\frac{\pi \kappa L}{\sigma (1+\sigma) c}}
\frac{q}{m_u c^2}
\frac{r_H}{a_*^2 \Gamma}
\sim 2 \times 10^{13}\,{\rm cm}
\left[\sigma (1+\sigma)\right]^{-1/2}
\nonumber\\
&\times& a_{*}^{-2} \Gamma^{-1}
\left(\frac{L}{10^{35}\,{\rm erg}\,{\rm s}^{-1}}\right)^{1/2}
\left(\frac{M}{10 M_{\odot}}\right).
\end{eqnarray}
Fourth, the termination (reverse) shock of the jet
at the radius in Equation~(\ref{eq:rhead})
is also a plausible site
like a hot spot of AGNs and a pulsar wind nebula for pulsars.
The jet could be subject to instability,
injecting energy into a cocoon/lobe before reaching the termination shock.
The shocks between the cocoon and the ISM are also possible sites
of particle acceleration.
Note that the BH Cygnus X-1 is surrounded by a ring-like structure in radio,
which may be formed by the interaction
between a jet/cocoon and the ISM \citep{Gallo+05}.
We do not discuss the disk emission in detail.
Nearby BH disks with bremsstrahlung, synchrotron, and inverse Compton emission
could be detected in the future surveys \citep{Matsumoto+16}.
The accretion disks could also accelerate nonthermal particles
and contribute to the observed cosmic rays \citep{Teraki+16}.
An on-axis BH jet may also be observable
if the beaming factor is larger than $\sim 0.01$.
These are interesting future problems.
\section*{Acknowledgements}
We thank Takashi Nakamura, Tsvi Piran, and Masaru Shibata for helpful comments.
This work is supported by
KAKENHI 24103006, 24000004, 26247042, 26287051 (K.I.).
\bibliographystyle{mnras}
\section{Introduction}
Let $X$ and $Y$ be Banach spaces and $T:X \to Y$ be an operator. We say $T$ is compact if and only if it maps the closed unit ball $B_X$ of $X$ into a pre-compact subset of $Y$. In other words, $T$ is compact if and only if for every norm bounded sequence $\{x_n\}$ of $X$, the sequence $\{Tx_n\}$ has a norm convergent subsequence in $Y$. Equivalently, $T$ is compact if and only if for every $\epsilon >0$, there exist elements $y_1,y_2, \dots,y_n \in Y$ such that $$ T(B_{X}) \subseteq \bigcup_{k=1}^{n} \{y_k+\epsilon B_Y\}$$ where $B_X$ and $B_Y$ denote the closed unit balls of $X$ and $Y$ respectively. Every compact linear operator is bounded, hence continuous, but clearly not every bounded linear map is compact: one can take the identity operator on an infinite-dimensional space $X$. Compact operators are natural generalizations of finite rank operators, and thus dealing with compact operators provides us with the closest analogy to the usual theorems of finite-dimensional spaces. Recall that $\mathcal{L}(X,Y)$ denotes the normed vector space of all continuous operators from $X$ to $Y$, $\mathcal{L}(X)$ stands for $\mathcal{L}(X,X)$, and $\mathcal{K}(X,Y)$ is the collection of all compact operators from $X$ to $Y$. It is well known that if $Y$ is a Hilbert space then any compact $T:X \to Y$ is a limit of finite rank operators; in other words, if $\mathcal{F}(X,Y)$ denotes the class of finite rank maps, then
$$ \mathcal{K}(X,Y)= \overline{\mathcal{F}(X,Y)}$$ where the closure is taken in the operator norm. However, the situation is quite different for general Banach spaces: not every compact operator between Banach spaces is a uniform limit of finite rank maps. For further information we refer the reader to a well known example due to P. Enflo \cite{Enflo}, in which Enflo constructs a Banach space without the approximation property. The following classical results on compact operators will be used in our discussion later.
\begin{theorem}
For Banach spaces $X$, $Y$ and $Z$, we have the following:
\begin{enumerate}
\item $\mathcal{K}(X,Y)$ is a norm closed vector subspace of $\mathcal{L}(X,Y)$ .
\item If $
X\xrightarrow[]{~~~S~~~}Y\xrightarrow[]{~~~T~~~}Z
$ are continuous operators and either $S$ or $T$ is compact, then $TS$ is likewise compact.
\end{enumerate}
\end{theorem}
If one considers the continuous operators on a Banach space $X$, the above theorem asserts that the compact operators on $X$ form a two-sided ideal in $\mathcal{L}(X)$. The following theorem of Schauder simply states that an operator is compact if and only if its adjoint is compact.
\begin{theorem} [Schauder]
A norm bounded operator $T: X \to Y$ between Banach spaces is compact if and only if its adjoint $T^*: Y^* \to X^*$ is compact.
\end{theorem}
The main idea in proving Schauder's theorem lies in the fact that $$ ||P_n T -T|| \to 0 \quad \mbox{implies} \quad || T^* P_n -T^*|| \to 0 $$ where $P_n : X \to \mbox{span} \{e_1,\dots, e_n\}$. A well known proof of Schauder's theorem may be found in Yosida [\cite{Yos}, p.282].
For our discussion below we also need the following characterization of the compact sets in a Banach space; in some sense, it is a comment on the smallness of compact sets.
\begin{theorem}[Grothendieck] \label{thm:Groth}
A subset of a Banach space is compact if and only if it is included in the closed convex hull of a sequence that converges in norm to zero.
\end{theorem} In other words, if $K$ is a compact subset of a Banach space $X$, then we can find a sequence $\{x_n\}$ in $X$ such that $$ ||x_n|| \to 0 \quad \mbox{and} \quad K \subseteq \overline{\mbox{co}}\{x_n\}.$$ For a proof we refer the reader to [\cite{Dis}, p.~3].
\section{ Terz\.{i}o\u{g}lu's Theorem}
\begin{theorem}[Terzio\u{g}lu \cite{Ter1}] \label{thm:Tosun}
An operator $T:X \to Y$ between two Banach spaces is compact if and only if there exists a sequence $\{u_n\}$ of linear functionals in $X^*$ with $|| u_n || \to 0$ such that the inequality
$$ ||Tx|| \leq \sup_{n} \left | < u_n, x> \right | $$ holds for every $x\in X$.
\end{theorem}
\begin{proof}
Suppose $T: X \to Y$ is compact; then by Schauder's theorem $T^*: Y^* \to X^*$ is compact. Thus, by definition, if $V$ denotes the closed unit ball of $Y^*$, then $T^*(V)$ is a norm totally bounded subset of $X^*$. Now applying Grothendieck's result, we have a sequence $\{u_n\}$ of elements of $X^*$ with $||u_n|| \to 0$ and $T^* (V) \subseteq \overline{\mbox{co}\{u_n\}}$. In other words, each element of $T^*(V)$ can be written in the form
$$ \sum_{n=1}^{\infty} \alpha_n u_n \quad \mbox{with}\quad \sum_{n=1}^{\infty} |\alpha_n | \leq 1.$$ Thus, for each $x\in X$ we have
$$||Tx|| =\sup_{||v|| \leq 1} \left| <T^*v, x>\right| \leq \sum_{n=1}^{\infty} |\alpha_n| \sup _{n} | < u_n, x>|.$$
Conversely, suppose $T$ satisfies the inequality $ ||Tx|| \leq \sup_{n} \left | < u_n, x> \right | $ for some sequence $\{u_n\} \subset X^* $ with $||u_n|| \to 0$. For $\epsilon >0$ choose $N$ such that $||u_n|| < \epsilon$ for $n> N$ and set $$M_{\epsilon} = \{ x\in X:\,\, < u_i,x> =0 \quad \mbox{for}\,\, i=1,2,\dots N \}; $$ then one obtains
$$T^* (\mathring{V}) \subset \epsilon \mathring{U} + M_{\epsilon}^{\bot}$$ where $U$ denotes the unit ball of $X$ and for each linear subspace $M$ of $X$, the polar of $M$ denoted by $\mathring{M}$ is a linear subspace of $X^*$ defined as:
$$ \mathring{M}:= \left\{ a\in X^*:\quad |<x,a>|\leq 1 \quad \mbox{for}\quad x\in M\right\},$$
this shows that $T^*$ is compact and hence $T$ is compact.
\end{proof}
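As a concrete illustration of the criterion, consider the diagonal operator $Tx = (x_k/k)$ on $c_0$ with the functionals $u_k = k^{-1} e_k^*$; this is our own toy example, not one from \cite{Ter1}. On a finite truncation:

```python
import numpy as np

# Diagonal operator T: c_0 -> c_0, (Tx)_k = x_k / k.  The functionals
# u_k = (1/k) e_k* have ||u_k|| = 1/k -> 0, and by construction
# ||Tx||_inf = sup_k |<u_k, x>|, so Terzioglu's criterion holds.
N = 200                        # finite truncation of the sequence space
w = 1.0 / np.arange(1, N + 1)  # diagonal entries = functional norms

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=N)        # a vector in the unit ball of c_0
Tx_norm = np.max(np.abs(w * x))       # ||Tx||_inf
sup_pairing = np.max(np.abs(w * x))   # sup_k |<u_k, x>|, equal here by design
assert Tx_norm <= sup_pairing

# Compactness shows up as finite-rank approximation: truncating T to the
# first n coordinates leaves operator-norm error sup_{k>n} 1/k = 1/(n+1) -> 0.
errors = [w[n:].max() for n in range(N)]
assert all(abs(errors[n] - 1.0 / (n + 1)) < 1e-15 for n in range(N))
```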
An application of Theorem \ref{thm:Tosun} yields that every compact mapping of a Banach space into a $\mathcal{P}_{\lambda}$-space is $\infty$-nuclear.
\begin{definition}
We say $X$ is a $\mathcal{P}_{\lambda}$-space ($\lambda \geq 1$) if for every bounded linear operator $T$ from a Banach space $Y$ to $X$ and every Banach space $Z \supset Y$, there is a linear extension $\tilde {T}: Z \to X$ of $T$ with $$||\tilde{T} ||\leq \lambda ||T||.$$
\end{definition}
As illustrated in the following diagram:
\[
\begin{tikzcd}
&Z \arrow{rd}{\overset{\sim}{T}}\\
&Y \arrow[hook]{u} \arrow{r}{}[swap]{T}
&X
\end{tikzcd}
\]
If $||\tilde{T} ||= ||T||$ in the above definition, we call $X$ extendible. This property is related to the existence of a global Hahn--Banach type extension theorem. J. Lindenstrauss \cite{Lin1} examines the problem of when the extension $\tilde{T}$ can be chosen compact if $T$ itself is compact; his results are diverse and numerous and touch upon many related topics.
Next, we define \textit{infinite nuclear} mappings, this concept was first introduced in \cite{Pit2}.
\begin{definition}
Let $X $ and $Y$ be Banach spaces and $T:X \to Y $ a linear operator. Then $T $ is said to be infinite-nuclear, if there are sequences $\{u_n\} \subset X^*$ and $\{y_n\} \subset Y$ such that $\displaystyle \lim_{n} ||u_n|| = 0$,
$$\displaystyle \sup_{||v|| \leq 1}\left \{\displaystyle \sum_{n=1}^{\infty} |v(y_n)|:\,\, v \in Y^*\right \} < +\infty$$ and $$T x = \displaystyle \sum_{n=1}^{\infty}< u_n, x>y_n $$
for $ x \in X$.
\end{definition}
As an application to Terzio\u{g}lu's Theorem, under the condition that $T: X \to Y$ where $Y$ is a $\mathcal{P}_{\lambda}$-space, Terzio\u{g}lu also obtains a precise expression for $Tx$, which we state in the following:
\begin{theorem}[\cite{Ter1}]
Let $T$ be a compact mapping of a Banach space $X$ into a $\mathcal{P}_{\lambda}$ space $Y$. Then for every $\epsilon>0$ there exist a sequence $\{u_n\}$ in $ X^*$ with $$\displaystyle\lim_n ||u_n|| =0 \quad \mbox{ and}\quad \displaystyle \sup_{n} ||u_n|| \leq ||T|| + \epsilon$$ and a sequence $\{y_n\}$ in $Y$ with $\displaystyle \sup_{||v|| \leq 1} \displaystyle \sum_{n=1}^{\infty} |< v, y_n> | \leq \lambda $ such that $T$ has the form
$$ Tx= \sum_{n=1}^{\infty} <u_n, x> y_n.$$
\end{theorem}
The complete details of the proof can be found in \cite{Ter1}. However, it is worth pointing out that the idea of the proof provides a factorization of a compact map through the space $c_0$, as follows:\\[.02in]
Use Theorem \ref{thm:Tosun}, choose the sequence $\{u_n\}$ in $ X^*$ satisfying
$$\displaystyle\lim_n ||u_n|| =0\,\,\,\mbox{ and}\,\,\, \displaystyle \sup_{n} ||u_n|| \leq ||T|| + \epsilon\,\,\,\mbox{and}\,\, ||Tx|| \leq \sup|<u_n, x>|.$$
Define a linear mapping $$S: X \to c_0 \quad \mbox{ by} \quad Sx=\{<u_n,x>\}, $$ and observe that $S$ is compact; then define a linear mapping $$R_0: S(X) \to Y\,\,\mbox{by}\,\, R_0(Sx)=Tx;$$ the inequality
$$ ||R_0(Sx)||=||Tx|| \leq \sup|<u_n, x>|= ||Sx||$$ implies that $||R_0|| \leq 1$.
\[
\begin{tikzcd}
&c_0 \arrow{rd}{\overset{\sim} {R}}\\
&S(X) \arrow[hook]{u} \arrow{r}{}[swap]{R_0}
&Y
\end{tikzcd}
\]
Since $Y$ is a $\mathcal{P}_{\lambda}$-space, there exists an extension $\widetilde{R}: c_0\to Y$ of $R_0$ with $||\widetilde{R}|| \leq \lambda ||R_0||=\lambda$ and
$$
X\xrightarrow[]{~~~S~~~}c_0\xrightarrow[]{~~~\widetilde R~~~}Y;
$$
evidently $T=\widetilde{R} S$.
By considering the unit vectors $\{e_n\}$ of $c_0$ and setting $y_n =\widetilde{R}(e_n)$ we obtain
$$ \displaystyle \sup_{||v|| \leq 1} \displaystyle \sum_{n=1}^{\infty} |< v, y_n> | \leq \lambda,\,\,\mbox{and}\,\, Tx= \sum_{n=1}^{\infty} <u_n, x> y_n.$$
Using all of the above results of Terzio\u{g}lu one can find the following conclusions in \cite{Ter2}.
\begin{corollary}
\begin{enumerate}
\item Every $\mathcal{P}_{\lambda}$ space has the approximation property.
\item Every compact linear operator of an $L^{\infty}$ space into a Banach space is infinite-nuclear.
\item Let $T$ be a compact linear map of an infinite-dimensional space $X$ into a Banach space $Y$. Then there exists an infinite dimensional closed subspace $M$ of $X$ such that
$T_{M}: M \to T(M)$ is infinite nuclear.
\end{enumerate}
\end{corollary}
\section {Compactness with Approximation Scheme}
Approximation schemes were introduced for Banach spaces by Butzer and Scherer in 1968 \cite{But} and later by Brudnij and Krugljak \cite{BK}. These concepts find their best application in a paper by Pietsch \cite{Pi}, where he defined approximation spaces, proved embedding, reiteration and representation results, and established connections to interpolation spaces.
Let $X$ be a Banach space and $\{A_n\}$ be a sequence of subsets of $X$ satisfying:
\begin{enumerate}
\item $A_1 \subseteq A_2 \subseteq \dots \subseteq X$
\item $\lambda A_n \subseteq A_n$ for all scalars $\lambda$ and $n=1,2,\dots$.
\item $A_m + A_n \subseteq A_{m+n}$ for $m,n= 1,2,\dots$.
\end{enumerate}
For example, for $1\leq p < \infty$, if we consider the space $X= L_p[0,1]$, then the collection of sets $\{A_n\}= \{L_{p+ \frac{1}{n}}\}$ forms an approximation scheme as above. \\ Pietsch's approximation spaces $ X^{\rho}_{\mu}$ ($0< \rho< \infty, \,\, 0< \mu \leq \infty $) are defined by considering the $n$-th approximation number $\alpha_n(f,X)$, where
$$ \alpha_n(f,X): = \inf \{ ||f-a|| : \,\, a\in A_{n-1}\}$$ and
$$ X^{\rho}_{\mu}=\{ f\in X: \,\,\{n^{\rho-\frac{1} {\mu} } \, \alpha_n(f,X)\}\in \ell^{\mu} \}.$$
In the same paper \cite{Pi}, embeddings, composition and commutation as well as representation and interpolation of such spaces are studied, and applications to the distribution of Fourier coefficients and eigenvalues of integral operators are given.
In the following we consider for each $n\in \mathbb{N}$ a family of subsets $Q_n$ of $X$ satisfying the very same three conditions stated above. For example, $Q_n$ could be the set of all at most $n$-dimensional subspaces of a Banach space $X$; or, if our Banach space is $X= \mathcal{L}(E)$, the set of all bounded linear operators on another Banach space $E$, then we can take $Q_n= N_n(E)$, the set of all $n$-nuclear maps on $E$.
Compactness relative to an approximation scheme for bounded sets and linear operators can be studied by using Kolmogorov diameters as follows.
Let $D \subset X$ be a bounded subset and $U_X$ denote the closed unit ball of $X$. Let $Q=(Q_n(X))_{n\in \mathbb{N}}$ be an approximation scheme on $X$; then the $n$th Kolmogorov diameter of $D$ with respect to the scheme $Q$ is denoted by $\delta_n(D, Q)$ and defined as
$$ \delta_n(D, Q)= \inf\{r>0 : \,\, D \subset rU_X +A\quad \mbox{for some} \quad A \in Q_n(X)\}.$$
Let $Y$ be another Banach space and $T\in \mathcal{L}(Y,X)$, then the $n$th Kolmogorov diameter of $T$ with respect to this scheme $Q$ is denoted by $\delta_n(T, Q)$ and defined as
$$ \delta_n(T, Q)= \delta_n(T(U_X), Q) .$$
\begin{definition}
We say $D$ is $Q$-compact set if $$ \lim_{n}\delta_n(D, Q)=0$$ and similarly $T\in \mathcal{L}(Y,X)$ is a $Q$-compact map, if $$ \lim_{n} \delta_n(T, Q)=0.$$
\end{definition}
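A minimal finite-dimensional sketch of how the diameters certify $Q$-compactness; this is our own toy example (a diagonal operator, with coordinate subspaces as the scheme), not one from the text:

```python
import numpy as np

# Take T x = (x_k / k) on the sup-norm unit ball and let Q_n consist of
# the subspaces span{e_1, ..., e_n}.  The distance from Tx to that span
# in the sup norm is max_{k>n} |x_k|/k, maximised over the ball at
# x_k = 1, so delta_n(T, Q) = 1/(n+1) -> 0: T is Q-compact.
N = 50                              # truncation dimension
w = 1.0 / np.arange(1, N + 1)

def delta(n):
    """Kolmogorov diameter of T(U) w.r.t. span{e_1,...,e_n}."""
    return w[n:].max()

diams = np.array([delta(n) for n in range(N)])
assert np.allclose(diams, 1.0 / np.arange(1, N + 1))
assert diams[-1] == 1.0 / N         # decreasing to zero
```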
The following example illustrates that not every $Q$-compact operator is compact.
\begin{example}
Let $\{r_n(t)\}$ be the sequence of Rademacher functions. It can be seen from the Khinchin inequality \cite{Lin} that
\begin{equation}
\ell^2 \approx \{r_n(t)\}\subset L_p[0,1] \text{ for all }1\leq p \leq \infty.
\end{equation}
We define an approximation scheme $A_n$ on $L_p[0,1]$ as follows:
\begin{equation}
A_n=L_{p+\frac{1}{n}}.
\end{equation}
Since $L_{p+\frac{1}{n}}\subset L_{p+\frac{1}{n+1}}$, we have $A_n\subset A_{n+1}$ for $n=1,2,\dots$; it is easily seen that $A_n+A_m \subset A_{n+m}$ for $n,m=1,2,\dots$ and that $\lambda A_n \subset A_n$ for all $\lambda$. Thus $\{A_n\}$ is an approximation scheme.
Next, we claim that for $p\geq 2$ the projection $P: L_p[0,1] \to R_p$ is a $Q$-compact map, but not compact,
where $R_p$ denotes the closure of the span of $\{r_n(t)\}$ in $L_p[0,1]$.
\[
\begin{array}{ccc}
L_p & \stackrel{i}{\longrightarrow} & L_2 \\
P \downarrow & & \downarrow P_2 \\
R_p & \stackrel{j}{\longleftarrow}& R_2 \\
\end{array}
\]
We know that for $p\geq 2$, $L_p[0,1]\subset L_2[0,1]$, that $R_2$ is a closed subspace of $L_2[0,1]$, and that $$P=j\circ P_2\circ i$$ where $i,j$ are the isomorphisms shown in the above figure. $P$ is not a compact operator because dim$R_p=\infty$; on the other hand, it is a $Q$-compact operator because, if we let $U_{R_p},U_{L_p}$ denote the closed unit balls of $R_p$ and $L_p$ respectively, it is easily seen that $P(U_{L_p})\subset \|P\|U_{R_p}$. Moreover, $U_{R_p} \subset CU_{R_{p+\frac{1}{n}}}$ for a constant $C$, which follows from the Khinchin inequality. Therefore, $$P(U_{L_p})\subset L_{p+\frac{1}{n}},\quad\mbox{ which gives} \quad \delta_n(P,Q)\to 0.$$
\end{example}
Next we give a characterization of $Q$-compact sets as subsets of the closed convex hull of certain uniform null-sequences.
\begin{definition}
Suppose $X$ is a Banach space with an approximation scheme $Q_n$. A family $\{x_{n,k}\}$ in $X$ is called an order $c_0$-sequence if
\begin{enumerate}
\item for every $n=1,2,\dots$ there exists $A_n \in Q_n$ with $\{x_{n,k}\}_k \subset A_n$;
\item $\|x_{n,k}\| \to 0$ as $n\to \infty$ uniformly in $k$.
\end{enumerate}
\end{definition}
\begin{theorem} \label{thm:order}
Let $X$ be a Banach space with an approximation scheme whose sets $A_n \in Q_n$
satisfy the condition $ |\lambda | A_n \subset A_n$ for $|\lambda| \leq 1$. A bounded subset $D$ of $X$ is $Q$-compact if and only if there is an order $c_0$-sequence $\{x_{n,k}\}$ with $\{x_{n,k}\}_k \subset A_n$ such that
$$D \subset\left \{ \sum_{n=1}^{\infty} \lambda_n x_{n,k(n)} :\quad x_{n,k(n)}\in \{x_{n,k}\}_k,\quad \sum_{n=1}^{\infty} |\lambda_n| \leq 1 \right \}.$$
\end{theorem}
A proof of the above theorem can be obtained from the one given for $p$-Banach spaces in \cite{Ak-Nk}. Clearly, this is an analogue for $Q$-compact sets of Grothendieck's theorem stated above in Theorem \ref{thm:Groth}.
\section{Terzio\u{g}lu's Theorem for $Q$-compact Maps}
Terzio\u{g}lu's characterization of compact maps relies on both Grothendieck's and Schauder's theorems. Theorem \ref{thm:order} above is Grothendieck's theorem for $Q$-compact sets, so we turn our attention to the relationship between $T$ being $Q$-compact and its transpose $T^*$ being $Q$-compact. The relationship between the approximation numbers of $T$ and $T^*$ has been studied by several authors; it is shown in \cite{Hut} that for $T \in \mathcal{L}(X)$ we have
$$ \mbox{dist}(T, \mathcal{F}) \leq 3\, \mbox{dist}(T^*, \mathcal{F^*})$$
where $ \mathcal{F}$ and $ \mathcal{F^*}$ denote the classes of all finite rank operators on $X$ and $X^*$, respectively. Central to the proof of such a result is the principle of local reflexivity, possessed by all Banach spaces (see \cite{Lin}). It is not hard to show that if we assume that our space $X$ with approximation scheme $Q_n$ satisfies a slight modification of this property, called the extended local reflexivity principle, then we have
$$ \alpha_n(T, Q) \leq 3\, \alpha_n(T^*, Q^*),$$ where by $\alpha_n(T, Q)$ we mean the $n$th approximation number, defined as
$$ \alpha_n(T, Q) = \inf \left\{\|T-B\|: \,\, B\in \mathcal{L}(X),\,\, B(X) \in Q_n (X)\right\}.$$
However, we do not have a proof of Schauder's theorem for $Q$-compact maps. In the following we present a result analogous to Terzio\u{g}lu's theorem for $Q$-compact maps, under the assumption that both $T$ and $T^*$ are $Q$-compact.
\begin{theorem}
Let $E$ and $F$ be Banach spaces, let $T\in \mathcal{L}(E,F)$, and assume that both $T$ and $T^*$ are $Q$-compact maps. Then there exists a family $\{u_{n,k}\}$ with $\{u_{n,k}\}_k\subset A_n\in Q_n$ and $\|u_{n,k}\| \to 0$ as $n\to \infty$ for all $k$, such that the inequality
$$ \|Tx\| \leq \sup_n | \langle u_{n,k(n)}, x\rangle|$$ holds for every $x\in E$. Here $Q_n$ is a ``special'' class of subsets of $E^*$ with the property that $u_{n,k(n)}\in \{u_{n,k}\}$.
\end{theorem}
\begin{proof}
Since $T^*: F^* \to E^*$ is $Q$-compact, $T^*(U_{F^*})$ is a $Q$-compact set; hence, by Theorem \ref{thm:order}, there exists a sequence $\{u_{n,k}\}_k \subset A_n\in Q_n$ such that $\| u_{n,k}\| \to 0$ as $n\to \infty$ uniformly in $k$ and
$$T^*(U_{F^*}) \subset\left \{ \sum_{n=1}^{\infty} \lambda_n u_{n,k(n)} :\quad u_{n,k(n)}\in (u_{n,k})\quad \sum_{n=1}^{\infty} |\lambda_n| \leq 1 \right \}.$$
Then, for each $x\in E$, we have
$$ \| Tx\| =\displaystyle \sup _{v \in U_{F^*} } |\langle v, Tx\rangle | = \sup _{v \in U_{F^*} } |\langle T^*v, x\rangle | \leq \sup \Big|\Big\langle \sum_{n=1}^{\infty}\lambda_n u_{n,k(n)}, x \Big\rangle\Big|,$$
where the last supremum is taken over all admissible choices $u_{n,k(n)}\in \{u_{n,k}\}_k$ and coefficients with $\sum_{n=1}^{\infty}|\lambda_n|\leq 1$, and thus
$$ \| Tx\| \leq \sum_{n=1}^{\infty} |\lambda_n| \,\sup_{n} |\langle u_{n,k(n)}, x\rangle| \leq \,\sup_{n} | \langle u_{n,k(n)}, x\rangle|.$$
\end{proof}
\begin{remark}
We say a map $T\in \mathcal{L}(E,F)$ is $Q$-compact if $ \lim_{n} \delta_n(T, Q)=0$. To obtain a ``Schauder theorem'' for $Q$-compact maps, one seeks a relationship between $\delta_n(T)$ and $\delta_n(T^*)$. K.~Astala \cite{As} proved that if the Banach space $E$ has the lifting property and the Banach space $F$ has the extension property, then $\gamma(T) =\gamma (T^*)$ for every $T\in \mathcal{L}(E,F)$, where $\gamma(T)$ denotes the measure of non-compactness. Since
$$ \lim_{n\to \infty}\delta_n(T) = \gamma(T),$$ by imposing the lifting and extension properties on $E$ and $F$, respectively, and keeping track of the approximation schemes on these spaces, one might obtain a Schauder-type theorem in this special case.
\end{remark}
\end{example}
\bibliographystyle{amsplain}
\section{Introduction}
A \emph{histogram} computed on a dataset $D$ is a vector of counts, such that each record in $D$ affects at most a single histogram element, called \emph{bin}. The histogram constitutes one of the most basic statistical tools for describing the dataset distribution. A \emph{range-sum} query over a histogram returns the sum of values of a set of \emph{contiguous} bins. Our goal is to publish a histogram on $D$ that satisfies \emph{$\epsilon$-differential privacy} \cite{dwork11}. This paradigm entails perturbing the bins prior to their publication, so that each individual record in $D$ is protected. We aim at minimizing the total error incurred by the perturbation when answering \emph{arbitrary} (i.e., not known a priori) range-sum queries. For example, consider a database $D$ with medical records, and a histogram on $D$ where each bin contains the number of patients of a certain age. Assuming bins sorted on age, a range-sum query over this histogram returns the number of patients in the range. In this scenario, it is important that the published histogram does not violate the privacy of any individual patient; at the same time the range-sum results should be accurate.
Differentially private histogram publication has been studied extensively. The existing schemes can be divided into two categories. ``Data-aware'' methods exploit the underlying dataset distribution \cite{acs12,xu12,zhang14,li14}. They \emph{smooth} the histogram prior to its perturbation by grouping \textit{similar} bins and replacing their values with the group \emph{average}. This yields reduced perturbation per bin. However, these mechanisms exhibit superquadratic time complexities, which may be prohibitive in time-critical applications. ``Data-oblivious'' methods build a perturbed \emph{aggregate tree} on top of the histogram, and answer range-sum queries by summing a small number of tree nodes, instead of numerous individual bins falling in the range \cite{cormode12,hay10,qardaji13,xiao11}. Such approaches are very efficient, running in time linear to the number of histogram bins, but may yield low utility for some practical datasets.
To the best of our knowledge, this is the first work that aims at efficiency, without compromising utility. Towards this goal, we address the problem in a principled, \emph{modular} approach. Specifically, we first identify three building blocks, which we call \emph{modules}, namely \emph{Smoothing}, \emph{Hierarchy Level}, and \emph{Fixed Queries}; the first is inspired by the data-aware techniques, the second works on a single level in the aggregate tree, and the third is based on techniques such as the matrix mechanism \cite{li10}, which has been applied to increase utility for fixed query workloads \cite{li14}. We then formulate a \emph{scheme} as a combination of these modules, integrated with certain components, called \emph{connectors}. The latter do not affect privacy, but serve to properly format the inputs of the modules. Subsequently, we express the existing state-of-the-art methods in our modular framework, and discover opportunities for optimization. Finally, we devise novel efficient and effective schemes by composing the modules non-trivially. Concretely, our contributions are:
\begin{itemize}
\vspace{-0.1cm}
\item We introduce a modular framework on differentially private histograms for range-sum queries. Our approach offers multiple benefits: (i)~using a small set of simple modules and connectors, we can analyze all existing methods, and devise novel schemes with variable efficiency and utility, (ii)~given the privacy level of each module, we can easily derive the privacy of arbitrarily complex schemes, and (iii)~each module can be optimized separately; furthermore, potential future optimizations can be incorporated to an existing scheme with minimal effort.
\vspace{-0.06cm}
\item We analyze an important submodule of the Smoothing module, namely the grouping method, which essentially solves an optimization problem. We point out the two objective functions most heavily used in the literature, propose optimizations and evaluate their effect on utility.
\vspace{-0.06cm}
\item We design novel schemes based on the defined modules, including the first mechanism that seamlessly combines the data-aware and -oblivious methodologies. In addition, we efficiently adapt schemes for fixed query workloads (e.g., the matrix mechanism) to arbitrary range-sums, via a simple but powerful technique based on \emph{prefix sums} \cite{ho97}.
\vspace{-0.06cm}
\item We provide a thorough experimental evaluation that compares the best existing methods with our new solutions, testing over three real datasets with different characteristics. We exhibit that there is a trade-off between algorithmic efficiency and utility across the various approaches.
\vspace{-0.1cm}
\end{itemize}
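The prefix-sum adaptation mentioned in the third bullet can be sketched as follows. This is a minimal illustration with hypothetical function names; in the actual scheme, the $n$ prefix sums would be the privately released answers of a fixed workload, so any arbitrary range-sum aggregates the noise of only two released values.

```python
import itertools

def prefix_sums(h):
    # P[k] = h[0] + ... + h[k-1], with P[0] = 0
    return list(itertools.accumulate(h, initial=0))

def range_sum_from_prefixes(P, i, j):
    # sum of bins i..j (0-indexed, inclusive) using only two prefix values
    return P[j + 1] - P[i]

h = [3, 1, 4, 1, 5, 9, 2, 6]
P = prefix_sums(h)
```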
The remainder of the paper is organized as follows. Section \ref{sec:background} includes preliminary information and surveys the related work. Section \ref{sec:modular} introduces our modular framework. Section \ref{sec:grouping} investigates grouping in depth. Section \ref{sec:schemes} presents our proposed mechanisms. Section \ref{sec:experiments} experimentally evaluates all schemes, whereas Section \ref{sec:conclusion} concludes our work.
\section{Background}\label{sec:background}
Section \ref{subsec:setting} formulates our setting, and includes the necessary primitives on differential privacy. Section \ref{subsec:related} surveys differentially private histogram publication.
\subsection{Setting and Primitives}\label{subsec:setting}
Let $\mathcal{D}$ be a collection of datasets. We define a family of functions $\mathcal{F} = \{F_j: \mathcal{D} \rightarrow \mathcal{H}\}$, such that for all $j$ and all $D \in \mathcal{D}$, $F_j(D) = \mathbf{h} \in \mathcal{H}$ is an (ordered) vector called \emph{histogram}. An element of $\mathbf{h}$ is termed \emph{bin} and consists of a value and a label, where $\mathbf{h}[i]$ represents the $i^\textrm{th}$ bin value of $\mathbf{h}$. All histograms have the property that any record in $D$ increments \emph{at most a single} $\mathbf{h}[i]$ by $1$. Finally, we call $F_j$ a \emph{histogram algorithm}. For instance, let $D\in \mathcal{D}$ be a dataset of medical records. Then, $F_1 \in \mathcal{F}$ may produce histogram $\mathbf{h}_1$ such that $\mathbf{h}_1[i]$ is the number of patients in $D$ having age $i$, and $F_2 \in \mathcal{F}$ may produce histogram $\mathbf{h}_2$ such that $\mathbf{h}_2[i]$ is the number of patients in the hospital with id $i$. Observe that the presence of a patient in $D$ increments at most one bin by $1$ in both histograms.
Our goal is to publish a $n$-element histogram $\mathbf{h}$ produced by some \emph{fixed} algorithm $F$ on a $D \in \mathcal{D}$, while satisfying $\epsilon$-\emph{differential privacy} and allowing \emph{arbitrary range-sum queries} on its bins with high utility. Specifically, we define a range-sum query as a range of bins $[i_l, i_u]$, $1 \leq i_l \leq i_u \leq n$, which returns the sum $\sum_{i=i_l}^{i_u} \mathbf{h}[i]$. In our example above, a range-sum query on $\mathbf{h}_1$ could be $[10,20]$, asking for the number of patients between $10$ and $20$ years old. We assume that the range queries are \textit{not known} prior to the publication of the histogram.
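A minimal sketch of these definitions (our illustrative code, using 0-indexed bins rather than the paper's 1-indexed convention):

```python
from collections import Counter

def build_histogram(records, n):
    # h[i] = number of records with value i; each record
    # increments at most one bin by 1
    counts = Counter(records)
    return [counts.get(i, 0) for i in range(n)]

def range_sum(h, i_l, i_u):
    # sum of the contiguous bins i_l..i_u (inclusive)
    return sum(h[i_l:i_u + 1])

ages = [12, 15, 15, 20, 20, 20, 34]
h1 = build_histogram(ages, 100)   # h1[i]: patients of age i
```

For example, `range_sum(h1, 10, 20)` counts the patients between 10 and 20 years old.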
To achieve $\epsilon$-differential privacy, we apply a mechanism $M$ on the histogram, which perturbs it in a way that satisfies the following definition, adapted from \cite{dwork06orig}.
\begin{definition}\label{def:epsilon}
A mechanism $M:\mathcal{H}\rightarrow \mathcal{\hat{H}}$ satisfies $\boldsymbol{\epsilon}$\textbf{\emph{-differential privacy}} for a histogram algorithm $F \in \mathcal{F}$, if for all sets $\hat{H} \subseteq \mathcal{\hat{H}}$, and every pair $D,D' \in \mathcal{\mathcal{D}}$ where $D'$ is obtained from $D$ by removing a record ($D,D'$ are called neighboring), it holds that
$$
\mathrm{Pr}[M(F(D))\in \hat{H}] \leq e^\epsilon \cdot \mathrm{Pr}[M(F(D')) \in \hat{H}]
$$
\end{definition}
Intuitively, $\epsilon$-differential privacy guarantees that the distribution of the published output remains essentially unchanged (to a degree tunable by $\epsilon$), regardless of whether a patient agrees to participate in the publication or not. Equivalently, the sensitive information of any individual patient cannot be inferred from the published data.
\begin{definition}\label{def:sens}
The \emph{\textbf{sensitivity}} of any histogram algorithm $F \in \mathcal{F}$ is $\Delta(F) =\max\left\|F(D)-F(D')\right\|=1$, where the maximum is taken over all neighboring $D,D' \in \mathcal{D}$.
\end{definition}
In other words, the sensitivity of $F$ represents how much the histogram $F(D)$ changes when a record is deleted from $D$. Since any record contributes $1$ to at most a single bin, the sensitivity is $1$ for any histogram algorithm $F \in \mathcal{F}$.
The most basic technique to achieve $\epsilon$-differential privacy is to add Laplace noise to the histogram bins using the Laplace Perturbation Algorithm (${\sf LPA}$ \cite{dwork06,dwork11}). Let $\mathit{Lap}(\lambda)$ be a random variable drawn from a Laplace distribution with mean zero and scale parameter $\lambda$. ${\sf LPA}$ achieves $\epsilon$-differential privacy through the mechanism outlined in the following theorem, adapted from \cite{dwork06}.
\begin{theorem}\label{theo:laplace_mech} Let $F \in \mathcal{F}$ and define $\mathbf{h} \stackrel{\textrm{\emph{def}}}{=} F(D)$. A mechanism $\mathcal{M}$ that adds independently generated noise from a zero-mean Laplace distribution with scale parameter $\lambda=\Delta(F)/\epsilon=1/\epsilon$ to each of the values of $\mathbf{h}$, i.e., which produces transcript $\mathbf{\hat{h}} = \mathbf{h} + \langle \mathit{Lap}(1/\epsilon) \rangle^n$,
enjoys $\epsilon$-differential privacy.
\end{theorem}
With ${\sf LPA}$, a range-sum query $[i_l, i_u]$ is processed on the noisy $\mathbf{\hat{h}}$ and returns $\sum_{i=i_l}^{i_u} \mathbf{\hat{h}}[i]$. The Laplace noise injected in each bin introduces error, which is aggregated when the noisy bin values are added. For large ranges, this error may completely destroy the utility of the answer. Numerous works (overviewed in Section~\ref{subsec:related}) introduce alternative mechanisms for improving the utility of the output histograms in the case of range-sum queries.
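A sketch of ${\sf LPA}$ under these definitions (our code; the Laplace noise is sampled as a difference of two exponential variables, which yields a zero-mean Laplace variable with scale $1/\epsilon$):

```python
import random

def lpa(h, eps, rng=None):
    # add zero-mean Laplace noise with scale 1/eps to every bin
    # (the sensitivity of any histogram algorithm F is 1)
    rng = rng or random.Random(0)
    lap = lambda: rng.expovariate(eps) - rng.expovariate(eps)
    return [x + lap() for x in h]

def noisy_range_sum(h_hat, i_l, i_u):
    # a range-sum aggregates i_u - i_l + 1 independent noise terms,
    # which is why LPA degrades on large ranges
    return sum(h_hat[i_l:i_u + 1])
```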
Finally, we include a \emph{composition} theorem (adapted from \cite{pinq}) that is useful for our proofs. It concerns executions of multiple differentially private mechanisms on non-disjoint and disjoint inputs.
\begin{theorem}\label{theo:comp}
Let $M_1, \ldots, M_r$ be mechanisms, such that each $M_i$ provides $\epsilon_i$-differential privacy. Let $\mathbf{h}_1$, $\ldots$, $\mathbf{h}_r \in \mathcal{H}$ be histograms created on pairwise non-disjoint (resp. disjoint) datasets $D_1, \ldots$, $D_r$, respectively. Let $M$ be another mechanism that executes $M_1(\mathbf{h}_1), \ldots,$ $M_r(\mathbf{h}_r)$ using independent randomness for each $M_i$, and returns their outputs. Then, $M$ satisfies
$\left(\sum_{i=1}^r{\epsilon_i}\right)$-differential privacy (resp. $\left(\max_{i=1}^r{\epsilon_i}\right)$-differential privacy).
\end{theorem}
The above theorem allows us to view $\epsilon$ as a \emph{privacy budget} that is distributed among the $r$ mechanisms. Moreover, note that the theorem holds even when $M_i$ receives as input the private outputs of $M_1, \ldots, M_{i-1}$ \cite{pinq}.
\subsection{Differentially Private Histograms}\label{subsec:related}
Existing literature on differentially private histograms for range-sum queries aims at improving upon {\sf LPA} in terms of utility. We divide the approaches into two categories; \textit{data-aware} that utilize \textit{smoothing}, and \textit{data-oblivious} that rely on \textit{hierarchical} tree structures.
\medskip
\noindent\textbf{Data-aware methods.}
These approaches first smooth the histogram, typically either by grouping similar bin values and substituting them with their average, or by applying a smoothing filter such as the Discrete Fourier Transform (DFT). Subsequently, they apply Laplace noise, similarly to ${\sf LPA}$, to the averages or the DFT coefficients. Range-sum queries are processed by summing the histogram bin values in the query range. Smoothing reduces the sensitivity and, hence, the injected Laplace noise, but adds approximation error. Consequently, smoothing methods are effective if the Laplace noise error reduction exceeds the smoothing approximation error. The bin grouping algorithm assigns scores to a set of potential grouping strategies, and selects the one with the minimum score, in a manner that does not compromise differential privacy. Existing approaches differ in the set of examined strategies, the scoring function, and the selection process.
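The grouping-and-averaging step shared by these methods can be sketched as follows. This is our simplified illustration: here the group boundaries are given, whereas the actual mechanisms choose them via a private optimization, and the Laplace-noise step is omitted.

```python
def smooth(h, starts):
    # replace each contiguous group of bins by the group average;
    # `starts` lists the first bin index of each group (starts[0] == 0)
    out = []
    for s, e in zip(starts, starts[1:] + [len(h)]):
        avg = sum(h[s:e]) / (e - s)
        out.extend([avg] * (e - s))
    return out

def abs_approx_error(h, h_smooth):
    # approximation error introduced by smoothing alone; the mechanisms
    # trade it against the Laplace noise error, which averaging reduces
    return sum(abs(a - b) for a, b in zip(h, h_smooth))
```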
The SF algorithm \cite{xu12} follows the grouping and averaging paradigm. Specifically, given as input a \emph{fixed} parameter $k$ and privacy budgets $\epsilon$, $\epsilon'$, SF initially finds a set of $k$ groups of \emph{contiguous} bins through an $\epsilon'$-differentially private process. Subsequently, it smooths the bin values based on the grouping, and adds Laplace noise, generating an $(\epsilon-\epsilon')$-differentially private histogram. Due to linear composition (Theorem \ref{theo:comp}), the SF mechanism achieves $\epsilon$-differential privacy. The grouping sub-mechanism of SF operates on the original histogram and determines the $k$ groups such that the estimated \textit{squared} error is minimized. This error is expressed as the sum of (i) the squared approximation error due to smoothing, and (ii) the squared error from injecting Laplace noise with scale $1/(\epsilon-\epsilon')$ prior to publication. It then applies the exponential mechanism \cite{mcsherry07} in order to alter the group borders and achieve $\epsilon'$-differential privacy. Note that, due to this step, the total error of SF eventually deviates from the actual minimum. The grouping submodule of SF runs in $O(n^2)$ time.
Acs et al. \cite{acs12} present two mechanisms, EFPA and P-HP. EFPA is an improvement of \cite{rastogi10}, which smooths the histogram using a subset of its DFT coefficients perturbed with Laplace noise, while guaranteeing that the output histogram satisfies $\epsilon$-differential privacy. P-HP is a grouping and averaging method that improves upon SF \cite{xu12}. In particular, instead of receiving the number of groups $k$ as input, it discovers the optimal value of $k$ on-the-fly. Contrary to SF, it utilizes an \textit{absolute} error metric. The grouping algorithm of P-HP also runs in $O(n^2)$ time but, similarly to SF, does not examine all possible groupings. P-HP is shown to outperform both EFPA and SF in terms of utility \cite{acs12}.
Motivated by \cite{kellaris13} (for a different setting), AHP \cite{zhang14} first applies ${\sf LPA}$ to the histogram with scale $1/\epsilon'$, and \emph{sorts} the resulting bins in descending or ascending order. Subsequently, it executes a grouping and averaging technique that is different from SF and P-HP. Specifically, it operates on already $\epsilon'$-differentially private data and, hence, does not need to apply the exponential mechanism. Moreover, it finds the grouping that minimizes the \textit{squared} error metric expressed as a function of the noisy data, rather than the original histogram (and, thus, similar to \cite{xu12,acs12}, it does not guarantee the actual minimum error). Note that the ordering attempts to minimize the approximation error, since it results in groups with more uniform bin values. The authors present two algorithms; one that evaluates all possible groups and runs in $O(n^3)$ time, and a greedy one that considers only a subset of the possible options and runs in $O(n^2)$. They conducted experiments using the latter, and demonstrated that AHP offers better utility than P-HP.
DAWA \cite{li14} comprises two stages. The first stage executes a smoothing technique, while the second runs an optimized version of the matrix mechanism \cite{li10}. Its grouping and averaging submodule invests $\epsilon'$ budget to reduce the \textit{absolute} error metric, similar to \cite{acs12}. However, instead of executing the exponential mechanism, it adds noise on-the-fly to the group costs used in the selection process. The authors present two instantiations; the first evaluates all possible groupings and runs in $O(n^2\log n)$ time, whereas the second considers only a subset and runs in $O(n\log^2 n)$. The output of the smoothing procedure is fed to the matrix mechanism. The latter belongs to a category of schemes \cite{li10,yuan12,hardt12} that take as input a set of \textit{pre-defined} range-sum queries, and assign more privacy budget to the bins affecting numerous queries. DAWA can be adapted to our setting of \textit{arbitrary} queries in two ways: either by completely ignoring the second stage, resulting in time complexity $O(n^2\log n)$ (or $O(n\log^2 n)$ in the approximate version), or by feeding all $O(n^2)$ possible queries to the input of the matrix mechanism, yielding time complexity $O(n^3\log n)$.
\medskip
\noindent\textbf{Data-oblivious methods.} These schemes build an aggregate tree on the original histogram; each bin value is a leaf, and each internal node represents the sum of the leaves in its subtree. In order to achieve $\epsilon$-differential privacy, they add Laplace noise to each node, which is proportional to the tree height (since each bin value is incorporated in all the sums along its path to the root). A range-sum query is processed by identifying the maximal subtrees that exactly cover the range, and summing the values stored in their roots. Compared to ${\sf LPA}$, the hierarchical methods essentially increase the sensitivity from $1$ to $\log n$, but sum fewer noisy values when processing the range-sum, reducing the aggregate error. For a range-sum covering $m$ bins, these methods induce $O(\log m \log n)$ error, as opposed to ${\sf LPA}$ that inflicts $O(m)$ error. Therefore, the hierarchical methods exhibit benefits for large ranges. Moreover, their time complexity is $O(n)$.
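The aggregate-tree idea can be sketched for a binary tree over a power-of-two histogram. This is our simplified code (function names are ours); the noise step uses the uniform budget allocation of \cite{hay10}, i.e., each of the $\log_2 n + 1$ levels receives an equal share of $\epsilon$.

```python
import math
import random

def build_tree(h):
    # perfect binary aggregate tree over a power-of-two histogram;
    # tree[1] is the root, tree[n:2n] are the leaves (the bins),
    # and each internal node stores the sum of the leaves below it
    n = len(h)
    tree = [0] * (2 * n)
    tree[n:2 * n] = h
    for v in range(n - 1, 0, -1):
        tree[v] = tree[2 * v] + tree[2 * v + 1]
    return tree

def perturb(tree, eps, rng=None):
    # uniform allocation: each level gets eps/height budget,
    # i.e. Laplace noise with scale height/eps at every node
    rng = rng or random.Random(0)
    n = len(tree) // 2
    height = int(math.log2(n)) + 1
    lap = lambda: rng.expovariate(eps / height) - rng.expovariate(eps / height)
    return [t + lap() for t in tree]

def tree_range_sum(tree, n, i, j):
    # answer a range-sum over bins i..j (inclusive) by summing the
    # maximal subtrees that exactly cover the range (O(log n) nodes)
    res, nodes = 0, 0
    l, r = i + n, j + n + 1        # half-open interval of leaf positions
    while l < r:
        if l & 1:
            res, nodes, l = res + tree[l], nodes + 1, l + 1
        if r & 1:
            r -= 1
            res, nodes = res + tree[r], nodes + 1
        l //= 2
        r //= 2
    return res, nodes
```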
Hay et al. \cite{hay10} build a binary aggregate tree and inject Laplace noise uniformly across all nodes. In addition to constructing the final range-sum from the roots of the maximal subtrees that cover the range, they also explore other node combinations. Independently from \cite{hay10}, Privelet \cite{xiao11} builds a Haar wavelet tree and adds Laplace noise, achieving practically the same effect as \cite{hay10}. Based on the observation that the privacy budget should not be divided equally among all levels, Cormode et al. \cite{cormode12} enhance \cite{hay10} with a geometric budget allocation technique. Qardaji et al. \cite{qardaji13} survey the above approaches, concluding that the theoretical optimal fan-out of the tree is $16$. They experimentally showed that \cite{hay10}, when combined with the budget allocation of \cite{cormode12} and their optimal fan-out, outperforms Privelet and SF.
\medskip
\noindent\textbf{Discussion.} Data-oblivious methods are fast, but may have low utility for practical datasets. Data-aware schemes avoid this by exploiting the underlying data, but they may be prohibitively slow. For example, assuming the squared error metric, the lowest time complexity achieved by any method is $O(n^3)$. Attempts to boost performance via approximation, by ignoring possible groupings, compromise utility in an unpredictable way. Moreover, the naive adaptation of DAWA to arbitrary queries, by feeding all the $O(n^2)$ possible range-sums, is impractical.
Finally, all the discussed methods involve common components. For instance, data-aware schemes only differ in their grouping technique (e.g., different error metrics in the scoring function), whereas data-oblivious methods only differ in the tree fanout and the budget allocation across the levels. These design decisions are orthogonal; e.g., we could use the tree fan-out of one method with the budget allocation policy of another. Going one step further, novel methods could combine the merits of both data-aware and -oblivious schemes.
Motivated by the above, in this work we formulate a principled approach, which defines the core privacy techniques as primitive modules. Our framework allows (i)~the careful study and optimization of each individual module, (ii)~the construction of efficient and effective schemes via the seamless combination of these modules, and (iii)~the effortless adaptation of additional modules, such as the matrix mechanism, in our problem setting.
\section{Modular Framework}\label{sec:modular}
Section \ref{subsec:defs} formulates the concept of \emph{module} along with related notions. Section \ref{subsec:modules} describes the module instantiations utilized to construct range-sum schemes. Section \ref{subsec:modular_related} demonstrates how the existing state-of-the-art range-sum schemes (used as competitors in our experiments) can be expressed in our modular framework.
\subsection{Definitions}\label{subsec:defs}
There are two types of building blocks in our approach: the \emph{module} and the \emph{connector}, formulated in the next two definitions.
\begin{definition}\label{def:module}
A \textbf{\emph{module}} is a mechanism that takes as input a sensitive histogram $\mathbf{h} \in \mathcal{H}$ and a vector of public parameters $\mathbf{p}$, and outputs a differentially private histogram $\mathbf{\hat{h}} \in \mathcal{\hat{H}}$. The privacy level (i.e., $\epsilon$) of the module depends on $\mathbf{h}, \mathbf{p}$, and its internal mechanics.
\end{definition}
\begin{definition}\label{def:connector}
A \textbf{\emph{connector}} is an algorithm that takes as input a vector of public parameters $\mathbf{p}$, and either $H \subseteq \mathcal{H}$ (i.e., sensitive histograms) or $\hat{H} \subseteq \mathcal{\hat{H}}$ (i.e., differentially private histograms), and outputs another vector of parameters $\mathbf{p}'$, along with sets $H' \subseteq \mathcal{H}$ and $\hat{H}' \subseteq \mathcal{\hat{H}}$. It must obey two constraints: (i) it must spend \emph{no} privacy budget, and (ii) if it takes as input some $\mathbf{h} \in \mathcal{H}$, all its outputs must be consumed by modules.
\end{definition}
Simply stated, modules are responsible for perturbing sensitive data with noise, whereas connectors \emph{connect} modules (and optionally also other connectors). The connectors essentially format the data prior to feeding them to the modules. The public parameters facilitate determining the amount of noise added by a module. The second condition of the connectors is due to technical purposes in our proofs, which will become clear later in this section. Hereafter, we denote a module by $M$ and a connector by $C$. Finally, note that a module may be further comprised of other modules and connectors, in which case we refer to it as \emph{composite}. The motivation behind distinguishing connectors from modules is to compartmentalize the components related to privacy within the scope of a module, so that we facilitate the understanding of its privacy level and possible optimization.
\begin{definition}
A \textbf{\emph{range-sum scheme}} consists of a directed acyclic graph (DAG) of modules and connectors, and a query processor. It takes as input a histogram $\mathbf{h} \in \mathcal{H}$, public parameters $\mathbf{p}$, and privacy budget $\epsilon$. The DAG of modules and connectors outputs a \emph{structure} $\mathbf{\hat{S}}$ (e.g., a histogram or tree) that satisfies $\epsilon$-differential privacy, which is fed to the query processor. The latter uses the structure to answer arbitrary range-sum queries.
\end{definition}
Note that the above definition can capture even \textit{iterative} schemes, such as MWEM \cite{hardt12}, as follows. We decompose a loop into modules, and then \textit{serialize} the loop by repeating its modules as many times as the number of loop iterations. We do not delve into more details, as we do not deal with iterative schemes in this work.
The next theorem formulates $\epsilon$-differential privacy for a range-sum scheme. Intuitively, it states that the connectors do not affect privacy at all. The privacy level of the entire scheme depends \emph{solely} on the modules and, thus, it suffices to analyze each module individually.
\begin{theorem}\label{theo:scheme}
Consider a range-sum scheme comprised of modules $M_1, M_2, \ldots, M_r$ and connectors $C_1, C_2, \ldots, C_l$. Suppose that $M_1, M_2, \ldots, M_r$ work on sensitive inputs derived from pairwise non-disjoint (resp. disjoint) datasets, and that each $M_i$ satisfies $\epsilon_i$-differential privacy. Then, the scheme satisfies $\left(\sum_{i=1}^r{\epsilon_i}\right)$-differential privacy (resp. $\left(\max_{i=1}^r{\epsilon_i}\right)$-differential privacy).
\end{theorem}
\begin{proof}
We distinguish two cases, assuming for now that the connectors take single inputs and produce single outputs: (i)~A connector $C$ takes as input a differentially private histogram $\mathbf{\hat{h}} \in \mathcal{\hat{H}}$ from a module $M_i$. Since $C$ spends zero privacy budget by definition, its output will retain the privacy level of the input, independently of the computations it performs. Therefore, we can devise a module $M_i'$ that encompasses $M_i$ and $C$, and retains the $\epsilon_i$-differential privacy of $M_i$. (ii)~A connector $C$ takes as input a sensitive histogram $\mathbf{h} \in \mathcal{H}$ from the scheme input. By definition, $C$ can only produce a sensitive histogram $\mathbf{h}' \in \mathcal{H}$ as output and direct it to a module $M_i$. Hence, we can trivially merge $C$ with $M_i$ to create a module $M_i'$ that retains the $\epsilon_i$-differential privacy of $M_i$.
Replicating connectors to simulate multiple inputs and outputs, and executing the processes described in the above two cases repeatedly, from a DAG of $M_1, \ldots, M_r$ modules and $C_1, \ldots, C_l$ connectors we can derive an equivalent DAG of mechanisms $M_1', \ldots, M_r'$, where $M_i'$ satisfies $\epsilon_i$-differential privacy. Due to Theorem \ref{theo:comp}, the scheme satisfies $\left(\sum_{i=1}^r{\epsilon_i}\right)$-differential privacy (resp. $\left(\max_{i=1}^r{\epsilon_i}\right)$-differential privacy).
\end{proof}
\vspace{-0.1pt}
As a final remark on privacy, recall the second constraint we imposed on the connector. If $\mathbf{h}$ were the input of a connector $C$ whose output was not directed to a module, $C$ could have been allowed to send $\mathbf{h}$ to the output of the range-sum scheme, violating differential privacy. The constraint prevents this case.
The benefits of modularity are threefold: (i)~novel schemes with variable efficiency and utility can be developed based on a small set of simple modules and connectors, (ii)~given the privacy level of each module and using Theorem \ref{theo:scheme}, we can easily prove the privacy of complex schemes, and (iii)~the modules can be optimized independently, and incorporate potential future improvements.
In the following, we deconstruct existing techniques into modules and connectors in order to investigate their performance bottlenecks, and identify opportunities for improvement. The internals of composite modules and schemes are illustrated using figures, depicting a module with a rectangle, a connector with a diamond, and a query processor with a parallelogram.
\subsection{Atomic Modules}\label{subsec:modules}
The three basic modules in our framework are \textit{Smoothing}, \textit{Hierarchy Level}, and \textit{Fixed Queries}. These modules are composite, i.e., they consist of other modules and connectors. However, they are used as \emph{atomic}\footnote{\scriptsize The submodules and connectors of an atomic module are never used outside of this particular module.} blocks when analyzing existing and novel schemes in later sections. We next explain each module in turn.
\medskip
\noindent\textbf{Smoothing module.} This module constitutes a building block for the data-aware techniques. It imposes an order on the bins of the input histogram, groups and averages bins, applies noise, and outputs the perturbed histogram. Figure~\ref{fig:smoothing} depicts the internal mechanics of the Smoothing module in more detail. Its input consists of the initial histogram $\mathbf{h}$, and public parameters $\mathbf{p}$ that include a vector $\mathbf{L}$, an error metric $\mu$ (absolute or squared), and three privacy budgets $\epsilon_1, \epsilon_2, \epsilon_3$. Each element of $\mathbf{L}$ has the form $\langle g_i, v_i \rangle$, where $g_i$ is some encoding for a group of histogram bins, and $v_i$ quantifies the error in $g_i$ due to the subsequent addition of noise (the value of $v_i$ will be elaborated shortly).
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{smoothing-eps-converted-to.pdf}
\caption{Smoothing module} \label{fig:smoothing}
\end{figure}
The module consists of three submodules, called Ordering, Grouping, and Noise Addition. Ordering receives the histogram $\mathbf{h}$ and budget $\epsilon_1$, and works as in \cite{zhang14}; it adds (Laplace) noise with scale $\lambda=1/\epsilon_1$ to each bin value, and sorts them in descending order. It then forwards the noisy sorted histogram $\mathbf{\hat{h}}_o$ to the Grouping submodule.
The Grouping submodule spends budget $\epsilon_2$ to discover the groups for its input histogram, considering $\mathbf{L}$ and $\mu$. $\mathbf{L}$ (i)~describes the \emph{permissible groups}, and (ii)~includes error values $v_i$ that parameterize $\mu$. A permissible group $g_i$ can only contain \emph{contiguous} bins, and is encoded simply by a range of elements in $\mathbf{\hat{h}}_o$, but is independent of the corresponding bin labels or values. After determining the groups, the submodule incorporates a group id into each bin label. Finally, it outputs the result, which is denoted by $\mathbf{\hat{h}}_{o,avg}$. The tasks performed by Grouping are elaborated further in Section~\ref{sec:grouping}.
Noise Addition receives the original histogram $\mathbf{h}$, budget $\epsilon_3$, and the output $\mathbf{\hat{h}}_{o,avg}$ of the Grouping submodule. It groups the bins of $\mathbf{h}$ according to the (augmented with group ids) bin labels in $\mathbf{\hat{h}}_{o,avg}$, and averages their values. Then, it adds noise to the respective average with scale $1/(\epsilon_3\cdot |g_i|)$. Finally, it sets the noisy average of every group $g_i$ as the value of the bins in $g_i$, and outputs the noisy smoothed histogram $\mathbf{\hat{h}}$, which satisfies $\epsilon_3$-differential privacy\footnote{\scriptsize Contrary to {\sf LPA}, Smoothing distributes the noise \emph{non-uniformly} over the bins of $\mathbf{h}$. This can be thought of as splitting $\mathbf{h}$ into $|G|$ \emph{disjoint} histograms, each corresponding to a $g_i \in G$ and, due to averaging, having sensitivity $1/|g_i|$. Due to Theorem \ref{theo:laplace_mech}, injecting noise with scale $1/(\epsilon_3|g_i|)$ renders each histogram $\epsilon_3$-differentially private. Due to Theorem \ref{theo:comp}, Smoothing is also $\epsilon_3$-differentially private.}. Note that the bin labels in $\mathbf{\hat{h}}$ incorporate the group ids of $\mathbf{\hat{h}}_{o,avg}$.
Ordering is $\epsilon_1$-, Grouping is $\epsilon_2$-, and Noise Addition is $\epsilon_3$-differentially private, and they all operate on histograms derived from pairwise non-disjoint inputs. Hence, due to Theorem \ref{theo:comp}, Smoothing satisfies $(\epsilon_1+\epsilon_2+\epsilon_3)$-differential privacy. If we set $\epsilon_1 = 0$ ($\epsilon_2 = 0$), the Ordering (Grouping) submodule acts as a connector. Specifically, Ordering just outputs the input histogram, whereas Grouping outputs the best strategy without spending privacy budget. However, based on Definition \ref{def:connector}, it is not permitted to simultaneously set $\epsilon_1 =0$ and $\epsilon_2 = 0$; in that case, Ordering would forward a sensitive histogram $\mathbf{h}$ to another connector. Finally, if we set $\epsilon_3 = 0$, the Noise Addition submodule adds noise with infinite scale to each group average. Although the returned $\mathbf{\hat{h}}$ contains useless values, its bin labels incorporate the grouping information from the Grouping submodule.
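For concreteness, the group-and-average perturbation performed by the Noise Addition submodule can be sketched as follows. This is a minimal sketch using NumPy; the function name and the $(l, u)$ range encoding of groups are our own illustrative choices, not part of the scheme's interface:

```python
import numpy as np

def noise_addition(h, groups, eps3, rng=None):
    """Sketch of the Noise Addition submodule: average the bins of each
    group and perturb the average with Laplace noise of scale
    1/(eps3 * |g_i|), then assign the noisy average to every bin."""
    if rng is None:
        rng = np.random.default_rng()
    h = np.asarray(h, dtype=float)
    h_hat = np.empty_like(h)
    for lo, hi in groups:            # each group is a contiguous range [lo, hi]
        size = hi - lo + 1
        avg = h[lo:hi + 1].mean()
        noisy = avg + rng.laplace(scale=1.0 / (eps3 * size))
        h_hat[lo:hi + 1] = noisy     # all bins of the group share the value
    return h_hat
```

With a very large budget the noise becomes negligible, so each bin essentially receives its group average, which matches the smoothing intuition.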
\medskip
\noindent\textbf{Hierarchy Level module.} This is a typical component of the data-oblivious schemes. Recall that these methods build an aggregate tree on the original histogram. Every level of the tree can be viewed as a separate histogram. The Hierarchy Level module operates on a histogram of a specific tree level. Figure \ref{fig:hier} illustrates its internal parts. The module receives a histogram $\mathbf{h}$, a vector $\mathbf{L}$, privacy budget $\epsilon$, tree height $t$, and a tree level $\ell$. It consists of two connectors (Scale Budget and Scalar Product), and a submodule Noise Addition.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{hierarchical-eps-converted-to.pdf}
\caption{Hierarchy Level module} \label{fig:hier}
\end{figure}
In our implementation, the Scale Budget connector allocates the privacy budget based on the tree level, using the method of \cite{cormode12} to maximize utility. It receives as input the triplet $(\epsilon, t, \ell)$, and outputs $\alpha \epsilon$, i.e., it determines a parameter $\alpha$ that scales budget $\epsilon$. The Scalar Product connector takes as input $\alpha \epsilon$ and public vector $\mathbf{L}$, and simply outputs their scalar product $(\alpha \epsilon) \mathbf{L}$. This essentially distributes the budget assigned for the level (potentially) non-uniformly over the bins. The output $(\alpha \epsilon) \mathbf{L}$ is forwarded to the Noise Addition submodule, which adds noise with scale $1/((\alpha \epsilon)\mathbf{L}[i])$ to the $i^\textrm{th}$ bin of the histogram, and outputs the resulting noisy histogram $\mathbf{\hat{h}}$.
The $\mathbf{L}$ parameter is selected, so that the Hierarchy Level module is $(\alpha \epsilon)$-differentially private. In our schemes, we distinguish two cases: (i) $\mathbf{L} = \mathbf{1}^n$, and every bin receives the same noise with scale $1/(\alpha \epsilon)$. (ii) $\mathbf{L}[i] = 0$ for some bins, in which case the module adds noise with infinite scale. Observe that in both cases, the added noise achieves $(\alpha \epsilon)$-differential privacy.
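A minimal sketch of this per-bin noise injection follows, under the illustrative convention that $\mathbf{L}[i]=0$ marks a bin that receives noise of infinite scale (we return NaN instead of drawing such a value); the function name is our own:

```python
import numpy as np

def hierarchy_level(h, L, alpha_eps, rng=None):
    """Sketch of the Hierarchy Level module's noise step: bin i gets
    Laplace noise with scale 1/(alpha_eps * L[i]); L[i] = 0 means
    infinite scale, i.e., the bin value is effectively discarded."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty(len(h))
    for i, b in enumerate(h):
        if L[i] == 0:
            out[i] = np.nan          # infinite noise scale: value is useless
        else:
            out[i] = b + rng.laplace(scale=1.0 / (alpha_eps * L[i]))
    return out
```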
\medskip
\noindent\textbf{Fixed Queries module.} This module is the building block of methods that target range-sum queries known a priori. It receives as input a histogram $\mathbf{h}$, a privacy budget $\epsilon$, and a range-sum query workload $\mathbf{W}$. It executes an off-the-shelf mechanism such as MWEM \cite{hardt12} or the matrix mechanism \cite{li10}, and outputs the noisy histogram $\mathbf{\hat{h}}$. Figure~\ref{fig:matrix} shows the Fixed Queries module, instantiated with the optimized matrix mechanism submodule of \cite{li14}, used in our implementation.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{matrix-eps-converted-to.pdf}
\caption{Fixed Queries module} \label{fig:matrix}
\end{figure}
\subsection{Modularizing Existing Schemes} \label{subsec:modular_related}
In this section, we show how existing approaches can be constructed from modules and connectors.
\medskip
\noindent\textbf{Smoothing scheme.} All data-aware mechanisms \cite{xu12,acs12,zhang14,li14} described in Section~\ref{subsec:related} are captured by the scheme of Figure~\ref{fig:S}, which is a simple combination of the Smoothing module with a Query Processor. The latter receives the noisy histogram output by the Smoothing module, and replies to range-sum queries. The queries are processed by adding the bins falling in the query range.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{S-eps-converted-to.pdf}
\caption{Smoothing scheme} \label{fig:S}
\end{figure}
Depending on the choice of public parameters $\mathbf{p}$, we can have the following alternative scheme instantiations:
\begin{itemize}
\item \emph{With/without ordering:} We can deactivate (activate) the Ordering submodule by setting the value of $\epsilon_1$ to $0$ ($>0$). For instance, if we set $(\epsilon_1, \epsilon_2, \epsilon_3) = (\epsilon/2, 0, \epsilon/2)$ we reproduce AHP \cite{zhang14}. Note that $\epsilon_2=0$ because the Grouping submodule operates directly on noisy data and does not need to inject extra noise (i.e., it acts as a connector). On the other hand, if we set $(\epsilon_1, \epsilon_2, \epsilon_3) = (0, \epsilon/4, 3\epsilon/4)$, we reproduce the smoothing scheme of DAWA \cite{li14}. Observe that either case results in an $(\epsilon_1 + \epsilon_2 + \epsilon_3=\epsilon)$-differentially private Smoothing module. Since this is the only module in the scheme, the Smoothing scheme is also $\epsilon$-differentially private.
\item \emph{Exact/approximate grouping:} The Grouping submodule can be implemented either as an exact or an approximate algorithm. In the first case, public parameter $\mathbf{L}$ includes \emph{all} possible groups of contiguous bins. In the second case, $\mathbf{L}$ contains a \emph{proper subset}, which reduces the running time. In both cases, all $v_i$ values in $\mathbf{L}$ are set to $1/\epsilon_3$, which is the expected error incurred by the Noise Addition submodule.
\item \textit{Absolute/squared error metric:} There are also two options for the error metric $\mu$ utilized by the Grouping submodule; absolute as in \cite{acs12,li14} or squared as in \cite{xu12,zhang14}. As shown later in the paper, this choice impacts both utility and performance.
\end{itemize}
\noindent\textbf{Hierarchical scheme.} The scheme captures data-oblivious methods. As shown in Figure \ref{fig:H}, it consists of connectors $C_1$ and $C_2$, $t$ Hierarchy Level modules (where $t$ depends on the input public parameters), and a Query Processor. It receives as input a histogram $\mathbf{h}$, privacy budget $\epsilon$, and public parameters $\mathbf{L}$ and $f$, where $\mathbf{L} = \mathbf{1}$ and $f$ is the fan-out of the tree\footnote{\scriptsize In our implementation, we set $f=16$ because it is optimal in terms of utility for range-sum queries \cite{qardaji13}.}. Connector $C_1$ initially receives $\mathbf{h}$, $\epsilon$, $\mathbf{L}$ and $f$. Based on $\mathbf{h}$ and $f$, it creates an aggregate tree, and determines the tree height $t$. It next perceives each level of the tree as a histogram $\mathbf{h}_\ell$ for $\ell=1, \ldots, t$. Finally, it splits the budget $\epsilon$ into $t$ budgets $\epsilon/t$ and forwards $\mathbf{h}_\ell$ and $(\mathbf{1}, \epsilon/t, t, \ell)$ to the $\ell^\textrm{th}$ Hierarchy Level module.
The $\ell^\textrm{th}$ Hierarchy Level module sends a noisy histogram $\mathbf{\hat{h}}_\ell$ to $C_2$. The latter assembles a noisy tree $\mathbf{\hat{T}}$ from these histograms and forwards it to the Query Processor. In order to maximize utility, in our implementation the Query Processor answers range-sum queries by combining nodes from the noisy tree using the method of \cite{hay10} (see Section~\ref{subsec:related}).
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{H-eps-converted-to.pdf}
\caption{Hierarchical scheme} \label{fig:H}
\end{figure}
Each Hierarchy Level module $\ell$ offers $\alpha_\ell \epsilon/t$-differential privacy. Moreover, the modules work on non-disjoint sensitive inputs. As such, due to Theorem \ref{theo:scheme}, the Hierarchical scheme offers $(\sum_{\ell=1}^t{\frac{\alpha_\ell\epsilon}{t}})$-differential privacy. Note that the $\ell^\textrm{th}$ Hierarchy Level module sets its $\alpha_\ell$ as defined in \cite{cormode12} (through a closed formula based on $t, \ell$), in a way that \emph{guarantees} that $\sum_{\ell=1}^t{\alpha_\ell}=t$. Consequently, the Hierarchical scheme satisfies $\epsilon$-differential privacy.
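For concreteness, the aggregate tree that connector $C_1$ derives from the histogram can be sketched as follows; the function name is illustrative, and budget allocation and noise injection (handled by the Hierarchy Level modules) are omitted:

```python
def tree_levels(h, f):
    """Sketch of C_1's tree construction: build the histograms
    h_1, ..., h_t corresponding to the levels of an aggregate tree
    with fan-out f, whose leaves are the bins of h."""
    levels = [list(h)]                   # leaf level
    while len(levels[-1]) > 1:
        cur = levels[-1]
        # each parent aggregates (at most) f consecutive children
        nxt = [sum(cur[i:i + f]) for i in range(0, len(cur), f)]
        levels.append(nxt)
    return levels[::-1]                  # root level first
```

Each returned level would then be forwarded, together with its share of the privacy budget, to the corresponding Hierarchy Level module.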
\medskip
\noindent \textbf{DAWA-like scheme.} This scheme is a generalization of DAWA \cite{li14}. Recall that, in addition to a smoothing stage, DAWA employs the matrix mechanism, which receives as input all the possible range-sum queries. We abstract these two stages, so that any smoothing and fixed-queries scheme can be combined to realize DAWA's concept.
Figure \ref{fig:DAWA} depicts the mechanics of the scheme. It consists of modules Smoothing and Fixed Queries, a connector, and a Query Processor. It receives as input histogram $\mathbf{h}$, budget $\epsilon$, and parameters $(\mathbf{W}, \mathbf{L},\mu, \epsilon/4, 3\epsilon/4)$. Following \cite{li14}, budget $\epsilon/4$ is allocated to the Smoothing module, and $3\epsilon/4$ to Fixed Queries. Vector $\mathbf{W}$ holds all possible $O(n^2)$ range-sum queries; $\mu$ defines the utilized error metric by the Smoothing; $\mathbf{L}$ contains the permissible groups. For each group $g_i$ in $\mathbf{L}$, $v_i$ is set to $4/(3\epsilon)$, which is the expected error due to the subsequent noise addition by the Fixed Queries module.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{DAWA-eps-converted-to.pdf}
\caption{DAWA-like scheme} \label{fig:DAWA}
\end{figure}
The Smoothing module takes as input $\mathbf{h}$ and parameters $(\mathbf{L},\mu, 0,\epsilon/4,0)$, and outputs a noisy histogram $\mathbf{\hat{h}}_{g}$ that incorporates the group labels. Connector $C$ receives $\mathbf{\hat{h}}_{g}$, $\mathbf{W}$ and $\mathbf{h}$. It smooths $\mathbf{h}$ using $\mathbf{\hat{h}}_{g}$, and creates $\mathbf{h}_{avg}$. Then, it modifies workload $\mathbf{W}$ to $\mathbf{W}_{avg}$ to reflect the queries on $\mathbf{h}_{avg}$. The technical details of this conversion are included in \cite{li14}. Finally, it feeds $(\mathbf{h}_{avg}, \mathbf{W}_{avg})$ to Fixed Queries, which receives budget $3\epsilon/4$. This module computes and forwards a noisy histogram $\mathbf{\hat{h}}$ to the Query Processor, which answers range-sums by summing the bins included in the query range.
The Smoothing module satisfies $\epsilon/4$-differential privacy, while the Fixed Queries module $3\epsilon/4$-differential privacy. Both work on non-disjoint inputs and, therefore, the whole scheme satisfies $\epsilon$-differential privacy.
\section{Grouping and Metrics}\label{sec:grouping}
The Grouping submodule of Smoothing determines the way the bins are privately grouped. In all existing schemes, this is modeled as an optimization problem where the resulting grouping must minimize a certain error metric. In this section, we first present in detail the two error metrics used in the literature, namely \textit{absolute} \cite{acs12, li14} and \textit{squared} \cite{xu12, zhang14} error, explain their usage, and analyze the overall time complexity of Grouping in each case. Next, we introduce an \textit{optimal} way to compute the squared error, which (i) reduces the time complexity of the current best method by a factor of $n$, and (ii) improves the accuracy of Smoothing.
Recall that Grouping takes as input a histogram $\mathbf{\hat{h}}_o$, a privacy budget $\epsilon_2$, public vector $\mathbf{L}$, and an error metric $\mu$. Its goal is to find the groups that minimize $\mu$, while satisfying $\epsilon_2$-differential privacy. Let $G$ be a \emph{grouping strategy}, i.e., a set of $|G|$ groups of \textit{contiguous} bins that cover all histogram bins and are mutually disjoint. Let $b_j$ be a bin value, and $\bar{g}_i$ the average of the bins in group $g_i \in G$, i.e., $\bar{g}_i = \sum_{b_j \in g_i} b_j/|g_i|$.
The total error has two components. The first is due to the smoothing process and depends on the difference between the value $b_j$ of a bin and the average $\bar{g}_i$ of the group in which it belongs. The second component is due to the noise injected by the module that succeeds grouping. For each group $g_i$, $\mathbf{L}$ contains a value $v_i$ that corresponds to the latter. The \textit{absolute} and \textit{squared} error metrics combine the two components in different ways. Both metrics represent the collective error per bin, rather than the final error in a range-sum query. However, they are in general good indicators of accuracy and their minimization is likely to maximize utility.
\medskip
\noindent \textbf{Absolute error.} This metric is defined in \cite{acs12,li14} as:
\begin{equation}\label{eq:error1}
err_1=\sum_{i=1}^{|G|}{\left(\sum_{b_j\in g_i}{|b_j-\bar{g}_i|}+v_i\right)}
\end{equation}
The state-of-the-art algorithm that uses the absolute error is the Smoothing module of DAWA \cite{li14}, which works as follows. It first calculates the cost $c_i = \sum_{b_j\in g_i}{|b_j-\bar{g}_i|}+v_i$ of each group $g_i$ in Equation \ref{eq:error1} by utilizing a binary search tree in $O(\log n)$ time. Then, it adds noise with scale $1/(\epsilon_2|g_i|)$ to $c_i$ producing $\hat{c}_i = \sum_{b_j\in g_i}{|b_j-\bar{g}_i|}+v_i+Lap(1/(\epsilon_2|g_i|))$. Finally, it finds the groups that minimize $\hat{err_1} = \sum_{i=1}^{|G|}{\hat{c_i}}$ using dynamic programming in $O(n^2)$ time. The authors prove that an optimization algorithm that operates with such noisy costs ensures $\epsilon_2$-differential privacy.
The total time of Grouping is dominated by that of computing the costs of all the $O(n^2)$ groups, which is $O(n^2 \log n)$.
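The dynamic program over (noisy) group costs can be sketched as follows; here `cost(l, u)` stands for the precomputed cost $\hat{c}_i$ of the group spanning contiguous bins $l, \ldots, u$ (0-indexed), and the function name is our own:

```python
def min_cost_grouping(n, cost):
    """Sketch of a [li14]-style dynamic program: find the partition of
    bins 0..n-1 into contiguous groups minimizing the sum of group
    costs. Runs in O(n^2) given O(1) cost evaluation."""
    INF = float('inf')
    best = [0.0] + [INF] * n        # best[j] = min cost of the first j bins
    cut = [0] * (n + 1)             # cut[j] = start of the last group
    for j in range(1, n + 1):
        for l in range(j):
            c = best[l] + cost(l, j - 1)
            if c < best[j]:
                best[j], cut[j] = c, l
    groups, j = [], n               # backtrack the optimal cuts
    while j > 0:
        groups.append((cut[j], j - 1))
        j = cut[j]
    return best[n], groups[::-1]
```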
\medskip
\noindent \textbf{Squared error.} This metric is defined in \cite{xu12,zhang14} as:
\begin{equation}\label{eq:error2}
err_2=\sum_{i=1}^{|G|}{\left(\sum_{b_j\in g_i}{(b_j-\bar{g}_i)^2}+v_i^2\right)}
\end{equation}
The state-of-the-art grouping algorithm that utilizes the squared error is AHP \cite{zhang14}, which works as follows. It adds noise with scale $1/\epsilon_2$ to each bin of the initial histogram, and computes cost $\hat{c_i} = \sum_{\hat{b_j}\in g_i}{(\hat{b_j}-\bar{g}_i)^2}+v_i^2$, where $\hat{b_j}$ is a noisy bin value, and $\bar{g}_i$ the average of a group of noisy bins. Finally, it finds the groups that minimize $\hat{err_2} = \sum_{i=1}^{|G|}{\hat{c_i}}$. The algorithm satisfies $\epsilon_2$-differential privacy because it operates on values perturbed with noise scale $1/\epsilon_2$. Its time complexity is $O(n^3)$.
The following theorem provides a lower bound on the time complexity of Grouping, in the case that all possible groups of contiguous bins are considered. The lower bound applies to \textit{both} error metrics.
\begin{theorem}\label{theo:lower}
A grouping algorithm on a histogram with $n$ bins runs in $\Omega(n^2)$.
\end{theorem}
\begin{proof}
The number of all the possible groups is $\Theta(n^2)$. This is because we have $n$ groups of size $1$, $n-1$ groups of size $2$, and so on (recall that a permissible group can only consist of contiguous bins). Thus, the total number of groups is $n+(n-1)+(n-2)+\ldots+1=\frac{n(n+1)}{2}$. It suffices to prove that there is an input for which any algorithm must check all the possible groups at least once.
We build a histogram such that every group $g_i$ contributes cost $\hat{c}_i = |g_i|$ (i.e., equal to its cardinality) to the error metric. In this scenario, \emph{any} grouping strategy $G$ minimizes the error metric, since \emph{every} $G$ leads to error $\sum_{g_i \in G} \hat{c}_i = n$. Now suppose that we reduce the cost of a random group $g_j$ to $(|g_j|-\delta)$ for some $\delta>0$. Any grouping strategy that includes $g_j$ will result in error $n-\delta$, whereas any other will result in $n$. Therefore, the grouping strategy $G^*$ that minimizes the error metric \emph{must} include $g_j$. Since $g_j$ is a random group, the algorithm that finds $G^*$ must check the $\hat{c}_i$ of \emph{every} group $g_i$ in order to find $g_j$. This concludes our proof.
\end{proof}
We next present an algorithm that minimizes the squared error $\hat{err_2}$ in $O(n^2)$. Therefore, due to the lower bound in Theorem \ref{theo:lower}, our algorithm is \emph{optimal}. Given that $\bar{g}_i = \frac{\sum_{\hat{b}_j\in g_i}{\hat{b}_j}}{|g_i|}$, we observe that the cost of each group can be rewritten as follows.
\begin{equation*}\label{eq:cost2}
\hat{c_i} = \sum_{\hat{b}_j\in g_i}{\left(\hat{b_j}-\bar{g}_i\right)^2}+v_i^2=
\sum_{\hat{b}_j\in g_i}{\hat{b_j}^2}-\frac{\left(\sum_{\hat{b}_j\in g_i}{\hat{b}_j}\right)^2}{|g_i|}+v_i^2
\end{equation*}
Based on the above equation, we can efficiently compute the cost of each group as follows. Initially, we add noise with scale $1/\epsilon_2$ to every histogram bin. In a \textit{pre-processing stage}, we build vector $\mathbf{v}_1$ that stores the noisy bin values $\hat{b}_j$, and vector $\mathbf{v}_2$ that stores their squares $\hat{b}^2_j$. Subsequently, we construct the \textit{prefix sums} of each vector. Specifically, the prefix sums of $\mathbf{v}_1$ ($\mathbf{v}_2$) form a vector $\mathbf{v}_1'$ ($\mathbf{v}_2'$), such that $\mathbf{v}_1'[j] = \sum_{i=1}^j \mathbf{v}_1[i]$ ($\mathbf{v}_2'[j] = \sum_{i=1}^j \mathbf{v}_2[i]$). The pre-processing takes $O(n)$ time. For each group $g_i$ over contiguous bins $l, l+1, \ldots, u$, we can compute $\sum_{\hat{b}_j \in g_i} \hat{b}_j$ as $\mathbf{v}_1'[u] - \mathbf{v}_1'[l-1]$ and $\sum_{\hat{b}_j \in g_i} \hat{b}_j^2$ as $\mathbf{v}_2'[u] - \mathbf{v}_2'[l-1]$ in $O(1)$ time. Thus, the cost of any group can be computed in $O(1)$ time. Since there are $O(n^2)$ possible groups, we can calculate all their costs in $O(n^2)$. Finally, in order to find the grouping strategy that minimizes $\hat{err_2}$, we employ the dynamic programming procedure of \cite{li14}, which runs in $O(n^2)$ time. Therefore, our algorithm has total running time $O(n^2)$.
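The $O(1)$ per-group cost computation can be sketched as follows (0-indexed, and with a single illustrative noise term $v$ shared by all groups for simplicity, whereas in the text each group has its own $v_i$):

```python
import numpy as np

def squared_costs(h_noisy, v):
    """Sketch of the optimal squared-error cost computation: after O(n)
    pre-processing (prefix sums of the noisy values and their squares),
    the cost of any group of contiguous bins [l, u] is obtained in O(1)."""
    p1 = np.concatenate(([0.0], np.cumsum(h_noisy)))             # sums
    p2 = np.concatenate(([0.0], np.cumsum(np.square(h_noisy))))  # squared sums

    def cost(l, u):
        s1 = p1[u + 1] - p1[l]         # sum of bins in the group
        s2 = p2[u + 1] - p2[l]         # sum of squared bins
        size = u - l + 1
        # squared deviation from the group average, plus the noise term
        return s2 - s1 * s1 / size + v * v

    return cost
```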
We conclude this section with an improvement on the accuracy yielded by the use of the squared error. Recall that our algorithm computes the group costs on the noisy histogram in order to ensure $\epsilon_2$-differential privacy. Thus, the grouping strategy that minimizes $\hat{err}_2$, may not minimize $err_2$ (defined on the original bins). In order to alleviate the effects of the extra noise in $\hat{err}_2$ we exploit the following observation. Using a similar approach as in the proof of Lemma 1 in \cite{xu12}, we can show that each group is expected to have its cost increased due to noise by $2\frac{|g_i|-1}{\epsilon_2^2}$, i.e., proportionally to the group size. The additional error leads to smaller groups for $\hat{err}_2$ minimization, compared to $err_2$. To mitigate this, we reduce the calculated cost $\hat{c_i}$ of each group $g_i$ by $2\frac{|g_i|-1}{\epsilon_2^2}$, before feeding it to the dynamic programming procedure. Compared to its direct competitor AHP \cite{zhang14}, our algorithm improves the utility by up to $70\%$ and the complexity by $n$.
\section{Novel Schemes}\label{sec:schemes}
We design two schemes based on our modular framework. The first, called \textit{Subtree Smoothing}, constitutes the first approach that seamlessly combines smoothing with aggregate trees, running in $O(n)$ time. The second, called \textit{Smoothed Prefix Sums}, reduces the time complexity of DAWA by a factor of $n$, while maintaining its utility.
\subsection{Subtree Smoothing Scheme}\label{subsec:subtree_smoothing}
Recall that the Hierarchical scheme builds an aggregate tree in order to compose the range-sum answer from a small number of noisy values, thus reducing the error resulting from noise aggregation as opposed to ${\sf LPA}$. However, due to the publication of multiple non-disjoint histograms (one per level), it must add more noise per level than ${\sf LPA}$. On the other hand, the Smoothing scheme reduces the sensitivity of a set of bins via grouping and averaging, thus lowering the required noise. Our Subtree Smoothing scheme builds an aggregate tree similar to the Hierarchical scheme (thus reducing the error from noise aggregation), but \emph{smooths entire subtrees} via grouping and averaging similar to the Smoothing scheme (thus reducing the per-level, per-bin noise).
Figure \ref{fig:subtree_example} illustrates the main idea. The scheme runs the Smoothing module \emph{only once} for the leaf level (i.e., for $\mathbf{h}$), setting as permissible groups only those that correspond to the leaves of \emph{full} subtrees. Suppose that the black nodes in the figure comprise a group in the returned grouping strategy. We refer to the root of the subtree corresponding to a group as the \emph{group root}. Next, the scheme creates the aggregate tree, \emph{pruning} the nodes under the group roots (black nodes). Subsequently, it feeds each level of this aggregate tree to a Hierarchy Level module, which outputs a noisy histogram. The final noisy histograms comprise a noisy tree. Finally, the scheme puts the pruned nodes back to the tree, deriving their values from their corresponding group root. Specifically, the value in the group root is distributed evenly across the nodes of the same level in the subtree. This is equivalent to smoothing the nodes at each level of the subtree via averaging.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{subtree_example-eps-converted-to.pdf}
\caption{Subtree smoothing example} \label{fig:subtree_example}
\end{figure}
Figure \ref{fig:subtree_smoothing} illustrates the modules of the Subtree Smoothing scheme. There is a single Smoothing module, $t$ Hierarchy Level modules, two connectors, and a Query Processor. The input of the scheme includes the sensitive histogram $\mathbf{h}$, privacy budget $\epsilon$, and public parameters $\mathbf{p}=(\mathbf{L},\mu, \epsilon/4, 3\epsilon/4, f)$. $\mathbf{L}$ is the set of permissible groups for the Smoothing module and their associated $v_i$ values; $\mu$ is the error metric; $\epsilon/4$ is the budget allocated to the Smoothing module; $3\epsilon/4$ is the budget distributed evenly to the $t$ Hierarchy Level modules; $f$ is the fan-out of the aggregate tree, and $t$ is its derived height. Following \cite{qardaji13}, we set $f=16$.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{subtree_smoothing-eps-converted-to.pdf}
\caption{Subtree Smoothing scheme} \label{fig:subtree_smoothing}
\end{figure}
The Smoothing module takes as input $\mathbf{h}$ and parameters $(\mathbf{L},\mu, 0, \epsilon/4, 0)$, and outputs a noisy histogram $\mathbf{\hat{h}}_{g}$. $\mathbf{L}$ contains the groups formed by bins that can be leaves of full subtrees in the final aggregate tree. Their $v_i$ values are all set to $4t/(3\epsilon)$. Ordering must \emph{always} be deactivated because the final aggregate tree is built considering the order of the bins in $\mathbf{h}$. If Ordering were activated, Grouping could select a group $g_i$, whose bins are not the leaves of a full subtree on $\mathbf{h}$ (since Ordering may permute the bins of $\mathbf{h}$). Therefore, $g_i$ could not determine a group root to smooth a subtree, thus violating the scheme. The Grouping submodule takes budget $\epsilon/4$. The Noise Addition submodule receives $0$ budget; the returned $\mathbf{\hat{h}}_{g}$ contains only the grouping information, which is used later.
Connector $C_1$ receives $\mathbf{h}$, $\mathbf{\hat{h}}_{g}$, budget $3\epsilon/4$ and fan-out $f$. It builds the aggregate tree utilizing $\mathbf{h}$, $\mathbf{\hat{h}}_{g}$, $f$, and considers each tree level $\ell$ as a histogram $\mathbf{h}_\ell$ to be sent to the $\ell^\textrm{th}$ Hierarchy Level module. For every pruned node $j$ in the aggregate tree (i.e., black node in Figure \ref{fig:subtree_example}) at level $\ell$, it sets its scalar to $\mathbf{L}_\ell[j] = 0$, and for any other node $j'$ it sets $\mathbf{L}_\ell[j'] = 1$.
The $\ell^\textrm{th}$ Hierarchy Level module receives the histogram $\mathbf{h}_\ell$. For each bin $b_j$ in $\mathbf{h}_{\ell}$, it adds noise with scale $\frac{4t}{ \alpha_\ell \cdot 3\epsilon \cdot \mathbf{L}_\ell[j]}$. If $b_j$ is a node to be pruned, then $\mathbf{L}_\ell[j]=0$, and $b_j$ is perturbed with infinite noise, while a special annotation is added to its label. This is essentially equivalent to completely disregarding the pruned nodes. Otherwise, $\mathbf{L}_\ell[j]=1$ and the module adds noise with scale $\frac{4t}{ \alpha_\ell \cdot 3\epsilon}$. This procedure ensures that each Hierarchy Level module satisfies $\frac{\alpha_\ell \cdot 3\epsilon}{4t}$-differential privacy (by a direct application of Theorems \ref{theo:laplace_mech} and \ref{theo:comp}).
Connector $C_2$ receives the noisy histograms from the Hierarchy Level modules. First, it assembles a noisy aggregate tree from the histograms. Next, it substitutes the values of the nodes that received infinite noise in the Hierarchy Level modules, with the values derived from their group root, as we explained in the context of Figure~\ref{fig:subtree_example}. Finally, it forwards the resulting tree $\mathbf{\hat{T}}$ to the Query Processor, which answers range-sum queries using the technique of \cite{hay10}.
We next analyze the privacy of the scheme. The Smoothing module spends budget $\epsilon/4$ and satisfies $\epsilon/4$-differential privacy. Each of the $t$ Hierarchy Level modules satisfies $\frac{\alpha_\ell \cdot 3\epsilon}{4t}$-differential privacy as explained above. Moreover, all the modules work on non-disjoint inputs. Due to Theorem~\ref{theo:scheme}, and recalling that the $\alpha_\ell$ values are selected according to \cite{cormode12} such that $\sum_{\ell=1}^t \alpha_\ell=t$, the whole scheme satisfies $\epsilon$-differential privacy.
Finally, the running time of the scheme depends on the error metric $\mu$. Observe that the number of groups examined by Grouping is equal to the number of nodes in the aggregate tree, i.e., $O(n)$. For the case when $\mu$ is the absolute error, the running time of Grouping is $O(n \log n)$, using the smoothing algorithm of \cite{li14}. If $\mu$ is the squared error metric, the complexity is $O(n)$ using our optimal algorithm from Section \ref{sec:grouping}. Each Hierarchy Level module runs in time linear in the number of input nodes, thus, all the $t$ Hierarchy Level modules run collectively in $O(n)$. In our experiments, we demonstrate that the error metric in Subtree Smoothing does not significantly affect the utility. Hence, we fix $\mu$ to the more efficient squared error, which yields total running time $O(n)$.
\medskip
\noindent\textbf{Utility Optimization.}
Instead of completely disregarding the nodes of a pruned subtree, we can actually utilize them to reduce the noise of its root. Specifically, for each level of the pruned subtree, we sum the node values and add noise, producing a noisy estimate of the root. Subsequently, we use the \textit{average} of these estimates as the root noisy value. The mechanism then proceeds as described above, i.e., the root value is distributed evenly among the subtree nodes. This reduces the \textit{squared} error of the root value by a factor of $t'$, where $t'$ is the height of the subtree. We omit the proof due to its simplicity.
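A sketch of this optimization follows, assuming each input level already holds the exact node values of the pruned subtree; the function name is illustrative and the noise scale is simplified to a single per-estimate budget:

```python
import numpy as np

def root_estimate(subtree_levels, eps, rng=None):
    """Sketch of the utility optimization: produce one noisy estimate
    of the subtree root per level (each level sums to the root value),
    and average the estimates. Averaging t' independent estimates cuts
    the squared error by roughly a factor of t'."""
    if rng is None:
        rng = np.random.default_rng()
    estimates = [sum(level) + rng.laplace(scale=1.0 / eps)
                 for level in subtree_levels]
    return sum(estimates) / len(estimates)
```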
\subsection{Smoothed Prefix Sums Scheme}\label{subsec:smoothed_prefix_sums}
This scheme is based on \emph{prefix sums} \cite{ho97}. A prefix sum query over $\mathbf{h}$ is simply described by an index $j$, and returns the sum of bins $b_1, \ldots, b_j$, i.e., $\sum_{i=1}^j \mathbf{h}[i]$. There are $n$ prefix sums, hereafter represented by a vector $\mathbf{s}$ such that $\mathbf{s}[j] = \sum_{i=1}^j \mathbf{h}[i]$ for $j=1, \ldots, n$. Moreover, observe that \emph{any arbitrary} range-sum query can be always computed by the subtraction of \emph{exactly two} prefix sums; for instance, range-sum $[i_l, i_u]$ is answered as $\mathbf{s}[i_u] - \mathbf{s}[i_l-1]$.
The Smoothed Prefix Sums scheme takes advantage of the fact that there are $n$ prefix sums, as opposed to $O(n^2)$ possible range queries, to improve the complexity of DAWA-like methods by a factor of $n$. It considers the prefix sums as the fixed workload $\mathbf{W}$, and produces a noisy histogram $\mathbf{\hat{h}}$. The latter enables the computation of a vector of noisy prefix sums $\mathbf{\hat{s}}$, such that $\mathbf{\hat{s}}[j] = \sum_{i=1}^j \mathbf{\hat{h}}[i]$. Vector $\mathbf{\hat{s}}$ is fed to the Query Processor, which computes in $O(1)$ time any range-sum $[i_l, i_u]$ as $\mathbf{\hat{s}}[i_u] - \mathbf{\hat{s}}[i_l-1]$. Since the Fixed Queries module leads to a highly accurate $\mathbf{\hat{s}}$, the range-sum result is expected to have very low error.
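To make the query-answering step concrete, here is a minimal sketch (the names are ours, not from any released implementation): given a noisy histogram $\mathbf{\hat{h}}$, the noisy prefix sums are accumulated once in $O(n)$, after which every range-sum is answered in $O(1)$ by subtracting exactly two entries.

```python
def prefix_sums(h_hat):
    """Accumulate the noisy histogram into noisy prefix sums (O(n))."""
    total, s_hat = 0.0, []
    for value in h_hat:
        total += value
        s_hat.append(total)
    return s_hat

def range_sum(s_hat, i_l, i_u):
    """Answer range-sum [i_l, i_u] (1-indexed, inclusive) in O(1) by
    subtracting exactly two prefix sums."""
    left = s_hat[i_l - 2] if i_l > 1 else 0.0
    return s_hat[i_u - 1] - left

h_hat = [3.0, 1.0, 4.0, 2.0, 5.0]   # noisy bin counts
s_hat = prefix_sums(h_hat)
print(range_sum(s_hat, 2, 4))       # 1.0 + 4.0 + 2.0 = 7.0
```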
Figure \ref{fig:AM} depicts the Smoothed Prefix Sums scheme. The only differences with respect to the DAWA-like scheme of Figure \ref{fig:DAWA} are (i) the workload $\mathbf{W}$, which now contains the prefix sums, and (ii) an extra connector $C_2$ before the Query Processor, which converts the output histogram $\mathbf{\hat{h}}$ into a prefix sums array $\mathbf{\hat{s}}$.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{AM-eps-converted-to.pdf}
\caption{Smoothed Prefix Sums scheme} \label{fig:AM}
\end{figure}
\begin{figure*}[!b]
\vspace{-0.3cm}
\begin{center}
\centering
\subfigure[\emph{Citations}]{
\includegraphics[width=0.28\linewidth]{cite-eps-converted-to.pdf}
\label{plot:citedist}
}
\subfigure[\emph{Rome}]{
\includegraphics[width=0.28\linewidth]{rome-eps-converted-to.pdf}
\label{plot:romedist}
}
\subfigure[\emph{GoWalla}]{
\includegraphics[width=0.28\linewidth]{gw-eps-converted-to.pdf}
\label{plot:gwdist}
}
\vspace{-0.3cm}
\caption{Data distribution}
\vspace{-0.3cm}
\label{plot:dist}
\end{center}
\vspace{-0.3cm}
\end{figure*}
The Smoothing module satisfies $\epsilon/4$-differential privacy, while the Fixed Queries module satisfies $3\epsilon/4$-differential privacy. Both work on non-disjoint inputs and, therefore, by sequential composition the whole scheme satisfies $\epsilon$-differential privacy. Its time complexity is $O(|\mathbf{W}|n\log n) = O(n^2\log n)$, since now $|\mathbf{W}| = n$. The expected error is at most two times larger than that of DAWA, because Smoothed Prefix Sums subtracts two noisy values from the prefix sums array to answer a range query, while DAWA essentially returns a single value for the same range. However, in our experiments we demonstrate that the utility of Smoothed Prefix Sums is practically the same as that of the DAWA scheme.
A remark concerns the allocation of budget $\epsilon$ to the various modules. In this work, we followed the empirical allocation policies of the existing schemes. Determining the optimal allocation is beyond our scope, but we consider it an interesting problem for future work. Finally, except for ${\sf LPA}$ and the Hierarchical scheme, whose expected error can be expressed theoretically, the rest of the schemes are highly \emph{data-dependent}. Therefore, their utility must be experimentally evaluated under different real settings, a task we undertake in the next section.
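Because the modules operate on non-disjoint inputs, their budgets add under sequential composition. A trivial sketch of the $\epsilon/4$ plus $3\epsilon/4$ split, assuming sensitivity-1 counting queries answered with the Laplace mechanism:

```python
def laplace_scale(sensitivity, eps):
    """Noise scale of the Laplace mechanism for a given budget share."""
    return sensitivity / eps

eps = 1.0
smoothing = laplace_scale(1.0, eps / 4)          # scale 4.0
fixed_queries = laplace_scale(1.0, 3 * eps / 4)  # scale ~1.33

# Sequential composition on non-disjoint inputs: the shares sum to eps.
print(eps / 4 + 3 * eps / 4)  # 1.0
```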
\section{Experimental evaluation}\label{sec:experiments}
In this section we evaluate the methods of Table \ref{tab:summary} in terms of utility and efficiency. ${\sf LPA}$ corresponds to the Laplace Perturbation Algorithm. ${\sf H}$ implements the Hierarchical scheme, using all optimizations of \cite{qardaji13}. ${\sf S_1}$ incorporates the smoothing algorithm of DAWA \cite{li14}, based on the absolute error metric. ${\sf \tilde{S}}$ is the approximate version of ${\sf S_1}$ that considers only a subset of the possible groups \cite{li14}. ${\sf S_2}$ applies smoothing using the squared error metric, utilizing the quadratic algorithm and the utility optimization described in Section \ref{sec:grouping}. ${\sf S_o}$ orders the bin values \cite{zhang14}, and then uses ${\sf S_2}$ for smoothing. ${\sf DAWA}$ \cite{li14} is implemented with all possible range queries as input. ${\sf SUB}$ is our Subtree Smoothing scheme, based on the squared error metric since it offers utility similar to that of the absolute metric while being faster by a $\log n$ factor. It also contains the utility optimization technique described at the end of Section \ref{subsec:subtree_smoothing}. ${\sf SPS}$ is our Smoothed Prefix Sums scheme. For both ${\sf SPS}$ and ${\sf DAWA}$, we use the absolute error metric, but the choice of metric affects neither their utility nor their performance.
Our evaluation includes all the dominant techniques in their respective settings. Specifically, the smoothing module of DAWA (${\sf S_1}$) offers better utility and time complexity than previous methods that are based on the absolute error metric and check all possible groupings \cite{li14}. Its approximate counterpart ${\sf \tilde{S}}$ also dominates its competitor P-HP, which in turn has been shown to outperform EFPA and SF \cite{acs12} (details about these methods can be found in Section \ref{subsec:related}). Among the exhaustive techniques based on the squared error, the state-of-the-art is AHP \cite{zhang14}, which is dominated by ${\sf S_2}$ in terms of running time and utility, as explained in Section \ref{sec:grouping}. Moreover, the approximate methods using the squared error metric have the same quadratic complexity as ${\sf S_2}$, while at best they can reach the same utility. Finally, the optimizations incorporated in ${\sf H}$ have been shown to yield the best hierarchical method in the survey of \cite{qardaji13}.
\begin{table}[t]
\centering
\vspace{-0.2cm}
\caption{Summary of schemes}\label{tab:summary}
\begin{scriptsize}
\begin{tabular}{c | c | c }
\textbf{Scheme} & \textbf{Abbrv} & \textbf{Time} \\
\hline
\hline
Laplace perturbation algorithm & {${\sf LPA}$} & $O(n)$ \\
Hierarchical scheme & ${\sf H}$ & $O(n)$ \\
Smoothing with absolute error metric & {${\sf S_1}$} & $O(n^2 \log n)$ \\
Approximate Smoothing & ${\sf \tilde{S}}$ & $O(n\log^2 n)$ \\
Smoothing with squared error metric & ${\sf S_2}$ & $O(n^2)$\\
Smoothing with ordering & {${\sf S_o}$} & $O(n^2)$ \\
DAWA & ${\sf DAWA}$ & $O(n^3\log n)$ \\
Subtree smoothing & ${\sf SUB}$ & $O(n)$ \\
Smoothed prefix sums & ${\sf SPS}$ & $O(n^2\log n)$ \\
\end{tabular}
\end{scriptsize}
\vspace{-0.5cm}
\end{table}
We implemented all methods of Table \ref{tab:summary} in Java, and conducted experiments on an Intel Core i5 CPU 2.53GHz with 4GB RAM, running Windows 7. Following the literature, we assess utility using the \emph{Mean Squared Error} (MSE), fixing $\epsilon = 1$. The cardinality of the range-sum queries varies between $10\%$ and $50\%$ of the input histogram size. Every reported result is the average of 100 executions, each containing 2000 random queries of the selected cardinality.
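The evaluation protocol can be summarised by the following sketch (our own simplified harness, not the exact experimental code): draw random range-sum queries of a fixed cardinality and average the squared error of a scheme's answers against the true histogram.

```python
import random

def evaluate_mse(h_true, answer, cardinality_frac, n_queries, rng):
    """Mean squared error of answer(i_l, i_u) over random range-sum
    queries whose width is a fixed fraction of the histogram size."""
    n = len(h_true)
    width = max(1, int(cardinality_frac * n))
    prefix = [0.0]
    for value in h_true:
        prefix.append(prefix[-1] + value)
    error = 0.0
    for _ in range(n_queries):
        i_l = rng.randrange(1, n - width + 2)   # 1-indexed start bin
        i_u = i_l + width - 1
        truth = prefix[i_u] - prefix[i_l - 1]
        error += (answer(i_l, i_u) - truth) ** 2
    return error / n_queries

rng = random.Random(0)
h = [float(rng.randrange(50)) for _ in range(200)]
exact = lambda i_l, i_u: sum(h[i_l - 1:i_u])    # noise-free answerer
print(evaluate_mse(h, exact, 0.3, 100, rng))    # 0.0
```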
We used three real datasets, henceforth referred to as \emph{Citations} \cite{cite}, \emph{Rome} \cite{rome}, and \emph{GoWalla} \cite{gowalla}. In \emph{Citations}, we created a histogram of $2414$ bins as in \cite{qardaji13}, where each bin $b_i$ is the number of papers cited $i$ times. A range-sum query $[i_l,i_u]$ returns the papers cited between $i_l$ and $i_u$ times. The \emph{Rome} dataset consists of $14420$ bins, where each bin $b_i$ is the number of cars on a specific road at time instance $i$. A range-sum query asks for the traffic at this road segment during a time interval. Finally, \emph{GoWalla} consists of user check-ins at $2791$ locations. We sorted the locations in ascending order of their $x$-coordinates as in \cite{qardaji13}, and viewed them as histogram bins. A range-sum query returns the number of users in a vertical geographical strip.
\emph{Citations}, \emph{Rome}, and \emph{GoWalla} feature considerably different distributions, depicted in Figures \ref{plot:citedist}, \ref{plot:romedist}, and \ref{plot:gwdist}, respectively. \emph{Citations} is very sparse, and its consecutive bin values are similar, especially for bins that correspond to numerous citations (most such bins have 0 values). \emph{Rome} exhibits high fluctuations at specific contiguous bins (reflecting peak hours), and includes numerous small values (reflecting non-peak hours). Finally, \emph{GoWalla} contains almost random values, since the number of check-ins is independent of the value of the $x$-coordinate.
\begin{figure*}[!h]
\begin{center}
\centering
\subfigure[\emph{MSE}]{
\includegraphics[width=0.28\linewidth]{citemse-eps-converted-to.pdf}
\label{plot:citemse}
}
\subfigure[\emph{Running Time}]{
\includegraphics[width=0.28\linewidth]{citetime-eps-converted-to.pdf}
\label{plot:citetime}
}
\subfigure[\emph{Running Time vs. Error}]{
\includegraphics[width=0.28\linewidth]{skyl-eps-converted-to.pdf}
\label{plot:citesky}
}
\vspace{-0.3cm}
\caption{\emph{Citations}}
\vspace{-0.3cm}
\label{plot:cite}
\end{center}
\vspace{-0.3cm}
\end{figure*}
\begin{figure*}[!h]
\vspace{-0.3cm}
\begin{center}
\centering
\subfigure[\emph{MSE}]{
\includegraphics[width=0.28\linewidth]{romeMSE-eps-converted-to.pdf}
\label{plot:romemse}
}
\subfigure[\emph{Running Time}]{
\includegraphics[width=0.28\linewidth]{rometime-eps-converted-to.pdf}
\label{plot:rometime}
}
\subfigure[\emph{Running Time vs. Error}]{
\includegraphics[width=0.28\linewidth]{skyl2-eps-converted-to.pdf}
\label{plot:romesky}
}
\vspace{-0.3cm}
\caption{\emph{Rome}}
\vspace{-0.3cm}
\label{plot:rome}
\end{center}
\vspace{-0.3cm}
\end{figure*}
Figure \ref{plot:citemse} plots the MSE for \emph{Citations}, when varying the range size (expressed as a fraction of the number of bins). ${\sf SPS}$ and ${\sf DAWA}$ achieve the highest accuracy. The error of ${\sf S_1}, {\sf \tilde{S}}$, and ${\sf SUB}$ is up to two times higher, while that of ${\sf H}, {\sf LPA}$, ${\sf S_2}$ is more than an order of magnitude larger. ${\sf S_o}$ exhibits the worst performance because the noise injected by ordering yields a poor grouping strategy. The low MSE of ${\sf SPS}$ and ${\sf DAWA}$ is mainly due to their effective combination of smoothing and the matrix mechanism. Their almost identical error confirms our claim in Section \ref{subsec:smoothed_prefix_sums} that feeding prefix sums to the matrix mechanism of the Fixed Queries module ($\sf SPS$) leads to the same practical utility as providing all the possible ranges ($\sf DAWA$). In general, all smoothing techniques perform well because consecutive bins have similar values, leading to groups with low error (this dataset yields a small number of large groups). This also explains the marginal difference between ${\sf S_1}$ and ${\sf \tilde{S}}$; ${\sf \tilde{S}}$ can easily find a good grouping strategy even though it does not explore all possible groups. In contrast, ${\sf S_2}$ performs worse than ${\sf S_1}$ and ${\sf \tilde{S}}$, as the squared error metric is sensitive to small fluctuations in the dataset, which leads to unnecessarily small groups. Methods that do not rely on aggregate trees (i.e., ${\sf LPA}$, ${\sf S_1}, {\sf \tilde{S}}$, ${\sf S_2}$ and ${\sf S_o}$) are affected by the range size, as the number of noisy values participating in the calculation of the range-sum (and, thus, the resulting error accumulation) increases linearly with the size. On the other hand, the range size has only a small effect on hierarchical methods, whose error grows logarithmically with it.
Figure \ref{plot:citetime} evaluates the CPU-time as a function of the data size. In order to reduce the data size to a percentage $x\%$, we select the first $x\%$ values and the corresponding bins. $\sf H$ and $\sf LPA$ are the fastest methods, as expected from their linear complexity. $\sf SUB$ is slightly more expensive because of the additional smoothing step at the leaf level of the aggregate tree. The next method in terms of efficiency is ${\sf \tilde{S}}$, with complexity $O(n\log^2 n)$, followed by the quadratic ${\sf S_2}$ and ${\sf S_o}$. ${\sf S_1}$ and ${\sf SPS}$ have almost the same running time due to their identical complexity $O(n^2\log n)$. ${\sf DAWA}$ ($O(n^3\log n)$) is more than an order of magnitude more expensive than any other method.
In order to demonstrate the utility-efficiency trade-off, Figure \ref{plot:citesky} plots the error (x-axis) versus time (y-axis), when fixing the range-sum size to $30\%$ of the bins and using the entire dataset. The best solutions on both aspects lie closest to the axes origin. Although $\sf SPS$ and $\sf DAWA$ feature the best utility, they are also the most expensive. However, $\sf DAWA$ is dominated by $\sf SPS$, which is much more efficient. On the other hand, fast methods such as $\sf LPA$ and $\sf H$ incur high error. In between the two extremes lies $\sf SUB$, which is almost as fast as $\sf H$ and $\sf LPA$, but exhibits $3.5$ times lower error than $\sf H$ and an order of magnitude lower than $\sf LPA$.
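The notion of domination used here is the usual Pareto one: a scheme is dominated if another is at least as good on both error and running time, and strictly better on one. A small sketch with illustrative (not measured) numbers:

```python
def dominated(schemes):
    """Names of schemes Pareto-dominated on (error, time);
    lower is better on both axes."""
    losers = set()
    for name, (err, t) in schemes.items():
        for other, (err2, t2) in schemes.items():
            if (other != name and err2 <= err and t2 <= t
                    and (err2 < err or t2 < t)):
                losers.add(name)
    return losers

# Illustrative numbers only, mirroring the qualitative picture above:
schemes = {"SPS": (1.0, 90.0), "DAWA": (1.0, 900.0),
           "H": (3.5, 0.5), "SUB": (1.8, 0.6), "LPA": (20.0, 0.4)}
print(sorted(dominated(schemes)))  # ['DAWA']
```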
Figure \ref{plot:romemse} assesses the utility of the schemes on the \emph{Rome} dataset. The results for $\sf DAWA$ are omitted, since it failed to terminate within a reasonable time (in fact, we estimated that it would take approximately three months to finish for this dataset). Similar to Figure \ref{plot:citemse}, ${\sf SPS}$ is the best scheme, reducing the error of the next best solutions (${\sf H}$ and ${\sf SUB}$) by up to $70\%$. ${\sf H}$ and ${\sf SUB}$ have almost identical error. Compared to Figure \ref{plot:citemse}, smoothing-based techniques have inferior performance because, due to the high fluctuations of the dataset, it is difficult to find effective grouping strategies. As opposed to \textit{Citations}, \emph{Rome} yields a large number of small groups. ${\sf S_1}$, ${\sf S_2}$ and ${\sf \tilde{S}}$ achieve gains through smoothing only for small ranges. ${\sf S_2}$ outperforms ${\sf S_1}$ and ${\sf \tilde{S}}$ because the squared error distinguishes small fluctuations, whereas the absolute error erroneously merges small groups into larger ones. Similarly, ${\sf \tilde{S}}$ results in up to $50\%$ worse error than ${\sf S_1}$, and up to an order of magnitude worse than ${\sf S_2}$, because the arbitrary set of groups examined by the former does not include the groups that minimize the error. Finally, ${\sf S_o}$ and ${\sf LPA}$ are the worst techniques.
Figure \ref{plot:rometime} measures the CPU-time versus the data size. The results and the relative order of the schemes are consistent with Figure \ref{plot:citetime}. $\sf DAWA$ can only run for up to $40\%$ of the dataset. Figure \ref{plot:romesky} plots the error versus efficiency for \textit{Rome}. Again $\sf SPS$ and $\sf LPA$ lie at the two extremes of utility and efficiency, respectively. $\sf SUB$ and $\sf H$ provide the best trade-off, dominating all the schemes but $\sf SPS$.
Figure \ref{plot:gwmse} depicts the MSE on \emph{GoWalla} as a function of the range size. \emph{GoWalla} features almost random bin values. Consequently, smoothing spends privacy budget without finding groups that lead to noise reduction. The schemes that depend solely on smoothing, i.e., ${\sf S_1}$, ${\sf S_2}$, ${\sf S_o}$ and ${\sf \tilde{S}}$, are outperformed even by ${\sf LPA}$. On the other hand, ${\sf SUB}$, ${\sf DAWA}$ and ${\sf SPS}$ are more robust to the dataset characteristics, since they inherit the benefits of the aggregate tree (${\sf SUB}$) and the matrix mechanism (${\sf DAWA}$ and ${\sf SPS}$). For this dataset, a simple aggregate tree generated by ${\sf H}$ is the best method.
\begin{figure*}[!h]
\begin{center}
\centering
\subfigure[\emph{MSE}]{
\includegraphics[width=0.28\linewidth]{gwMSE-eps-converted-to.pdf}
\label{plot:gwmse}
}
\subfigure[\emph{Running Time}]{
\includegraphics[width=0.28\linewidth]{gwtime-eps-converted-to.pdf}
\label{plot:gwtime}
}
\subfigure[\emph{Running Time vs. Error}]{
\includegraphics[width=0.28\linewidth]{skyl3-eps-converted-to.pdf}
\label{plot:gwsky}
}
\vspace{-0.3cm}
\caption{\emph{GoWalla}}
\vspace{-0.3cm}
\label{plot:gw}
\end{center}
\vspace{-0.3cm}
\end{figure*}
Figure \ref{plot:gwtime} plots the CPU-time. The relative performance is similar to the previous diagrams. ${\sf DAWA}$ terminates because \emph{GoWalla} is smaller than \emph{Rome} (2791 versus 14420 bins). Figure \ref{plot:gwsky} shows the error-efficiency trade-off for \textit{GoWalla}. $\sf H$ dominates all solutions in both aspects, whereas $\sf SUB$ lies close to $\sf H$.
\medskip
\noindent \textbf{Summary.} Our experiments demonstrate that data-aware techniques lead to considerable error reductions for datasets that have similar values in consecutive bins. However, the gains of smoothing vanish in datasets with numerous high fluctuations. In these scenarios, data-oblivious methods are preferable because they do not waste privacy budget on smoothing. In between these two extremes lie schemes that integrate smoothing with other modules (i.e., ${\sf SUB}$ and ${\sf SPS}$), and are more robust to the dataset characteristics. Specifically, ${\sf SPS}$ performed identically to ${\sf DAWA}$ in terms of utility, while reducing the complexity by a factor of $n$ to $O(n^2 \log n)$. On the other hand, for time-critical applications (e.g., real-time traffic), where even ${\sf SPS}$ may be too slow, ${\sf SUB}$ achieves comparable accuracy, while having the lowest running time ($O(n)$) among all the data-aware methods.
\section{Conclusion}\label{sec:conclusion}
This paper introduces a modular framework for differentially private histogram publication. We first express existing methods in the framework, and identify opportunities for optimization. We then design a new optimal algorithm for smoothing, which improves utility and reduces the running time of the current state-of-the-art. Next, we develop new schemes that combine heterogeneous privacy techniques, previously deemed unrelated. Finally, we experiment on three datasets with diverse characteristics. Our results confirm that our modular approach enables the design of schemes that (i) are tailored to the data characteristics of the application at hand, and (ii) offer a desirable tradeoff between efficiency and utility.
\bibliographystyle{abbrv}
\section*{Introduction}
Sharp discontinuities separating regions with different physical
parameters are a key feature of space, astrophysical and laboratory
plasmas. Their dynamics about a pressure-balanced equilibrium, known as magnetohydrodynamic
(MHD) surface waves, act as an efficient mechanism for filtering, accumulating,
and guiding the turbulent disturbances omnipresent within and between
their respective systems. Surface waves have been observed and modelled
within tokamak experiments \cite{connor98}, plasma tori surrounding
planets \cite{He2020}, the solar atmosphere (e.g. in coronal loops
\cite{li13}), the heliopause \cite{baranov92}, accretion disks \cite{stehle99},
and astrophysical/relativistic jets \cite{zhao92} to name a few.
This makes understanding surface waves of universal importance.
While many of these environments can only be remotely sensed, planetary
magnetospheres (particularly that of Earth) provide the opportunity
of \textit{in situ} measurements of surface wave processes. The motion
of the outer boundary of a magnetosphere, the magnetopause, is of
primary importance in dictating global magnetospheric dynamics since
it controls the flow of mass, energy, and momentum from the solar
wind into the terrestrial system having direct and indirect space
weather impacts on the radiation belts, auroral oval, and ionosphere
\cite{summers13,elkington06,keiling16}. Surface waves on a planetary
magnetopause, which occupy the lower ends of the so-called ultra-low
frequency range (ULF; fractions of milliHertz to a few Hertz), are
excited either by pressure imbalances (typically on the dayside) or
flow shears (on the flanks) \cite{pu83,kivelson95}. Magnetopause
surface waves are thus similar to surface waves on bodies of water,
which form due to and travel in the direction of the wind \cite{phillip06,miles06}.
Since magnetopause surface waves impart momentum on the magnetosphere,
the antisunward flow of the external driver --- the shocked solar
wind --- has led to a well-accepted paradigm of the tailward propagation
of outermagnetospheric ULF waves at all local times \cite{samson71,leonovich16,plaschke16,hwang16}.
Surface waves may subsequently become non-linear via the Kelvin-Helmholtz
instability at the magnetotail (when sufficient free energy is present
to overcome magnetic tension or plasma compressibility) forming vortices
which undergo magnetic reconnection, transporting mass across the
boundary \cite{nykyri01,ma14}. The paradigm of tailward propagation
in magnetospheric dynamics is thought to hold even in response to
the rather common impulsive events that drive intense space weather
\cite{sibeck90}, for example large-scale solar wind pressure pulses
and shock waves\cite{zuo15,villante16} or smaller ($\mathrm{R_{E}}$
scale or less) kinetically-generated bow shock phenomena like magnetosheath
jets \cite{plaschke18}. The models predict an exception, in agreement
with several observations, only at the early post-noon magnetopause,
since pressure fronts aligned with the Parker spiral interplanetary
magnetic field (IMF) strike this region before the pre-noon sector.
Reported instances of sunward propagating ULF waves have been attributed
to internal processes, such as energetic particle instabilities \cite{Constantinescu09}
or changes in the magnetotail configuration \cite{nielsen84,eriksson08}.
In physics, a common approach to understanding a dynamical system
is to determine its normal modes. These form in a magnetosphere when
system-scale MHD waves become trapped through reflection by boundaries
or turning points. Transverse Alfv\'{e}n waves, propagating along
field lines due to magnetic tension forces, are reflected by the highly
conductive ionosphere resulting in standing waves akin to those on
a guitar string \cite{southwood74}. Fast magnetosonic waves, driven
by correlated thermal and magnetic pressure gradients, trapped radially
form so-called cavity/waveguide modes \cite{kivelson84,kivelson85},
somewhat similar to the resonances of wind instruments. These magnetospheric
normal modes due to MHD body waves have been well studied and tend
to conform to the aforementioned paradigm \cite{samson71,leonovich16,plaschke16,hwang16}.
However, it had long been proposed that magnetopause surface waves
propagating along the terrestrial magnetic field in response to impulsive
pressure variations might too reflect at the northern and southern
ionospheres, forming a magnetopause surface eigenmode (MSE) somewhat
analogous to the vibrations of a drum's membrane \cite{chen74}. The
theory of these standing waves has been developed using ideal incompressible
MHD in box model magnetospheres \cite{plaschke11}. Despite their
simplicity, these models have been able to reproduce many features
captured by more advanced global MHD simulations \cite{hartinger15}.
For example, MSE frequencies near the subsolar magnetopause can be
approximated in the limit $k_{\phi}\ll k_{\parallel}$ as (equation~6
of \cite{archer15}, using pressure balance at the magnetopause)
\begin{align}
\omega & \approx k_{\parallel}\frac{B_{sph}}{\sqrt{\mu_{0}\rho_{msh}}}\label{eq:mse-frequency}\\
& \approx k_{\parallel}\sqrt{2\frac{\rho_{sw}}{\rho_{msh}}}v_{sw}\label{eq:mse-frequency-sw}
\end{align}
for angular frequency $\omega$, wavenumber $k$, magnetic field strength
$B$, mass density $\rho$ and speed $v$ with subscripts $sw$, $msh$,
and $sph$ corresponding to the solar wind, magnetosheath, and magnetosphere
respectively. MSE thus constitute the lowest frequency normal mode
of the magnetospheric system, given the smaller phase speeds and wavenumbers
than other modes. Indeed, equation~\ref{eq:mse-frequency-sw} yields
fundamental frequencies below $2\,\mathrm{mHz}$ and thus evanescent
scales that penetrate deep into the dayside magnetosphere \cite{archer15,hartinger15}.
However, MSE are thought to be strongly damped due to the finite thickness
of the boundary, perhaps persisting for only a few wave periods \cite{chen74,hartinger15,kozyreva19}.
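As a rough numerical check of equation~\ref{eq:mse-frequency-sw}, the sketch below uses representative (assumed, not event-specific) values for the solar wind speed, density ratio, and field-line length; the resulting fundamental frequency indeed falls below $2\,\mathrm{mHz}$.

```python
import math

R_E = 6.371e6          # Earth radius [m]
v_sw = 400e3           # solar wind speed [m/s]         (assumed)
rho_ratio = 0.25       # rho_sw / rho_msh               (assumed)
L = 30 * R_E           # magnetopause field-line length (assumed)

k_par = math.pi / L    # fundamental standing-wave wavenumber
omega = k_par * math.sqrt(2 * rho_ratio) * v_sw   # equation (2)
f_mHz = omega / (2 * math.pi) * 1e3
print(f_mHz)           # below 2 mHz
```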
Direct evidence of MSE was discovered only recently \cite{archer19},
likely due to the observational challenges in unambiguously demonstrating
such a low frequency normal mode has been excited. Fortuitous multi-spacecraft
observations of the response to an isolated, broadband magnetosheath
jet revealed narrowband magnetopause oscillations and magnetospheric
ULF waves that were in excellent agreement with the theoretical predictions
of MSE and could not be explained by other known mechanisms. These
observations in the mid--late morning sector strikingly showed no
azimuthal motion of the boundary despite the expectation that surface
waves be advected tailward \cite{pu83,sibeck90}, hinting that this
eigenmode may challenge the usual paradigm. It is currently unclear
how to reconcile this with current models, especially since (unlike
meridionally) there is no boundary azimuthally for surface waves to
reflect against to reverse course.
In this paper we address this conundrum by considering surface waves'
energy flux through spacecraft observations, global MHD simulations,
and analytic MHD theory in order to explain the resonant response
of the magnetospheric system globally. We show that magnetopause surface
waves propagate against the flow forming an azimuthally stationary
wave across a wide local time range.
\section*{Results}
\subsection*{Spacecraft observations}
We use Time History of Events and Macroscale Interactions (THEMIS)
\cite{angelopoulos08} spacecraft observations from satellites A--E
(THA--THE) during the previously reported event of MSE triggered
by a magnetosheath jet on 07~August~2007. See the spacecraft observations
section of Methods for further details of instruments and techniques
employed. The spacecraft were located at $\sim$09:30~MLT (magnetic
local time) in a string-of-pearls formation. For context, at 22:25~UT
an isolated $\sim100\,\mathrm{s}$ magnetosheath jet occurred upstream
of the magnetopause which was followed by a period of $\sim18\,\mathrm{min}$
with little pressure variations (demarked by vertical dotted lines)
until another jet occurred at 22:45~UT (Figure~2d of \cite{archer19}).
The magnetopause moved in response to the jet, undergoing two boundary
oscillations at $1.8\,\mathrm{mHz}$ corresponding to the fundamental
mode MSE (Figures 2g and 3b of \cite{archer19}). Figure~\ref{fig:spacecraft-timeseries}
shows magnetospheric observations by the THA (panels a--i) and THE
(panels j--r) spacecraft of the magnetic (panels~a, j), velocity
(panels~c, l), and electric fields (panels~e, n). Dynamic spectra
of these using the continuous wavelet transform can also be found
in Supplementary Figure~1 revealing the $1.8\,\mathrm{mHz}$ fundamental
mode MSE (clearest in the compressional magnetic field components
at both spacecraft) and $3.3\,\mathrm{mHz}$ second harmonic MSE (e.g.
in the perpendicular components of the magnetic field), as well as
a $6.7\,\mathrm{mHz}$ fundamental toroidal standing Alfv\'{e}n wave
at THA (azimuthal velocity / radial electric field) \cite{archer19}.
THD spacecraft observations proved similar to THE, and the other spacecraft
encountered the magnetosheath too often for use here. We aim to measure
the Poynting vector and corresponding energy velocity associated with
MSE, concepts which are further explained in the Poynting's theorem
for MHD waves section of Methods. This necessitates extracting the
associated wave perturbations from the data, removing noise and other
signals as described in the time-based filtering section of Methods,
resulting in the filtered magnetic (panels~b, k), velocity (panels~d,
m), and electric fields (panels~f, o) shown in Figure~\ref{fig:spacecraft-timeseries}.
These are then used to determine energy densities and fluxes. Between
the two dotted lines (which indicate the times of little upstream
pressure variations) all spacecraft observed time-averaged Poynting
vectors with components consistently azimuthally eastward and a slight
tendency towards radially outwards too (panels~g, p). This was also
evident at the MSE frequencies in the Poynting vectors computed using
the wavelet transforms (see the Fourier and wavelet techniques section of Methods
for details) which are shown in Supplementary Figure~1 in time-frequency
and in Supplementary Figure~2 as a function of frequency by averaging
over the interval. The average Poynting directions at each spacecraft
location are shown in Figure~\ref{fig:spacecraft-locations}, showing
excellent agreement across all spacecraft in the equatorial plane
(panel~a). These observations show MSE do not conform to the typical
ULF wave paradigm of tailward propagation \cite{sibeck90,keiling16}
(waveguide modes' Poynting fluxes are directed tailward or have no
net azimuthal component \cite{elsden15,elsden19,wright20}; Kelvin-Helmholtz
generated surface waves travel tailward and radiate energy into the
magnetosphere \cite{juninger85,sakurai01}). The wave energy density
(Figure~\ref{fig:spacecraft-timeseries}~h, q) is dominated by the
magnetic component, though the kinetic energy becomes comparable later
in the interval. The waves' azimuthal energy velocity (Figure~\ref{fig:spacecraft-timeseries}~i,
r) is comparable to the flow speed in the magnetosheath (absolute
value in grey) but oppositely directed, as indicated in Figure~\ref{fig:spacecraft-locations}a,
suggesting the two forms of opposing energy flux might balance one
another out and result in no net energy flow, i.e. an azimuthally
stationary wave. This potentially may be behind the lack of azimuthal
propagation in the observed boundary motion during this interval \cite{archer19}
and may hold the key to how MSE are even possible away from the noon
meridian.
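The quantities in Figure~\ref{fig:spacecraft-timeseries}g--i can be sketched as follows (our simplified version, assuming a single-fluid wave energy density of magnetic plus kinetic terms): the Poynting vector $\mathbf{S}=\mathbf{E}\times\mathbf{B}/\mu_{0}$ is computed from the filtered fields, and the energy velocity is its time average divided by the mean wave energy density.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def poynting(E, B):
    """Instantaneous Poynting vector S = (E x B) / mu0 for one sample
    of the filtered wave fields E [V/m] and B [T]."""
    return tuple(c / MU0 for c in cross(E, B))

def energy_velocity(E_series, B_series, v_series, rho):
    """Time-averaged <S> / <W> with wave energy density
    W = |b|^2 / (2 mu0) + rho |v|^2 / 2 (magnetic + kinetic)."""
    n = len(E_series)
    S_avg, W_avg = [0.0, 0.0, 0.0], 0.0
    for E, B, v in zip(E_series, B_series, v_series):
        S = poynting(E, B)
        S_avg = [s + si / n for s, si in zip(S_avg, S)]
        W_avg += (sum(b * b for b in B) / (2 * MU0)
                  + 0.5 * rho * sum(vi * vi for vi in v)) / n
    return [s / W_avg for s in S_avg]

# One-sample check: E along y and B along z give energy flux along x.
u = energy_velocity([(0.0, 1.0, 0.0)], [(0.0, 0.0, 2.0)],
                    [(0.0, 0.0, 0.0)], rho=1.0)
print(u)
```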
\begin{figure*}
\begin{centering}
\noindent \makebox[\textwidth]{\includegraphics{fig1}}
\par\end{centering}
\caption{\textbf{THEMIS spacecraft time series observations in the magnetosphere}.
Shown for THA (panels a--i) and THE (panels j--r). From top to bottom
the first set of panels show perturbations in the magnetic (a--b,
j--k), ion velocity (c--d, l--m), and electric (e--f, n--o) fields.
In these vertical pairs, top panels show the raw (thin) and LOESS
smoothed (thick) data, whereas the bottom panels show the latter once
detrended. Subsequent panels depict the Poynting vector (g, p) and
energy density (h, q), showing instantaneous (thin) and time-averaged
(thick) values. Finally the energy velocity (i, r) is shown compared
to the absolute magnetosheath flow speed at THB (grey). Throughout,
standard errors are indicated by shaded areas. Vertical dotted lines
demarcate the times of little upstream pressure variations following
the isolated impulsive jet that triggered this event. \label{fig:spacecraft-timeseries}}
\end{figure*}
\begin{figure}
\begin{centering}
\noindent \makebox[\textwidth]{\includegraphics{fig2}}
\par\end{centering}
\caption{\textbf{Directions of the time-averaged Poynting vectors at each THEMIS
spacecraft location}. These are displayed in the $z_{GSM}=-2.1\,\mathrm{R_{E}}$
(a) and $y_{GSM}=-5.3\,\mathrm{R_{E}}$ (b) planes. Coloured squares
represent the THEMIS spacecraft positions. Black arrows emanating
from these indicate normalised Poynting vectors. Grey arrows depict
the magnetosheath flow velocity direction. A representative geomagnetic
field line (grey) and model magnetopause location (black) are also
shown.\label{fig:spacecraft-locations}}
\end{figure}
Since surface waves are formulated as collective magnetosonic waves
on both sides of the boundary, the component of the Poynting vector
towards the magnetopause might be understood in terms of the magnetosonic
dispersion relation (equation 7 of \cite{pu83})
\begin{equation}
k_{r}^{2}=-k_{\phi}^{2}-k_{\parallel}^{2}+\frac{\omega^{4}}{\omega^{2}v_{A}^{2}+c_{s}^{2}\left(\omega^{2}-\left[\mathbf{k}\cdot\mathbf{v}_{A}\right]^{2}\right)}\label{eq:dispersion}
\end{equation}
, where $\mathbf{v}_{A}$ is the Alfv\'{e}n velocity and $c_{s}$
the speed of sound. Under incompressibility the last term is negligible,
resulting in a purely imaginary $k_{r}$ and thus evanescence over
similar scales to the length of the geomagnetic field lines. We relax
this assumption and use a complex frequency $\omega=\omega_{\mathfrak{Re}}-i\gamma$
with damping rate $\gamma>0$, since surface waves on a boundary of
finite thickness are thought to be damped \cite{chen74}. For the
magnetospheric side, the phase of the last term in equation~\ref{eq:dispersion}
is negative (approximately twice that of $\omega$ as the plasma beta
is small) and thus $k_{r}^{2}$ has a negative imaginary component.
This implies, for a physically reasonable solution with zero amplitude
at infinity, $k_{r}$ should have a small real component pointed towards
the magnetopause. We estimate that a damping ratio $\gamma/\omega_{\mathfrak{Re}}=0.15$
should result in radial phase velocity components of $\sim10\,\mathrm{km}\,\mathrm{s}^{-1}$
(and between $1\text{\textendash}60\,\mathrm{km}\,\mathrm{s}^{-1}$
for $\gamma/\omega_{\mathfrak{Re}}=0.02\text{\textendash}1$ \cite{hartinger15,kozyreva19}),
i.e. considerably smaller than the Alfv\'{e}n speed of $\sim1000\,\mathrm{km}\,\mathrm{s}^{-1}$
and consistent with the average observed radial energy velocities
of $9\text{\textendash}46\,\mathrm{km}\,\mathrm{s}^{-1}$. This sense
of propagation is opposite to what is expected for a Kelvin-Helmholtz
unstable boundary, where the sign of $\gamma$ is reversed (being
a growth rate) and thus results in energy radiating into the magnetosphere.
By conservation of energy flux across the boundary, the Poynting vector
component towards the boundary would imply that damped magnetopause
surface waves lose some of their energy to the magnetosheath. This
energy pathway would be in addition to the theorised irreversible
conversion of surface wave energy to the Alfv\'{e}n continuum \cite{kozyreva19}.
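This sign argument can be sketched numerically by evaluating equation~\ref{eq:dispersion} with a complex frequency. The parameter values below (Alfv\'{e}n speed, sound speed, field-line length) are illustrative assumptions rather than values fitted to this event, and $\mathbf{k}\cdot\mathbf{v}_{A}$ is approximated by $k_{\parallel}v_{A}$:

```python
import cmath
import math

def k_r_squared(omega, k_phi, k_par, v_a, c_s):
    # Magnetosonic dispersion relation for k_r^2 (equation 7 of Pu & Kivelson),
    # evaluated for a possibly complex frequency omega; k.v_A ~ k_par * v_a here
    last = omega**4 / (omega**2 * v_a**2 + c_s**2 * (omega**2 - (k_par * v_a)**2))
    return -k_phi**2 - k_par**2 + last

# Illustrative (assumed) magnetospheric parameters, not fitted to the event:
v_a = 1.0e6                      # Alfven speed [m/s]
c_s = 1.5e5                      # sound speed [m/s], low plasma beta
k_par = math.pi / 2.5e8          # fundamental standing wave on a ~40 R_E field line [1/m]
w_re = 2.0 * math.pi * 1.8e-3    # fundamental MSE angular frequency [rad/s]

# Without damping, k_r^2 is purely real and negative (pure evanescence);
# with gamma/omega = 0.15, k_r^2 gains a negative imaginary part
undamped = k_r_squared(w_re, 0.0, k_par, v_a, c_s)
damped = k_r_squared(w_re * (1.0 - 0.15j), 0.0, k_par, v_a, c_s)

# The decaying root of k_r is then no longer purely imaginary: its real
# component corresponds to slow phase propagation towards the boundary
k_r = cmath.sqrt(damped)
```

The quantitative phase speed depends sensitively on the assumed parameters; the sketch only demonstrates the sign structure used in the argument above.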
THE and THD also observed significant field-aligned energy flux, which
was seemingly less prominent at THA. One might naively expect no field-aligned
energy flux for a standing surface wave. However, considering this
is a dynamical mode involving surface waves reflecting and interfering
along the field under asymmetric conditions and driving, a resultant
flux in this direction may be expected. For example, reflection at
the ionospheres is not perfect nor is the absorption north-south symmetric
\cite{southwood00}. This will yield a superposition of standing and
propagating waves with a ``null point'', shifted slightly from the
standing wave's nodes/antinodes, either side of which some resultant
wave energy propagates to the respective ionospheres where it is dissipated
\cite{allan82}. The field-aligned flux at THE and THD peaks approximately
one MSE bounce time after the driving jet. The corresponding energy
velocity is consistent with the surface wave phase speed (equation~\ref{eq:mse-frequency}).
It therefore seems plausible that these signatures are due to both
the dipole tilt (Figure~\ref{fig:spacecraft-locations}b) resulting
in different reflectances in both hemispheres and the localised driver
exciting multiple harmonics with different relative phases causing
shifts in the interference pattern. To the first point, the dipole
tilt for this event was $17^{\circ}$, hence different conductances
in the northern and southern ionospheres would be expected. Further,
THA's footpoint ($66^{\circ}$ geomagnetic latitude) could also have
a different conductance to that for THD and THE ($71^{\circ}$, i.e.
near/in the auroral oval) \cite{ridley04}, which could result in
a difference in the proportion of wave energy reflected back to the
spacecraft's respective locations. To the second point, the wavelet
transform demonstrates differences in the field-aligned Poynting fluxes
for the two harmonics, most clearly shown in Supplementary Figure~2
where averaging over time has been applied. The time-averaged Poynting
flux at the fundamental MSE frequency of $1.8\,\mathrm{mHz}$ has
a component in the direction of the geomagnetic field at all spacecraft,
indicating the spacecraft were all located above this harmonic's ``null
point''. The directions of the Poynting vectors agreed to within $26^{\circ}$
and are thus consistent once noise is taken into account. However, at the
second harmonic MSE of $3.3\,\mathrm{mHz}$, while the Poynting vectors'
projections in the equatorial plane are similar (to within $9^{\circ}$),
along the field we find that THE/THD observed southward fluxes whereas
at THA they were slightly northward (though not statistically distinguishable
from noise). A second harmonic wave has a node in displacement near
the equator, thus at this frequency the spacecraft observations are
sensitive to which side of the ``null point'' they are located.
In addition, shorter wavelength surface waves penetrate less deeply
into the magnetosphere (equation~\ref{eq:dispersion}) which would
weaken the signal at THA's location. We conclude that THA was very
close to the second harmonic MSE's ``null point'' whereas THE/THD
were slightly below it. Nonetheless, the main results of interest in
this paper, i.e. the radial and azimuthal fluxes, are in good
agreement across all spacecraft at both frequencies. MSE's field-aligned
energy flux may have implications for energy deposition in the ionosphere
and warrants investigation in future work.
The above analysis was limited to a relatively short interval of confirmed
MSE activity following an isolated magnetosheath jet. However, several
other jets were also observed on this day and it was noted that similarly
directed Poynting vectors followed many of them. We therefore take
a wider interval and compute the time-averaged Poynting vector as
a function of frequency, as detailed in the Fourier and wavelet techniques
section of Methods. This was performed for THA as it was the only
spacecraft to experience uninterrupted magnetospheric observations.
Supplementary~Figure~3 shows that at MSE frequencies (dotted lines)
the radial and azimuthal Poynting vector components were statistically
significant and in agreement with the previous results, namely outward
and eastward. The parallel Poynting flux is positive at MSE frequencies
indicating that THA was overall located above the respective ``null
points'' of these waves \cite{allan82}. We note that there is a
reversal of the parallel Poynting flux around the local Alfv\'{e}n
mode frequency ($6.7\,\mathrm{mHz}$), thus unrelated to MSE. The typical
tailward energy flow paradigm emerges only at much higher frequencies
($>10\,\mathrm{mHz}$).
\subsection*{Global MHD simulations}
To further test our hypotheses from the THEMIS observations, global
MHD simulations of the global magnetospheric response to a $1\,\mathrm{min}$
large-scale solar wind density pulse are now employed (see Global
MHD simulations in Methods). This reproduces a previous simulation
\cite{hartinger15}, where the subsolar response could only be explained
by MSE and not other mechanisms. The normal displacement of the magnetopause
in the XY plane is shown in Figure~\ref{fig:Magnetopause-motion}a.
This highlights that the dayside magnetopause undergoes a strong compression
when the pulse arrives, rebounds back towards equilibrium (dashed
line) but overshoots, and subsequently undergoes damped oscillations.
Results are identical on both flanks due to the symmetry of the system
and driver. The oscillations' primary frequency is $1.4\,\mathrm{mHz}$
at all local times (panel~b), consistent with a fundamental MSE \cite{hartinger15}.
A secondary peak in the spectra, not previously reported, grows further
downtail between $2.5\text{\textendash}3.3\,\mathrm{mHz}$. Both spectral
peaks are associated with the damped oscillations and not the broadband
initial compression/rebound motions, as checked by a wavelet transform.
The secondary mode is likely due to (and at the frequency which maximises)
the Kelvin-Helmholtz instability given the increasing flow shear across
the boundary down the flanks \cite{merkin13}. Both modes become larger
in amplitude (panel~b and inset) and persist longer (panel~a) further
downtail, though the primary mode is always dominant. This suggests
that MSE at $1.4\,\mathrm{mHz}$, which originates on the dayside
magnetopause, seeds fluctuations which subsequently grow via Kelvin-Helmholtz
in the flanks despite not being at the instability's peak growth frequency.
\begin{figure*}[p]
\begin{centering}
\noindent \makebox[\textwidth]{\raisebox{0pt}[23cm]{\includegraphics{fig3}}}
\par\end{centering}
\caption{\textbf{Magnetopause motion in MHD simulation}. a) Normal displacement
with magnetic local time (MLT). b) Spectra of the displacement normalised
by total power for each MLT (colour scale), with the root mean squared
(RMS) also given inset. Azimuthal slowness (c) and effective wavenumber
(d) of filtered magnetopause perturbations for the raw data (lighter)
as well as after applying MLT smoothing (darker), with corresponding
standard errors shown in both cases as shaded areas.\label{fig:Magnetopause-motion}}
\end{figure*}
We now investigate the propagation of the $1.4\,\mathrm{mHz}$ magnetopause
oscillations, extracted as described in the time-based filtering section
of Methods. From Figure~\ref{fig:Magnetopause-motion}a it appears
that across much of the dayside the waves do not propagate azimuthally
(the phase fronts are vertical), whereas it is clear in the flanks
that tailward propagating waves are present (inclined fronts, see
also Supplementary Movie~1). Here we quantify this propagation via
the azimuthal slowness $s_{\phi}$ (reciprocal of apparent phase speed,
see slowness in Methods) since the slowness vector is always normal
to phase fronts \cite{gaiser90}. The results are shown in Figure~\ref{fig:Magnetopause-motion}c.
This reveals that between $\sim$09--15h~MLT the slowness is zero
and thus the surface wave is apparently azimuthally stationary in
this region. Further down both flanks though the usual tailward motion
is recovered. It may be instructive to express this as an effective
local azimuthal wavenumber $m_{eff}=s_{\phi}\omega r_{mp}\left(\phi\right)$, shown
in panel~d ($r_{mp}\left(\phi\right)$ is the magnetopause geocentric
distance at each azimuth). Care must be taken in interpreting these
since the magnetopause crosses L-shells and is not azimuthally symmetric,
so the dependence cannot be expressed simply as $\exp\left(im\phi\right)$
everywhere. Instead a superposition of wavenumbers will be present,
with $m_{eff}$ capturing the local azimuthal propagation of the overall
phase \cite{degeling14}. $\left|m_{eff}\right|$ is zero in the stationary
wave region, rises slowly to $\sim0.5$ by the terminator, then more
rapidly increases to $\sim1$ within a further 2~h of LT. This global
structure cannot be attributed to the driver, since the intersection
of the pressure pulse with the magnetopause on arrival spans 08--16h~MLT,
i.e. larger than the stationary region.
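The conversion from slowness to effective wavenumber is simple arithmetic, sketched below; the flank magnetopause distance used in the round-trip example is a hypothetical value, not one extracted from the simulation:

```python
import math

R_E = 6371.0  # Earth radius [km]

def effective_wavenumber(slowness_s_per_km, freq_mhz, r_mp_re):
    # m_eff = s_phi * omega * r_mp: local effective azimuthal wavenumber
    # from the azimuthal slowness [s/km], wave frequency [mHz], and
    # magnetopause geocentric distance [R_E]
    omega = 2.0 * math.pi * freq_mhz * 1e-3   # [rad/s]
    r_mp = r_mp_re * R_E                      # [km]
    return slowness_s_per_km * omega * r_mp

# Round trip: the slowness that yields m_eff = 1 at 1.4 mHz for an assumed
# flank magnetopause distance of 15 R_E (apparent phase speed ~840 km/s)
s_phi = 1.0 / (2.0 * math.pi * 1.4e-3 * 15.0 * R_E)
m = effective_wavenumber(s_phi, 1.4, 15.0)
```

A zero slowness (vertical phase fronts) maps to $m_{eff}=0$, the azimuthally stationary case.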
\begin{figure}
\begin{centering}
\noindent \makebox[\textwidth]{\includegraphics{fig4}}
\par\end{centering}
\caption{\textbf{Unfiltered perturbations in MHD simulation along the Sun-Earth
line}. Panel~a shows motion of the bow shock (grey) and magnetopause
(black) about equilibrium. Subsequent panels show perturbations in
the b) radial velocity, c) azimuthal velocity, and d) compressional
magnetic field components (note the bi-symmetric log scale). Median
absolute perturbations by distance are displayed to the right as the
dark grey areas on a logarithmic scale. The bow shock (light grey)
and magnetopause (black) locations are also plotted.\label{fig:subsolar}}
\end{figure}
\begin{figure}
\begin{centering}
\noindent \makebox[\textwidth]{\includegraphics{fig5}}
\par\end{centering}
\caption{\textbf{Unfiltered perturbations in MHD simulation along the equatorial
terminator}. Formatting is the same as Figure~\ref{fig:subsolar}.
Panel~a shows motion of the magnetopause (black) about equilibrium.
Subsequent panels show perturbations in the b) radial velocity, c)
azimuthal velocity, and d) compressional magnetic field components
(note the bi-symmetric log scale). Median absolute perturbations by
distance are displayed to the right as the dark grey areas on a logarithmic
scale. The magnetopause (black) location is also plotted.\label{fig:terminator}}
\end{figure}
We now look at the grid point data within the simulation. Supplementary
Movie~1 shows the compressional magnetic field perturbations in the
XY and XZ planes. Figure~\ref{fig:subsolar} shows boundary (panel~a),
radial (b) and azimuthal (c) velocity, and compressional magnetic
field (d) perturbations along the Sun-Earth line. These demonstrate
that the arrival of the pressure pulse and inward magnetopause motion
launches a compressional wave into the dayside magnetosphere which
reflects at/near the simulation's inner boundary and subsequently
leaks into the magnetosheath where it dissipates. This all happens
within $\sim2\,\mathrm{min}$ (i.e. before the magnetopause has finished
rebounding) in agreement with the magnetosonic speed profile. Such
a short timescale provides further evidence (in addition to that in
\cite{hartinger15}) that the subsequent magnetopause oscillations
on the dayside cannot be attributed to cavity/waveguide modes as the
lowest frequency (quarter wavelength \cite{mann99}) mode should be
$\gtrsim4\,\mathrm{mHz}$. Azimuthal velocities are negligible, hence
there is no evidence of toroidal Alfv\'{e}n waves in this region.
The MSE signatures instead are radial plasma motions and associated
compressions/rarefactions of the magnetic field, both of which decay
in amplitude with distance from the magnetopause as indicated by the
median absolute perturbations (grey areas in Figure~\ref{fig:subsolar}).
The phase fronts, however, are not purely evanescent and can be seen
propagating towards the magnetopause on the magnetospheric side. This
occurs rather slowly though at around $30\text{\textendash}40\,\mathrm{km}\,\mathrm{s}^{-1}$
near the boundary, in agreement with the estimates due to damping
made earlier. Deeper into the magnetosphere more evanescent and less
propagating behaviour is found, as expected from equation~\ref{eq:dispersion}
due to the greater Alfv\'{e}n speeds. The magnetic field perturbations
on either side of the boundary are in approximate antiphase with one
another throughout the dayside (note the magnetopause thickness in
the simulation is $\sim1.5\,\mathrm{R_{E}}$, considerably larger
than in reality since gyroradius-scales are not resolved \cite{berchem82}).
There is evidence of large-scale bow shock motion related to MSE,
a consequence which has not been considered before. At the subsolar
point (see Figure~\ref{fig:subsolar}) the bow shock lags the magnetopause
motion by $\sim1\,\mathrm{min}$, consistent with the fast magnetosonic
travel time through the magnetosheath, confirming that the resonance
is occurring at the magnetopause and subsequently driving the shock
oscillations. This lag occurs because magnetosheath plasma is highly
compressible \cite{plaschke11,archer15}, thereby deviating from the
evanescent behaviour expected under incompressibility. Since both
the magnetopause and bow shock move asynchronously, the patterns present
in the subsolar magnetosheath are somewhat complicated. These could
be explored further in the future.
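The quoted lag can be sanity checked with a one-line travel-time estimate. The magnetosheath thickness and fast magnetosonic speed below are assumed round numbers for illustration, not values measured from the simulation:

```python
R_E_KM = 6371.0  # Earth radius [km]

def msh_travel_time(thickness_re, v_fast_km_s):
    # Crude fast-mode travel time across the magnetosheath [s]
    return thickness_re * R_E_KM / v_fast_km_s

# An assumed ~3 R_E subsolar magnetosheath crossed at an assumed ~350 km/s
# fast speed gives a lag of order one minute
lag = msh_travel_time(3.0, 350.0)
```
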
In Supplementary Movie~1, magnetic field perturbations in the equatorial
plane are in phase across much of the dayside showing little evidence
of azimuthal propagation, confirming that $k_{\phi}$ is much smaller
than $k_{r}$ and $k_{\parallel}$. Tailward travelling disturbances
can be seen emanating from the oscillations near the dayside magnetopause
only at $\sim$09h and 15h~MLT, hence are associated with the propagating
surface waves discussed earlier. Supplementary Movie~2 separates
out these two regimes for further clarity. Further down the flanks,
at $\sim$07h and 17h~MLT, structure normal to the magnetopause emerges
with strong peaks/troughs $\sim2R_{E}$ inwards from the boundary.
Figure~\ref{fig:terminator} shows cuts along the terminator. This
reveals, in addition to the surface waves, the presence of a quarter
wavelength waveguide mode \cite{mann99,wright20} (at the magnetopause
there is a $\delta v_{r}$ antinode and $\delta B_{\parallel}$ node;
$\delta B_{\parallel}$ exhibits nodal structure radially). The waveguide
mode couples to a toroidal Alfv\'{e}n mode \cite{samson92} at $Y_{GSM}\sim11.5\,\mathrm{R_{E}}$
($\delta v_{\phi}$ antinode). These two modes occur at the same $\sim10\,\mathrm{min}$
period as the surface waves that originate at the subsolar point.
Therefore, MSE can couple to body eigenmodes in regions of the magnetosphere
where their frequencies sufficiently match (checked through time-of-flight
estimates). Magnetospheric Alfv\'{e}n speed profiles are highly variable
and significantly alter the eigenfrequencies of both body modes \cite{archer15b},
thus we expect that whether and where this coupling may occur will
vary substantially.
In the XZ plane, Supplementary Movie~1 reveals that the $\sim10\,\mathrm{min}$
period oscillations do not extend beyond the cusps into the northern
and southern tail lobes. The waves are thus confined to closed magnetic
field lines, further backing the surface eigenmode interpretation.
\begin{figure*}
\begin{centering}
\noindent \makebox[\textwidth]{\includegraphics{fig6}}
\par\end{centering}
\caption{\textbf{Virtual spacecraft observations within MHD simulation}. Displayed
in a similar format to Figure~\ref{fig:spacecraft-timeseries}. From
top to bottom the first set of panels show perturbations in the magnetic
(a--b), ion velocity (c--d), and electric (e--f) fields. In these
vertical pairs, top panels show the raw data, whereas the bottom panels
show the filtered data. Subsequent panels depict the Poynting vector
(g) and energy density (h), showing instantaneous (thin) and time-averaged
(thick) values. Finally the energy velocity (i) is shown compared
to the absolute magnetosheath flow speed (grey). Note the bi-symmetric
log scale on panels a, c, and e.\label{fig:virtual-spacecraft}}
\end{figure*}
\begin{figure}
\centering{}\noindent \makebox[\textwidth]{\includegraphics{fig7}}\caption{\textbf{Wave energy flux maps}. Panels show time-averaged wave Poynting
(a), advective (b), and total (c) energy fluxes. Magnitude (colour)
and direction (arrows) are shown along with the equilibrium magnetopause
(white dashed) and virtual spacecraft location from Figure~\ref{fig:virtual-spacecraft}
(star).\label{fig:maps}}
\end{figure}
\begin{figure}
\centering{}\noindent \makebox[\textwidth]{\includegraphics{fig8}}\caption{\textbf{Wave energy fluxes tangential to the magnetopause}. Panels
show Poynting (a) and advective (b) energy fluxes tangential to the
magnetopause along magnetopause normals. Integrals along the normal
are shown in panel c for the Poynting (purple) and advective (green)
fluxes along with their sum (black).\label{fig:tangential}}
\end{figure}
We finally study energy propagation throughout the simulation. Figure~\ref{fig:virtual-spacecraft}
shows results from a virtual spacecraft in the magnetosphere close
to the boundary at roughly the same location as THE. Computing the
Poynting vector as before (panel~g) shows it to be directed azimuthally
towards the subsolar point and slightly radially outwards, similarly
to the observations. This corresponds to an energy velocity (panel~i)
approximately equal but opposite to the background magnetosheath flow
speeds (grey) at this local time, like in the observations.
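The quantities in panels g--i can be sketched as follows. The synthetic monochromatic perturbations stand in for the filtered fields (all amplitudes are arbitrary illustrative choices); this is not the Methods implementation itself:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def time_avg_poynting(dE, dB):
    # Time-averaged wave Poynting vector <S> = <dE x dB> / mu0.
    # dE [V/m] and dB [T] are (N, 3) arrays of field perturbations.
    return np.cross(dE, dB).mean(axis=0) / MU0

def energy_velocity(dE, dB, dv, rho0):
    # <S> / <u>, with u the magnetic plus kinetic wave energy density
    u = (dB**2).sum(axis=1) / (2.0 * MU0) + 0.5 * rho0 * (dv**2).sum(axis=1)
    return time_avg_poynting(dE, dB) / u.mean()

# Synthetic 1.8 mHz wave: dE along y, dB along z, so <S> points along x
t = np.linspace(0.0, 1000.0, 5000)
phase = 2.0 * np.pi * 1.8e-3 * t
E0, B0 = 1.0e-3, 5.0e-9                    # [V/m], [T], illustrative amplitudes
zero = np.zeros_like(t)
dE = np.stack([zero, E0 * np.cos(phase), zero], axis=1)
dB = np.stack([zero, zero, B0 * np.cos(phase)], axis=1)
dv = np.zeros_like(dE)                      # velocity perturbation neglected here
vE = energy_velocity(dE, dB, dv, rho0=1.0e-20)
```

For this idealised wave the energy velocity reduces analytically to $2E_{0}/B_{0}$ along $x$, a useful consistency check on the implementation.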
Figure~\ref{fig:maps} shows equatorial maps of the time-averaged
Poynting (panel~a) and advective (panel~b) energy fluxes as well
as the sum of the two (panel~c). Within the magnetosphere, the Poynting
vectors are directed azimuthally towards the subsolar point across
the entire dayside, flipping direction at around the terminator to
recover the more usual tailward energy flux associated with Kelvin-Helmholtz
generated surface waves and waveguide modes \cite{juninger85,sakurai01,elsden15}.
Later within the simulation, however, this reversal point slowly moves
towards noon by $\sim$1h of MLT on both flanks
as the wave energy dissipates. A component directed towards the magnetopause
is also present across the dayside until well into the flanks and
the continuity of this energy flux into the magnetosheath is apparent.
The advective energy flux in Figure~\ref{fig:maps}b consists predominantly
of the tailward flow throughout the magnetosheath. Therefore, the
sum of the two clearly shows across the dayside that energy fluxes
tangential to the magnetopause are in opposition to one another on
either side. A small amount of energy flows across the boundary from
the magnetosphere into the magnetosheath, which will then be swept
downtail.
We therefore investigate the potential balance of tangential energy
fluxes on either side of the magnetopause in Figure~\ref{fig:tangential}.
At each local time we construct rays normal to the equilibrium magnetopause
and interpolate the time-averaged Poynting (panel~a) and advective
(panel~b) energy fluxes, taking the component tangential to the boundary.
Integrating these along the normal, we arrive at panel~c showing
the total tangential energy flux across both sides of the magnetopause.
This demonstrates the tailward energy flow due to advection (green)
and opposing Poynting flux (purple) across the dayside. Taking the
sum of these shows that they cancel out between 08:40--15:20~MLT,
i.e. the local time range for which the magnetopause oscillations
were found to be azimuthally stationary. This range is stable in time
for the duration of the oscillations. The results therefore demonstrate
that the stationary nature of MSE azimuthally is due to a balance
of the surface wave Poynting flux directed towards the subsolar point
opposing the tailward magnetosheath flow. Outside of this region,
however, even when the Poynting flux is in opposition to the magnetosheath
flow it is unable to overcome advection and thus travelling surface
waves result.
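A minimal sketch of this normal-ray integration is given below, using a synthetic profile in which the Poynting and advective contributions cancel by construction; the amplitudes and the $1\,\mathrm{R_{E}}$ decay scale are arbitrary illustrative choices:

```python
import numpy as np

def integrate_along_normal(flux, x_km):
    # Trapezoidal integral of a tangential flux profile [W/m^2] along the
    # magnetopause normal (distance given in km, converted to m), yielding W/m
    x_m = np.asarray(x_km) * 1e3
    y = np.asarray(flux)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x_m) / 2.0))

# Synthetic ray: sunward (negative) Poynting flux exactly opposing a
# tailward (positive) advective flux, both decaying over an assumed 1 R_E
x = np.linspace(0.0, 5.0 * 6371.0, 500)        # km along the normal
profile = 1.0e-6 * np.exp(-x / 6371.0)         # W/m^2
poynting_int = integrate_along_normal(-profile, x)
advective_int = integrate_along_normal(profile, x)
net = poynting_int + advective_int             # ~0: azimuthally stationary
```

When the summed integral is zero the two tangential fluxes balance, the condition found between 08:40--15:20~MLT in the simulation.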
\subsection*{Analytic MHD theory}
Finally, we look to incompressible MHD theory (where $k_{r}^{2}+k_{\phi}^{2}+k_{\parallel}^{2}=0$
for surface waves \cite{pu83,plaschke11}) to understand this picture
of the energy flow and azimuthal propagation present within MSE. We
consider a fundamental mode magnetopause surface wave of amplitude
$A$ in displacement within a box model magnetosphere with homogeneous
half-spaces as depicted in Figure~\ref{fig:cartoon}a. The azimuthal
component of the Poynting vector at the equator on the magnetosphere
side of the boundary for northward IMF is given by (equation~15 of
\cite{juninger85})
\begin{equation}
\left\langle S_{\phi,sph}\right\rangle =A^{2}\omega k_{\phi}\frac{k_{\parallel}^{2}}{k_{\phi}^{2}+k_{\parallel}^{2}}\frac{B_{0,sph}^{2}}{2\mu_{0}}\exp\left(-2\left|\mathfrak{Im}\left(k_{r}\right)\right|\left|r-r_{mp}\right|\right)
\end{equation}
with its equivalent on the magnetosheath side being $B_{0,msh}^{2}/B_{0,sph}^{2}$
times this and thus negligible. The wave energy densities are (following
equations~11 and 13 of \cite{juninger85})
\begin{equation}
\begin{array}{ccccc}
\left\langle u_{sph}\right\rangle & \approx & \left\langle u_{B,sph}\right\rangle & = & \frac{B_{0,sph}^{2}}{4\mu_{0}}A^{2}\frac{k_{\parallel}^{4}}{k_{\phi}^{2}+k_{\parallel}^{2}}\exp\left(-2\left|\mathfrak{Im}\left(k_{r}\right)\right|\left|r-r_{mp}\right|\right)\\
\left\langle u_{msh}\right\rangle & \approx & \left\langle u_{K,msh}\right\rangle & = & \frac{1}{4}\rho_{0,msh}\omega^{2}A^{2}\frac{2k_{\phi}^{2}+k_{\parallel}^{2}}{k_{\phi}^{2}+k_{\parallel}^{2}}\exp\left(-2\left|\mathfrak{Im}\left(k_{r}\right)\right|\left|r-r_{mp}\right|\right)
\end{array}
\end{equation}
Constructing the net energy velocity of the surface wave and simplifying
using equation~\ref{eq:mse-frequency} gives
\begin{align}
\mathbf{v}_{E,tot} & =\frac{\left\langle \mathbf{S}_{sph}\right\rangle +\left\langle \mathbf{S}_{msh}\right\rangle +\left\langle u_{msh}\right\rangle \mathbf{v}_{0,msh}}{\left\langle u_{sph}+u_{msh}\right\rangle }\label{eq:vE1}\\
& \approx\frac{\omega k_{\phi}k_{\parallel}^{2}\frac{B_{0,sph}^{2}}{2\mu_{0}}+\frac{1}{4}\rho_{0,msh}\frac{B_{0,sph}^{2}}{\mu_{0}\rho_{0,msh}}k_{\parallel}^{2}\left(2k_{\phi}^{2}+k_{\parallel}^{2}\right)v_{0,msh}}{\frac{B_{0,sph}^{2}}{4\mu_{0}}\left[k_{\parallel}^{4}+\omega^{2}\frac{k_{\parallel}^{2}}{\omega^{2}}\left(2k_{\phi}^{2}+k_{\parallel}^{2}\right)\right]}\hat{\boldsymbol{\phi}}\label{eq:vE2}\\
& \approx\left[\frac{\omega k_{\phi}}{k_{\phi}^{2}+k_{\parallel}^{2}}+\frac{2k_{\phi}^{2}+k_{\parallel}^{2}}{2\left(k_{\phi}^{2}+k_{\parallel}^{2}\right)}v_{0,msh}\right]\hat{\boldsymbol{\phi}}\label{eq:vE3}
\end{align}
Setting this to zero, i.e. no net azimuthal energy flow, and solving
for real azimuthal wavenumbers yields the requirement
\begin{equation}
\frac{\omega}{k_{\parallel}}\geq\sqrt{2}v_{0,msh}
\end{equation}
This sets a limit on where it is possible for surface wave energy
to be trapped locally due to the speed of the adjacent magnetosheath.
We can frame this limit purely in terms of solar wind and magnetosheath
conditions using equation~\ref{eq:mse-frequency-sw} as
\begin{equation}
v_{0,msh}\leq\sqrt{\frac{\rho_{sw}}{\rho_{msh}}}v_{sw}
\end{equation}
According to gas-dynamic models of magnetosheath plasma conditions
\cite{spreiter66} this is satisfied for 08:40--15:20h~MLT, in excellent
agreement with the stationary region in the global MHD simulation.
This extent should vary only slightly with solar wind conditions (based
on previous magnetosheath and MSE variability studies \cite{archer15,walsh12}),
however, future work could test this.
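The $\sqrt{2}$ requirement can be verified numerically from equation~\ref{eq:vE3} alone, in normalised units, by scanning for a real $k_{\phi}$ that nulls the net azimuthal energy velocity. Here $k_{\phi}>0$ is taken as the magnitude of the wavenumber component opposing the flow, so the sign of the first term is flipped relative to equation~\ref{eq:vE3}:

```python
def net_azimuthal_energy_velocity(k_phi, k_par, omega, v_msh):
    # Equation (vE3) with k_phi > 0 the magnitude of the azimuthal
    # wavenumber directed against the magnetosheath flow v_msh
    return (v_msh * (2.0 * k_phi**2 + k_par**2) - 2.0 * omega * k_phi) \
        / (2.0 * (k_phi**2 + k_par**2))

def stationary_wave_possible(omega, k_par, v_msh, k_max=10.0, n=100000):
    # Scan k_phi for a zero crossing of the net azimuthal energy velocity
    prev = net_azimuthal_energy_velocity(0.0, k_par, omega, v_msh)
    for i in range(1, n + 1):
        cur = net_azimuthal_energy_velocity(k_max * i / n, k_par, omega, v_msh)
        if prev * cur <= 0.0:
            return True
        prev = cur
    return False

# In units where omega = k_par = 1, the derived threshold is
# v_msh = 1/sqrt(2) ~ 0.707: stationarity is possible below it, not above
```

The scan finds a stationary solution for flow speeds just below $1/\sqrt{2}$ and none just above, consistent with the closed-form condition.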
\begin{figure}
\begin{centering}
\includegraphics[scale=0.9]{fig9}
\par\end{centering}
\caption{\textbf{Cartoon illustrating the results of this study}. Panel~a
shows the box model magnetosphere, magnetosheath flow (white), and
surface mode wavevectors (dark blue) excited by the pressure pulse
(orange). Subsequent panels depict the resultant energy flow (lighter
coloured arrows) of the surface mode wavevectors for b) large and
c) small magnetosheath flows.\label{fig:cartoon}}
\end{figure}
\section*{Discussion}
In this paper we show that the recently discovered magnetopause surface
eigenmode (MSE), the lowest frequency normal mode of a magnetosphere,
does not conform to the well-established paradigm in global magnetospheric
dynamics of tailward propagation. Multi-spacecraft observations, global
MHD simulations, and analytic MHD theory are employed. Both the observations
and simulation show Poynting vectors in the magnetosphere which point
towards the subsolar point across the dayside, contrary to current
models of the magnetospheric response to impulsive driving \cite{sibeck90}.
This energy flux thus opposes advection by the magnetosheath and we
find from the simulation that these two cancel one another in the
region 09--15h magnetic local time, resulting in an azimuthally stationary
surface wave. Outside of this region, however, the waves travel tailward.
Considering surface wave energy fluxes in a simple box model of the
magnetosphere shows excellent agreement with the simulation on the
conditions required for a stationary wave to be possible. Our conclusions
are summarised in Figure~\ref{fig:cartoon} within this box model.
When an impulsive solar wind transient arrives at the magnetopause,
its broadband nature excites surface waves on the boundary with a
wide range of frequencies $\omega$ and wave vectors $\mathbf{k}$.
The boundary conditions at the northern and southern ionospheres quantise
the possible values of $k_{\parallel}$, largely determining $\omega$;
however, $k_{\phi}$ will be unconstrained as depicted in panel~a.
For large magnetosheath flow speeds (panel~b) none of the excited
wave vectors are able to compete with advection and the resultant
motion is tailward, in line with expectations. In the regime of small
magnetosheath flow speeds (panel~c), however, there exists an excited
$k_{\phi}$ in opposition to the magnetosheath flow which is able
to exactly balance its advective effect. This leads to surface wave
energy being trapped locally as an azimuthally stationary wave. All
waves of other $k_{\phi}$ will be lost down the tail. This picture
not only explains the global propagation of magnetopause surface waves
but also how MSE on the dayside can seed fluctuations into the magnetospheric
flanks. The simulation shows that these seeded waves which originate
on the dayside subsequently grow in amplitude via the Kelvin-Helmholtz
instability, despite being at a lower frequency than its intrinsic one,
and may couple to cavity/waveguide and Alfv\'{e}n modes in regions
of the magnetosphere where their frequencies match. This reveals MSE's
effects are not confined merely to the dayside (standing) region,
instead having global effects on the magnetosphere as its most fundamental
normal mode.
The cartoon highlights that, at each location on the boundary, after
the other (blue) wavevectors have been swept downtail and the boundary
has formed its resonance, the physics of the azimuthally stationary
surface wave is confined to a small local time region, i.e. a single
meridian of geomagnetic field lines. While the initial perturbation
on the boundary and the corresponding transient response will depend
on the specifics of the driving pressure variation (scale sizes, location
of impact etc.), one can simply decompose the initial perturbation
at each local time into the normal modes along the field (MSE) and
this should entirely dictate the subsequent resonant response at that
local time. Indeed, the local boundary motion and Poynting fluxes
were in agreement across both observations and simulations despite
different scale size drivers being leveraged. The locality of the
physics means that the azimuthally stationary wave should be limited
to the local times in which the driver impacted the magnetopause,
hence the scale of the driver in azimuth (within the 09--15h local
time range) would be imprinted in the stationary waves excited.
These results raise the question of why only tailward propagating dynamics
are reported in current models and observations of the magnetospheric
response to impulsive driving \cite{sibeck90}. It is clear that the
models do not incorporate the possibility of surface wave reflection
due to bounding by the ionosphere, which is key to our results, since
while this was proposed long ago \cite{chen74} it has only recently
been discovered \cite{archer19,He2020}. MSE constitute the lowest
possible frequency normal mode of the magnetospheric system, and their
fundamental frequencies can often be fractions of a millihertz \cite{archer15}.
Such long period narrowband waves are challenging to identify observationally
in general, either by orbiting spacecraft or ground-based measurements,
due to potential spatio-temporal mixing \cite{urban16} and the difficulty
in distinguishing from turbulence/noise \cite{dimatteo17,dimatteo18}.
For these reasons global magnetospheric dynamics and their associated
ULF waves have often concentrated on the continuous pulsation (Pc)
bands above $2\,\mathrm{mHz}$ \cite{jacobs64}, which would not typically
incorporate the effects presented here. Furthermore, Figure~\ref{fig:cartoon}
shows that the majority of surface wave energy excited by the driver
does still propagate tailward, with only the small amount that propagates
against the flow being trapped locally. Therefore, if further upstream
driving by pressure variations occurs during these oscillations, the
superposition of waves present could easily mask the sunward Poynting
vectors associated with MSE thereby showing only a net tailward energy
flow. Future work is required in developing less restrictive observational
criteria for the detection of MSE in general and to undertake statistical
studies of MSE occurrence to better understand how common this mode,
and the results presented on its energy flow, may be in reality given
the variety of impulsive drivers that impact geospace.
The global waves associated with this normal mode of Earth's magnetosphere,
possible due to the surface wave propagation against the magnetosheath
flow, will have important implications upon radiation belt dynamics
\cite{elkington06,summers13}. The large-scale oscillations of the
magnetopause may cause the further shadowing of radiation belt electrons
than predicted simply by a pressure balanced quasi-static response
to the driver. Furthermore, MSE's ULF wave signatures present coherent
and slowly varying perturbations in compressional magnetic fields
and azimuthal electric fields which deeply penetrate across the entire
dayside magnetosphere, which may be ideal for the drift-resonant interaction
and/or radial diffusion of radiation belt particles. However, current
methods of understanding these processes are suited only to the inner
magnetosphere since they assume azimuthal symmetry; therefore, more
work is required to assess the impact on the radiation belts of
this normal mode and of asymmetric outer magnetospheric waves in general.
While the observations confirm significant energy flow along the magnetic
field towards the polar regions, the wide region of stationarity from
the simulations suggests weak coupling of the surface waves to the
Alfv\'{e}n mode across the dayside. This implies MSE have auroral,
ionospheric, and ground magnetometer signatures unlike those of other known
ULF waves, and these remain poorly understood \cite{archer19,He2020}.
In the flanks, however, it seems likely that MSE-seeded waves could
at times easily be mistaken for intrinsic Kelvin-Helmholtz waves or
waveguide modes, despite the origin of the fluctuations on the dayside
as shown in the simulation. These factors may be why previous ground-based
searches, through widely-used diagnostics for other wave modes, called
MSE's existence into question \cite{pilipenko17,pilipenko18}. The
work thus highlights that care needs to be taken in understanding
the mechanisms which result in various dynamical modes in near-Earth
space since they can all be intimately coupled.
Surface waves are known to be present at the other planetary magnetospheres
\cite{sundberg12,delamere16}, which span a vast range of sizes, morphologies,
and plasma conditions \cite{bagenal13}. The surface eigenmode in
principle should be a universal feature of boundaries in magnetospheres
\cite{chen74}, and thus the simple analytic theory presented here
(in the magnetospheric reference frame) may be instructive in assessing
where and in what frequency ranges these fundamental dynamics of the
boundary may be prevalent at other environments. The simple predictions
could then be compared to tailored global MHD simulations of these
systems as well as spacecraft observations.
Many other space and astrophysical systems too exhibit surface waves
where, like in the case of a magnetopause, substantial background
flows may be present. Notable examples are the sausage and kink modes
of coronal loops, which share many conceptual similarities to the
surface eigenmodes -- they are standing (though sometimes propagating)
transverse oscillations of the dense flux tubes in coronal active
regions, anchored on both ends by the chromosphere, excited by loop
displacements from coronal eruptions or shear flows in coronal plasma
non-uniformities \cite{li13,yu20}. Asymmetric and/or inhomogeneous
flows around/along these structures affect surface wave evolution,
with important space weather consequences such as causing coronal
mass ejections to turn away from their original propagation direction
\cite{foullon13}, though these effects are not typically incorporated
into models of coronal loop oscillations. Our results from \textit{in
situ} observations at the magnetopause (not possible for the corona
and other space/astrophysical environments) challenge the paradigm
that surface waves necessarily propagate in the direction of the driving
flow/pressure, as when discontinuities are bounded the trapping of
surface waves may occur in opposition to advective effects, allowing
these waves to form across broader regions and to persist longer than
would otherwise be expected. The work may therefore have insights
into the structure and stability of these universal dynamical modes
elsewhere.
\section*{Methods}
\subsection*{Poynting's theorem for MHD waves}
Energy conservation for MHD wave perturbations (denoted by $\delta$'s
with subscript $0$'s representing equilibrium values) involves the
wave energy density
\begin{equation}
u=u_{B}+u_{K}=\frac{\left|\delta\mathbf{B}\right|^{2}}{2\mu_{0}}+\frac{1}{2}\rho_{0}\left|\delta\mathbf{v}\right|^{2}\label{eq:energy-density}
\end{equation}
(consisting of magnetic $u_{B}$ and kinetic $u_{K}$ contributions)
and wave energy flux given by the Poynting vector
\begin{equation}
\mathbf{S}=\frac{\delta\mathbf{E}\times\delta\mathbf{B}}{\mu_{0}}\label{eq:Poynting}
\end{equation}
\cite{walker05} where $\mathbf{E}$ is the electric field. Time-averaging
and taking their ratio yields the so-called energy velocity
\begin{equation}
\mathbf{v}_{E}=\frac{\left\langle \mathbf{S}\right\rangle }{\left\langle u\right\rangle }\label{eq:energy-velocity}
\end{equation}
which is equivalent to the group velocity for stable waves \cite{bers00}.
In a moving medium, wave energy advects with the background plasma,
giving an additional flux $u\mathbf{v}_{0}$. These principles are
applied throughout.
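These definitions translate directly into a short numerical sketch of Eqs.~\eqref{eq:energy-density}--\eqref{eq:energy-velocity}. The example below assumes perturbation fields stored as NumPy arrays of Cartesian components in SI units; the function names are illustrative and not from any analysis code referenced in the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def wave_energy_density(dB, dv, rho0):
    # u = |dB|^2 / (2 mu0) + (1/2) rho0 |dv|^2
    u_B = np.sum(dB**2, axis=-1) / (2.0 * MU0)
    u_K = 0.5 * rho0 * np.sum(dv**2, axis=-1)
    return u_B + u_K

def poynting_vector(dE, dB):
    # S = dE x dB / mu0
    return np.cross(dE, dB) / MU0

def energy_velocity(dE, dB, dv, rho0):
    # v_E = <S> / <u>, time-averaging over the leading (time) axis
    S_avg = poynting_vector(dE, dB).mean(axis=0)
    u_avg = wave_energy_density(dB, dv, rho0).mean(axis=0)
    return S_avg / u_avg
```

In a moving medium the advected flux $u\mathbf{v}_{0}$ would simply be added to the Poynting vector before averaging.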
\subsection*{Spacecraft observations}
Observations in this paper are from the Time History of Events
and Macroscale Interactions during Substorms (THEMIS; \cite{angelopoulos08})
spacecraft, taken during the previously reported interval of MSE \cite{archer19}.
The five spacecraft were close to the equilibrium magnetopause in
a string-of-pearls formation. Data from the fluxgate magnetometer
(FGM) \cite{auster08}, electrostatic analyser (ESA) \cite{mcfadden08a},
and electric field (EFI) \cite{bonnell08} instruments are used. Note
that for the latter we use the $\mathbf{E}\cdot\mathbf{B}=0$ approximation
(valid over ULF timescales) for THD and THE to replace the measured
axial fields at each time; however, the instrument was not yet deployed
on THA, so $\mathbf{E}=-\mathbf{v}\times\mathbf{B}_{0}$ is used instead, which
was found to be reliable for the other spacecraft. We note that THA
plasma measurements were not available prior to 22:08~UT. Magnetosheath
intervals have been removed from THA, THD and THE observations, identified
when the electron density was greater than $5\,\mathrm{cm}^{-3}$
or the magnetic field strength was less than $45\,\mathrm{nT}$. Vectors
within the magnetosphere have been rotated into local orthogonal field-aligned
coordinates ($r,\phi,\bigparallel$). The field-aligned direction
($\parallel$) is given based on a robust linear regression of the
magnetic field vectors \cite{huber81,street88}, with the azimuthal
($\phi$) direction being the cross product of $\parallel$ with the
spacecraft's geocentric position thus pointing eastward, and the radial
($r$) direction completing the right-handed set directed away from
the Earth. While this coordinate rotation may result in some small
$E_{\parallel}$, these are negligible compared to the other components
and do not influence the results.
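The coordinate construction can be sketched as follows. Here the unit mean field stands in for the robust linear regression of the field vectors, which is a simplifying assumption of this example.

```python
import numpy as np

def field_aligned_basis(B_samples, r_geo):
    """Local orthogonal field-aligned coordinates (r, phi, parallel).
    B_samples: (N, 3) magnetic field vectors; r_geo: geocentric position."""
    e_par = B_samples.mean(axis=0)          # stand-in for robust regression
    e_par = e_par / np.linalg.norm(e_par)
    e_phi = np.cross(e_par, r_geo)          # azimuthal, pointing eastward
    e_phi = e_phi / np.linalg.norm(e_phi)
    e_rad = np.cross(e_phi, e_par)          # completes right-handed set,
    return e_rad, e_phi, e_par              # directed away from the Earth
```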
\subsection*{Global MHD simulations}
We reproduce a high-resolution ($\nicefrac{1}{8}\text{\textendash}\nicefrac{1}{16}\,\mathrm{R_{E}}$
in the regions considered in this paper, see Supplementary~Figure~4
for grid) Space Weather Modeling Framework (SWMF; \cite{toth05,toth12})
simulation run of MSE excited by a $1\,\mathrm{min}$ solar wind density
pulse (with sunward normal) under northward IMF \cite{hartinger15}.
Full details of the run are given in Supplementary~Table~1. For
all simulation quantities, perturbations are defined as the difference
to the linear trend before ($t=0\,\mathrm{min}$) and after ($t=60\,\mathrm{min}$)
the response to the pulse. Vectors are rotated into similar local
field-aligned coordinates. The magnetopause location is determined
as the last closed field line along geocentric rays through a bisection
method accurate to $0.01\,\mathrm{R_{E}}$. The bow shock standoff
distance has been identified via interpolation as the point where
the density is half that in the solar wind. In displaying perturbations
in the simulation, a bi-symmetric log transform \cite{webber12} is
often employed due to the much larger amplitudes present during compression
and rebound phases.
\subsection*{Time-based filtering}
A time-based filtering technique is used to extract MSE wave perturbations
and suppress noise and higher/lower frequency signals. This was chosen
to avoid the potential ringing artefacts or edge effects of frequency-based
methods, given the nonstationary nature of the process.
Nonetheless, several different filtering methods were tested and the
main results of the paper remained robust.
In the method presented for the spacecraft observations, first the
raw data is smoothed using a $400\,\mathrm{s}$ robust LOESS method
\cite{cleveland79}. For stationary processes this has a corresponding
cutoff frequency of $3.6\,\mathrm{mHz}$ and therefore retains both the
$1.8\,\mathrm{mHz}$ fundamental and $3.3\,\mathrm{mHz}$ second harmonic
MSE signals present \cite{archer19}. To remove any lower frequency
trends still present, the mean envelope from cubic Hermite interpolation
\cite{fritsch80} is subtracted. These effectively bandpass filtered
quantities are used for calculating the instantaneous wave Poynting
vectors and energy densities. Time averaging is also performed
using the mean envelope from interpolation. The time-based
methods used also allow uncertainties to be estimated. This is done
via a running root-mean-squared (RMS) deviation between the raw and
LOESS smoothed time-series, which are then propagated through the
subsequent methods used \cite{horst81,white17}.
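A minimal sketch of this smooth-then-detrend procedure is given below. It is not the paper's pipeline: a Savitzky--Golay filter stands in for the robust LOESS smoother, and SciPy's PCHIP interpolation of the extrema stands in for the cubic Hermite envelope.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks
from scipy.interpolate import PchipInterpolator

def bandpass_by_envelope(t, x, smooth_window):
    """Time-based filtering: smooth (stand-in for robust LOESS), then
    subtract the mean of the upper/lower piecewise-cubic envelopes."""
    n = smooth_window | 1                    # ensure odd window length
    x_s = savgol_filter(x, n, polyorder=2)   # low-pass smoothing
    hi, _ = find_peaks(x_s)
    lo, _ = find_peaks(-x_s)
    if len(hi) < 2 or len(lo) < 2:
        return x_s - x_s.mean()              # too few extrema for an envelope
    upper = PchipInterpolator(t[hi], x_s[hi], extrapolate=True)(t)
    lower = PchipInterpolator(t[lo], x_s[lo], extrapolate=True)(t)
    return x_s - 0.5 * (upper + lower)       # remove slowly varying trend
```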
To extract the MSE signal from the simulation, either in magnetopause
location or grid point data, we first neglect the initial large
amplitude compression and rebound. This is done by only using data
from half an MSE period after the magnetopause's return to equilibrium,
i.e. after the dotted line in Figure~\ref{fig:Magnetopause-motion}a.
For grid point data the timing at the magnetopause with the same X
coordinate is used. The secondary spectral peak is then suppressed
using the same filtering procedure as for the THEMIS data. The only
differences are that standard (rather than robust) LOESS is used due
to reduced temporal resolution, and the window size used was $570\,\mathrm{s}$
corresponding to a $2.4\,\mathrm{mHz}$ cutoff.
\subsection*{Fourier and wavelet techniques}
To compute time-averaged Poynting vectors as a function of frequency
a standard complex Fourier approach is used (equation 2 of \cite{hartinger13})
\begin{equation}
\left\langle \mathbf{S}\left(\omega\right)\right\rangle =\frac{\mathrm{\mathfrak{Re}}\left(\mathbf{E}\left(\omega\right)\times\mathbf{B}^{*}\left(\omega\right)\right)}{2\mu_{0}}\;.
\end{equation}
This is done both in frequency-space using Welch's method \cite{welch67}
in computing one-sided cross spectra, i.e. the products of electric
and magnetic field components, and in time-frequency space using products
of analytic Morse continuous wavelet transforms \cite{lilly12}. In
both cases, a null hypothesis of autoregressive noise is assumed,
where the AR(1) parameters for each component of the electric and
magnetic fields are estimated using constrained maximum likelihood
and 500 independent Monte Carlo simulations are performed based on
these models \cite{box}, with 95\% confidence intervals being constructed
by taking percentiles (2.5\% and 97.5\%) of the resulting time-averaged
Poynting vectors.
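The frequency-space branch of this calculation can be sketched with SciPy's Welch cross-spectral estimator. The example assumes electric and magnetic field components sampled on a common uniform time grid; only the real part of the cross spectra enters, which makes the result insensitive to the conjugation convention of `scipy.signal.csd`.

```python
import numpy as np
from scipy.signal import csd

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def poynting_spectrum(E, B, fs, nperseg=256):
    """Time-averaged Poynting vector vs frequency from Welch cross spectra,
    <S(w)> = Re( E(w) x B*(w) ) / (2 mu0). E, B: (N, 3) arrays."""
    def xspec(a, b):
        return csd(a, b, fs=fs, nperseg=nperseg)[1]
    f = csd(E[:, 0], B[:, 0], fs=fs, nperseg=nperseg)[0]
    Sx = np.real(xspec(E[:, 1], B[:, 2]) - xspec(E[:, 2], B[:, 1])) / (2 * MU0)
    Sy = np.real(xspec(E[:, 2], B[:, 0]) - xspec(E[:, 0], B[:, 2])) / (2 * MU0)
    Sz = np.real(xspec(E[:, 0], B[:, 1]) - xspec(E[:, 1], B[:, 0])) / (2 * MU0)
    return f, np.stack([Sx, Sy, Sz], axis=-1)
```

For in-phase $E_{y}$ and $B_{z}$ oscillations this yields a positive $\left\langle S_{x}\right\rangle$ peaked at the oscillation frequency, as expected for energy flux along $+x$.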
\subsection*{Slowness}
To quantify the propagation of MSE boundary perturbations, the slowness
was computed by cross-correlating the filtered magnetopause signals
between adjacent local times. By interpolating the peak to find its
corresponding time lag $\Delta t$, the azimuthal slowness is given
by
\begin{equation}
s_{\phi}=\frac{\Delta t}{\left|\Delta\mathbf{r}\right|}
\end{equation}
where $\left|\Delta\mathbf{r}\right|$ is the distance between the
two points on the boundary used. Standard errors in the correlation
coefficient were also calculated and propagated through the interpolation
procedure to arrive at uncertainties.
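A minimal sketch of the slowness estimate follows, with a simple parabolic (three-point) refinement of the correlation peak standing in for the interpolation used in the paper.

```python
import numpy as np

def azimuthal_slowness(x1, x2, dt_sample, separation):
    """Slowness s_phi = dt / |dr| from the cross-correlation peak between
    the boundary signals at two adjacent local times."""
    n = len(x1)
    cc = np.correlate(x2 - x2.mean(), x1 - x1.mean(), mode="full")
    lags = np.arange(-n + 1, n)              # positive lag: x2 lags x1
    k = np.argmax(cc)
    shift = 0.0
    if 0 < k < len(cc) - 1:                  # parabolic sub-sample refinement
        denom = cc[k - 1] - 2 * cc[k] + cc[k + 1]
        if denom != 0:
            shift = 0.5 * (cc[k - 1] - cc[k + 1]) / denom
    dt = (lags[k] + shift) * dt_sample
    return dt / separation
```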
\section*{Data availability}
The THEMIS spacecraft data are available at \url{http://themis.ssl.berkeley.edu/data/themis/}
where level-2 data from the FGM, ESA, and EFI instruments on each
spacecraft has been used in this study. The SWMF simulation data generated
in this study are available in the Community Coordinated Modeling
Center (CCMC) at \url{https://ccmc.gsfc.nasa.gov/results/viewrun.php?domain=GM&runnumber=Michael_Hartinger_061418_1}.
\section*{Code availability}
The SWMF and BATS-R-US (Block-Adaptive Tree Solarwind Roe-type Upwind
Scheme) software is available at \url{https://github.com/MSTEM-QUDA}.
The SWMF and BATS-R-US tools used are available at \url{https://ccmc.gsfc.nasa.gov}.
\bibliographystyle{unsrt}
Holographic correlators play a central role in checking and exploiting the AdS/CFT correspondence. Thanks to the recent breakthroughs of the bootstrap methods, holographic four-point functions of $\frac{1}{2}$-BPS operators with arbitrary Kaluza-Klein masses have been systematically computed at tree level in a plethora of string theory/M-theory models \cite{Rastelli:2016nze,Rastelli:2017udc,Rastelli:2019gtj,Alday:2020lbp,Alday:2020dtb,Alday:2021odx}.\footnote{See also \cite{Bissi:2022mrs} for a review of these results.} These bootstrap methods rely only on symmetries and basic consistency conditions, and circumvent the enormous difficulties related to the traditional method which stalled progress in this field for many years. Note that according to the standard recipe of AdS/CFT, Kaluza-Klein modes of the AdS supergravity fields are dual to ``single-trace'' operators in the CFT.\footnote{To be precise, the single-trace operators are the leading part of the dual operator. There are also higher-trace operators which are suppressed by inverse powers of the central charge \cite{Arutyunov:1999en,Arutyunov:2000ima,Rastelli:2017udc,Aprile:2018efk,Aprile:2019rep,Alday:2019nin}.} In the bulk, they are mapped to states which are ``single-particle''. However, in the dual CFT there are also ``double-trace'' (or more generally, ``multi-trace'') operators which are normal ordered products of single-trace operators. Correlation functions involving such operators can be viewed in the bulk as scattering processes where some of the scattering states are multiple-particle ``bound states''. In principle, such correlators are already contained in the set of all ``single-trace'' correlators because we can produce ``double-trace'' operators from taking the OPE limit. In practice, however, computing these bound state correlators via such a detour through higher-point functions seems rather inefficient. 
Already computing five-point functions is a highly nontrivial task even equipped with bootstrap techniques \cite{Goncalves:2019znr,Alday:2022lkk}, and going beyond that to higher multiplicities presents serious challenges for the current technology. Therefore, it will be of great interest to develop a more straightforward approach that allows us to directly apply the bootstrap strategy to such correlators with bound state operators.
In this paper, we make progress in this direction by initiating a study of the underlying Witten diagrams. This is necessary because the properties of these diagrams related to bound state scattering processes have not been explored in the literature. In particular, there is currently no knowledge of their analytic structure in Mellin space, which will become important if we want to adapt the bootstrap methods of \cite{Rastelli:2016nze,Rastelli:2017udc,Alday:2020lbp,Alday:2020dtb,Alday:2021odx} to this case. Another motivation for looking into these diagrams comes from the recent work \cite{Ceplak:2021wzz}. The series of papers \cite{Giusto:2019pxc,Giusto:2018ovt,Galliani:2017jlg,Bombini:2017sge,Giusto:2020neo} developed an alternative approach to the bootstrap methods to compute holographic correlators in the $AdS_3\times S^3$ background. This approach starts from a ``heavy-heavy-light-light'' (HHLL) limit of the four-point function. The correlator in this limit can be computed semi-classically as the fluctuation dual to the light operators in a supergravity background created by the heavy operators. By taking a formal limit where the heavy operators become light, the HHLL correlator can produce four-point functions with all light operators. Extending this method, \cite{Ceplak:2021wzz} managed to compute all light four-point correlators at tree level with two single-particle states and two $n$-particle bound states. Interestingly, \cite{Ceplak:2021wzz} found that for $n\geq 2$ the correlators necessarily contain higher order polylogarithms while in the single-particle case at most dilogarithms appear. Curiously, these higher order polylogarithms also show up in loop-level correlators of single-particle operators \cite{Aprile:2017bgs,Aprile:2017qoy,Aprile:2019rep,Bissi:2020wtv,Bissi:2020woe,Huang:2021xws,Drummond:2022dxw}. 
This seems to imply that certain tree-level diagrams with external bound states might share structural similarity with AdS loop diagrams.\footnote{It is important to note that the supergravity calculation of \cite{Ceplak:2021wzz} is semi-classical. Therefore, the contributing Witten diagrams are tree-level diagrams.} In this paper, we will provide strong evidence that there is indeed such a connection.
\begin{figure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{subfig_bccc}
\caption{One bound state}
\label{subfig:bccc}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{subfig_bbcctypeI}
\caption{Two bound states (type I)}
\label{subfig:bbcctypeI}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{subfig_bbcctypeII}
\caption{Two bound states (type II)}
\label{subfig:bbcctypeII}
\end{subfigure}
\caption{Three basic Witten diagrams with one or two bound states and only one bulk-to-bulk propagator. Here the external bound states are the boundary points from which two lines emanate.}
\label{fig:bsdiags}
\end{figure}
As a first step towards a more systematic exploration, we will limit ourselves to studying diagrams of scalar fields in this paper. More precisely, we will mostly focus on the three diagrams depicted in Fig. \ref{fig:bsdiags} which contain up to two bound states and only one bulk-to-bulk propagator. The bound states are of the ``bi-particle'' type and correspond to double-trace operators in the CFT. This might seem a very small set of diagrams. However, we will explain how a vast array of bound state tree-level diagrams with more bulk-to-bulk propagators can be reduced to these basic diagrams. Moreover, even with just these three diagrams, we find that there is already a rich spectrum of behavior. We will see that the first two diagrams are similar in structure to tree-level exchange diagrams with single-particle external states while the last diagram resembles a one-loop diagram in AdS.
Due to the technical nature of this paper, we offer below a brief summary of the sections to help the reader to navigate it, and highlight some of our main results.
In Section \ref{Sec:Preliminaries}, we review the basic technical ingredients which will be used in this paper. This includes the Mellin representation which recasts correlators in a form similar to flat-space amplitudes and manifests their analytic structure. We will also review two important properties of AdS propagators. One is an integrated vertex identity that allows us to integrate out an internal line of the diagram connected to two external lines via a cubic vertex. The other is the so-called split representation of the bulk-to-bulk propagator.
These two properties of the bulk-to-bulk propagator are used in Section \ref{Sec:Wcccc} in a warm-up example where we compute the tree-level exchange Witten diagram with single-particle states as its external states. We reproduce the well known result in the literature and the Mellin amplitude takes the following form
\begin{equation}
\mathcal{M}_{\circ\circ\circ\circ}=\sum_{m=0}^\infty \frac{C^{(0)}_m}{s-\Delta-2m}\;,
\end{equation}
where $C^{(0)}_m$ are constants and $\Delta$ is the conformal dimension of the exchanged scalar field. We use $\circ$ to denote an external single-particle operator, while later we will also use $\bullet$ to denote a two-particle bound state.
We begin to consider diagrams with bound states in Section \ref{Sec:Wbccc}. We will compute Fig. \ref{subfig:bccc} using two methods. The first method is based on the integrated vertex identity, and generalizes the single-particle case considered in the warm-up section. The second method computes the diagram by taking a coincidence limit of a five-point single-particle diagram. Both approaches lead to the same answer for the Mellin amplitude which has the following schematic form
\begin{equation}
\mathcal{M}_{\bullet\circ\circ\circ}=\sum_{m=0}^\infty \frac{C^{(1)}_m}{s-\Delta-\Delta_5-2m}\;.
\end{equation}
Here $C^{(1)}_m$ are constants and $\Delta_5$ is the conformal dimension of the additional scalar line that makes the single-particle exchange diagram the bound state diagram Fig. \ref{subfig:bccc}. From the expression, we find that the Mellin amplitude is quite similar to the exchange amplitude $\mathcal{M}_{\circ\circ\circ\circ}$, except that the poles are now at shifted locations.
In Section \ref{Sec:WbbcctypeI} we study the diagram in Fig. \ref{subfig:bbcctypeI} which has two bound states and was dubbed ``Type I''. The method based on the integrated vertex identity can also be applied to this case and leads to the following Mellin amplitude
\begin{equation}
\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}=\sum_{m=0}^\infty \frac{C^{(2)}_m}{s-\Delta-\Delta_5-\Delta_6-2m}\;.
\end{equation}
Here the numerators $C^{(2)}_m$ do not depend on Mandelstam variables and $\Delta_5$, $\Delta_6$ are the conformal dimensions of the two additional scalar lines. The amplitude again has a ``tree-like'' analytic structure. We will also confirm this result by reproducing it from taking a coincidence limit of a six-point function.
We consider the ``Type II'' two bound state diagram (Fig. \ref{subfig:bbcctypeII}) in Section \ref{Sec:WbbcctypeII}, which turns out to have a drastically different structure in Mellin space. The method using the integrated vertex identity no longer applies here because all the vertices are quartic. However, we can still compute this diagram by taking the coincidence limit. We find that its Mellin amplitude has the form of a sum of simultaneous poles
\begin{equation}
\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}=\sum_{m,n=0}^\infty \frac{\widetilde{C}^{(2)}_{mn}}{(s-\Delta-\Delta_3-\Delta_6-2m)(t-\Delta-\Delta_1-\Delta_5-2n)}\;.
\end{equation}
Remarkably, this is the same structure of one-loop correlators found in $AdS_5\times S^5$ IIB supergravity and $AdS_5\times S^3$ SYM \cite{Alday:2018kkw,Alday:2019nin,Alday:2021ajh}. This connection is further sharpened in Section \ref{Sec:Wbbbccand1loop} where we look at a family of examples of Fig. \ref{subfig:bbcctypeII} with special conformal dimensions. We will show that in the flat-space limit the Mellin amplitudes of these diagrams reduce to the massless one-loop box diagram in flat space.
Finally, in Section \ref{Sec:morediagrams} we discuss how we can use the three diagrams in Fig. \ref{fig:bsdiags} as building blocks to obtain other more complicated diagrams. We conclude in Section \ref{Sec:outlook} with an outline of future research directions. The paper also contains several appendices where we relegate additional technical details and collect useful formulae.
\section{Preliminaries}\label{Sec:Preliminaries}
\subsection{Mellin representation}
To discuss holographic correlators, a convenient language is the Mellin representation \cite{Mack:2009mi,Penedones:2010ue}. In this formalism, holographic correlators in general display simple analytic structure. In particular, tree-level correlators with external single-particle states have Mellin amplitudes similar to flat-space scattering amplitudes. An $n$-point function of scalar operators is represented as a multi-fold inverse Mellin transformation
\begin{equation}\label{defMellinnpt}
\langle \mathcal{O}_1(x_1)\ldots \mathcal{O}_n(x_n)\rangle=\int [d\gamma_{ij}] \bigg(\prod_{i<j} (x_{ij}^2)^{-\gamma_{ij}}\Gamma[\gamma_{ij}]\bigg) \mathcal{M}(\gamma_{ij})\;.
\end{equation}
Here we have defined $x_{ij}^2\equiv (\vec{x}_i-\vec{x}_j)^2$, and we can set
\begin{equation}\label{MMcond1}
\gamma_{ij}=\gamma_{ji}\;,\quad \gamma_{ii}=-\Delta_i\;.
\end{equation}
Conformal invariance requires
\begin{equation}\label{MMcond2}
\sum_{j=1}^n\gamma_{ij}=0\;.
\end{equation}
The variables $\gamma_{ij}$ then satisfy the same set of constraints as the flat-space Mandelstam variables, except that the external squared masses are now replaced by the conformal dimensions $m_i^2=\Delta_i$. The function $\mathcal{M}(\gamma_{ij})$ is defined to be the Mellin amplitude which contains all the nontrivial dynamic information. Let us also write down the case of $n=4$ explicitly. We can write the correlator as
\begin{equation}
\langle\mathcal{O}_1(x_1)\ldots \mathcal{O}_4(x_4)\rangle=\frac{1}{(x_{12}^2)^{\frac{\Delta_1+\Delta_2}{2}}(x_{34}^2)^{\frac{\Delta_3+\Delta_4}{2}}}\left(\frac{x_{14}^2}{x_{24}^2}\right)^a\left(\frac{x_{14}^2}{x_{13}^2}\right)^b \mathcal{G}(U,V)\;,
\end{equation}
where $a=\frac{1}{2}(\Delta_2-\Delta_1)$, $b=\frac{1}{2}(\Delta_3-\Delta_4)$, and
\begin{equation}\label{eq:ConformalCrossRatios}
U=\frac{x_{12}^2x_{34}^2}{x_{13}^2x_{24}^2}\;,\quad V=\frac{x_{14}^2x_{23}^2}{x_{13}^2x_{24}^2}
\end{equation}
are the conformal cross ratios. The function $\mathcal{G}(U,V)$ is represented by
\begin{equation}\label{defMellin4pt}
\begin{split}
\mathcal{G}(U,V)=&\int_{-i\infty}^{i\infty}\frac{dsdt}{(4\pi i)^2}U^{\frac{s}{2}}V^{\frac{t}{2}-\frac{\Delta_2+\Delta_3}{2}}\mathcal{M}(s,t)\,\Gamma[\tfrac{\Delta_1+\Delta_2-s}{2}]\Gamma[\tfrac{\Delta_3+\Delta_4-s}{2}]\\
&\quad\quad\quad\times \Gamma[\tfrac{\Delta_1+\Delta_4-t}{2}]\Gamma[\tfrac{\Delta_2+\Delta_3-t}{2}] \Gamma[\tfrac{\Delta_1+\Delta_3-u}{2}]\Gamma[\tfrac{\Delta_2+\Delta_4-u}{2}]\;,
\end{split}
\end{equation}
and the Mandelstam variables satisfy $s+t+u=\sum_{i=1}^4\Delta_i$.
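The constraint on the Mandelstam variables follows directly from \eqref{MMcond1} and \eqref{MMcond2}, using the identification of $s$, $t$, $u$ implied by the Gamma-function arguments in \eqref{defMellin4pt} (e.g. $\gamma_{12}=\frac{1}{2}(\Delta_1+\Delta_2-s)$). A quick symbolic check:

```python
import sympy as sp

D = sp.symbols('Delta1:5')        # conformal dimensions Delta_1..Delta_4
s, t = sp.symbols('s t')
u = sum(D) - s - t                # impose s + t + u = sum of Delta_i

# gamma_ij read off from the Gamma-function arguments of the Mellin measure
g = {(1, 2): (D[0] + D[1] - s) / 2, (3, 4): (D[2] + D[3] - s) / 2,
     (1, 4): (D[0] + D[3] - t) / 2, (2, 3): (D[1] + D[2] - t) / 2,
     (1, 3): (D[0] + D[2] - u) / 2, (2, 4): (D[1] + D[3] - u) / 2}

def gamma(i, j):
    if i == j:
        return -D[i - 1]          # gamma_ii = -Delta_i
    return g.get((i, j), g.get((j, i)))

# conformal invariance: each row of gamma_ij sums to zero
rows = [sp.simplify(sum(gamma(i, j) for j in range(1, 5))) for i in range(1, 5)]
print(rows)  # → [0, 0, 0, 0]
```

Conversely, demanding that all four rows vanish forces $s+t+u=\sum_i\Delta_i$, leaving the expected $n(n-3)/2=2$ independent variables for $n=4$.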
\subsection{Basics of AdS diagrams}
The Witten diagrams which we will consider in this paper are built from AdS propagators, following rules that are similar to the flat-space position space Feynman rules. These propagators are Green's functions in AdS and can further be divided into bulk-to-bulk or bulk-to-boundary depending on the points of insertion. To write down these propagators it is useful to introduce the so-called embedding space formalism. In this formalism, a point $x^\mu$ in $\mathbb{R}^{d}$ is represented by a null ray $P^A$ in a $d+2$ dimensional embedding space $\mathbb{R}^{d+1,1}$
\begin{equation}
P^AP_A=0\;,\quad P^A\sim \lambda P^A\;.
\end{equation}
The nonlinear conformal transformations are linearized in the embedding space as rotations in $\mathbb{R}^{d+1,1}$. To make connection with the coordinates $x^\mu$, we can fix the rescaling degree of freedom and parameterize the null ray as
\begin{equation}
P^A=\bigg(\frac{1+x^2}{2},\frac{1-x^2}{2},\vec{x}\bigg)\;,
\end{equation}
where the signature is $(-,+,+,\ldots,+)$. The distance in $\mathbb{R}^{d}$ is represented in the embedding space as
\begin{equation}
x_{ij}^2=-2P_i\cdot P_j\equiv P_{ij}\;.
\end{equation}
The AdS space can also be conveniently represented by the embedding space. A point with Poincar\'e coordinates $z^\mu=(z_0,\vec{z})$ becomes a point $Z$ in $\mathbb{R}^{d+1,1}$
\begin{equation}
Z^A=\frac{1}{z_0}\bigg(\frac{1+z_0^2+\vec{z}^2}{2},\frac{1-z_0^2-\vec{z}^2}{2},\vec{z}\bigg)\;.
\end{equation}
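These embedding-space relations ($P\cdot P=0$, $-2P_i\cdot P_j=x_{ij}^2$, and $Z\cdot Z=-1$ for unit AdS radius) are easy to verify numerically; a minimal sketch with the mostly-plus signature used above:

```python
import numpy as np

def mink_dot(A, B):
    # inner product on R^{d+1,1} with signature (-,+,+,...,+)
    return -A[0] * B[0] + A[1:] @ B[1:]

def embed_boundary(x):
    # null ray P^A for a boundary point x in R^d (rescaling fixed)
    x2 = x @ x
    return np.concatenate([[(1 + x2) / 2, (1 - x2) / 2], x])

def embed_bulk(z0, z):
    # embedding Z^A of the AdS point with Poincare coordinates (z0, z)
    q = z0**2 + z @ z
    return np.concatenate([[(1 + q) / 2, (1 - q) / 2], z]) / z0
```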
Using this formalism, the bulk-to-boundary propagator of a scalar field with dimension $\Delta$ reads
\begin{equation}
G_{B\partial}^\Delta(P,Z)=\frac{1}{(-2Z\cdot P)^\Delta}\;.
\end{equation}
The scalar bulk-to-bulk propagator is given by
\begin{equation}
G_{BB}^\Delta(Z,W)=\tilde{C}_\Delta (2u^{-1})^\Delta {}_2F_1\bigg(\Delta,\Delta-\frac{d}{2}+\frac{1}{2};2\Delta-d+1;-2u^{-1}\bigg)\;,
\end{equation}
where
\begin{equation}
\tilde{C}_\Delta=\frac{\Gamma[\Delta]\Gamma[\Delta-\frac{d}{2}+\frac{1}{2}]}{(4\pi)^{\frac{d+1}{2}}\Gamma[2\Delta-d+1]}\;,
\end{equation}
and
\begin{equation}
u=\frac{(z_0-w_0)^2+(\vec{z}-\vec{w})^2}{2z_0w_0}=\frac{(Z-W)^2}{2}\;.
\end{equation}
The bulk-to-bulk propagator satisfies the equation of motion
\begin{equation}\label{EOMGBB}
(\square_Z-\Delta(\Delta-d))G_{BB}^\Delta(Z,W)=\delta(Z,W)\;.
\end{equation}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{fig_cccc}
\caption{Exchange Witten diagram}
\label{subfig:cccc}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{fig_contact}
\caption{Contact Witten diagram}
\label{subfig:contact}
\end{subfigure}
\caption{The simplest four-point Witten diagrams at tree level.}
\label{fig:basic4pttrees}
\end{figure}
The two simplest tree-level diagrams we can construct from these propagators are the exchange Witten diagram (Fig. \ref{subfig:cccc})
\begin{equation}
\label{eq:Wcccc}W_{\circ\circ\circ\circ}(P_i)=\int_{\rm AdS} dZ_1dZ_2 G_{B\partial}^{\Delta_1}(P_1,Z_1)G_{B\partial}^{\Delta_2}(P_2,Z_1)G_{BB}^\Delta(Z_1,Z_2)G_{B\partial}^{\Delta_3}(P_3,Z_2)G_{B\partial}^{\Delta_4}(P_4,Z_2)\;,
\end{equation}
and the contact Witten diagram (Fig. \ref{subfig:contact})
\begin{equation}
W_{\rm contact}(P_i)=\int_{\rm AdS} dZG_{B\partial}^{\Delta_1}(P_1,Z)G_{B\partial}^{\Delta_2}(P_2,Z)G_{B\partial}^{\Delta_3}(P_3,Z)G_{B\partial}^{\Delta_4}(P_4,Z)\;.
\end{equation}
Here we have used the notation $W_{\circ\circ\circ\circ}$, where the symbol $\circ$ denotes a single-particle state. This is in anticipation of later discussions of diagrams with external bound states which are denoted by $\bullet$. The contact Witten diagram $W_{\rm contact}$ is commonly known in the literature as the $D$-function and is denoted by $D_{\Delta_1\Delta_2\Delta_3\Delta_4}$. Before we proceed, let us point out two useful properties of these propagators which we will use in this paper.
\vspace{0.5cm}
\noindent{\bf The integrated vertex identity}
\vspace{0.3cm}
\noindent The first useful property is an identity about the following three-point integral
\begin{equation}
I(P_1,P_2;W)=\int_{\rm AdS} dZ G_{B\partial}^{\Delta_1}(P_1,Z)G_{B\partial}^{\Delta_2}(P_2,Z)G_{BB}^\Delta(Z,W)\;,
\end{equation}
which involves two bulk-to-boundary propagators and one bulk-to-bulk propagator. The bulk-to-bulk propagator can be integrated out and the integral reduces to a sum of products of bulk-to-boundary propagators \cite{DHoker:1999mqo,Zhou:2018sfz}
\begin{equation}\label{ividentity}
\begin{split}
{}&I(P_1,P_2,Z)=\sum_{i=0}^\infty (-2P_1\cdot P_2)^i T_i G_{B\partial}^{\Delta_1+i}(P_1,Z)G_{B\partial}^{\Delta_2+i}(P_2,Z)\\
{}&\quad\quad+\sum_{i=0}^\infty (-2P_1\cdot P_2)^{\frac{\Delta-\Delta_1-\Delta_2+2i}{2}} Q_i G_{B\partial}^{\frac{\Delta+\Delta_1-\Delta_2}{2}+i}(P_1,Z)G_{B\partial}^{\frac{\Delta-\Delta_1+\Delta_2}{2}+i}(P_2,Z)\;,
\end{split}
\end{equation}
where
\begin{equation}
T_i=\frac{(\Delta_1)_i (\Delta_2)_i}{(\Delta -\Delta_1-\Delta_2) (-d+\Delta +\Delta_1+\Delta_2) \left(\frac{-\Delta +\Delta_1+\Delta_2+2}{2}\right)_i \left(\frac{-d+\Delta +\Delta_1+\Delta_2+2}{2}\right)_i}\;,
\end{equation}
and
\begin{equation}
\begin{split}
Q_i={}&\frac{(-1)^i \Gamma[\frac{d-2 i-2\Delta}{2}]\sin[\frac{\pi (d-2 \Delta )}{2}]\Gamma[\frac{-d+\Delta +\Delta_1+\Delta_2}{2}]\Gamma[\frac{-\Delta -\Delta_1+\Delta_2+2}{2}] }{4 \pi \Gamma [i+1]\Gamma [\Delta_1] \Gamma [\Delta_2]}\\
{}&\times \frac{\Gamma [\frac{\Delta -\Delta_1+\Delta_2}{2}] \Gamma [\frac{\Delta +\Delta_1-\Delta_2}{2}] \Gamma[\frac{-\Delta +\Delta_1+\Delta_2}{2}]\Gamma[\frac{-\Delta +\Delta_1-\Delta_2+2}{2}] }{\Gamma[\frac{-\Delta +\Delta_1-\Delta_2-2 i+2}{2}]\Gamma[\frac{-\Delta -\Delta_1+\Delta_2-2 i+2}{2}]}\;.
\end{split}
\end{equation}
We will refer to this identity as the {\it integrated vertex identity} and it is diagrammatically depicted in Fig. \ref{fig:ivi}. Using this identity we can, for example, write the exchange Witten diagram (Fig. \ref{subfig:cccc}) as the sum of infinitely many contact Witten diagrams (Fig. \ref{subfig:contact}).
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{fig_ivi}
\caption{An illustration of the integrated vertex identities. After integrating out the scalar bulk-to-bulk propagator, the part of the diagram with a cubic vertex can be written as a sum of contact vertices.}
\label{fig:ivi}
\end{figure}
\vspace{0.5cm}
\noindent{\bf The split representation}
\vspace{0.3cm}
\noindent The second useful property is the so-called split representation for the bulk-to-bulk propagator \cite{Costa:2014kfa}, illustrated in Fig. \ref{fig:sr}. The bulk-to-bulk propagator can be written as a product of a pair of bulk-to-boundary propagators with dimensions $\frac{d}{2}+c$ and $\frac{d}{2}-c$ along the principal series, integrated over the boundary point and the parameter $c$. More precisely, we have
\begin{align}
\label{eq:split}G_{BB}^\Delta(Y,Z)=\int dP\int_{-i\infty}^{i\infty}\frac{dc}{2\pi i}\frac{2}{(\Delta-h)^2-c^2}\frac{\Gamma[h+c]\Gamma[h-c]}{\Gamma[c]\Gamma[-c]}G^{h+c}_{B\partial}(Y,P)G_{B\partial}^{h-c}(Z,P)\;,
\end{align}
where we have defined
\begin{align*}
h=\frac{d}{2}\;.
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{fig_sr}
\caption{An illustration of the split representation. The bulk-to-bulk propagator can be written as a product of bulk-to-boundary propagators integrated over the boundary point and the spectral parameter.}
\label{fig:sr}
\end{figure}
\section{Warm-up: No bound states}\label{Sec:Wcccc}
Let us warm up in this section with the simple case of an exchange Witten diagram (\ref{eq:Wcccc}) where all the external operators are dual to single-particle states. This is a standard example in the literature and the answer has been known for a long time. Our purpose in revisiting this example is to demonstrate the techniques reviewed in Section \ref{Sec:Preliminaries}, which will later be applied to more complicated examples.
\subsection{Using the integrated vertex identity}\label{Subsec:Wccccivi}
Let us first compute this diagram using the integrated vertex identity. Using (\ref{ividentity}), we can write the exchange Witten diagram (\ref{eq:Wcccc}) as
\begin{equation}\label{WccccasDfun}
\begin{split}
W_{\circ\circ\circ\circ}(x_i)={}&\sum_{i=0}^\infty (x_{12}^2)^i T_i D_{\Delta_1+i,\Delta_2+i,\Delta_3,\Delta_4}\\
{}&+\sum_{i=0}^\infty (x_{12}^2)^{\frac{\Delta-\Delta_1-\Delta_2+2i}{2}} Q_i D_{\frac{\Delta+\Delta_1-\Delta_2}{2}+i,\frac{\Delta-\Delta_1+\Delta_2}{2}+i,\Delta_3,\Delta_4}\;.
\end{split}
\end{equation}
This form of the answer is not particularly illuminating, so we now translate it into Mellin space. The Mellin amplitude of a $D$-function is just a constant \cite{Penedones:2010ue}
\begin{equation}\label{defDfunMellin}
D_{\Delta_1\ldots \Delta_k}(x_i)=\int [d\delta_{ij}]\prod_{i<j} (x_{ij}^2)^{-\delta_{ij}}\Gamma[\delta_{ij}] \mathcal{M}_{\Delta_1,\ldots,\Delta_k}\;,
\end{equation}
where
\begin{equation}
\mathcal{M}_{\Delta_1,\ldots,\Delta_k}=\frac{\frac{\pi^{\frac{d}{2}}}{2}\Gamma[\frac{\sum_i\Delta_i-d}{2}]}{\prod_i \Gamma[\Delta_i]}\;.
\end{equation}
Simple manipulations then give the Mellin amplitude for the following type of functions
\begin{equation}\label{Dnfunctions}
D^{\{n_{ij}\}}_{\Delta_1\ldots \Delta_k}(x_i)\equiv \prod_{i<j} (x_{ij}^2)^{n_{ij}}D_{\Delta^n_1\ldots \Delta^n_k}(x_i)\;,
\end{equation}
which appear on the RHS of (\ref{WccccasDfun}) with $k=4$. Here we require the parameters $n_{ij}$ and $\Delta_i^n$ to satisfy
\begin{equation}
n_{ij}=n_{ji}\;,\quad n_{ii}=0\;,\quad \sum_j n_{ij}=\Delta_i^n-\Delta_i\;,
\end{equation}
so that the external dimensions of each $D^{\{n_{ij}\}}_{\Delta_1\ldots \Delta_k}(x_i)$ are still $\Delta_i$. Using the Mellin representation (\ref{defDfunMellin}) on the RHS of (\ref{Dnfunctions}), we find that, after appropriately shifting the variables, the Mellin representation of $D^{\{n_{ij}\}}_{\Delta_1\ldots \Delta_k}(x_i)$ is
\begin{equation}
D^{\{n_{ij}\}}_{\Delta_1\ldots \Delta_k}(x_i)=\int [d\delta_{ij}]\prod_{i<j} (x_{ij}^2)^{-\delta_{ij}}\Gamma[\delta_{ij}] \mathcal{M}^{n_{ij}}_{\Delta_1,\ldots,\Delta_k}(\delta_{ij})\;,
\end{equation}
where
\begin{equation}\label{MellinofDn}
\mathcal{M}^{n_{ij}}_{\Delta_1,\ldots,\Delta_k}(\delta_{ij})=\prod_{i<j}\frac{\Gamma[\delta_{ij}+n_{ij}]}{\Gamma[\delta_{ij}]}\mathcal{M}_{\Delta^n_1,\ldots,\Delta^n_k}\;.
\end{equation}
Specifying to our current case, we arrive at the following expression for the Mellin amplitude of the exchange Witten diagram
\begin{equation}\label{MellinMccccinterm}
\begin{split}
\mathcal{M}_{\circ\circ\circ\circ}(s,t)={}&\sum_{i=0}^\infty T_i \frac{\Gamma[\frac{\Delta_1+\Delta_2-s}{2}+i]}{\Gamma[\frac{\Delta_1+\Delta_2-s}{2}]}\mathcal{M}_{\Delta_1+i,\Delta_2+i,\Delta_3,\Delta_4}\\
{}&+\sum_{i=0}^\infty Q_i \frac{\Gamma[\frac{\Delta-s}{2}+i]}{\Gamma[\frac{\Delta_1+\Delta_2-s}{2}]}\mathcal{M}_{\frac{\Delta+\Delta_1-\Delta_2}{2}+i,\frac{\Delta-\Delta_1+\Delta_2}{2}+i,\Delta_3,\Delta_4}\;.
\end{split}
\end{equation}
On the other hand, it is known that the Mellin amplitude has the following analytic structure
\begin{equation}\label{MellinMcccc}
\mathcal{M}_{\circ\circ\circ\circ}(s,t)=\sum_{m=0}^\infty \frac{C^{(0)}_m}{s-\Delta-2m}\;.
\end{equation}
This structure is anticipated from the large $N$ expansion analysis \cite{Penedones:2010ue} and can be rigorously derived by using the Casimir equation (equation of motion identity) in Mellin space.\footnote{More precisely, the identity is given by
\begin{equation}
\big({\rm Cas}-\Delta(\Delta-d)\big)W_{\circ\circ\circ\circ}=W_{\rm contact}\;,
\end{equation}
where ${\rm Cas}=-\frac{1}{2}(L_1^{AB}+L_2^{AB})(L_{1,AB}+L_{2,AB})$ is the bi-particle quadratic conformal Casimir built from the conformal generators $L^{AB}_{1,2}$ acting on operators 1 and 2. This identity follows from (\ref{EOMGBB}), which is the equation of motion for the AdS scalar field, and translates into a difference equation for the Mellin amplitude. For more details, see for instance Appendix C of the review \cite{Bissi:2022mrs}.
} Therefore, we can just focus on the poles at $s=\Delta+2m$ in (\ref{MellinMccccinterm}), and we get
\begin{equation}\label{C0}
C^{(0)}_m=-\frac{\pi^{\frac{d}{2}}\Gamma[\frac{\Delta+\Delta_1+\Delta_2-d}{2}]\Gamma[\frac{\Delta+\Delta_3+\Delta_4-d}{2}]}{4\Gamma[\Delta_1]\Gamma[\Delta_2]\Gamma[\Delta_3]\Gamma[\Delta_4]\Gamma[1-\frac{d}{2}+\Delta]}\frac{(\frac{\Delta-\Delta_1-\Delta_2+2}{2})_m(\frac{\Delta-\Delta_3-\Delta_4+2}{2})_m}{m!(-\frac{d}{2}+\Delta+1)_m}\;.
\end{equation}
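As an independent numerical cross-check (ours, not part of the original derivation), one can verify that picking up the residues of the $Q_i$-sum in (\ref{MellinMccccinterm}) at $s=\Delta+2m$ reproduces (\ref{C0}); for generic non-integer dimensions the $T_i$-sum is regular at these locations. A minimal sketch in Python, with function names and numerical values of our own choosing:

```python
# Cross-check: residues of the Q-sum at s = Delta + 2m reproduce C^(0)_m.
# Generic non-integer dimensions avoid accidental pole collisions.
import math

d, D, D1, D2, D3, D4 = 3.7, 2.3, 1.1, 1.6, 1.4, 1.9
h = d / 2
g = math.gamma

def rf(x, n):  # Pochhammer symbol (x)_n
    r = 1.0
    for j in range(n):
        r *= x + j
    return r

def Q(i):  # coefficient Q_i of the integrated vertex identity, as printed
    pre = ((-1)**i * g((d - 2*i - 2*D)/2) * math.sin(math.pi*(d - 2*D)/2)
           * g((-d + D + D1 + D2)/2) * g((-D - D1 + D2 + 2)/2)
           ) / (4*math.pi*math.factorial(i)*g(D1)*g(D2))
    return pre * (g((D - D1 + D2)/2) * g((D + D1 - D2)/2)
                  * g((-D + D1 + D2)/2) * g((-D + D1 - D2 + 2)/2)
                  ) / (g((-D + D1 - D2 - 2*i + 2)/2)
                       * g((-D - D1 + D2 - 2*i + 2)/2))

def M_const(a1, a2, a3, a4):  # constant Mellin amplitude of D_{a1 a2 a3 a4}
    return (math.pi**h / 2 * g((a1 + a2 + a3 + a4 - d)/2)
            / (g(a1)*g(a2)*g(a3)*g(a4)))

def C0(m):  # closed-form residue C^(0)_m
    pref = -(math.pi**h * g((D + D1 + D2 - d)/2) * g((D + D3 + D4 - d)/2)
             ) / (4*g(D1)*g(D2)*g(D3)*g(D4)*g(1 - h + D))
    return (pref * rf((D - D1 - D2 + 2)/2, m) * rf((D - D3 - D4 + 2)/2, m)
            / (math.factorial(m) * rf(D - h + 1, m)))

def C0_from_residues(m):
    # residue of Gamma[(D - s)/2 + i] at s = D + 2m is -2*(-1)^(m-i)/(m-i)!,
    # so only the terms i = 0, ..., m of the Q-sum contribute
    total = 0.0
    for i in range(m + 1):
        total += (Q(i) * M_const((D + D1 - D2)/2 + i, (D - D1 + D2)/2 + i, D3, D4)
                  * (-2.0) * (-1)**(m - i) / math.factorial(m - i))
    return total / g((D1 + D2 - D)/2 - m)
```

The agreement can be checked for any generic assignment of the dimensions; the finite sum over $i$ reflects the fact that only finitely many terms of the $Q_i$-sum are singular at each pole.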
\subsection{Using the split representation}\label{SplitM4}
In this subsection, we will use the split representation to compute the Mellin amplitude of $W_{\circ\circ\circ\circ}$. Using the split representation for the bulk-to-bulk propagator $G^{\Delta}_{BB}(Y,Z)$ in the definition (\ref{eq:Wcccc}), $W_{\circ\circ\circ\circ}$ can be written as
\begin{align}
W_{\circ\circ\circ\circ}=\frac{1}{2\pi^d}\int_{-i\infty}^{i\infty}\frac{dc}{2\pi i}\frac{1}{(\Delta-h)^2-c^2}\frac{\Gamma(h+c)\Gamma(h-c)}{\Gamma(c)\Gamma(-c)}\int dP_0 W^{L}_{\circ\circ\circ}W^R_{\circ\circ\circ}\;,
\end{align}
where
\begin{align}
W^L_{\circ\circ\circ}=\int dZ\bigg(\prod_{i=1}^2G^{\Delta_i}_{B\partial}(Z,P_i)\bigg)G^{h+c}_{B\partial}(Z,P_0)\;,
\end{align}
and
\begin{align}
W^R_{\circ\circ\circ}=\int dY\bigg(\prod_{i=3}^4G^{\Delta_i}_{B\partial}(Y,P_i)\bigg)G^{h-c}_{B\partial}(Y,P_0)\;.
\end{align}
The left and right three-point amplitudes can be recast in the Mellin representation through\footnote{This is not necessary for the current case because the Mellin-Mandelstam variables are completely fixed by (\ref{MMcond1}) and (\ref{MMcond2}). However, this representation will become nontrivial and useful later when we follow the same procedure to compute other exchange diagrams where the vertices are quartic or higher.}
\begin{align}
\int dZ\prod_{i=1}^nG^{\Delta_i}_{B\partial}(Z,P_i)=\frac{\pi^h}{2}\Gamma\left[\frac{\sum_{i=1}^n\Delta_i-d}{2}\right]\prod_{i=1}^n\frac{1}{\Gamma[\Delta_i]}\int[d\gamma]\prod_{i<j}^n\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\;,
\end{align}
leading to
\begin{align}
W_{\circ\circ\circ\circ}=&\frac{1}{8}\prod_{i=1}^4\frac{1}{\Gamma[\Delta_i]}\int\frac{dc}{2\pi i}\frac{1}{(\Delta-h)^2-c^2}\frac{\Gamma[\frac{\Delta_1+\Delta_2-h+c}{2}]\Gamma[\frac{\Delta_3+\Delta_4-h-c}{2}]}{\Gamma[c]\Gamma[-c]}\\\nonumber
&\times\int[d\tilde{\gamma}]_L[dl]_L\Gamma[\tilde{\gamma}_{12}]P_{12}^{-\tilde{\gamma}_{12}}\int[d\tilde{\gamma}]_R[dl]_R\Gamma[\tilde{\gamma}_{34}]P_{34}^{-\tilde{\gamma}_{34}}\int dP_0\prod_{i=1}^4\Gamma[l_i]P_{0i}^{-l_i}\;.
\end{align}
Here the integration measure $[d\tilde{\gamma}]_L[dl]_L$ satisfies
\begin{align}
l_1+l_2=h+c\;, \quad l_1+\tilde{\gamma}_{12}=\Delta_1\;,\quad l_2+\tilde{\gamma}_{12}=\Delta_2\;,
\end{align}
and the integration measure $[d\tilde{\gamma}]_R[dl]_R$ satisfies
\begin{align}
l_3+l_4=h-c\;, \quad l_3+\tilde{\gamma}_{34}=\Delta_3\;,\quad l_4+\tilde{\gamma}_{34}=\Delta_4\;.
\end{align}
The integral over the boundary is conformal because $\sum_{i=1}^4l_i=d$ and can be evaluated through the Symanzik formula \cite{Symanzik:1972wj}
\begin{align}
\int dP_0\prod_{i=1}^4\Gamma[l_{i}]P_{0i}^{-l_{i}}=\pi^h\int[d\gamma_{ij}]\left(\prod_{1\leq i<j\leq 4}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\right)\;,
\end{align}
where the measure is constrained by
\begin{align}
\sum_{\substack{j=1\\j\neq i}}^4\gamma_{ij}=l_i\;,\qquad i=1,2,3,4\;.
\end{align}
After that, one can shift $\gamma_{ij}\rightarrow \gamma_{ij}-\tilde{\gamma}_{ij}$ for $(i,j)=(1,2)$ and $(3,4)$. This gives the correct coordinate dependence factor in the definition (\ref{defMellinnpt}) and allows one to easily read off the Mellin amplitude $\mathcal{M}_{\circ\circ\circ\circ}$, which is given by
\begin{align}
\mathcal{M}_{\circ\circ\circ\circ}=&\frac{\pi^{h}}{8}\prod_{i=1}^4\frac{1}{\Gamma[\Delta_i]}\int\frac{dc}{2\pi i}\frac{1}{(\Delta-h)^2-c^2}\frac{\Gamma[\frac{\Delta_1+\Delta_2-h+c}{2}]\Gamma[\frac{\Delta_3+\Delta_4-h-c}{2}]}{\Gamma[c]\Gamma[-c]}\\\nonumber
&\times\int[d\tilde{\gamma}]_L[dl]_L\frac{\Gamma[\tilde{\gamma}_{12}]\Gamma[\gamma_{12}-\tilde{\gamma}_{12}]}{\Gamma[\gamma_{12}]}\int[d\tilde{\gamma}]_R[dl]_R\frac{\Gamma[\tilde{\gamma}_{34}]\Gamma[\gamma_{34}-\tilde{\gamma}_{34}]}{\Gamma[\gamma_{34}]}\;.
\end{align}
By solving the constraints, we compute the integral over $[d\tilde{\gamma}]_L[dl]_L$ and $[d\tilde{\gamma}]_R[dl]_R$, leading to
\begin{align}
\mathcal{M}_{\circ\circ\circ\circ}=&\frac{\pi^{h}}{8}\prod_{i=1}^4\frac{1}{\Gamma[\Delta_i]}\int\frac{dc}{2\pi i}\frac{1}{(\Delta-h)^2-c^2}\frac{U(s,c)U(s,-c)}{\Gamma[\frac{\Delta_1+\Delta_2-s}{2}]\Gamma[\frac{\Delta_3+\Delta_4-s}{2}]}\;,
\end{align}
where we defined
\begin{align}
s=\Delta_1+\Delta_2-2\gamma_{12}=\Delta_3+\Delta_4-2\gamma_{34}\;,
\end{align}
and
\begin{align}
U(s,c)=\frac{\Gamma[\frac{\Delta_1+\Delta_2-h+c}{2}]\Gamma[\frac{\Delta_3+\Delta_4-h+c}{2}]\Gamma[\frac{h+c-s}{2}]}{\Gamma[c]}\;.
\end{align}
Poles in $s$ arise when poles of the integrand pinch the $c$-contour. In particular, the pole of the factor $[(\Delta-h)^2-c^2]^{-1}$ at $c=\Delta-h$ collides with the poles of $\Gamma[\frac{h+c-s}{2}]$ at $c=s-h-2m$, giving rise to poles at
\begin{align}
s=\Delta+2m\;,\qquad m\in \mathbb{Z}_{\geq0}\;.
\end{align}
As a result, $\mathcal{M}_{\circ\circ\circ\circ}$ can be written as
\begin{align}
\label{eq:Mcccc}\mathcal{M}_{\circ\circ\circ\circ}=\sum_{m=0}^{\infty}\frac{C^{(0)}_m}{s-\Delta-2m}\;,
\end{align}
and we find the same residues $C^{(0)}_m$ as given in \eqref{C0}.
\section{Four-point function with one bound state}\label{Sec:Wbccc}
We now proceed to compute the diagram with one bound state, as depicted in Fig. \ref{fig:bccc}.\footnote{One may wonder whether one can get other bound state Witten diagrams with the same set of vertices and propagators. For example, one may consider a diagram where the propagator with dimension $\Delta_5$ starts from the same bulk point but ends on the boundary point $P_4$. However, this case is trivial because the bulk-to-boundary propagators satisfy the relation $G^{\Delta_i}_{B\partial}(P,Z)G^{\Delta_j}_{B\partial}(P,Z)=G^{\Delta_i+\Delta_j}_{B\partial}(P,Z)$ and the diagram reduces to the diagram (\ref{eq:Wcccc}) without bound states. Therefore, the only nontrivial bound state Witten diagram is Fig. \ref{fig:bccc} up to permutations.} We will use two approaches. The first approach (Section \ref{Subsec:Wbcccivi}) uses the integrated vertex identity and is a straightforward generalization of the calculation presented in Section \ref{Sec:Wcccc}. The second approach (Section \ref{CoincidenceLimit}) obtains the bound state diagram by taking a coincidence limit of a five-point diagram with single-particle external states.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{fig_bccc}
\caption{The exchange Witten diagram with one bound state. }
\label{fig:bccc}
\end{figure}
\subsection{Using the integrated vertex identity}\label{Subsec:Wbcccivi}
The computation of this diagram is similar to that of Section \ref{Subsec:Wccccivi}. We apply the integrated vertex identity to the cubic vertex integral involving the three propagators with dimensions $\Delta_1$, $\Delta_2$ and $\Delta$. This again turns the diagram into infinite sums of $D$-functions
\begin{equation}\label{WbcccinDfun}
\begin{split}
W_{\bullet\circ\circ\circ}={}&\sum_{i=0}^\infty (x_{12}^2)^i T_i D_{\Delta_1+\Delta_5+i,\Delta_2+i,\Delta_3,\Delta_4}\\
{}&+\sum_{i=0}^\infty (x_{12}^2)^{\frac{\Delta-\Delta_1-\Delta_2+2i}{2}} Q_i D_{\frac{\Delta+\Delta_1-\Delta_2}{2}+\Delta_5+i,\frac{\Delta-\Delta_1+\Delta_2}{2}+i,\Delta_3,\Delta_4}\;.
\end{split}
\end{equation}
Translating this result into Mellin space, we get an expression similar to (\ref{MellinMccccinterm}). The expression has poles at $s=\Delta+\Delta_5+2m$ for $m\in \mathbb{Z}_{\geq 0}$. Therefore, we can rewrite the amplitude as a sum over these simple poles. However, we can no longer use the equation of motion identity to rule out additional regular terms, because the bi-particle Casimir operator for $x_{1,2}$ necessarily acts on the bulk-to-boundary propagator with dimension $\Delta_5$ as well. On the other hand, for special dimensions satisfying $\Delta_1+\Delta_2-\Delta\in 2\mathbb{Z}_{\geq 0}$, we can verify that the regular terms from the two sums cancel and there is no regular part in the Mellin amplitude.\footnote{For example, for $\Delta_1=\Delta_2=\Delta=2$, terms with $i\geq 0$ in the first sum and terms with $i\geq 1$ in the second sum are regular. However, the $(i+1)$-th term in the second sum precisely cancels the $i$-th term in the first sum.} Therefore, we will assume in this subsection that the absence of the regular term is a general feature and the Mellin amplitude has the form
\begin{equation}\label{MellinMbccc}
\mathcal{M}_{\bullet\circ\circ\circ}(s,t)=\sum_{m=0}^\infty \frac{C^{(1)}_m}{s-\Delta-\Delta_5-2m}\;.
\end{equation}
In the ensuing subsection, we will reproduce this result using a complementary method, which also allows us to prove that the regular term is absent. Once the analytic structure (\ref{MellinMbccc}) has been determined, computing $C^{(1)}_m$ is straightforward. These coefficients can be extracted from the residues of (\ref{WbcccinDfun}) in Mellin space and read
\begin{equation}
\label{eq:C1}
\begin{split}
C_m^{(1)}={}&\mathcal{N}^{(1)}{}_3F_2\left(\left.\begin{array}{c}-m, \frac{d}{2}-m-\Delta, 1-m-\frac{\Delta+\Delta_1-\Delta_2}{2}-\Delta_5 \\1-m-\frac{\Delta+\Delta_1-\Delta_2}{2},1-m+\frac{d-\Delta-\Delta_3-\Delta_4-\Delta_5}{2} \end{array}\right.\bigg|1\right)\\
{}&\times \frac{(-1)^m \left(\frac{\Delta -\Delta_1-\Delta_2+2}{2}\right)_m \left(\frac{\Delta +\Delta_1-\Delta_2}{2}\right)_m \left(\frac{-d+\Delta +\Delta_3+\Delta_4+\Delta_5}{2}\right)_m}{m! \left(-\frac{d}{2}+\Delta +1\right)_m \left(\frac{\Delta +\Delta_1-\Delta_2+2 \Delta_5}{2}\right)_m }\;,
\end{split}
\end{equation}
where
\begin{equation}
\mathcal{N}^{(1)}=-\frac{\pi^{\frac{d}{2}}\Gamma[\frac{\Delta+\Delta_1+\Delta_2-d}{2}]\Gamma[\frac{\Delta+\Delta_3+\Delta_4+\Delta_5-d}{2}]\Gamma[\frac{\Delta-\Delta_2+\Delta_1}{2}]}{4\Gamma[\Delta_1]\Gamma[\Delta_2]\Gamma[\Delta_3]\Gamma[\Delta_4]\Gamma[1-\frac{d}{2}+\Delta]\Gamma[\frac{\Delta-\Delta_2+\Delta_1}{2}+\Delta_5]}\;.
\end{equation}
Note that setting $\Delta_5=0$ reduces the Witten diagram to the exchange Witten diagram in Section \ref{Sec:Wcccc}. We find
\begin{equation}
C^{(1)}_m\big|_{\Delta_5=0}=C^{(0)}_m\;,
\end{equation}
reproducing the expression (\ref{MellinMcccc}).
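This reduction can also be verified numerically. The following sketch (ours, not part of the text; it uses only the Python standard library, since the ${}_3F_2$ with upper parameter $-m$ truncates to a finite sum) evaluates (\ref{eq:C1}) at $\Delta_5=0$ against (\ref{C0}) for generic non-integer dimensions:

```python
# Check: C^(1)_m reduces to C^(0)_m at Delta_5 = 0 (generic dimensions).
import math

d, D, D1, D2, D3, D4 = 3.7, 2.3, 1.1, 1.6, 1.4, 1.9
h = d / 2
g = math.gamma

def rf(x, n):  # Pochhammer symbol (x)_n
    r = 1.0
    for j in range(n):
        r *= x + j
    return r

def hyp(num, den, m):  # terminating pFq at z = 1, truncated by (-m)_k
    return sum(math.prod(rf(a, k) for a in num)
               / math.prod(rf(b, k) for b in den) / math.factorial(k)
               for k in range(m + 1))

def C0(m):  # four-point residue, transcribed from the closed form
    pref = -(math.pi**h * g((D + D1 + D2 - d)/2) * g((D + D3 + D4 - d)/2)
             ) / (4*g(D1)*g(D2)*g(D3)*g(D4)*g(1 - h + D))
    return (pref * rf((D - D1 - D2 + 2)/2, m) * rf((D - D3 - D4 + 2)/2, m)
            / (math.factorial(m) * rf(D - h + 1, m)))

def C1(m, D5):  # one-bound-state residue C^(1)_m
    a = (D + D1 - D2)/2
    N1 = -(math.pi**h * g((D + D1 + D2 - d)/2) * g((D + D3 + D4 + D5 - d)/2)
           * g(a)) / (4*g(D1)*g(D2)*g(D3)*g(D4)*g(1 - h + D)*g(a + D5))
    F = hyp([-m, h - m - D, 1 - m - a - D5],
            [1 - m - a, 1 - m + (d - D - D3 - D4 - D5)/2], m)
    fac = ((-1)**m * rf((D - D1 - D2 + 2)/2, m) * rf(a, m)
           * rf((-d + D + D3 + D4 + D5)/2, m)
           ) / (math.factorial(m) * rf(D - h + 1, m) * rf(a + D5, m))
    return N1 * F * fac
```

At $\Delta_5=0$ the third upper parameter of the ${}_3F_2$ collides with the first lower one, collapsing it to a ${}_2F_1$ evaluated at unit argument, which is what the code checks term by term.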
\subsection{From the coincidence limit}\label{CoincidenceLimit}
In this subsection, we will rederive the Mellin amplitude $\mathcal{M}_{\bullet\circ\circ\circ}$ by taking the coincidence limit $P_5\to P_1$ in Fig. \ref{fig:5pt}. Specifically, the four-point diagram $W_{\bullet\circ\circ\circ}(P_i)$ can be obtained from the five-point diagram $W_{\circ\circ\circ\circ\circ}(P_i)$ through
\begin{equation}
\begin{split}
W_{\bullet\circ\circ\circ}={}&\lim_{P_5\rightarrow P_1}W_{\circ\circ\circ\circ\circ}(P_i)\\
={}&\lim_{P_5\rightarrow P_1}\int[d\gamma_{ij}]_5\mathcal{M}_{\circ\circ\circ\circ\circ}(s-\Delta_5,t)\prod_{1\leq i<j\leq 5}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\;,
\end{split}
\end{equation}
where the integration measure $[d\gamma_{ij}]_5$ is constrained by
\begin{align}
\sum_{\substack{j=1\\j\neq i}}^5\gamma_{ij}=\Delta_i
\end{align}
for $1\leq i\leq 5$, and we defined $s$ by
\begin{equation}
s=\Delta_1+\Delta_2+\Delta_5-2\gamma_{12}\;.
\end{equation}
The first step of the calculation is to compute the five-point Mellin amplitude $\mathcal{M}_{\circ\circ\circ\circ\circ}$, which can be obtained by using the split representation. Because the calculation is very similar to the four-point case presented in Section \ref{SplitM4}, we will omit the details and just write down the result. The Mellin amplitude reads
\begin{equation}
\label{eq:M5}\mathcal{M}_{\circ\circ\circ\circ\circ}(s,t)=\prod_{i=1}^5\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{K^{(0)}_m}{s-\Delta-2m}\;,
\end{equation}
where
\begin{equation}
\label{eq:Km}K^{(0)}_m=\frac{-\pi^h\Gamma[\frac{\sum_{i=1}^2\Delta_i+\Delta-d}{2}]\Gamma[\frac{\sum_{i=3}^5\Delta_i+\Delta-d}{2}](\frac{2+\Delta-\sum_{i=1}^2\Delta_i}{2})_m(\frac{2+\Delta-\sum_{i=3}^5\Delta_i}{2})_m}{4m!\Gamma[\Delta-h+1+m]}\;.
\end{equation}
We now start to take the coincidence limit. Let us first shift $\gamma_{1j}\rightarrow\gamma_{1j}-\gamma_{j5}$ for $2\leq j\leq 4$, which gives
\begin{align}
\begin{split}
W_{\bullet\circ\circ\circ}=&\lim_{P_5\rightarrow P_1}\int[d\gamma_{ij}]_5^{\prime}\mathcal{M}_{\circ\circ\circ\circ\circ}(s-\Delta_5+2\gamma_{25},t)P_{15}^{-\gamma_{15}}\prod_{1\leq i<j\leq 4}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\\
&\times\frac{\Gamma[\gamma_{15}]\Gamma[\gamma_{25}]\Gamma[\gamma_{35}]\Gamma[\gamma_{45}]\Gamma[\gamma_{12}-\gamma_{25}]\Gamma[\gamma_{13}-\gamma_{35}]\Gamma[\gamma_{14}-\gamma_{45}]}{\Gamma[\gamma_{12}]\Gamma[\gamma_{13}]\Gamma[\gamma_{14}]}\;.
\end{split}
\end{align}
Here the integration measure $[d\gamma_{ij}]_5^{\prime}$ is constrained by
\begin{align}
&\sum_{j=2}^4\gamma_{1j}=\Delta_1+\Delta_5-2\gamma_{15},\qquad\sum_{\substack{j=1\\j\neq i}}^4\gamma_{ij}=\Delta_i,\quad 2\leq i\leq5\;.
\end{align}
In the limit where $P_5$ approaches $P_1$, we can close the integration contour of $\gamma_{15}$ to the left in the complex $\gamma_{15}$-plane. Due to the factor $\Gamma[\gamma_{15}]$, the leading contribution is given by the residue of the pole at $\gamma_{15}=0$, and it is the only contribution we need to keep. Physically, this corresponds to the fact that the limit $x_{15}^2\to 0$ is regular. Evaluating the integral over $\gamma_{15}$ thus leads to
\begin{align}
\begin{split}
W_{\bullet\circ\circ\circ}=&\int[d\gamma_{ij}]_4^{\prime}\prod_{i=2}^3\frac{d\gamma_{i5}}{2\pi i}\mathcal{M}_{\circ\circ\circ\circ\circ}(s-\Delta_5+2\gamma_{25},t)\Gamma[\gamma_{25}]\Gamma[\gamma_{35}]\prod_{1\leq i<j\leq 4}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\\
&\times\frac{\Gamma[\Delta_5-\gamma_{25}-\gamma_{35}]\Gamma[\gamma_{12}-\gamma_{25}]\Gamma[\gamma_{13}-\gamma_{35}]\Gamma[-\Delta_5+\gamma_{14}+\gamma_{25}+\gamma_{35}]}{\Gamma[\gamma_{12}]\Gamma[\gamma_{13}]\Gamma[\gamma_{14}]}\;,
\end{split}
\end{align}
where the integration measure $[d\gamma_{ij}]_4^{\prime}$ is constrained by
\begin{align}
&\sum_{j=2}^4\gamma_{1j}=\Delta_1+\Delta_5\;,\qquad\sum_{\substack{j=1\\j\neq i}}^4\gamma_{ij}=\Delta_i,\quad 2\leq i\leq4\;.
\end{align}
The Mellin amplitude $\mathcal{M}_{\bullet\circ\circ\circ}$ is thus given by
\begin{align}
\label{eq:gamma}
\begin{split}
\mathcal{M}_{\bullet\circ\circ\circ}=&\prod_{i=2}^3\int\frac{d\gamma_{i5}}{2\pi i}\mathcal{M}_{\circ\circ\circ\circ\circ}(s-\Delta_5+2\gamma_{25},t)\Gamma[\gamma_{25}]\Gamma[\gamma_{12}-\gamma_{25}]\\
&\times\frac{\Gamma[\gamma_{35}]\Gamma[\Delta_5-\gamma_{25}-\gamma_{35}]\Gamma[\gamma_{13}-\gamma_{35}]\Gamma[-\Delta_5+\gamma_{14}+\gamma_{25}+\gamma_{35}]}{\Gamma[\gamma_{12}]\Gamma[\gamma_{13}]\Gamma[\gamma_{14}]}\;.
\end{split}
\end{align}
Performing the integrals over $\gamma_{35}$ and $\gamma_{25}$ leads to an expression for $\mathcal{M}_{\bullet\circ\circ\circ}$ which agrees with \eqref{MellinMbccc}. We will show this explicitly in Appendix \ref{Mbccc}. Moreover, the approach of taking the coincidence limit also enables us to prove that the regular term vanishes when $\Delta_1>0$. For the sake of readability, we only outline the proof here and leave the details to Appendix \ref{Mbccc}. The starting point of the proof is to write $\mathcal{M}_{\bullet\circ\circ\circ}$ as a sum of two parts by performing the remaining integral. Each part can be rewritten as a sum over poles, up to a regular term which we wish to show is absent. The sum over all the poles can be performed and leads to a generalized hypergeometric function ${}_3F_2$. Thanks to a hypergeometric function identity, valid when $\Delta_1>0$, the sum over the poles already coincides with the original expression. This leads to the conclusion that the regular term must be absent.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{fig_5pt}
\caption{The exchange Witten diagram with one bound state can be obtained from a five-point exchange Witten diagram by taking a coincidence limit where $x_5\to x_1$.}
\label{fig:5pt}
\end{figure}
\section{Four-point function with two bound states: Type I}
We now consider the Witten diagram with two bound states of Type I (Fig. \ref{fig:bbcctypeI}). As in the previous section, we can evaluate the diagram using two methods, and we find that the result is structurally similar to the one-bound-state case.
\label{Sec:WbbcctypeI}
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{fig_bbcctypeI}
\caption{Exchange Witten diagram with two bound states (Type I). }
\label{fig:bbcctypeI}
\end{figure}
\subsection{Using the integrated vertex identity}
Because the diagram contains a cubic vertex, the method based on the integrated vertex identity can also be applied here. Applying the identity to the cubic vertex connecting the propagators with dimensions $\Delta_1$, $\Delta_2$ and $\Delta$ reduces the diagram to $D$-functions
\begin{equation}
\begin{split}
W_{\bullet\bullet\circ\circ}^{\rm I}={}&\sum_{i=0}^\infty (x_{12}^2)^i T_i D_{\Delta_1+\Delta_6+i,\Delta_2+\Delta_5+i,\Delta_3,\Delta_4}\\
{}&+\sum_{i=0}^\infty (x_{12}^2)^{\frac{\Delta-\Delta_1-\Delta_2+2i}{2}} Q_i D_{\frac{\Delta+\Delta_1-\Delta_2}{2}+\Delta_6+i,\frac{\Delta-\Delta_1+\Delta_2}{2}+\Delta_5+i,\Delta_3,\Delta_4}\;.
\end{split}
\end{equation}
With the help of (\ref{MellinofDn}) we can translate the result into Mellin space and obtain an expression for its Mellin amplitude similar to (\ref{MellinMccccinterm}). It is not difficult to see that in Mellin space this diagram has poles at $s=\Delta+\Delta_5+\Delta_6+2m$ for $m\in \mathbb{Z}_{\geq 0}$. As for the diagram considered in Section \ref{Sec:Wbccc}, we cannot use the equation of motion identity argument to rule out the regular term. However, based on the explicit results for dimensions satisfying $\Delta_1+\Delta_2-\Delta\in 2\mathbb{Z}_{\geq 0}$, we will assume that the regular term vanishes in general and the Mellin amplitude has the form
\begin{equation}\label{MellinMbbccI}
\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}(s,t)=\sum_{m=0}^\infty \frac{C^{(2)}_m}{s-\Delta-\Delta_5-\Delta_6-2m}\;.
\end{equation}
Later in Section \ref{Subsec:WbbccIcoinlim}, we will explain how the regular term can be shown to be absent by using the other method based on the coincidence limit. We can compute the residues and find
\begin{equation}
\begin{split}
\label{eq:C2}
C_m^{(2)}={}&\mathcal{N}^{(2)}{}_4F_3\left(\left.\begin{array}{c}-m, \frac{d}{2}-m-\Delta, 1-m-\frac{\Delta+\Delta_1-\Delta_2}{2}-\Delta_6, 1-m-\frac{\Delta-\Delta_1+\Delta_2}{2}-\Delta_5 \\1-m-\frac{\Delta+\Delta_1-\Delta_2}{2},1-m-\frac{\Delta-\Delta_1+\Delta_2}{2},1-m+\frac{d-\Delta-\Delta_3-\Delta_4-\Delta_5-\Delta_6}{2} \end{array}\right.\bigg|1\right)\\
{}&\times \frac{(-1)^m \left(\frac{\Delta -\Delta_1-\Delta_2+2}{2}\right)_m \left(\frac{\Delta -\Delta_1+\Delta_2}{2}\right)_m \left(\frac{\Delta +\Delta_1-\Delta_2}{2}\right)_m \left(\frac{-d+\Delta +\Delta_3+\Delta_4+\Delta_5+\Delta_6}{2}\right)_m}{m! \left(-\frac{d}{2}+\Delta +1\right)_m \left(\frac{\Delta +\Delta_1-\Delta_2+2 \Delta_6}{2}\right)_m \left(\frac{\Delta -\Delta_1+\Delta_2+2 \Delta_5}{2}\right)_m}\;,
\end{split}
\end{equation}
where
\begin{equation}
\mathcal{N}^{(2)}=-\frac{\pi^{\frac{d}{2}}\Gamma[\frac{\Delta+\Delta_1+\Delta_2-d}{2}]\Gamma[\frac{\Delta+\Delta_3+\Delta_4+\Delta_5+\Delta_6-d}{2}]\Gamma[\frac{\Delta-\Delta_1+\Delta_2}{2}]\Gamma[\frac{\Delta-\Delta_2+\Delta_1}{2}]}{4\Gamma[\Delta_1]\Gamma[\Delta_2]\Gamma[\Delta_3]\Gamma[\Delta_4]\Gamma[1-\frac{d}{2}+\Delta]\Gamma[\frac{\Delta-\Delta_1+\Delta_2}{2}+\Delta_5]\Gamma[\frac{\Delta-\Delta_2+\Delta_1}{2}+\Delta_6]}\;.
\end{equation}
Note that setting $\Delta_6=0$ reduces to the case considered in Section \ref{Sec:Wbccc}. Furthermore, there is a symmetry under exchanging $(\Delta_1,\Delta_6)$ with $(\Delta_2,\Delta_5)$, which is manifest in Fig. \ref{fig:bbcctypeI}.
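The stated symmetry, as well as the reduction to (\ref{C0}) when both $\Delta_5=\Delta_6=0$, can be checked numerically. The sketch below (ours, not part of the text; generic non-integer dimensions, finite hypergeometric sums) evaluates (\ref{eq:C2}) directly:

```python
# Check: C^(2)_m is invariant under (D1, D6) <-> (D2, D5), and reduces
# to C^(0)_m when D5 = D6 = 0 (generic non-integer dimensions, our values).
import math

d, D, D3, D4 = 3.7, 2.3, 1.4, 1.9
h = d / 2
g = math.gamma

def rf(x, n):  # Pochhammer symbol (x)_n
    r = 1.0
    for j in range(n):
        r *= x + j
    return r

def hyp(num, den, m):  # terminating pFq at z = 1, truncated by (-m)_k
    return sum(math.prod(rf(a, k) for a in num)
               / math.prod(rf(b, k) for b in den) / math.factorial(k)
               for k in range(m + 1))

def C2(m, D1, D2, D5, D6):  # two-bound-state residue C^(2)_m
    a, b = (D + D1 - D2)/2, (D - D1 + D2)/2
    N2 = -(math.pi**h * g((D + D1 + D2 - d)/2)
           * g((D + D3 + D4 + D5 + D6 - d)/2) * g(b) * g(a)
           ) / (4*g(D1)*g(D2)*g(D3)*g(D4)*g(1 - h + D)*g(b + D5)*g(a + D6))
    F = hyp([-m, h - m - D, 1 - m - a - D6, 1 - m - b - D5],
            [1 - m - a, 1 - m - b, 1 - m + (d - D - D3 - D4 - D5 - D6)/2], m)
    fac = ((-1)**m * rf((D - D1 - D2 + 2)/2, m) * rf(b, m) * rf(a, m)
           * rf((-d + D + D3 + D4 + D5 + D6)/2, m)
           ) / (math.factorial(m) * rf(D - h + 1, m)
                * rf(a + D6, m) * rf(b + D5, m))
    return N2 * F * fac

def C0(m, D1, D2):  # four-point residue for comparison
    pref = -(math.pi**h * g((D + D1 + D2 - d)/2) * g((D + D3 + D4 - d)/2)
             ) / (4*g(D1)*g(D2)*g(D3)*g(D4)*g(1 - h + D))
    return (pref * rf((D - D1 - D2 + 2)/2, m) * rf((D - D3 - D4 + 2)/2, m)
            / (math.factorial(m) * rf(D - h + 1, m)))
```

At $\Delta_5=\Delta_6=0$ two pairs of upper and lower parameters of the ${}_4F_3$ collide and it collapses to a ${}_2F_1$ at unit argument, giving back the single-particle residues.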
\subsection{From the coincidence limit}\label{Subsec:WbbccIcoinlim}
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{fig_6ptI}
\caption{The exchange Witten diagram Fig. \ref{fig:bbcctypeI} with two bound states can be obtained from a six-point exchange Witten diagram (denoted by $W^I_{\circ\circ\circ\circ\circ\circ}$) by taking a double coincidence limit where $x_6\to x_1$, $x_5\to x_2$. }
\label{fig:6ptI}
\end{figure}
In addition to using the integrated vertex identity, we can also obtain the Mellin amplitude $\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}$ from the six-point exchange diagram $W^{\rm I}_{\circ\circ\circ\circ\circ\circ}$ in Fig. \ref{fig:6ptI} by taking a coincidence limit where both $P_{6}\to P_1$ and $P_5\to P_2$. The Mellin amplitude $\mathcal{M}^{\rm I}_{\circ\circ\circ\circ\circ\circ}$ of the six-point exchange Witten diagram can be deduced by using the split representation. The result is given by
\begin{align}
\label{eq:M6typeI}\mathcal{M}^{\rm I}_{\circ\circ\circ\circ\circ\circ}=\prod_{i=1}^6\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{R^{(0)}_m}{s^{\prime}-\Delta-2m}\;.
\end{align}
Here $s^{\prime}$ is defined as
\begin{equation}
s^{\prime}=\Delta_1+\Delta_2-2\gamma_{12}\;,
\end{equation}
and the residues are constants
\begin{align}
\label{eq:Rm}R^{(0)}_m=\frac{-\pi^h\Gamma[\frac{\sum_{i=1}^2\Delta_i+\Delta-d}{2}]\Gamma[\frac{\sum_{i=3}^6\Delta_i+\Delta-d}{2}](\frac{2+\Delta-\sum_{i=1}^2\Delta_i}{2})_m(\frac{2+\Delta-\sum_{i=3}^6\Delta_i}{2})_m}{4m!\Gamma[\Delta-h+1+m]}\;.
\end{align}
With $\mathcal{M}^{\rm I}_{\circ\circ\circ\circ\circ\circ}$ in hand, we can proceed with taking the coincidence limit. Let us first identify $P_6$ and $P_1$. This gives a five-point Witten diagram with one bound state
\begin{align}
W^{\rm I}_{\bullet\circ\circ\circ\circ}&=\lim_{P_6\rightarrow P_1}\int[d\gamma_{ij}]_6\mathcal{M}^{\rm I}_{\circ\circ\circ\circ\circ\circ}(s^{\prime\prime}-\Delta_6,t)\prod_{1\leq i<j\leq 6}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\;,
\end{align}
where the integration measure $[d\gamma_{ij}]_6$ is constrained by
\begin{align}\label{eq:6ptmeasure}
\sum_{\substack{j=1\\j\neq i}}^6\gamma_{ij}=\Delta_i\;,
\end{align}
for $1\leq i\leq 6$ and we defined $s^{\prime\prime}$ by
\begin{align}
s^{\prime\prime}=\Delta_1+\Delta_2+\Delta_6-2\gamma_{12}\;.
\end{align}
Following the same steps as in Section \ref{CoincidenceLimit}, we can shift the Mellin parameters and evaluate the integral over $\gamma_{16}$. After that, we compute the integrals over $\gamma_{36}$ and $\gamma_{46}$ by using Barnes' first lemma, leading to
\begin{align}
\begin{split}
\mathcal{M}^{\rm I}_{\bullet\circ\circ\circ\circ}=&\int\frac{d\gamma_{26}}{2\pi i}\mathcal{M}^{\rm I}_{\circ\circ\circ\circ\circ\circ}(s^{\prime\prime}-\Delta_6+2\gamma_{26},t)\Gamma[\gamma_{26}]\\
&\times\frac{\Gamma[\Delta_6-\gamma_{26}]\Gamma[\frac{\Delta_1-\Delta_2-\Delta_6+s^{\prime\prime}}{2}+\gamma_{26}]\Gamma[\frac{\Delta_1+\Delta_2+\Delta_6-s^{\prime\prime}}{2}-\gamma_{26}]}{\Gamma[\frac{\Delta_1+\Delta_2+\Delta_6-s^{\prime\prime}}{2}]\Gamma[\frac{\Delta_1-\Delta_2+\Delta_6+s^{\prime\prime}}{2}]}\;.
\end{split}
\end{align}
As in Section \ref{CoincidenceLimit} (which was further detailed in Appendix \ref{Mbccc}), we can perform the integral over $\gamma_{26}$ to obtain an expression for $\mathcal{M}^{\rm I}_{\bullet\circ\circ\circ\circ}$. The result is given by
\begin{align}
\label{eq:Mbcccc}\mathcal{M}^{\rm I}_{\bullet\circ\circ\circ\circ}(s,t)=&\prod_{i=1}^5\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{K^{(1)}_{m}}{s^{\prime\prime}-\Delta-\Delta_6-2m}\;,
\end{align}
where we have assumed $\Delta_1>0$ and the residue $K^{(1)}_m$ is given by
\begin{align}
\begin{split}
K^{(1)}_{m}=&\frac{R^{(0)}_m\Gamma[\frac{\Delta_1-\Delta_2+\Delta+2m}{2}]}{\Gamma[\frac{\Delta_1-\Delta_2+2\Delta_6+\Delta+2m}{2}]}{}_3F_2\left(\left.\begin{array}{c}-m, \Delta_6,h-\Delta-m \\1-\frac{\Delta_1-\Delta_2+\Delta+2m}{2},\frac{\sum_{i=3}^6\Delta_i-\Delta-2m}{2}\end{array}\right.\bigg|1\right)\;.
\end{split}
\end{align}
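As a consistency check (ours, not in the text), setting $\Delta_6=0$ collapses the ${}_3F_2$ and the $\Gamma$-ratio to unity, so $K^{(1)}_m$ must reduce to the five-point residue $K^{(0)}_m$ of (\ref{eq:Km}), as it should since the sixth leg then disappears. A sketch:

```python
# Check: K^(1)_m at Delta_6 = 0 equals the five-point residue K^(0)_m
# (generic non-integer dimensions, our values).
import math

d, D, D1, D2, D3, D4, D5 = 3.7, 2.3, 1.1, 1.6, 1.4, 1.9, 0.7
h = d / 2
g = math.gamma

def rf(x, n):  # Pochhammer symbol (x)_n
    r = 1.0
    for j in range(n):
        r *= x + j
    return r

def hyp(num, den, m):  # terminating pFq at z = 1, truncated by (-m)_k
    return sum(math.prod(rf(a, k) for a in num)
               / math.prod(rf(b, k) for b in den) / math.factorial(k)
               for k in range(m + 1))

def R0(m, D6):  # six-point residue, transcribed from (eq:Rm)
    return -(math.pi**h * g((D1 + D2 + D - d)/2)
             * g((D3 + D4 + D5 + D6 + D - d)/2)
             * rf((2 + D - D1 - D2)/2, m)
             * rf((2 + D - D3 - D4 - D5 - D6)/2, m)
             ) / (4 * math.factorial(m) * g(D - h + 1 + m))

def K1(m, D6):  # residue after the first coincidence limit
    F = hyp([-m, D6, h - D - m],
            [1 - (D1 - D2 + D + 2*m)/2, (D3 + D4 + D5 + D6 - D - 2*m)/2], m)
    return (R0(m, D6) * g((D1 - D2 + D + 2*m)/2)
            / g((D1 - D2 + 2*D6 + D + 2*m)/2) * F)

def K0(m):  # five-point residue, transcribed from (eq:Km)
    return -(math.pi**h * g((D1 + D2 + D - d)/2)
             * g((D3 + D4 + D5 + D - d)/2)
             * rf((2 + D - D1 - D2)/2, m)
             * rf((2 + D - D3 - D4 - D5)/2, m)
             ) / (4 * math.factorial(m) * g(D - h + 1 + m))
```

The check is almost tautological algebraically (an upper parameter $\Delta_6=0$ truncates the series at its first term), but it guards against transcription slips in the parameters.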
To get $\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}$, we need to further identify $P_5$ and $P_2$, \textit{i.e.},
\begin{align}
W^{\rm I}_{\bullet\bullet\circ\circ}=&\lim_{P_5\rightarrow P_2}\int[d\gamma_{ij}]_5\mathcal{M}_{\bullet\circ\circ\circ\circ}^{\rm I}(s-\Delta_5,t)\prod_{1\leq i<j\leq 5}\Gamma(\gamma_{ij})P_{ij}^{-\gamma_{ij}}\;,
\end{align}
where we redefined $s$ by
\begin{align}
s=\Delta_1+\Delta_2+\Delta_5+\Delta_6-2\gamma_{12}\;.
\end{align}
Repeating the steps in Section \ref{CoincidenceLimit}, we arrive at an integral representation for $\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}$, which is given by
\begin{align}\label{gamma15typeI}
\begin{split}
\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}=&\int\frac{d\gamma_{15}}{2\pi i}\mathcal{M}_{\bullet\circ\circ\circ\circ}^{\rm I}(s-\Delta_5+2\gamma_{15},t)\Gamma[\gamma_{15}]\Gamma[\Delta_5-\gamma_{15}]\\
&\times\frac{\Gamma[\frac{\Delta_1+\Delta_2+\Delta_5+\Delta_6-s}{2}-\gamma_{15}]\Gamma[\frac{\Delta_2-\Delta_1-\Delta_5-\Delta_6+s}{2}+\gamma_{15}]}{\Gamma[\frac{\Delta_1+\Delta_2+\Delta_5+\Delta_6-s}{2}]\Gamma[\frac{-\Delta_1+\Delta_2+\Delta_5-\Delta_6+s}{2}]}\;.
\end{split}
\end{align}
Performing the integral over $\gamma_{15}$ finally leads to an expression for $\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}$
\begin{align}
\label{eq:MbbcctypeI}\mathcal{M}^{\rm I}_{\bullet\bullet\circ\circ}(s,t)=&\sum_{m=0}^{\infty}\frac{C^{(2)}_{m}}{s-\Delta-\Delta_5-\Delta_6-2m}\;,
\end{align}
where we assumed $\Delta_1, \Delta_2>0$ and the residue $C^{(2)}_m$ is given by
\begin{align}\label{eq: C21}
C^{(2)}_{m}=&\sum_{n=0}^{\infty}\frac{(-1)^nK^{(1)}_{m-n}}{n!}\frac{\Gamma[\Delta_5+n]\Gamma[\frac{\Delta_1+\Delta_2-\Delta}{2}-m+n]\Gamma[\frac{\Delta_2-\Delta_1+\Delta}{2}+m-n]}{\Gamma[\Delta_1]\Gamma[\Delta_2]\Gamma[\Delta_3]\Gamma[\Delta_4]\Gamma[\frac{\Delta_1+\Delta_2-\Delta}{2}-m]\Gamma[\frac{-\Delta_1+\Delta_2+2\Delta_5+\Delta}{2}+m]}\;.
\end{align}
In this case, one can also show that the regular term vanishes by using a similar argument based on the identity \eqref{3F2Identity0}. Details of the computation can be found in Appendix \ref{App:MbbcctypeI}. Note that to compare the above $C^{(2)}_m$ with \eqref{eq:C2} a re-summation has to be performed.
\section{Four-point function with two bound states: Type II}\label{Sec:WbbcctypeII}
\subsection{Mellin amplitude}
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{fig_bbcctypeII}
\caption{Exchange Witten diagram with two bound states (type II). }
\label{fig:bbcctypeII}
\end{figure}
The two bound state diagram of Type II (Fig. \ref{fig:bbcctypeII}) no longer contains cubic vertices and therefore cannot be computed using the method based on the integrated vertex identity. However, the method using the coincidence limit can still be applied to this case. In this section, we will obtain its Mellin amplitude $\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}$ from the six-point Mellin amplitudes $\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}$, depicted in Fig. \ref{fig:6pt},
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{fig_6pt}
\caption{The exchange Witten diagram with two bound states can be obtained from a six-point exchange Witten diagram by taking a double coincidence limit where $x_6\to x_1$, $x_5\to x_3$.}
\label{fig:6pt}
\end{figure}
by taking the coincidence limit $P_6\rightarrow P_1$ together with $P_5\rightarrow P_3$. The six-point Mellin amplitudes $\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}$ can be computed by using the split representation, giving
\begin{align}
\label{eq:M6typeII}\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}(s,t)=\prod_{i=1}^6\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{\widetilde{R}^{(0)}_m}{s^{\prime}-\Delta-2m}\;.
\end{align}
Here $s^{\prime}$ is defined as
\begin{equation}
s^{\prime}=\Delta_1+\Delta_2+\Delta_3-2\gamma_{12}-2\gamma_{13}-2\gamma_{23}\;,
\end{equation}
and the residues are constants
\begin{align}
\label{eq:Rtm}\widetilde{R}^{(0)}_m=\frac{-\pi^h\Gamma[\frac{\sum_{i=1}^3\Delta_i+\Delta-d}{2}]\Gamma[\frac{\sum_{i=4}^6\Delta_i+\Delta-d}{2}](\frac{2+\Delta-\sum_{i=1}^3\Delta_i}{2})_m(\frac{2+\Delta-\sum_{i=4}^6\Delta_i}{2})_m}{4m!\Gamma[\Delta-h+1+m]}\;.
\end{align}
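Because $\widetilde{R}^{(0)}_m$ is built from Pochhammer symbols, consecutive residues obey the simple ratio $\widetilde{R}^{(0)}_{m+1}/\widetilde{R}^{(0)}_m=\frac{(a+m)(b+m)}{(m+1)(\Delta-h+1+m)}$ with $a=\frac{2+\Delta-\sum_{i=1}^3\Delta_i}{2}$ and $b=\frac{2+\Delta-\sum_{i=4}^6\Delta_i}{2}$, which gives a quick numerical sanity check. A minimal Python sketch of \eqref{eq:Rtm} (names ours, standard library only):

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def R0_tilde(m, deltas, Delta, d):
    """Residue R~^(0)_m of the six-point exchange amplitude; deltas = [Delta_1, ..., Delta_6]."""
    h = d / 2.0
    a = (2 + Delta - sum(deltas[:3])) / 2.0
    b = (2 + Delta - sum(deltas[3:])) / 2.0
    pref = (-math.pi**h * math.gamma((sum(deltas[:3]) + Delta - d) / 2)
            * math.gamma((sum(deltas[3:]) + Delta - d) / 2))
    return pref * poch(a, m) * poch(b, m) / (4 * math.factorial(m) * math.gamma(Delta - h + 1 + m))
```

The ratio test below checks the implementation against the recursion implied by the Pochhammer and Gamma factors.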
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{fig_5ptbcccc}
\caption{A five-point Witten diagram with one bound state. }
\label{fig:5ptbcccc}
\end{figure}
After taking $P_6\rightarrow P_1$ in $\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}$, we obtain the five-point function with one bound state shown in Fig. \ref{fig:5ptbcccc}
\begin{align}
W^{\rm II}_{\bullet\circ\circ\circ\circ}&=\lim_{P_6\rightarrow P_1}\int[d\gamma_{ij}]_6\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}(s^{\prime\prime}-\Delta_6,t)\prod_{1\leq i<j\leq 6}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\;,
\end{align}
where the integration measure $[d\gamma_{ij}]_6$ is again constrained by \eqref{eq:6ptmeasure} and we defined $s^{\prime\prime}$ by
\begin{align*}
s^{\prime\prime}=\Delta_1+\Delta_2+\Delta_3+\Delta_6-2\gamma_{12}-2\gamma_{13}-2\gamma_{23}\;.
\end{align*}
Similar to the previous sections, one can change variables and evaluate the integral over $\gamma_{36}$, $\gamma_{46}$ and $\gamma_{56}$, leading to
\begin{align}\label{eq:gamma26typeII}
\begin{split}
\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}=&\int\frac{d\gamma_{26}}{2\pi i}\mathcal{M}^{\rm II}_{\circ\circ\circ\circ\circ\circ}(s^{\prime\prime}-\Delta_6+2\gamma_{26},t^{\prime\prime})\\
&\times\frac{\Gamma[\Delta_6-\gamma_{26}]\Gamma[\frac{s^{\prime\prime}-t^{\prime\prime}+\Delta_1-\Delta_6+2\gamma_{26}}{2}]\Gamma[\frac{t^{\prime\prime}-s^{\prime\prime}+\Delta_1+\Delta_6-2\gamma_{26}}{2}]\Gamma[\gamma_{26}]}{\Gamma[\frac{\Delta_1+\Delta_6-s^{\prime\prime}+t^{\prime\prime}}{2}]\Gamma[\frac{\Delta_1+\Delta_6+s^{\prime\prime}-t^{\prime\prime}}{2}]}\;,
\end{split}
\end{align}
where we defined another Mellin-Mandelstam variable
\begin{align}
t^{\prime\prime}=\Delta_1+\Delta_4+\Delta_5+\Delta_6-2\gamma_{14}-2\gamma_{15}-2\gamma_{45}\;.
\end{align}
Although now $\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}$ depends on both $s^{\prime\prime}$ and $t^{\prime\prime}$, one can still expand $\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}$ around its poles and show that the regular term vanishes by using \eqref{3F2Identity0}. We leave the details to Appendix \ref{App:MbbcctypeII} and just write down the results here
\begin{align}
\label{eq:MbcccctypeII}\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})=\mathcal{M}^{1}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})+\delta_{\Delta_6,0}\mathcal{M}^{2}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})+\delta_{\Delta_1,0}\mathcal{M}^{3}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})\;.
\end{align}
Here $\mathcal{M}^{2}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})$ and $\mathcal{M}^{3}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})$ contain only single poles and are given by
\begin{align}
&\mathcal{M}^{2}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})=\prod_{i=1}^5\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{\widetilde{R}^{(0)}_m|_{\Delta_6=0}}{s^{\prime\prime}-\Delta-2m}\;,\\
&\mathcal{M}^{3}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})=\prod_{i=2}^6\frac{1}{\Gamma[\Delta_i]}\sum_{m=0}^{\infty}\frac{\widetilde{R}^{(0)}_m|_{\Delta_1=0}}{t^{\prime\prime}-\Delta-2m}\;.
\end{align}
By contrast, $\mathcal{M}^{1}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})$ contains simultaneous poles in $s^{\prime\prime}$ and $t^{\prime\prime}$, and is given by
\begin{align}
&\mathcal{M}^{1}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})=\prod_{i=1}^6\frac{1}{\Gamma[\Delta_i]}\sum_{m_1,m_2=0}^{\infty}\frac{\widetilde{K}^{(1)}_{m_1m_2}}{(s^{\prime\prime}-\Delta_6-\Delta-2m_1)(t^{\prime\prime}-\Delta_1-\Delta-2m_2)}\;,
\end{align}
with the residue
\begin{align}\label{K1tildem1m2}
\begin{split}
\widetilde{K}^{(1)}_{m_1m_2}=\frac{\pi^h\Gamma[\frac{\sum_{i=1}^3\Delta_i+\Delta-d}{2}]\Gamma[\frac{\sum_{i=4}^6\Delta_i+\Delta-d}{2}](1-\Delta_6-m_1)_{m_2}(1-\Delta_1-m_2)_{m_1}}{2m_1!m_2!\Gamma[\Delta-h+1]}\\
\times{}_4F_3\left(\left.\begin{array}{c}-m_1, -m_2, 1-\frac{\sum_{i=1}^3\Delta_i-\Delta}{2}, 1-\frac{\sum_{i=4}^6\Delta_i-\Delta}{2} \\ 1-\Delta_1-m_2, 1-\Delta_6-m_1, 1+\Delta-h \end{array}\right.\bigg|1\right)\;.
\end{split}
\end{align}
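The terminating ${}_4F_3$ (truncated at $n=\min(m_1,m_2)$ by the two negative-integer upper parameters, which is our reading of the formula) and the residue $\widetilde K^{(1)}_{m_1m_2}$ can be implemented directly. The Python sketch below (names ours, standard library only) makes a useful consistency check available: inspecting \eqref{K1tildem1m2} shows the relabeling symmetry $\widetilde K^{(1)}_{m_1m_2}(\Delta_1,\ldots,\Delta_6)=\widetilde K^{(1)}_{m_2m_1}(\Delta_6,\ldots,\Delta_1)$.

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def hyp4f3_terminating(m1, m2, a3, a4, b1, b2, b3):
    """Terminating 4F3(-m1, -m2, a3, a4; b1, b2, b3; 1), summed up to n = min(m1, m2)."""
    return sum(poch(-m1, n) * poch(-m2, n) * poch(a3, n) * poch(a4, n)
               / (poch(b1, n) * poch(b2, n) * poch(b3, n) * math.factorial(n))
               for n in range(min(m1, m2) + 1))

def K1_tilde(m1, m2, deltas, Delta, d):
    """Residue K~^(1)_{m1 m2}; deltas = [Delta_1, ..., Delta_6]."""
    h = d / 2.0
    D1, D6 = deltas[0], deltas[5]
    s123, s456 = sum(deltas[:3]), sum(deltas[3:])
    pref = (math.pi**h * math.gamma((s123 + Delta - d) / 2) * math.gamma((s456 + Delta - d) / 2)
            * poch(1 - D6 - m1, m2) * poch(1 - D1 - m2, m1)
            / (2 * math.factorial(m1) * math.factorial(m2) * math.gamma(Delta - h + 1)))
    return pref * hyp4f3_terminating(m1, m2,
                                     1 - (s123 - Delta) / 2, 1 - (s456 - Delta) / 2,
                                     1 - D1 - m2, 1 - D6 - m1, 1 + Delta - h)
```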
Setting $\Delta_1=0$ or $\Delta_6=0$, one finds that $\mathcal{M}^{1}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})=0$ and the expression \eqref{eq:MbcccctypeII} directly reduces to the desired five-point Mellin amplitudes.
The four-point Mellin amplitude in Fig. \ref{fig:bbcctypeII} can be obtained by further identifying $P_5$ with $P_3$ in Fig. \ref{fig:5ptbcccc}
\begin{align}\label{eq:WbbcctypeII}
W^{\rm II}_{\bullet\bullet\circ\circ}&=\lim_{P_5\rightarrow P_3}\int[d\gamma_{ij}]_5\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}(s^{\prime\prime},t^{\prime\prime})\prod_{1\leq i<j\leq 5}\Gamma[\gamma_{ij}]P_{ij}^{-\gamma_{ij}}\;.
\end{align}
Due to the fact that the expression \eqref{eq:MbcccctypeII} for $\mathcal{M}^{\rm II}_{\bullet\circ\circ\circ\circ}$ contains three parts, $\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}$ can be correspondingly written as
\begin{align}\label{eq:MbbcctypeII}
\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}&=\mathcal{M}^{1}_{\bullet\bullet\circ\circ}(s,t)+\mathcal{M}^{2}_{\bullet\bullet\circ\circ}(s,t)+\mathcal{M}^{3}_{\bullet\bullet\circ\circ}(s,t)\;,
\end{align}
where $\mathcal{M}^{i}_{\bullet\bullet\circ\circ}$ for $i=1, 2, 3$ are obtained by substituting the corresponding $\mathcal{M}^{i}_{\bullet\circ\circ\circ\circ}$ into \eqref{eq:WbbcctypeII}. Moreover, since $\mathcal{M}^{2}_{\bullet\circ\circ\circ\circ}$ and $\mathcal{M}^{3}_{\bullet\circ\circ\circ\circ}$ represent two five-point Mellin amplitudes, the resulting $\mathcal{M}^{2}_{\bullet\bullet\circ\circ}$ and $\mathcal{M}^{3}_{\bullet\bullet\circ\circ}$ are just four-point Mellin amplitudes with one bound state. In other words, we have
\begin{align}\label{eq:M23bbcc}
\begin{split}
\mathcal{M}^{2}_{\bullet\bullet\circ\circ}(s,t)=\delta_{\Delta_6,0}\mathcal{M}_{\bullet\circ\circ\circ}\bigg[\begin{array}{c} 12345\\ 45123\end{array}\bigg]\;,\qquad\mathcal{M}^{3}_{\bullet\bullet\circ\circ}(s,t)=\delta_{\Delta_1,0}\mathcal{M}_{\bullet\circ\circ\circ}\bigg[\begin{array}{c} 12345\\ 32465\end{array}\bigg]\;,
\end{split}
\end{align}
where $\mathcal{M}_{\bullet\circ\circ\circ}\bigg[\begin{array}{c} 12345\\ abcde\end{array}\bigg]$ means that we relabel $1, 2, 3, 4, 5$ in \eqref{MellinMbccc} by $a, b, c, d, e$, respectively. To compute $\mathcal{M}^{1}_{\bullet\bullet\circ\circ}(s,t)$, we follow the steps in the previous sections and reach an integral representation for $\mathcal{M}^{1}_{\bullet\bullet\circ\circ}$, given by
\begin{align}\label{eq:integral}
\begin{split}
\mathcal{M}^{1}_{\bullet\bullet\circ\circ}=\int\frac{d\gamma_{25}}{2\pi i}\frac{d\gamma_{45}}{2\pi i}\frac{\mathcal{M}_{\bullet\circ\circ\circ\circ}^{1}(\Delta_4+\Delta_5-2\gamma_{45},t-\Delta_5+2\gamma_{25})\Gamma[\Delta_5-\gamma_{45}-\gamma_{25}]}{\Gamma[\frac{s+t-\Delta_2-\Delta_4}{2}]}\\
\frac{\Gamma[\gamma_{25}]\Gamma[\gamma_{45}]\Gamma[\frac{\sum_{i=3}^5\Delta_i-s}{2}-\gamma_{45}]\Gamma[\frac{\Delta_2+\Delta_3+\Delta_5-t}{2}-\gamma_{25}]\Gamma[\frac{s+t-\Delta_2-\Delta_4-2\Delta_{5}}{2}+\gamma_{25}+\gamma_{45}]}{\Gamma[\frac{\Delta_2+\Delta_3+\Delta_5-t}{2}]\Gamma[\frac{\Delta_3+\Delta_4+\Delta_5-s}{2}]}\;,
\end{split}
\end{align}
where
we have defined $s$ and $t$ as
\begin{align}
s=\Delta_1+\Delta_2+\Delta_6-2\gamma_{12}\;,\quad t=\Delta_1+\Delta_4+\Delta_6-2\gamma_{14}\;.
\end{align}
Performing the remaining integrals finally leads to an expression for $\mathcal{M}^{1}_{\bullet\bullet\circ\circ}(s,t)$. The computation is technical and tedious; we leave the details to Appendix \ref{App:MbbcctypeII} and only write down the final result here
\begin{align}\label{eq:M1bbcc}
\begin{split}
\mathcal{M}^{1}_{\bullet\bullet\circ\circ}(s,t)=&\delta_{\Delta_5,0}\mathcal{M}_{\bullet\circ\circ\circ}\bigg[\begin{array}{c} 12345\\ 64231\end{array}\bigg]+\delta_{\Delta_3,0}\mathcal{M}_{\bullet\circ\circ\circ}\bigg[\begin{array}{c} 12345\\ 12456\end{array}\bigg]\\
&+\sum_{m_1,m_2=0}^{\infty}\frac{\widetilde{C}^{(2)}_{m_1m_2}}{(s-\Delta_3-\Delta_6-\Delta-2m_1)(t-\Delta_1-\Delta_5-\Delta-2m_2)}\;.
\end{split}
\end{align}
The coefficients $\widetilde{C}^{(2)}_{m_1m_2}$ in the second line are constants given by
\begin{align}\label{eq:C2typeII}
\begin{split}
\widetilde{C}^{(2)}_{m_1m_2}=\sum_{n_1,n_2=0}^{\infty}\frac{(1-\frac{\Delta_4+\Delta_5-\Delta_6-\Delta}{2}+n_1)_{m_1-n_1}(1-\frac{-\Delta_1+\Delta_2+\Delta_3-\Delta}{2}+n_2)_{m_2-n_2}}{\Gamma[\Delta_1]\Gamma[\Delta_2]\Gamma[\Delta_3]\Gamma[\Delta_4]\Gamma[\Delta_5]\Gamma[\Delta_6](m_1-n_1)!(m_2-n_2)!}\\
\frac{\Gamma[\frac{-\Delta_4+\Delta_5+\Delta_6+\Delta}{2}+n_1-n_2+m_2]\Gamma[\frac{\Delta_1-\Delta_2+\Delta_3+\Delta}{2}+m_1-n_1+n_2]}{\Gamma[\frac{\Delta_1-\Delta_2+\Delta_3-\Delta_4+\Delta_5+\Delta_6+2\Delta}{2}+m_1+m_2]}\widetilde{K}^{(1)}_{n_1n_2}\;,
\end{split}
\end{align}
where $\widetilde{K}^{(1)}_{n_1n_2}$ has already been defined in (\ref{K1tildem1m2}). Substituting \eqref{eq:M1bbcc} and \eqref{eq:M23bbcc} into \eqref{eq:MbbcctypeII} leads to the final expression for $\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}(s,t)$.
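These residues can be evaluated numerically: the factorials $(m_i-n_i)!$ truncate the $n$-sums at $n_i=m_i$, and the ${}_4F_3$ inside $\widetilde K^{(1)}_{n_1n_2}$ terminates at $\min(n_1,n_2)$ (these truncations are our reading of the formulas). A Python sketch follows (names ours, standard library only). As a nontrivial check, for $\Delta_1=\Delta_3=\Delta_5=\Delta_6=1$ and $\Delta_2=\Delta_4=\Delta$ the result must collapse to the closed form $\frac{\pi^h\Gamma[\Delta+1-h]}{2\Gamma[\Delta]^2(m_1+m_2+1)}$ quoted in the next subsection.

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def hyp4f3_terminating(m1, m2, a3, a4, b1, b2, b3):
    """Terminating 4F3(-m1, -m2, a3, a4; b1, b2, b3; 1), summed up to n = min(m1, m2)."""
    return sum(poch(-m1, n) * poch(-m2, n) * poch(a3, n) * poch(a4, n)
               / (poch(b1, n) * poch(b2, n) * poch(b3, n) * math.factorial(n))
               for n in range(min(m1, m2) + 1))

def K1_tilde(n1, n2, deltas, Delta, d):
    """Residue K~^(1)_{n1 n2} of eq. (K1tildem1m2)."""
    h = d / 2.0
    D1, D6 = deltas[0], deltas[5]
    s123, s456 = sum(deltas[:3]), sum(deltas[3:])
    pref = (math.pi**h * math.gamma((s123 + Delta - d) / 2) * math.gamma((s456 + Delta - d) / 2)
            * poch(1 - D6 - n1, n2) * poch(1 - D1 - n2, n1)
            / (2 * math.factorial(n1) * math.factorial(n2) * math.gamma(Delta - h + 1)))
    return pref * hyp4f3_terminating(n1, n2, 1 - (s123 - Delta) / 2, 1 - (s456 - Delta) / 2,
                                     1 - D1 - n2, 1 - D6 - n1, 1 + Delta - h)

def C2_tilde(m1, m2, deltas, Delta, d):
    """Residue C~^(2)_{m1 m2}; the n-sums truncate at n_i = m_i."""
    D1, D2, D3, D4, D5, D6 = deltas
    gprod = math.prod(math.gamma(D) for D in deltas)
    total = 0.0
    for n1 in range(m1 + 1):
        for n2 in range(m2 + 1):
            total += (poch(1 - (D4 + D5 - D6 - Delta) / 2 + n1, m1 - n1)
                      * poch(1 - (-D1 + D2 + D3 - Delta) / 2 + n2, m2 - n2)
                      / (gprod * math.factorial(m1 - n1) * math.factorial(m2 - n2))
                      * math.gamma((-D4 + D5 + D6 + Delta) / 2 + n1 - n2 + m2)
                      * math.gamma((D1 - D2 + D3 + Delta) / 2 + m1 - n1 + n2)
                      / math.gamma((D1 - D2 + D3 - D4 + D5 + D6 + 2 * Delta) / 2 + m1 + m2)
                      * K1_tilde(n1, n2, deltas, Delta, d))
    return total
```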
\subsection{Relation with AdS one-loop box diagrams}\label{Sec:Wbbbccand1loop}
The appearance of simultaneous poles in \eqref{eq:M1bbcc} is reminiscent of the Mellin amplitudes for one-loop box diagrams in AdS \cite{Alday:2018kkw,Alday:2019nin,Alday:2021ajh}. However, a direct comparison that pinpoints the precise AdS diagram is difficult. On the one hand, a closed form expression for one-loop box diagrams with generic conformal dimensions in any spacetime dimension is still absent. On the other hand, in the supersymmetric cases where there are explicit results \cite{Alday:2018kkw,Alday:2019nin,Alday:2021ajh} one works with the {\it reduced} correlator\footnote{This is analogous to stripping off a factor of supercharge delta functions in flat-space super amplitudes. See, {\it e.g.}, the review \cite{Alday:2021ajh} for more details.}. The one-loop correction of the reduced correlator does not admit a clear interpretation as a collection of one-loop Witten diagrams when the AdS radius is finite. The one-loop correction sees not only the AdS factor of the background but the internal space as well. Therefore, these diagrams are extended into the internal dimensions and are not pure AdS diagrams. In this subsection we will not attempt to find the exact AdS loop diagrams. Instead, we will content ourselves with confirming that the bound state amplitudes $\mathcal{M}_{\bullet\bullet\circ\circ}^{\rm II}(s,t)$ with certain conformal dimensions reduce to the flat-space massless box diagrams in the flat-space limit \cite{Penedones:2010ue}. We leave the precise identification with AdS loop diagrams at a finite radius for the future.
We consider a special class of bound state amplitudes $\mathcal{M}_{\bullet\bullet\circ\circ}^{\rm II}(s,t)$ with conformal dimensions $\Delta_1=\Delta_3=\Delta_5=\Delta_6=1$ and $\Delta_2=\Delta_4=\Delta$. The four-point function therefore has external dimensions $2$, $2$, $\Delta$, $\Delta$. In this case, we find that the Mellin amplitude $\mathcal{M}_{\bullet\bullet\circ\circ}^{\rm II}(s,t)$ reads
\begin{align}\label{eq:flatspacelimt}
\begin{split}
\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}(s,t)=&\sum_{m_1,m_2=0}^{\infty}\frac{\widetilde{C}^{(2)}_{m_1m_2}}{(s-2-\Delta-2m_1)(t-2-\Delta-2m_2)}\;,
\end{split}
\end{align}
with residues
\begin{align}
\begin{split}
\widetilde{C}^{(2)}_{m_1m_2}=\frac{\pi^h\Gamma[\Delta+1-h]}{2\Gamma[\Delta]^2(m_1+m_2+1)}\;.
\end{split}
\end{align}
Note that the poles of the Mellin amplitude are precisely those corresponding to the double-trace operators. This is the same situation as in the one-loop case \cite{Alday:2018kkw,Alday:2019nin,Alday:2021ajh}. Let us also mention that the sums in (\ref{eq:flatspacelimt}) can be performed, giving
\begin{align}
\begin{split}
\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}(s,t)=&\frac{\pi^h\Gamma[\Delta+1-h]}{8\Gamma^2[\Delta](s+t-2-2\Delta)}\bigg((\gamma-H_{\frac{\Delta-t}{2}})(\gamma-2H_{\frac{\Delta-s}{2}}+H_{\frac{\Delta-t}{2}})+\psi^{(1)}[\frac{s-\Delta}{2}]\\
&+\psi^{(1)}[\frac{2-t+\Delta}{2}]+\psi^{(0)2}[\frac{s-\Delta}{2}]-2\psi^{(0)}[\frac{s-\Delta}{2}]\psi^{(0)}[\frac{2-s+\Delta}{2}]\bigg)\;.
\end{split}
\end{align}
Here $\gamma$ is the Euler constant, and $H_x$ and $\psi^{(n)}[x]$ are the harmonic number and the polygamma function of order $n$, respectively. Let us now examine the flat-space limit of this bound state Mellin amplitude. The flat-space limit corresponds to the high energy limit where both $s$, $t$ become large \cite{Penedones:2010ue}. The leading contribution in the sum (\ref{eq:flatspacelimt}) arises from the region with large $m_1,m_2\sim s,t$. From the explicit expression, we find that $\widetilde{C}^{(2)}_{m_1m_2}$ has the following large $m_1$, $m_2$ behavior
\begin{align}\label{eq:C flatspacelimit}
\widetilde{C}^{(2)}_{m_1m_2}=\frac{1}{m_1+m_2}+\cdots.
\end{align}
This is a special case of the one-loop diagrams considered in \cite{Alday:2021ajh} where the Mellin amplitudes have the form
\begin{equation}
\mathcal{M}(s,t)=\sum_{m_1m_2}\frac{c_{m_1m_2}}{(s-2m_1)(t-2m_2)}\;.
\end{equation}
The coefficients $c_{m_1m_2}$ are assumed to have the asymptotic behavior
\begin{equation}
c_{m_1m_2}=\frac{(m_1m_2)^{\frac{D}{2}-3}}{(m_1+m_2)^{\frac{D}{2}-2}}+\cdots\;,
\end{equation}
in the large $m_1$, $m_2$ limit. It was shown in \cite{Alday:2021ajh} that in the flat-space limit the Mellin amplitude reduces to the massless one-loop box diagram in a $D$-dimensional flat spacetime. The behavior \eqref{eq:C flatspacelimit} implies $D=6$. Therefore, we find that the bound state Mellin amplitude \eqref{eq:flatspacelimt}
becomes the 6D one-loop box diagram in the flat-space limit.
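This identification can be confirmed numerically: with $D=6$ the asymptotic coefficient $(m_1m_2)^{D/2-3}/(m_1+m_2)^{D/2-2}$ reduces to $1/(m_1+m_2)$, which the exact residues above approach, up to the constant overall normalization, as $m_1,m_2\to\infty$. A small Python sketch (names ours):

```python
import math

def c2_special(m1, m2, Delta, d):
    """Special-case residue pi^h Gamma[Delta+1-h] / (2 Gamma[Delta]^2 (m1+m2+1)), h = d/2."""
    h = d / 2.0
    return math.pi**h * math.gamma(Delta + 1 - h) / (2 * math.gamma(Delta)**2 * (m1 + m2 + 1))

def loop_box_asymptotics(m1, m2, D):
    """Large-m coefficient (m1 m2)^(D/2-3) / (m1+m2)^(D/2-2) of one-loop box Mellin amplitudes."""
    return (m1 * m2)**(D / 2.0 - 3) / (m1 + m2)**(D / 2.0 - 2)
```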
\subsection{A special case: The two-loop four-mass ladder diagram}
\begin{figure}[h]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{subfig_laddermassless}
\caption{The four-mass diagram}
\label{subfig:laddermassless}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{subfig_laddermassive}
\caption{The fully massive diagram}
\label{subfig:laddermassive}
\end{subfigure}
\caption{Flat-space two-loop ladder diagrams in momentum space (in black) and their dual diagrams (in green). The diagram (a) is a special limit of (b) obtained by taking $x_1\to x_6$ and $x_3\to x_5$.}
\label{fig:ladderdiagrams}
\end{figure}
Finally, as a consistency check of our results, let us consider a special case of (\ref{eq:flatspacelimt}) with $\Delta=1$ and reproduce the result of an important conformal integral in the literature. This special case should correspond to the two-loop four-mass diagram in flat space, which is depicted in Fig. \ref{subfig:laddermassless}. In terms of the dual coordinates, the diagram \ref{subfig:laddermassless} is defined by the integral
\begin{equation}
A_{\rm 2-mass}(x_i)=\int \frac{d^4xd^4y}{(x_1-x)^2(x_2-x)^2(x_3-x)^2(x-y)^2(x_1-y)^2(x_3-y)^2(x_4-y)^2}\;,
\end{equation}
and can be obtained from the fully massive diagram \ref{subfig:laddermassive}
\begin{equation}
A_{\text{fully massive}}(x_i)=\int \frac{d^4xd^4y}{(x_1-x)^2(x_2-x)^2(x_3-x)^2(x-y)^2(x_4-y)^2(x_5-y)^2(x_6-y)^2}\;,
\end{equation}
by taking the massless limit $x_1\to x_6$, $x_3\to x_5$. To see why the diagram \ref{subfig:laddermassless} should match $\mathcal{M}^{\rm II}_{\bullet\bullet\circ\circ}(s,t)\big|_{\Delta_i=\Delta=1}$, let us note that the Mellin amplitude of the fully massive diagram was computed in \cite{Paulos:2012nu} and turned out to be
\begin{equation}
\mathcal{M}_{\text{fully massive}}\propto \frac{1}{s'-1}\;.
\end{equation}
This is precisely the six-point exchange Witten diagram Fig. \ref{fig:6pt} with $\Delta_i=\Delta=1$.\footnote{Note that this identification is valid for all spacetime dimensions $d$. This is because the Mellin amplitude contains only one pole and $d$ only appears in the numerator as an overall factor.} Therefore, by further taking the coincidence limit the four-mass diagram \ref{subfig:laddermassless} should be identical to the two bound state Witten diagram $W^{\rm II}_{\bullet\bullet\circ\circ}$.
The result for the four-mass two-loop ladder diagram is well known in the literature and is given by \cite{Usyukina:1992wz,Usyukina:1993ch}
\begin{equation}
A_{\rm 2-mass}(x_i)=-\frac{\pi^4}{x_{13}^4x_{24}^2}\Phi^{(2)}(U,V)\;,
\end{equation}
where
\begin{align}\label{Phi2}
\begin{split}
\Phi^{(2)}(U,V)=&\frac{1}{\lambda}\bigg(6(\text{Li}_4(-\rho U)+\text{Li}_4(-\rho V))+3\text{log}\frac{V}{U}(\text{Li}_3(-\rho U)-\text{Li}_3(-\rho V))\\
&\qquad+\frac{1}{2}\text{log}^2\frac{V}{U}(\text{Li}_2(-\rho U)+\text{Li}_2(-\rho V))+\frac{1}{4}\text{log}^2(\rho U)\text{log}^2(\rho V)\\
&\qquad+\frac{\pi^2}{2}\text{log}(\rho U)\text{log}(\rho V)+\frac{\pi^2}{12}\text{log}^2\frac{V}{U}+\frac{7\pi^4}{60}\bigg)\;,
\end{split}
\end{align}
with $\lambda$ and $\rho$ given by
\begin{align}
\lambda=\sqrt{(1-U-V)^2-4UV},\hspace{1cm}\rho=\frac{2}{1-U-V+\lambda}\;.
\end{align}
Here $U$ and $V$ are the conformal cross ratios defined in \eqref{eq:ConformalCrossRatios}. Plugging the Mellin amplitude (\ref{eq:flatspacelimt}) with $\Delta=1$ into the Mellin representation (\ref{defMellin4pt}) and closing the contours for $s$ and $t$ to pick up the residues, we obtain an expansion in small $U$ and $V$. It is not difficult to verify that, up to an overall normalization, the expansion matches precisely the small $U$, $V$ expansion of (\ref{Phi2}). Therefore, we can conclude that we have reproduced the four-mass two-loop ladder diagram in flat space as a special case of bound state Witten diagrams.
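The small-$U$, $V$ comparison can be supplemented by a direct numerical implementation of $\Phi^{(2)}$. The Python sketch below (names ours, standard library only) evaluates the polylogarithms by their power series, valid in the regime $|\rho U|,|\rho V|<1$; the weight-four constant is taken as $7\pi^4/60$, as in the standard Usyukina--Davydychev result. A simple consistency check is the manifest symmetry of \eqref{Phi2} under $U\leftrightarrow V$.

```python
import math

def li(s, z, nmax=4000):
    """Polylogarithm Li_s(z) from its power series; valid for |z| < 1."""
    return sum(z**k / k**s for k in range(1, nmax + 1))

def phi2(U, V):
    """Two-loop ladder function Phi^(2)(U, V); real branch, requires (1-U-V)^2 > 4 U V."""
    lam = math.sqrt((1 - U - V)**2 - 4 * U * V)
    rho = 2 / (1 - U - V + lam)
    lUV = math.log(V / U)
    return (6 * (li(4, -rho * U) + li(4, -rho * V))
            + 3 * lUV * (li(3, -rho * U) - li(3, -rho * V))
            + 0.5 * lUV**2 * (li(2, -rho * U) + li(2, -rho * V))
            + 0.25 * math.log(rho * U)**2 * math.log(rho * V)**2
            + math.pi**2 / 2 * math.log(rho * U) * math.log(rho * V)
            + math.pi**2 / 12 * lUV**2
            + 7 * math.pi**4 / 60) / lam
```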
\section{More general diagrams}\label{Sec:morediagrams}
From the basic diagrams we computed in the previous sections, we can construct a bevy of tree-level diagrams with one or two bound states by using the integrated vertex identity. In this section, we briefly explain how this works.
Let us start with case with only one bound state. In Fig. \ref{fig:bccc} there is only one bulk-to-bulk propagator. We can consider the more complicated diagrams with two bulk-to-bulk propagators by moving the bulk point of the bulk-to-boundary propagator with dimension $\Delta_5$ away from the quartic vertex to end on other propagators. The new diagrams contain only cubic vertices. There are three inequivalent possibilities which are depicted in Fig. \ref{fig:MGDbccc}. Using the integrated vertex identity on the green vertices eliminates the bulk-to-bulk propagator, and we reduce the three diagrams to the basic diagram $W_{\bullet\circ\circ\circ}$.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{fig_MGDbccc}
\caption{Diagrams with one bound state and only cubic vertices.}
\label{fig:MGDbccc}
\end{figure}
We now move on to the tree-level diagrams with two bound states. Let us first consider the case where the diagrams only have cubic vertices. These diagrams have three bulk-to-bulk propagators. They can be obtained from the one bound state diagrams in Fig. \ref{fig:MGDbccc} by further attaching another bulk-to-boundary propagator which starts from 2, 3 or 4 and terminates on the existing propagators. There are now many more diagrams. However, one can show that using integrated vertex identities twice allows us to reduce all these diagrams to $W^{\rm I}_{\bullet\bullet\circ\circ}$ and $W^{\rm II}_{\bullet\bullet\circ\circ}$. Some examples of these diagrams are included in Fig. \ref{fig:MGDbbcc}, and the integrated vertex identity is applied to the vertices in green. Similarly, one can consider tree-level diagrams with two bound states and two bulk-to-bulk propagators. This requires the diagrams to have one quartic vertex. One can use the integrated vertex identity to show that these diagrams reduce to the basic diagrams considered in this paper. In fact, they arise from the aforementioned case with three bulk-to-bulk propagators after using the integrated vertex identity once.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig_MGDbbcc}
\caption{Examples of diagrams with two bound states and only cubic vertices. Using the integrated vertex identity at the green indices reduces the diagrams to the two basic diagrams $W^{\rm I}_{\bullet\bullet\circ\circ}$ and $W^{\rm II}_{\bullet\bullet\circ\circ}$.}
\label{fig:MGDbbcc}
\end{figure}
\section{Outlook}\label{Sec:outlook}
In this paper, we initiated the study of tree-level Witten diagrams with external bound states. We considered the case with only scalar fields and focused on three basic diagrams, from which we can construct an array of more complicated diagrams. We showed that these diagrams have simple analytic structures in Mellin space and obtained explicit expressions for their Mellin amplitudes for generic conformal dimensions. These explorations lead to many interesting directions for future research.
\begin{itemize}
\item One immediate generalization is to include diagrams in which the internal propagators carry Lorentz spins. Such diagrams with spins up to two appear in correlators in full-fledged supergravity theories. The techniques which we developed in this paper are also useful in this more complicated scenario. In particular, the method based on the integrated vertex identity generalizes straightforwardly to the spinning case. We expect that the Mellin amplitudes of these diagrams will be structurally similar to the ones studied here.
\item Once these diagrams with internal spinning operators have been computed, we can use them in the bootstrap calculation of four-point functions in 4d $\mathcal{N}=4$ SYM at strong coupling with two $\frac{1}{4}$-BPS operators and two $\frac{1}{2}$-BPS operators. The $\frac{1}{2}$-BPS operators are dual to single-particle states while the $\frac{1}{4}$-BPS operators are dual to bi-particle bound states. The starting point of such a calculation is an ansatz for the four-point function in terms of all possible diagrams with unfixed coefficients. We then use superconformal constraints to solve for these unknowns. The superconformal kinematics of such ``bound state'' four-point functions has recently been analyzed in \cite{Bissi:2021hjk}. Such a bootstrap strategy is similar in spirit to the one first devised in \cite{Rastelli:2016nze,Rastelli:2017udc}.
\item Relatedly, it would be interesting to see if our results in Mellin space, for special operator spectra appearing in top-down holographic models, can be re-expressed in position space in terms of the generalized Bloch-Wigner-Ramakrishnan functions found in the $AdS_3\times S^3$ case \cite{Ceplak:2021wzz}. A good starting point is to consider the double-discontinuities \cite{Caron-Huot:2017vep,Alday:2017vkk}, which are simpler objects but contain all the essential information. Being able to find an efficient algorithm to rewrite the Mellin results in terms of these building block functions will be useful for implementing the bootstrap strategy in position space.
\item Our analysis of the type II tree-level bound state diagram in Fig. \ref{fig:bbcctypeII} showed that it has intimate connections with one-loop diagrams in AdS. The resemblance between the two is manifested in the Mellin representation where they can be both represented as a sum of simultaneous poles. Moreover, we showed that the flat-space limits of certain bound state diagrams coincide with those of the AdS one-loop diagrams. It would be very interesting to develop a more systematic understanding in the future to establish a precise connection that remains valid at finite AdS radius. For example, a possible route is to consider generalizations of the integrated vertex identities \cite{Jepsen:2019svc}. It might be possible to use these identities to transform the diagrams and directly prove the equivalence between the bound state tree diagrams and certain one-loop diagrams.
\item It would also be of great interest to explore other methods for computing bound state diagrams. For example, a powerful technique is the AdS unitarity method initiated in \cite{Aharony:2016dwx}, which mirrors the unitarity method in flat space. One can cut an AdS diagram into tree-level diagrams and ``glue'' them together in a sense that can be rigorously defined in the CFT language. For example, both diagrams Fig. \ref{subfig:bbcctypeI} and \ref{subfig:bbcctypeII} with two bound states can be viewed as the results of gluing together two five-point functions. The application of this method to AdS loop diagrams has already been streamlined in the literature. It would be very interesting to extend this technique to calculate bound state Witten diagrams and reproduce our results. Moreover, this perspective might also offer a more intuitive explanation of why these two diagrams have drastically different analytic structures.
\item Note that in this paper we only considered correlators with at most two bi-particle bound states. Clearly, an important generalization of the analysis in the future is to include more bound state operators. It would also be interesting to look at correlators with multi-particle bound states which are obtained from taking the OPE limit of more than two single-particle operators.
\item Finally, it would also be interesting to consider bound state Witten diagrams in other backgrounds such as those providing holographic duals for boundary CFTs (or interface CFTs). The simplest example for holographic interface CFTs is the so-called probe brane setup where there are localized degrees of freedom living on an $AdS_d$ subspace inside $AdS_{d+1}$ \cite{DeWolfe:2001pq,Aharony:2003qf}. Witten diagrams in this background with single-particle external states were systematically studied in \cite{Rastelli:2017ecj,Mazac:2018biw}, and the Mellin formalism for BCFT correlators was developed in \cite{Rastelli:2017ecj}. The techniques developed in this paper will be useful for studying bound state scattering in these systems.
\end{itemize}
\acknowledgments
We thank Fernando Alday, Agnese Bissi, Giulia Fardelli, Vasco Goncalves and Andrea Manenti for interesting discussions and useful comments on the draft. The work of X.Z. is supported by funds from University of Chinese Academy of Sciences (UCAS) and from the Kavli Institute for Theoretical Sciences (KITS).
Galaxy redshift surveys provide a 3-dimensional map of the universe's large scale structure (LSS), and hence serve as important probes of cosmology. The simplest statistic obtained from a redshift survey is the two point galaxy correlation function, or its Fourier transform, the galaxy power spectrum. Both quantities have been measured to good accuracy by recent large redshift surveys such as the 2dF Galaxy Redshift Survey (2dFGRS) \cite{Col01,Col03} and the Sloan Digital Sky Survey (SDSS) \cite{Yor00}, and the results have been used to put constraints on cosmological parameters, e.g., \cite{Eis05,Oku08,CabG09,San09,Kaz10a,Chu10} by using the correlation function and \cite{Col05,Hue06a,Hue06b,Teg06,Per07a,Per07b,Per07c,Per10,Rei10} by using the power spectrum.
It is known that the measurements of these two quantities are subject to several effects that distort them anisotropically, i.e. the measured signal depends on the orientation of the separation or wave vector.
One effect comes from the possibly wrong cosmology used to convert the measured coordinates, i.e. redshifts and angular positions, to the comoving ones---the Alcock-Paczynski effect (\cite{AlcPac79,MatS96,Bal96,HSB99,HuHai03}). A second one comes from the peculiar velocities of the galaxies, which introduce uncertainties in the conversion from redshift to comoving distance---the redshift distortion (\cite{Dav83,Kaiser87,Ham92,S04}). A third effect is caused by gravitational lensing, mainly through magnification of the galaxies' fluxes (for flux-selected galaxy samples) and
changes to the apparent angular separations between galaxies---the magnification bias \cite{Tur84,WHHW88,N89,G03,SM05,Mat00,HuiGL07a,HuiGL07b}.
For measurements made with flux-selected galaxy samples, a fourth anisotropic effect arises from the existence of cosmic dust, which causes extinction in the fluxes of the galaxies.
By cosmic dust we mean dust that is correlated with galaxies, but which may or may not reside
in galaxy haloes. In a flux-selected galaxy sample, such cosmic dust would
modulate the galaxy density field -- fewer galaxies behind regions of higher extinction.
Moreover, the effect is anisotropic in 3D. Recall that a two-point correlation measurement
is essentially a measurement of pair counts. At a given separation,
pairs of galaxies that are aligned close to
the line of sight suffer more of an extinction effect -- i.e. dust correlated with
the foreground galaxies dims (and removes from sample) the background galaxies --
while pairs of galaxies oriented transverse to the line of sight are less susceptible.
This is much like how gravitational lensing or magnification bias introduces
an anisotropy to the galaxy correlation function or power spectrum \cite{Mat00,HuiGL07a,HuiGL07b}.
The difference is that gravitational lensing by the foreground galaxies
generally brightens the background galaxies, thus adding them to one's sample,
and it also causes an overall geometrical stretching which dilutes the apparent number density.
As we will see, the precise shapes of these two anisotropies are different. We also note that there could be other anisotropic effects; e.g. if the selection is orientation-dependent, anisotropy would be introduced for galaxies that are aligned by the large scale tidal fields~\cite{H09}.
From the observational side, the existence of cosmic dust has recently been detected by \cite{MenSFR09}: by using a quasar sample at $z>1$ and a galaxy sample at $z\sim0.3$ from the SDSS, the authors find a positive correlation between the redness of the background quasars and the overdensity of the foreground galaxies, up to an angular separation of $\sim 2^{\circ}$ or, a corresponding projected distance separation of $\sim20h^{-1}$Mpc at $z\sim0.3$, which indicates the existence of dust correlated with galactic
haloes and the LSS.
From the brightness of these quasars, the authors also find that extinction by cosmic dust occurs at a level comparable to magnification by gravitational lensing. Hence, it is
important to take into account the effect of dust extinction and to evaluate its significance for studies of LSS.
In this paper, by using the results from \cite{MenSFR09}, we investigate the effect of dust extinction on the galaxy correlation function, focusing on the anisotropic features it induces, and study how this affects cosmological probes through measurements of the galaxy correlation function such as the Baryon Acoustic Oscillations (BAO) and the linear redshift distortion, e.g. \cite{wigglez,boss,bigboss,euclid}. Our calculation accounts for distortion from
extinction, as well as that from peculiar motion and lensing.
The outline of the paper is as follows. In \S \ref{sec:calculation} we derive the formulas used for our calculation, with technical details relegated to the Appendices. In \S \ref{sec:anisotropy}, we present and discuss the anisotropic features caused by dust extinction, focusing on a comparison with those by gravitational lensing. The effects of dust extinction on cosmological probes such as BAO and linear redshift distortion are studied in \S \ref{sec:systematic}. Finally, we conclude in \S \ref{sec:discuss}.
\section{Calculation of the Distortion}
\label{sec:calculation}
\subsection{Fluctuation of Dust Extinction}
The optical depth due to dust extinction to a source at a comoving distance
$\chi$ away and angular position $\boldsymbol{\theta}$,
and at observed wavelength $\lambda_{\rm obs}$, takes the form:
\begin{eqnarray}
\tau(\chi, \boldsymbol{\theta}; \lambda_{\rm obs})
= \int_{0}^{\chi}\frac{d\chi'}{(1+z')} {\rho}_{\rm d} (\chi',\boldsymbol{\theta}) f(\chi', \lambda_{\rm obs}) \, ,
\end{eqnarray}
where $\rho_{\rm d}$ is the proper mass density of dust,
and $z'$ is the redshift associated with the comoving radial distance $\chi'$ on the light cone (hereafter, we will use $z$ and $\chi$ interchangeably).
Here $f$ is the extinction efficiency --
more precisely, $\rho_d f$ gives the inverse (proper) mean free path of photons scattered by dust.
For simplicity, we assume the intrinsic dust extinction properties do not fluctuate
spatially -- namely, $f$ is a constant at fixed rest-frame wavelength.
Thus fluctuations in the optical depth arise entirely from fluctuations in dust density.
Subtracting the mean optical depth from the above expression, we have
\begin{eqnarray}
\delta\tau(\chi,\boldsymbol{\theta};\lambda_{\rm obs})=\int_{0}^{\chi}\frac{d\chi'}{(1+z')}\bar{\rho}_{\rm d}(\chi')\delta_{\rm d}(\chi',\boldsymbol{\theta}) f(\chi',\lambda_{\rm obs}),
\end{eqnarray}
where $\bar\rho_{\rm d}(\chi')$ is the mean dust density at $\chi'$ and
$\delta_d(\chi', \boldsymbol{\theta})$ is the fractional overdensity of dust. We will discuss how the evolution of $\bar\rho_{\rm d}$ is modeled below.
See \cite{Cor06} for a more detailed discussion.
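To make the structure of the optical-depth fluctuation concrete, the integral above can be discretized directly. The following is a minimal numerical sketch, not code from this work; the mappings $z(\chi)$, $\bar\rho_{\rm d}(\chi)$ and the overdensity $\delta_{\rm d}$ are illustrative placeholders rather than fits to any data.

```python
import numpy as np

# Minimal sketch (illustrative only) of the optical-depth fluctuation
#   delta_tau(chi) = int_0^chi dchi'/(1+z') rho_bar_d(chi') delta_d(chi') f,
# with the extinction efficiency f held constant, as assumed in the text.

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def delta_tau(chi, z_of_chi, rho_bar_d, delta_d, f=1.0, n=2000):
    """Line-of-sight integral for the extinction optical-depth fluctuation."""
    chis = np.linspace(0.0, chi, n)
    integrand = rho_bar_d(chis) * delta_d(chis) * f / (1.0 + z_of_chi(chis))
    return trapezoid(integrand, chis)

# Toy low-redshift example: z ~ H0 * chi (with c = 1), uniform mean dust density.
H0 = 1.0 / 3000.0                       # in h/Mpc, so chi is in Mpc/h
z_lin = lambda chi: H0 * chi
rho_const = lambda chi: np.ones_like(chi)

# A vanishing dust overdensity gives zero fluctuation, as it must:
print(delta_tau(1000.0, z_lin, rho_const, lambda c: np.zeros_like(c)))  # -> 0.0
```

Any realistic application would replace the placeholder functions with the survey's actual distance-redshift relation and a dust density field drawn from the model of interest.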
\subsection{Corrections to the Correlation Function}
\label{subsec:xiobs}
When the effects from peculiar velocities, gravitational lensing and dust extinction are all taken into account, we find that, to first order in perturbations, the observed galaxy overdensity $\delta_{\rm obs}$ is related to its intrinsic counterpart $\delta_g$ by the following
\begin{equation}
\delta_{\rm obs}=\delta_g+\delta_v+\delta_{\mu}+\delta_e,\label{eqn:deltaobs}
\end{equation}
where $\delta_v$, $\delta_{\mu}$, and $\delta_e$ are corrections from the peculiar velocity, gravitational lensing and dust extinction respectively, and they are given by
\begin{eqnarray}
\delta_v &=& -\frac{(1+z)}{H(z)}\frac{\partial v_{\parallel}}{\partial \chi},\\
\delta_{\mu}&=&[5s(z)-2]\kappa,\label{eqn:deltamu}\\
\delta_e&=&-2.5s(z)\delta\tau\label{eqn:deltae},
\end{eqnarray}
where $v_{\parallel}$ is the line-of-sight peculiar velocity, positive if pointing away from us, $H(z)$ is the Hubble expansion rate at the redshift of observation $z$, $\kappa$ is the lensing convergence, $\delta\tau$ is, as before, the fluctuation of the extinction optical depth, and $s(z)$ is the factor that describes how the distribution of a galaxy sample is modified by changes in the individual fluxes. Note that the two terms containing $s(z)$ in Eqn~(\ref{eqn:deltamu}) and Eqn~(\ref{eqn:deltae}) have opposite signs, reflecting the opposite effects of dust extinction and gravitational magnification.
For a sharp faint-end cutoff selection of the galaxy sample, $s(z)$ (henceforth
referred to as the number count slope) is given by
\begin{equation}
s(z)=\frac{d\log_{10}\bar{n}_{\rm obs}(z, <m)}{dm}\bigg|_{m=m_{\rm max}}\label{eqn:ngslope},
\end{equation}
where $\bar{n}_{\rm obs}(z, <m)$ is the observed mean number density of galaxies brighter than magnitude $m$, and $m_{\rm max}$ is the limiting magnitude of the sample. For a more general sample selection, the expression for $s(z)$ is given in Appendix A, where details of our derivation for the above results are presented. We have suppressed the position dependence $(\chi,\boldsymbol{\theta})$ on both sides of Eqn~(\ref{eqn:deltaobs}), as well as for $v_{\parallel}$, $\kappa$, and $\delta\tau$, and the dependence of $\delta\tau$ on $\lambda_{\rm obs}$, the characteristic wavelength of the filter. Note that in our derivation of $\delta_v$, we have adopted the distant observer approximation and restricted ourselves to sub-horizon scales \cite{Kaiser87, MatS96,Dodelson}. Throughout this paper, we assume the universe is flat, and set the speed of light $c=1$.
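In practice, the number count slope of Eqn~(\ref{eqn:ngslope}) can be estimated by finite-differencing the logarithm of the cumulative counts at the magnitude limit. The sketch below is illustrative only: the Euclidean toy counts $N(<m)\propto 10^{0.6m}$ are an assumed form, not survey data.

```python
import numpy as np

# Hedged sketch of Eqn (ngslope): estimate s(z) by a central finite
# difference of log10 of the cumulative counts at the magnitude limit.
# The Euclidean counts N(<m) ~ 10^(0.6 m) are purely for illustration.

def number_count_slope(counts, m_max, dm=0.01):
    """s = d log10 N(<m) / dm evaluated at m = m_max (central difference)."""
    return (np.log10(counts(m_max + dm)) - np.log10(counts(m_max - dm))) / (2.0 * dm)

euclidean_counts = lambda m: 10.0 ** (0.6 * m)
print(number_count_slope(euclidean_counts, m_max=22.0))  # -> 0.6
```

For these toy counts $s=0.6$, so $5s-2=1>0$: such a sample would gain objects from lensing magnification while still losing them to extinction, consistent with the opposite signs in Eqns~(\ref{eqn:deltamu}) and~(\ref{eqn:deltae}).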
The observed galaxy correlation function $\xi_{\rm obs}(\chi_1,\boldsymbol{\theta_1};\chi_2,\boldsymbol{\theta_2})\equiv$$ \langle \delta_{\rm obs}(\chi_1,\boldsymbol{\theta_1})\delta_{\rm obs}(\chi_2,\boldsymbol{\theta_2}) \rangle$ is then given as a sum of 16 terms
\begin{equation}
\xi_{\rm obs}(1;2)=\sum_{a,b}\xi_{ab}(1;2),
\end{equation}
where we use $1, 2$ as shorthands for $(\chi_1,\boldsymbol{\theta_1})$, $(\chi_2,\boldsymbol{\theta_2})$, ``$a, b$" stand for any of ``$g$, $v$, $\mu$, $e$", and $\xi_{ab}(1;2)\equiv \langle \delta_a(1)\delta_b(2) \rangle$.
The detailed results are presented in Appendix B. Here, we make a further simplification of the results: we assume the radial separation between the two galaxies is much smaller than their mean comoving radial distance from us $\bar{\chi}$ (i.e. the distant observer approximation), and everything evaluated at $\chi_i$, with $i=1,2$, can be calculated by expanding around $\bar{\chi}$, and keeping only the contributions that are lowest order in $|\chi_i - \bar{\chi}|$. In the following, we give the contributions to $\xi_{\rm obs}(1;2)$ with this simplification.
First, the intrinsic galaxy correlation function is given by
\begin{equation}
\xi_{gg}=\int\frac{d^3k}{(2\pi)^3}e^{i\bf{k}\cdot(\bf{x_1-x_2})}P_{gg}(k,\bar{z}),
\end{equation}
where we suppress the $(1;2)$ dependence of $\xi_{gg}$ (likewise for the $\xi_{ab}$ given below), $\bf{x_i}$ is the position vector for $(\chi_i,\boldsymbol{\theta_i})$, and $P_{gg}(k,\bar{z})$ is the galaxy power spectrum at $\bar{z}$, with $\bar{z}$ the redshift corresponding to $\bar{\chi}$.
Second, when the effect from peculiar velocities is accounted for, $\xi_{\rm obs}$ has the following corrections
\begin{eqnarray}
\xi_{gv}+\xi_{vg}= 2\bar{f_D} \int\frac{d^3k}{(2\pi)^3}e^{i\bf{k}\cdot(\bf{x_1-x_2})}(\hat{k}\cdot\hat{z})^2 P_{gm}(k,\bar{z}),\\
\xi_{vv}= \bar{f_D}^2 \int\frac{d^3k}{(2\pi)^3}e^{i\bf{k}\cdot(\bf{x_1-x_2})}(\hat{k}\cdot\hat{z})^4 P_{mm}(k,\bar{z}),
\end{eqnarray}
where $\bar{f_D}=f_D(\bar{z})$, with $f_D \equiv d\ln{D}/d\ln{a}$, where $D$ is the linear growth factor of matter perturbations and $a$ is the scale factor of the universe; $\hat{k}$ and $\hat{z}$ are unit vectors pointing respectively in the direction of $\bf{k}$ and to the center of the galaxy sample; and $P_{gm}$, $P_{mm}$ are in turn the galaxy-matter power spectrum and the matter power spectrum. Note we have used the plane-parallel approximation, which assumes the lines of sight to all the galaxies are the same, parallel to $\hat{z}$. The results given here agree with the literature on the well-known Kaiser effect \cite{Kaiser87}, i.e. the distortion by coherent bulk motions on large scales (in the linear regime), and we neglect the fingers-of-god effect \cite{Jac72}, i.e. the distortion by random motions within collapsed haloes (on small scales). For convenience, the sum of $\xi_{gg}$, $\xi_{gv}+\xi_{vg}$ and $\xi_{vv}$ can be calculated equivalently by using the formulas given in \cite{Ham92} or \cite{MatS96}.
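As a consistency check on the velocity terms above, one can angle-average the combined Kaiser kernel. With $P_{gg}=b_g^2 P_{mm}$ and $P_{gm}=b_g P_{mm}$, the sum $\xi_{gg}+(\xi_{gv}+\xi_{vg})+\xi_{vv}$ carries the factor $b_g^2 + 2 b_g \bar{f_D}\mu^2 + \bar{f_D}^2\mu^4$, with $\mu=\hat{k}\cdot\hat{z}$. The sketch below (an illustration, not code from this work) recovers the standard monopole boost $b_g^2(1+2\beta/3+\beta^2/5)$, with $\beta=f_D/b_g$; the chosen values of $b_g$ and $f_D$ are arbitrary examples.

```python
import numpy as np

# Angle-average the Kaiser kernel b^2 + 2 b f mu^2 + f^2 mu^4 over mu
# and compare with the closed-form monopole boost b^2 (1 + 2b/3 + b^2/5)
# written in terms of beta = f_D / b_g.

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def kaiser_monopole_boost(b_g, f_D, n_mu=200001):
    """Average of the Kaiser kernel over mu in [-1, 1]."""
    mu = np.linspace(-1.0, 1.0, n_mu)
    kernel = b_g**2 + 2.0 * b_g * f_D * mu**2 + f_D**2 * mu**4
    return trapezoid(kernel, mu) / 2.0

b_g, f_D = 2.0, 0.8          # illustrative values only
beta = f_D / b_g
analytic = b_g**2 * (1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0)
print(kaiser_monopole_boost(b_g, f_D), analytic)   # the two agree closely
```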
Third, when magnification bias is in addition included, $\xi_{\rm obs}$ has the following extra corrections
\begin{eqnarray}
\xi_{g\mu}+\xi_{\mu g} &=&\frac{3}{2}\Omega_m H_0^2(5\bar{s}-2)(1+\bar{z})|\chi_2-\chi_1|\times\nonumber \\&& \int\frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \bar{\chi}(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{gm}(k_{\perp},\bar{z}),\label{eqn:xigmu}
\end{eqnarray}
\begin{eqnarray}
\xi_{\mu\mu} &=& \left[\frac{3}{2}\Omega_m H_0^2(5\bar{s}-2)\right]^2\int_0^{\bar{\chi}} d\chi \chi^2 \left(1-\frac{\chi}{\bar{\chi}}\right)^2 \times \nonumber \\&& (1+z)^2 \int\frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \chi(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{mm}(k_{\perp},z),
\end{eqnarray}
where $\Omega_m$ is the matter density parameter, $H_0$ is the Hubble constant, $\bar{s}=s(\bar{z})$, and $\bf{k_{\perp}}$ is the component of $\bf{k}$ transverse to $\hat{z}$. We have used the Limber approximation, which makes $\xi_{v\mu}$ and $\xi_{\mu v}$ vanish.
Finally, with dust extinction, the following terms should be added to $\xi_{\rm obs}$
\begin{eqnarray}
\xi_{ge}+\xi_{eg}&=&-2.5\bar{s}(1+\bar{z})^{-1}\bar{\rho}_d(\bar{z})f(\bar{z},\lambda_{\rm obs}) \times\nonumber\\&& \int\frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \bar{\chi}(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{gd}(k_{\perp},\bar{z})\label{eqn:xige},
\end{eqnarray}
where $P_{gd}$ is the galaxy-dust power spectrum. The factor of $(1+\bar{z})^{-1}$ (the scale factor at $\bar{z}$) comes from converting comoving distance to proper distance in the calculation of $\delta\tau$. As before, we have used the Limber approximation, which also makes $\xi_{ve}$ and $\xi_{ev}$ vanish. We will neglect the corrections from $\xi_{\mu e}$ (or $\xi_{e \mu}$) and $\xi_{ee}$, the rationale being that
extinction is a relatively small effect, and so these corrections are
small compared to the corrections we are keeping, i.e. $\xi_{ee} \ll \xi_{ge}$ and
$\xi_{\mu e} \ll \xi_{\mu g}$.
With our approximations and the symmetries these terms exhibit, their dependence on $(1;2)$ can be simplified, which we summarize as follows: besides $\bar{z}$, $\xi_{gg}$ depends only on the distance between the two points $r=|\bf{x_1-x_2}|$, $\xi_{\mu\mu}$ and $(\xi_{g e}+\xi_{e g})$ depend only on the transverse separation $\delta x_{\perp}=|\bar{\chi}(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})|$, while all others depend on both $\delta x_{\perp}$ and the line-of-sight separation $\delta\chi=|\chi_1-\chi_2|$.
In particular, note how the magnification and extinction distortions
have different shapes: $\xi_{g\mu} + \xi_{\mu g}$ exhibits the
characteristic lensing-induced linear scaling with
the line-of-sight separation $\delta\chi$, while $\xi_{ge} + \xi_{eg}$ does not
depend on it at all.
We assume constant galaxy bias $b_g$ when calculating $P_{gg}$ and $P_{gm}$, i.e. $P_{gg}=b_g^2P_{mm}$ and $P_{gm}=b_gP_{mm}$, and we use the transfer function given by \cite{EH98} and the non-linear prescription given by \cite{smith03} to calculate $P_{mm}$.
\subsection{The Extinction Corrections}
To calculate the extinction corrections, we make use of the recent SDSS observational results of \cite{MenSFR09}, who find a positive correlation between the color $(g-i)$ excess of background quasars and the angular overdensity of foreground galaxies, up to an angular separation of $\simeq 100'$, which suggests the existence of cosmic dust. Using an extinction curve with the functional form of \cite{ODon94} and $R_V=3.1$, appropriate for standard interstellar dust in the Galactic disk, their result is converted to the following extinction-galaxy cross-correlation
\begin{equation}
\label{AVdeltag}
\langle A_V(\boldsymbol{\theta_1})\delta_g^{2D}(\boldsymbol{\theta_2})
\rangle =2.4\times10^{-3} \left(\frac{|\boldsymbol{\theta_1-\theta_2}|}{1'}\right)^{-0.84},
\end{equation}
where $A_V$ is the V-band extinction from the dust between the observer and the quasars, and $\delta_g^{2D}$ is the angular (2D) overdensity of the galaxies.
With our formulation, the correlation between $A_V$ and $\delta_g^{2D}$ can be calculated by
\begin{eqnarray}
&& \langle A_V(\boldsymbol{\theta_1})\delta_g^{2D}(\boldsymbol{\theta_2}) \rangle \nonumber\\&=&\frac{2.5}{\ln{10}}\int_0^{z_q} dz (1+z)^{-1}\bar{\rho}_d(z)f(z,\lambda_V)\times\nonumber\\&& \frac{\bar{n}(z)}{\bar{n}^{2D}}\int \frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \chi(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{gd}(k_{\perp},z),
\end{eqnarray}
where $z_q$ is the redshift of the quasars, $\bar{n}(z)$ is the redshift distribution of the galaxies, normalized to their mean surface density $\bar{n}^{2D}$,
and we have used the Limber approximation.
Note $A_V$ is defined as $A_V\equiv2.5 \tau_V /{\,\rm ln\,}10$.
In \cite{MenSFR09}, the galaxies' redshift distribution peaks around the mean redshift $z_*=0.36$, hence we make the approximation $\bar{n}(z)/\bar{n}^{2D}\rightarrow \delta(z-z_*)$. Since $z_q>z_*$, we get
\begin{eqnarray}
&&\langle A_V(\boldsymbol{\theta_1})\delta_g^{2D}(\boldsymbol{\theta_2}) \rangle\nonumber \\&\sim &\frac{2.5}{\ln{10}}(1+z_*)^{-1}\bar{\rho}_d(z_*) f( z_*,\lambda_V)\times\nonumber\\&& \int \frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \chi_* (\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{gd}(k_{\perp}, z_*),\label{eqn:AVobs}
\end{eqnarray}
where $\chi_*$ is the comoving radial distance corresponding to $z_*$.
Comparing Eqn~(\ref{eqn:AVobs}) with Eqn~(\ref{eqn:xige}), we find that, when $\bar{z}=z_*$ and $\lambda_{\rm obs}=\lambda_V$,
\begin{equation}
\xi_{ge}+\xi_{eg}=-\ln{10} \bar{s} \langle A_V(\boldsymbol{\theta_1})\delta_g^{2D}(\boldsymbol{\theta_2}) \rangle \, .\label{eqn:xigeV}
\end{equation}
Thus, the result of \cite{MenSFR09} can be directly translated into
the quantity we are interested in. Of course, this direct translation works
only when the mean properties of the galaxies (e.g. redshift, clustering bias
and so on) coincide with those used in \cite{MenSFR09}. We thus need to
extrapolate. First, we extrapolate in angle. Keeping everything (e.g. redshift
and so on) fixed, the result summarized in Eqn (\ref{AVdeltag}) is applicable
for angular separations up to $100'$. Beyond that, we assume
the shape of the two-point function follows that of matter, in other words
that
\begin{equation}
[\xi_{ge}+\xi_{eg}] (z_*)\propto \int \frac{d^2k_{\perp}}{(2\pi)^2}e^{i\bf{k_{\perp}}\cdot \chi_*(\boldsymbol{\theta_1}-\boldsymbol{\theta_2})} P_{mm}(k_{\perp},z_*) \, ,
\end{equation}
with a normalization that matches the observed result at $100'$.
Next we extrapolate in bias. Since the galaxy sample used in \cite{MenSFR09}
has a clustering bias $\simeq 1$, we multiply Eqn (\ref{eqn:xigeV}) by
$\bar b_g$ to obtain $\xi_{ge} + \xi_{eg}$ for
a different galaxy sample with a mean bias of $\bar b_g$.
Lastly, we extrapolate in redshift. Based on Eqn~(\ref{eqn:xige}), we assume the following redshift scaling
for a fixed comoving transverse separation $\delta x_\perp$:
\begin{equation}
[\xi_{ge}+\xi_{eg}] (\bar z) \propto(1+\bar{z})^{-1}\bar{\rho}_d(\bar{z})D(\bar{z})^2.\label{eqn:xigeevol}
\end{equation}
The number count slope $\bar s$ and the clustering bias $\bar b_g$
should be redshift dependent as well -- we will systematically vary these
two parameters in the following computations to illustrate the range of possibilities.
It should be stressed that
the scaling should also take into account the redshift dependence of the clustering bias of the dust and of the extinction efficiency $f$ for a given $\lambda_{\rm obs}$ (in our case, $\lambda_{\rm obs}=\lambda_V$). In this paper, without detailed modeling of the clustering and extinction properties of the dust, we simply neglect the $z$-dependence of both quantities (we note that, for $f$, this is supported by the slowly varying extinction curve in the visible range if the dust is like that in the Galactic disk, see e.g. \cite{ODon94}), and hope our procedure of systematically exploring variations in $\bar s$ and $\bar b_g$ is sufficient to bracket the range of possibilities.
For the redshift dependence of $\bar{\rho}_d$, we follow \cite{Cor06} and assume the dust particles are ejected into the intergalactic medium with a constant yield when new stars are born, so
\begin{equation}
\bar{\rho}_d(z)\propto (1+z)^3\int_z^{z_s}\frac{\dot{\rho}_{\rm SFR}(z')dz'}{(1+z')H(z')},
\end{equation}
where we set the star formation beginning at redshift $z_s=10$ \cite{Cor06}, and for the cosmic star formation rate (SFR) $\dot{\rho}_{\rm SFR}$, i.e. the mass of baryons that forms stars per unit comoving volume per unit proper time, we use the results from \cite{MadP00} after converting to our cosmology according to \cite{PorM01}.
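The qualitative shape of this evolution can be sketched numerically. The code below is illustrative only: the smooth SFR parametrization is a Madau-type stand-in, not the \cite{MadP00} rates used in our actual calculation, and the overall normalization is arbitrary.

```python
import numpy as np

# Hedged sketch of the dust-density evolution
#   rho_d(z) ~ (1+z)^3 int_z^{z_s} SFR(z') dz' / [(1+z') H(z')],
# with z_s = 10. The SFR form below is an illustrative stand-in with
# arbitrary normalization, not the rates used in the paper.

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hubble(z, om=0.27, h=0.71):
    """Flat LCDM expansion rate, in km/s/Mpc."""
    return 100.0 * h * np.sqrt(om * (1.0 + z)**3 + (1.0 - om))

def sfr(z):
    """Illustrative smooth star-formation history (arbitrary units)."""
    return (1.0 + z)**2.7 / (1.0 + ((1.0 + z) / 2.9)**5.6)

def rho_dust(z, z_s=10.0, n=4000):
    zp = np.linspace(z, z_s, n)
    integrand = sfr(zp) / ((1.0 + zp) * hubble(zp))
    return (1.0 + z)**3 * trapezoid(integrand, zp)

# The yield integral shrinks toward z_s while (1+z)^3 grows, so the proper
# dust density rises from low z toward intermediate z before falling off:
print(rho_dust(0.36), rho_dust(2.0), rho_dust(9.5))
```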
\section{The Anisotropic Distortion}
\label{sec:anisotropy}
In this section, we show our results on the extinction distortion of the galaxy correlation function, and compare it with other anisotropic distortions. We first list the values of the parameters used in our calculation. Then by plotting contours of the galaxy correlation function, we explicitly display the anisotropic distortions. The results are then extensively studied in the paragraphs following the italicized titles.
The corrections from dust extinction depend on the properties of the galaxy sample through three parameters: $\lambda_{\rm obs}$, $s$ and $b_g$. We have set $\lambda_{\rm obs}=\lambda_V$ for the calculations in this paper and, for the purpose of illustration, we set $s=1.5$ and $b_g=2$, similar to the values for the SDSS Luminous Red Galaxies (LRGs) \cite{GCH09}. We discuss the dependence of the results on these parameters at the end of this section. For the cosmological model, we adopt the best-fit flat $\Lambda$CDM model from the WMAP 7-year results, which has: matter density $\Omega_m=0.27$, baryon density $\Omega_b=0.045$, Hubble constant $h=0.71$, normalization of the power spectrum $\sigma_8=0.8$, and spectral index $n_s=0.96$ \cite{WMAP7}.
\begin{figure*}[htb]
\vspace{-25mm}
\resizebox{160mm}{!}{\includegraphics{dustz036.ps}}
\vspace{-2mm}
\caption{\label{fig:contz036} Contours of the galaxy correlation function divided by $b_g^2$ at $\bar{z}=0.36$. $\delta\chi$ and $\delta x_{\perp}$ are the line-of-sight and transverse separations respectively. Dotted lines in the upper three panels are for the intrinsic galaxy correlation function, while those in the lower three panels include redshift distortion as well. In each row, from left to right, the solid lines are the same as the dotted lines except that dust extinction, magnification bias, and both dust extinction and magnification bias are respectively added in. The colors from black through the rainbow to red represent contour levels at ($-1.5\times 10^{-4}, 0, 0.001, 0.002, 0.005, 0.01$) in the upper three panels, and ($-0.002, -0.001, -0.0005, 0, 0.001, 0.002, 0.01$) in the lower three panels.}
\end{figure*}
\begin{figure*}[htb]
\vspace{-25mm}
\resizebox{160mm}{!}{\includegraphics{dustz2.ps}}
\vspace{-2mm}
\caption{\label{fig:contz2} Same as Figure~\ref{fig:contz036} but for $\bar{z}=2$. The colors from black through the rainbow to red represent contour levels at ($-3\times 10^{-5}, 0, 0.0003, 0.0005, 0.001, 0.003$) in the upper three panels, and ($-0.001, -0.0005, -0.00025, -0.00015, 0, 0.0003, 0.0005, 0.002$) in the lower three panels.}
\end{figure*}
In Figure~\ref{fig:contz036}, we show the contours of the galaxy correlation function after dividing by $b_g^2$ at $\bar{z}=0.36$. The dotted lines in the upper three panels are for the intrinsic galaxy correlation function $\xi_{gg}$, while those in the lower three panels include redshift distortion as well. In each row, from left to right, the solid lines are the results with dust extinction, magnification bias, and both dust extinction and magnification bias added in, compared to the dotted lines. So by comparing the solid lines with the dotted lines in the left two panels, we can see distortions caused by dust extinction to the intrinsic galaxy correlation function with redshift distortion included (lower panel) or not (upper panel); similarly for the middle and the right two panels, where the distortions caused by magnification bias and by the combined effect of magnification bias and dust extinction can be seen.
{\it{The Extinction Anisotropy.}} The upper left panel explicitly shows that dust extinction introduces anisotropy to the galaxy correlation function in the $(\delta x_{\perp}, \delta\chi)$ plane, as is expected from our calculation: Eqn~(\ref{eqn:xige}) depends only on $\delta x_{\perp}$, as a result of the Limber approximation. For a given $\delta x_{\perp}$, since the amplitude of the intrinsic correlation generally decreases when $\delta\chi$ (or $r$) increases, while the extinction corrections remain the same, this leads to a bigger effect of dust extinction at a larger $\delta\chi$ (or $r$). Similarly, for a given $r$, since the amplitude of the extinction corrections generally increases when $\delta x_{\perp}$ decreases, while the intrinsic correlation remains the same, a larger effect of dust extinction is expected at smaller $\delta x_{\perp}$, or when the alignment of the separation is closer to the line of sight (LOS). Therefore, the distortion by dust extinction is expected to be most significant at large separations along the LOS, which agrees with our findings in the upper left panel of Figure~\ref{fig:contz036}.
{\it{Comparison with Magnification Bias.}} The upper middle panel of Figure~\ref{fig:contz036} allows us to compare the extinction anisotropy with that from magnification bias, which has been well studied by earlier works, e.g. \cite{Mat00,HuiGL07a}. As with dust extinction, the effect of magnification bias is also more important when the separation is larger and its alignment is closer to the LOS. As before, this can be understood as follows: for a given $\delta x_{\perp}$, the amplitude of the total corrections from magnification bias generally increases when $\delta\chi$ increases ($\xi_{\mu\mu}$ remains the same, while $\xi_{g\mu}+\xi_{\mu g}$ increases linearly with $\delta\chi$), whereas the intrinsic correlation decreases, leading to a bigger effect of magnification bias at larger $\delta\chi$; for a given $r$, the amplitude of the total corrections generally increases when $\delta x_{\perp}$ decreases, while the intrinsic correlation remains the same, leading to a bigger effect at smaller $\delta x_{\perp}$, see also \cite{Mat00,HuiGL07a}.
However, in contrast to dust extinction, the anisotropy from magnification bias has the opposite features---the contours are distorted to the opposite sides of their intrinsic locations.
An examination of Eqn~(\ref{eqn:xigmu}) and Eqn~(\ref{eqn:xige}) shows that $(\xi_{g\mu}+\xi_{\mu g})$ and $(\xi_{ge}+\xi_{eg})$ have similar expressions: both are given as a 2D integral of the matter power spectrum (note we assume constant bias for both the galaxies and the dust) multiplied by a prefactor. This follows from the fact that both $\kappa$ and $\delta\tau$ are weighted integrals of fluctuations along the LOS, and that we adopt the Limber approximation in the calculation.
There are two differences between the two:
first, magnification bias has the characteristic linear dependence on the LOS separation $\delta\chi$ from lensing; second, dust extinction has a dependence on the
observed wavelength. These differences can be exploited to separate the
two effects in data.
For magnification bias, the sign of the overall multiplicative factor is determined by $(5s-2)$, while for dust extinction, it is always negative. With our choice of $s=1.5$, $(5s-2)=5.5>0$, so $(\xi_{g\mu}+\xi_{\mu g})$ and $(\xi_{ge}+\xi_{eg})$ always have opposite signs in our calculation. Since the correction from $\xi_{\mu\mu}$ is less important at a lower redshift compared with those from $(\xi_{g\mu}+\xi_{\mu g})$ \cite{HuiGL07a}, the analysis here explains the opposite anisotropic features from magnification bias and dust extinction throughout the displayed regions in Figure~\ref{fig:contz036}. Specifically, with our choice of cosmology, the 2D integral of matter power spectrum is positive when $\delta x_{\perp}\lsim 115 h^{-1}$Mpc, and negative otherwise. It is the same for the sign of $(\xi_{g\mu}+\xi_{\mu g})$, while the opposite holds true for that of $(\xi_{ge}+\xi_{eg})$. Therefore, the contours are distorted to where $\xi_{gg}$ has larger (smaller) values by dust extinction (magnification bias) when $\delta x_{\perp}\lsim 115 h^{-1}$Mpc, and to where $\xi_{gg}$ has smaller (larger) values when $\delta x_{\perp}\gsim 115 h^{-1}$Mpc.
{\it{Combination with Magnification Bias.}} Due to the canceling effect between dust extinction and magnification bias in our calculation, we can see from the upper right panel that the anisotropy from the combination of these two is weakened to some extent, with the final features dominated by those from dust extinction.
{\it{Including Redshift Distortion.}} Since in redshift space the most significant anisotropic features in the galaxy correlation function are caused by redshift distortion, we replot the upper three panels of Figure~\ref{fig:contz036} in the lower three by including redshift distortion in all the contours, for a more realistic view of the anisotropies seen above. According to the prediction of the Kaiser effect, which is represented by the dotted lines in the lower three panels, the galaxy correlation function has a quadrupole and a hexadecapole component of anisotropy \cite{Ham92, MatS96}, with the magnitudes controlled by the linear redshift distortion parameter $\beta\equiv f_D/b_g$.
Comparing the dotted lines with the solid lines, we can see that, as before, dust extinction distorts the contours to where they had larger values for most of the displayed regions, while magnification bias does the opposite, so the distortions from their combination are reduced, with the final results still dominated by dust extinction. Along each contour, dust extinction is more important where $\delta x_{\perp}$ is smaller, while magnification bias is more important where $\delta\chi$ is larger and $\delta x_{\perp}$ is smaller.
{\it{Dependence on Redshift.}} To see the redshift dependence of the anisotropic features, we show the above contours for a higher redshift $\bar{z}=2$ in Figure~\ref{fig:contz2}. For dust extinction, its relative importance to the intrinsic correlation scales with redshift approximately by a factor of $(1+\bar{z})^{-1}\bar{\rho}_d(\bar{z})$, see Eqn~(\ref{eqn:xigeevol}), which, according to our model for the evolution of dust, increases with redshift until $\bar{z}\simeq 1.2$, where it has a $\sim 60\%$ increase from its value at $\bar{z}=0.36$, then decreases gradually. At $\bar{z}=2$, the factor is $\sim 30\%$ larger than at $\bar{z}=0.36$, so the extinction anisotropy shown in the upper left panel of Figure~\ref{fig:contz2} gets slightly stronger.
At the same time, the relative importance of magnification bias to the intrinsic correlation also becomes larger, as can be directly seen from the much stronger anisotropy shown in the upper middle panel. Both the corrections from $(\xi_{g\mu}+\xi_{\mu g})$ and from $\xi_{\mu\mu}$ become more important at higher redshift: for the former, its ratio to $\xi_{gg}$ scales with $\bar{z}$ roughly as $(1+\bar{z})$, and hence increases with $\bar{z}$; for the latter, the ratio also increases, since $\xi_{\mu\mu}$ grows as a sum along a longer LOS while $\xi_{gg}$ decreases. The latter also becomes more important compared to the former, see \cite{HuiGL07a}, so, unlike at low redshift, the total correction for $\delta x_{\perp}\gsim 115 h^{-1}$Mpc at $\bar{z}=2$ is now also positive, and all the displayed contours are distorted to where $\xi_{gg}$ has smaller values. At this high redshift, the combination of dust extinction and magnification bias is dominated by magnification bias, with its effect mildly weakened by dust extinction when $\delta x_{\perp}\lsim 115 h^{-1}$Mpc, and strengthened when $\delta x_{\perp}\gsim 115 h^{-1}$Mpc. Similar results hold when redshift distortion is included.
{\it{Dependence on Galaxy Sample.}} Finally, we discuss the dependence of the anisotropic features on the properties of the galaxy sample. For dust extinction, the anisotropy depends on the ratio of $s$ to $b_{g}$, and is stronger for galaxy samples with larger $s$ or smaller $b_g$. For magnification bias, the anisotropy is controlled by the factor $(5s-2)/b_g$, which vanishes at $s=0.4$, where $(5s-2)$ switches sign. In our calculation, had we chosen $s<0.4$, $(\xi_{g\mu}+\xi_{\mu g})$ would have the same sign as $(\xi_{ge}+\xi_{e g})$, and magnification bias and dust extinction would instead reinforce each other at low redshift. The anisotropy from redshift distortion is known to be controlled by $\beta$, which depends on the sample through $1/b_g$. So for galaxy samples with different $s$ and $b_g$, the relative importance of these anisotropic effects will differ. Besides $s$ and $b_g$, the anisotropy from dust extinction also depends on the bandpass used to observe the sample: the shorter the wavelengths the bandpass admits, the more extinction the dust causes, and the stronger the anisotropy. Altogether, the different dependences of these anisotropic effects on the sample parameters provide an opportunity to potentially isolate them from one another.
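The sample dependence just described can be summarized by two prefactors, following Eqns~(\ref{eqn:deltae}) and~(\ref{eqn:deltamu}). The sketch below is schematic; the specific values of $s$ and $b_g$ are arbitrary examples.

```python
# Schematic comparison of the sample-dependent prefactors discussed above:
# extinction enters through -2.5 s / b_g (always negative), magnification
# through (5 s - 2) / b_g (changes sign at s = 0.4). The values of s and
# b_g below are arbitrary illustrations.

def extinction_prefactor(s, b_g):
    return -2.5 * s / b_g

def magnification_prefactor(s, b_g):
    return (5.0 * s - 2.0) / b_g

print(magnification_prefactor(0.4, 2.0))   # -> 0.0 (lensing anisotropy vanishes)
print(magnification_prefactor(1.5, 2.0))   # -> 2.75
print(extinction_prefactor(1.5, 2.0))      # -> -1.875 (opposite sign)
```

For the paper's default choice $s=1.5$, $b_g=2$, the two prefactors have opposite signs, which is the origin of the partial cancellation between the two anisotropies at low redshift.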
\section{Effects on Cosmological Probes}
\label{sec:systematic}
In this section, we study the effect of dust extinction on cosmological probes through measurements of the galaxy correlation function. We consider two of these: one is the BAO peak, which serves as a standard ruler and probes the geometry of the universe, the other is the linear redshift distortion parameter $\beta$, which directly probes the growth rate of the universe's structure.
\subsection{The BAO Peak}
\begin{figure*}[htb]
\centering
\subfigure[]
{\includegraphics[scale=0.5]{pscombs.ps}\label{subfig:baovs}}
\hspace{5mm}
\subfigure[]
{\includegraphics[scale=0.5]{pscombb.ps}\label{subfig:baovb}}
\caption{\label{fig:bao} Fractional shift of the BAO peak from the monopole of the galaxy correlation function by dust extinction (the solid lines, labeled by ``$e$''), magnification bias (the dotted lines, labeled by ``$\mu$'') and their combined effects (the dashed lines, labeled by ``$e+\mu$''). In Figure~\ref{subfig:baovs}, we keep $b_g$ at our default choice of $b_g=2$ and vary $s$, while in Figure~\ref{subfig:baovb}, we set $s$ at the default value of 1.5 and vary $b_g$. From top to bottom, the three panels in both figures are for $z=0.36$, $z=1$ and $z=2$ respectively. Note that redshift distortion is included in the calculation of all the monopoles; by itself, it does not shift the monopole BAO peak, according to the Kaiser effect \cite{Kaiser87,Ham92,MatS96}.}
\end{figure*}
The BAO peak has been detected from the monopole of the galaxy correlation function, i.e., the average over all orientations of the separation vector, see e.g. \cite{Eis05,Mar08,CabG09,Kaz10a}. In Figure~\ref{fig:bao}, by identifying the BAO peak as a local maximum in the monopole of $\xi_{\rm obs}$, we show the fractional shift of the peak location $\Delta r_{\rm BAO}/r_{\rm BAO}$ caused by dust extinction (the solid lines), and for comparison, by magnification bias (the dotted lines) and the combination of the two (the dashed lines). The results are given for three different redshifts: $z=0.36$ (upper panels), $z=1$ (middle panels) and $z=2$ (lower panels). In Figure~\ref{subfig:baovs}, we vary $s$ and keep $b_g=2$, while in Figure~\ref{subfig:baovb}, we vary $b_g$ and keep $s=1.5$. Note that redshift distortion is included when we calculate all the monopoles; by itself, it does not shift the monopole BAO peak, according to the Kaiser effect \cite{Kaiser87,Ham92,MatS96}.
The scale-dependent correction to the monopole from dust extinction shifts $r_{\rm BAO}$ to larger values. This is understandable, since, as a negative but increasing component, the correction would shift the local maximum to the right.
The shift is larger when $s$ is larger or $b_g$ is smaller, consistent with our analysis above for the extinction distortion. As a positive but decreasing component, the correction from magnification bias shifts $r_{\rm BAO}$ to smaller values, except at low redshift and when $s<0.4$, and the shift is larger when $|5s-2|$ is larger or $b_g$ is smaller.
Dust extinction tends to act in the opposite direction, so the combination of dust extinction and magnification bias helps to reduce the shift in $r_{\rm BAO}$, except at low redshift and when $s<0.4$. In most cases, the much stronger redshift evolution of the magnification-bias effect, compared with that of dust extinction, makes the latter negligible at high redshift, though it is the dominant effect at low redshift.
However, the fractional shift of $r_{\rm BAO}$ by dust extinction is on the order of $10^{-4}$, which makes it unlikely to be
an important factor for probes of the monopole BAO peak
even at low redshift.
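The peak-shift measurement above can be illustrated with a minimal numerical sketch. All inputs below (monopole shape, correction amplitude, grid) are hypothetical toy choices, not the values used in our calculation; the sketch only shows that a negative but increasing correction drags a local maximum to larger scales, with a fractional shift of order $10^{-4}$ for these toy amplitudes.

```python
import numpy as np

def peak_location(r, xi):
    """Locate a local maximum by fitting a parabola through the grid
    point with the largest xi and its two neighbours."""
    i = int(np.argmax(xi))
    a, b, c = xi[i - 1], xi[i], xi[i + 1]
    dr = r[1] - r[0]
    return r[i] + 0.5 * dr * (a - c) / (a - 2.0 * b + c)

# Toy monopole: a BAO-like Gaussian bump at r = 105 Mpc/h
r = np.linspace(80.0, 130.0, 501)
xi0 = 1e-3 * np.exp(-0.5 * ((r - 105.0) / 8.0) ** 2)

# Schematic extinction-like correction: negative but increasing with r,
# so it tilts the local maximum towards larger scales
corr = -2e-5 * (130.0 - r) / 50.0

r_peak = peak_location(r, xi0)
shift = (peak_location(r, xi0 + corr) - r_peak) / r_peak
# shift is positive: the corrected peak sits at slightly larger r
```

The parabolic interpolation stands in for however one chooses to locate a local maximum between grid points when identifying the BAO peak.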
There has been a claimed detection of the BAO peak from
the LOS galaxy correlation function by
\cite{GCH09}, which was disputed by \cite{Kaz10b}.
The issue is subtle: \cite{CG11} showed
using a large set of simulations
that even for the {\it monopole} correlation function,
the BAO peak detection is only marginal; yet, useful
cosmological constraints can be inferred once combined
with external data. Indeed, the LOS measurements
of \cite{GCH09} and \cite{Kaz10b} are consistent with
each other. It is the interpretation (of how the data should
be used) that differs.
For our purpose in this paper, it suffices to note that
the dust extinction correction has no LOS dependence
and so would not shift the LOS BAO peak at all.
\subsection{Redshift Distortion}
\begin{figure}[htb]
\resizebox{90mm}{!}{\includegraphics{beta.ps}}
\caption{\label{fig:beta} Fractional changes in the linear redshift distortion parameter $\beta$, inferred through Eqn~(\ref{eqn:getbeta}), as a function of the distance separation $r$. The solid lines show the changes caused by dust extinction (labeled by ``$e$''), the dotted lines are those by magnification bias (labeled by ``$\mu$''), and the dashed lines are by their combination (labeled by ``$e+\mu$''). To see the changes more easily, we also show $\Delta \beta=0$ using dot-dashed lines. From top to bottom, the three panels are for $z=0.36$, $z=1$ and $z=2$ respectively. The values for $s$ and $b_g$ are set at our default choice.}
\end{figure}
According to the Kaiser effect \cite{Kaiser87,Ham92,MatS96}, the redshift distortion parameter $\beta$ can be inferred from the anisotropy of the galaxy correlation function through \cite{Ham92}
\begin{equation}
\frac{\xi_2(r)}{\xi_0(r)-\bar{\xi}_0(r)}= \frac{\frac{4}{3}\beta+\frac{4}{7}\beta^2}{1+\frac{2}{3}\beta+\frac{1}{5}\beta^2},\label{eqn:getbeta}
\end{equation}
where $\xi_0$ and $\xi_2$ are the monopole and quadrupole of $\xi_{\rm obs}$, and $\bar{\xi}_0$ is the volume average of $\xi_0$, given by
\begin{equation}
\bar{\xi}_0(r)=\frac{3}{r^3}\int_0^{r}\xi_0(r')r'^2dr'.
\end{equation}
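As an illustration of this inference step, the following sketch inverts Eqn~(\ref{eqn:getbeta}) for $\beta$ by bracketed root finding and computes $\bar{\xi}_0$ with the trapezoidal rule. The power-law input is a hypothetical toy model, chosen only because its volume average is analytic ($\bar{\xi}_0/\xi_0 = 3/(3-\gamma)$ for $\xi_0\propto r^{-\gamma}$):

```python
import numpy as np
from scipy.optimize import brentq

def kaiser_Q(beta):
    """Right-hand side of Eqn (getbeta): the quadrupole-to-monopole
    ratio predicted by linear (Kaiser) redshift distortion."""
    return (4.0 / 3.0 * beta + 4.0 / 7.0 * beta**2) \
        / (1.0 + 2.0 / 3.0 * beta + 1.0 / 5.0 * beta**2)

def beta_from_Q(Q):
    """Invert kaiser_Q by bracketed root finding; dQ/dbeta > 0,
    so the solution on [0, 3] is unique."""
    return brentq(lambda b: kaiser_Q(b) - Q, 0.0, 3.0)

def volume_average(r, xi0):
    """xi0_bar(r) = (3/r^3) int_0^r xi0(r') r'^2 dr', by the
    trapezoidal rule on a grid starting near r = 0."""
    f = xi0 * r**2
    cum = np.concatenate(([0.0],
        np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))
    return 3.0 * cum / r**3

# Round trip: beta -> Q -> beta
beta_in = 0.5
beta_out = beta_from_Q(kaiser_Q(beta_in))

# Toy power law xi0 = r^{-1.8}: analytically xi0_bar / xi0 = 3/1.2 = 2.5
r = np.linspace(0.01, 10.0, 4000)
ratio = volume_average(r, r**-1.8)[-1] / (10.0**-1.8)
```

Since $dQ/d\beta>0$, any correction with $\Delta Q>0$ (as we find for dust extinction below) maps to a larger inferred $\beta$.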
However, when the effects of dust extinction and magnification bias are taken into account, their corrections to the monopole and quadrupole would shift the inferred $\beta$ from its true value, and moreover, the scale-dependence of the corrections may cause the inferred $\beta$ to be scale-dependent too. This is what we find in Figure~\ref{fig:beta}, where the fractional changes in $\beta$ caused by dust extinction (the solid lines), magnification bias (the dotted lines) and their combination (the dashed lines) are presented at $z=0.36$ (the upper panel), $z=1$ (the middle panel), and $z=2$ (the lower panel) respectively. The results are obtained with our default values: $s=1.5$ and $b_g = 2$.
As can be seen, the changes from dust extinction and magnification bias are in opposite directions, $\beta$ becoming larger for the former and smaller for the latter, so the changes from their combination tend to be reduced. If we define $Q\equiv \xi_2/(\xi_0-\bar{\xi}_0)$, we find that dust extinction introduces $\Delta(\xi_0-\bar{\xi}_0)>0$ (since $\Delta \xi_0$ increases with $r$) and $\Delta \xi_2<0$ (since $\Delta\xi$ increases as the alignment of the separation becomes more transverse), while under the Kaiser effect $(\xi_0-\bar{\xi}_0)$ and $\xi_2$ are both negative (with the same sign as $(\xi_{gg}-\bar{\xi}_{gg})$, see e.g. \cite{Ham92}), so dust extinction leads to $\Delta Q>0$. When we infer $\beta$ from $Q$ through Eqn~(\ref{eqn:getbeta}), we have $dQ/d\beta>0$, which explains why dust extinction causes $\beta$ to be larger. For $s > 0.4$, magnification bias introduces changes of the opposite sign, so the smaller inferred $\beta$ when magnification bias is included can be understood in a similar way.
Our calculation also shows that the fractional changes by dust extinction are on the order of a few percent, and vary mildly with redshift, while those by magnification bias are on the level of $\sim 1\%$ at $z=0.36$, but can grow up to $\sim 40\%$ at $z=2$, which turns it from a sub-dominant effect at low redshift to a dominating effect at high redshift. These fractional changes indicate that both dust extinction and magnification bias would be non-negligible for redshift distortion probes for the purpose of precision cosmology, e.g. \cite{wigglez,boss,bigboss,euclid}, especially at low redshift for the former and at high redshift for the latter, though the canceling effect between these two helps to reduce these systematics.
Finally, we point out that, if these systematics are neglected, the scale dependence of the inferred $\beta$ may be mistaken for an indication of modified gravity, which, unlike General Relativity (GR), can produce a scale-dependent growth factor; this could lead to wrong conclusions in tests of GR through probes of the redshift distortion parameter \cite{Acq08,Acq10,ZhaL07,ZhaB08}.
\section{Discussion}
\label{sec:discuss}
Inhomogeneities in the extinction of the galaxies' fluxes by cosmic dust, whose existence was recently detected by \cite{MenSFR09}, modify the distribution of a flux-selected galaxy sample. This is similar to what the inhomogeneous magnification of the galaxies' fluxes by gravitational lensing does to the galaxies' distribution, but with the opposite effect on average.
Hence, in addition to the Alcock-Paczynski effect, redshift distortion, and magnification bias, dust extinction is a fourth effect that creates an anisotropic distortion of the galaxy correlation function for a flux-limited selection. In this paper, we have studied this extinction distortion of the galaxy correlation function and evaluated its effect on cosmological probes such as the BAO and the linear redshift distortion.
We use the extinction-galaxy cross correlation found by \cite{MenSFR09} to calculate the corrections to the galaxy correlation function from dust extinction. We extrapolate their results to larger scales by assuming dust traces galaxies, and to other redshifts by assuming the evolution of dust follows that of the stars. With the choice of $\lambda_{\rm obs}=\lambda_V$, $s=1.5$, $b_g=2$, we show the anisotropic extinction distortion together with that from magnification bias and redshift distortion in Figure~\ref{fig:contz036} and Figure~\ref{fig:contz2}.
We find the distortion by dust extinction alone is most significant along the LOS and at large separations, which is similar to that by magnification bias. Their precise shapes are different
though. Lensing induces a correction to the correlation
function that rises with the LOS separation, while extinction does not.
The correction from dust extinction depends only on the transverse separation $\delta x_{\perp}$, and with our choice of cosmology, it is negative when $\delta x_{\perp}\lsim 115 h^{-1}$Mpc, positive otherwise. With our choice of $s>0.4$, the correction is almost always opposite in sign to that from magnification bias, leading to the opposite anisotropic features seen in the distortions by these two effects. So, the distortion by their combined effect tends to be reduced. The extinction distortion evolves with redshift approximately by $(1+\bar{z})^{-1}\bar{\rho}_d(\bar{z})$, which is much milder than the evolution of the lensing distortion.
At low redshifts ($\bar z \lsim 1$),
the extinction distortion tends to be more important
than lensing, while the opposite is true at high redshifts.
By identifying the BAO peak as a local maximum, we find the scale-dependent correction from dust extinction to the monopole of the galaxy correlation function shifts the monopole BAO peak to larger scales, but the shift is on the order of $10^{-4}$, and it does not change much when varying $s$, $b_g$ and $\bar{z}$. At the same time, the scale-independent correction from dust extinction to the LOS correlation function evaluated at a fixed $\delta x_{\perp}$ does not shift the LOS BAO peak at all. So for probes of the BAO, dust extinction is probably a negligible effect.
The anisotropic extinction distortion also biases the linear redshift distortion parameter $\beta$, inferred from the monopole and quadrupole of the observed galaxy correlation function according to the Kaiser effect. We find that with dust extinction, the inferred $\beta$ is larger than the true value by up to a few percent, with a shift that varies mildly with redshift, while with magnification bias ($s>0.4$), $\beta$ is smaller than the true value by up to the percent level at low redshift ($z \lsim 1$), but by up to $\sim 40\%$ at high redshift. This suggests both effects are non-negligible for precision probes of $\beta$, especially extinction at low redshift and lensing at high redshift, though their combination tends to reduce the overall shift in $\beta$.
With these two effects, the inferred $\beta$ also becomes scale-dependent, which should be taken into account for tests of GR through the scale dependence of $\beta$ (the growth factor). Our analysis of $\beta$ can be extended to Fourier space. It is possible that the changes in $\beta$ would be smaller there, as suggested by the earlier work of \cite{HuiGL07b}, who found that the impact of magnification bias on the galaxy correlation is less severe in Fourier space than in real space. We leave a rigorous study for future work.
The extinction distortion (the extinction correction normalized by the intrinsic galaxy correlation) scales with the properties of the galaxy sample as $s/b_g$, so it is more significant for galaxy samples with larger $s$ or smaller $b_g$; it also depends on $\lambda_{\rm obs}$, the bandpass used to observe the sample: the shorter $\lambda_{\rm obs}$, the stronger the distortion. This is different from the distortion by magnification bias, which depends on $(5s-2)/b_g$ and has no dependence on $\lambda_{\rm obs}$.
Moreover, the two distortions have different shapes, with lensing exhibiting its signature linear dependence on the LOS separation, while extinction depends exclusively on the transverse separation.
These differences can be used to separate the two effects, to allow a simultaneous study of both the cosmic extinction and the cosmic magnification \cite{ZhaP05}, which we hope to explore in future work.
\begin{acknowledgments}
We thank J$\ddot{\rm o}$rg Dietrich and Guilin Liu for helpful discussions, and Zolt$\acute{\rm a}$n Haiman and Dragan Huterer for useful comments on the manuscript. W.F. is supported by the NSF under contract AST-0807564 and by NASA under contract NNX09AC89G. L.H. is supported by DOE grant DE-FG02-92-ER40699 and NASA grant NNX10AH14G.
\end{acknowledgments}
\vfill
\bibliographystyle{physrev}
\section{Introduction}
The central 200 pc of the Galaxy (Central Molecular Zone; CMZ) is an extreme Galactic environment. Molecular clouds in the CMZ have higher average gas temperatures \citep[50$-$300 K;][]{Mauers86, mills13, Krieger17, Ginsburg16}, higher densities \citep[10$^{3-5}$ cm$^{-3}$;][]{Zylka92, mills18a}, and broader line widths, on the \til10 pc scale \citep[\til20$-$30 km s$^{-1}$;][]{bally87, Kauffmann17a}, than typical clouds in the interstellar medium (ISM) of the Galactic disk. The velocities of CMZ clouds range from $-$250 to +250 km s$^{-1}$~within the inner 1\fdg5 of our Galactic center. The large velocity range of these clouds, their wide velocity dispersions, and line-of-sight confusion from multiple velocity components can make it difficult to place individual molecular clouds within the 3-dimensional context of the CMZ.
{Figure \ref{introfig} shows the inner 100 pc of the Galactic center, where many of these dense molecular clouds are shown in red in this 3-color image. }
Recent efforts have been made to connect {these} individual clouds (1$-$10 pc) to the larger structures (\til100 pc) in the Galactic center {\citep[][]{Sofue95, sawada04,Molinari11, Kru15,Henshaw16}.}
\begin{figure*}[tb!]
\centering
\includegraphics[scale=0.45]{Figure1.png}
\caption{
{Three-color} composite of the inner 100 pc of the CMZ, centered on \rar, {where red and green are the 160 and 70 $\mu$m emission, respectively, from HiGAL \citep{Molinari10}}, and blue is 8 $\mu$m emission from Spitzer {\citep{Churchwell09}}. The solid white circle shows the region of the CMZ targeted in this study. This field is centered on the M0.10$-$0.08~molecular cloud, but could overlap with some of the extended emission in M0.11$-$0.11. The dashed white circle shows the location of the \scloud~expanding shell\ {presented in \cite{my17}}. Additional prominent CMZ regions are labeled for reference purposes. {Overlaid on this figure is a dashed line showing the extent of the orbital stream proposed by \cite{Kru15}.} }
\label{introfig}
\end{figure*}
{The {3-dimensional} orientation of the large-scale structures in the CMZ can depend greatly on the interpretation of the gas kinematics. For example, \cite{Sofue95} and \cite{sawada04} suggest a two spiral arm structure, whereas \cite{Molinari11} argue for a twisted elliptical ring. The most recent orbital model, presented in \cite{Kru15}, suggests an open orbit solution (see dashed line in Figure \ref{introfig} for the projected trajectory of their orbital model solution).}
In the {\cite{Kru15} orbital} model, gas in the CMZ traces an open orbit set by the shape of the CMZ potential. Connected chains of molecular clouds all follow the same orbital path or `stream'. The {3-dimensional} arrangement of clouds along a continuous stream can be loosely reconstructed from their projected radial distance to Sgr A$^*$~and the observed line-of-sight velocity.
However, there is still some ambiguity about whether certain features are located on the near or far sides of the Galactic center. Additionally, multiple components along the same line-of-sight can make it challenging to disentangle the kinematics of a single cloud.
High spatial and spectral resolution observations targeting regions where the kinematics are complex are needed to resolve the {individual} components.
\input{spectral-table.tex}
One region of the CMZ where the kinematics are complex is toward the M0.10$-$0.08~molecular cloud\ {(solid white circle in Figure \ref{introfig})}.
The M0.10$-$0.08~cloud, and the adjacent M0.11$-$0.11\ cloud (annotated in Figure \ref{introfig}), have been observed in several large-scale surveys of molecular gas in the CMZ for many decades {\citep{Gusten81, tsuboi97, chuss03, Handa06, Jones12, mills17, cmzoom1, cmzoom2, guan21}. }
{Several of the} low-spatial-resolution surveys of cold dust and molecular gas show that the M0.10$-$0.08~cloud is relatively bright and compact ($<$3 pc), {with a mass of 1.7 $\times$ 10$^5$ M$_{\odot}$~\citep[{e.g.,}][]{tsuboi11}}. {The M0.10$-$0.08~cloud has also been observed to have substructure, as detected in the recent 1mm CMZoom survey \citep{cmzoom1, cmzoom2}}. M0.11$-$0.11, however, is relatively faint and extended ($>$5 pc) and could spatially overlap with M0.10$-$0.08~(Figure \ref{introfig}).
The spatial overlap between the two clouds has led some {investigators} to argue for a possible connection between the two clouds \citep{Handa06, clavel13}. However, there are {unsolved questions} about this connection in the literature due to the large velocity separation between the two clouds {along this line-of-sight \citep[$\Delta$v\til30 km s$^{-1}$;][]{ponti10, Kru15}.}
{Understanding the connection or separation of the two clouds can give insight on the complex kinematics in the region. Furthermore, disentangling the complex kinematics into a somewhat simple solution is essential for understanding the {3-dimensional} structure of the gas and the effects that cloud-cloud interactions can have on the gas motions.}
We present high-resolution (\til2$-$3\arcsec) radio observations of M0.10$-$0.08~using the National Science Foundation's Karl G. Jansky Very Large Array (hereafter, VLA). Using these observations we analyze the morphological and kinematic structure of M0.10$-$0.08~at high-resolution (Section \ref{results}) and discuss the relationship of M0.10$-$0.08~to other clouds in the region (Section \ref{dis}).
\section{Observations and Data Calibration}
\label{obs}
The observations presented in this paper were taken with the VLA interferometric radio telescope, operated by the National Radio Astronomy Observatory.\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} These VLA observations were part of a larger survey of molecular cloud s in the CMZ (PI: Elisabeth A.C. Mills; Project code: 11B-210).\footnote{Results from this survey have also been presented in: \cite{mills13, mills14, mills15, dom16, my17, mills18b}.} This survey used the K (18.0$-$26.5 GHz) and~Ka (26.5$-$40.0 GHz) band receivers on 2012 January 14$^{\mathrm{th}}$ \& 13$^{\mathrm{th}}$, respectively, with the DnC hybrid array. In this survey we observed 15 spectral lines {from} several regions in the CMZ. The image cube parameters for all 15 lines are reported in Table \ref{Images}. The results presented in this paper focus on a single pointing containing M0.10$-$0.08,\footnote{All J$<$7 NH$_{3}$~images, shown in Figure \ref{morphfig}, are from a larger multi-pointing mosaic \citep[See Figure 3, left, in][]{my17}.} centered at $\alpha$(2000)=17$^{\mathrm{h}}$46$^{\mathrm{m}}$09\fs79, $\delta$(2000)=$-28\degr 53\arcmin 18\farcs0$, for K band, and $\alpha$(2000)=17$^{\mathrm{h}}$46$^{\mathrm{m}}$11\fs37, $\delta$(2000)=$-28\degr 53\arcmin 24\farcs3$, for Ka band, with a time-on-source of \til25 minutes in each frequency band.
\begin{figure*}[tb!]
\centering
\includegraphics[scale=0.82]{Figure2.pdf}
\caption{{Peak} intensity distribution of 11 of the 15 molecular line transitions detected in this paper. The top two rows show the NH$_{3}$~(1,1) $-$ (7,7) and CH$_3$OH~{line emission}. The bottom row shows the observed HC$_3$N~and CH$_3$CN~transitions. The bottom right-most panel shows the 20, 40, 80, and 140 $\sigma$ contour levels of the NH$_{3}$~(3,3) {emission}, with {annotations identifying} several {of the} `Features' discussed in Section \ref{g10morph}. The spatial resolution of each presented molecular transition is shown in the bottom left corner of every panel. The imaging parameters of all 15 detected molecular transitions are described in Table \ref{Images}. The black dashed line shows the orientation of the Galactic plane at $b=-0\fdg075$. }
\label{morphfig}
\end{figure*}
The correlator setup for this survey is described in \cite{mills15} and \cite{my17}. High-frequency VLA procedures\footnote{\href{https://casaguides.nrao.edu/index.php?title=EVLA_high_frequency_Spectral_Line_tutorial_-_IRC\%2B10216_part1}{Hyperlink to the high frequency CASA tutorial}. All imaging and calibration of the VLA observations presented here used the Common Astronomy Software Application (CASA) program provided by NRAO \citep{Casa}.} were used for calibration and imaging, as described in \cite{mills15}, with one difference. We employed the CLEAN parameter ``multiscale'' for all spectral lines that had a signal-to-noise ratio $>$15 and a peak intensity $>$20 mJy beam$^{-1}$~(see Table \ref{Images}) in order to improve our sensitivity to {data taken with} short baselines in our interferometric {observations}.
\section{Results}
\label{results}
\subsection{Morphology of the Molecular Emission in M0.10$-$0.08}
\label{mol-res}
Figure \ref{morphfig} presents the {peak} intensity emission of 11 molecular transitions detected in M0.10$-$0.08 (see Table \ref{Images} for imaging parameters). The remaining four detected molecular transitions in M0.10$-$0.08~are {relatively} faint ($<$9$\sigma$) and are therefore \textit{not} shown in Figure \ref{morphfig}. In the following sections we examine the bright, diffuse molecular emission (\S \ref{g10morph}: NH$_{3}$~\& HC$_3$N). We focus on the kinematics of the NH$_{3}$~emission and fit {the averaged gas profile} in Section \ref{g10kin}. The CH$_3$OH~($4_{\textrm{-}1}$$-$$3_{0}$) class I maser transition is discussed in detail in Section \ref{masertext}.
\subsubsection{Morphology of the Diffuse Molecular Emission: NH$_{3}$~and HC$_3$N}
\label{g10morph}
The top two rows in Figure \ref{morphfig} show the detected NH$_3$ (1,1)$-$(7,7) emission in M0.10$-$0.08. The distribution of the metastable NH$_{3}$~emission is similar across all seven~transitions. The speckled morphology observed in the NH$_{3}$~(7,7) transition is likely an artifact of cleaning {with delta functions (see Section \ref{obs} and Table \ref{Images} for a discussion on the cleaning process)}.
Most of the NH$_{3}$~emission in the M0.10$-$0.08~cloud~is concentrated within a square arcminute region near the center of the field. At high-resolution (3\arcsec) the M0.10$-$0.08~cloud has a wedge-like appearance that is narrow at lower Galactic longitude (10\arcsec; $l$=0\fdg095) and widens with increasing Galactic longitude (50\arcsec; $l$=0\fdg11). This wedge-like structure is also noticeable in both transitions of HC$_3$N: 3$-$2 and 4$-$3 (bottom row in Figure \ref{morphfig}). Additionally, there is a diffuse `filamentary extension' toward the southern region of M0.10$-$0.08,~as indicated in the bottom right-most panel of Figure \ref{morphfig} (i.e., `Features' panel). This filamentary extension is detected in both the NH$_{3}$~and HC$_3$N~transitions, but not in the CH$_3$OH~(4$-$3) transition. The longest extent of M0.10$-$0.08~is 75\arcsec~(\til3 pc), indicating that this cloud~is among the more compact molecular clouds observed in the Galactic center~{\citep[diameters of 3$-$10 pc; e.g.,][]{Gusten81,bally87,Kauffmann17a,mills17}}.
\begin{figure}
\includegraphics[scale=0.2]{Figure3.png}
\caption{
{Intensity-weighted} velocity distribution (1st moment map) of the NH$_{3}$~(3,3) transition for emission above the 10$\sigma$ level, integrated over a velocity range of $-$20 to 100 km s$^{-1}$. The black contours correspond to emission at 10, 20, 40, 80, \&~140 $\times$ 2.3 mJy beam$^{-1}$~(rms level). The black {dashed lines show} the orientation of the Galactic plane at $b=-0\fdg075$ {and $b=-0\fdg09$}.}
\label{spectrum}
\end{figure}
Within M0.10$-$0.08~there are several (\til5) compact {clumps (D$<$10\arcsec)} of brighter NH$_{3}$~emission {($>$0.2 Jy beam$^{-1}$)} that are most prominent in the (3,3) transition. Most of these compact clumps are concentrated toward the northeast region of the cloud, with the brightest NH$_{3}$~clump located at: $\alpha$(2000)=17\h46\m12\fs3,~$\delta$(2000)= $-$28\degr53\arcmin18\arcsec. The brightest clump (i.e., `Main Clump'; see `Features' panel in Figure \ref{morphfig}) contains emission in all 11 transitions shown in Figure \ref{morphfig}, including the fainter CH$_3$CN~(2$-$1) transition. Further, the Main Clump is the only location where we detect CH$_3$CN~emission. Directly south of the Main Clump is a lower intensity emission region (`Depression;' labeled in the Features panel of Figure \ref{morphfig}). This depression region is \til10\arcsec~across and is located at $\alpha$(2000)=17\h46\m12\fs5, $\delta$(2000)=$-$28\degr53\arcmin25\arcsec. The Depression is detected in both NH$_{3}$~and HC$_3$N, but is most {prominent} in the HC$_3$N~(4$-$3) transition.\footnote{There is a second lower level emission region to the west of the Main Clump and north of the Filamentary Extension (see Figure \ref{morphfig}). However, because we detect emission above the noise level in this region in the HC$_3$N~(4$-$3) transition, we do not characterize this feature as a second `depression'.} Although this feature is detected in all of our extended emission lines (NH$_{3}$, HC$_3$N), it could be produced by spatial filtering in our interferometer data. Future observations at different wavelengths are necessary to determine whether the Depression is some kind of cavity.
\begin{figure}
\centering
\includegraphics[scale=0.57]{Figure4.png}
\caption{NH$_{3}$~(5,5) velocity spectrum {averaged} over the entire field of view. The NH$_{3}$~(5,5) line was chosen as a representative {spectrum} to show the multiple components toward this region. In the (5,5) line all three components are detected, and at this higher-J transition the hyperfine lines are suppressed. The black line shows the data. The blue Gaussians show the individual components (presented in Table \ref{55table}), with the red line showing the sum of the three Gaussian components. The solid green line at -0.05 K shows the residuals of the {three-Gaussian-component} fit. The dashed green line at -0.05 K shows the residuals of a {two-Gaussian-component} fit {(v$_c$ $\simeq$ 10 km s$^{-1}$~and 50 km s$^{-1}$)}. In the two-Gaussian fit there is a consistent excess of emission around \til20$-$40 km s$^{-1}$~(six spectral cube channels; dashed green line). {This excess emission around 20$-$40 km s$^{-1}$~is brighter than the emission in the 10.6 km s$^{-1}$~component and is detected in the HC$_3$N~transitions as well.} Therefore, we {interpret} the excess emission as an intermediate-velocity component.
}
\label{am-spectra}
\end{figure}
\subsection{Kinematics of the NH$_{3}$~Emission}
\label{g10kin}
Figure \ref{spectrum} shows the centroid velocity distribution (moment 1) of the NH$_{3}$~(3,3) transition. Most of the bright NH$_{3}$~emission ($>$10$\sigma$) is at a velocity of 35$-$65 km s$^{-1}$. However, as we will show in the following section, faint ($<$10$\sigma$) molecular emission is detected {at lower} velocities of \til10 km s$^{-1}$.
We note an asymmetry in the velocity distribution that results in roughly a 10 km s$^{-1}$ pc$^{-1}$~gradient (where 1 pc is \til25\arcsec). Most of the higher-velocity NH$_{3}$~(3,3) emission ($v$ $\geq$ 55 km s$^{-1}$) is located toward the {north-western} side of M0.10$-$0.08~({around \textit{b}=$-$0\fdg075}) and the lower-velocity emission ($v$ $\leq$ 45 km s$^{-1}$) is generally located toward the south and {south-eastern} sides of M0.10$-$0.08~({around \textit{b}=$-$0\fdg09}). The orientation of the described velocity gradient is perpendicular to the direction of orbital motion {in the \cite{Kru15} orbital model}. The filamentary extension, described in Section \ref{mol-res}, contains mainly lower velocity emission {(35$-$45 km s$^{-1}$)} and is oriented roughly parallel to the described velocity gradient.
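The moment maps used here can be sketched as follows. The cube, velocity axis, and gradient amplitude below are synthetic stand-ins (the function and variable names are ours, not from any pipeline); the point is only how an intensity-weighted velocity (moment 1) map is built from thresholded channels:

```python
import numpy as np

def moment_maps(cube, vel, threshold=0.0):
    """Moment 0 (integrated intensity) and moment 1 (intensity-
    weighted velocity) of a (nchan, ny, nx) spectral cube.
    Channels at or below `threshold` are zeroed before summing."""
    masked = np.where(cube > threshold, cube, 0.0)
    dv = abs(vel[1] - vel[0])
    mom0 = masked.sum(axis=0) * dv
    num = (masked * vel[:, None, None]).sum(axis=0) * dv
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = np.where(mom0 > 0.0, num / mom0, np.nan)
    return mom0, mom1

# Synthetic cube: one Gaussian line per pixel, with a centroid that
# drifts linearly across the map, mimicking a velocity gradient
vel = np.linspace(-20.0, 100.0, 121)
x = np.arange(8)
centroids = 35.0 + 2.0 * x                    # km/s, hypothetical
line = np.exp(-0.5 * ((vel[:, None, None]
                       - centroids[None, None, :]) / 10.0) ** 2)
cube = np.broadcast_to(line, (121, 8, 8)).copy()

mom0, mom1 = moment_maps(cube, vel)
# mom1 recovers the input centroids, tracing the imposed gradient
```

For a single line per pixel, the moment 1 map recovers the centroid field; the caveats for multi-component sight lines are discussed in the next section.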
\input{55-table.tex}
\subsubsection{Multiple velocity components toward M0.10$-$0.08}
\label{3comp}
Moment 1 maps, like the one presented in Figure \ref{spectrum}, have the advantage of showing the {predominant} velocity distribution across a cloud or region. However, these maps can be misleading since they can average over multiple components and be weighted by the brighter emission components. Integrating the emission across a cloud or region and analyzing the spectra using fitting programs like \textit{pyspeckit} \citep{2011ascl.soft09001G,ginsburg22}\footnote{The $pyspeckit$ python program is available online at \href{https://github.com/pyspeckit/pyspeckit}{https://github.com/pyspeckit/pyspeckit}. } can help identify and distinguish multiple components. Once these velocity components are disentangled we can map their spatial distribution and morphology by isolating channels associated with the individual velocity component. {By analyzing the gas kinematics using several methods we can understand the {relative structure of the two clouds} toward this complex region.} In this section we will identify the velocity components toward the M0.10$-$0.08~cloud by analyzing the NH$_{3}$~(5,5) line.
\begin{figure}
\includegraphics[scale=0.47]{Figure5.png}
\caption{Molecular morphology of the three velocity components presented in Table \ref{55table} (integrated intensity, moment 0, in NH$_{3}$~(3,3)).
These panels were made using V$_c$ $\pm$ $\sigma_v$ (to the closest channel). The red contour in all three panels shows the 20$\sigma$ level from the `Features' panel in Figure \ref{morphfig} for spatial reference. {Annotated on the 10.6 and the 51.5 km s$^{-1}$\ panels are the M0.11$-$0.11\ and M0.10$-$0.08\ clouds, respectively.}
The black box shows the {region used for the} position-velocity~slice in Figure \ref{g10pv}.}
\label{velocity-panels}
\end{figure}
Figure \ref{am-spectra} shows the raw integrated spectrum (black histogram) of the NH$_{3}$~(5,5) line. We chose to analyze the J=5 NH$_{3}$~transition because the hyperfine {satellite} lines are {quite weak} and do not contribute significantly to the spectrum. We initially fit the NH$_{3}$~(5,5) line with two main Gaussian components at \til10$-$15 km s$^{-1}$~and \til50$-$55 km s$^{-1}$. The residuals from this initial fit showed an excess around 20$-$40 km s$^{-1}$~(dashed green line; Figure \ref{am-spectra}). This excess emission, which appears as a lower-velocity wing to the brighter \til50$-$55 km s$^{-1}$~velocity component, is detected in nearly all of our observed lines (e.g., NH$_{3}$, HC$_3$N). Since this excess is detected in multiple molecules and transitions, we {interpret it} to be an intermediate velocity component. Including a third component in our fitting greatly reduced the residuals, producing the solid green curve in Figure \ref{am-spectra}. The final fit parameters used to produce the three Gaussian components in Figure \ref{am-spectra} are listed in Table \ref{55table}.
\begin{figure*}
\includegraphics[scale=0.5]{Figure6.png}
\caption{Distribution of the 36.2 GHz CH$_3$OH~($4_{\textrm{-}1}$$-$$3_{0}$) masers in M0.10$-$0.08~showing the maximum intensity emission ({\it left}), from Figure \ref{morphfig}, and central velocities ({\it right}) for emission above the 12$\sigma$ level. The overlaid contours {show} 12, 30, 100, and 200 $\times$ 50 mJy beam$^{-1}$~(rms noise in brightest channel). The 15 brightest masers from Table \ref{MaserTable} are marked on the \textit{left}~panel. }
\label{all maser}
\end{figure*}
The lowest velocity component, which has a central velocity of 10.6 km s$^{-1}$, is the faintest of the three components. This velocity component is detected in both the HC$_3$N~transitions and in the NH$_{3}$~lower J-transitions (J$<$7). The highest velocity component, fit with a central velocity of 51.5 km s$^{-1}$, is the brightest of the three components and is detected in all of our observed molecular lines. This velocity component appears to dominate the moment 1 map, shown in Figure \ref{spectrum}. The intermediate velocity component, which is best fit with a central velocity of 37.6 km s$^{-1}$~in the NH$_{3}$~(5,5) line, is shown to be slightly spatially offset from the high velocity component in Figure \ref{spectrum}. We note that, while present, the central velocity of the intermediate velocity component varied between the different molecular transitions, ranging from \til30$-$45 km s$^{-1}$. Therefore, the error estimates on the central velocity of the intermediate component are much larger than those of the low and high velocity components, reflecting this uncertainty.
We can further analyze the morphology of the molecular emission by isolating the channels associated with each component. Figure \ref{velocity-panels} shows the distribution of the NH$_{3}$~(3,3) emission in each velocity component, labeled by their respective central velocities from Table \ref{55table}. We use the NH$_{3}$~(3,3) line for this analysis due to the {faintness} of the {low-velocity} component in the NH$_{3}$~(5,5) {transition}. When integrating over the field we were sensitive enough to detect the 10.6 km s$^{-1}$~component, but for spatial mapping the NH$_{3}$~(5,5) line is not bright enough to perform a pixel-by-pixel analysis of {that} component. We are aware {that} the hyperfine {satellite lines of the} NH$_{3}$~(3,3) emission will be more prominent {than} in the (5,5) transition and will acknowledge where those lines may contribute in the following discussion.
In general, the observed gas morphology is unique for each velocity component. The 51.5 km s$^{-1}$~velocity gas is concentrated toward the center of the field and closely follows the bright NH$_{3}$~emission in Figure \ref{morphfig}, with the exception of the filamentary extension (e.g., see the wedge-shaped distribution in the red contour, Figure \ref{velocity-panels}). The 10.6 km s$^{-1}$~component is distributed throughout the field-of-view and contains several elongated structures (e.g., black box in Figure \ref{velocity-panels}).
Further, this component does not appear to have a similar morphology to the 51.5 km s$^{-1}$~component, suggesting this gas could be independent of the 51.5 km s$^{-1}$~emission.
The morphology of the 37.6 km s$^{-1}$~component has similar attributes to both the 10.6 and 51.5 km s$^{-1}$~components. Unlike the 51.5 km s$^{-1}$~component, the 37.6 km s$^{-1}$~component \textit{is} associated with the filamentary extension. Further, the filamentary extension closely follows the elongated structure in the 10.6 km s$^{-1}$~component (Figure \ref{velocity-panels}). The 37.6 km s$^{-1}$~component also contains concentrated emission toward the north which spatially overlaps with emission in the 51.5 km s$^{-1}$~component.
{Because of this spatial overlap,} some of the 37.6 km s$^{-1}$~emission could be from the hyperfine lines in the 51.5 km s$^{-1}$~component.
\subsection{36.2 GHz CH$_3$OH~Masers in M0.10$-$0.08}
\label{masertext}
Our Ka-band observations included the 36.2 GHz CH$_3$OH ($4_{\textrm{-}1}$$-$$3_{0}$) maser transition. This class I maser is known to trace shocks, as it is collisionally excited \citep{Morimoto85, Menten91a, Sj10}. The 36.2 GHz CH$_3$OH ($4_{\textrm{-}1}$$-$$3_{0}$) maser transition has previously been detected towards this region \citep{YZ13, cotton16}. Our data {suggest} there are at least one hundred compact CH$_3$OH~sources in M0.10$-$0.08~(Figure \ref{morphfig}, second row, last panel). The compact CH$_3$OH~sources in M0.10$-$0.08~are located within a square arcminute region, and closely follow the bulk of the NH$_{3}$~and HC$_3$N~emission at velocities from 40$-$60 km s$^{-1}$~(Figure \ref{velocity-panels}, bottom panel).
{Figure \ref{all maser} shows the spatial distribution (left) and the velocity distribution (right) of the bright, above 0.6 Jy beam$^{-1}$~(12$\sigma$), 36.2 GHz CH$_3$OH~emission. The CH$_3$OH~emission is not uniformly distributed throughout {M0.10$-$0.08}. Most of the CH$_3$OH~emission appears to be distributed throughout the wedge-like structure (discussed in Section \ref{g10morph}). We do not detect any compact emission, above 12$\sigma$, from the filamentary extension (e.g., see Figures \ref{morphfig} and \ref{all maser}).}
The velocity of the CH$_3$OH~emission in M0.10$-$0.08 ranges from \til35 to 65 km s$^{-1}$. This corresponds to the velocity range of the bright NH$_{3}$~emission (Figure \ref{spectrum}).
This velocity range indicates that most of the CH$_3$OH~maser emission is associated with the 37.6 and 51.5 km s$^{-1}$~velocity components.
{In order to characterize the nature of the point-like emission and to evaluate whether these detections represent maser emission, we used the source detection algorithm, \textit{Clumpfind}~\citep{Williams94}, to distinguish the emission both spectrally and spatially.~\textit{Clumpfind}~identifies local maxima and uses saddle-points in position and velocity space around the local maxima to determine the boundaries of the sources. \textit{Clumpfind}~then produces a list {of} clumps with uniform criteria, which was used to construct a catalog \citep[for more details on maser identification using the \textit{Clumpfind}~algorithm, see the description of this technique in][{Section 5.1}]{mills15}.
Sixty-four of the compact CH$_3$OH~sources have brightness temperatures over 400 K (i.e., `CH$_3$OH masers').
The properties of the 64 detected `CH$_3$OH masers'
identified with \textit{Clumpfind}~are listed in Table \ref{MaserTable}. The spectral profiles of these~masers are shown in Figure \ref{spectra1}. The 15 brightest masers in M0.10$-$0.08~are labeled in Figure \ref{all maser} (left). }
With \textit{Clumpfind}~we also detect 31 compact CH$_3$OH sources that have brightness temperatures between 100$-$400 K, {which we regard as `maser candidates.'
These brightness temperatures are similar to observed gas temperatures in CMZ clouds \citep[50$-$400 K;][]{mills13, Krieger17}. We therefore assume that any emission above the upper 400 K limit is likely non-thermal (i.e., maser emission; sources in Table \ref{MaserTable}) and that any emission below 100 K is most likely thermal. Accordingly, we classify CH$_3$OH\ point sources with brightness temperatures between 100 and 400 K as `maser candidates'.}
The properties of all 31 maser candidates are listed in Table \ref{CanTable}, with their spectra shown in Figure \ref{spectra2}. These maser candidates are also located within the same square arcminute region as the detected CH$_3$OH~masers and have a similar velocity range (41$-$63 km s$^{-1}$).
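The classification scheme above, together with the standard Rayleigh-Jeans conversion from flux density per beam to brightness temperature, can be sketched as follows. The beam parameters in the example call are illustrative placeholders, not the actual synthesized beam of these observations:

```python
def brightness_temperature_k(s_mjy_beam, freq_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature for a Gaussian beam:
    T_B [K] ~ 1.222e3 * S[mJy/beam] / (nu[GHz]^2 * theta_maj * theta_min ["])."""
    return 1.222e3 * s_mjy_beam / (freq_ghz ** 2 * bmaj_arcsec * bmin_arcsec)

def classify_ch3oh_source(t_b_k):
    """Thresholds used in the text: >400 K maser, 100-400 K maser
    candidate, <100 K consistent with thermal CMZ gas temperatures."""
    if t_b_k > 400.0:
        return "maser"
    if t_b_k >= 100.0:
        return "maser candidate"
    return "thermal"

# Hypothetical example: a 1 Jy/beam source at 36.2 GHz in a 1" beam
t_b = brightness_temperature_k(1000.0, 36.2, 1.0, 1.0)
label = classify_ch3oh_source(t_b)
```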
\section{Discussion}
\label{dis}
In the following section we present a discussion and interpretation of our kinematics results on the M0.10$-$0.08~cloud (Section \ref{results}). Here, we attempt to explain the complicated and multiple-component velocity structure detected in the vicinity of M0.10$-$0.08~(Sections \ref{stream} \& \ref{g10-g11}).
\subsection{Locations and Origins of M0.10$-$0.08~and M0.11$-$0.11}
\label{stream}
The bulk emission in this region of the Galactic center has a velocity around 51.5 km s$^{-1}$~(Section \ref{3comp}). The morphology and gas kinematics of this velocity component {are consistent with those found in previous studies of M0.10$-$0.08~\citep[e.g.,][]{tsuboi11}}.
M0.10$-$0.08~appears to be part of a larger structure of molecular gas that has a velocity of around 50 km s$^{-1}$~\citep{Fukui77,tsuboi11}. \cite{tsuboi11} detected H$^{13}$CO$^+$ emission around 50 km s$^{-1}$~{extending} from $+$0.15$^{\circ}$~to $-$0.05$^{\circ}$~($d$\til27 pc; see their Figure 10). Within this extended diffuse structure {they detect} three concentrated regions of H$^{13}$CO$^+$ emission that coincide with the M0.10$-$0.08, M0.07$-$0.07, and {50 km s$^{-1}$~cloud (M$-$0.02$-$0.07)} molecular clouds (see Figure \ref{introfig} for locations of these clouds). The presence of all three clouds within this larger diffuse structure could be evidence that all three clouds are co-located {within a single lower-density envelope} that has a velocity of around 50 km s$^{-1}$.
This large diffuse gas structure, {observed in H$^{13}$CO$^+$ by \cite{tsuboi11},} may be Orbital Stream 1 in the \cite{Kru15} orbital model. The {50 km s$^{-1}$~cloud} is argued to be associated with Orbital Stream 1 \citep{Kru15}. Therefore, if M0.10$-$0.08~and M$-$0.02$-$0.07~are associated within the same gas stream, and the {50 km s$^{-1}$~cloud} is located on Orbital Stream 1, then by extension {we can infer that} M0.10$-$0.08~{is} also located on Orbital Stream 1. We note, however, that in the \cite{Kru15} orbital stream model, gas at the closest angular location to the M0.10$-$0.08~cloud ($l=$0\fdg09, $b=-$0\fdg07; as the position of M0.10$-$0.08~is slightly offset from Orbital Stream 1 by \til1\arcmin) is {predicted} to have a line-of-sight velocity of {60 to 65} km s$^{-1}$. Although this suggested line-of-sight velocity is slightly higher than the central velocity of M0.10$-$0.08~that we measured in Figure \ref{am-spectra}, we do detect some gas at velocities of around 60 to 65 km s$^{-1}$~(see Figures \ref{spectrum} \& \ref{all maser}, right).
The \scloud~expanding shell~is also hypothesized to be located on Orbital Stream 1 \citep[][{see our Figure \ref{introfig} for spatial location of shell relative to other GC clouds}]{my17}. In \cite{my17} we reported a systemic velocity of \til53 km s$^{-1}$~for the M0.20$-$0.033~expanding shell and advocate that the shell is also located on Orbital Stream 1, based on position-velocity analysis. Indeed, the adjacent locations of M0.10$-$0.08~and the \scloud~expanding shell~(see Figure \ref{introfig}) and their similar velocities are consistent with both clouds being
on the same orbital stream. Additionally, based on the orbital direction of stream 1, the M0.10$-$0.08~cloud would be located `upstream' from the \scloud~expanding shell. Based on the orbital solution in \cite{Kru15}, M0.10$-$0.08~{would} orbit into the current location of the \scloud~expanding shell~in \til0.05 Myr.
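A travel time of this order follows from simple kinematics. The separation and orbital speed used below are illustrative round numbers (not measurements from this work), chosen only to show the unit conversion:

```python
# t[Myr] = 0.978 * d[pc] / v[km/s], since 1 km/s ~ 1.023 pc/Myr.
PC_KMS_TO_MYR = 0.9778

def travel_time_myr(distance_pc, speed_km_s):
    """Time for material moving at speed_km_s to traverse distance_pc."""
    return PC_KMS_TO_MYR * distance_pc / speed_km_s

# Assumed projected separation ~9 pc and orbital speed ~180 km/s
t = travel_time_myr(9.2, 180.0)  # of order 0.05 Myr for these inputs
```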
The 10.6 km s$^{-1}$~component (Figure \ref{am-spectra}; Section \ref{g10kin}) covers a velocity range of \til0$-$20 km s$^{-1}$~based on analysis of the NH$_{3}$~(5,5) emission and additional analysis of the HC$_3$N~lines. This velocity range is similar to observed velocities of the adjacent M0.11$-$0.11~molecular cloud \citep[\til10$-$30 km s$^{-1}$;][see their Figure 2]{Jones12,clavel13}, {and nearby gas velocities associated with the \cite{Kru15} orbital stream 3 (\til0--5 km s$^{-1}$)}. However, there are discrepancies in the literature concerning the velocity of the M0.11$-$0.11~cloud. \cite{tsuboi97}, \cite{Handa06}, {and \cite{tsuboi11}} report a slightly higher velocity range of 15$-$45 km s$^{-1}$. These velocity values of the M0.11$-$0.11~cloud in \cite{tsuboi97}, \cite{Handa06}, {and \cite{tsuboi11}} are closer to those of the intermediate velocity component in our observations (37.6 km s$^{-1}$; see Section \ref{3comp}). In our Figure \ref{am-spectra} this velocity component appears as a lower velocity `wing' of the main 51.5 km s$^{-1}$~component, rather than a distinct cloud. Further, the morphology of the 37.6 km s$^{-1}$~component in Figure \ref{velocity-panels} appears to overlap with the 51.5 km s$^{-1}$~component with the exception of the filamentary extension. Therefore, based on the previous work of \cite{Jones12} and \cite{clavel13} and our analysis above, we interpret the 10.6 km s$^{-1}$~component as extended emission associated with M0.11$-$0.11.
\subsubsection{Similar X-ray~fluorescence detected in both M0.10$-$0.08~and M0.11$-$0.11}
\label{xray-section}
Observed X-ray~fluorescence can be beneficial in determining radial distances, which, when combined with their projected separation from the Galactic center, can be used to infer inter-cloud distances \citep[e.g.,][]{clavel13, terrier18}. In our Galactic center, fluorescent iron emission at 6.4 keV is created in molecular clouds by K-shell photo-ionization and Compton scattering of neutral iron atoms from a previous, gigantic {X-ray} flare, presumably from Sgr A$^*$.
By observing the time delay of the detected X-ray~reflection {across multiple molecular clouds}, we can constrain the locations of clouds from the geometrical path-length between the clouds and Sgr A$^*$~and determine their location along our line-of-sight \citep[e.g.,][]{CS02}. Further, the time delay between the detected reflections provides a measurement of the total path traveled by the photons, assuming they were emitted simultaneously. This path length then gives an indication of the relative locations of the clouds.
Molecular clouds that show similar illumination at a similar timeframe are located along the same {3-dimensional} `parabola', assuming the illumination feature is produced by the same, single flaring event \citep[e.g.,][]{SC98}. Along this {3-dimensional} parabola, the path length of the propagating light signal (from Sgr A$^*$~to the cloud and then to the Earth) is the same at each location and therefore the time delay of the propagating signal is the same as well.
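This path-length argument can be written down explicitly. The following sketch assumes the standard single-flare echo geometry, with the flare at Sgr A$^*$~and the line-of-sight coordinate $z$ measured toward the observer:

```python
def line_of_sight_offset_pc(rho_pc, delay_yr):
    """Line-of-sight position z (pc, positive toward Earth) of a cloud on
    the single-flare echo paraboloid around Sgr A*.

    The extra light path relative to the direct signal is
        sqrt(rho^2 + z^2) - z = c * dt,
    which inverts to
        z = (rho^2 - (c*dt)^2) / (2 * c * dt).
    """
    C_PC_PER_YR = 0.30660  # speed of light: 1 light year ~ 0.3066 pc
    cdt = C_PC_PER_YR * delay_yr
    return (rho_pc ** 2 - cdt ** 2) / (2.0 * cdt)
```

A cloud with $z = 0$ (in the plane of the sky through Sgr A$^*$) is seen with delay $\rho/c$; at fixed projected distance, longer delays place the cloud progressively farther behind that plane.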
\cite{clavel13} detected a similar X-ray~fluorescence signature in both M0.10$-$0.08~and M0.11$-$0.11~(sources Br2 and G0.11$-$011 in their study).\footnote{The data presented in \cite{clavel13} used Chandra observations from 1999 to 2011 (see their Table 1 for observational information). These X-ray~observations had a resolution of 4\arcsec, and are therefore fairly comparable to the observations presented in this paper, Table \ref{Images}.}
This detection of similar X-ray~fluorescence illumination in M0.10$-$0.08~and M0.11$-$0.11~indicates the two clouds are located along the same {3-dimensional} parabola, assuming the fluorescence in both clouds is from the same event. Further, because the two clouds are aligned along the same line of sight, and have a similar X-ray~fluorescence {light curve}, \cite{clavel13} argue the two clouds must be at the same physical position, even with their differences in velocity. If the two clouds are almost at the same physical location, then we would expect to see evidence of this interaction.
\begin{figure}
\includegraphics[scale=0.32]{Figure7.png}
\caption{{Position-velocity distribution across the spatial slice shown in Figure \ref{velocity-panels}, for NH$_{3}$~(3,3) (\textit{top}) and HC$_3$N~(3$-$2) (\textit{bottom}). Annotations in the top panel show the gas associated with M0.11$-$0.11~and M0.10$-$0.08~(see Section \ref{stream}). The black dashed line shows the magnitude and orientation of the \til10 km s$^{-1}$ pc$^{-1}$~velocity gradient described in Section \ref{g10kin}. The blue regions in the top panel show the general locations of the hyperfine satellite lines ($\pm$20$-$30 km s$^{-1}$~from the main component) of M0.10$-$0.08. The red shaded region highlights the `bridge'-like feature discussed in Section \ref{g10-g11}. }}
\label{g10pv}
\end{figure}
\subsection{Proposed Physical Interaction between M0.10$-$0.08~and M0.11$-$0.11}
\label{g10-g11}
Previous studies have hinted at a possible connection between M0.10$-$0.08~and M0.11$-$0.11~\citep{Handa06, clavel13}. However, because of the large velocity difference between the two clouds {along this line-of-sight} ($\Delta$v$\sim$ 30 km s$^{-1}$), other investigators have suggested {these components} are physically separated \citep{ponti10,Kru15}. The high-resolution data presented in this paper can provide insight {into} this discrepancy in the literature. In this section we perform a detailed position-velocity~analysis on this region to investigate a possible connection between M0.10$-$0.08~and M0.11$-$0.11.
Figure \ref{g10pv} shows the position-velocity~distribution of NH$_{3}$~(3,3) {(top) and HC$_3$N~(3$-$2) (bottom)} across the filamentary extension (black box in Figure \ref{velocity-panels}). This slice was selected to maximize the relatively faint signal of the M0.11$-$0.11~cloud (top panel in Figure \ref{velocity-panels}) and illustrate a possible connection to M0.10$-$0.08.
The slice contains emission in all three velocity components (Table \ref{55table} and Figure \ref{velocity-panels}). Emission associated with the M0.10$-$0.08~cloud is clearly the brightest component in this region (50$-$60 km s$^{-1}$), with possible hyperfine lines above and below the main emission region (blue shaded region in Figure \ref{g10pv}, top). These hyperfine lines have a fixed known separation from the main component of $\pm$21.1 km s$^{-1}$~and $\pm$29.1 km s$^{-1}$~for the NH$_{3}$~(3,3) transition \citep[e.g.,][]{Krieger17}. The NH$_{3}$~emission at \til80 km s$^{-1}$~is not observed in HC$_3$N~{(3$-$2) (Figure \ref{g10pv}, bottom)}, suggesting it is hyperfine line emission.
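The quoted hyperfine offsets make this check explicit: for a 51.5 km s$^{-1}$~main component, the NH$_{3}$~(3,3) satellites land at the velocities computed in this simple arithmetic sketch:

```python
# NH3 (3,3) hyperfine satellites for the 51.5 km/s main component, using
# the fixed offsets quoted in the text (+/-21.1 and +/-29.1 km/s).
V_MAIN = 51.5
OFFSETS = (-29.1, -21.1, 21.1, 29.1)
satellites = [round(V_MAIN + dv, 1) for dv in OFFSETS]
# The +29.1 km/s satellite lands near 80.6 km/s, consistent with the
# ~80 km/s NH3 emission that is absent in HC3N (3-2).
```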
Across this slice there is clear, extended emission in M0.11$-$0.11~(10.6 km s$^{-1}$~{component}; Figure \ref{g10pv}, top). The emission in M0.11$-$0.11~is relatively faint compared to M0.10$-$0.08~and spans a velocity range from 5 to 25 km s$^{-1}$.
The 37.6 km s$^{-1}$~component appears as a lower velocity wing to the 51.5 km s$^{-1}$~component in the integrated spectrum (Figure \ref{am-spectra}). When isolating velocity channels associated with each component we see that some of the gas in the 37.6 km s$^{-1}$~component is spatially offset from the 51.5 km s$^{-1}$~component (Figure \ref{velocity-panels}).
We also observe this offset {in} position-velocity~space, where some of the gas in the 37.6 km s$^{-1}$~component appears to be spatially offset from the bulk of the 51.5 km s$^{-1}$~component (Figure \ref{g10pv}, top). Additionally, the 37.6 km s$^{-1}$~component is mainly associated with emission along the velocity gradient (see Section \ref{g10kin}) and appears to be a distinct feature in position-velocity~space. Further, at the southern edge of the velocity gradient there is a bridge feature with emission between velocities 20$-$40 km s$^{-1}$~(red shaded region in Figure \ref{g10pv}, top). Including both the bridge feature and the velocity gradient results in continuous emission between 20$-$50 km s$^{-1}$, thereby showing that M0.10$-$0.08~and M0.11$-$0.11~have an apparent connection in position-velocity~space. Analysis of the HC$_3$N~(3$-$2) line shows a similar velocity gradient and bridge-like features across the slice (Figure \ref{g10pv}, bottom). However, the HC$_3$N~emission is \til5$-$10$\times$ fainter than the NH$_{3}$~(3,3) line, so these features appear loosely connected and barely above the noise level.
Recent studies simulating Galactic cloud-cloud interactions predict broad `bridge'-like features in position-velocity~space, where the two clouds are physically connected \citep[e.g.,][]{Takahira14, Haworth15a, Torii17}.\footnote{This labeling of the `bridge'-like feature in position-velocity~space, defined in \cite{Haworth15a}, should not be confused with the X-ray~definition of the Bridge, labeled as Br1 and Br2 \citep{ponti10, clavel13}, which spatially connects M$-$0.02$-$0.07~to M0.10$-$0.08.} In these studies, there is intermediate-velocity gas between the two main cloud components, which produces the `bridge' in position-velocity space. {Such} bridge features have since been detected in numerous molecular clouds throughout the Galaxy \citep[e.g.,][]{Fukui16, Torii17}.
Large-scale observations (\til45\arcsec~resolution) of the intermediate velocity component (15$-$45 km s$^{-1}$) from
\cite{tsuboi97} and \cite{Handa06}
show the gas is extended and dense. \cite{tsuboi97} extracted a position-velocity slice at $b$=$-$6\arcmin~(4\arcmin$\leq$$l$$\leq$14\arcmin) from their CS data cubes and observed two vertical features in velocity space that spanned 15 km s$^{-1}$~to 40 km s$^{-1}$, and were separated by \til2\arcmin~(see their Figure 2). \cite{Handa06} also saw similar vertical features in velocity space in their H$^{13}$CO$^+$ and SiO data cubes.
\cite{tsuboi97} attribute the vertical velocity features to be an expanding shell centered on a lower emission region near the centroid of the cloud, where the bright vertical features are the limb brightened edges of the shell. However, these vertical features could alternatively be signatures of the bridge feature, discussed above, on larger scales.
At the low spatial resolution of the \cite{Handa06} and \cite{tsuboi97} observations (\til45\arcsec), the detailed substructure we observed at $-$45\arcsec\ to 0\arcsec~in Figure \ref{g10pv} (top)
would blend into a single pixel. Therefore the high-resolution gradient and bridge features shown in Figure \ref{g10pv} would appear as broad, continuous emission at 45\arcsec~resolution.
Thus, based on the close physical proximity of M0.10$-$0.08 and M0.11$-$0.11~from X-ray fluorescence data, along with continuous emission connecting them in position-velocity space via a `bridge' feature, we argue the two clouds {are} physically interacting.
{Furthermore, this would imply that M0.11$-$0.11~is located on the same stream as M0.10$-$0.08\ and not on a separate stream as indicated by the \cite{Kru15} orbital model.}
\subsection{Gas Kinematics in CMZ Clouds}
{Disentangling the molecular gas kinematics in CMZ clouds can be complex. As we have shown in this paper, the multiple velocity components toward the M0.10$-$0.08~cloud can make isolating the individual components challenging.
For example, we made extensive efforts to fit the three components, with similar Gaussian fit parameters (V$_c$ \& $\sigma$), across multiple NH$_{3}$~transitions. However, we were unable to obtain converging values that satisfied the multiple transitions. The lower J transitions had brighter hyperfine structure for each component, resulting in over nine blended profiles within the \til0$-$70 km s$^{-1}$~velocity range, which could be fit with numerous solutions. At the higher J transitions, the 10.6 km s$^{-1}$~component was not bright enough to fit the spectrum.}
{The 37.6 km s$^{-1}$~component was also especially challenging to fit. Since this component appears as a low-velocity wing to the 51.5 km s$^{-1}$~component, there were numerous solutions to the profile that varied depending on the initial guesses and range limits in the \textit{pyspeckit} program, with central velocity values that ranged from \til30$-$45 km s$^{-1}$. However, the presence of an intermediate velocity component between \til30$-$45 km s$^{-1}$~was clear in all of our NH$_{3}$~and HC$_3$N~transitions (illustrated by the dashed green residuals in Figure \ref{am-spectra}). The fit solution we present in this paper (see Table \ref{55table}) comprises the best-fit parameters, with errors chosen to accurately reflect the uncertainty in the 37.6 km s$^{-1}$~component. However, we note that determining a simple kinematic solution to complex kinematics in the CMZ can be problematic and requires multiple methods to disentangle the velocity components (i.e., spectral fitting, position-velocity analysis, moment images of components, etc.). }
{Furthermore, the complex kinematics in the CMZ can make understanding the gas flows and unusual orbits more challenging. We have attempted to disentangle the kinematics towards this complicated region using high spatial and spectral resolution observations. While we were able to identify the three velocity components towards this region using a variety of methods, providing a simple solution that satisfies the kinematics observed in this dataset is more difficult. Future observations of complex kinematic regions should use a variety of methods to isolate the velocity components. If possible, future observations should also use absorption observations toward radio continuum regions to constrain the line-of-sight arrangement, similar to the approach we used in \cite{my17}. }
{Despite the complexity of disentangling the kinematics of multiple components, the analysis is necessary to constrain models of the large-scale gas structures. Models for the 3-dimensional orientations of these gas structures can be influenced by assumptions made in complex kinematic regions. Therefore, applying the solutions found in complex kinematic regions to the models may help resolve some of the discrepancies in future orbital solutions. }
\section{Summary}
\label{conclusion}
We present high-resolution (\til3\arcsec) VLA radio observations of the {compact (3 pc)} M0.10$-$0.08 molecular cloud, {finding that it is composed of} multiple compact molecular clumps (5+ clumps; D$_{clumps}$ $\leq$ 0.4 pc; Section \ref{mol-res}). We detect 15 molecular transitions in M0.10$-$0.08~(Table \ref{Images}); including eight transitions of NH$_{3}$, two HC$_3$N~transitions, OCS, CH$_3$CN, HC$_5$N, and abundant 36.2 GHz CH$_3$OH~masers (see Section \ref{masertext} and Appendix \ref{app} for details on the detected masers).
The main focus of this paper is on the molecular gas kinematics toward M0.10$-$0.08. We present the following results from this study:
\textbf{1) \underline{Three velocity components detected toward} \underline{M0.10$-$0.08}:} {The averaged} NH$_{3}$~(5,5) spectrum {reveals} three velocity components {centered at} 10.6, 37.6, and 51.5 km s$^{-1}$~(see Section \ref{3comp}, Figures \ref{am-spectra} \& \ref{velocity-panels}, and Table \ref{55table}).
{Initially, the NH$_{3}$~(5,5) spectrum was fit with two Gaussian components at \til10$-$15 km s$^{-1}$~and \til50$-$55 km s$^{-1}$. However, the residuals of this fit showed excess emission around 20$-$40 km s$^{-1}$, which we interpreted to be a third velocity component (see green dashed line in Figure \ref{am-spectra}).}
In our high-resolution data the {51.5 km s$^{-1}$~component is the brightest emission in this region.}
The 10.6 km s$^{-1}$~component is relatively faint compared to the other two components in the field. We have also analyzed the gas morphology in each component by isolating channels associated with each component (Figure \ref{velocity-panels}). The morphology in all three components is unique.
\textbf{2) \underline{Relationship between M0.10$-$0.08~and Orb-} \underline{ital Stream 1}:}
M0.10$-$0.08~is part of a larger structure of gas that contains the M$-$0.02$-$0.07~and M0.07$-$0.07~molecular clouds and has a velocity of around 50 km s$^{-1}$~\citep{tsuboi11}.
The central velocity of M0.10$-$0.08 (51.5 km s$^{-1}$; Section \ref{stream}) indicates that M0.10$-$0.08 is likely located on Orbital Stream 1 in the \cite{Kru15} model.
\textbf{3)~\underline{Resolving the Kinematics of M0.11$-$0.11}:} Discrepant reports regarding the central velocity of M0.11$-$0.11 range from 10 to 45 km s$^{-1}$. In our high-resolution data, we detect two components in this velocity range: 10.6 and 37.6 km s$^{-1}$. We argue that gas in the 10.6 km s$^{-1}$~component is associated with M0.11$-$0.11~as the morphology is distinct from that of the M0.10$-$0.08~cloud (Figure \ref{velocity-panels}). Additionally, position-velocity~analysis towards this region of the CMZ shows extended emission ($>$70\arcsec; $>$2.7 pc) from 0 to 20 km s$^{-1}$~(Figure \ref{g10pv}), which we suggest is associated with M0.11$-$0.11.
\textbf{4)~\underline{Physical Interaction Between M0.10$-$0.08} \underline{and M0.11$-$0.11}:} Past X-ray fluorescence observations by \cite{clavel13} show similar time delay signatures from both M0.10$-$0.08~and M0.11$-$0.11~and argue the two clouds are in the same physical position of the Galactic center.
The intermediate morphology of the 37.6 km s$^{-1}$~velocity component could be indicative of a physical interaction between M0.10$-$0.08~and M0.11$-$0.11. Indeed, all three velocity components appear to be connected in position-velocity~space (Figure \ref{g10pv}). The intermediate velocity component, which has similar features to both M0.10$-$0.08~and M0.11$-$0.11, could be gas from the region where these two clouds are physically connected.
\section{Acknowledgements}
This material is based upon work supported by grants from the National Science Foundation (NSF; no. AST-0907934, AST-15243000, AST-1614782, AST-2008101, CAREER-2142300). Support for this work was also provided by the NSF through the Grote Reber Fellowship Program administered by Associated Universities, Inc./National Radio Astronomy Observatory.
NOB would like to thank Dr. Farhad Yusef-Zadeh {(Northwestern University)}, Dr. Robert Mutel {(University of Iowa)}, Dr. Steven Spangler {(University of Iowa)}, and Dr. Kenneth Gayley {(University of Iowa)} for their helpful critique of this thesis analysis. The authors would also like to thank Dr. Allison Costa (University of Virginia), Dr. Monica Sanchez (University of New Mexico), and Dr. Diederik Kruijssen (Max-Planck Institute) for their helpful insight on this work.
The authors would like to thank Dr. Elisabeth Mills {(University of Kansas)} for her helpful insight on the spectral line fitting and analysis of the results presented in this paper. The authors would also like to thank Dr. John Bally for his inspiration in creating the 3-color image shown in Figure \ref{introfig}.
{The authors would also like to thank the anonymous referee for their helpful insight on this manuscript.}
\software{CASA \citep{2011ascl.soft07013I}; \textit{Clumpfind}~\citep{2011ascl.soft07014W}; $pyspeckit$ \citep{2011ascl.soft09001G,ginsburg22}}
The B[e] phenomenon is associated with stars at different evolutionary stages, going from
the pre-main sequence to the planetary nebula stage. This phenomenon is
characterized by the simultaneous presence of low-excitation forbidden line emission and
strong infrared excess in the spectra of early-type stars.
The group of B[e] stars includes
high- and low-mass evolved stars, intermediate-mass pre-main
sequence stars and symbiotic objects.
In more than 50\% of the confirmed B[e] stars the evolutionary stage is still unknown.
These objects are generally called unclassified B[e] stars (e.g. Miroshnichenko 2007;
Borges Fernandes 2010).
This lack of a classification is mostly caused by the limited
knowledge regarding
their physical parameters, in particular the distance,
and the geometry of their circumstellar matter.
Recently, Miroshnichenko (2007) has noted that most of the unclassified B[e]
stars have unique observational properties that distinguish them from
the rest of the B[e] stars, and classified them within a new group: the FS CMa
stars. Miroshnichenko (2007) has proposed that this group of stars could be binary
systems that are currently undergoing or have recently undergone a phase of
rapid mass exchange associated with strong mass loss stored in a
circumbinary envelope. This scenario explains the higher IR excesses due to
circumstellar dust
despite their lower mass-loss rates compared to sgB[e] stars, and the fact that
FS CMa stars are not found in star forming regions.
As noted before, the determination of the geometry of the surrounding matter
via high-angular resolution observations
could provide valuable information on the nature of the unclassified
B[e] stars. We
have started a program to search for FS CMa stars with detectable radio continuum emission
in unpublished archive data from the Very
Large Array (VLA)
of the NRAO\footnote{The National Radio
Astronomy Observatory is operated by Associated Universities
Inc. under cooperative agreement with the National Science Foundation.}.
The sources detected will
be observed in the future to obtain high quality images with subarcsecond angular resolution
using the ultrasensitive Expanded Very Large Array (EVLA) at centimeter
wavelengths and the Plateau de Bure interferometer (PdBI) at millimeter wavelengths.
\begin{table*}[htbp]
\footnotesize
\setlength{\tabnotewidth}{1.0\columnwidth}
\tablecols{9}
\small
\caption{FS CMa Stars with Good Quality VLA Archive Observations}
\begin{center}
\begin{tabular}{lccccccc}\hline\hline
&\multicolumn{2}{c}{Position$^a$} & Flux & Wavelength & VLA & & Epoch of \\
\cline{2-3}
Star & $\alpha$(J2000) & $\delta$(J2000) & Density (mJy) & (cm) & Conf. &
Project & Observation \\
\hline
FS CMa & 06 28 17.39 & $-$13 03 10.9 & 4.2$\pm$0.4 & 1.3 & DnC & AM570 & 1997 Sep 26 \\
MWC 819 & 06 44 37.67 & +01 19 32.5 & $\leq$0.31$^b$ & 6.0 & B & AP116 & 1986 Jul 29 \\
MWC 922 & 18 21 16.06 & $-$13 01 25.7 & 10.8$\pm$0.4 & 3.6 & B & AL329 & 1994 Jul 09 \\
AS 381 & 20 06 39.86 & +33 14 30.0 & 3.3$\pm$0.7 & 20.0 & C & AW271 & 1990 Nov 14 \\
V669 Cep & 22 26 38.71 & +61 13 31.6 & $\leq$0.57$^b$ & 20.0 & A & AM590 & 1998 Apr 04 \\
\hline\hline
\tabnotetext{a}{The position of MWC 819 is from Cutri et al. (2003) and the position of V669 Cep
is from H{\o}g et al. (2000). The positions of the other three stars are
from the VLA images presented here.}
\tabnotetext{b}{Three-sigma upper limit.}
\label{tab:1}
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.5, angle=0]{FSCMAK.PS}
\caption{VLA contour image of the 1.3-cm continuum emission toward
FS CMa. Contours are -3, 3, 4, 5, 6, 8, and 10
times 0.36 mJy, the rms noise of the image.
The synthesized beam, shown in the bottom left corner,
has half power full width dimensions of
$2\rlap.{''}92 \times 1\rlap.{''}48$,
with the major axis at a position angle of $+86^\circ$.
The cross marks the optical position of FS CMa from
Perryman et al. (1997).
}
\label{fig1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5, angle=0]{spectrumfscma.eps}
\caption{Radio spectrum of the star FS CMa. The five lower
frequency data points are reported here while the two
higher frequency data points are from Di Francesco et al. (2008).
The dashed line marks the least-squares best fit to the data, given by
$(S_\nu / {\rm mJy}) = (0.141\pm 0.020)\, (\nu / {\rm GHz})^{1.09 \pm 0.03}$.
}
\label{fig2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5, angle=0]{MWC922X.PS}
\caption{VLA contour image of the 3.6-cm continuum emission toward
MWC 922. Contours are -3, 3, 5, 10, 20, 40, 100, 200, and 400
times 24 $\mu$Jy, the rms noise of the image.
The synthesized beam, shown in the bottom left corner,
has half power full width dimensions of
$0\rlap.{''}95 \times 0\rlap.{''}66$,
with the major axis at a position angle of $-3^\circ$.
The cross marks the position of MWC 922 derived
from the average of the 2MASS H, J and K images.
}
\label{fig3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.7, angle=0]{mwc922.eps}
\caption{Real (filled squares) and imaginary (empty squares) components (given
in mJy) of the
emission from MWC 922 at 8.46 GHz as a function of baseline (given in wavelengths).
The real component decreases with increasing baseline, indicating that the
source is slightly resolved in these observations.
The imaginary component is consistent with zero, indicating that
the source is symmetric about the phase center (the
origin of the visibility plane) and has no significant structure on these spatial
scales.
}
\label{fig4}
\end{figure*}
\begin{table}[htbp]
\small
\setlength{\tabnotewidth}{0.8\columnwidth}
\tablecols{3}
\caption{Flux densities of 0607-085 and FS CMa for 1997 September 26}
\begin{center}
\begin{tabular}{ccc}\hline\hline
Frequency & Flux Density & Flux Density \\
(GHz) & 0607-085(Jy)$^a$ & FS CMa(mJy) \\
\hline
4.86 & 2.424$\pm$0.004 & 0.87$\pm$0.05 \\
8.46 & 2.240$\pm$0.008 & 1.46$\pm$0.03 \\
14.94 & 2.051$\pm$0.024 & 2.71$\pm$0.12 \\
22.46 & 1.776$\pm$0.036 & 4.21$\pm$0.36 \\
43.34 & 2.551$\pm$0.210 & 8.60$\pm$1.43 \\
\hline\hline
\tabnotetext{a}{The phase calibrator for all observations was 0607-085.}
\label{tab2}
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\includegraphics[scale=0.5, angle=0]{AS381L.PS}
\caption{VLA contour image of the 20-cm continuum emission toward
AS 381. Contours are -3, 3, 4, 5, 6, 8, and 10
times 0.32 mJy, the rms noise of the image.
The synthesized beam, shown in the bottom left corner,
has half power full width dimensions of
$14\rlap.{''}1 \times 11\rlap.{''}5$,
with the major axis at a position angle of $+19^\circ$.
The cross marks the position of AS 381 derived
from the average of the 2MASS H, J and K images.
}
\label{fig5}
\end{figure*}
\section{Data Reduction}
Of the $\sim$40 FS CMa stars and candidate stars reported
by Miroshnichenko (2007), only seven have good quality VLA observations
(that is, observations that could
provide an rms noise of order a few tenths of a mJy and thus a possible detection
at the mJy level). Of these seven stars,
CI Cam and MWC 300 have been previously reported as radio sources.
CI Cam is a high-mass X-ray binary that has been observed extensively with the
VLA, in particular after its 1998 outburst (e.g. Mioduszewski \& Rupen 2004).
MWC 300 has been observed on one occasion (1990 Feb 11) by Skinner et al. (1993), who
detected it with a flux density of 0.49$\pm$0.03 mJy at 3.6 cm.
The remaining five stars are listed in Table 1, with the
parameters of their archive VLA observations.
The VLA archive data were edited and calibrated using the
NRAO software package Astronomical Image
Processing System (AIPS).
\section{Discussion on Individual Sources}
For the stars MWC 819 and V669 Cep only upper limits were obtained. However,
the other three stars, FS CMa, MWC 922, and AS 381, have associated radio continuum
emission, and we discuss them in what follows.
\subsection{FS CMa}
This star is the prototype of the class and we find that it is associated with a radio
continuum source (see Fig. 1). This source was observed at 6.0, 3.6, 2.0, 1.3
and 0.7 cm in the same observing session (1997 September 26)
and the observed flux densities are given in Table 2.
Di Francesco et al. (2008) report flux densities for FS CMa
of 0.070$\pm$0.014 and 0.180$\pm$0.036 Jy at
850 and 450 $\mu$m, respectively.
These seven data points allow the analysis of its spectrum as shown in Figure 2.
Remarkably, the spectrum is well described over two decades of frequency
by a power-law of the form
$$\biggl[{{S_\nu} \over {\rm mJy}} \biggr] = (0.141\pm 0.020) \biggl[{{\nu}
\over {\rm GHz}} \biggr]^{1.09 \pm 0.03}.$$
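This fit can be reproduced with a simple unweighted least-squares fit in log-log space. The following minimal sketch uses the VLA flux densities from Table 2 together with the sub-mm points of Di Francesco et al. (2008), with 850 and 450 $\mu$m converted to GHz; being unweighted, it recovers coefficients close to, but not identical with, the weighted values quoted above.

```python
from math import log10

# FS CMa flux densities: VLA points (Table 2) and JCMT/SCUBA sub-mm points
# (Di Francesco et al. 2008; 850 and 450 micron converted to frequency).
nu = [4.86, 8.46, 14.94, 22.46, 43.34, 352.7, 666.2]   # GHz
s = [0.87, 1.46, 2.71, 4.21, 8.60, 70.0, 180.0]        # mJy

# Unweighted linear least squares in log-log space:
# log S = log A + alpha * log nu.
x = [log10(v) for v in nu]
y = [log10(v) for v in s]
mx, my = sum(x) / len(x), sum(y) / len(y)
alpha = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
amp = 10 ** (my - alpha * mx)
print(f"S_nu ~ {amp:.3f} (nu/GHz)^{alpha:.2f} mJy")
```

The unweighted slope comes out near 1.07, consistent within the quoted uncertainty of the weighted fit.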
This spectral index is consistent with partially optically-thick free-free emission.
It is significantly steeper than the spectrum of ionized winds
expanding at constant velocity, which
behaves as $S_\nu \propto \nu^{0.6}$ (e.g. Panagia \& Felli 1975).
The Herbig B[e] star MWC 297
(Cidale et al. 2000) also shows a spectral index of $\sim$1 from the
radio to the sub-mm (Sandell et al. 2011).
A spectral index of the order of 1 is also frequently found in
hypercompact H II regions (Ignace \& Churchwell 2004). This departure from
the expected value of 0.6 most probably indicates that the outflow
has velocity, temperature, or ionization fraction gradients with radius.
For example, assuming constant velocity and electron temperature in the outflow and
following Olnon (1975) and Panagia \& Felli (1975), the observed spectral
index of $\sim$1.1 implies an electron density gradient of
$n_e \propto r^{-2.8}$, steeper than the gradient of
$n_e \propto r^{-2.0}$ expected for a constant ionization fraction.
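The quoted gradient can be recovered from the standard result for free-free emission from a spherically symmetric, isothermal, constant-velocity outflow with $n_e \propto r^{-s}$ ($s > 1.5$), for which $S_\nu \propto \nu^{(4s-6.2)/(2s-1)}$ (Olnon 1975; Panagia \& Felli 1975). A minimal sketch, assuming that relation, inverting it for $s$:

```python
def density_exponent(spectral_index):
    """Invert alpha = (4s - 6.2) / (2s - 1), valid for s > 1.5, to get the
    exponent s of the electron density law n_e ~ r**(-s)."""
    return (6.2 - spectral_index) / (4.0 - 2.0 * spectral_index)

print(density_exponent(0.6))   # canonical constant-velocity wind: s = 2.0
print(density_exponent(1.09))  # FS CMa: s = 2.8
```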
\subsection{MWC 922}
The radio source associated with this star (Fig. 3) is quite bright
at 3.6 cm. Unfortunately, there are
no observations at other frequencies that could allow the determination of
its spectral index.
The source is angularly resolved, as can be seen in the behavior of its
amplitude as a function of baseline (Fig. 4).
Analysis of the source with the task JMFIT of AIPS gives deconvolved
dimensions of $0\rlap.{''}28 \pm 0\rlap.{''}01 \times 0\rlap.{''}20 \pm 0\rlap.{''}01$
with a position angle of $169^\circ \pm 7^\circ$. From the measured flux density and
these angular dimensions, we obtain a brightness temperature of $\sim 3.3 \times 10^4$ K,
suggestive of partially optically thick free-free emission from photoionized gas.
Recently, Tuthill \& Lloyd (2007) reported detection of a biconical ``Red
Square'' nebula surrounding MWC 922 in the near-infrared.
This nebula extends for about 5$''$.
The radio continuum emission probably traces the very inner part of this structure.
\subsection{AS 381}
AS 381 is a binary system with a spectrum that indicates
the presence of both a hot (early B-type) star and a cool (K-type) star
(Miroshnichenko et al. 2002).
Of the three stars with radio continuum reported here, this is the only one whose membership
in the class is under debate, since it has also been proposed as a possible galactic supergiant
candidate (sgB[e]; Miroshnichenko 2007).
The characteristics of AS 381
allow us to classify it as an evolved object with an initial mass of about 20
solar masses (Miroshnichenko et al. 2002).
This source is angularly unresolved, but given the modest angular resolution of
the observations ($\sim13''$) this does not provide additional information.
\section{Conclusions}
Using VLA archive data, we report the detection of radio continuum emission
toward three FS CMa stars: FS CMa, MWC 922, and AS 381.
Given that we only found good quality data for five stars, these results
suggest that detectable radio continuum emission could be common in
FS CMa stars.
In the case of FS CMa, we combined the VLA data with
JCMT/SCUBA observations to show that its radio continuum spectrum
is well described by a single power law over two decades in
frequency.
Although the data do not have sufficient frequency coverage and
angular resolution to provide important new insight
into this type of star, the detected sources are relatively bright and
will allow in the future a detailed radio study of their spectra and morphologies.
\acknowledgments
We are thankful to an anonymous referee for valuable comments that
improved the paper.
LFR is thankful for the support
of DGAPA, UNAM, and of CONACyT (M\'exico).
This publication makes use of data products from the Two Micron All Sky Survey, which is
a joint project of the University of Massachusetts and the Infrared Processing and
Analysis Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science Foundation.
This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France.
\vskip0.5cm
\section{Introduction}
Let $(R, {\frak m})$ be a Noetherian local ring with the maximal ideal ${\frak m}$ of dimension
$d>0$ and let $C$ be a nonzero $R$-module of finite length.
Let $\varphi: R^n \to R^r$ be an $R$-linear map of free modules with $C=\Coker \varphi$,
and put $M:=\Im \varphi \subset F:=R^r$. Then
one can consider the function
$$\lambda_C(p):=\ell_R([\Coker \Sym_R(\varphi)]_{p})=\ell_R(S_{p}/M^{p}), $$
where $S_p$ (resp. $M^p$) is the homogeneous component of
degree $p$ of $S=\Sym_R(F)$ (resp. $R[M]=\Im \Sym_R(\varphi)$). A function of this type was introduced by
Buchsbaum-Rim \cite{BR2}, who proved that $\lambda_C(p)$ is eventually a polynomial of degree $d+r-1$.
Then they defined a multiplicity of $C$ as
$$e(C):=\mbox{(The coefficient of} \ p^{d+r-1} \ \mbox{in the polynomial})\times (d+r-1)!, $$
which is now called the {\it Buchsbaum-Rim multiplicity} of $C$. They also proved that it
is independent of the choice of $\varphi$.
Note that the Buchsbaum-Rim multiplicity $e(R/I)$ of a cyclic module $R/I$ defined
by an ${\frak m}$-primary ideal $I$ in $R$
coincides with the ordinary Hilbert-Samuel multiplicity $e(I)$ of the ideal $I$.
More recently, Kleiman-Thorup \cite{KT1, KT2} and Kirby-Rees \cite{KR1, KR2} introduced another kind of multiplicity,
which is related to the Buchsbaum-Rim multiplicity. They considered the function of two variables
$$\Lambda(p, q):={\ell}_R(S_{p+q}/M^{p}S_{q}), $$
and proved that it is eventually a polynomial of total degree $d+r-1$.
Then they defined a sequence of multiplicities of $C$ as, for $j=0, 1, \dots , d+r-1$,
$$e^j(C):=(\mbox{The coefficient of} \ p^{d+r-1-j}q^j \ \mbox{in the polynomial})\times (d+r-1-j)!j! $$
and proved that it is independent of the choice of $\varphi$. Moreover they proved that
$$e(C) = e^0(C) \geq e^1(C) \geq \dots \geq e^{r-1}(C)>e^r(C)= \dots = e^{d+r-1}(C)=0$$
where $r=\mu_R(C)$ is the minimal number of generators of $C$. Namely,
the first multiplicity $e^0(C)$ is just the classical Buchsbaum-Rim multiplicity $e(C)$, and the sequence is always
a descending sequence of non-negative integers with $e^j(C)=0$ if $j \geq r$, and $e^{r-1}(C)$ is the last positive
multiplicity. Thus the multiplicity $e^j(C)$ is now called {\it $j$-th Buchsbaum-Rim multiplicity} of $C$ or
the {\it associated Buchsbaum-Rim multiplicity} of $C$.
In this article, we investigate the detailed relation between the classical Buchsbaum-Rim multiplicity $e(C)=e^0(C)$
and the other one $e^j(C)$ for $j=1, 2, \dots , r-1$ by computing these invariants in a certain concrete case.
There are some computations of the classical Buchsbaum-Rim multiplicity in the literature
(see \cite{Bi, CLU, J, KR1, KR2} for instance).
However, it seems that the computation of the other associated Buchsbaum-Rim multiplicities has been done only
for very special cases \cite{Ha, KR1, KR2}.
One of the important cases is the case where $C=R/I_1 \oplus \dots \oplus R/I_r$ is a
direct sum of cyclic modules. This case was first considered by Kirby-Rees \cite{KR1, KR2} and
they gave an interesting formula for the classical Buchsbaum-Rim multiplicity $e(C)=e^0(C)$ in terms of mixed multiplicities of ideals (see also \cite{Bi} for more direct approach).
\begin{Theorem} {\rm (Kirby-Rees \cite{KR2})}
Let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$. Then we have a formula
$$e(R/I_1 \oplus \dots \oplus R/I_r)=\sum_{\stackrel{i_1, \dots , i_r \geq 0}{i_1+\dots +i_r=d}}e_{i_1 \cdots i_r}(I_1, \dots , I_r), $$
where $e_{i_1 \cdots i_r}(I_1, \dots , I_r)$ is the mixed multiplicity of $I_1, \dots , I_r$ of type $(i_1, \dots , i_r)$.
\end{Theorem}
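In dimension one this formula can be checked numerically: taking $R=k[[x]]$ and $I_j=(x^{a_j})$, the right-hand side reduces to $e(I_1)+\dots+e(I_r)=a_1+\dots+a_r$, while the left-hand side can be computed by brute force from the Buchsbaum-Rim function. A minimal sketch (an illustration in this monomial setting only, not a proof):

```python
from itertools import product

def bre_function(a, p):
    # lambda_C(p) = len(S_p / M^p) = sum over |n| = p of len(R / I^n);
    # in R = k[[x]] with I_j = (x^{a_j}), len(R/(x^e)) = e, so each
    # term is a_1 n_1 + ... + a_r n_r.
    r = len(a)
    return sum(sum(ai * ni for ai, ni in zip(a, n))
               for n in product(range(p + 1), repeat=r) if sum(n) == p)

def br_multiplicity(a, p0=8):
    # Here lambda_C is a polynomial in p of degree d + r - 1 = r, so its
    # r-th finite difference equals r! times the leading coefficient,
    # which is exactly e(C).
    r = len(a)
    vals = [bre_function(a, p0 + i) for i in range(r + 1)]
    for _ in range(r):
        vals = [w - v for v, w in zip(vals, vals[1:])]
    return vals[0]

a = (2, 3, 5)              # I_j = (x^{a_j}), so e(I_j) = a_j
print(br_multiplicity(a))  # sum of the mixed multiplicities: 2 + 3 + 5 = 10
```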
For the other multiplicities $e^j(R/I_1 \oplus \dots \oplus R/I_r)$ where $j=1, \dots , r-1$,
Kirby-Rees \cite{KR2} considered the special case where $I_1 \subset \dots \subset I_r$ and proved the following.
\begin{Theorem} {\rm (Kirby-Rees \cite{KR2})}
Let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$. Suppose that $I_1 \subset \dots \subset I_r$. Then for any $j=1, \dots , r-1$,
$$e^j(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_{j+1} \oplus \dots \oplus R/I_r). $$
In particular, the last positive associated Buchsbaum-Rim
multiplicity
$$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_r)$$
is the Hilbert-Samuel multiplicity of $I_r$.
\end{Theorem}
The purpose of this article is to compute $e^j(R/I_1 \oplus \dots \oplus R/I_r)$ for any ${\frak m}$-primary
ideals $I_1, \dots , I_r$ in $R$ and give a formula for the last positive associated Buchsbaum-Rim multiplicity $e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)$ in terms of the ordinary Hilbert-Samuel multiplicity of a sum of ideals.
Here is the main result.
\begin{Theorem}\label{main}
Let $I_1, \dots , I_r$ be arbitrary ${\frak m}$-primary ideals in $R$. Then we have a formula
$$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_1 + \dots + I_r). $$
In particular, if $I_1, \dots , I_{r-1} \subset I_r$,
$$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_r).$$
\end{Theorem}
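Theorem \ref{main} can likewise be checked numerically in the one-dimensional monomial case $r=2$, $R=k[[x]]$, $I_j=(x^{a_j})$: the coefficient of $pq$ in the polynomial $\Lambda(p, q)$ should equal $e(R/(I_1+I_2))=\min(a_1, a_2)$. A brute-force sketch (assuming this monomial setting only):

```python
def big_lambda(a1, a2, p, q):
    """Lambda(p, q) = sum over n1 + n2 = p + q of len(R / J_{p,q}(n)) in
    R = k[[x]], I_1 = (x^a1), I_2 = (x^a2): here J(n) = (x^e) with
    e = min over i1 + i2 = p, i1 <= n1, i2 <= n2 of a1*i1 + a2*i2,
    and len(R/(x^e)) = e."""
    total = 0
    for n1 in range(p + q + 1):
        n2 = p + q - n1
        total += min(a1 * i1 + a2 * (p - i1)
                     for i1 in range(max(0, p - n2), min(p, n1) + 1))
    return total

def e_last(a1, a2, p=5, q=20):
    """e^1(C) is the coefficient of p*q in the (total degree 2) polynomial
    Lambda(p, q); a mixed first difference extracts it.  The default p, q
    lie in the region q >= (p+1)r used in the paper."""
    return (big_lambda(a1, a2, p + 1, q + 1) - big_lambda(a1, a2, p + 1, q)
            - big_lambda(a1, a2, p, q + 1) + big_lambda(a1, a2, p, q))

print(e_last(2, 3))  # e(R/(I_1+I_2)) = e((x^min(2,3))) = 2
```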
This extends the Kirby-Rees formula
for the last positive associated Buchsbaum-Rim multiplicity
as well as our previous result \cite{Ha}. Our approach is a direct computation of the Buchsbaum-Rim function
of two variables, using ideas different from those in \cite{KR2}.
We note that it seems difficult to obtain the general formula by applying their approach \cite{KR2}.
Moreover, our approach suggests a general formula for the other associated Buchsbaum-Rim multiplicities
$e^j(C)$ for $j=1, \dots , r-1$, which we will discuss and present elsewhere.
The proof of Theorem \ref{main} will be given in section 3.
Section 2 has a preliminary character:
there we give a few elementary lemmas that we will use in the proof of Theorem \ref{main}.
Our notation will also be fixed in this section.
Throughout this article, let $(R, {\frak m})$ be a Noetherian local ring with the maximal ideal ${\frak m}$ of dimension $d>0$.
Let $r>0$ be a fixed positive integer and let $[r]=\{1, \dots , r\}$. For a finite set $A$, ${}^{\sharp} A$ denotes the number of elements of $A$.
Vectors are always written in bold-faced letters, e.g., $\boldsymbol i =(i_1, \dots , i_r)$.
We work in the usual multi-index notation. Let $I_1, \dots , I_r$ be ideals in $R$ and
let $t_1, \dots , t_r$ be indeterminates.
Then for a vector $\boldsymbol i =(i_1, \dots , i_r) \in \mathbb Z_{\geq 0}^r$,
we denote $\boldsymbol I^{\boldsymbol i}=I_1^{i_1} \cdots I_r^{i_r}, \boldsymbol t^{\boldsymbol i}=t_1^{i_1}
\cdots t_r^{i_r}$ and $| \boldsymbol i | =i_1+ \dots + i_r$.
For vectors $\boldsymbol a, \boldsymbol b \in \mathbb Z^r$,
$\boldsymbol a \geq \boldsymbol b \stackrel{{\rm def}}{\Leftrightarrow} a_i \geq b_i \ \mbox{for all} \ i=1, \dots , r.$
Let $\boldsymbol 0=(0, \dots , 0)$ be the zero vector in $\mathbb Z_{\geq 0}^r$ and let
$\boldsymbol e=(1, 1, \dots , 1) \in \mathbb Z_{\geq 0}^{r}$.
\section{Preliminaries}
In what follows, let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$ and let $C=R/I_1 \oplus \dots \oplus R/I_r$.
In order to compute the associated Buchsbaum-Rim multiplicity of $C$, by taking a minimal free presentation $R^n \stackrel{\varphi}{\to} R^r \to C \to 0$
where the image of $\varphi$ is given by $M:=\Im \varphi=I_1 \oplus \dots \oplus I_r \subset F:=R^r$, we may
assume that $S=R[t_1, \dots , t_r]$ is a polynomial ring and
$R[M]=R[I_1t_1, \dots , I_rt_r]$ is the multi-Rees algebra of $I_1, \dots , I_r$.
Then it is easy to see that for any $p, q \geq 0$, the module $M^pS_q$ can be expressed as
$${\displaystyle M^{p}S_{q}=
\sum_{\substack{| \boldsymbol n | =p+q \\ \boldsymbol n \geq \boldsymbol 0} }
\Bigg( \sum_
{\substack{| \boldsymbol i |=p \\ \boldsymbol 0 \leq \boldsymbol i \leq \boldsymbol n}} \boldsymbol
I^{\boldsymbol i}
\Bigg)
\boldsymbol t^{\boldsymbol n}. }
$$
Here we consider a finite set $H_{p, q}:=\{ \boldsymbol n \in \mathbb Z_{\geq 0}^r \mid |\boldsymbol n |=p+q \}. $
For any $\boldsymbol n \in H_{p, q}$, let
$${\displaystyle J_{p,q}({\boldsymbol n}):=\sum_
{\substack{| \boldsymbol i|=p \\ \boldsymbol 0 \leq \boldsymbol i \leq \boldsymbol n}} \boldsymbol I^
{\boldsymbol i}
}, $$
which is an ideal in $R$.
Then the function $\Lambda(p,q)$ can be described as
$${\displaystyle \Lambda(p, q) = \sum_{\boldsymbol n \in H_{p,q}}
\ell_R(R/J_{p, q}({\boldsymbol n})). }$$
For a subset $\Delta \subset H_{p, q}$, we set
$$\Lambda_{\Delta}(p, q):=\sum_{\boldsymbol n \in \Delta}
\ell_R(R/J_{p, q}({\boldsymbol n})). $$
Here we define special subsets of $H_{p, q}$, which will often be used in the proof of Theorem \ref{main}.
For $p, q>0$ and $k=1, \dots , r$, let
$$\Delta_{p, q}^{(k)}:=\{\boldsymbol n \in H_{p, q} \mid n_1, \dots , n_k>p, n_{k+1}+ \dots + n_r \leq p \}. $$
With this notation, we begin with the following.
\begin{Lemma}\label{lem1}
Let $p, q>0$ and $k=1, \dots , r$.
Then for any $\boldsymbol n \in \Delta_{p, q}^{(k)}$, we have the equality
$$J_{p, q}(\boldsymbol n)=(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}. $$
\end{Lemma}
\begin{proof}
Let $\boldsymbol n \in \Delta_{p, q}^{(k)}$. Then
\begin{multline*}
J_{p, q}(\boldsymbol n) = \sum_{\substack{| \boldsymbol i|=p \\ \boldsymbol 0 \leq \boldsymbol i \leq \boldsymbol n}} \boldsymbol I^{\boldsymbol i}
\\
\shoveleft{=\sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \cdots \\ 0 \leq i_r \leq n_r}} \Bigg( \sum_{\substack{i_1, \dots , i_k \geq 0 \\ i_1+\dots +i_k=p-(i_{k+1}+ \dots +i_r)}} I_1^{i_1} \cdots I_k^{i_k} \Bigg) I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}
}\\
\shoveleft{=\sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \cdots \\ 0 \leq i_r \leq n_r}} (I_1+\dots +I_k)^{p-(i_{k+1}+\dots +i_r)} I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}
}\\
\shoveleft{= \sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \cdots \\ 0 \leq i_r \leq n_r}} (I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)+(n_{k+1}-i_{k+1})+ \dots + (n_r-i_r)} I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}
}\\
\shoveleft{= (I_1+\cdots +I_k)^{p-(n_{k+1}+\dots +n_r)}\sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \cdots \\ 0 \leq i_r \leq n_r}} (I_1+\dots +I_k)^{(n_{k+1}-i_{k+1})+ \dots + (n_r-i_r)} I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}.}
\end{multline*}
Here one can easily compute the above last sum as
$$\sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \cdots \\ 0 \leq i_r \leq n_r}} (I_1+\dots +I_k)^{(n_{k+1}-i_{k+1})+ \dots + (n_r-i_r)} I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}
=\prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}. $$
Then we have the desired equality.
\end{proof}
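Lemma \ref{lem1} can be sanity-checked in the one-dimensional monomial setting, where each ideal is a power of $(x)$ and sums and products of ideals translate into minima and sums of exponents. A minimal brute-force sketch (an illustration only, assuming $R=k[[x]]$ and $I_j=(x^{a_j})$):

```python
from itertools import product

def lhs_exponent(a, n, p, k):
    """x-exponent of J_{p,q}(n) = sum over |i| = p, 0 <= i <= n of I^i,
    i.e. min over those i of a_1 i_1 + ... + a_r i_r."""
    best = None
    for i in product(*(range(min(p, nj) + 1) for nj in n)):
        if sum(i) == p:
            v = sum(aj * ij for aj, ij in zip(a, i))
            best = v if best is None else min(best, v)
    return best

def rhs_exponent(a, n, p, k):
    """Exponent predicted by the lemma's product formula:
    m*(p - N) + sum_{j > k} min(m, a_j)*n_j, where m = min(a_1..a_k)
    and N = n_{k+1} + ... + n_r (sums of monomial ideals are powers of
    (x) with the minimum exponent)."""
    m = min(a[:k])
    tail = n[k:]
    return m * (p - sum(tail)) + sum(min(m, aj) * nj
                                     for aj, nj in zip(a[k:], tail))

# Each n must lie in Delta_{p,q}^{(k)}: n_1..n_k > p, n_{k+1}+...+n_r <= p.
a, p, k = (5, 3, 2), 4, 1
for n in [(6, 3, 1), (10, 1, 1), (7, 0, 4)]:
    assert lhs_exponent(a, n, p, k) == rhs_exponent(a, n, p, k)
print("lemma verified on sample points")
```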
\begin{Lemma}\label{lem2}
Let $p, q>0$ with $q \geq (p+1)r$ and let $k=1, \dots , r$ and $0 \leq m \leq p$. Then
$${}^\sharp \left\{ (n_1, \dots , n_k) \in \mathbb Z_{\geq 0}^k \left| \begin{array}{l}
n_1, \dots , n_k>p, \\
n_1+\dots +n_k=p+q-m
\end{array}
\right.
\right\}={q-(k-1)p-1-m \choose k-1}. $$
\end{Lemma}
\begin{proof}
Let $S:=\{ (n_1, \dots , n_k) \in \mathbb Z_{\geq 0}^k \mid n_1, \dots , n_k>p, n_1+\dots +n_k=p+q-m \}$.
Then the map
$\phi : S \to \{ (n_1, \dots , n_k) \in \mathbb Z_{\geq 0}^k \mid n_1+\dots +n_k=p+q-m-k(p+1) \}$
given by
$\phi({\boldsymbol n})={\boldsymbol n}-(p+1) \boldsymbol e$ is bijective so that the number ${}^\sharp S$
is just ${q-(k-1)p-1-m \choose k-1}$ which is the number of monomials of
degree $p+q-m-k(p+1)$ in $k$ variables.
\end{proof}
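The count in Lemma \ref{lem2} is a standard stars-and-bars computation; a quick brute-force check of the binomial formula:

```python
from itertools import product
from math import comb

def count_bruteforce(k, p, q, m):
    """#{ n in Z_{>=0}^k : all n_j > p and n_1 + ... + n_k = p + q - m }."""
    s = p + q - m
    return sum(1 for n in product(range(p + 1, s + 1), repeat=k)
               if sum(n) == s)

def count_formula(k, p, q, m):
    # Closed form from the lemma, obtained by the shift n -> n - (p+1)e.
    return comb(q - (k - 1) * p - 1 - m, k - 1)

# Sample parameters satisfying q >= (p+1)r (with r >= k) and 0 <= m <= p.
for k, p, q, m in [(2, 1, 5, 0), (3, 2, 12, 1), (2, 3, 10, 3)]:
    assert count_bruteforce(k, p, q, m) == count_formula(k, p, q, m)
print("count formula verified")
```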
By Lemmas \ref{lem1} and \ref{lem2}, we have the explicit form of the function $\Lambda_{\Delta_{p, q}^{(k)}}(p, q)$.
\begin{Proposition}\label{basicfunction}
Let $p, q>0$ with $q \geq (p+1)r$ and let $k=1, \dots , r$. Then
$$\Lambda_{\Delta_{p, q}^{(k)}}(p, q)=\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}}
{q-(k-1)p-1-(n_{k+1}+\dots +n_r) \choose k-1} \ell_R(R/{\frak a}), $$
where $\displaystyle{
{\frak a}:=(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}.
}$
In particular, we have the inequality
$$\Lambda_{\Delta_{p, q}^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1} \lambda_{L}(p), $$
where $\displaystyle{L=R/(I_1+\dots +I_k) \oplus \bigoplus_{j=k+1}^r R/(I_1+\dots +I_k+I_j)}$.
\end{Proposition}
\begin{proof}
Let $p, q>0$ with $q \geq (p+1)r$ and let $k=1, \dots , r$. Then
\begin{multline*}
\Lambda_{\Delta_{p, q}^{(k)}}(p, q)=\sum_{{\boldsymbol n} \in \Delta_{p, q}^{(k)}} \ell_R(R/J_{p, q}({\boldsymbol n}))
\\
\shoveleft{= \sum_{{\boldsymbol n} \in \Delta_{p, q}^{(k)}} \ell_R \bigg(
R/(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}
\bigg) \ \ \ \mbox{by Lemma \ref{lem1}}
}\\
\shoveleft{= \sum_{\substack{n_{k+1}, \dots , n_r \geq 0 \\ n_{k+1}+\dots +n_r \leq p}}
\Bigg[
{}^\sharp \left\{ (n_1, \dots , n_k) \in \mathbb Z_{\geq 0}^k
\left|
\begin{array}{l}
n_1, \dots , n_k>p, \\
n_1+\dots +n_k=p+q-(n_{k+1}+\dots +n_r)
\end{array}
\right.
\right\}
}\\
\times \
\ell_R \bigg( R/(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j} \bigg)
\Bigg]
\\
\shoveleft{= \sum_{\substack{n_{k+1}, \dots , n_r \geq 0 \\ n_{k+1}+\dots +n_r \leq p}}
\Bigg[
{q-(k-1)p-1-(n_{k+1}+\dots +n_r) \choose k-1}
}\\
\times \ \ell_R \bigg( R/(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j} \bigg)
\Bigg] \ \ \ \mbox{by Lemma \ref{lem2}}.
\end{multline*}
This proves the first assertion. For the second one, we first note that the above last term is at most
$$ {q-(k-1)p-1 \choose k-1} \sum_{\substack{n_{k+1}, \dots , n_r \geq 0 \\ n_{k+1}+\dots +n_r \leq p}}
\ell_R \big( R/(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j} \big). $$
Then, since the above last sum is just the ordinary Buchsbaum-Rim function $\lambda_L(p)$ of
$L:=R/(I_1+\dots +I_k) \oplus \bigoplus_{j=k+1}^r R/(I_1+\dots +I_k+I_j )$ by definition, we get the desired
inequality.
\end{proof}
\begin{Remark}\label{rem}
{\rm
As stated in the above proof, the function $\lambda_L(p)$ in Proposition $\ref{basicfunction}$ is
the ordinary Buchsbaum-Rim function of $L$ where $L$ is a direct sum of $(r-k+1)$ cyclic modules.
Therefore the function $\lambda_L(p)$ is a polynomial function of degree $d+r-k$ for all large enough $p$.
}
\end{Remark}
\section{Proof of Theorem \ref{main}}
We prove Theorem \ref{main}. We work under the same situation and use the same notation as in section 2.
In order to investigate the asymptotic property of the function $\Lambda(p, q)$,
we may assume that
\begin{equation}\label{largepq}
q\geq(p+1)r \gg 0.
\end{equation}
In what follows, we fix integers $p, q$ which satisfy the condition (\ref{largepq}).
Let $H:=H_{p, q}$ and let $J(\boldsymbol n):=J_{p, q}(\boldsymbol n) $ for $\boldsymbol n \in H$.
We note here that for any ${\boldsymbol n} \in H$, there exists $i \in \{1, \dots , r\}$ such that $n_i > p$
because of the condition (\ref{largepq}).
Then the set $H$ can be divided into $r$ regions as follows:
$$H=\coprod_{k=1}^rH^{(k)}, $$
where $H^{(k)}:=\{ \boldsymbol n \in H \mid {}^{\sharp} \{ i \mid n_i > p \}=k \}. $
Hence the function $\Lambda(p, q)$ can be expressed as follows:
$$\Lambda(p, q)=\sum_{k=1}^r \Lambda_{H^{(k)}} (p, q). $$
Therefore it is enough to compute each function $\Lambda_{H^{(k)}}(p, q)$.
When $k=r$, we can compute the function explicitly as follows.
\begin{Proposition}\label{k=r}
$$\Lambda_{H^{(r)}} (p, q)={q-(r-1)p-1 \choose r-1} \ell_R (R/(I_1+\dots +I_r)^p). $$
\end{Proposition}
\begin{proof}
This follows from Proposition \ref{basicfunction} since $H^{(r)}=\Delta_{p, q}^{(r)}$.
\end{proof}
Thus we can reduce the problem to computing the functions $\Lambda_{H^{(k)}}(p, q)$ for $k=1, \dots , r-1$.
Let $1 \leq k \leq r-1$. To compute $\Lambda_{H^{(k)}}(p, q)$,
we divide $H^{(k)}$ into ${r \choose k}$ regions as follows:
$$H^{(k)}=\coprod_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} D_A^{(k)}, $$
where $D_A^{(k)}:=\{ {\boldsymbol n} \in H^{(k)} \mid n_i >p \ \mbox{for} \ i \notin A, n_i \leq p \ \mbox{for} \ i \in A \}$.
Then the function $\Lambda_{H^{(k)}}(p, q)$ can be expressed as follows:
$$\Lambda_{H^{(k)}}(p, q)=\sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \Lambda_{D_A^{(k)}} (p, q). $$
When $k=r-1$, we can also compute the function explicitly and get the inequality as in Proposition \ref{basicfunction}.
Here is the inequality we will use in the proof of Theorem \ref{main}.
\begin{Proposition}\label{k=r-1}
There exists a polynomial $g_{r-1}(X) \in \mathbb Q[X]$ of degree $d+1$ such that
$$\Lambda_{H^{(r-1)}}(p, q) \leq {q-(r-2)p-1 \choose r-2}g_{r-1}(p). $$
\end{Proposition}
\begin{proof}
It is enough to show that for any $j=1, \dots , r$,
$$\Lambda_{D_{\{j\}}^{(r-1)}}(p,q) \leq {q-(r-2)p-1 \choose r-2} \lambda_{L_j}(p), $$
where $L_j=R/ (I_1+ \dots +\hat{I_j}+ \dots +I_{r})\oplus R/(I_1+ \dots +I_{r})$ because the function
$\lambda_{L_j}(p)$ is a polynomial function of degree $d+1$ (see Remark \ref{rem}).
We may only consider the case where $j=r$. Then it follows directly from Proposition \ref{basicfunction}
since $D_{\{r\}}^{(r-1)}=\Delta_{p, q}^{(r-1)}$.
\end{proof}
When $1 \leq k \leq r-2$, we can get the same kind of inequality,
although the situation is not as simple as in the case where $k=r-1$.
\begin{Proposition}\label{k<r-1}
For any $1 \leq k \leq r-2$, there exists a polynomial $g_{k}(X) \in \mathbb Q[X]$ of degree $d+r-k$
such that $$\Lambda_{H^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1}g_{k}(p). $$
\end{Proposition}
\begin{proof}
Let $1 \leq k \leq r-2$. To prove the desired inequality, it is enough to show that
for any subset $A \subset [r]$ with ${}^{\sharp} A=r-k$, there exists a polynomial
$h_A(X) \in \mathbb Q[X]$ of degree $d+r-k$ such that
$$\Lambda_{D_A^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1}h_{A}(p). $$
To show this, we may only consider the case where $A=\{k+1, k+2, \dots , r\}$.
We then put $D^{(k)}:=D^{(k)}_{\{k+1, \dots , r\}}$.
To investigate $\Lambda_{D^{(k)}}(p, q)$, we divide $D^{(k)}$ into two parts:
$$D^{(k)}=E_{-}^{(k)} \coprod E_{+}^{(k)}, $$
where
$$E_{-}^{(k)}:=\{ {\boldsymbol n} \in D^{(k)} \mid n_{k+1}+ \dots + n_r \leq p \}, $$
$$E_{+}^{(k)}:=\{ {\boldsymbol n} \in D^{(k)} \mid n_{k+1}+ \dots + n_r > p \}. $$
With this notation, we have the following two lemmas.
\begin{Lemma}\label{k-}
Let $1 \leq k \leq r-2$. Then
$$\Lambda_{E^{(k)}_{-}}(p, q) \leq {q-(k-1)p-1 \choose k-1} \lambda_L(p)$$
where $\displaystyle{L=R/(I_1+\dots +I_k) \oplus \bigoplus_{j=k+1}^r R/(I_1+\dots +I_k+I_j)}$.
\end{Lemma}
\begin{proof}
This follows from Proposition \ref{basicfunction} since $E_{-}^{(k)}=\Delta_{p, q}^{(k)}$.
\end{proof}
\begin{Lemma}\label{k+}
Let $1 \leq k \leq r-2$. Then
there exists a polynomial $h(X) \in \mathbb Q[X]$ of degree $d+r-k$ such that
$$\Lambda_{E^{(k)}_{+}}(p, q) \leq {q-(k-1)p-1 \choose k-1} h(p). $$
\end{Lemma}
\begin{proof}
Let $1 \leq k \leq r-2$. Then we first note that for any ${\boldsymbol n} \in E_{+}^{(k)}, $
\begin{eqnarray}
J({\boldsymbol n})&=&\sum_{\substack{\boldsymbol 0 \leq \boldsymbol i \leq {\boldsymbol n} \\ | \boldsymbol i |=p}} \boldsymbol I^{\boldsymbol i} \notag \\
&=&\sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \dots \\ 0 \leq i_r \leq n_r \\ i_{k+1}+\dots +i_r \leq p}}
\Bigg( \sum_{\substack{i_1, \dots , i_k \geq 0 \\ i_1+ \cdots +i_k=p-(i_{k+1}+ \cdots + i_r) }}
I_1^{i_1} \cdots I_k^{i_k} \Bigg) I_{k+1}^{i_{k+1}} \cdots I_r^{i_r} \notag \\
&=& \sum_{\substack{0 \leq i_{k+1} \leq n_{k+1} \\ \dots \\ 0 \leq i_r \leq n_r \\ i_{k+1}+\dots +i_r \leq p}}
(I_1+ \dots +I_k)^{p-(i_{k+1}+ \dots + i_r)} I_{k+1}^{i_{k+1}} \cdots I_r^{i_r}.
\end{eqnarray}
Here we claim the following.
\medskip
{\bf Claim 1.} There exists an ${\frak m}$-primary ideal ${\frak b}$ in $R$ such that
for any ${\boldsymbol n} \in E_{+}^{(k)},$
$$\ell_R(R/J({\boldsymbol n})) \leq \ell_R(R/{\frak b}^p). $$
\medskip
Let ${\frak b}$ be an ${\frak m}$-primary ideal in $R$ such that ${\frak b} \subset I_j$
for any $j=k+1, \dots , r$ (such as ${\frak b}=I_{k+1} \cdots I_r$).
Let ${\boldsymbol n} \in E_{+}^{(k)}. $ Since $n_{k+1}, \dots , n_r \geq 0$ and $n_{k+1}+\dots +n_r > p$,
there exist integers $a_{k+1}, \dots , a_r \in \mathbb Z$
such that
$$\left\{
\begin{array}{l}
0 \leq a_j \leq n_j \ \mbox{for any} \ j=k+1, \dots , r, \ \mbox{and}\\
a_{k+1}+ \cdots +a_r=p.
\end{array}
\right. $$
Then, by the above expression (2) of $J({\boldsymbol n})$,
$J({\boldsymbol n})
\supset I_{k+1}^{a_{k+1}} \cdots I_r^{a_r}
\supset {\frak b}^{a_{k+1}+\cdots + a_r}
= {\frak b}^p$.
Hence we have $\ell_R(R/J({\boldsymbol n})) \leq \ell_R(R/{\frak b}^p)$.
\medskip
Therefore
$$\Lambda_{E_{+}^{(k)}} (p, q)
= \sum_{{\boldsymbol n} \in E_{+}^{(k)}} \ell_R(R/J({\boldsymbol n}))
\leq \sum_{{\boldsymbol n} \in E_{+}^{(k)}} \ell_R(R/{\frak b}^p)
={}^{\sharp} E_{+}^{(k)} \cdot \ell_R(R/{\frak b}^p).
$$
{\bf Claim 2. } There exists a polynomial $h^{\circ}(X) \in \mathbb Q[X]$ of degree $r-k$ such that
$${}^{\sharp} E_{+}^{(k)} \leq {q-(k-1)p-1 \choose k-1} \cdot h^{\circ}(p). $$
To show this, we divide $E_{+}^{(k)}$ as follows:
\begin{eqnarray*}
E_{+}^{(k)} &=& \{ {\boldsymbol n} \in H^{(k)} \mid n_1, \dots , n_k > p, n_{k+1}, \dots , n_r \leq p, n_{k+1}+ \cdots + n_r > p \} \\
&=& \coprod_{\substack{0 \leq n_{k+1} \leq p \\ \cdots \\ 0 \leq n_r \leq p \\ n_{k+1}+ \cdots + n_r > p}} F(n_{k+1}, \dots , n_r)
\end{eqnarray*}
where
$$F(n_{k+1}, \dots , n_r):=\left\{(n_1, \dots , n_r) \in \mathbb Z_{\geq 0}^r \left|
\begin{array}{l}
n_1, \dots , n_k > p, \\
n_1+ \cdots + n_k = p+q-(n_{k+1}+ \dots + n_r)
\end{array}
\right.
\right\}. $$
Therefore
\begin{eqnarray*}
{}^{\sharp}E_{+}^{(k)}&=&\sum_{\substack{0 \leq n_{k+1} \leq p \\ \cdots \\ 0 \leq n_r \leq p \\ n_{k+1}+ \cdots + n_r > p}} {}^{\sharp} F(n_{k+1}, \dots , n_r) \\
&=& \sum_{\substack{0 \leq n_{k+1} \leq p \\ \cdots \\ 0 \leq n_r \leq p \\ n_{k+1}+ \cdots + n_r > p}}
{q-(k-1)p-1-(n_{k+1}+ \dots + n_r) \choose k-1} \ \ \ \ \ \mbox{by Lemma \ref{lem2}} \\
&\leq & {q-(k-1)p-1 \choose k-1} \cdot
{}^{\sharp} \left\{ (n_{k+1}, \dots, n_r) \in \mathbb Z_{\geq 0}^{r-k} \left|
\begin{array}{l}
n_{k+1}, \dots , n_r \leq p, \\
n_{k+1}+ \cdots +n_r > p
\end{array}
\right.
\right\} \\
&\leq & {q-(k-1)p-1 \choose k-1} \cdot
{}^{\sharp} \{ (n_{k+1}, \dots, n_r) \in \mathbb Z_{\geq 0}^{r-k} \mid p < n_{k+1}+ \cdots +n_r \leq (r-k)p \} \\
&=& {q-(k-1)p-1 \choose k-1} \cdot \left\{ {r-k+(r-k)p \choose r-k}-{r-k+p \choose r-k} \right\}.
\end{eqnarray*}
Here the last equality holds because the number of $(n_{k+1}, \dots , n_r) \in \mathbb Z_{\geq 0}^{r-k}$ with $n_{k+1}+ \cdots +n_r \leq T$ equals ${T+r-k \choose r-k}$, applied with $T=(r-k)p$ and $T=p$.
This proves Claim 2.
\medskip
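As a quick sanity check (not part of the proof), the two counting formulas used in the proof of Claim 2 can be confirmed by brute force for small parameters: the closed form of Lemma \ref{lem2} for ${}^{\sharp}F$, and the stars-and-bars count ${}^{\sharp}\{ n \in \mathbb Z_{\geq 0}^{m} \mid n_1+\cdots+n_m \leq T \} = {T+m \choose m}$. The helper names below are ours.

```python
from itertools import product
from math import comb

def count_F(k, p, q, s):
    """Brute-force count of tuples (n_1,...,n_k) with every n_i > p
    and n_1 + ... + n_k = p + q - s, i.e. the set F(n_{k+1},...,n_r)
    with s = n_{k+1} + ... + n_r."""
    N = p + q - s
    return sum(1 for t in product(range(p + 1, N + 1), repeat=k)
               if sum(t) == N)

def count_simplex(m, T):
    """Brute-force count of tuples in Z_{>=0}^m with coordinate sum <= T."""
    return sum(1 for t in product(range(T + 1), repeat=m) if sum(t) <= T)

# Lemma 2's closed form: #F = C(q - (k-1)p - 1 - s, k - 1)
for (k, p, q, s) in [(2, 2, 10, 3), (3, 1, 9, 2)]:
    assert count_F(k, p, q, s) == comb(q - (k - 1) * p - 1 - s, k - 1)

# stars and bars: #{ n in Z_{>=0}^m : n_1 + ... + n_m <= T } = C(T + m, m)
for (m, T) in [(2, 5), (3, 4)]:
    assert count_simplex(m, T) == comb(T + m, m)
```

For example, with $k=2$, $p=2$, $q=10$, $s=3$ the tuples are $(3,6), (4,5), (5,4), (6,3)$, in agreement with ${10-2-1-3 \choose 1}=4$.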
Consequently, we have that
$$\Lambda_{E_{+}^{(k)}} (p, q)
\leq {}^{\sharp} E_{+}^{(k)} \cdot \ell_R(R/{\frak b}^p)
\leq {q-(k-1)p-1 \choose k-1} \cdot h^{\circ}(p) \cdot \ell_R(R/{\frak b}^p).$$
Since $\ell_R(R/{\frak b}^p)$ is a polynomial in $p$ of degree $d$,
the polynomial $h(X)$ determined by $h(p)= h^{\circ}(p) \cdot \ell_R(R/{\frak b}^p)$ is the desired one.
\end{proof}
By Lemmas \ref{k-} and \ref{k+},
$$\Lambda_{D^{(k)}}(p,q)
=\Lambda_{E_{-}^{(k)}}(p,q)+\Lambda_{E_{+}^{(k)}}(p,q)
\leq {q-(k-1)p-1 \choose k-1}\Big(\lambda_L(p)+h(p)\Big).
$$
This proves Proposition \ref{k<r-1}. \end{proof}
\medskip
We now give a proof of Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]
By Propositions \ref{k=r-1} and \ref{k<r-1}, for any $k=1, \dots , r-1$,
there exists a polynomial $g_k(X) \in \mathbb Q[X]$ of degree $d+r-k$ such that
$$\Lambda_{H^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1}g_k(p). $$
Since $\displaystyle{\Lambda(p, q)=\Lambda_{H^{(r)}}(p, q)+\sum_{k=1}^{r-1} \Lambda_{H^{(k)}}(p, q)}, $
we have that by Proposition \ref{k=r},
\begin{eqnarray*}
\Lambda(p, q)-{q-(r-1)p-1 \choose r-1} \ell_R(R/(I_1+ \dots +I_r)^p) &\leq & \sum_{k=1}^{r-1} {q-(k-1)p-1 \choose k-1} g_k(p).
\end{eqnarray*}
Therefore, there exists a polynomial $g(X, Y) \in \mathbb Q[X, Y] $ with $\deg_Y g(X, Y) \leq r-2$ such that
$$\Lambda(p, q)-{q-(r-1)p-1 \choose r-1} \ell_R(R/(I_1+ \dots +I_r)^p) \leq g(p, q). $$
The left-hand side of the above inequality is a polynomial function of two variables taking non-negative integer values, so the function $\Lambda(p, q)$ can be expressed as
$$\Lambda(p, q)={q-(r-1)p-1 \choose r-1} \ell_R(R/(I_1+ \dots +I_r)^p)+f(p, q) $$
for some $f(X, Y) \in \mathbb Q[X, Y]$ with $\deg_Y f(X, Y) \leq r-2$.
Then, by comparing the coefficients of $p^dq^{r-1}$ in the above equality, we obtain the equality
$$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/(I_1+ \dots +I_r)). $$
Then we get the desired formula.
\end{proof}
\section*{Acknowledgments}
The author would like to thank the referee for his/her careful reading and constructive suggestions.